I study how the brain learns to process sensory information, focusing on the primate visual system. The visual system must transform the highly redundant signals arriving from the outside world into a low-dimensional representation in which important features are made explicit. I aim to understand the computational strategies the brain uses to accomplish this task.
A large gap remains between computational models of information processing and our experimentally grounded understanding of the brain. Most of the best computational algorithms for object recognition do not have biologically plausible implementations. Conversely, models that strive for biological realism usually lack theoretical underpinnings and computational power. I believe, however, that recent experimental advances in synaptic plasticity can be combined with recent computational advances in training deep neural networks to help bridge the gap between neuroscience and computation. I use analysis and simulation to build biologically realistic circuits that solve these challenges in a rapid, efficient, and fault-tolerant manner.
My general approach is to study how the connections between neurons within the visual system evolve over time so that the brain becomes better at visual detection and recognition. A distinctive focus of my work is learning in feedback connections: the synapses leading from higher brain areas back toward the sensory periphery. Although feedback connections are abundant in the brain, many models of the visual system neglect them entirely. I argue that including these connections during learning allows models to implement a much broader range of computational strategies, most notably generative models for unsupervised learning.
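The role of learned feedback connections can be illustrated with a minimal sketch (a toy model, not the circuit described above): a linear autoencoder in NumPy where "feedforward" weights compress a redundant input into a low-dimensional code and "feedback" weights project that code back toward the input. Training is unsupervised, driven only by reconstruction error, so plasticity in the feedback pathway is essential to learning. All names and parameters here (W_f, W_b, the data-generating process) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_input, n_hidden = 16, 4                     # redundant input, compact code

# Redundant data: 4 latent sources mixed into 16 correlated channels,
# scaled so each channel has roughly unit variance.
mixing = rng.standard_normal((n_input, n_hidden)) / np.sqrt(n_hidden)
X = rng.standard_normal((500, n_hidden)) @ mixing.T

W_f = 0.1 * rng.standard_normal((n_hidden, n_input))   # feedforward weights
W_b = 0.1 * rng.standard_normal((n_input, n_hidden))   # feedback weights

def mse(X):
    """Reconstruction error through the feedforward-then-feedback loop."""
    return float(np.mean((X - (X @ W_f.T) @ W_b.T) ** 2))

before = mse(X)
lr = 0.01
for _ in range(200):                          # unsupervised training epochs
    for x in X:
        h = W_f @ x                           # low-dimensional representation
        err = x - W_b @ h                     # top-down reconstruction error
        W_b += lr * np.outer(err, h)          # plasticity in feedback synapses
        W_f += lr * np.outer(W_b.T @ err, x)  # credit assigned via feedback
after = mse(X)
```

The point of the sketch is that the feedback weights W_b are not passive copies of the feedforward pathway: they are learned, and the error they carry back is what shapes the feedforward representation, with no labels involved.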
Last update: 3/22/16