Research in my group tackles fundamental questions in neuroscience by integrating theoretical approaches—mathematical modeling, statistical physics, and machine learning—with experimental data. Currently, we focus on understanding how the visual system supports complex functions, such as object recognition and tracking. We approach this from multiple perspectives:
- Functional models: Neuroanatomical evidence shows that the visual system consists of multiple interconnected brain areas arranged in a shallow hierarchy. How is visual information transformed along this hierarchy? Do successive areas gradually transform raw sensory input into higher-level abstract representations, as in modern deep learning models? Or do individual areas specialize in specific computations? To address these questions, we are developing and probing data-driven models of the visual hierarchy.
- Mechanistic models: How do neural responses to visual stimuli emerge from the collective dynamics of neurons? We investigate this process at multiple scales—from dendritic computations at the single-cell level to neural interactions within local circuits and multi-area interactions at the macro scale. Using a physics-inspired approach, we employ simple models constrained by experimental data to identify the key mechanisms driving visual processing.
- Normative models: Why are neural circuits structured as they are? Are the features that distinguish brain networks from artificial ones merely biological quirks, or do they confer specific computational benefits? We aim to answer these questions using simplified models to explore the impact of particular features on computation, guided by a statistical physics approach.
If you are interested in joining or collaborating with us—or simply curious about this work—please reach out.