Meeting Notes
- Date: 2026-01-20
- Time: 09:00 AM (PT)
- Location: Teams Meeting
Agenda
Presentation from John Meng on the Duet model, which unifies diverse experimental neuroscience findings on predictive coding
Meeting Notes
Presentation of Dual Predictive Coding Framework: John presented their research on a dual predictive coding framework for surprise detection in sensory cortices, with contributions and data from Jordan and references to collaborative work with Jerome and Lucas.
Introduction to Predictive Coding: John introduced the concept of predictive coding as a unified framework for surprise detection in sensory cortices, emphasizing the distinction between positive and negative prediction errors and referencing foundational work in the field.
Biological Modeling and Learning Rule: John explained the need for separate positive and negative error neurons in the sensory cortex, described the three-factor learning rule (pre- and post-synaptic activity plus a modulatory third factor), and detailed how this rule enables the emergence of error neurons through biologically plausible mechanisms.
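As a minimal sketch of the rule as summarized above (the rate-based form, learning rate, and variable names are illustrative assumptions, not the presented model), a three-factor update multiplies pre-synaptic activity, post-synaptic activity, and a global modulatory third factor:

```python
import numpy as np

def three_factor_update(w, pre, post, modulator, lr=0.01):
    """One step of a generic three-factor rule:
    delta_w[i, j] ~ lr * modulator * post[i] * pre[j].
    `modulator` is the scalar third factor (e.g., a surprise signal)."""
    delta_w = lr * modulator * np.outer(post, pre)
    return w + delta_w

# Toy usage: 4 presynaptic inputs onto 3 postsynaptic neurons.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(3, 4))
pre = rng.random(4)    # presynaptic firing rates
post = rng.random(3)   # postsynaptic firing rates
surprise = 1.0         # large third factor on surprising trials
w = three_factor_update(w, pre, post, surprise)
```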
Simulation and Experimental Validation: John demonstrated through simulations that the three-factor learning rule leads to the formation of positive and negative prediction error neurons, and validated these findings with experimental data, including results from Kayla's and Jordan's labs.
Reproduction of Experimental Phenomena: The framework was shown to reproduce key experimental observations such as mismatch responses, omission responses, and modulation of firing rates in expected versus unexpected sensory contexts, aligning with data from David Schneider's and Kayla's labs.
Conceptual Challenges and Representational Collapse: John discussed the issue of representational collapse in predictive coding, highlighting the necessity of both positive and negative error signals for effective cognitive processing and distinguishing between expected and surprising sensory events.
Discussion of High-Dimensional Error Signals and Surprise: Jerome and John discussed the distinction between scalar surprise signals and high-dimensional error signals, addressing how the three-factor learning rule operates in complex sensory contexts and the implications for neural computation.
Scalar Versus High-Dimensional Error: Jerome asked how the model handles errors in higher-dimensional sensory spaces, to which John clarified that the surprise signal used for learning can remain a scalar, while the high-dimensional error is learned locally, allowing the framework to scale to complex sensory inputs.
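A hedged sketch of that separation (the squared-error surprise and all names below are assumptions for illustration, not the model's definitions): the prediction error stays a locally computed vector, while only a single scalar is broadcast to gate learning:

```python
import numpy as np

def predict(W, context):
    """Top-down prediction of the sensory input from a context vector."""
    return W @ context

def local_update(W, context, x, surprise, lr=0.05):
    """The error is high-dimensional and computed locally per channel;
    the broadcast learning signal is only the scalar `surprise`."""
    error = x - predict(W, context)              # vector prediction error
    W += lr * surprise * np.outer(error, context)
    return W

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(8, 5))           # prediction weights
context = rng.random(5)
x = rng.random(8)                                # sensory input
error = x - predict(W, context)
surprise = float(error @ error)                  # e.g., squared error norm as the scalar
W = local_update(W, context, x, surprise)
```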
Role of Neuromodulators: John elaborated that neuromodulatory signals, such as those from the locus coeruleus, can serve as the global scalar surprise signal, modulating local synaptic updates without requiring high-dimensional broadcast signals.
Implications for Model Generalization: The discussion highlighted that the separation of surprise and error allows the model to generalize to various sensory modalities and contexts, supporting both rapid detection of unexpected events and more detailed error-driven learning.
Experimental Data Analysis and Model Validation: John analyzed experimental data provided by Jordan to identify positive and negative prediction error neurons, validating the model's predictions and exploring their functional properties in sensory processing.
Classification of Error Neurons: John described the process of classifying neurons as putative positive or negative prediction error types based on their responses to deviant and control stimuli in oddball paradigms, using criteria derived from the model.
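The exact model-derived criteria were not captured in these notes; the snippet below is only a placeholder illustration of the general idea, labeling a neuron by the sign of a significant difference between its deviant and control responses:

```python
import numpy as np
from scipy import stats

def classify_neuron(deviant_trials, control_trials, alpha=0.05):
    """Label a neuron from trial-wise responses to the same stimulus
    presented as a deviant vs. in a control condition.
    The criterion here is illustrative: sign of a significant mean difference."""
    t, p = stats.ttest_ind(deviant_trials, control_trials)
    if p < alpha and t > 0:
        return "putative positive PE"   # deviant > control
    if p < alpha and t < 0:
        return "putative negative PE"   # deviant < control
    return "unclassified"

rng = np.random.default_rng(2)
deviant_responses = rng.normal(2.0, 0.5, 30)   # stronger responses to deviants
control_responses = rng.normal(1.0, 0.5, 30)
print(classify_neuron(deviant_responses, control_responses))
```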
Functional Differences in Neuron Types: Analysis revealed that positive prediction error neurons are more stimulus-selective, while negative prediction error neurons are more responsive to top-down predictions and less selective for stimulus orientation, suggesting distinct roles in sensory processing.
Omission and Deviant Detection: The model and data showed that negative prediction error neurons respond to both omission and deviant detection paradigms, while positive prediction error neurons primarily signal deviant detection, supporting the model's prediction of co-contribution to surprise detection.
Open Questions and Future Research Directions: John outlined open questions regarding the localization and propagation of surprise signals in the brain, the integration of high-dimensional error signals, and plans for future experimental and modeling work.
Localization of Surprise Detection: John raised the question of where surprise signals are first detected in the brain, proposing future studies using whole-brain data to trace the emergence and propagation of surprise across regions.
Integration of Error Signals: The challenge of integrating high-dimensional error signals into a scalar surprise signal was discussed, with John noting the need for further modeling to understand how the brain estimates surprise from distributed sensory information.
Technical Discussion on Learning Rules and Hierarchical Processing: Lucas and John discussed the implementation of the three-factor learning rule, its relation to dendritic processing, and the challenges of connecting error neurons within hierarchical neural circuits.
Three-Factor Learning Rule Mechanism: John explained that the three-factor learning rule operates through pre- and post-synaptic activity modulated by a neuromodulatory signal, and that this mechanism can be implemented with or without explicit dendritic tagging, depending on the model's complexity.
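As an illustrative sketch of the "with tagging" variant (the trace dynamics and constants are assumptions, not the model's specification), a decaying eligibility trace can store the pre/post coincidence until a later neuromodulatory pulse converts it into a weight change:

```python
import numpy as np

def update_trace(trace, pre, post, tau=0.2):
    """Accumulate a synaptic eligibility trace (a 'tag') from the
    coincidence of pre- and post-synaptic activity; it decays over time."""
    return (1.0 - tau) * trace + np.outer(post, pre)

def consolidate(w, trace, modulator, lr=0.01):
    """A later neuromodulatory pulse (the third factor) converts the
    accumulated tag into an actual weight change."""
    return w + lr * modulator * trace

rng = np.random.default_rng(3)
w = rng.normal(scale=0.1, size=(3, 4))
trace = np.zeros_like(w)
for _ in range(10):                            # activity accumulates into the tag
    trace = update_trace(trace, rng.random(4), rng.random(3))
w = consolidate(w, trace, modulator=1.0)       # delayed third factor arrives
```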
Hierarchical Error Integration: Lucas inquired about how positive and negative error signals are combined and used for updating representations in higher layers, with John acknowledging that further excitatory plasticity and distributed learning are likely required for effective hierarchical processing.