- Date: 2026-04-21
- Time: 09:00 AM (PT)
- Location: Teams Meeting
Agenda
Coordinating discussion on the analysis for the data release paper. Go over tasks and check how best to coordinate.
Meeting Recording
Meeting Notes
Data Release Paper Progress and Method Section Updates: Jerome provided an update on the progress of the data release paper, highlighting ongoing work on the method section and the imminent availability of merged NWB files, with Carter assisting in data structuring and documentation for downstream analysis.
Method Section Drafting: Jerome described the extensive progress made on the method section, including completed drafts for surgeries, ISI imaging, behavior training protocol, visual stimulus, stimulus parameters, Neuropixels, and imaging, while noting that behavior tracking, two-photon, and SLAP 2 pre-processing require further input from the imaging team.
NWB File Preparation: Jerome and Carter are working to provide access to merged NWB files containing stimulus and neurophysiology data, with code alignment to facilitate downstream analysis; some files are already on DANDI pending verification, and Carter will add written descriptions of the data structure.
Documentation and Verification: Jerome emphasized the importance of thorough documentation and encouraged the team to ask questions starting next week to ensure the method section is sufficiently detailed for future analyses, aiming to finalize the core content within the next few weeks.
Experimental Notes and CCF Alignment for Neuropixels Data: Alexander raised concerns about identifying probe locations in Neuropixels data, prompting Severine to explain ongoing efforts for CCF alignment and offer to provide experimental notes, with Sarah and Jerome discussing session table flagging and documentation improvements.
CCF Alignment Status: Severine explained that the team is working toward CCF alignment but faces delays due to pending imaging of multiple brains; pilot data already has CCF, which should represent the broader dataset, and users are asked to be patient.
Provision of Experimental Notes: Severine confirmed the ability to provide experimental notes indicating which probe targeted which brain area, suggesting these would be shared on GitHub for clarity and accessibility.
Session Table Flagging: Sarah and Jerome discussed the session table, noting that datasets with registered alignment are flagged in a specific column, and Jerome encouraged users to ask questions on the forum if visibility is unclear.
Documentation Enhancement: Jerome and Severine agreed to bolster documentation to clarify probe IDs and targeted areas, establishing this as an action item for the team.
Data Validation Figures and Quality Control Metrics: Jerome led a discussion on validating data quality through figures, with Sarah, Stefan, and Alexander contributing ideas for metrics, spike sorting statistics, and QC criteria, and Sarah agreeing to coordinate a list of modality-specific measures with input from Nicholas and others.
Figure Design and Metrics: Jerome outlined plans for figures assessing SNR, amplitude, and unit quality across Neuropixels, two-photon, and SLAP 2 techniques, with Sarah clarifying the need for modality-specific metrics and proposing a collaborative list to ensure consistency.
Spike Sorting Statistics: Alexander suggested including statistics from spike sorting pipelines, such as candidate neuron counts and identification rates, which Jerome confirmed are accessible in shared files and could be incorporated into figures.
QC Criteria and Thresholds: Stefan and Sarah discussed the importance of defining QC thresholds for unit classification, with Sarah advocating for transparency in criteria and allowing users to adjust thresholds as needed, referencing previous standards for Neuropixels datasets.
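The threshold-based unit classification discussed above can be sketched as a simple filter over a units table. The metric names (`snr`, `isi_violations_ratio`, `presence_ratio`) and the threshold values below are illustrative assumptions in the spirit of prior Neuropixels releases, not the dataset's actual schema or criteria; as Sarah advocated, users could adjust the thresholds themselves.

```python
import pandas as pd

# Hypothetical units table; column names and values are illustrative,
# not the dataset's actual QC schema.
units = pd.DataFrame({
    "unit_id": [0, 1, 2, 3],
    "snr": [4.2, 1.1, 6.8, 2.9],
    "isi_violations_ratio": [0.02, 0.60, 0.01, 0.08],
    "presence_ratio": [0.99, 0.70, 0.95, 0.90],
})

# Example thresholds (assumed); the point is that they are explicit
# and user-adjustable rather than baked into the released files.
QC = {"snr": 2.0, "isi_violations_ratio": 0.5, "presence_ratio": 0.9}

passing = units[
    (units["snr"] > QC["snr"])
    & (units["isi_violations_ratio"] < QC["isi_violations_ratio"])
    & (units["presence_ratio"] > QC["presence_ratio"])
]
print(passing["unit_id"].tolist())  # → [0, 2]
```

Publishing the threshold dictionary alongside the figures would make the inclusion criteria transparent, and regenerating NWB files with updated criteria (as Jerome described) amounts to rerunning this filter with a different `QC`.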
Pipeline Accessibility: Stefan raised concerns about the difficulty of rerunning QC pipelines, prompting Jerome to explain that raw data and processing assets are accessible via S3, and NWB files can be regenerated with updated inclusion criteria as decided by the team.
LFP and Layer Characterization in Data Analysis: Alexander and Stefan discussed the inclusion of LFP characterization and layer identification in figures, with Severine and Sarah explaining anatomical alignment methods and challenges, and Jerome proposing the addition of layer/brain area figures to enhance downstream analysis.
LFP Metrics and Analysis: Sarah suggested adding basic LFP metrics such as power spectrum, noting the arbitrariness of analysis windows, while Jerome referenced prior analyses by Ali and others that could be incorporated.
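A basic LFP power spectrum of the kind Sarah suggested can be sketched with Welch's method. The sampling rate, the synthetic 8 Hz signal, and the 2 s analysis window below are all assumptions for illustration; the window length is exactly the kind of arbitrary choice noted in the discussion.

```python
import numpy as np
from scipy.signal import welch

fs = 1250.0  # assumed LFP sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic LFP: an 8 Hz oscillation plus noise stands in for real data
lfp = np.sin(2 * np.pi * 8 * t) + 0.5 * rng.standard_normal(t.size)

# Welch power spectral density; the 2 s segment length is an
# arbitrary analysis-window choice and should be reported as such
f, pxx = welch(lfp, fs=fs, nperseg=int(2 * fs))
peak_freq = f[np.argmax(pxx)]
print(peak_freq)  # peak near 8 Hz
```

Reporting `fs`, the window length, and the estimator alongside the figure would let readers reproduce the spectrum or swap in the analyses by Ali and others that Jerome referenced.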
Layer Identification Methods: Severine described the process of delineating cortical layers using CCF alignment, relying on standardized atlas definitions without manual adjustment, and acknowledged the potential for refinement using LFP and CSD data.
Challenges in Layer Resolution: Sarah and Alexander discussed difficulties in calculating absolute layer depths and boundaries, especially in mouse cortex, and proposed agnostic approaches based on CSD sinks and relative units.
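The agnostic, CSD-based approach proposed above can be sketched with a simple second-spatial-derivative estimator on a toy probe. The channel spacing, the synthetic sink, and the plain second-difference CSD below are illustrative assumptions, not the actual pipeline; the point is that the sink is reported in relative units (channel index) rather than absolute depth.

```python
import numpy as np

# Toy LFP snapshot across a linear probe: channels x time samples.
n_ch, n_t = 32, 1000
dz = 20e-6  # assumed 20 um contact spacing
z = np.arange(n_ch)[:, None]
t = np.linspace(0, 1, n_t)[None, :]
# Synthetic LFP with a Gaussian "sink" centered at channel 16
lfp = -np.exp(-((z - 16) ** 2) / 8.0) * np.sin(2 * np.pi * 5 * t)

# Simple CSD estimate: negative second spatial derivative of the LFP
csd = -np.diff(lfp, n=2, axis=0) / dz**2

# Locate the strongest sink (most negative CSD) in relative units,
# agnostic to absolute layer depth
sink_channel = np.unravel_index(np.argmin(csd), csd.shape)[0] + 1
print(sink_channel)  # → 16
```

Anchoring layer boundaries to such sinks, and expressing depths relative to them, sidesteps the absolute-depth problem Sarah and Alexander raised while remaining comparable across animals.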
Anatomy and Physiology Integration: Stefan recommended providing both anatomical and physiological data for conflict resolution, suggesting that Severine's pipeline could generate standard CCF outputs for users to interpret, while Alexander noted limitations due to individual variation.
Receptive Field Mapping and AI-Driven Analysis: Alexander shared progress on receptive field mapping across modalities using Python notebooks and Claude Code, with Jerome proposing Alexander and Lucas lead further efforts, and the group discussing sharing skill files, AI-driven pipelines, and standards for reproducibility.
Receptive Field Mapping Results: Alexander reported successful receptive field and orientation tuning analyses for all three techniques, noting high quality in two-photon and Neuropixels data, and broader, noisier results in SLAP 2 pilot data, with figures available on GitHub.
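The orientation tuning analysis mentioned above can be summarized per unit by a selectivity index. A common convention, sketched below on a hypothetical tuning curve (the direction grid and response values are synthetic, not from the released data), is 1 minus the circular variance computed in orientation space with angles doubled.

```python
import numpy as np

# Hypothetical tuning curve: mean responses at 8 stimulus directions.
directions = np.arange(0, 360, 45)  # degrees
responses = np.array([10.0, 4.0, 1.0, 4.0, 9.0, 4.0, 1.0, 4.0])

# Global OSI (1 - circular variance): vector average of responses in
# orientation space (angles doubled so 0 and 180 deg coincide)
theta = np.deg2rad(2 * directions)
osi = np.abs(np.sum(responses * np.exp(1j * theta))) / np.sum(responses)
print(round(osi, 3))
```

An OSI near 1 indicates sharp orientation tuning and near 0 an untuned unit, giving a single modality-agnostic number that could be compared across two-photon, Neuropixels, and SLAP 2 figures.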
AI-Driven Analysis Pipelines: Alexander described using Claude Code and skill files to automate analysis, highlighting complexity and the need for sharing files to synchronize efforts, while Jerome and Stefan discussed including AI-generated code and methods in the paper.
Standards and Reproducibility: Sarah and Karim advocated for parallel, independent analysis pipelines to cross-validate AI-generated results, suggesting that code be associated with specific files in supplementary materials, and Jerome committed to structuring figure code documentation for transparency.
Figure Creation and Software Standardization: Alexander raised the question of standardizing figure creation software, with Jerome, Sarah, Stefan, and Nicholas discussing the use of vector-compatible formats, free tools like Inkscape, and the strengths and limitations of AI-generated code for figure design.
Software Preferences and Compatibility: Sarah emphasized the importance of saving figures in vector-compatible formats, noting that software choice is flexible as long as outputs are generic, and Jerome shared experience generating figures entirely with matplotlib.
AI and Free Tools: Stefan and Nicholas discussed the utility of Claude Code and Inkscape for figure creation, acknowledging AI's strengths in generating code for figures but cautioning against blindly trusting experimental outputs due to potential hallucinations.
Next Steps and Meeting Scheduling: Jerome outlined next steps, including structuring figure code documentation, delegating receptive field mapping to Alexander and Lucas, and scheduling the next meeting in two weeks due to an upcoming advisory board session.
Action Items and Delegation: Jerome tasked Alexander and Lucas with leading receptive field mapping efforts, and committed to organizing figure code documentation to enhance reproducibility and transparency.
Meeting Schedule: Jerome informed the group of an advisory board meeting next week, suggesting the team reconvene in two weeks to review progress and discuss generated content.