Experimental Launcher
The Experimental Launcher is an automated system that manages the complete lifecycle of running Bonsai experiments, from repository setup to experiment execution. It ensures reproducible, reliable experiment execution with automatic package management and error handling.
Overview
The launcher provides a unified interface for running Bonsai experiments with the following key capabilities:
- Automated Repository Management: Downloads and syncs experiment code from GitHub
- Bonsai Installation & Package Management: Automatically installs and verifies Bonsai packages
- Experiment Execution: Launches Bonsai workflows with proper parameter handling
- Error Recovery: Automatically fixes common package and dependency issues
- Data Management: Handles experiment data storage and session tracking
Architecture
Core Components
BonsaiExperiment Class (`bonsai_experiment_launcher.py`)
- Main orchestrator for experiment execution
- Handles all phases of experiment lifecycle
- Provides error handling and recovery mechanisms
Parameter Management
- JSON-based configuration system
- Environment-specific settings (camstim config integration)
- Session metadata and UUID generation
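As a rough illustration of this component (the function and key names below are illustrative, not the launcher's exact API), parameter handling amounts to reading the JSON file and attaching per-session metadata:

```python
import datetime
import json
import uuid

def load_params(param_file):
    """Load a JSON parameter file and attach per-session metadata."""
    with open(param_file, "r") as f:
        params = json.load(f)

    # The launcher tags every run with a session UUID and a start time.
    params["session_uuid"] = str(uuid.uuid4())
    params["start_time"] = datetime.datetime.now().isoformat()
    return params
```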
Process Management
- Windows job object integration for proper cleanup
- Process monitoring and timeout handling
- Signal handling for graceful shutdown
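On Windows, guaranteed child-process cleanup is commonly achieved with a job object configured to kill its members when the job handle closes. A minimal sketch of that pattern, assuming pywin32 is available (the helper name is illustrative, not the launcher's exact API):

```python
import subprocess

import win32api
import win32con
import win32job

def launch_in_job(cmd):
    """Start a child process inside a job object that kills it when the job handle closes."""
    job = win32job.CreateJobObject(None, "")
    info = win32job.QueryInformationJobObject(
        job, win32job.JobObjectExtendedLimitInformation)
    info["BasicLimitInformation"]["LimitFlags"] |= (
        win32job.JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE)
    win32job.SetInformationJobObject(
        job, win32job.JobObjectExtendedLimitInformation, info)

    proc = subprocess.Popen(cmd)
    handle = win32api.OpenProcess(win32con.PROCESS_ALL_ACCESS, False, proc.pid)
    win32job.AssignProcessToJobObject(job, handle)
    return proc, job  # keep the job handle alive for the lifetime of the process
```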
Experiment Lifecycle
Phase 1: Repository Setup
- Clone/update repository from GitHub
- Checkout specific commit or branch
- Verify repository integrity
- Set up local working directory
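A minimal sketch of this phase using plain git commands (the helper name is illustrative and error handling is omitted; the real launcher may use a different mechanism):

```python
import os
import subprocess

def setup_repository(repo_url, commit, local_path):
    """Clone the repository if needed, then check out the requested commit or branch."""
    if not os.path.isdir(os.path.join(local_path, ".git")):
        subprocess.check_call(["git", "clone", repo_url, local_path])
    subprocess.check_call(["git", "fetch", "origin"], cwd=local_path)
    subprocess.check_call(["git", "checkout", commit], cwd=local_path)
```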
Phase 2: Bonsai Installation
- Locate Bonsai executable in repository
- Verify Bonsai installation
- Run setup scripts if needed
- Validate Bonsai version compatibility
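This phase can be sketched as follows, assuming the default relative paths from the parameter table later in this page (the helper name and error handling are illustrative):

```python
import os
import subprocess

def ensure_bonsai(repo_path,
                  exe_rel="code/stimulus-control/bonsai/Bonsai.exe",
                  setup_rel="code/stimulus-control/bonsai/setup.cmd"):
    """Verify the Bonsai executable exists, running the setup script if it does not."""
    exe_path = os.path.join(repo_path, exe_rel)
    if not os.path.isfile(exe_path):
        setup_path = os.path.join(repo_path, setup_rel)
        subprocess.check_call(["cmd", "/c", setup_path],
                              cwd=os.path.dirname(setup_path))
    if not os.path.isfile(exe_path):
        raise RuntimeError("Bonsai executable not found: " + exe_path)
    return exe_path
```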
Phase 3: Package Verification
- Parse Bonsai.config for required packages
- Scan installed packages directory
- Compare versions and dependencies
- Auto-reinstall if mismatches detected
Phase 4: Experiment Execution
- Generate session UUID and metadata
- Prepare experiment parameters
- Launch Bonsai workflow with `--start --no-editor`
- Monitor the process, capturing and logging its output
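A hedged sketch of the launch step (only the `--start` and `--no-editor` flags named above are shown; the output handling and timeout value are illustrative):

```python
import subprocess

def run_workflow(bonsai_exe, workflow_path, timeout_s=3600):
    """Launch a Bonsai workflow headlessly and capture its output."""
    cmd = [bonsai_exe, workflow_path, "--start", "--no-editor"]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            universal_newlines=True)
    try:
        stdout, stderr = proc.communicate(timeout=timeout_s)
    except subprocess.TimeoutExpired:
        proc.kill()  # enforce the timeout before re-raising
        raise
    return proc.returncode, stdout, stderr
```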
Phase 5: Cleanup & Data Management
- Ensure proper process termination
- Save experiment data and metadata
- Generate session reports
- Clean up temporary resources
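The process-termination part of this phase can be sketched as follows (the grace period and helper name are illustrative):

```python
import subprocess

def ensure_stopped(proc, grace_s=10):
    """Terminate the Bonsai process if it is still running, escalating to kill."""
    if proc.poll() is None:
        proc.terminate()
        try:
            proc.wait(timeout=grace_s)
        except subprocess.TimeoutExpired:
            proc.kill()
            proc.wait()
```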
Configuration
Parameter File Structure
The launcher uses JSON parameter files with the following structure:
```json
{
  "repository_url": "https://github.com/AllenNeuralDynamics/openscope-community-predictive-processing.git",
  "commit_hash": "main",
  "local_repository_path": "C:/BonsaiDataPredictiveProcessingTest",
  "bonsai_path": "code/stimulus-control/src/Standard_oddball_slap2.bonsai",
  "mouse_id": "test_mouse",
  "user_id": "test_user"
}
```
Key Parameters
| Parameter | Description | Required | Default |
|---|---|---|---|
| `repository_url` | GitHub repository URL | Yes | - |
| `commit_hash` | Specific commit or branch name | Yes | - |
| `local_repository_path` | Local directory for repository | Yes | - |
| `bonsai_path` | Relative path to workflow file | Yes | - |
| `mouse_id` | Subject identifier | Yes | - |
| `user_id` | Experimenter identifier | Yes | - |
| `bonsai_exe_path` | Relative path to Bonsai executable | No | `code/stimulus-control/bonsai/Bonsai.exe` |
| `bonsai_setup_script` | Path to package installation script | No | `code/stimulus-control/bonsai/setup.cmd` |
Package Management
Automatic Package Verification
The launcher automatically verifies that installed Bonsai packages match the requirements specified in `Bonsai.config`. This includes:
- Version Matching: Ensures exact version compatibility
- Dependency Resolution: Verifies all package dependencies
- Missing Package Detection: Identifies packages that need installation
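The idea can be sketched as follows, assuming the usual `<Package id="..." version="..."/>` entries in `Bonsai.config` and version-suffixed folder names in the Packages directory (the helper name is illustrative):

```python
import os
import xml.etree.ElementTree as ET

def find_mismatches(config_path, packages_dir):
    """Return required packages whose exact id.version folder is not installed."""
    required = {pkg.get("id"): pkg.get("version")
                for pkg in ET.parse(config_path).getroot().iter("Package")}
    installed = set(os.listdir(packages_dir)) if os.path.isdir(packages_dir) else set()
    return {pkg_id: version for pkg_id, version in required.items()
            if "{0}.{1}".format(pkg_id, version) not in installed}
```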
Auto-Reinstall System
The launcher automatically fixes package issues when detected:
- Detection: Identifies version mismatches or missing packages
- Cleanup: Removes the entire Packages directory for a clean slate
- Reinstallation: Runs the setup script to install all packages fresh
- Verification: Re-checks packages to confirm the fix worked
This ensures experiments always run with the correct package versions.
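Put together, the detect/cleanup/reinstall/verify loop looks roughly like this (reusing the hypothetical `find_mismatches` helper from the sketch above; the setup-script invocation is illustrative):

```python
import shutil
import subprocess

def auto_reinstall(config_path, packages_dir, setup_script):
    """Wipe and reinstall Bonsai packages when mismatches are detected."""
    if not find_mismatches(config_path, packages_dir):
        return True                                        # already consistent
    shutil.rmtree(packages_dir, ignore_errors=True)         # clean slate
    subprocess.check_call(["cmd", "/c", setup_script])      # fresh install
    return not find_mismatches(config_path, packages_dir)   # confirm the fix
```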
Usage Examples
Basic Usage
```python
from bonsai_experiment_launcher import BonsaiExperiment

# Create experiment instance
experiment = BonsaiExperiment()

# Run experiment with parameter file
success = experiment.run('experiment_params.json')

if success:
    print("Experiment completed successfully!")
else:
    print("Experiment failed - check logs for details")
```
Testing and Development
```bash
# Test the launcher with sample parameters
python test_bonsai_launcher.py

# Use custom parameter file
python test_bonsai_launcher.py --param-file my_params.json
```
Integration with Camstim Agent
The launcher is designed to be executed by the camstim agent, part of the Allen Institute's internal session management system. The integration works as follows:
Agent-Based Execution
- Agent receives: A script path and YAML parameter file from the experiment scheduling system
- Agent calls: The experimental launcher with the provided parameters
- Agent monitors: Experiment execution and handles results
Parameter Flow
1. Experiment scheduling system → YAML parameters → camstim agent
2. camstim agent → experimental launcher
3. experimental launcher → experiment execution → results back to agent
Camstim Infrastructure Integration
- Configuration: Reads from `C:/ProgramData/AIBS_MPE/camstim/config/stim.cfg`
- Data Storage: Saves experiment data to camstim-compatible pickle files
- Session Management: Uses camstim session UUID format and metadata structure
- Logging: Compatible with camstim logging standards and tracking formats
- Process Management: Integrates with camstim's process monitoring and cleanup
Expected Usage Pattern
```python
# This is typically called by the camstim agent, not directly by users
from bonsai_experiment_launcher import BonsaiExperiment

# Agent provides the YAML file path
experiment = BonsaiExperiment()
success = experiment.run('agent_provided_params.json')  # Agent converts YAML→JSON

# Results are automatically saved in camstim-compatible format
# Agent handles success/failure reporting back to scheduling system
```
Camstim-Compatible Output Format
The launcher generates pickle files with the same structure as standard camstim experiments:
- Platform and session metadata
- Experiment parameters and checksums
- Behavioral data integration points
- LIMS-compatible data structure
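As a hypothetical illustration only (the exact keys expected by camstim and LIMS are not reproduced here), the written pickle is a nested dictionary along these lines:

```python
import pickle

# All keys below are placeholders; the real structure follows camstim conventions.
session_output = {
    "platform_info": {},      # host, rig, and launcher version details
    "session_uuid": "...",    # UUID generated at launch
    "mouse_id": "test_mouse",
    "user_id": "test_user",
    "params": {},             # the experiment parameters used for this run
    "script_checksum": None,  # checksum of the workflow that was executed
    "behavior": {},           # integration point for behavioral data
}

with open("session_output.pkl", "wb") as f:
    pickle.dump(session_output, f)
```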
See Also
- Bonsai Instructions - Basic Bonsai usage
- Bonsai for Python Programmers - Integration concepts
- Standard Oddball - Example experiment workflow