API Reference - Base Module
The base module provides the foundational classes and utilities for all OpenScope experimental launchers.
BaseExperiment Class
- class openscope_experimental_launcher.base.experiment.BaseExperiment[source]
Bases: object
Base class for OpenScope experimental launchers.
Provides core functionality for:
- Parameter loading and management
- Bonsai process management
- Repository setup and version control
- Process monitoring and memory management
- Session tracking and basic output file generation
Properties:
- session_uuid
- subject_id
- user_id
- start_time
- stop_time
Key Methods:
- collect_runtime_information()[source]
Collect key information from the user at runtime.
This method can be extended in derived classes to collect rig-specific information.
- post_experiment_processing()[source]
Perform post-experiment processing specific to each rig type. This method should be overridden in each rig-specific launcher.
The default implementation does nothing; each rig should implement its own data-reformatting logic here.
- Return type:
bool
- Returns:
True if successful, False otherwise
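For example, a rig-specific launcher might override it as in this minimal sketch (MyRigExperiment and its reformatting step are hypothetical):
from openscope_experimental_launcher.base.experiment import BaseExperiment
import logging

class MyRigExperiment(BaseExperiment):
    def post_experiment_processing(self) -> bool:
        # Hypothetical rig-specific data reformatting
        logging.info("Reformatting acquired data for this rig")
        return True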
- determine_session_directory()[source]
Determine or generate output directory path using AIND data schema standards.
- save_experiment_metadata(output_directory, param_file=None)[source]
Save experiment metadata to the output directory.
This includes:
- The original parameter JSON file
- Command-line arguments used to run the experiment
- Runtime information and system details
- Experiment logs (if available)
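A minimal usage sketch, assuming parameters have already been loaded and the session directory determined:
experiment = BaseExperiment()
experiment.load_parameters("params.json")

output_dir = experiment.determine_session_directory()
if output_dir:
    # Persist the parameter file and runtime details alongside the data
    experiment.save_experiment_metadata(output_dir, param_file="params.json")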
- classmethod create_argument_parser(description=None)[source]
Create a standard argument parser for experiment launchers.
- Parameters:
description (str) – Description for the argument parser
- Return type:
ArgumentParser
- Returns:
Configured ArgumentParser instance
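A minimal sketch of a launcher script using this helper (the specific arguments the parser defines are not listed here, so none are accessed):
from openscope_experimental_launcher.base.experiment import BaseExperiment

parser = BaseExperiment.create_argument_parser(description="Run an OpenScope experiment")
args = parser.parse_args()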
Configuration and Utilities
ConfigLoader
GitManager
ProcessMonitor
- class openscope_experimental_launcher.utils.process_monitor.ProcessMonitor(kill_threshold=90.0)[source]
Handles process monitoring with memory usage tracking and runaway detection.
Monitors process health, memory usage, and handles cleanup operations.
- __init__(kill_threshold=90.0)[source]
Initialize the process monitor.
- Parameters:
kill_threshold (float) – Memory usage threshold (percentage) above initial usage that triggers process termination
- monitor_process(process, initial_memory_percent, kill_callback=None)[source]
Monitor a process until it completes or exceeds memory threshold.
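A minimal usage sketch, assuming monitor_process blocks until the process exits or the threshold trips, and that kill_callback is invoked on a violation (both assumptions based on the signature above):
import subprocess
import psutil
from openscope_experimental_launcher.utils.process_monitor import ProcessMonitor

# Launch a process to supervise (a stand-in for a Bonsai workflow)
process = subprocess.Popen(["ping", "localhost"])

# Baseline system memory usage at launch time
initial_memory_percent = psutil.virtual_memory().percent

monitor = ProcessMonitor(kill_threshold=90.0)
monitor.monitor_process(process, initial_memory_percent, kill_callback=process.kill)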
Example Usage
Basic Experiment
from openscope_experimental_launcher.base.experiment import BaseExperiment
import logging
# Set up logging
logging.basicConfig(level=logging.INFO)
# Create experiment instance
experiment = BaseExperiment()
# Run with parameter file
success = experiment.run("experiment_params.json")
if success:
    print("Experiment completed successfully!")
    print(f"Session UUID: {experiment.session_uuid}")
    print(f"Subject ID: {experiment.subject_id}")
    print(f"User ID: {experiment.user_id}")
    if experiment.start_time and experiment.stop_time:
        print(f"Duration: {experiment.stop_time - experiment.start_time}")
else:
    print("Experiment failed. Check logs for details.")
    errors = experiment.get_bonsai_errors()
    print(f"Bonsai errors: {errors}")
Manual Process Control
from openscope_experimental_launcher.base.experiment import BaseExperiment
experiment = BaseExperiment()

try:
    # Load parameters manually
    experiment.load_parameters("params.json")

    # Start Bonsai process
    experiment.start_bonsai()

    # Monitor progress (this blocks until completion)
    print(f"Experiment running with PID: {experiment.bonsai_process.pid}")
except Exception as e:
    print(f"Error during experiment: {e}")
finally:
    # Ensure cleanup
    experiment.stop()
Parameter Access
experiment = BaseExperiment()
experiment.load_parameters("params.json")
# Access loaded parameters
print(f"Subject ID: {experiment.subject_id}")
print(f"User ID: {experiment.user_id}")
print(f"Repository URL: {experiment.params['repository_url']}")
print(f"Bonsai path: {experiment.params['bonsai_path']}")
# Check parameter validation
print(f"Parameter checksum: {experiment.params_checksum}")
Session Metadata
experiment = BaseExperiment()
success = experiment.run("params.json")
if success:
    # Access session information
    session_info = {
        'uuid': experiment.session_uuid,
        'subject_id': experiment.subject_id,
        'user_id': experiment.user_id,
        'start_time': experiment.start_time.isoformat(),
        'end_time': experiment.stop_time.isoformat(),
        'duration_seconds': (experiment.stop_time - experiment.start_time).total_seconds(),
        'output_directory': experiment.session_directory,
        'parameter_checksum': experiment.params_checksum,
        'workflow_checksum': experiment.script_checksum,
    }
    print(f"Session metadata: {session_info}")
Error Handling
from openscope_experimental_launcher.base.experiment import BaseExperiment
import logging
def robust_experiment_runner(params_file):
    """Run an experiment with comprehensive error handling."""
    experiment = BaseExperiment()
    try:
        # Validate parameters first
        experiment.load_parameters(params_file)

        # Check required fields
        required_fields = ['repository_url', 'bonsai_path', 'subject_id', 'user_id']
        missing_fields = [field for field in required_fields
                          if not experiment.params.get(field)]
        if missing_fields:
            raise ValueError(f"Missing required parameters: {missing_fields}")

        # Run experiment
        success = experiment.run(params_file)
        if not success:
            # Get detailed error information
            bonsai_errors = experiment.get_bonsai_errors()
            logging.error(f"Experiment failed. Bonsai errors: {bonsai_errors}")

            # Check process return code
            if experiment.bonsai_process:
                return_code = experiment.bonsai_process.returncode
                logging.error(f"Bonsai exit code: {return_code}")
            return False
        return True
    except FileNotFoundError as e:
        logging.error(f"Parameter file not found: {e}")
        return False
    except ValueError as e:
        logging.error(f"Parameter validation error: {e}")
        return False
    except Exception as e:
        logging.error(f"Unexpected error: {e}")
        return False
    finally:
        # Ensure cleanup
        experiment.stop()
Constants and Enums
Platform Information
The base experiment automatically detects system information:
experiment = BaseExperiment()
platform_info = experiment.platform_info
# Returns dictionary with:
# {
# 'python': '3.11.0',
# 'os': ('Windows', '10', '10.0.19041'),
# 'hardware': ('Intel64 Family 6 Model 142 Stepping 10, GenuineIntel', 'AMD64'),
# 'computer_name': 'DESKTOP-ABC123',
# 'rig_id': 'behavior_rig_1'
# }
Default Paths
# Default configuration locations
DEFAULT_CONFIG_PATH = "C:/ProgramData/AIBS_MPE/camstim/config/stim.cfg"
DEFAULT_OUTPUT_DIR = "data"
DEFAULT_REPO_PATH = "C:/BonsaiTemp"
Windows Integration
Process Management
The base experiment uses Windows-specific APIs for robust process control:
# Windows job objects are automatically created
experiment = BaseExperiment()
# Check if Windows modules are available
if experiment.hJob:
print("Windows job object created for process management")
else:
print("Limited process management (Windows modules not available)")
Memory Monitoring
# Memory usage is automatically monitored during experiments
experiment = BaseExperiment()
experiment.run("params.json")
# Access memory usage information
if hasattr(experiment, '_percent_used'):
print(f"Initial memory usage: {experiment._percent_used}%")
Extending BaseExperiment
Creating Custom Launchers
import json
import logging
import os

from openscope_experimental_launcher.base.experiment import BaseExperiment

class CustomExperiment(BaseExperiment):
    """Custom experiment launcher with additional features."""

    def __init__(self):
        super().__init__()
        self.custom_output_path = None
        logging.info("Custom experiment initialized")

    def load_parameters(self, param_file):
        """Override to add custom parameter validation."""
        super().load_parameters(param_file)

        # Add custom parameter processing
        if 'custom_setting' in self.params:
            self._validate_custom_setting(self.params['custom_setting'])

    def post_experiment_processing(self) -> bool:
        """Override to add custom post-processing."""
        success = super().post_experiment_processing()
        if success:
            # Add custom processing
            self._generate_custom_outputs()
        return success

    def _validate_custom_setting(self, setting):
        """Custom parameter validation."""
        if not isinstance(setting, str):
            raise ValueError("custom_setting must be a string")

    def _generate_custom_outputs(self):
        """Generate custom output files."""
        output_dir = self.session_directory
        self.custom_output_path = os.path.join(
            output_dir,
            f"{self.session_uuid}_custom.json"
        )

        custom_data = {
            'session_uuid': self.session_uuid,
            'custom_metadata': self.params.get('custom_setting', 'default')
        }

        with open(self.custom_output_path, 'w') as f:
            json.dump(custom_data, f, indent=2)

        logging.info(f"Custom output saved: {self.custom_output_path}")
Method Reference
Core Methods
- BaseExperiment.run(param_file: str | None = None) → bool
Run the complete experiment pipeline.
- Parameters:
param_file – Path to JSON parameter file
- Returns:
True if successful, False otherwise
This method orchestrates the entire experiment execution, as sketched after this list:
1. Load and validate parameters
2. Set up the Git repository
3. Start the Bonsai process
4. Monitor execution
5. Perform post-processing
6. Clean up resources
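A conceptual sketch of that pipeline using only the documented public methods (not the actual implementation of run(); the repository-setup and monitoring steps are internal and appear here as comments):
experiment = BaseExperiment()
try:
    experiment.load_parameters("params.json")      # 1. load and validate parameters
    # 2. repository setup happens internally (see GitManager)
    experiment.start_bonsai()                      # 3. start the Bonsai process
    # 4. execution is monitored internally (see ProcessMonitor)
    experiment.post_experiment_processing()        # 5. post-processing
finally:
    experiment.stop()                              # 6. clean up resources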
- BaseExperiment.load_parameters(param_file: str | None)
Load and validate experiment parameters.
- Parameters:
param_file – Path to JSON parameter file
- BaseExperiment.start_bonsai()
Start the Bonsai workflow as a subprocess.
Creates the Bonsai process with proper argument passing and begins monitoring.
- BaseExperiment.stop()
Stop the Bonsai process gracefully.
Attempts graceful termination first, then forces termination if necessary.
Utility Methods
- BaseExperiment.determine_session_directory() → str | None
Determine or generate output directory path using AIND data schema standards.
- Returns:
Full path to the output directory, or None if not determinable
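A minimal sketch, assuming session-describing parameters have already been loaded:
experiment = BaseExperiment()
experiment.load_parameters("params.json")

session_dir = experiment.determine_session_directory()
if session_dir is None:
    print("Session directory could not be determined from the loaded parameters")
else:
    print(f"Output directory: {session_dir}")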