API Reference - Base Module

The base module provides the foundational classes and utilities for all OpenScope experimental launchers.

BaseExperiment Class

class openscope_experimental_launcher.base.experiment.BaseExperiment[source]

Bases: object

Base class for OpenScope experimental launchers.

Provides core functionality for:

  • Parameter loading and management

  • Bonsai process management

  • Repository setup and version control

  • Process monitoring and memory management

  • Basic output file generation

The class also tracks session metadata (session UUID, subject, user, and start/stop times).

Key Methods:

run(param_file=None) – Run the experiment with the given parameters.

load_parameters(param_file) – Load parameters from a JSON file.

start_bonsai() – Start the Bonsai workflow using BonsaiInterface.

stop() – Stop the Bonsai process if it’s running.

post_experiment_processing() – Perform rig-specific post-experiment processing (overridden by rig-specific launchers).

Each of these methods is documented in full in the member listing below.
Properties:

session_uuid
subject_id
user_id
start_time
stop_time
__init__()[source]

Initialize the base experiment with core functionality.

collect_runtime_information()[source]

Collect key information from user at runtime.

This method can be extended in derived classes to collect rig-specific information.

Return type:

Dict[str, str]

Returns:

Dictionary containing collected runtime information

load_parameters(param_file)[source]

Load parameters from a JSON file.

Parameters:

param_file (Optional[str]) – Path to the JSON parameter file

start_bonsai()[source]

Start the Bonsai workflow using BonsaiInterface.

kill_process()[source]

Kill the Bonsai process immediately.

stop()[source]

Stop the Bonsai process if it’s running.

get_bonsai_errors()[source]

Return any errors reported by Bonsai.

Return type:

str

cleanup()[source]

Clean up resources when the script exits.

signal_handler(sig, frame)[source]

Handle Ctrl+C and other signals.

post_experiment_processing()[source]

Perform post-experiment processing specific to each rig type. This method should be overridden in each rig-specific launcher.

Default implementation does nothing - each rig should implement its own data reformatting logic here.

Return type:

bool

Returns:

True if successful, False otherwise

run(param_file=None)[source]

Run the experiment with the given parameters.

Parameters:

param_file (Optional[str]) – Path to the JSON parameter file

Return type:

bool

Returns:

True if successful, False otherwise

determine_session_directory()[source]

Determine or generate output directory path using AIND data schema standards.

Parameters:

None

Return type:

Optional[str]

Returns:

Full path to the output directory, or None if not determinable

save_experiment_metadata(output_directory, param_file=None)[source]

Save experiment metadata to the output directory.

This includes:

  • Original parameter JSON file

  • Command line arguments used to run the experiment

  • Runtime information and system details

  • Experiment logs (if available)

Parameters:
  • output_directory (str) – Directory where metadata should be saved

  • param_file (Optional[str]) – Path to the original parameter file (if available)
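A minimal sketch of what this metadata capture could look like. The function name `write_metadata` and the output file name `launcher_metadata.json` are illustrative assumptions, not the launcher's actual layout:

```python
import json
import os
import platform
import sys
from datetime import datetime, timezone

def write_metadata(output_directory, param_file=None):
    """Sketch: persist launch context alongside experiment output.

    Hypothetical helper; the real launcher's file names may differ.
    """
    os.makedirs(output_directory, exist_ok=True)
    metadata = {
        "command_line": sys.argv,           # how the experiment was invoked
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "python_version": platform.python_version(),
        "param_file": param_file,
    }
    # Preserve the original parameter file contents if one was provided
    if param_file and os.path.exists(param_file):
        with open(param_file) as f:
            metadata["parameters"] = json.load(f)
    out_path = os.path.join(output_directory, "launcher_metadata.json")
    with open(out_path, "w") as f:
        json.dump(metadata, f, indent=2)
    return out_path
```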

classmethod create_argument_parser(description=None)[source]

Create a standard argument parser for experiment launchers.

Parameters:

description (str) – Description for the argument parser

Return type:

ArgumentParser

Returns:

Configured ArgumentParser instance

classmethod run_from_args(args)[source]

Run the experiment from parsed command line arguments.

Parameters:

args (Namespace) – Parsed command line arguments

Return type:

int

Returns:

Exit code (0 for success, 1 for failure)

classmethod main(description=None, args=None)[source]

Main entry point for experiment launchers.

Parameters:
  • description (str) – Description for the argument parser

  • args (List[str]) – Command line arguments (defaults to sys.argv)

Return type:

int

Returns:

Exit code (0 for success, 1 for failure)
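The three classmethods together form a standard entry point: build a parser, parse arguments, run, and map success to an exit code. A hedged sketch of that pattern using plain `argparse` (the positional `param_file` argument is an assumption about the parser's layout, and the run step is stubbed):

```python
import argparse

def build_parser(description=None):
    """Sketch of the kind of parser create_argument_parser() likely returns.

    The specific argument layout below is an assumption.
    """
    parser = argparse.ArgumentParser(description=description)
    parser.add_argument("param_file", nargs="?", default=None,
                        help="Path to the JSON parameter file")
    return parser

def main(argv=None):
    """Mirrors the main() flow: parse args, run, return 0 or 1."""
    args = build_parser("Demo launcher").parse_args(argv)
    # Stand-in for BaseExperiment().run(args.param_file)
    success = args.param_file is not None
    return 0 if success else 1
```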

setup_continuous_logging(output_directory, centralized_log_dir=None)[source]

Set up continuous logging to output directory and optionally centralized location.

Parameters:
  • output_directory (str) – Directory where experiment-specific logs should be saved

  • centralized_log_dir (Optional[str]) – Optional centralized logging directory

finalize_logging()[source]

Finalize logging at the end of the experiment.

Logs final session information and closes file handlers.
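A minimal sketch of the dual-destination logging these two methods describe, using the standard `logging` module. The helper names and the `experiment.log` file name are assumptions:

```python
import logging
import os

def attach_file_logging(output_directory, centralized_log_dir=None):
    """Sketch: mirror log records to the session directory and, optionally,
    a centralized location, as setup_continuous_logging() describes."""
    handlers = []
    targets = [output_directory]
    if centralized_log_dir:
        targets.append(centralized_log_dir)
    for target in targets:
        os.makedirs(target, exist_ok=True)
        handler = logging.FileHandler(os.path.join(target, "experiment.log"))
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        logging.getLogger().addHandler(handler)
        handlers.append(handler)
    return handlers

def detach_file_logging(handlers):
    """Sketch of finalize_logging(): flush and close the file handlers."""
    for handler in handlers:
        handler.close()
        logging.getLogger().removeHandler(handler)
```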

Configuration and Utilities

ConfigLoader

class openscope_experimental_launcher.utils.config_loader.ConfigLoader[source]

Handles loading and parsing of CamStim-compatible configuration files.

__init__()[source]

Initialize the configuration loader.

load_config(params)[source]

Load configuration from CamStim config files.

Parameters:

params (Dict[str, Any]) – Experiment parameters that may contain config path overrides

Return type:

Dict[str, Dict[str, Any]]

Returns:

Dictionary containing all configuration sections
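CamStim-style configuration files are ini-like, so section parsing can be sketched with the standard `configparser`. This is a simplification; the real loader may additionally evaluate Python-literal values, which this sketch omits:

```python
import configparser

def load_config_sections(config_path):
    """Sketch: parse an ini-style (CamStim-like) config file into a
    dictionary of sections, each mapping option names to raw strings."""
    parser = configparser.ConfigParser()
    parser.read(config_path)
    return {section: dict(parser.items(section))
            for section in parser.sections()}
```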

GitManager

class openscope_experimental_launcher.utils.git_manager.GitManager[source]

Manages Git repository operations for workflow management, including cloning, checkout, and version tracking.

__init__()[source]

Initialize the Git manager.

setup_repository(params)[source]

Set up the repository based on parameters.

Parameters:

params (Dict[str, Any]) – Dictionary containing repository configuration

Return type:

bool

Returns:

True if successful, False otherwise

get_repository_path(params)[source]

Get the full path to the cloned repository.

Parameters:

params (Dict[str, Any]) – Dictionary containing repository configuration

Return type:

Optional[str]

Returns:

Path to repository or None if not configured
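A sketch of the path-derivation logic `get_repository_path` describes. The `repository_url` and `local_repo_path` keys match the parameter examples elsewhere in this document, but the directory layout itself is an assumption:

```python
import os

def repository_path(params, base_dir="C:/BonsaiTemp"):
    """Sketch: derive the local clone location from the repository URL in
    the params dict; return None when no repository is configured."""
    url = params.get("repository_url")
    if not url:
        return None
    # "https://github.com/org/workflows.git" -> "workflows"
    repo_name = url.rstrip("/").rsplit("/", 1)[-1].removesuffix(".git")
    return os.path.join(params.get("local_repo_path", base_dir), repo_name)
```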

ProcessMonitor

class openscope_experimental_launcher.utils.process_monitor.ProcessMonitor(kill_threshold=90.0)[source]

Monitors process health and memory usage, detects runaway processes, and handles cleanup operations.

__init__(kill_threshold=90.0)[source]

Initialize the process monitor.

Parameters:

kill_threshold (float) – Memory usage threshold (percentage) above initial usage that triggers process termination

monitor_process(process, initial_memory_percent, kill_callback=None)[source]

Monitor a process until it completes or exceeds memory threshold.

Parameters:
  • process (Popen) – The subprocess to monitor

  • initial_memory_percent (float) – Initial memory usage percentage

  • kill_callback (Optional[Callable]) – Function to call if process needs to be killed

get_process_memory_info(process)[source]

Get detailed memory information for a process.

Parameters:

process (Popen) – The subprocess to inspect

Return type:

dict

Returns:

Dictionary containing memory information

is_process_responsive(process, timeout=5.0)[source]

Check whether a process responds within the given timeout.

Parameters:
  • process (Popen) – The subprocess to check

  • timeout (float) – Timeout in seconds

Return type:

bool

Returns:

True if process is responsive, False otherwise
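A simplified, dependency-free sketch of the monitoring loop. Here `get_percent` stands in for a real system memory query (the actual class likely uses a library such as psutil), and the threshold check mirrors the "percentage points above initial usage" rule described above:

```python
import time

def monitor(process, initial_percent, get_percent,
            kill_threshold=90.0, kill_callback=None, poll_interval=0.1):
    """Sketch of monitor_process(): poll until the process exits, killing it
    if memory grows more than kill_threshold percentage points above the
    initial reading. `get_percent` is an injected measurement callable."""
    while process.poll() is None:
        if get_percent() - initial_percent > kill_threshold:
            # Runaway detected: use the caller's kill hook if given
            (kill_callback or process.kill)()
            break
        time.sleep(poll_interval)
    return process.wait()
```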

Example Usage

Basic Experiment

from openscope_experimental_launcher.base.experiment import BaseExperiment
import logging

# Set up logging
logging.basicConfig(level=logging.INFO)

# Create experiment instance
experiment = BaseExperiment()

# Run with parameter file
success = experiment.run("experiment_params.json")

if success:
    print("Experiment completed successfully!")
    print(f"Session UUID: {experiment.session_uuid}")
    print(f"Subject ID: {experiment.subject_id}")
    print(f"User ID: {experiment.user_id}")
    if experiment.start_time and experiment.stop_time:
        print(f"Duration: {experiment.stop_time - experiment.start_time}")
else:
    print("Experiment failed. Check logs for details.")
    errors = experiment.get_bonsai_errors()
    print(f"Bonsai errors: {errors}")

Manual Process Control

from openscope_experimental_launcher.base.experiment import BaseExperiment

experiment = BaseExperiment()

try:
    # Load parameters manually
    experiment.load_parameters("params.json")

    # Start Bonsai process
    experiment.start_bonsai()

    # Monitor progress (this blocks until completion)
    print(f"Experiment running with PID: {experiment.bonsai_process.pid}")

except Exception as e:
    print(f"Error during experiment: {e}")
finally:
    # Ensure cleanup
    experiment.stop()

Parameter Access

experiment = BaseExperiment()
experiment.load_parameters("params.json")

# Access loaded parameters
print(f"Subject ID: {experiment.subject_id}")
print(f"User ID: {experiment.user_id}")
print(f"Repository URL: {experiment.params['repository_url']}")
print(f"Bonsai path: {experiment.params['bonsai_path']}")

# Check parameter validation
print(f"Parameter checksum: {experiment.params_checksum}")

Session Metadata

experiment = BaseExperiment()
success = experiment.run("params.json")

if success:
    # Access session information
    session_info = {
        'uuid': experiment.session_uuid,
        'subject_id': experiment.subject_id,
        'user_id': experiment.user_id,
        'start_time': experiment.start_time.isoformat(),
        'end_time': experiment.stop_time.isoformat(),
        'duration_seconds': (experiment.stop_time - experiment.start_time).total_seconds(),
        'output_directory': experiment.session_directory,
        'parameter_checksum': experiment.params_checksum,
        'workflow_checksum': experiment.script_checksum
    }

    print(f"Session metadata: {session_info}")

Error Handling

from openscope_experimental_launcher.base.experiment import BaseExperiment
import logging

def robust_experiment_runner(params_file):
    """Run experiment with comprehensive error handling."""

    experiment = BaseExperiment()

    try:
        # Validate parameters first
        experiment.load_parameters(params_file)

        # Check required fields
        required_fields = ['repository_url', 'bonsai_path', 'subject_id', 'user_id']
        missing_fields = [field for field in required_fields
                        if not experiment.params.get(field)]

        if missing_fields:
            raise ValueError(f"Missing required parameters: {missing_fields}")

        # Run experiment
        success = experiment.run(params_file)

        if not success:
            # Get detailed error information
            bonsai_errors = experiment.get_bonsai_errors()
            logging.error(f"Experiment failed. Bonsai errors: {bonsai_errors}")

            # Check process return code
            if experiment.bonsai_process:
                return_code = experiment.bonsai_process.returncode
                logging.error(f"Bonsai exit code: {return_code}")

            return False

        return True

    except FileNotFoundError as e:
        logging.error(f"Parameter file not found: {e}")
        return False
    except ValueError as e:
        logging.error(f"Parameter validation error: {e}")
        return False
    except Exception as e:
        logging.error(f"Unexpected error: {e}")
        return False
    finally:
        # Ensure cleanup
        experiment.stop()

Constants and Enums

Platform Information

The base experiment automatically detects system information:

experiment = BaseExperiment()
platform_info = experiment.platform_info

# Returns dictionary with:
# {
#     'python': '3.11.0',
#     'os': ('Windows', '10', '10.0.19041'),
#     'hardware': ('Intel64 Family 6 Model 142 Stepping 10, GenuineIntel', 'AMD64'),
#     'computer_name': 'DESKTOP-ABC123',
#     'rig_id': 'behavior_rig_1'
# }

Default Paths

# Default configuration locations
DEFAULT_CONFIG_PATH = "C:/ProgramData/AIBS_MPE/camstim/config/stim.cfg"
DEFAULT_OUTPUT_DIR = "data"
DEFAULT_REPO_PATH = "C:/BonsaiTemp"

Windows Integration

Process Management

The base experiment uses Windows-specific APIs for robust process control:

# Windows job objects are automatically created
experiment = BaseExperiment()

# Check if Windows modules are available
if experiment.hJob:
    print("Windows job object created for process management")
else:
    print("Limited process management (Windows modules not available)")

Memory Monitoring

# Memory usage is automatically monitored during experiments
experiment = BaseExperiment()
experiment.run("params.json")

# Access memory usage information
if hasattr(experiment, '_percent_used'):
    print(f"Initial memory usage: {experiment._percent_used}%")

Extending BaseExperiment

Creating Custom Launchers

from openscope_experimental_launcher.base.experiment import BaseExperiment
import json
import logging
import os

class CustomExperiment(BaseExperiment):
    """Custom experiment launcher with additional features."""

    def __init__(self):
        super().__init__()
        self.custom_output_path = None
        logging.info("Custom experiment initialized")

    def load_parameters(self, param_file):
        """Override to add custom parameter validation."""
        super().load_parameters(param_file)

        # Add custom parameter processing
        if 'custom_setting' in self.params:
            self._validate_custom_setting(self.params['custom_setting'])

    def post_experiment_processing(self) -> bool:
        """Override to add custom post-processing."""
        success = super().post_experiment_processing()

        if success:
            # Add custom processing
            self._generate_custom_outputs()

        return success

    def _validate_custom_setting(self, setting):
        """Custom parameter validation."""
        if not isinstance(setting, str):
            raise ValueError("custom_setting must be a string")

    def _generate_custom_outputs(self):
        """Generate custom output files."""
        output_dir = self.session_directory
        self.custom_output_path = os.path.join(
            output_dir,
            f"{self.session_uuid}_custom.json"
        )

        custom_data = {
            'session_uuid': self.session_uuid,
            'custom_metadata': self.params.get('custom_setting', 'default')
        }

        with open(self.custom_output_path, 'w') as f:
            json.dump(custom_data, f, indent=2)

        logging.info(f"Custom output saved: {self.custom_output_path}")

Method Reference

Core Methods

BaseExperiment.run(param_file: str | None = None) → bool

Run the complete experiment pipeline.

Parameters:

param_file – Path to JSON parameter file

Returns:

True if successful, False otherwise

This method orchestrates the entire experiment execution:

  1. Load and validate parameters

  2. Set up Git repository

  3. Start Bonsai process

  4. Monitor execution

  5. Perform post-processing

  6. Clean up resources
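The six steps above can be sketched as a guarded pipeline in which cleanup always runs. The stage callables and the early-exit-on-False convention below are illustrative, not the launcher's internal structure:

```python
def run_experiment(load, setup_repo, start, monitor, post_process, cleanup):
    """Sketch of the run() orchestration: execute each stage in order,
    stop at the first stage that reports failure, and always clean up."""
    try:
        for stage in (load, setup_repo, start, monitor, post_process):
            if stage() is False:   # a stage signals failure by returning False
                return False
        return True
    finally:
        cleanup()                  # mirrors step 6: always release resources
```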

BaseExperiment.load_parameters(param_file: str | None)

Load and validate experiment parameters.

Parameters:

param_file – Path to JSON parameter file

BaseExperiment.start_bonsai()

Start the Bonsai workflow as a subprocess.

Creates the Bonsai process with proper argument passing and begins monitoring.

BaseExperiment.stop()

Stop the Bonsai process gracefully.

Attempts graceful termination first, then forces termination if necessary.

Utility Methods

BaseExperiment.determine_session_directory() → str | None

Determine or generate output directory path using AIND data schema standards.

Returns:

Full path to the output directory, or None if not determinable

BaseExperiment.create_bonsai_arguments() → List[str]

Create command-line arguments for Bonsai.

Returns:

List of --property arguments for Bonsai

BaseExperiment.get_bonsai_errors() → str

Return any errors reported by Bonsai.

Returns:

Concatenated error messages from stderr