rig
- pydantic model aind_behavior_services.rig._base.AindBehaviorRigModel
  Bases: SchemaVersionedModel
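  Concrete rig schemas are defined by subclassing this model and declaring devices as fields. A minimal sketch, assuming the class is re-exported from aind_behavior_services.rig and that SchemaVersionedModel contributes a version field; the face_camera field is hypothetical:

  ```python
  from typing import Literal

  from aind_behavior_services.rig import AindBehaviorRigModel  # assumed re-export
  from aind_behavior_services.rig.cameras import SpinnakerCamera


  class MyRig(AindBehaviorRigModel):
      # Assumed schema-version field inherited from SchemaVersionedModel.
      version: Literal["0.1.0"] = "0.1.0"
      # Hypothetical device field for this particular rig.
      face_camera: SpinnakerCamera
  ```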
- pydantic model aind_behavior_services.rig._harp_gen.HarpCameraController
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpCameraControllerGen2
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpClockSynchronizer
  Bases: _HarpDeviceBase
  - field connected_clock_outputs: List[ConnectedClockOutput] = []
    Connected clock outputs.
    Validated by: validate_connected_clock_outputs
  - validator validate_connected_clock_outputs » connected_clock_outputs
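  A minimal sketch of configuring the synchronizer's clock outputs. The port_name argument (presumed to come from _HarpDeviceBase) and the ConnectedClockOutput field names are assumptions; neither is documented in this reference:

  ```python
  from aind_behavior_services.rig._harp_gen import (
      ConnectedClockOutput,
      HarpClockSynchronizer,
  )

  clock_sync = HarpClockSynchronizer(
      port_name="COM4",  # assumed base-class field, not shown in this reference
      connected_clock_outputs=[
          # Assumed field names, for illustration only.
          ConnectedClockOutput(target_device="Behavior", output_channel=1),
      ],
  )
  ```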
- pydantic model aind_behavior_services.rig._harp_gen.HarpDeviceGeneric
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpDriver12Volts
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpEnvironmentSensor
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpIblBehaviorControl
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpInputExpander
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpLedController
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpLicketySplit
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpMultiPwmGenerator
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpNeurophotometricsFP3002
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpOlfactometer
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpOutputExpander
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpPyControlAdapter
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpSimpleAnalogGenerator
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpSniffDetector
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpStepperDriver
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpSynchronizer
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpTimestampGeneratorGen1
  Bases: _HarpDeviceBase
  - field connected_clock_outputs: List[ConnectedClockOutput] = []
    Connected clock outputs.
    Validated by: validate_connected_clock_outputs
  - validator validate_connected_clock_outputs » connected_clock_outputs
- pydantic model aind_behavior_services.rig._harp_gen.HarpTimestampGeneratorGen3
  Bases: _HarpDeviceBase
  - field connected_clock_outputs: List[ConnectedClockOutput] = []
    Connected clock outputs.
    Validated by: validate_connected_clock_outputs
  - validator validate_connected_clock_outputs » connected_clock_outputs
- pydantic model aind_behavior_services.rig._harp_gen.HarpVestibularH1
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpVestibularH2
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpWearBaseStationGen2
  Bases: _HarpDeviceBase
- pydantic model aind_behavior_services.rig._harp_gen.HarpWhiteRabbit
  Bases: _HarpDeviceBase
  - field connected_clock_outputs: List[ConnectedClockOutput] = []
    Connected clock outputs.
    Validated by: validate_connected_clock_outputs
  - validator validate_connected_clock_outputs » connected_clock_outputs
- pydantic model aind_behavior_services.rig.cameras.CameraController
  Bases: Device, Generic[TCamera]
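  Because CameraController is generic over TCamera, it can be parametrized with a concrete camera model. A minimal sketch; the cameras mapping is an assumption, since this reference does not list the controller's fields, and SpinnakerCamera() assumes no required fields are inherited from Device:

  ```python
  from aind_behavior_services.rig.cameras import CameraController, SpinnakerCamera

  # Parametrizing the generic model pins the camera type that pydantic validates.
  controller = CameraController[SpinnakerCamera](
      cameras={"face": SpinnakerCamera()},  # assumed field name
  )
  ```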
- aind_behavior_services.rig.cameras.FFMPEG_INPUT = '-colorspace bt709 -color_primaries bt709 -color_range full -color_trc linear'
  Default FFmpeg input arguments.
- aind_behavior_services.rig.cameras.FFMPEG_OUTPUT_16BIT = '-vf "scale=out_color_matrix=bt709:out_range=full,format=rgb48le,scale=out_range=full" -c:v hevc_nvenc -pix_fmt p010le -color_range full -colorspace bt709 -color_trc linear -tune hq -preset p4 -rc vbr -cq 12 -b:v 0M -metadata author="Allen Institute for Neural Dynamics" -maxrate 700M -bufsize 350M'
  Default FFmpeg output arguments for 16-bit video encoding.
- aind_behavior_services.rig.cameras.FFMPEG_OUTPUT_8BIT = '-vf "scale=out_color_matrix=bt709:out_range=full,format=bgr24,scale=out_range=full" -c:v h264_nvenc -pix_fmt yuv420p -color_range full -colorspace bt709 -color_trc linear -tune hq -preset p4 -rc vbr -cq 12 -b:v 0M -metadata author="Allen Institute for Neural Dynamics" -maxrate 700M -bufsize 350M'
  Default FFmpeg output arguments for 8-bit video encoding.
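  These module-level constants hold the default FFmpeg argument strings. Per the field defaults documented for VideoWriterFfmpeg below, a default writer carries FFMPEG_INPUT and the 8-bit output string:

  ```python
  from aind_behavior_services.rig.cameras import (
      FFMPEG_INPUT,
      FFMPEG_OUTPUT_8BIT,
      VideoWriterFfmpeg,
  )

  # A default VideoWriterFfmpeg uses the 8-bit pipeline (per the
  # field defaults documented below).
  writer = VideoWriterFfmpeg()
  assert writer.input_arguments == FFMPEG_INPUT
  assert writer.output_arguments == FFMPEG_OUTPUT_8BIT
  ```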
- pydantic model aind_behavior_services.rig.cameras.SpinnakerCamera
  Bases: Device
  - field adc_bit_depth: SpinnakerCameraAdcBitDepth | None = SpinnakerCameraAdcBitDepth.ADC8BIT
    ADC bit depth. If None, the camera default is left unchanged.
  - field color_processing: Literal['Default', 'NoColorProcessing'] = 'Default'
    Color processing.
  - field gamma: float | None = None
    Gamma. If None, gamma correction is disabled.
    Constraints: ge = 0
  - field pixel_format: SpinnakerCameraPixelFormat | None = SpinnakerCameraPixelFormat.MONO8
    Pixel format. If None, the camera default is left unchanged.
  - field region_of_interest: Rect = Rect(x=0, y=0, width=0, height=0)
    Region of interest.
    Validated by: validate_roi
  - field video_writer: VideoWriter | None = None
    Video writer. If not provided, no video will be saved.
  - validator validate_roi » region_of_interest
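  A minimal sketch of a camera configuration using the fields documented above. The Rect import path is an assumption, and any required fields inherited from Device are omitted:

  ```python
  from aind_behavior_services.rig import Rect  # assumed import path
  from aind_behavior_services.rig.cameras import (
      SpinnakerCamera,
      SpinnakerCameraAdcBitDepth,
      SpinnakerCameraPixelFormat,
      VideoWriterFfmpeg,
  )

  camera = SpinnakerCamera(
      adc_bit_depth=SpinnakerCameraAdcBitDepth.ADC8BIT,
      pixel_format=SpinnakerCameraPixelFormat.MONO8,
      gamma=None,  # None disables gamma correction (per the field docs)
      region_of_interest=Rect(x=0, y=0, width=640, height=480),
      video_writer=VideoWriterFfmpeg(),  # leaving this None skips video saving
  )
  ```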
- class aind_behavior_services.rig.cameras.SpinnakerCameraAdcBitDepth(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
  Bases: IntEnum
- class aind_behavior_services.rig.cameras.SpinnakerCameraPixelFormat(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
  Bases: IntEnum
- pydantic model aind_behavior_services.rig.cameras.VideoWriterFfmpeg
  Bases: BaseModel
  - field input_arguments: str = '-colorspace bt709 -color_primaries bt709 -color_range full -color_trc linear'
    Input arguments.
  - field output_arguments: str = '-vf "scale=out_color_matrix=bt709:out_range=full,format=bgr24,scale=out_range=full" -c:v h264_nvenc -pix_fmt yuv420p -color_range full -colorspace bt709 -color_trc linear -tune hq -preset p4 -rc vbr -cq 12 -b:v 0M -metadata author="Allen Institute for Neural Dynamics" -maxrate 700M -bufsize 350M'
    Output arguments.
- class aind_behavior_services.rig.cameras.VideoWriterFfmpegFactory(bit_depth: Literal[8, 16] = 8, video_writer_ffmpeg_kwargs: Dict[str, Any] = None)
  Bases: object
  - construct_video_writer_ffmpeg() → VideoWriterFfmpeg
  - update_video_writer_ffmpeg_kwargs(video_writer: VideoWriterFfmpeg)
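  A minimal sketch of building a writer through the factory, e.g. to pick the 16-bit encoder defaults instead of assembling the argument strings by hand:

  ```python
  from aind_behavior_services.rig.cameras import VideoWriterFfmpegFactory

  factory = VideoWriterFfmpegFactory(bit_depth=16)
  writer = factory.construct_video_writer_ffmpeg()  # returns a VideoWriterFfmpeg
  ```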
- pydantic model aind_behavior_services.rig.cameras.VideoWriterOpenCv
  Bases: BaseModel
- pydantic model aind_behavior_services.rig.harp.HarpClockGenerator
  Bases: HarpTimestampGeneratorGen3
- pydantic model aind_behavior_services.rig.harp.HarpLickometer
  Bases: HarpLicketySplit
- pydantic model aind_behavior_services.rig.visual_stimulation.DisplayCalibration
  Bases: BaseModel
  - field extrinsics: DisplayExtrinsics = DisplayExtrinsics(rotation=Vector3(x=0.0, y=0.0, z=0.0), translation=Vector3(x=0.0, y=1.309016, z=-13.27))
    Extrinsics.
  - field intrinsics: DisplayIntrinsics = DisplayIntrinsics(frame_width=1920, frame_height=1080, display_width=20, display_height=15)
    Intrinsics.
- pydantic model aind_behavior_services.rig.visual_stimulation.DisplayExtrinsics
  Bases: BaseModel
- pydantic model aind_behavior_services.rig.visual_stimulation.DisplayIntrinsics
  Bases: BaseModel
- pydantic model aind_behavior_services.rig.visual_stimulation.DisplaysCalibration
  Bases: BaseModel
  - field center: DisplayCalibration = DisplayCalibration(intrinsics=DisplayIntrinsics(frame_width=1920, frame_height=1080, display_width=20, display_height=15), extrinsics=DisplayExtrinsics(rotation=Vector3(x=0.0, y=0.0, z=0.0), translation=Vector3(x=0.0, y=1.309016, z=-13.27)))
    Center display calibration.
  - field left: DisplayCalibration = DisplayCalibration(intrinsics=DisplayIntrinsics(frame_width=1920, frame_height=1080, display_width=20, display_height=15), extrinsics=DisplayExtrinsics(rotation=Vector3(x=0.0, y=1.0472, z=0.0), translation=Vector3(x=-16.6917756, y=1.309016, z=-3.575264)))
    Left display calibration.
  - field right: DisplayCalibration = DisplayCalibration(intrinsics=DisplayIntrinsics(frame_width=1920, frame_height=1080, display_width=20, display_height=15), extrinsics=DisplayExtrinsics(rotation=Vector3(x=0.0, y=-1.0472, z=0.0), translation=Vector3(x=16.6917756, y=1.309016, z=-3.575264)))
    Right display calibration.
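  A minimal sketch of overriding a single display while keeping the documented defaults for the other two. The Vector3 import path is an assumption, and the new translation is hypothetical:

  ```python
  from aind_behavior_services.rig import Vector3  # assumed import path
  from aind_behavior_services.rig.visual_stimulation import (
      DisplayCalibration,
      DisplayExtrinsics,
      DisplayIntrinsics,
      DisplaysCalibration,
  )

  calibration = DisplaysCalibration(
      center=DisplayCalibration(
          intrinsics=DisplayIntrinsics(
              frame_width=1920, frame_height=1080, display_width=20, display_height=15
          ),
          extrinsics=DisplayExtrinsics(
              rotation=Vector3(x=0.0, y=0.0, z=0.0),
              # Hypothetical translation, closer than the documented default z=-13.27.
              translation=Vector3(x=0.0, y=1.309016, z=-10.0),
          ),
      ),
      # left and right keep the documented defaults.
  )
  ```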
- pydantic model aind_behavior_services.rig.visual_stimulation.Screen
  Bases: Device
  - field calibration: DisplaysCalibration = DisplaysCalibration(left=DisplayCalibration(intrinsics=DisplayIntrinsics(frame_width=1920, frame_height=1080, display_width=20, display_height=15), extrinsics=DisplayExtrinsics(rotation=Vector3(x=0.0, y=1.0472, z=0.0), translation=Vector3(x=-16.6917756, y=1.309016, z=-3.575264))), center=DisplayCalibration(intrinsics=DisplayIntrinsics(frame_width=1920, frame_height=1080, display_width=20, display_height=15), extrinsics=DisplayExtrinsics(rotation=Vector3(x=0.0, y=0.0, z=0.0), translation=Vector3(x=0.0, y=1.309016, z=-13.27))), right=DisplayCalibration(intrinsics=DisplayIntrinsics(frame_width=1920, frame_height=1080, display_width=20, display_height=15), extrinsics=DisplayExtrinsics(rotation=Vector3(x=0.0, y=-1.0472, z=0.0), translation=Vector3(x=16.6917756, y=1.309016, z=-3.575264))))
    Screen calibration.
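  A minimal sketch of a Screen device carrying a custom calibration; any additional required fields inherited from Device are not documented here and are omitted:

  ```python
  from aind_behavior_services.rig.visual_stimulation import Screen

  # Reuses `calibration` from the DisplaysCalibration sketch above.
  screen = Screen(calibration=calibration)
  ```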