rig
- pydantic model aind_behavior_services.rig.AindBehaviorRigModel
  Bases: SchemaVersionedModel
- pydantic model aind_behavior_services.rig.CameraController
  Bases: Device, Generic[TCamera]
- pydantic model aind_behavior_services.rig.DisplayCalibration
  Bases: BaseModel
  - field extrinsics: DisplayExtrinsics = DisplayExtrinsics(rotation=Vector3(x=0.0, y=0.0, z=0.0), translation=Vector3(x=0.0, y=1.309016, z=-13.27))
    Extrinsics
  - field intrinsics: DisplayIntrinsics = DisplayIntrinsics(frame_width=1920, frame_height=1080, display_width=20, display_height=15)
    Intrinsics
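The default intrinsics pair a pixel frame size with a physical display size, so pixel density can be derived directly. A minimal sketch using the default values above (the physical unit is whatever the rig convention uses; centimeters is an assumption here):

```python
# Derive pixel density from the default DisplayIntrinsics values above:
# a 1920x1080 frame rendered on a 20x15 (assumed cm) display surface.
frame_width, frame_height = 1920, 1080
display_width, display_height = 20, 15

pixels_per_unit_x = frame_width / display_width    # 96.0 px per unit
pixels_per_unit_y = frame_height / display_height  # 72.0 px per unit
print(pixels_per_unit_x, pixels_per_unit_y)
```

Note the two densities differ, so the default calibration implies non-square mapping unless the physical dimensions are adjusted to the panel's true aspect ratio.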
- pydantic model aind_behavior_services.rig.DisplaysCalibration
  Bases: BaseModel
  - field center: DisplayCalibration = DisplayCalibration(intrinsics=DisplayIntrinsics(frame_width=1920, frame_height=1080, display_width=20, display_height=15), extrinsics=DisplayExtrinsics(rotation=Vector3(x=0.0, y=0.0, z=0.0), translation=Vector3(x=0.0, y=1.309016, z=-13.27)))
    Center display calibration
  - field left: DisplayCalibration = DisplayCalibration(intrinsics=DisplayIntrinsics(frame_width=1920, frame_height=1080, display_width=20, display_height=15), extrinsics=DisplayExtrinsics(rotation=Vector3(x=0.0, y=1.0472, z=0.0), translation=Vector3(x=-16.6917756, y=1.309016, z=-3.575264)))
    Left display calibration
  - field right: DisplayCalibration = DisplayCalibration(intrinsics=DisplayIntrinsics(frame_width=1920, frame_height=1080, display_width=20, display_height=15), extrinsics=DisplayExtrinsics(rotation=Vector3(x=0.0, y=-1.0472, z=0.0), translation=Vector3(x=16.6917756, y=1.309016, z=-3.575264)))
    Right display calibration
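The side-display defaults above use `rotation.y = ±1.0472` radians; converting to degrees shows the left and right displays are angled at ±60° about the vertical (y) axis relative to the center display:

```python
import math

# Default extrinsics from DisplaysCalibration above: the left display is
# rotated +1.0472 rad and the right display -1.0472 rad around the y axis.
left_rotation_y = 1.0472    # radians
right_rotation_y = -1.0472  # radians

print(round(math.degrees(left_rotation_y), 2))   # → 60.0
print(round(math.degrees(right_rotation_y), 2))  # → -60.0
```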
- aind_behavior_services.rig.FFMPEG_INPUT = '-colorspace bt709 -color_primaries bt709 -color_range full -color_trc linear'
  Default input arguments
- aind_behavior_services.rig.FFMPEG_OUTPUT_16BIT = '-vf "scale=out_color_matrix=bt709:out_range=full,format=rgb48le,scale=out_range=full" -c:v hevc_nvenc -pix_fmt p010le -color_range full -colorspace bt709 -color_trc linear -tune hq -preset p4 -rc vbr -cq 12 -b:v 0M -metadata author="Allen Institute for Neural Dynamics" -maxrate 700M -bufsize 350M'
  Default output arguments for 16-bit video encoding
- aind_behavior_services.rig.FFMPEG_OUTPUT_8BIT = '-vf "scale=out_color_matrix=bt709:out_range=full,format=bgr24,scale=out_range=full" -c:v h264_nvenc -pix_fmt yuv420p -color_range full -colorspace bt709 -color_trc linear -tune hq -preset p4 -rc vbr -cq 12 -b:v 0M -metadata author="Allen Institute for Neural Dynamics" -maxrate 700M -bufsize 350M'
  Default output arguments for 8-bit video encoding
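These constants are argument fragments, not complete commands. A sketch of how they could be assembled into a full ffmpeg invocation; the `-i pipe:` source and the output path are illustrative assumptions, and the output fragment is truncated here for brevity:

```python
import shlex

# Fragment constants mirroring FFMPEG_INPUT / FFMPEG_OUTPUT_8BIT above
# (output arguments truncated for brevity; see the full constant in the docs).
FFMPEG_INPUT = "-colorspace bt709 -color_primaries bt709 -color_range full -color_trc linear"
FFMPEG_OUTPUT_8BIT = (
    '-vf "scale=out_color_matrix=bt709:out_range=full,format=bgr24,'
    'scale=out_range=full" -c:v h264_nvenc -pix_fmt yuv420p'
)

# Input arguments go before -i; output arguments go before the output path.
command = f"ffmpeg {FFMPEG_INPUT} -i pipe: {FFMPEG_OUTPUT_8BIT} video.mp4"

# shlex.split keeps the quoted -vf filter graph as a single token.
tokens = shlex.split(command)
print(tokens[0], tokens[-1])  # → ffmpeg video.mp4
```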
- pydantic model aind_behavior_services.rig.HarpAnalogInput
  Bases: HarpDeviceGeneric
- pydantic model aind_behavior_services.rig.HarpBehavior
  Bases: HarpDeviceGeneric
- pydantic model aind_behavior_services.rig.HarpClockGenerator
  Bases: HarpDeviceGeneric
  - field connected_clock_outputs: List[ConnectedClockOutput] = []
    Connected clock outputs
    Validated by: validate_connected_clock_outputs
  - validator validate_connected_clock_outputs » connected_clock_outputs
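The reference does not show what `validate_connected_clock_outputs` checks. One plausible invariant for a list of clock outputs is that no physical output channel is claimed twice; the sketch below is an illustrative assumption, with `ConnectedClockOutput` stood in by a plain dataclass and a hypothetical `output_channel` attribute:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ConnectedClockOutput:
    # Hypothetical attribute; the real model's fields are not shown here.
    output_channel: int


def validate_connected_clock_outputs(
    outputs: List[ConnectedClockOutput],
) -> List[ConnectedClockOutput]:
    """Reject lists that reuse a physical clock output channel (assumed rule)."""
    channels = [o.output_channel for o in outputs]
    if len(channels) != len(set(channels)):
        raise ValueError("Duplicate clock output channels are not allowed")
    return outputs


# Distinct channels pass through unchanged; duplicates raise ValueError.
validate_connected_clock_outputs([ConnectedClockOutput(0), ConnectedClockOutput(1)])
```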
- pydantic model aind_behavior_services.rig.HarpClockSynchronizer
  Bases: HarpDeviceGeneric
  - field connected_clock_outputs: List[ConnectedClockOutput] = []
    Connected clock outputs
    Validated by: validate_connected_clock_outputs
  - field device_type: Literal[HarpDeviceType.CLOCKSYNCHRONIZER] = HarpDeviceType.CLOCKSYNCHRONIZER
  - validator validate_connected_clock_outputs » connected_clock_outputs
- pydantic model aind_behavior_services.rig.HarpCuttlefish
  Bases: HarpDeviceGeneric
- class aind_behavior_services.rig.HarpDeviceType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
  Bases: str, Enum
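`HarpDeviceType` mixes `str` into `Enum`, so members compare equal to their string values and serialize cleanly in pydantic `Literal` fields such as `device_type` above. A minimal sketch of the pattern (the member value below is an assumption, not taken from the module):

```python
from enum import Enum


# Sketch of the str+Enum mixin pattern used by HarpDeviceType. The string
# value "ClockSynchronizer" is an illustrative assumption.
class HarpDeviceTypeSketch(str, Enum):
    CLOCKSYNCHRONIZER = "ClockSynchronizer"


# Members are real str instances, so plain string comparison works.
print(HarpDeviceTypeSketch.CLOCKSYNCHRONIZER == "ClockSynchronizer")  # → True
```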
- pydantic model aind_behavior_services.rig.HarpEnvironmentSensor
  Bases: HarpDeviceGeneric
- pydantic model aind_behavior_services.rig.HarpLickometer
  Bases: HarpDeviceGeneric
- pydantic model aind_behavior_services.rig.HarpLoadCells
  Bases: HarpDeviceGeneric
- pydantic model aind_behavior_services.rig.HarpOlfactometer
  Bases: HarpDeviceGeneric
- pydantic model aind_behavior_services.rig.HarpSniffDetector
  Bases: HarpDeviceGeneric
- pydantic model aind_behavior_services.rig.HarpSoundCard
  Bases: HarpDeviceGeneric
- pydantic model aind_behavior_services.rig.HarpStepperDriver
  Bases: HarpDeviceGeneric
- pydantic model aind_behavior_services.rig.HarpTreadmill
  Bases: HarpDeviceGeneric
- pydantic model aind_behavior_services.rig.HarpWhiteRabbit
  Bases: HarpDeviceGeneric
  - field connected_clock_outputs: List[ConnectedClockOutput] = []
    Connected clock outputs
    Validated by: validate_connected_clock_outputs
  - validator validate_connected_clock_outputs » connected_clock_outputs
- pydantic model aind_behavior_services.rig.Screen
  Bases: Device
  - field calibration: DisplaysCalibration = DisplaysCalibration(left=DisplayCalibration(intrinsics=DisplayIntrinsics(frame_width=1920, frame_height=1080, display_width=20, display_height=15), extrinsics=DisplayExtrinsics(rotation=Vector3(x=0.0, y=1.0472, z=0.0), translation=Vector3(x=-16.6917756, y=1.309016, z=-3.575264))), center=DisplayCalibration(intrinsics=DisplayIntrinsics(frame_width=1920, frame_height=1080, display_width=20, display_height=15), extrinsics=DisplayExtrinsics(rotation=Vector3(x=0.0, y=0.0, z=0.0), translation=Vector3(x=0.0, y=1.309016, z=-13.27))), right=DisplayCalibration(intrinsics=DisplayIntrinsics(frame_width=1920, frame_height=1080, display_width=20, display_height=15), extrinsics=DisplayExtrinsics(rotation=Vector3(x=0.0, y=-1.0472, z=0.0), translation=Vector3(x=16.6917756, y=1.309016, z=-3.575264))))
    Screen calibration
- pydantic model aind_behavior_services.rig.SpinnakerCamera
  Bases: Device
  - field adc_bit_depth: SpinnakerCameraAdcBitDepth | None = SpinnakerCameraAdcBitDepth.ADC8BIT
    ADC bit depth. If None, the camera default is left unchanged.
  - field color_processing: Literal['Default', 'NoColorProcessing'] = 'Default'
    Color processing
  - field gamma: float | None = None
    Gamma. If None, gamma correction is disabled.
    Constraints: ge = 0
  - field pixel_format: SpinnakerCameraPixelFormat | None = SpinnakerCameraPixelFormat.MONO8
    Pixel format. If None, the camera default is left unchanged.
  - field region_of_interest: Rect = Rect(x=0, y=0, width=0, height=0)
    Region of interest
    Validated by: validate_roi
  - field video_writer: VideoWriter | None = None
    Video writer. If not provided, no video will be saved.
  - validator validate_roi » region_of_interest
- class aind_behavior_services.rig.SpinnakerCameraAdcBitDepth(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
  Bases: IntEnum
- class aind_behavior_services.rig.SpinnakerCameraPixelFormat(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
  Bases: IntEnum
- pydantic model aind_behavior_services.rig.VideoWriterFfmpeg
  Bases: BaseModel
  - field input_arguments: str = '-colorspace bt709 -color_primaries bt709 -color_range full -color_trc linear'
    Input arguments
  - field output_arguments: str = '-vf "scale=out_color_matrix=bt709:out_range=full,format=bgr24,scale=out_range=full" -c:v h264_nvenc -pix_fmt yuv420p -color_range full -colorspace bt709 -color_trc linear -tune hq -preset p4 -rc vbr -cq 12 -b:v 0M -metadata author="Allen Institute for Neural Dynamics" -maxrate 700M -bufsize 350M'
    Output arguments
- class aind_behavior_services.rig.VideoWriterFfmpegFactory(bit_depth: Literal[8, 16] = 8, video_writer_ffmpeg_kwargs: Dict[str, Any] = None)
  Bases: object
  - construct_video_writer_ffmpeg() → VideoWriterFfmpeg
  - update_video_writer_ffmpeg_kwargs(video_writer: VideoWriterFfmpeg)
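The factory's signature suggests it selects the 8-bit or 16-bit default output arguments from `bit_depth` and then applies user overrides. A minimal sketch under those assumptions; the constant strings are abbreviated stand-ins for `FFMPEG_OUTPUT_8BIT` / `FFMPEG_OUTPUT_16BIT` above, and the exact override semantics are not confirmed by this reference:

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional

# Abbreviated stand-ins for the module's FFMPEG_* constants.
FFMPEG_INPUT = "-colorspace bt709 -color_primaries bt709 -color_range full -color_trc linear"
OUTPUT_BY_BIT_DEPTH = {
    8: "-c:v h264_nvenc -pix_fmt yuv420p",
    16: "-c:v hevc_nvenc -pix_fmt p010le",
}


@dataclass
class VideoWriterFfmpeg:
    # Mirrors the pydantic model's two fields shown above.
    input_arguments: str = FFMPEG_INPUT
    output_arguments: str = OUTPUT_BY_BIT_DEPTH[8]


class VideoWriterFfmpegFactory:
    def __init__(self, bit_depth: int = 8,
                 video_writer_ffmpeg_kwargs: Optional[Dict[str, Any]] = None):
        # Assumed behavior: explicit kwargs win over the bit-depth default.
        self._kwargs = dict(video_writer_ffmpeg_kwargs or {})
        self._kwargs.setdefault("output_arguments", OUTPUT_BY_BIT_DEPTH[bit_depth])

    def construct_video_writer_ffmpeg(self) -> VideoWriterFfmpeg:
        return VideoWriterFfmpeg(**self._kwargs)


writer = VideoWriterFfmpegFactory(bit_depth=16).construct_video_writer_ffmpeg()
print(writer.output_arguments)  # → -c:v hevc_nvenc -pix_fmt p010le
```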