api.task_logic

(Class diagram: AindBehaviorTelekinesisTaskLogic.svg)
class aind_behavior_telekinesis.task_logic.Action(*, reward_probability: Distribution = Scalar(value=1.0), reward_amount: Distribution = Scalar(value=1.0), reward_delay: Distribution = Scalar(value=1.0), action_duration: Distribution = Scalar(value=0.5), upper_action_threshold: Distribution = Scalar(value=20000.0), lower_action_threshold: Distribution = Scalar(value=0.0), is_operant: bool = True, time_to_collect: Distribution | None = None, continuous_feedback: ContinuousFeedback | None = None)[source]

Bases: BaseModel

Defines an abstract class for a harvest action

action_duration: Distribution[source]
continuous_feedback: ContinuousFeedback | None[source]
is_operant: bool[source]
lower_action_threshold: Distribution[source]
model_config = {}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

reward_amount: Distribution[source]
reward_delay: Distribution[source]
reward_probability: Distribution[source]
time_to_collect: Distribution | None[source]
upper_action_threshold: Distribution[source]
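
An `Action` serializes through pydantic into nested distribution payloads. The sketch below builds a hypothetical JSON form of an `Action` using the field names from the signature above and the `family`/`distribution_parameters` shape visible in the default reprs; treat the exact structure as an illustrative assumption, not the authoritative schema.

```python
import json

def scalar(value):
    """Scalar-distribution payload (assumed shape from the defaults above)."""
    return {
        "family": "Scalar",
        "distribution_parameters": {"family": "Scalar", "value": value},
        "truncation_parameters": None,
        "scaling_parameters": None,
    }

# Field names mirror the Action signature; values are the documented defaults.
action = {
    "reward_probability": scalar(1.0),
    "reward_amount": scalar(1.0),
    "reward_delay": scalar(1.0),
    "action_duration": scalar(0.5),
    "upper_action_threshold": scalar(20000.0),
    "lower_action_threshold": scalar(0.0),
    "is_operant": True,
    "time_to_collect": None,
    "continuous_feedback": None,
}

payload = json.dumps(action)
```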
class aind_behavior_telekinesis.task_logic.ActionLookUpTableFactory(*, path: str, offset: float = 0, scale: float = 1, action0_min: float, action0_max: float, action1_min: float, action1_max: float)[source]

Bases: BaseModel

Factory for loading and configuring a look-up table image used to map action inputs to output values

action0_max: float[source]
action0_min: float[source]
action1_max: float[source]
action1_min: float[source]
model_config = {}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

offset: float[source]
path: str[source]
scale: float[source]
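
A sketch of how the factory's `scale`/`offset` fields plausibly calibrate a raw LUT sample, and how `action0_min`/`action0_max` (likewise for `action1_*`) bound and normalize an action input. The transform order (scale, then offset) and the normalization are assumptions for illustration, not confirmed behavior.

```python
def clamp(x, lo, hi):
    """Restrict x to the closed interval [lo, hi]."""
    return max(lo, min(hi, x))

def apply_lut_calibration(raw_value, scale=1.0, offset=0.0):
    """Assumed calibration: scale the raw LUT sample, then add the offset."""
    return raw_value * scale + offset

def normalize_action(value, action_min, action_max):
    """Map an action input into [0, 1] over its configured [min, max] span."""
    return (clamp(value, action_min, action_max) - action_min) / (action_max - action_min)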
type aind_behavior_telekinesis.task_logic.ActionSource = Annotated[LoadCellActionSource | BehaviorAnalogInputActionSource, FieldInfo(annotation=NoneType, required=True, discriminator='action_source')][source]
class aind_behavior_telekinesis.task_logic.ActionSourceType(*values)[source]

Bases: str, Enum

Defines the source of the data to use in the action

BEHAVIOR_ANALOG_INPUT = 'BehaviorAnalogInput'[source]
LOADCELL = 'LoadCell'[source]
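
The `action_source` literal on each source model acts as the discriminator for the `ActionSource` union above. A minimal stdlib sketch of that tagged-union dispatch, re-declaring the enum locally (the real models live in this module; the `describe` helper is hypothetical):

```python
from enum import Enum

class ActionSourceType(str, Enum):
    BEHAVIOR_ANALOG_INPUT = "BehaviorAnalogInput"
    LOADCELL = "LoadCell"

def describe(payload: dict) -> str:
    """Dispatch on the 'action_source' discriminator field of a payload."""
    source = ActionSourceType(payload["action_source"])
    if source is ActionSourceType.LOADCELL:
        return f"load cell channel {payload.get('channel', 0)}"
    return f"analog input channel {payload.get('channel', 0)}"
```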
class aind_behavior_telekinesis.task_logic.AindBehaviorTelekinesisTaskLogic(*, name: Literal['AindTelekinesis'] = 'AindTelekinesis', description: str = '', task_parameters: AindTelekinesisTaskParameters, version: Literal['0.5.0-rc1'] = '0.5.0-rc1', stage_name: str | None = None)[source]

Bases: Task

Task logic definition for the Telekinesis behavior task

model_config = {'extra': 'forbid', 'str_strip_whitespace': True, 'strict': True, 'validate_assignment': True, 'validate_default': True}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

name: Literal['AindTelekinesis'][source]
task_parameters: AindTelekinesisTaskParameters[source]
version: Literal['0.5.0-rc1'][source]
class aind_behavior_telekinesis.task_logic.AindTelekinesisTaskParameters(*, rng_seed: float | None = None, aind_behavior_services_pkg_version: Annotated[Literal['0.13.6'], _PydanticGeneralMetadata(pattern='^(0|[1-9]\\d*)\\.(0|[1-9]\\d*)\\.(0|[1-9]\\d*)(?:-((?:0|[1-9]\\d*|\\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\\.(?:0|[1-9]\\d*|\\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\\+([0-9a-zA-Z-]+(?:\\.[0-9a-zA-Z-]+)*))?$')] = '0.13.6', environment: Environment, operation_control: OperationControl, **extra_data: Any)[source]

Bases: TaskParameters

Task parameters for the Telekinesis task

environment: Environment[source]
model_config = {'extra': 'allow', 'str_strip_whitespace': True, 'strict': True, 'validate_assignment': True, 'validate_default': True}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

operation_control: OperationControl[source]
class aind_behavior_telekinesis.task_logic.AudioFeedback(*, continuous_feedback_mode: Literal[ContinuousFeedbackMode.AUDIO] = ContinuousFeedbackMode.AUDIO, converter_lut_input: Annotated[List[Annotated[float, FieldInfo(annotation=NoneType, required=True, metadata=[Ge(ge=0), Le(le=1)])]], MinLen(min_length=2)] = [0, 1], converter_lut_output: Annotated[List[float], MinLen(min_length=2)] = [0, 1])[source]

Bases: _ContinuousFeedbackBase

Continuous feedback delivered via audio

continuous_feedback_mode: Literal[ContinuousFeedbackMode.AUDIO][source]
model_config = {}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
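
The paired `converter_lut_input`/`converter_lut_output` lists suggest a piecewise-linear conversion from a normalized signal to a feedback value. The sketch below assumes linear interpolation between adjacent knots, with clamping outside the input range; the interpolation rule is an assumption.

```python
import bisect

def convert(x, lut_in=(0.0, 1.0), lut_out=(0.0, 1.0)):
    """Piecewise-linear map of x through (lut_in, lut_out) knot pairs."""
    if x <= lut_in[0]:
        return lut_out[0]
    if x >= lut_in[-1]:
        return lut_out[-1]
    i = bisect.bisect_right(lut_in, x) - 1  # segment containing x
    t = (x - lut_in[i]) / (lut_in[i + 1] - lut_in[i])
    return lut_out[i] + t * (lut_out[i + 1] - lut_out[i])
```

With the documented defaults (`[0, 1] -> [0, 1]`) the conversion is the identity on the unit interval.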

class aind_behavior_telekinesis.task_logic.BehaviorAnalogInputActionSource(*, action_source: Literal[ActionSourceType.BEHAVIOR_ANALOG_INPUT] = ActionSourceType.BEHAVIOR_ANALOG_INPUT, channel: Annotated[int, Ge(ge=0), Le(le=1)] = 0)[source]

Bases: _ActionSource

Action source read from a behavior board analog input channel

action_source: Literal[ActionSourceType.BEHAVIOR_ANALOG_INPUT][source]
channel: int[source]
model_config = {}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class aind_behavior_telekinesis.task_logic.Block(*, mode: Literal[BlockStatisticsMode.BLOCK] = BlockStatisticsMode.BLOCK, trials: List[Trial] = [], shuffle: bool = False, repeat_count: int | None = 0)[source]

Bases: BaseModel

A fixed list of trials to run in sequence

mode: Literal[BlockStatisticsMode.BLOCK][source]
model_config = {}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

repeat_count: int | None[source]
shuffle: bool[source]
trials: List[Trial][source]
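
A sketch of how a `Block`'s `shuffle` and `repeat_count` fields might expand into a flat trial sequence. Whether `repeat_count` counts extra passes (as assumed here) or total passes, and how `None` is handled, is not specified in this reference; the helper name is illustrative.

```python
import random

def expand_block(trials, shuffle=False, repeat_count=0, rng=None):
    """Run the trial list (1 + repeat_count) times, optionally shuffled per pass."""
    rng = rng or random.Random(0)
    sequence = []
    for _ in range(1 + (repeat_count or 0)):
        batch = list(trials)
        if shuffle:
            rng.shuffle(batch)
        sequence.extend(batch)
    return sequence
```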
class aind_behavior_telekinesis.task_logic.BlockGenerator(*, mode: Literal[BlockStatisticsMode.BLOCK_GENERATOR] = BlockStatisticsMode.BLOCK_GENERATOR, block_size: Distribution = UniformDistribution(min=50.0, max=60.0), trial_statistics: Trial)[source]

Bases: BaseModel

Generates blocks of trials by sampling from a trial statistics template

block_size: Distribution[source]
mode: Literal[BlockStatisticsMode.BLOCK_GENERATOR][source]
model_config = {}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

trial_statistics: Trial[source]
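
A `BlockGenerator` samples each block's length from `block_size` (Uniform(50, 60) by default) and stamps out trials from the `trial_statistics` template. A stdlib sketch of that sampling under those assumptions (names are illustrative):

```python
import random

def generate_block(trial_template, min_size=50.0, max_size=60.0, rng=None):
    """Sample a block size from Uniform(min_size, max_size) and copy the template."""
    rng = rng or random.Random(0)
    n = int(round(rng.uniform(min_size, max_size)))
    return [dict(trial_template) for _ in range(n)]  # independent trial copies
```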
type aind_behavior_telekinesis.task_logic.BlockStatistics = Annotated[Block | BlockGenerator, FieldInfo(annotation=NoneType, required=True, discriminator='mode')][source]
class aind_behavior_telekinesis.task_logic.BlockStatisticsMode(*values)[source]

Bases: str, Enum

Defines the mode of the environment

BLOCK = 'Block'[source]
BLOCK_GENERATOR = 'BlockGenerator'[source]
type aind_behavior_telekinesis.task_logic.ContinuousFeedback = Annotated[ManipulatorFeedback | AudioFeedback, FieldInfo(annotation=NoneType, required=True, discriminator='continuous_feedback_mode')][source]
class aind_behavior_telekinesis.task_logic.ContinuousFeedbackMode(*values)[source]

Bases: str, Enum

Defines the feedback mode

AUDIO = 'Audio'[source]
MANIPULATOR = 'Manipulator'[source]
NONE = 'None'[source]
class aind_behavior_telekinesis.task_logic.Environment(*, block_statistics: List[BlockStatistics], shuffle: bool = False, repeat_count: int | None = 0)[source]

Bases: BaseModel

Defines the structure of the behavioral environment as a sequence of blocks

block_statistics: List[BlockStatistics][source]
model_config = {}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

repeat_count: int | None[source]
shuffle: bool[source]
class aind_behavior_telekinesis.task_logic.LoadCellActionSource(*, action_source: Literal[ActionSourceType.LOADCELL] = ActionSourceType.LOADCELL, channel: Annotated[int, Ge(ge=0), Le(le=7)] = 0)[source]

Bases: _ActionSource

Action source read from a load cell channel

action_source: Literal[ActionSourceType.LOADCELL][source]
channel: Annotated[int, Ge(ge=0), Le(le=7)][source]
model_config = {}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class aind_behavior_telekinesis.task_logic.LutSampler2D(*, sampler_type: Literal['LUT'] = 'LUT', lut_reference: str)[source]

Bases: BaseModel

lut_reference: str[source]
model_config = {}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

sampler_type: Literal['LUT'][source]
class aind_behavior_telekinesis.task_logic.ManipulatorFeedback(*, continuous_feedback_mode: Literal[ContinuousFeedbackMode.MANIPULATOR] = ContinuousFeedbackMode.MANIPULATOR, converter_lut_input: Annotated[List[Annotated[float, FieldInfo(annotation=NoneType, required=True, metadata=[Ge(ge=0), Le(le=1)])]], MinLen(min_length=2)] = [0, 1], converter_lut_output: Annotated[List[float], MinLen(min_length=2)] = [0, 1])[source]

Bases: _ContinuousFeedbackBase

Continuous feedback delivered via manipulator position

continuous_feedback_mode: Literal[ContinuousFeedbackMode.MANIPULATOR][source]
model_config = {}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class aind_behavior_telekinesis.task_logic.OperationControl(*, action_luts: Dict[str, ActionLookUpTableFactory] = <factory>, spout: SpoutOperationControl = SpoutOperationControl(default_retraction_offset=-7, enabled=True))[source]

Bases: BaseModel

Top-level operational settings including LUT registry and spout control

action_luts: Dict[str, ActionLookUpTableFactory][source]
model_config = {}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

spout: SpoutOperationControl[source]
class aind_behavior_telekinesis.task_logic.QuiescencePeriod(*, duration: Distribution = Scalar(value=0.5), action_threshold: float = 0, has_cue: bool = False)[source]

Bases: BaseModel

Defines quiescence period settings

action_threshold: float[source]
duration: Distribution[source]
has_cue: bool[source]
model_config = {}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class aind_behavior_telekinesis.task_logic.ResponsePeriod(*, duration: Distribution = Scalar(value=0.5), has_cue: bool = True, action: Action = Action())[source]

Bases: BaseModel

Defines a response period

action: Action[source]
duration: Distribution[source]
has_cue: bool[source]
model_config = {}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

type aind_behavior_telekinesis.task_logic.Sampler = Annotated[LutSampler2D | Sampler1D | Sampler2D, FieldInfo(annotation=NoneType, required=True, discriminator='sampler_type')][source]
class aind_behavior_telekinesis.task_logic.Sampler1D(*, sampler_type: Literal['1D'] = '1D', min_from: float, max_from: float, min_to: float, max_to: float)[source]

Bases: BaseModel

max_from: float[source]
max_to: float[source]
min_from: float[source]
min_to: float[source]
model_config = {}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

sampler_type: Literal['1D'][source]
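
The `min_from`/`max_from` and `min_to`/`max_to` fields suggest a linear remapping of an input range onto an output range. A sketch of that interpretation (assumed, not confirmed by this reference):

```python
def sample_1d(x, min_from, max_from, min_to, max_to):
    """Linearly remap x from [min_from, max_from] onto [min_to, max_to]."""
    t = (x - min_from) / (max_from - min_from)  # position within source range
    return min_to + t * (max_to - min_to)
```

A `Sampler2D` would plausibly apply the same remapping independently per axis, using its `*_0` and `*_1` field pairs.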
class aind_behavior_telekinesis.task_logic.Sampler2D(*, sampler_type: Literal['2D'] = '2D', min_from_0: float, max_from_0: float, min_from_1: float, max_from_1: float, min_to_0: float, max_to_0: float, min_to_1: float, max_to_1: float)[source]

Bases: BaseModel

max_from_0: float[source]
max_from_1: float[source]
max_to_0: float[source]
max_to_1: float[source]
min_from_0: float[source]
min_from_1: float[source]
min_to_0: float[source]
min_to_1: float[source]
model_config = {}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

sampler_type: Literal['2D'][source]
class aind_behavior_telekinesis.task_logic.SpoutOperationControl(*, default_retraction_offset: float = -7, enabled: bool = True)[source]

Bases: BaseModel

Control settings for the reward spout

default_retraction_offset: float[source]
enabled: bool[source]
model_config = {}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class aind_behavior_telekinesis.task_logic.Trial(*, inter_trial_interval: Distribution = Scalar(value=0.5), quiescence_period: QuiescencePeriod | None = None, response_period: ResponsePeriod = ResponsePeriod(), action_source_0: ActionSource, action_source_1: ActionSource | None = None, sampler: Sampler)[source]

Bases: BaseModel

Defines a trial. Action values are accumulated and normalized per second, e.g. Voltage/s -> LUT units/s -> accumulated until a threshold is reached.

action_source_0: ActionSource[source]
action_source_1: ActionSource | None[source]
inter_trial_interval: Distribution[source]
model_config = {}[source]

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

quiescence_period: QuiescencePeriod | None[source]
response_period: ResponsePeriod[source]
sampler: Sampler[source]
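
The accumulation rule from the `Trial` docstring can be sketched as integrating a per-second rate until a threshold is crossed. The sample period and function name below are illustrative assumptions:

```python
def accumulate_until(rates_per_second, dt, upper_threshold):
    """Integrate per-second action rates in steps of dt seconds.

    Returns (time_of_crossing, total) when the accumulated value reaches
    upper_threshold, or (None, total) if it never does.
    """
    total = 0.0
    t = 0.0
    for rate in rates_per_second:
        total += rate * dt  # rate is normalized per second, so scale by dt
        t += dt
        if total >= upper_threshold:
            return t, total
    return None, total
```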
aind_behavior_telekinesis.task_logic.normal_distribution_value(mean: float, std: float) NormalDistribution[source]

Helper function to create a normal distribution with a given mean and standard deviation.

Parameters:
  • mean (float) – The mean value of the normal distribution.

  • std (float) – The standard deviation of the normal distribution.

Returns:

The normal distribution type.

Return type:

distributions.NormalDistribution

aind_behavior_telekinesis.task_logic.scalar_value(value: float) Scalar[source]

Helper function to create a scalar value distribution for a given value.

Parameters:
  • value (float) – The value of the scalar distribution.

Returns:

The scalar distribution type.

Return type:

distributions.Scalar

aind_behavior_telekinesis.task_logic.uniform_distribution_value(min: float, max: float) UniformDistribution[source]

Helper function to create a uniform distribution for a given range.

Parameters:
  • min (float) – The minimum value of the uniform distribution.

  • max (float) – The maximum value of the uniform distribution.

Returns:

The uniform distribution type.

Return type:

distributions.Uniform
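
The three helpers above return aind_behavior_services distribution models rather than sampling values. As a rough intuition for their parameterizations, stdlib analogues of what each configured distribution samples like (names and sampling behavior here are illustrative, not the helpers' actual return values):

```python
import random

def sample_scalar(value):
    """A scalar 'distribution' always yields its configured value."""
    return value

def sample_uniform(min_value, max_value, rng):
    """Uniform draw over [min_value, max_value]."""
    return rng.uniform(min_value, max_value)

def sample_normal(mean, std, rng):
    """Gaussian draw with the given mean and standard deviation."""
    return rng.gauss(mean, std)
```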