tbp.monty.frameworks.environments#

tbp.monty.frameworks.environments.embodied_data#

class EnvironmentDataLoader(dataset: EnvironmentDataset, motor_system: MotorSystem, rng)[source]#

Bases: object

Wraps the environment dataset with an iterator.

The observations are based on the actions returned by the motor_system.

The first value returned by this iterator is the observation of the environment’s initial state; subsequent observations are returned after the action returned by the motor_system is applied.
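
Example

A minimal, self-contained sketch of the iteration contract described above; the classes below are stand-ins, not the real EnvironmentDataset or MotorSystem:

    class FakeDataset:
        def reset(self):
            return {"step": 0}  # observation of the initial state
        def __getitem__(self, action):
            return {"step": action}  # observation after applying the action

    class FakeMotorSystem:
        def __call__(self):
            return 1  # next action

    class SimpleLoader:
        """Mirrors EnvironmentDataLoader: the first yield is the initial observation."""
        def __init__(self, dataset, motor_system):
            self.dataset, self.motor_system = dataset, motor_system
        def __iter__(self):
            yield self.dataset.reset()  # initial state first...
            for _ in range(3):
                action = self.motor_system()
                yield self.dataset[action]  # ...then action-driven observations

    for observation in SimpleLoader(FakeDataset(), FakeMotorSystem()):
        print(observation)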

dataset#

EnvironmentDataset

motor_system#

MotorSystem

Note

If the amount variable returned by motor_system is None, the amount used by Habitat will be the default for the actuator, e.g. PanTiltZoomCamera.translation_step.

Note

This class does not work on its own; use one of its subclasses.

finish()[source]#
post_episode()[source]#
post_epoch()[source]#
pre_episode()[source]#
pre_epoch()[source]#
class EnvironmentDataLoaderPerObject(object_names, object_init_sampler, *args, **kwargs)[source]#

Bases: EnvironmentDataLoader

Dataloader for testing on an environment with one “primary target” object.

Dataloader for testing on an environment where one “primary target” object is loaded at a time; optionally, other “distractor” objects can also be added to the environment.

Maintains a list of primary target objects, swapping them in and out between episodes without resetting the environment. The objects are initialized with parameters that vary their location, rotation, and scale.

After the primary target is added to the environment, other distractor objects, sampled from the same object list, can be added.
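
Example

A self-contained sketch of the per-object episode loop described above: one primary target per episode, with initialization parameters sampled so that location, rotation, and scale vary. All names and value ranges are illustrative:

    import random

    object_names = ["mug", "bowl", "spoon"]

    def sample_init_params(rng):
        return {
            "position": tuple(rng.uniform(-0.1, 0.1) for _ in range(3)),
            "rotation": tuple(rng.choice([0, 90, 180, 270]) for _ in range(3)),
            "scale": (1.0, 1.0, 1.0),
        }

    rng = random.Random(0)
    for episode, name in enumerate(object_names):
        params = sample_init_params(rng)
        print(f"episode {episode}: primary target {name!r} with {params}")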

add_distractor_objects(primary_target_obj, init_params, primary_target_name)[source]#

Add arbitrarily many “distractor” objects to the environment.

Parameters:
  • primary_target_obj – the Habitat object which is the primary target in the scene

  • init_params (dict) – parameters used to initialize the object, e.g. orientation; for now, these are identical to the primary target except for the object ID

  • primary_target_name (str) – name of the primary target object

change_object_by_idx(idx)[source]#

Update the primary target object in the scene based on the given index.

The given idx is the index of the object in the self.object_names list, which should correspond to the index of the object in the self.object_params list.

Also add any distractor objects if required.

Parameters:

idx – Index of the new object and its parameters in object_params.

create_semantic_mapping()[source]#

Create a unique semantic ID (positive integer) for each object.

Used by Habitat for the semantic sensor.

In addition, create a dictionary mapping back and forth between these IDs and the corresponding object names.
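
Example

A sketch of the bidirectional mapping described above: each object name receives a unique positive-integer semantic ID, with dictionaries mapping both ways. The object list is illustrative:

    object_names = ["mug", "bowl", "spoon"]

    semantic_id_to_label = {i + 1: name for i, name in enumerate(object_names)}
    semantic_label_to_id = {name: i for i, name in semantic_id_to_label.items()}

    assert semantic_label_to_id["bowl"] == 2
    assert semantic_id_to_label[2] == "bowl"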

cycle_object()[source]#

Remove the previous object(s) from the scene and add a new primary target.

Also add any potential distractor objects.

post_episode()[source]#
post_epoch()[source]#
pre_episode()[source]#
pre_epoch()[source]#
reset_agent()[source]#
class EnvironmentDataset(*args: Any, **kwargs: Any)[source]#

Bases: Dataset

Wraps an embodied environment with a torch.utils.data.Dataset.

TODO: Change the name of this class to reflect the interactiveness. Monty doesn’t work with static datasets, it interacts with the environment.

env_init_func#

Callable function used to create the embodied environment. This function should return an instance of a class implementing EmbodiedEnvironment.

env_init_args#

Arguments to env_init_func

n_actions_per_epoch#

Number of actions per epoch. Used to determine the number of observations this dataset will return per epoch. It can be viewed as the dataset size.

transform#

Callable used to transform the observations returned by the dataset.

Note

The main idea is to separate concerns:

  • the dataset owns the environment and creates it at initialization

  • the dataset just handles the __getitem__() method

  • the dataset does not handle motor activity; it just accepts an action from the policy and uses it to look up the next observation
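
Example

A self-contained sketch of this separation of concerns; the classes are stand-ins, not the real API. The dataset creates and owns the environment, and __getitem__ simply maps an action (supplied by the policy) to the next observation:

    class FakeEnvironment:
        def __init__(self):
            self.pos = 0
        def step(self, action):
            self.pos += action
            return {"pos": self.pos}

    class ActionIndexedDataset:
        def __init__(self, env_init_func, env_init_args):
            self.env = env_init_func(**env_init_args)  # dataset creates the env
        def __getitem__(self, action):
            return self.env.step(action)  # action in, next observation out

    dataset = ActionIndexedDataset(env_init_func=FakeEnvironment, env_init_args={})
    print(dataset[1])  # the policy supplies the action; prints {'pos': 1}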

apply_transform(transform, observation, state)[source]#
close()[source]#
reset()[source]#
property action_space#
class InformedEnvironmentDataLoader(use_get_good_view_positioning_procedure: bool = False, *args, **kwargs)[source]#

Bases: EnvironmentDataLoaderPerObject

Dataloader that supports a policy which makes use of previous observation(s).

Extension of the EnvironmentDataLoader where the actions can be informed by the observations. It passes the observation to the InformedPolicy class (which is an extension of the BasePolicy). This policy can then make use of the observation to decide on the next action.

Also has the following additional functionality (TODO: refactor/separate these out as appropriate):

i) This dataloader allows for early stopping by adding the set_done method, which can, for example, be called when the object is recognized (see the sketch after this list).

ii) The motor_only_step can be set such that the sensory module can later determine whether perceptual data should be sent to the learning module, or just fed back to the motor policy.

iii) Handles different dataloader updates depending on whether the policy is based on the distant agent or the surface (touch) agent.

iv) Supports the hypothesis-testing “jump” policy.
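
Example

A self-contained sketch of early stopping via set_done (item i above). The recognition check is simulated; only the set_done hook itself is taken from the documented interface:

    class SketchLoader:
        def __init__(self):
            self._done = False
        def set_done(self):
            self._done = True  # documented hook for ending an episode early
        def __iter__(self):
            step = 0
            while not self._done:
                yield {"step": step}
                step += 1

    loader = SketchLoader()
    for observation in loader:
        if observation["step"] == 5:  # e.g., the object was recognized
            loader.set_done()
    print("episode ended early")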

execute_jump_attempt()[source]#

Attempt a hypothesis-testing “jump” onto a location of the object.

Delegates directly to the motor policy to determine the specific jump actions.

Returns:

The observation from the jump attempt.

first_step()[source]#

Carry out particular motor-system state updates required on the first step.

TODO: can this be removed by appropriately initializing motor_only_step?

Returns:

The observation from the first step.

get_good_view(sensor_id: str, allow_translation: bool = True, max_orientation_attempts: int = 1) → bool[source]#

Policy to get a good view of the object before an episode starts.

Used by the distant agent to find the initial view of an object at the beginning of an episode with respect to a given sensor (the surface agent makes use of the touch_object method instead). Also currently used by the distant agent after a “jump” has been initialized by a model-based policy.

First, the agent moves toward the object until it fills a minimum percentage (given by motor_system._policy.good_view_percentage) of the sensor’s field of view, or until the closest point of the object is less than a given distance (motor_system._policy.desired_object_distance) from the sensor. This makes sure that big and small objects fill a similar amount of space in the sensor’s field of view. Otherwise, small objects may be too small to perform saccades on, or the sensor may end up inside big objects. This step is performed by default but can be skipped by setting allow_translation=False.

Second, the agent is oriented toward the object so that the sensor’s central pixel is on-object. In the case of multi-object experiments (i.e., when num_distractors > 0), an additional orientation step is performed prior to the translational movement step.

Parameters:
  • sensor_id – The name of the sensor used to inform movements.

  • allow_translation – Whether to allow movement toward the object via the motor system’s move_close_enough method. If False, only orienting movements are performed. Default is True.

  • max_orientation_attempts – The maximum number of orientation attempts allowed before giving up and returning False, indicating that the sensor is not on the target object.

Returns:

Whether the sensor is on the target object.

TODO: move most of this to the motor system; it shouldn’t be in embodied_data.
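
Example

A pure-Python sketch of the two stopping criteria for the translational step described above: move toward the object until it fills good_view_percentage of the field of view or its closest point is within desired_object_distance. All numbers and the fill model are illustrative:

    good_view_percentage = 0.5
    desired_object_distance = 0.25

    def close_enough(fill_fraction, closest_point_distance):
        return (fill_fraction >= good_view_percentage
                or closest_point_distance <= desired_object_distance)

    distance, fill = 2.0, 0.05
    while not close_enough(fill, distance):
        distance -= 0.1  # move toward the object
        fill = min(1.0, 0.1 / distance)  # toy model: nearer objects fill more pixels
    print(f"stopped at distance {distance:.2f} with fill {fill:.2f}")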

get_good_view_with_patch_refinement() → bool[source]#

Policy to get a good view of the object for the central patch.

Used by the distant agent to move and orient toward an object such that the central patch is on-object. This is done by first moving and orienting the agent toward the object using the view finder. Then orienting movements are performed using the central patch (i.e., the sensor module with id “patch” or “patch_0”) to ensure that the patch’s central pixel is on-object. Up to 3 reorientation attempts are performed using the central patch.

Also currently used by the distant agent after a “jump” has been initialized by a model-based policy.

Returns:

Whether the sensor is on the object.

handle_failed_jump(pre_jump_state, first_sensor)[source]#

Deal with the results of a failed hypothesis-testing jump.

A failed jump is “off-object”, i.e. the object is not perceived by the sensor.

handle_successful_jump()[source]#

Deal with the results of a successful hypothesis-testing jump.

A successful jump is “on-object”, i.e. the object is perceived by the sensor.

pre_episode()[source]#
class OmniglotDataLoader(alphabets, characters, versions, dataset, motor_system: MotorSystem, *args, **kwargs)[source]#

Bases: EnvironmentDataLoaderPerObject

Dataloader for Omniglot dataset.

change_object_by_idx(idx)[source]#

Update the object in the scene given its index in the object params.

Parameters:

idx – Index of the new object and its parameters in object params.

cycle_object()[source]#

Switch to the next character image.

post_episode()[source]#
post_epoch()[source]#
class SaccadeOnImageDataLoader(scenes, versions, dataset: EnvironmentDataset, motor_system: MotorSystem, *args, **kwargs)[source]#

Bases: EnvironmentDataLoaderPerObject

Dataloader for moving over a 2D image with depth channel.

change_object_by_idx(idx)[source]#

Update the object in the scene given its index in the object params.

Parameters:

idx – Index of the new object and its parameters in object params.

cycle_object()[source]#

Switch to the next scene image.

post_episode()[source]#
post_epoch()[source]#
class SaccadeOnImageFromStreamDataLoader(dataset: EnvironmentDataset, motor_system: MotorSystem, *args, **kwargs)[source]#

Bases: SaccadeOnImageDataLoader

Dataloader for moving over a 2D image with depth channel.

change_scene_by_idx(idx)[source]#

Update the object in the scene given its index in the object params.

Parameters:

idx – Index of the new object and its parameters in object params.

cycle_scene()[source]#

Switch to the next scene image.

post_episode()[source]#
post_epoch()[source]#
pre_epoch()[source]#

tbp.monty.frameworks.environments.embodied_environment#

class ActionSpace[source]#

Bases: Container

Represents the environment action space.

abstract sample()[source]#

Sample the action space, returning a random action.

class EmbodiedEnvironment[source]#

Bases: ABC

abstract add_object(name: str, position: Tuple[float, float, float] = (0.0, 0.0, 0.0), rotation: Tuple[float, float, float, float] = (1.0, 0.0, 0.0, 0.0), scale: Tuple[float, float, float] = (1.0, 1.0, 1.0), semantic_id: str | None = None, enable_physics: bool | None = False, object_to_avoid=False, primary_target_object=None)[source]#

Add an object to the environment.

Parameters:
  • name – The name of the object to add.

  • position – The initial absolute position of the object.

  • rotation – The initial rotation WXYZ quaternion of the object. Defaults to (1,0,0,0).

  • scale – The scale of the object to add. Defaults to (1,1,1).

  • semantic_id – Optional override for the object semantic ID.

  • enable_physics – Whether to enable physics on the object. Defaults to False.

  • object_to_avoid – If True, run collision checks to ensure the object will not collide with any other objects in the scene. If collision is detected, the object will be moved. Defaults to False.

  • primary_target_object – If not None, the added object will be positioned so that it does not obscure the initial view of the primary target object (which avoiding collision alone cannot guarantee). Used when adding multiple objects. Defaults to None.

Returns:

The newly added object.

TODO: This add_object interface is elevated from HabitatSim.add_object and is quite specific to the HabitatSim implementation. We should consider refactoring this to be more generic.
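
Example

An illustrative call using the documented signature; env is assumed to be an already-constructed concrete EmbodiedEnvironment (e.g., a HabitatSim instance), and all values are placeholders:

    mug = env.add_object(
        name="mug",
        position=(0.0, 1.5, -0.2),      # absolute position in the scene
        rotation=(1.0, 0.0, 0.0, 0.0),  # identity WXYZ quaternion
        scale=(1.0, 1.0, 1.0),
        semantic_id=None,               # let the environment assign the ID
        enable_physics=False,
        object_to_avoid=True,           # move the object if a collision is detected
    )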

abstract close()[source]#

Close the environment, releasing all resources.

After closing, any call to any other environment method may raise an exception.

abstract get_state()[source]#

Return the state of the environment (and agent).

abstract remove_all_objects()[source]#

Remove all objects from the environment.

TODO: This remove_all_objects interface is elevated from HabitatSim.remove_all_objects and is quite specific to the HabitatSim implementation. We should consider refactoring this to be more generic.

abstract reset()[source]#

Reset the environment to its initial state.

Return the environment’s initial observations.

abstract step(action: Action) → Dict[Any, Dict][source]#

Apply the given action to the environment.

Return the current observations and other environment information (e.g., sensor pose) after the action is applied.
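
Example

A self-contained sketch of the environment lifecycle implied by reset(), step(), and close(); TinyEnv is a stand-in that mimics the abstract contract, and the action strings are placeholders:

    class TinyEnv:
        def reset(self):
            self.t = 0
            return {"agent": {"sensor": {"t": self.t}}}  # initial observations
        def step(self, action):
            self.t += 1  # pretend the action moved the agent
            return {"agent": {"sensor": {"t": self.t, "action": action}}}
        def close(self):
            pass  # release resources; further calls may raise

    env = TinyEnv()
    observations = env.reset()
    for action in ["move_forward", "turn_left"]:
        observations = env.step(action)  # observations after each action
    env.close()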

abstract property action_space#

Returns a list of all possible actions available in the environment.

tbp.monty.frameworks.environments.real_robots#

class RealRobotsEnvironment(id)[source]#

Bases: EmbodiedEnvironment

Real Robots environment compatible with Monty.

Note

real_robots dependencies are not installed by default. Install the real_robots extra dependencies if you want to use the real_robots environment.

add_object(*args, **kwargs)[source]#

Add an object to the environment.

Parameters:
  • name – The name of the object to add.

  • position – The initial absolute position of the object.

  • rotation – The initial rotation WXYZ quaternion of the object. Defaults to (1,0,0,0).

  • scale – The scale of the object to add. Defaults to (1,1,1).

  • semantic_id – Optional override for the object semantic ID.

  • enable_physics – Whether to enable physics on the object. Defaults to False.

  • object_to_avoid – If True, run collision checks to ensure the object will not collide with any other objects in the scene. If collision is detected, the object will be moved. Defaults to False.

  • primary_target_object – If not None, the added object will be positioned so that it does not obscure the initial view of the primary target object (which avoiding collision alone cannot guarantee). Used when adding multiple objects. Defaults to None.

Returns:

The newly added object.

TODO: This add_object interface is elevated from HabitatSim.add_object and is quite specific to the HabitatSim implementation. We should consider refactoring this to be more generic.

close()[source]#

Close the environment, releasing all resources.

After closing, any call to any other environment method may raise an exception.

remove_all_objects()[source]#

Remove all objects from the environment.

TODO: This remove_all_objects interface is elevated from HabitatSim.remove_all_objects and is quite specific to the HabitatSim implementation. We should consider refactoring this to be more generic.

reset()[source]#

Reset the environment to its initial state.

Return the environment’s initial observations.

step(action)[source]#

Apply the given action to the environment.

Return the current observations and other environment information (e.g., sensor pose) after the action is applied.

property action_space#

Returns a list of all possible actions available in the environment.

tbp.monty.frameworks.environments.two_d_data#

class OmniglotEnvironment(patch_size=10, data_path=None)[source]#

Bases: EmbodiedEnvironment

Environment for Omniglot dataset.

add_object(*args, **kwargs)[source]#

Add an object to the environment.

Parameters:
  • name – The name of the object to add.

  • position – The initial absolute position of the object.

  • rotation – The initial rotation WXYZ quaternion of the object. Defaults to (1,0,0,0).

  • scale – The scale of the object to add. Defaults to (1,1,1).

  • semantic_id – Optional override for the object semantic ID.

  • enable_physics – Whether to enable physics on the object. Defaults to False.

  • object_to_avoid – If True, run collision checks to ensure the object will not collide with any other objects in the scene. If collision is detected, the object will be moved. Defaults to False.

  • primary_target_object – If not None, the added object will be positioned so that it does not obscure the initial view of the primary target object (which avoiding collision alone cannot guarantee). Used when adding multiple objects. Defaults to None.

Returns:

The newly added object.

TODO: This add_object interface is elevated from HabitatSim.add_object and is quite specific to the HabitatSim implementation. We should consider refactoring this to be more generic.

close()[source]#

Close the environment, releasing all resources.

After closing, any call to any other environment method may raise an exception.

get_image_patch(img, loc, patch_size)[source]#
get_state()[source]#

Return the state of the environment (and agent).

load_new_character_data()[source]#
motor_to_locations(motor)[source]#
remove_all_objects()[source]#

Remove all objects from the environment.

TODO: This remove_all_objects interface is elevated from HabitatSim.remove_all_objects and is quite specific to the HabitatSim implementation. We should consider refactoring this to be more generic.

reset()[source]#

Reset the environment to its initial state.

Return the environment’s initial observations.

step(action: Action)[source]#

Retrieve the next observation.

Since the Omniglot dataset includes stroke information (the order in which the character was drawn, as a list of x,y coordinates), we use that for movement. This means we start at the first x,y coordinate saved in the move path and then move through this list in increments specified by amount. Overall there are usually several hundred points (~200-400), but this varies between characters and versions. If we reach the end of a move path and the episode is not finished, we start from the beginning again. If len(move_path) % amount != 0, we will sample different points on the second pass.

Parameters:
  • action – Not used at the moment since we just follow the draw path. However, we do use the rotation_degrees to determine the number of pixels to move at each step.

Returns:

observation (dict).
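
Example

A sketch of the move-path traversal described above: step through the stroke coordinates in increments of amount, wrapping around when the path is exhausted. Because len(move_path) % amount != 0 here, the second pass visits different points:

    move_path = [(x, x) for x in range(7)]  # toy stroke with 7 (x, y) points
    amount = 3

    idx = 0
    for _ in range(6):
        print(move_path[idx])
        idx = (idx + amount) % len(move_path)  # wrap at the end of the path
    # visits indices 0, 3, 6, then 2, 5, 1 on the second pass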

switch_to_object(alphabet_id, character_id, version_id)[source]#
property action_space#

Returns a list of all possible actions available in the environment.

class SaccadeOnImageEnvironment(patch_size=64, data_path=None)[source]#

Bases: EmbodiedEnvironment

Environment for moving over a 2D image with depth channel.

Images should be stored in .png format for rgb and .data format for depth.

add_object(*args, **kwargs)[source]#

Add an object to the environment.

Parameters:
  • name – The name of the object to add.

  • position – The initial absolute position of the object.

  • rotation – The initial rotation WXYZ quaternion of the object. Defaults to (1,0,0,0).

  • scale – The scale of the object to add. Defaults to (1,1,1).

  • semantic_id – Optional override for the object semantic ID.

  • enable_physics – Whether to enable physics on the object. Defaults to False.

  • object_to_avoid – If True, run collision checks to ensure the object will not collide with any other objects in the scene. If collision is detected, the object will be moved. Defaults to False.

  • primary_target_object – If not None, the added object will be positioned so that it does not obscure the initial view of the primary target object (which avoiding collision alone cannot guarantee). Used when adding multiple objects. Defaults to None.

Returns:

The newly added object.

TODO: This add_object interface is elevated from HabitatSim.add_object and is

quite specific to HabitatSim implementation. We should consider refactoring this to be more generic.

close()[source]#

Close the environment, releasing all resources.

After closing, any call to any other environment method may raise an exception.

get_3d_coordinates_from_pixel_indices(pixel_idx)[source]#

Retrieve 3D coordinates of a pixel.

Returns:

The 3D coordinates of the pixel.

get_3d_scene_point_cloud()[source]#

Turn the 2D depth image into a 3D point cloud using DepthTo3DLocations.

This point cloud is used to estimate the sensor displacement in 3D space between two subsequent steps. Without this, we would get displacements in pixel space, which do not work with our 3D models.

Returns:
  • current_scene_point_cloud – The 3D scene point cloud.
  • current_sf_scene_point_cloud – The 3D scene point cloud in the sensor frame.

get_image_patch(loc)[source]#

Extract 2D image patch from a location in pixel space.

Returns:
  • depth_patch – The depth patch.
  • rgb_patch – The rgb patch.
  • depth3d_patch – The depth3d patch.
  • sensor_frame_patch – The sensor frame patch.

get_move_area()[source]#

Calculate area in which patch can move on the image.

Returns:

The move area.

get_next_loc(action_name, amount)[source]#

Calculate next location in pixel space given the current action.

Returns:

The next location in pixel space.
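
Example

A sketch of one plausible implementation: move amount pixels in the action’s direction, then clip to the move area so the patch stays on the image. The action names and move-area bounds are illustrative, not taken from the actual implementation:

    import numpy as np

    move_area = np.array([[32, 480], [32, 608]])  # [row bounds], [column bounds]
    deltas = {"look_up": (-1, 0), "look_down": (1, 0),
              "turn_left": (0, -1), "turn_right": (0, 1)}

    def next_loc(loc, action_name, amount):
        dr, dc = deltas[action_name]
        row = np.clip(loc[0] + dr * amount, *move_area[0])
        col = np.clip(loc[1] + dc * amount, *move_area[1])
        return int(row), int(col)

    print(next_loc((40, 600), "turn_right", 20))  # clipped to (40, 608)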

get_state()[source]#

Get agent state.

Returns:

The agent state.

load_depth_data(depth_path, height, width)[source]#

Load depth image from .data file.

Returns:

The depth image.

load_new_scene_data()[source]#

Load depth and rgb data for next scene environment.

Returns:
  • current_depth_image – The depth image.
  • current_rgb_image – The rgb image.
  • start_location – The start location.

load_rgb_data(rgb_path)[source]#

Load the RGB image into a numpy array.

Returns:

The rgb image.

process_depth_data(depth)[source]#

Process depth data by reshaping, clipping and flipping.

Returns:

The processed depth image.
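
Example

A sketch of the three processing steps named above, assuming the raw depth arrives as a flat float32 buffer (as when read from a .data file): reshape to (height, width), clip implausible values, and flip vertically. The dimensions and clip range are illustrative:

    import numpy as np

    height, width = 4, 5
    raw = np.linspace(0.0, 12.0, height * width, dtype=np.float32)

    depth = raw.reshape(height, width)  # flat buffer to 2D image
    depth = np.clip(depth, 0.0, 10.0)   # clip far readings to a max distance
    depth = np.flipud(depth)            # flip so row 0 is the top of the scene
    print(depth.shape, depth.max())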

remove_all_objects()[source]#

Remove all objects from the environment.

TODO: This remove_all_objects interface is elevated from HabitatSim.remove_all_objects and is quite specific to the HabitatSim implementation. We should consider refactoring this to be more generic.

reset()[source]#

Reset environment and extract image patch.

TODO: clean up. Do we need this? No reset is required in this dataloader; maybe indicate this better here.

Returns:

The observation from the image patch.

step(action: Action)[source]#

Retrieve the next observation.

Parameters:
  • action – Moving up, down, left, or right from the current location.

  • amount – Number of pixels to move at once.

Returns:

observation (dict).

switch_to_object(scene_id, scene_version_id)[source]#

Load new image to be used as environment.

property action_space#

Returns a list of all possible actions available in the environment.

class SaccadeOnImageFromStreamEnvironment(patch_size=64, data_path=None)[source]#

Bases: SaccadeOnImageEnvironment

Environment for moving over a 2D streamed image with depth channel.

load_new_scene_data()[source]#

Load depth and rgb data for next scene environment.

Returns:
  • current_depth_image – The depth image.
  • current_rgb_image – The rgb image.
  • start_location – The start location.

switch_to_scene(scene_id)[source]#

tbp.monty.frameworks.environments.ycb#