tbp.monty.frameworks.environments#

tbp.monty.frameworks.environments.embodied_data#

class EnvironmentDataLoader(dataset: EnvironmentDataset, motor_system: MotorSystem, rng)[source]#

Bases: object

Wraps the environment dataset with an iterator.

The observations are based on the actions returned by the motor_system.

The first value returned by this iterator is the observations of the environment’s initial state; subsequent observations are returned after the action returned by motor_system is applied.

dataset#

EnvironmentDataset

motor_system#

MotorSystem

Note

If the amount variable returned by motor_system is None, the amount used by Habitat will be the default for the actuator, e.g. PanTiltZoomCamera.translation_step.

Note

This class on its own won’t work; use one of the subclasses below.
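The iterator contract described above can be sketched with toy stand-ins (ToyEnv, ToyMotorSystem, and ToyDataLoader are illustrative names, not the real Monty classes):

```python
# Toy sketch of the iterator contract: the first yielded value is the
# initial-state observation; each later value follows one motor action.
class ToyEnv:
    def __init__(self):
        self.state = 0

    def observe(self):
        return {"state": self.state}

    def apply(self, action):
        self.state += action


class ToyMotorSystem:
    def __call__(self):
        return 1  # always "move by 1"


class ToyDataLoader:
    def __init__(self, env, motor_system, max_steps=3):
        self.env = env
        self.motor_system = motor_system
        self.max_steps = max_steps

    def __iter__(self):
        yield self.env.observe()  # initial-state observation first
        for _ in range(self.max_steps):
            self.env.apply(self.motor_system())  # act, then observe
            yield self.env.observe()


observations = list(ToyDataLoader(ToyEnv(), ToyMotorSystem()))
```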

__init__(dataset: EnvironmentDataset, motor_system: MotorSystem, rng)[source]#
finish()[source]#
post_episode()[source]#
post_epoch()[source]#
pre_episode()[source]#
pre_epoch()[source]#
class EnvironmentDataLoaderPerObject(object_names, object_init_sampler, *args, **kwargs)[source]#

Bases: EnvironmentDataLoader

Dataloader for testing on environment with one “primary target” object.

Dataloader for testing in an environment where we load one “primary target” object at a time; optionally, other “distractor” objects can also be loaded into the environment.

Has a list of primary target objects, swapping these objects in and out across episodes without resetting the environment. The objects are initialized with parameters so that we can vary their location, rotation, and scale.

After the primary target is added to the environment, other distractor objects, sampled from the same object list, can be added.

__init__(object_names, object_init_sampler, *args, **kwargs)[source]#

Initialize dataloader.

Parameters:
  • object_names

    A list of object names if doing a simple experiment with primary target objects only; a dict for experiments with multiple objects, with the keys:

    targets_list – the list of primary target objects.

    source_object_list – the original object list from which the primary target objects were sampled; used to sample distractor objects.

    num_distractors – the number of distractor objects to add to the environment.

  • object_init_sampler – Function that returns dict with position, rotation, and scale of objects when re-initializing. To keep configs serializable, default is set to DefaultObjectInitializer.

  • *args – Additional arguments.

  • **kwargs – Additional keyword arguments.

See also

tbp.monty.frameworks.make_dataset_configs EnvironmentDataLoaderPerObjectTrainArgs

Raises:

TypeError – If object_names is not a list or dictionary
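The two accepted shapes of object_names can be illustrated as follows (the object names are placeholders, and validate is a hypothetical helper mirroring the documented TypeError behavior, not the actual implementation):

```python
# Simple experiments: a plain list of primary target object names.
simple = ["mug", "bowl", "spoon"]

# Multi-object experiments: a dict with the three documented keys.
multi = {
    "targets_list": ["mug", "bowl"],                         # primary targets
    "source_object_list": ["mug", "bowl", "spoon", "fork"],  # distractor pool
    "num_distractors": 2,                                    # added per episode
}


def validate(object_names):
    # Mirrors the documented contract: list or dict only.
    if not isinstance(object_names, (list, dict)):
        raise TypeError("object_names must be a list or dictionary")
    return object_names
```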

add_distractor_objects(primary_target_obj, init_params, primary_target_name)[source]#

Add arbitrarily many “distractor” objects to the environment.

Parameters:
  • primary_target_obj – the Habitat object which is the primary target in the scene

  • init_params – parameters used to initialize the object, e.g. orientation; for now, these are identical to the primary target except for the object ID

  • primary_target_name – name of the primary target object

change_object_by_idx(idx)[source]#

Update the primary target object in the scene based on the given index.

The given idx is the index of the object in the self.object_names list, which should correspond to the index of the object in the self.object_params list.

Also add any distractor objects if required.

Parameters:

idx – Index of the new object and its parameters in object_params

create_semantic_mapping()[source]#

Create a unique semantic ID (positive integer) for each object.

Used by Habitat for the semantic sensor.

In addition, create a dictionary mapping back and forth between these IDs and the corresponding name of the object
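A minimal sketch of such a bidirectional mapping (illustrative, not the actual implementation):

```python
def create_semantic_mapping(object_names):
    # Assign each object a unique positive-integer semantic ID (starting
    # at 1), plus the inverse mapping from ID back to object name.
    name_to_id = {name: i + 1 for i, name in enumerate(object_names)}
    id_to_name = {sem_id: name for name, sem_id in name_to_id.items()}
    return name_to_id, id_to_name


name_to_id, id_to_name = create_semantic_mapping(["mug", "bowl"])
```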

cycle_object()[source]#

Remove the previous object(s) from the scene and add a new primary target.

Also add any potential distractor objects.

post_episode()[source]#
post_epoch()[source]#
pre_episode()[source]#
pre_epoch()[source]#
class EnvironmentDataset(*args: Any, **kwargs: Any)[source]#

Bases: Dataset

Wraps an embodied environment with a torch.utils.data.Dataset.

TODO: Change the name of this class to reflect the interactiveness. Monty doesn’t work with static datasets; it interacts with the environment.

env_init_func#

Callable function used to create the embodied environment. This function should return a class implementing EmbodiedEnvironment

env_init_args#

Arguments to env_init_func

n_actions_per_epoch#

Number of actions per epoch. Used to determine the number of observations this dataset will return per epoch. It can be viewed as the dataset size.

transform#

Callable used to transform the observations returned by the dataset

Note

Main idea is to separate concerns:

  • dataset owns the environment and creates it at initialization

  • dataset just handles the __getitem__() method

  • dataset does not handle motor activity; it just accepts an action from the policy and uses it to look up the next observation
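The env_init_func / env_init_args pattern can be sketched as follows (ToyEnvironment and ToyDataset are toy names, not the real classes):

```python
class ToyEnvironment:
    def __init__(self, size):
        self.size = size


class ToyDataset:
    def __init__(self, env_init_func, env_init_args, transform=None):
        # The dataset owns the environment and creates it at initialization.
        self.env = env_init_func(**env_init_args)
        self.transform = transform

    def __getitem__(self, action):
        # Accept an action and look up the "next observation" for it.
        obs = {"size": self.env.size, "action": action}
        return self.transform(obs) if self.transform else obs


ds = ToyDataset(ToyEnvironment, {"size": 5})
```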

__init__(env_init_func, env_init_args, rng, transform=None)[source]#
apply_transform(transform, observation, state)[source]#
close()[source]#
reset()[source]#
property action_space#
class InformedEnvironmentDataLoader(object_names, object_init_sampler, *args, **kwargs)[source]#

Bases: EnvironmentDataLoaderPerObject

Dataloader that supports a policy which makes use of previous observation(s).

Extension of the EnvironmentDataLoader where the actions can be informed by the observations. It passes the observation to the InformedPolicy class (which is an extension of the BasePolicy). This policy can then make use of the observation to decide on the next action.

Also has the following additional functionality (TODO: refactor/separate these out as appropriate):

i) this dataloader allows for early stopping by adding the set_done method which can for example be called when the object is recognized.

ii) the motor_only_step can be set such that the sensory module can later determine whether perceptual data should be sent to the learning module, or just fed back to the motor policy.

iii) Handles different data-loader updates depending on whether the policy is based on the surface-agent or touch-agent

iv) Supports the hypothesis-testing “jump” policy
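The early-stopping hook from (i) can be sketched as follows (ToyInformedLoader is an illustrative stand-in, not the real class):

```python
class ToyInformedLoader:
    def __init__(self, n_steps):
        self.n_steps = n_steps
        self._done = False

    def set_done(self):
        self._done = True  # e.g. called once the object is recognized

    def __iter__(self):
        for step in range(self.n_steps):
            if self._done:
                return  # end the episode early
            yield {"step": step}


loader = ToyInformedLoader(n_steps=10)
collected = []
for obs in loader:
    collected.append(obs)
    if obs["step"] == 2:
        loader.set_done()  # pretend recognition happened at step 2
```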

execute_jump_attempt()[source]#

Attempt a hypothesis-testing “jump” onto a location of the object.

Delegates to motor policy directly to determine specific jump actions.

Returns:

The observation from the jump attempt.

first_step()[source]#

Carry out particular motor-system state updates required on the first step.

TODO: this could possibly be removed by appropriately initializing motor_only_step.

Returns:

The observation from the first step.

get_good_view(sensor_id: str, allow_translation: bool = True, max_orientation_attempts: int = 1) bool[source]#

Invoke the GetGoodView positioning procedure.

Parameters:
  • sensor_id (str) – The ID of the sensor to use for positioning.

  • allow_translation (bool) – Whether to allow movement toward the object via the motor system’s move_close_enough method. If False, only orienting movements are performed. Defaults to True.

  • max_orientation_attempts (int) – The maximum number of orientation attempts allowed before giving up and truncating the procedure, indicating that the sensor is not on the target object. Defaults to 1.

Return type:

bool

Returns:

Whether the sensor is on the target object.
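The bounded-attempts loop described above can be sketched as follows (the function signature and callbacks here are illustrative, not the actual method):

```python
def get_good_view(on_object_check, orient_once, max_orientation_attempts=1):
    # Try up to max_orientation_attempts orienting movements, then report
    # whether the sensor ended up on the target object.
    for _ in range(max_orientation_attempts):
        if on_object_check():
            return True
        orient_once()
    return on_object_check()


attempts = {"n": 0}

def _orient_once():
    attempts["n"] += 1

success = get_good_view(
    on_object_check=lambda: attempts["n"] >= 2,  # on-object after 2 orients
    orient_once=_orient_once,
    max_orientation_attempts=3,
)
```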

get_good_view_with_patch_refinement() bool[source]#

Policy to get a good view of the object for the central patch.

Used by the distant agent to move and orient toward an object such that the central patch is on-object. This is done by first moving and orienting the agent toward the object using the view finder. Then orienting movements are performed using the central patch (i.e., the sensor module with id “patch” or “patch_0”) to ensure that the patch’s central pixel is on-object. Up to 3 reorientation attempts are performed using the central patch.

Also currently used by the distant agent after a “jump” has been initialized by a model-based policy.

Return type:

bool

Returns:

Whether the sensor is on the object.

handle_failed_jump(pre_jump_state, first_sensor)[source]#

Deal with the results of a failed hypothesis-testing jump.

A failed jump is “off-object”, i.e. the object is not perceived by the sensor.

handle_successful_jump()[source]#

Deal with the results of a successful hypothesis-testing jump.

A successful jump is “on-object”, i.e. the object is perceived by the sensor.

pre_episode()[source]#
class OmniglotDataLoader(alphabets, characters, versions, dataset, motor_system: MotorSystem, *args, **kwargs)[source]#

Bases: EnvironmentDataLoaderPerObject

Dataloader for Omniglot dataset.

__init__(alphabets, characters, versions, dataset, motor_system: MotorSystem, *args, **kwargs)[source]#

Initialize dataloader.

Parameters:
  • alphabets – List of alphabets.

  • characters – List of characters.

  • versions – List of versions.

  • dataset – The environment dataset.

  • motor_system (MotorSystem) – The motor system.

  • *args – Additional arguments

  • **kwargs – Additional keyword arguments

Raises:

TypeError – If motor_system is not an instance of MotorSystem.

change_object_by_idx(idx)[source]#

Update the object in the scene given its index in the object params.

Parameters:

idx – Index of the new object and its parameters in object params

cycle_object()[source]#

Switch to the next character image.

post_episode()[source]#
post_epoch()[source]#
class SaccadeOnImageDataLoader(scenes, versions, dataset: EnvironmentDataset, motor_system: MotorSystem, *args, **kwargs)[source]#

Bases: EnvironmentDataLoaderPerObject

Dataloader for moving over a 2D image with depth channel.

__init__(scenes, versions, dataset: EnvironmentDataset, motor_system: MotorSystem, *args, **kwargs)[source]#

Initialize dataloader.

Parameters:
  • scenes – List of scenes

  • versions – List of versions

  • dataset (EnvironmentDataset) – The environment dataset.

  • motor_system (MotorSystem) – The motor system.

  • *args – Additional arguments

  • **kwargs – Additional keyword arguments

Raises:

TypeError – If motor_system is not an instance of MotorSystem.

change_object_by_idx(idx)[source]#

Update the object in the scene given its index in the object params.

Parameters:

idx – Index of the new object and its parameters in object params

cycle_object()[source]#

Switch to the next scene image.

post_episode()[source]#
post_epoch()[source]#
class SaccadeOnImageFromStreamDataLoader(dataset: EnvironmentDataset, motor_system: MotorSystem, *args, **kwargs)[source]#

Bases: SaccadeOnImageDataLoader

Dataloader for moving over a 2D image with depth channel.

__init__(dataset: EnvironmentDataset, motor_system: MotorSystem, *args, **kwargs)[source]#

Initialize dataloader.

Parameters:
  • dataset (EnvironmentDataset) – The environment dataset.

  • motor_system (MotorSystem) – The motor system.

  • *args – Additional arguments

  • **kwargs – Additional keyword arguments

Raises:

TypeError – If motor_system is not an instance of MotorSystem.

change_scene_by_idx(idx)[source]#

Update the object in the scene given its index in the object params.

Parameters:

idx – Index of the new object and its parameters in object params

cycle_scene()[source]#

Switch to the next scene image.

post_episode()[source]#
post_epoch()[source]#
pre_epoch()[source]#

tbp.monty.frameworks.environments.embodied_environment#

class ActionSpace[source]#

Bases: Container

Represents the environment action space.

abstract sample()[source]#

Sample the action space returning a random action.
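A hypothetical concrete implementation of this interface (ListActionSpace and its action names are illustrative):

```python
import random


class ListActionSpace:
    # A Container of discrete action names with random sampling.
    def __init__(self, actions, rng=None):
        self.actions = list(actions)
        self.rng = rng or random.Random(0)

    def __contains__(self, action):  # Container protocol
        return action in self.actions

    def sample(self):
        # Return a random action from the space.
        return self.rng.choice(self.actions)


space = ListActionSpace(["move_forward", "turn_left", "turn_right"])
```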

class EmbodiedEnvironment[source]#

Bases: ABC

abstract add_object(name: str, position: Tuple[float, float, float] = (0.0, 0.0, 0.0), rotation: Tuple[float, float, float, float] = (1.0, 0.0, 0.0, 0.0), scale: Tuple[float, float, float] = (1.0, 1.0, 1.0), semantic_id: str | None = None, enable_physics: bool | None = False, object_to_avoid=False, primary_target_object=None)[source]#

Add an object to the environment.

Parameters:
  • name (str) – The name of the object to add.

  • position (Tuple[float, float, float]) – The initial absolute position of the object.

  • rotation (Tuple[float, float, float, float]) – The initial rotation WXYZ quaternion of the object. Defaults to (1,0,0,0).

  • scale (Tuple[float, float, float]) – The scale of the object to add. Defaults to (1,1,1).

  • semantic_id (Optional[str]) – Optional override for the object semantic ID.

  • enable_physics (Optional[bool]) – Whether to enable physics on the object. Defaults to False.

  • object_to_avoid – If True, run collision checks to ensure the object will not collide with any other objects in the scene. If collision is detected, the object will be moved. Defaults to False.

  • primary_target_object – If not None, the added object will be positioned so that it does not obscure the initial view of the primary target object (which avoiding collision alone cannot guarantee). Used when adding multiple objects. Defaults to None.

Returns:

The newly added object.

TODO: This add_object interface is elevated from HabitatSim.add_object and is quite specific to the HabitatSim implementation. We should consider refactoring this to be more generic.

abstract close()[source]#

Close the environment, releasing all resources.

After closing, any call to any other environment method may raise an exception.

abstract get_state()[source]#

Return the state of the environment (and agent).

abstract remove_all_objects()[source]#

Remove all objects from the environment.

TODO: This remove_all_objects interface is elevated from HabitatSim.remove_all_objects and is quite specific to the HabitatSim implementation. We should consider refactoring this to be more generic.

abstract reset()[source]#

Reset environment to its initial state.

Return the environment’s initial observations.

abstract step(action: Action) Dict[Any, Dict][source]#

Apply the given action to the environment.

Return the current observations and other environment information (i.e. sensor pose) after the action is applied.

Return type:

Dict[Any, Dict]

abstract property action_space#

Returns list of all possible actions available in the environment.
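A toy concrete environment illustrating the step/reset contract of this interface (the observation layout and the displacement-vector action encoding are assumptions for illustration, not what HabitatSim actually returns):

```python
class ToyEmbodiedEnvironment:
    def __init__(self):
        self._objects = {}
        self._agent_pos = (0.0, 0.0, 0.0)

    def add_object(self, name, position=(0.0, 0.0, 0.0), semantic_id=None):
        self._objects[name] = {"position": position, "semantic_id": semantic_id}
        return self._objects[name]

    def remove_all_objects(self):
        self._objects.clear()

    def step(self, action):
        # Toy action: a displacement vector applied to the agent position.
        self._agent_pos = tuple(p + d for p, d in zip(self._agent_pos, action))
        # Return observations plus environment info (e.g. the agent pose).
        return {"agent_0": {"position": self._agent_pos,
                            "objects": len(self._objects)}}

    def reset(self):
        self._agent_pos = (0.0, 0.0, 0.0)
        return self.step((0.0, 0.0, 0.0))  # initial observations


env = ToyEmbodiedEnvironment()
env.add_object("mug", position=(1.0, 0.0, 0.0))
obs = env.step((1.0, 0.0, 0.0))
```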

tbp.monty.frameworks.environments.real_robots#

class RealRobotsEnvironment(id)[source]#

Bases: EmbodiedEnvironment

Real Robots environment compatible with Monty.

Note

real_robots dependencies are not installed by default. Install the real_robots extra dependencies if you want to use this environment.

__init__(id)[source]#
add_object(*args, **kwargs)[source]#

Add an object to the environment.

Parameters:
  • name – The name of the object to add.

  • position – The initial absolute position of the object.

  • rotation – The initial rotation WXYZ quaternion of the object. Defaults to (1,0,0,0).

  • scale – The scale of the object to add. Defaults to (1,1,1).

  • semantic_id – Optional override for the object semantic ID.

  • enable_physics – Whether to enable physics on the object. Defaults to False.

  • object_to_avoid – If True, run collision checks to ensure the object will not collide with any other objects in the scene. If collision is detected, the object will be moved. Defaults to False.

  • primary_target_object – If not None, the added object will be positioned so that it does not obscure the initial view of the primary target object (which avoiding collision alone cannot guarantee). Used when adding multiple objects. Defaults to None.

Returns:

The newly added object.

TODO: This add_object interface is elevated from HabitatSim.add_object and is quite specific to the HabitatSim implementation. We should consider refactoring this to be more generic.

close()[source]#

Close the environment, releasing all resources.

After closing, any call to any other environment method may raise an exception.

remove_all_objects()[source]#

Remove all objects from the environment.

TODO: This remove_all_objects interface is elevated from HabitatSim.remove_all_objects and is quite specific to the HabitatSim implementation. We should consider refactoring this to be more generic.

reset()[source]#

Reset environment to its initial state.

Return the environment’s initial observations.

step(action)[source]#

Apply the given action to the environment.

Return the current observations and other environment information (i.e. sensor pose) after the action is applied.

property action_space#

Returns list of all possible actions available in the environment.

tbp.monty.frameworks.environments.two_d_data#

class OmniglotEnvironment(patch_size=10, data_path=None)[source]#

Bases: EmbodiedEnvironment

Environment for Omniglot dataset.

__init__(patch_size=10, data_path=None)[source]#

Initialize environment.

Parameters:
  • patch_size – height and width of patch in pixels, defaults to 10

  • data_path – path to the Omniglot dataset. If None, it is set to ~/tbp/data/omniglot/python/

add_object(*args, **kwargs)[source]#

Add an object to the environment.

Parameters:
  • name – The name of the object to add.

  • position – The initial absolute position of the object.

  • rotation – The initial rotation WXYZ quaternion of the object. Defaults to (1,0,0,0).

  • scale – The scale of the object to add. Defaults to (1,1,1).

  • semantic_id – Optional override for the object semantic ID.

  • enable_physics – Whether to enable physics on the object. Defaults to False.

  • object_to_avoid – If True, run collision checks to ensure the object will not collide with any other objects in the scene. If collision is detected, the object will be moved. Defaults to False.

  • primary_target_object – If not None, the added object will be positioned so that it does not obscure the initial view of the primary target object (which avoiding collision alone cannot guarantee). Used when adding multiple objects. Defaults to None.

Returns:

The newly added object.

TODO: This add_object interface is elevated from HabitatSim.add_object and is quite specific to the HabitatSim implementation. We should consider refactoring this to be more generic.

close()[source]#

Close the environment, releasing all resources.

After closing, any call to any other environment method may raise an exception.

get_image_patch(img, loc, patch_size)[source]#
get_state()[source]#

Return the state of the environment (and agent).

load_new_character_data()[source]#
motor_to_locations(motor)[source]#
remove_all_objects()[source]#

Remove all objects from the environment.

TODO: This remove_all_objects interface is elevated from HabitatSim.remove_all_objects and is quite specific to the HabitatSim implementation. We should consider refactoring this to be more generic.

reset()[source]#

Reset environment to its initial state.

Return the environment’s initial observations.

step(action: Action) dict[source]#

Retrieve the next observation.

Since the Omniglot dataset includes stroke information (the order in which the character was drawn, as a list of x,y coordinates), we use that for movement. This means we start at the first x,y coordinate saved in the move path and then move through this list in increments specified by amount. Overall there are usually several hundred points (~200-400), but this varies between characters and versions. If we reach the end of a move path and the episode is not finished, we start from the beginning again. If len(move_path) % amount != 0, we will sample different points on the second pass.

Parameters:
  • action (Action) – Not used at the moment since we just follow the draw path. However, we do use the rotation_degrees to determine the amount of pixels to move at each step.

Return type:

dict

Returns:

The observation.
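The wrap-around traversal of the move path can be sketched as follows (next_location is an illustrative helper, not the actual method):

```python
def next_location(move_path, current_idx, amount):
    # Advance along the stroke path by `amount`, wrapping at the end so a
    # second pass visits different points when len(move_path) % amount != 0.
    new_idx = (current_idx + amount) % len(move_path)
    return new_idx, move_path[new_idx]


path = [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]  # toy x,y stroke points
idx = 0
idx, loc = next_location(path, idx, 2)  # index 2
idx, loc = next_location(path, idx, 2)  # index 4
idx, loc = next_location(path, idx, 2)  # index 1: a new point on the 2nd pass
```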

switch_to_object(alphabet_id, character_id, version_id)[source]#
property action_space#

Returns list of all possible actions available in the environment.

class SaccadeOnImageEnvironment(patch_size=64, data_path=None)[source]#

Bases: EmbodiedEnvironment

Environment for moving over a 2D image with depth channel.

Images should be stored in .png format for rgb and .data format for depth.

__init__(patch_size=64, data_path=None)[source]#

Initialize environment.

Parameters:
  • patch_size – height and width of patch in pixels, defaults to 64

  • data_path – path to the image dataset. If None, it is set to ~/tbp/data/worldimages/labeled_scenes/

add_object(*args, **kwargs)[source]#

Add an object to the environment.

Parameters:
  • name – The name of the object to add.

  • position – The initial absolute position of the object.

  • rotation – The initial rotation WXYZ quaternion of the object. Defaults to (1,0,0,0).

  • scale – The scale of the object to add. Defaults to (1,1,1).

  • semantic_id – Optional override for the object semantic ID.

  • enable_physics – Whether to enable physics on the object. Defaults to False.

  • object_to_avoid – If True, run collision checks to ensure the object will not collide with any other objects in the scene. If collision is detected, the object will be moved. Defaults to False.

  • primary_target_object – If not None, the added object will be positioned so that it does not obscure the initial view of the primary target object (which avoiding collision alone cannot guarantee). Used when adding multiple objects. Defaults to None.

Returns:

The newly added object.

TODO: This add_object interface is elevated from HabitatSim.add_object and is quite specific to the HabitatSim implementation. We should consider refactoring this to be more generic.

close()[source]#

Close the environment, releasing all resources.

After closing, any call to any other environment method may raise an exception.

get_3d_coordinates_from_pixel_indices(pixel_idx)[source]#

Retrieve 3D coordinates of a pixel.

Returns:

The 3D coordinates of the pixel.
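With a pinhole camera model, this unprojection looks roughly like the following (the intrinsics fx, fy, cx, cy are illustrative values, not Monty’s actual camera parameters):

```python
def pixel_to_3d(row, col, depth, fx=100.0, fy=100.0, cx=32.0, cy=32.0):
    # Unproject a pixel (row, col) with known depth into camera coordinates
    # using focal lengths (fx, fy) and principal point (cx, cy).
    x = (col - cx) * depth / fx
    y = (row - cy) * depth / fy
    return (x, y, depth)


point = pixel_to_3d(row=32, col=132, depth=2.0)
```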

get_3d_scene_point_cloud()[source]#

Turn 2D depth image into 3D pointcloud using DepthTo3DLocations.

This point cloud is used to estimate the sensor displacement in 3D space between two subsequent steps. Without this we get displacements in pixel space which does not work with our 3D models.

Returns:
  • current_scene_point_cloud – The 3D scene point cloud.

  • current_sf_scene_point_cloud – The 3D scene point cloud in sensor frame.

get_image_patch(loc)[source]#

Extract 2D image patch from a location in pixel space.

Returns:
  • depth_patch – The depth patch.

  • rgb_patch – The rgb patch.

  • depth3d_patch – The depth3d patch.

  • sensor_frame_patch – The sensor frame patch.

get_move_area()[source]#

Calculate area in which patch can move on the image.

Returns:

The move area.

get_next_loc(action_name, amount)[source]#

Calculate next location in pixel space given the current action.

Returns:

The next location in pixel space.
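A sketch of such pixel-space movement with clamping to the move area (the action names and the [row_min, row_max, col_min, col_max] layout are assumptions for illustration):

```python
def get_next_loc(loc, action_name, amount, move_area):
    row, col = loc
    if action_name == "look_up":
        row -= amount
    elif action_name == "look_down":
        row += amount
    elif action_name == "turn_left":
        col -= amount
    elif action_name == "turn_right":
        col += amount
    # Clamp so the patch center stays inside the movable area of the image.
    row = max(move_area[0], min(row, move_area[1]))
    col = max(move_area[2], min(col, move_area[3]))
    return (row, col)


loc = get_next_loc((10, 10), "look_down", 5, (0, 63, 0, 63))
```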

get_state()[source]#

Get agent state.

Returns:

The agent state.

load_depth_data(depth_path, height, width)[source]#

Load depth image from .data file.

Returns:

The depth image.

load_new_scene_data()[source]#

Load depth and rgb data for next scene environment.

Returns:
  • current_depth_image – The depth image.

  • current_rgb_image – The rgb image.

  • start_location – The start location.

load_rgb_data(rgb_path)[source]#

Load RGB image and put into np array.

Returns:

The rgb image.

process_depth_data(depth)[source]#

Process depth data by reshaping, clipping and flipping.

Returns:

The processed depth image.

remove_all_objects()[source]#

Remove all objects from the environment.

TODO: This remove_all_objects interface is elevated from HabitatSim.remove_all_objects and is quite specific to the HabitatSim implementation. We should consider refactoring this to be more generic.

reset()[source]#

Reset environment and extract image patch.

TODO: clean up. Do we need this? No reset is required in this dataloader; maybe indicate this better here.

Returns:

The observation from the image patch.

step(action: Action) dict[source]#

Retrieve the next observation.

Parameters:
  • action (Action) – moving up, down, left or right from current location.

  • amount – Amount of pixels to move at once.

Return type:

dict

Returns:

The observation.

switch_to_object(scene_id, scene_version_id)[source]#

Load new image to be used as environment.

property action_space#

Returns list of all possible actions available in the environment.

class SaccadeOnImageFromStreamEnvironment(patch_size=64, data_path=None)[source]#

Bases: SaccadeOnImageEnvironment

Environment for moving over a 2D streamed image with depth channel.

__init__(patch_size=64, data_path=None)[source]#

Initialize environment.

Parameters:
  • patch_size – height and width of patch in pixels, defaults to 64

  • data_path – path to the image dataset. If None, it is set to ~/tbp/data/worldimages/world_data_stream/

load_new_scene_data()[source]#

Load depth and rgb data for next scene environment.

Returns:
  • current_depth_image – The depth image.

  • current_rgb_image – The rgb image.

  • start_location – The start location.

switch_to_scene(scene_id)[source]#

tbp.monty.frameworks.environments.ycb#