tbp.monty.frameworks.environments#
tbp.monty.frameworks.environments.embodied_data#
tbp.monty.frameworks.environments.embodied_environment#
- class EmbodiedEnvironment[source]#
Bases:
ABC
- abstract close()[source]#
Close the environment, releasing all resources.
After closing, any call to another environment method may raise an exception.
- abstract reset()[source]#
Reset the environment to its initial state.
Return the environment’s initial observations.
- abstract step(action: Action) Dict[Any, Dict] [source]#
Apply the given action to the environment.
Return the current observations and other environment information (e.g., sensor pose) after the action is applied.
- abstract property action_space#
Returns list of all possible actions available in the environment.
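To make the abstract interface concrete, here is a minimal sketch of implementing it. Since tbp.monty may not be installed, the sketch defines a stand-in ABC that mirrors the documented EmbodiedEnvironment interface; in real code you would subclass the class from tbp.monty.frameworks.environments.embodied_environment instead. The CountingEnvironment subclass is purely illustrative.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict

# Stand-in ABC mirroring the documented EmbodiedEnvironment interface
# (close / reset / step / action_space). In real code, subclass
# tbp.monty.frameworks.environments.embodied_environment.EmbodiedEnvironment.
class EmbodiedEnvironment(ABC):
    @abstractmethod
    def close(self): ...

    @abstractmethod
    def reset(self): ...

    @abstractmethod
    def step(self, action) -> Dict[Any, Dict]: ...

    @property
    @abstractmethod
    def action_space(self): ...


class CountingEnvironment(EmbodiedEnvironment):
    """Toy environment: each step returns a step counter as its observation."""

    def __init__(self):
        self._steps = 0

    def reset(self):
        # Return the environment's initial observations.
        self._steps = 0
        return {"agent_0": {"steps": 0}}

    def step(self, action) -> Dict[Any, Dict]:
        # Apply the action, then return the current observations.
        self._steps += 1
        return {"agent_0": {"steps": self._steps}}

    def close(self):
        pass  # release simulator resources here

    @property
    def action_space(self):
        return ["move_forward", "turn_left", "turn_right"]


env = CountingEnvironment()
obs = env.reset()
obs = env.step("move_forward")
env.close()
```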
tbp.monty.frameworks.environments.habitat#
- class AgentConfig(agent_type: Type[HabitatAgent], agent_args: dict | Type[HabitatAgentArgs])[source]#
Bases:
object
Agent configuration used by HabitatEnvironment.
- agent_type: Type[HabitatAgent]#
- class HabitatActionSpace(iterable=(), /)[source]#
Bases:
tuple
,ActionSpace
ActionSpace wrapper for Habitat’s AgentConfiguration.
Wraps the habitat_sim.agent.AgentConfiguration action space as a Monty ActionSpace.
- class HabitatEnvironment(agents: List[dict | AgentConfig], objects: List[dict | ObjectConfig] | None = None, scene_id: str | None = None, seed: int = 42, data_path: str | None = None)[source]#
Bases:
EmbodiedEnvironment
habitat-sim environment compatible with Monty.
- agents#
List of
AgentConfig
to place in the scene.
- objects#
Optional list of
ObjectConfig
to place in the scene.
- scene_id#
Scene to use or None for empty environment.
- seed#
Simulator seed to use.
- data_path#
Path to the dataset.
- close()[source]#
Close the environment, releasing all resources.
After closing, any call to another environment method may raise an exception.
- reset()[source]#
Reset the environment to its initial state.
Return the environment’s initial observations.
- step(action: Action) Dict[str, Dict] [source]#
Apply the given action to the environment.
Return the current observations and other environment information (e.g., sensor pose) after the action is applied.
- property action_space#
Returns list of all possible actions available in the environment.
- class MultiSensorAgentArgs(agent_id: str, sensor_ids: Tuple[str], position: tuple = (0.0, 1.5, 0.0), rotation: tuple = (1.0, 0.0, 0.0, 0.0), height: float = 0.0, rotation_step: float = 0.0, translation_step: float = 0.0, action_space_type: str = 'distant_agent', resolutions: tuple = ((16, 16),), positions: tuple = ((0.0, 0.0, 0.0),), rotations: tuple = ((1.0, 0.0, 0.0, 0.0),), zooms: tuple = (1.0,), semantics: tuple = (False,))#
Bases:
HabitatAgentArgs
- class ObjectConfig(name: str, position: tuple = (0.0, 0.0, 0.0), rotation: tuple = (1.0, 0.0, 0.0, 0.0), scale: tuple = (1.0, 1.0, 1.0), semantic_id: NoneType = None, enable_physics: bool = False, object_to_avoid=False, primary_target_bb=None)#
Bases:
object
- object_to_avoid: _empty = False#
- primary_target_bb: _empty = None#
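The ObjectConfig signature above can be read as a plain dataclass of placement options. The sketch below mirrors the documented fields with a stand-in dataclass so it runs without tbp.monty installed; in real code, import ObjectConfig from tbp.monty.frameworks.environments.habitat. The "mug" object is a made-up example.

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in dataclass mirroring the documented ObjectConfig fields.
# In real code: from tbp.monty.frameworks.environments.habitat import ObjectConfig
@dataclass
class ObjectConfig:
    name: str
    position: tuple = (0.0, 0.0, 0.0)
    rotation: tuple = (1.0, 0.0, 0.0, 0.0)  # quaternion, identity by default
    scale: tuple = (1.0, 1.0, 1.0)
    semantic_id: Optional[int] = None
    enable_physics: bool = False
    object_to_avoid: bool = False
    primary_target_bb: Optional[list] = None


# Place a (hypothetical) "mug" object 1.5 m up, slightly in front of the agent.
mug = ObjectConfig(name="mug", position=(0.0, 1.5, -0.2), semantic_id=1)
```

A list of such configs is what HabitatEnvironment accepts as its `objects` argument.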
- class SingleSensorAgentArgs(agent_id: str, sensor_id: str, agent_position: tuple = (0.0, 1.5, 0.0), sensor_position: tuple = (0.0, 0.0, 0.0), rotation: tuple = (1.0, 0.0, 0.0, 0.0), height: float = 0.0, resolution: tuple = (16, 16), zoom: float = 1.0, semantic: bool = False, rotation_step: float = 0.0, translation_step: float = 0.0, action_space: NoneType = None, action_space_type: str = 'distant_agent')#
Bases:
HabitatAgentArgs
tbp.monty.frameworks.environments.real_robots#
- class RealRobotsEnvironment(id)[source]#
Bases:
EmbodiedEnvironment
Real Robots environment compatible with Monty.
Note
real_robots dependencies are not installed by default. Install the real_robots extra dependencies if you want to use this environment.
- close()[source]#
Close the environment, releasing all resources.
After closing, any call to another environment method may raise an exception.
- reset()[source]#
Reset the environment to its initial state.
Return the environment’s initial observations.
- step(action)[source]#
Apply the given action to the environment.
Return the current observations and other environment information (e.g., sensor pose) after the action is applied.
- property action_space#
Returns list of all possible actions available in the environment.
tbp.monty.frameworks.environments.two_d_data#
- class OmniglotEnvironment(patch_size=10, data_path=None)[source]#
Bases:
EmbodiedEnvironment
Environment for the Omniglot dataset.
- close()[source]#
Close the environment, releasing all resources.
After closing, any call to another environment method may raise an exception.
- reset()[source]#
Reset the environment to its initial state.
Return the environment’s initial observations.
- step(_action, amount)[source]#
Retrieve the next observation.
Since the Omniglot dataset includes stroke information (the order in which the character was drawn, as a list of x, y coordinates), we use that for movement. This means we start at the first x, y coordinate saved in the move path and then move through this list in increments specified by amount. Overall there are usually several hundred points (~200-400), but this varies between characters and versions. If we reach the end of a move path and the episode is not finished, we start from the beginning again. If len(move_path) % amount != 0, we will sample different points on the second pass.
- Parameters:
_action – Not used at the moment since we just follow the draw path.
amount – Amount of elements in move path to move at once.
- Returns:
observation (dict).
- property action_space#
Returns list of all possible actions available in the environment.
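The wrap-around movement described in step above reduces to simple modular index arithmetic. The sketch below is an assumption about how the traversal could be implemented (the function name next_location and the toy path are illustrative, not taken from the source):

```python
# Sketch of the stroke-following movement: keep an index into the character's
# move_path and advance it by `amount` each step, wrapping at the end of the
# path. Because the index wraps modulo len(move_path), a second pass samples
# different points whenever len(move_path) % amount != 0.
def next_location(move_path, current_idx, amount):
    """Advance `amount` points along the stroke path, wrapping at the end."""
    idx = (current_idx + amount) % len(move_path)
    return idx, move_path[idx]


# A toy path of 5 (x, y) stroke points:
path = [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]
idx, loc = next_location(path, current_idx=0, amount=2)  # idx == 2
idx, loc = next_location(path, idx, amount=2)            # idx == 4
idx, loc = next_location(path, idx, amount=2)            # wraps: idx == 1
```

Since 5 % 2 != 0, the wrapped pass visits points (indices 1, 3, ...) that the first pass (0, 2, 4) skipped.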
- class SaccadeOnImageEnvironment(patch_size=64, data_path=None)[source]#
Bases:
EmbodiedEnvironment
Environment for moving over a 2D image with depth channel.
Images should be stored in .png format for rgb and .data format for depth.
- close()[source]#
Close the environmnt releasing all resources.
Any call to any other environment method may raise an exception
- get_3d_coordinates_from_pixel_indices(pixel_idx)[source]#
Retrieve 3D coordinates of a pixel.
- Returns:
The 3D coordinates of the pixel.
- get_3d_scene_point_cloud()[source]#
Turn the 2D depth image into a 3D point cloud using DepthTo3DLocations.
This point cloud is used to estimate the sensor displacement in 3D space between two subsequent steps. Without it, we get displacements in pixel space, which do not work with our 3D models.
- Returns:
current_scene_point_cloud – The 3D scene point cloud. current_sf_scene_point_cloud – The 3D scene point cloud in sensor frame.
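For intuition on what get_3d_coordinates_from_pixel_indices computes, here is a hedged sketch of back-projecting a depth pixel into 3D camera coordinates with a standard pinhole model. The actual implementation delegates to DepthTo3DLocations; the function name pixel_to_3d and the intrinsics (fx, fy, cx, cy) below are illustrative assumptions.

```python
import numpy as np

# Back-project pixel (u, v) with depth z into camera-frame (x, y, z) using a
# pinhole camera model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
def pixel_to_3d(depth, u, v, fx, fy, cx, cy):
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])


depth = np.full((4, 4), 2.0)  # toy depth image: flat plane 2 m away
p = pixel_to_3d(depth, u=2, v=2, fx=1.0, fy=1.0, cx=2.0, cy=2.0)
# the principal-point pixel back-projects straight ahead to (0, 0, 2)
```

Applying this to every pixel yields the scene point cloud that get_3d_scene_point_cloud uses for displacement estimation.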
- get_image_patch(loc)[source]#
Extract 2D image patch from a location in pixel space.
- Returns:
depth_patch – The depth patch. rgb_patch – The rgb patch. depth3d_patch – The depth3d patch. sensor_frame_patch – The sensor frame patch.
- get_move_area()[source]#
Calculate area in which patch can move on the image.
- Returns:
The move area.
- get_next_loc(action_name, amount)[source]#
Calculate next location in pixel space given the current action.
- Returns:
The next location in pixel space.
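A plausible sketch of the location update behind get_next_loc: shift the current (row, col) location by `amount` pixels in the direction named by the action, then clamp it to the movable area so the patch never leaves the image. The (min_row, max_row, min_col, max_col) encoding of move_area and the action names are assumptions for illustration.

```python
# Compute the next patch location in pixel space and clamp it to the move
# area so the extracted patch stays fully inside the image.
def get_next_loc(loc, action_name, amount, move_area):
    row, col = loc
    if action_name == "move_up":
        row -= amount
    elif action_name == "move_down":
        row += amount
    elif action_name == "move_left":
        col -= amount
    elif action_name == "move_right":
        col += amount
    min_r, max_r, min_c, max_c = move_area  # assumed bounds encoding
    return (max(min_r, min(row, max_r)), max(min_c, min(col, max_c)))


# Moving 60 px right from (10, 10) in a 64x64 move area clamps to the edge:
loc = get_next_loc((10, 10), "move_right", 60, move_area=(0, 63, 0, 63))
```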
- load_depth_data(depth_path, height, width)[source]#
Load depth image from .data file.
- Returns:
The depth image.
- load_new_scene_data()[source]#
Load depth and rgb data for next scene environment.
- Returns:
current_depth_image – The depth image. current_rgb_image – The rgb image. start_location – The start location.
- process_depth_data(depth)[source]#
Process depth data by reshaping, clipping and flipping.
- Returns:
The processed depth image.
- reset()[source]#
Reset environment and extract image patch.
TODO: clean up. Do we need this? No reset required in this dataloader, maybe indicate this better here.
- Returns:
The observation from the image patch.
- step(action: Action)[source]#
Retrieve the next observation.
- Parameters:
action – moving up, down, left or right from current location.
amount – Amount of pixels to move at once.
- Returns:
observation (dict).
- property action_space#
Returns list of all possible actions available in the environment.
- class SaccadeOnImageFromStreamEnvironment(patch_size=64, data_path=None)[source]#
Bases:
SaccadeOnImageEnvironment
Environment for moving over a 2D streamed image with depth channel.