tbp.monty.frameworks#

tbp.monty.frameworks.agents#

AgentID(x)#

tbp.monty.frameworks.run#

main(cfg: DictConfig)[source]#
print_config(config)[source]#

Print config with nice formatting if config_args.print_config is True.

tbp.monty.frameworks.run_env#

setup_env(monty_logs_dir_default: str = '~/tbp/results/monty/', monty_models_dir_default: str = '~/tbp/results/monty/', monty_data_dir_default: str = '~/tbp/data')[source]#

Set up environment variables for Monty.

Parameters:
  • monty_logs_dir_default (str) – Default directory for Monty logs.

  • monty_models_dir_default (str) – Default directory for Monty pretrained models.

  • monty_data_dir_default (str) – Default directory for Monty experiments data.

tbp.monty.frameworks.run_parallel#

cat_csv(filenames, outfile)[source]#
cat_files(filenames, outfile)[source]#
collect_detailed_episodes_names(parallel_dirs)[source]#
filter_episode_configs(configs: list[dict], episode_spec: str | None) list[dict][source]#
generate_parallel_eval_configs(experiment: DictConfig, name: str) list[Mapping][source]#

Generate configs for evaluation episodes in parallel.

Create a config for each object and rotation in the experiment. Unlike with parallel training episodes, a separate config is created for each object and rotation combination.

Parameters:
  • experiment – Config for experiment to be broken into parallel configs.

  • name – Name of experiment.

Returns:

List of configs for evaluation episodes.

generate_parallel_train_configs(experiment: DictConfig, name: str) list[Mapping][source]#

Generate configs for training episodes in parallel.

Create a config for each object in the experiment. Unlike with parallel eval episodes, each parallel config specifies a single object but all rotations.

Parameters:
  • experiment – Config for experiment to be broken into parallel configs.

  • name – Name of experiment.

Returns:

List of configs for training episodes.

Note

If the same object were viewed from multiple poses in separate experiments, we would need to replicate what post_episode does in supervised pretraining. To avoid this, training episodes are run in parallel across OBJECTS only, while poses are still processed in sequence. By contrast, eval episodes are parallel across objects AND poses.
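The fan-out behavior described above can be sketched as follows. This is an illustrative example only, not the actual implementation; the function names `fan_out_eval` and `fan_out_train` and the dict-based config shape are assumptions for demonstration.

```python
def fan_out_eval(objects, rotations):
    """Eval fan-out sketch: one config per (object, rotation) pair."""
    return [{"object": o, "rotations": [r]} for o in objects for r in rotations]


def fan_out_train(objects, rotations):
    """Train fan-out sketch: one config per object, all rotations in sequence.

    Keeping all rotations of an object in one config avoids having to
    replicate post_episode behavior across separate experiments.
    """
    return [{"object": o, "rotations": list(rotations)} for o in objects]
```

For two objects and two rotations, this yields four eval configs but only two training configs.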

get_episode_stats(exp, mode)[source]#
get_overall_stats(stats)[source]#
main(cfg: DictConfig)[source]#
move_reproducibility_data(base_dir, parallel_dirs)[source]#
mv_files(filenames: Iterable[Path], outdir: Path)[source]#
parse_episode_spec(episode_spec: str | None, total: int) list[int][source]#

Parse a zero-based episode selection string into episode indices.

Converts a human-friendly selection string into a sorted list of unique, zero-based episode indices in the half-open interval [0, total). The parser supports single indices and Python-slice-like ranges using a colon (:), with the end index exclusive.

Parameters:
  • episode_spec (str | None) – Selection string describing which episodes to run. See supported forms.

  • total (int) – Total number of episodes. Must be non-negative.

Supported forms:
  • “all”, “:”, or empty string: select all valid indices [0, total)

  • Comma-separated integers and ranges, for example “0,3,5:8”

  • Open-ended ranges (end-exclusive):
    • “:N” selects [0, N) (i.e., indices 0 through N-1)

    • “N:” selects [N, total)

Notes

  • Ranges are validated, not clamped. If a range falls outside [0, total), or is otherwise malformed, a ValueError is raised.

  • Duplicates are eliminated; the result is returned in ascending order.

Return type:

list[int]

Returns:

A sorted list of unique zero-based indices within [0, total) that match the selection described by episode_spec.

Raises:

ValueError – If the selection contains any invalid index or range.
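A minimal sketch of the parsing behavior described above, assuming only the documented contract (supported forms, end-exclusive ranges, validation without clamping, deduplication). The actual parse_episode_spec implementation may differ in structure and error messages.

```python
def parse_spec(episode_spec, total):
    """Sketch: parse an episode selection string into sorted, unique indices."""
    # "all", ":", empty string, or None select everything.
    if episode_spec is None or episode_spec.strip() in ("", "all", ":"):
        return list(range(total))
    indices = set()
    for part in episode_spec.split(","):
        part = part.strip()
        if ":" in part:
            # Python-slice-like range; missing endpoints default to 0 / total.
            start_s, end_s = part.split(":")
            start = int(start_s) if start_s else 0
            end = int(end_s) if end_s else total
        else:
            # Single index is treated as the one-element range [i, i + 1).
            start = int(part)
            end = start + 1
        # Validate, don't clamp: out-of-bounds selections are errors.
        if not (0 <= start <= end <= total):
            raise ValueError(f"Selection {part!r} is outside [0, {total})")
        indices.update(range(start, end))
    return sorted(indices)
```

For example, `parse_spec("0,3,5:8", 10)` yields `[0, 3, 5, 6, 7]`, while `parse_spec("5:20", 10)` raises a ValueError.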

post_parallel_eval(experiments: list[Mapping], base_dir: str) None[source]#

Post-execution cleanup after running evaluation in parallel.

Logs are consolidated across parallel runs and saved to disk.

Parameters:
  • experiments – List of experiments run in parallel.

  • base_dir – Directory where parallel logs are stored.

post_parallel_log_cleanup(filenames, outfile, cat_fn)[source]#
post_parallel_profile_cleanup(parallel_dirs, base_dir, mode)[source]#
post_parallel_train(experiments: list[Mapping], base_dir: str) None[source]#

Post-execution cleanup after running training in parallel.

Object models are consolidated across parallel runs and saved to disk.

Parameters:
  • experiments – List of experiments run in parallel.

  • base_dir – Directory where parallel logs are stored.

print_config(config)[source]#

Print config with nice formatting if config_args.print_config is True.

run_episodes_parallel(experiments: list[Mapping], num_parallel: int, experiment_name: str, train: bool = True) None[source]#

Run episodes in parallel.

Parameters:
  • experiments – List of configs to run in parallel.

  • num_parallel – Maximum number of parallel processes to run. If there are fewer configs to run than num_parallel, then the actual number of processes will be equal to the number of configs.

  • experiment_name – Name of the experiment.

  • train – Whether to run training (True) or evaluation (False).
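The process-count behavior of num_parallel can be sketched as below. This is a hedged illustration, not the actual implementation: it uses a thread pool for simplicity, whereas the real runner launches separate processes, and the name `run_parallel` is assumed.

```python
from concurrent.futures import ThreadPoolExecutor


def run_parallel(configs, num_parallel, worker):
    """Sketch: run worker over configs with at most num_parallel workers.

    If there are fewer configs than num_parallel, the worker count is
    reduced to the number of configs.
    """
    n_workers = min(num_parallel, max(len(configs), 1))
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(worker, configs))
```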

sample_params_to_init_args(params)[source]#
single_evaluate(experiment)[source]#
single_train(experiment)[source]#