orbit.envs#

Sub-package for environment definitions.

Environments define the interface between the agent and the simulation. In the simplest case, the environment provides the agent with the current observations and executes the actions provided by the agent. However, the environment can also provide additional information such as the current reward, done flag, and information about the current episode.

Based on these, there are two types of environments:

  • BaseEnv: The base environment which only provides the agent with the current observations and executes the actions provided by the agent.

  • RLTaskEnv: The RL task environment which, besides the functionality of the base environment, also provides additional Markov Decision Process (MDP) related information such as the current reward, done flag, and information about the current episode.

Submodules

mdp

Sub-module with implementation of manager terms.

ui

Sub-module providing UI window implementation for environments.

Classes

BaseEnv

The base environment encapsulates the simulation scene and the environment managers.

BaseEnvCfg

Base configuration of the environment.

ViewerCfg

Configuration of the scene viewport camera.

RLTaskEnv

The superclass for reinforcement learning-based environments.

RLTaskEnvCfg

Configuration for a reinforcement learning environment.

Base Environment#

class omni.isaac.orbit.envs.BaseEnv[source]#

The base environment encapsulates the simulation scene and the environment managers.

While a simulation scene or world comprises different components such as the robots, objects, and sensors (cameras, lidars, etc.), the environment is a higher-level abstraction that provides an interface for interacting with the simulation. The environment consists of the following components:

  • Scene: The scene manager that creates and manages the virtual world in which the robot operates. This includes defining the robot, static and dynamic objects, sensors, etc.

  • Observation Manager: The observation manager that generates observations from the current simulation state and the data gathered from the sensors. These observations may include privileged information that is not available to the robot in the real world. Additionally, user-defined terms can be added to process the observations and generate custom observations. For example, using a network to embed high-dimensional observations into a lower-dimensional space.

  • Action Manager: The action manager that processes the raw actions sent to the environment and converts them to low-level commands that are sent to the simulation. It can be configured to accept raw actions at different levels of abstraction. For example, in case of a robotic arm, the raw actions can be joint torques, joint positions, or end-effector poses. Similarly for a mobile base, it can be the joint torques, or the desired velocity of the floating base.

  • Event Manager: The event manager orchestrates operations triggered based on simulation events. This includes resetting the scene to a default state, applying random pushes to the robot at different intervals of time, or randomizing properties such as mass and friction coefficients. This is useful for training and evaluating the robot in a variety of scenarios.

The environment provides a unified interface for interacting with the simulation. However, it does not include task-specific quantities such as the reward function, or the termination conditions. These quantities are often specific to defining Markov Decision Processes (MDPs) while the base environment is agnostic to the MDP definition.

The environment steps forward in time at a fixed time-step, while the physics simulation runs at a smaller (finer) time-step and is advanced multiple times per environment step. This decimation keeps the simulation stable. The two time-steps can be configured independently using the BaseEnvCfg.decimation (number of simulation steps per environment step) and the BaseEnvCfg.sim.dt (physics time-step) parameters. Based on these parameters, the environment time-step is computed as the product of the two. The two time-steps can be obtained by querying the physics_dt and the step_dt properties respectively.
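As a minimal sketch of this relationship (MyEnvCfg is a hypothetical user-defined configuration class that fills in the remaining settings):

from omni.isaac.orbit.envs import BaseEnv

cfg = MyEnvCfg()        # hypothetical BaseEnvCfg subclass
cfg.sim.dt = 0.005      # physics time-step: 200 Hz
cfg.decimation = 4      # 4 physics steps per environment step

env = BaseEnv(cfg)
assert env.physics_dt == 0.005     # equals cfg.sim.dt
assert env.step_dt == 4 * 0.005    # decimation * physics_dt = 0.02 s (50 Hz)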

Methods:

__init__(cfg)

Initialize the environment.

load_managers()

Load the managers for the environment.

reset([seed, options])

Resets all the environments and returns observations.

step(action)

Execute one time-step of the environment's dynamics.

seed([seed])

Set the seed for the environment.

close()

Cleanup for the environment.

Attributes:

num_envs

The number of instances of the environment that are running.

physics_dt

The physics time-step (in s).

step_dt

The environment stepping time-step (in s).

device

The device on which the environment is running.

__init__(cfg: BaseEnvCfg)[source]#

Initialize the environment.

Parameters:

cfg – The configuration object for the environment.

Raises:

RuntimeError – If a simulation context already exists. The environment must always create one since it configures the simulation context and controls the simulation.

property num_envs: int#

The number of instances of the environment that are running.

property physics_dt: float#

The physics time-step (in s).

This is the smallest time-step at which the physics simulation is advanced.

property step_dt: float#

The environment stepping time-step (in s).

This is the time-step at which the environment steps forward.

property device#

The device on which the environment is running.

load_managers()[source]#

Load the managers for the environment.

This function is responsible for creating the various managers (action, observation, events, etc.) for the environment. Since the managers require access to physics handles, they can only be created after the simulator is reset (i.e. played for the first time).

Note

In the case of a standalone application (running the simulator from Python), this function is called automatically when the class is initialized.

However, in extension mode, the user must call this function manually after the simulator is reset. This is because the simulator is only reset when the user calls SimulationContext.reset_async(), and async functions cannot be called in the constructor.
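As a rough sketch of this extension-mode workflow, where env is a BaseEnv instance and sim is assumed to be a handle to the simulation context it created:

async def setup_environment(env, sim):
    await sim.reset_async()   # the simulator is only reset through this async call
    env.load_managers()       # managers need physics handles, so load them after the reset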

reset(seed: int | None = None, options: dict[str, Any] | None = None) tuple[Dict[str, Union[torch.Tensor, Dict[str, torch.Tensor]]], dict][source]#

Resets all the environments and returns observations.

Parameters:
  • seed – The seed to use for randomization. Defaults to None, in which case the seed is not set.

  • options

    Additional information to specify how the environment is reset. Defaults to None.

    Note

    This argument is used for compatibility with Gymnasium environment definition.

Returns:

A tuple containing the observations and extras.

step(action: torch.Tensor) tuple[Dict[str, Union[torch.Tensor, Dict[str, torch.Tensor]]], dict][source]#

Execute one time-step of the environment’s dynamics.

The environment steps forward at a fixed time-step, while the physics simulation runs at a smaller (finer) time-step and is advanced multiple times per environment step. This decimation keeps the simulation stable. The two time-steps can be configured independently using the BaseEnvCfg.decimation (number of simulation steps per environment step) and the BaseEnvCfg.sim.dt (physics time-step) parameters. Based on these parameters, the environment time-step is computed as the product of the two.

Parameters:

action – The actions to apply on the environment. Shape is (num_envs, action_dim).

Returns:

A tuple containing the observations and extras.
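A minimal stepping loop built on these two methods might look as follows, where env is an already-created BaseEnv and action_dim is a placeholder for the flattened action dimension:

import torch

obs, extras = env.reset(seed=42)
for _ in range(1000):
    actions = torch.zeros(env.num_envs, action_dim, device=env.device)  # placeholder actions
    obs, extras = env.step(actions)   # the base environment returns only observations and extras
env.close()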

static seed(seed: int = -1) int[source]#

Set the seed for the environment.

Parameters:

seed – The seed for random generator. Defaults to -1.

Returns:

The seed used for random generator.

close()[source]#

Cleanup for the environment.

class omni.isaac.orbit.envs.BaseEnvCfg[source]#

Base configuration of the environment.

Attributes:

viewer

Viewer configuration.

sim

Physics simulation configuration.

decimation

Number of control action updates @ sim dt per policy dt.

scene

Scene settings.

observations

Observation space settings.

actions

Action space settings.

events

Event settings.

randomization

Randomization settings.

Classes:

ui_window_class_type

alias of BaseEnvWindow

viewer: ViewerCfg#

Viewer configuration. Default is ViewerCfg().

sim: SimulationCfg#

Physics simulation configuration. Default is SimulationCfg().

ui_window_class_type#

alias of BaseEnvWindow

decimation: int#

Number of control action updates @ sim dt per policy dt.

For instance, if the simulation dt is 0.01s and the policy dt is 0.1s, then the decimation is 10. This means that the control action is updated every 10 simulation steps.

scene: InteractiveSceneCfg#

Scene settings.

Please refer to the omni.isaac.orbit.scene.InteractiveSceneCfg class for more details.

observations: object#

Observation space settings.

Please refer to the omni.isaac.orbit.managers.ObservationManager class for more details.

actions: object#

Action space settings.

Please refer to the omni.isaac.orbit.managers.ActionManager class for more details.

events: object#

Event settings. Defaults to the basic configuration that resets the scene to its default state.

Please refer to the omni.isaac.orbit.managers.EventManager class for more details.

randomization: object | None#

Randomization settings. Default is None.

Deprecated since version 0.3.0: This attribute is deprecated and will be removed in v0.4.0. Please use the events attribute to configure the randomization settings.
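Putting these settings together, a user-defined configuration is typically written as a subclass of BaseEnvCfg. The following is an illustrative sketch only: MySceneCfg, ObservationsCfg, ActionsCfg and EventCfg stand in for user-defined configuration classes of the respective managers, and the configclass decorator is assumed to come from omni.isaac.orbit.utils.

from omni.isaac.orbit.envs import BaseEnvCfg, ViewerCfg
from omni.isaac.orbit.sim import SimulationCfg
from omni.isaac.orbit.utils import configclass

@configclass
class MyEnvCfg(BaseEnvCfg):
    # simulation and viewer settings
    viewer: ViewerCfg = ViewerCfg(eye=(7.5, 7.5, 7.5))
    sim: SimulationCfg = SimulationCfg(dt=0.005)
    decimation: int = 4
    # manager settings (hypothetical user-defined configuration classes)
    scene: MySceneCfg = MySceneCfg(num_envs=64, env_spacing=2.5)
    observations: ObservationsCfg = ObservationsCfg()
    actions: ActionsCfg = ActionsCfg()
    events: EventCfg = EventCfg()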

class omni.isaac.orbit.envs.ViewerCfg[source]#

Configuration of the scene viewport camera.

Attributes:

eye

Initial camera position (in m).

lookat

Initial camera target position (in m).

cam_prim_path

The camera prim path to record images from.

resolution

The resolution (width, height) of the camera specified using cam_prim_path.

origin_type

The frame in which the camera position (eye) and target (lookat) are defined.

env_index

The environment index for frame origin.

asset_name

The asset name in the interactive scene for the frame origin.

eye: tuple[float, float, float]#

Initial camera position (in m). Default is (7.5, 7.5, 7.5).

lookat: tuple[float, float, float]#

Initial camera target position (in m). Default is (0.0, 0.0, 0.0).

cam_prim_path: str#

The camera prim path to record images from. Default is “/OmniverseKit_Persp”, which is the default camera in the viewport.

resolution: tuple[int, int]#

The resolution (width, height) of the camera specified using cam_prim_path. Default is (1280, 720).

origin_type: Literal['world', 'env', 'asset_root']#

The frame in which the camera position (eye) and target (lookat) are defined. Default is “world”.

Available options are:

  • "world": The origin of the world.

  • "env": The origin of the environment defined by env_index.

  • "asset_root": The center of the asset defined by asset_name in environment env_index.

env_index: int#

The environment index for frame origin. Default is 0.

This quantity is only effective if origin_type is set to “env” or “asset_root”.

asset_name: str | None#

The asset name in the interactive scene for the frame origin. Default is None.

This quantity is only effective if origin_type is set to “asset_root”.
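For instance, a configuration that makes the viewport camera follow a particular asset could look like the sketch below; the asset name "robot" is a placeholder for an asset registered in the interactive scene:

from omni.isaac.orbit.envs import ViewerCfg

viewer = ViewerCfg(
    eye=(3.0, 3.0, 2.0),        # camera position, here relative to the asset's root frame
    lookat=(0.0, 0.0, 0.5),     # camera target, in the same frame
    origin_type="asset_root",   # follow an asset instead of the world origin
    env_index=0,                # use the first sub-environment
    asset_name="robot",         # placeholder name of an asset in the interactive scene
)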

RL Task Environment#

class omni.isaac.orbit.envs.RLTaskEnv[source]#

Bases: BaseEnv, Env

The superclass for reinforcement learning-based environments.

This class inherits from BaseEnv and implements the core functionality for reinforcement learning-based environments. It is designed to be used with any RL library. The class is designed around vectorized environments, i.e., the environment is expected to run multiple sub-environments in parallel. The number of sub-environments is specified by num_envs.

Each observation from the environment is a batch of observations, one for each sub-environment. The step() method is likewise expected to receive a batch of actions, one for each sub-environment.

While the environment itself is implemented as a vectorized environment, we do not inherit from gym.vector.VectorEnv. This is mainly because the class adds various methods (for wait and asynchronous updates) which are not required. Additionally, each RL library typically has its own definition for a vectorized environment. Thus, to reduce complexity, we directly use gym.Env here and leave it up to library-defined wrappers to take care of wrapping this environment for their agents.

Note

For vectorized environments, it is recommended to only call the reset() method once before the first call to step(), i.e. after the environment is created. After that, the step() function handles the reset of terminated sub-environments. This is because the simulator does not support resetting individual sub-environments in a vectorized environment.
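A rough usage sketch that follows this convention is given below; MyTaskCfg is a hypothetical RLTaskEnvCfg subclass and action_dim is a placeholder for the flattened action dimension:

import torch
from omni.isaac.orbit.envs import RLTaskEnv

env = RLTaskEnv(cfg=MyTaskCfg(), render_mode=None)
obs, extras = env.reset(seed=42)   # called once, right after the environment is created
for _ in range(1000):
    actions = torch.zeros(env.num_envs, action_dim, device=env.device)   # placeholder actions
    obs, rew, terminated, truncated, extras = env.step(actions)          # terminated envs are reset internally
env.close()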

Attributes:

is_vector_env

Whether the environment is a vectorized environment.

metadata

Metadata for the environment.

cfg

Configuration for the environment.

max_episode_length_s

Maximum episode length in seconds.

max_episode_length

Maximum episode length in environment steps.

device

The device on which the environment is running.

np_random

Returns the environment's internal _np_random; if not set, it will be initialised with a random seed.

num_envs

The number of instances of the environment that are running.

physics_dt

The physics time-step (in s).

step_dt

The environment stepping time-step (in s).

unwrapped

Returns the base non-wrapped environment.

Methods:

__init__(cfg[, render_mode])

Initialize the environment.

load_managers()

Load the managers for the environment.

step(action)

Execute one time-step of the environment's dynamics and reset terminated environments.

get_wrapper_attr(name)

Gets the attribute name from the environment.

render([recompute])

Run rendering without stepping through the physics.

reset([seed, options])

Resets all the environments and returns observations.

seed([seed])

Set the seed for the environment.

close()

Cleanup for the environment.

is_vector_env: ClassVar[bool] = True#

Whether the environment is a vectorized environment.

metadata: ClassVar[dict[str, Any]] = {'isaac_sim_version': omni.isaac.version.get_version, 'render_modes': [None, 'human', 'rgb_array']}#

Metadata for the environment.

cfg: RLTaskEnvCfg#

Configuration for the environment.

__init__(cfg: RLTaskEnvCfg, render_mode: str | None = None, **kwargs)[source]#

Initialize the environment.

Parameters:
  • cfg – The configuration for the environment.

  • render_mode – The render mode for the environment. Defaults to None, which is similar to "human".

property max_episode_length_s: float#

Maximum episode length in seconds.

property max_episode_length: int#

Maximum episode length in environment steps.

load_managers()[source]#

Load the managers for the environment.

This function is responsible for creating the various managers (action, observation, events, etc.) for the environment. Since the managers require access to physics handles, they can only be created after the simulator is reset (i.e. played for the first time).

Note

In the case of a standalone application (running the simulator from Python), this function is called automatically when the class is initialized.

However, in extension mode, the user must call this function manually after the simulator is reset. This is because the simulator is only reset when the user calls SimulationContext.reset_async(), and async functions cannot be called in the constructor.

step(action: torch.Tensor) tuple[Dict[str, Union[torch.Tensor, Dict[str, torch.Tensor]]], torch.Tensor, torch.Tensor, torch.Tensor, dict][source]#

Execute one time-step of the environment’s dynamics and reset terminated environments.

Unlike the BaseEnv.step() method, this function performs the following operations:

  1. Process the actions.

  2. Perform physics stepping.

  3. Perform rendering if the GUI is enabled.

  4. Update the environment counters and compute the rewards and terminations.

  5. Reset the environments that terminated.

  6. Compute the observations.

  7. Return the observations, rewards, resets and extras.

Parameters:

action – The actions to apply on the environment. Shape is (num_envs, action_dim).

Returns:

A tuple containing the observations, rewards, resets (terminated and truncated) and extras.

property device#

The device on which the environment is running.

get_wrapper_attr(name: str) Any#

Gets the attribute name from the environment.

property np_random: numpy.random.Generator#

Returns the environment’s internal _np_random; if not set, it will be initialised with a random seed.

Returns:

Instances of np.random.Generator

property num_envs: int#

The number of instances of the environment that are running.

property physics_dt: float#

The physics time-step (in s).

This is the smallest time-step at which the physics simulation is advanced.

render(recompute: bool = False) np.ndarray | None[source]#

Run rendering without stepping through the physics.

By convention, if mode is:

  • human: Render to the current display and return nothing. Usually for human consumption.

  • rgb_array: Return a numpy.ndarray with shape (x, y, 3), representing RGB values for an x-by-y pixel image, suitable for turning into a video.

Parameters:

recompute – Whether to force a render even if the simulator has already rendered the scene. Defaults to False.

Returns:

The rendered image as a numpy array if mode is “rgb_array”. Otherwise, returns None.

Raises:
  • RuntimeError – If mode is set to “rgb_array” and simulation render mode does not support it. In this case, the simulation render mode must be set to RenderMode.PARTIAL_RENDERING or RenderMode.FULL_RENDERING.

  • NotImplementedError – If an unsupported rendering mode is specified.
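As an illustration, frames can be collected for a video when the environment was created with render_mode="rgb_array"; the stepping details are as in the usage sketch earlier on this page:

frames = []
for _ in range(200):
    obs, rew, terminated, truncated, extras = env.step(actions)   # actions as in the earlier sketch
    frame = env.render()   # numpy array of RGB values in “rgb_array” mode, otherwise None
    if frame is not None:
        frames.append(frame)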

reset(seed: int | None = None, options: dict[str, Any] | None = None) tuple[Dict[str, Union[torch.Tensor, Dict[str, torch.Tensor]]], dict]#

Resets all the environments and returns observations.

Parameters:
  • seed – The seed to use for randomization. Defaults to None, in which case the seed is not set.

  • options

    Additional information to specify how the environment is reset. Defaults to None.

    Note

    This argument is used for compatibility with Gymnasium environment definition.

Returns:

A tuple containing the observations and extras.

static seed(seed: int = -1) int#

Set the seed for the environment.

Parameters:

seed – The seed for random generator. Defaults to -1.

Returns:

The seed used for random generator.

property step_dt: float#

The environment stepping time-step (in s).

This is the time-step at which the environment steps forward.

property unwrapped: Env[ObsType, ActType]#

Returns the base non-wrapped environment.

Returns:

The base non-wrapped gymnasium.Env instance

Return type:

Env

close()[source]#

Cleanup for the environment.

class omni.isaac.orbit.envs.RLTaskEnvCfg[source]#

Bases: BaseEnvCfg

Configuration for a reinforcement learning environment.

Classes:

ui_window_class_type

alias of RLTaskEnvWindow

Attributes:

is_finite_horizon

Whether the learning task is treated as a finite or infinite horizon problem for the agent.

episode_length_s

Duration of an episode (in seconds).

rewards

Reward settings.

terminations

Termination settings.

curriculum

Curriculum settings.

commands

Command settings.

ui_window_class_type#

alias of RLTaskEnvWindow

is_finite_horizon: bool#

Whether the learning task is treated as a finite or infinite horizon problem for the agent. Defaults to False, which means the task is treated as an infinite horizon problem.

This flag handles the subtleties of finite and infinite horizon tasks:

  • Finite horizon: no penalty or bootstrapping value is required by the agent for running out of time. However, the environment still needs to terminate the episode after the time limit is reached.

  • Infinite horizon: the agent needs to bootstrap the value of the state at the end of the episode. This is done by sending a time-limit (or truncated) done signal to the agent, which triggers this bootstrapping calculation.

If True, then the environment is treated as a finite horizon problem and no time-out (or truncated) done signal is sent to the agent. If False, then the environment is treated as an infinite horizon problem and a time-out (or truncated) done signal is sent to the agent.

Note

The base RLTaskEnv class does not use this flag directly. It is used by the environment wrappers to determine what type of done signal to send to the corresponding learning agent.

episode_length_s: float#

Duration of an episode (in seconds).

Based on the decimation rate and physics time step, the episode length is calculated as:

episode_length_steps = ceil(episode_length_s / (decimation_rate * physics_time_step))

For example, if the decimation rate is 10, the physics time step is 0.01, and the episode length is 10 seconds, then the episode length in steps is 100.
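The same calculation, written out with the values from the example above:

import math

physics_dt = 0.01        # BaseEnvCfg.sim.dt, in seconds
decimation = 10          # simulation steps per environment step
episode_length_s = 10.0  # RLTaskEnvCfg.episode_length_s

step_dt = decimation * physics_dt                             # 0.1 s per environment step
max_episode_length = math.ceil(episode_length_s / step_dt)    # 100 environment steps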

rewards: object#

Reward settings.

Please refer to the omni.isaac.orbit.managers.RewardManager class for more details.

viewer: ViewerCfg#

Viewer configuration. Default is ViewerCfg().

sim: SimulationCfg#

Physics simulation configuration. Default is SimulationCfg().

decimation: int#

Number of control action updates @ sim dt per policy dt.

For instance, if the simulation dt is 0.01s and the policy dt is 0.1s, then the decimation is 10. This means that the control action is updated every 10 simulation steps.

scene: InteractiveSceneCfg#

Scene settings.

Please refer to the omni.isaac.orbit.scene.InteractiveSceneCfg class for more details.

observations: object#

Observation space settings.

Please refer to the omni.isaac.orbit.managers.ObservationManager class for more details.

actions: object#

Action space settings.

Please refer to the omni.isaac.orbit.managers.ActionManager class for more details.

events: object#

Event settings. Defaults to the basic configuration that resets the scene to its default state.

Please refer to the omni.isaac.orbit.managers.EventManager class for more details.

randomization: object | None#

Randomization settings. Default is None.

Deprecated since version 0.3.0: This attribute is deprecated and will be removed in v0.4.0. Please use the events attribute to configure the randomization settings.

terminations: object#

Termination settings.

Please refer to the omni.isaac.orbit.managers.TerminationManager class for more details.

curriculum: object#

Curriculum settings.

Please refer to the omni.isaac.orbit.managers.CurriculumManager class for more details.

commands: object#

Command settings.

Please refer to the omni.isaac.orbit.managers.CommandManager class for more details.