orbit.envs#
Sub-package for environment definitions.
Environments define the interface between the agent and the simulation. In the simplest case, the environment provides the agent with the current observations and executes the actions provided by the agent. However, the environment can also provide additional information such as the current reward, done flag, and information about the current episode.
Based on these, there are two types of environments:
BaseEnv
: The base environment, which only provides the agent with the current observations and executes the actions provided by the agent.

RLTaskEnv
: The RL task environment, which, besides the functionality of the base environment, also provides additional Markov Decision Process (MDP) related information such as the current reward, the done flag, and episode information.
Submodules
- Sub-module with implementation of manager terms.
- Sub-module providing UI window implementation for environments.

Classes

- BaseEnv: The base environment encapsulates the simulation scene and the environment managers.
- BaseEnvCfg: Base configuration of the environment.
- ViewerCfg: Configuration of the scene viewport camera.
- RLTaskEnv: The superclass for reinforcement learning-based environments.
- RLTaskEnvCfg: Configuration for a reinforcement learning environment.
Base Environment#
- class omni.isaac.orbit.envs.BaseEnv[source]#
The base environment encapsulates the simulation scene and the environment managers.
While a simulation scene or world comprises of different components such as the robots, objects, and sensors (cameras, lidars, etc.), the environment is a higher level abstraction that provides an interface for interacting with the simulation. The environment is comprised of the following components:
Scene: The scene manager that creates and manages the virtual world in which the robot operates. This includes defining the robot, static and dynamic objects, sensors, etc.
Observation Manager: The observation manager that generates observations from the current simulation state and the data gathered from the sensors. These observations may include privileged information that is not available to the robot in the real world. Additionally, user-defined terms can be added to process the observations and generate custom observations. For example, using a network to embed high-dimensional observations into a lower-dimensional space.
Action Manager: The action manager that processes the raw actions sent to the environment and converts them to low-level commands that are sent to the simulation. It can be configured to accept raw actions at different levels of abstraction. For example, in case of a robotic arm, the raw actions can be joint torques, joint positions, or end-effector poses. Similarly for a mobile base, it can be the joint torques, or the desired velocity of the floating base.
Event Manager: The event manager orchestrates operations triggered based on simulation events. This includes resetting the scene to a default state, applying random pushes to the robot at different intervals of time, or randomizing properties such as mass and friction coefficients. This is useful for training and evaluating the robot in a variety of scenarios.
The environment provides a unified interface for interacting with the simulation. However, it does not include task-specific quantities such as the reward function, or the termination conditions. These quantities are often specific to defining Markov Decision Processes (MDPs) while the base environment is agnostic to the MDP definition.
The environment steps forward in time at a fixed time-step, while the physics simulation is decimated at a lower time-step. This is to ensure that the simulation is stable. These two time-steps can be configured independently using the BaseEnvCfg.decimation (number of simulation steps per environment step) and the BaseEnvCfg.sim.dt (physics time-step) parameters. Based on these parameters, the environment time-step is computed as the product of the two. The two time-steps can be obtained by querying the physics_dt and the step_dt properties respectively.

Methods:
__init__(cfg): Initialize the environment.
load_managers(): Load the managers for the environment.
reset([seed, options]): Resets all the environments and returns observations.
step(action): Execute one time-step of the environment's dynamics.
seed([seed]): Set the seed for the environment.
close(): Cleanup for the environment.
Attributes:
num_envs: The number of instances of the environment that are running.
physics_dt: The physics time-step (in s).
step_dt: The environment stepping time-step (in s).
device: The device on which the environment is running.
- __init__(cfg: BaseEnvCfg)[source]#
Initialize the environment.
- Parameters:
cfg – The configuration object for the environment.
- Raises:
RuntimeError – If a simulation context already exists. The environment must always create one since it configures the simulation context and controls the simulation.
- property physics_dt: float#
The physics time-step (in s).
This is the lowest time-decimation at which the simulation is happening.
- property step_dt: float#
The environment stepping time-step (in s).
This is the time-step at which the environment steps forward.
- property device#
The device on which the environment is running.
- load_managers()[source]#
Load the managers for the environment.
This function is responsible for creating the various managers (action, observation, events, etc.) for the environment. Since the managers require access to physics handles, they can only be created after the simulator is reset (i.e. played for the first time).
Note
In the case of a standalone application (when running the simulator from Python), this function is called automatically when the class is initialized.

However, in extension mode, the user must call this function manually after the simulator is reset. This is because the simulator is only reset when the user calls SimulationContext.reset_async(), and it isn’t possible to call async functions in the constructor.
- reset(seed: int | None = None, options: dict[str, Any] | None = None) tuple[Dict[str, Union[torch.Tensor, Dict[str, torch.Tensor]]], dict] [source]#
Resets all the environments and returns observations.
- Parameters:
seed – The seed to use for randomization. Defaults to None, in which case the seed is not set.
options –
Additional information to specify how the environment is reset. Defaults to None.
Note
This argument is used for compatibility with Gymnasium environment definition.
- Returns:
A tuple containing the observations and extras.
- step(action: torch.Tensor) tuple[Dict[str, Union[torch.Tensor, Dict[str, torch.Tensor]]], dict] [source]#
Execute one time-step of the environment’s dynamics.
The environment steps forward at a fixed time-step, while the physics simulation is decimated at a lower time-step. This is to ensure that the simulation is stable. These two time-steps can be configured independently using the BaseEnvCfg.decimation (number of simulation steps per environment step) and the BaseEnvCfg.physics_dt (physics time-step) parameters. Based on these parameters, the environment time-step is computed as the product of the two.

- Parameters:
action – The actions to apply on the environment. Shape is (num_envs, action_dim).
- Returns:
A tuple containing the observations and extras.
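The reset/step contract above can be illustrated with a minimal stand-in class (hypothetical, plain Python; the real BaseEnv requires a running simulator and returns torch tensors):

```python
class MiniEnv:
    """Hypothetical stand-in mirroring BaseEnv's reset/step signatures."""

    def __init__(self, num_envs: int):
        self.num_envs = num_envs

    def reset(self, seed=None, options=None):
        # Returns (observations, extras); observations are batched per group.
        obs = {"policy": [[0.0, 0.0, 0.0] for _ in range(self.num_envs)]}
        return obs, {}

    def step(self, action):
        # action shape: (num_envs, action_dim); returns (observations, extras)
        assert len(action) == self.num_envs
        obs = {"policy": [[0.0, 0.0, 0.0] for _ in range(self.num_envs)]}
        return obs, {}


env = MiniEnv(num_envs=4)
obs, extras = env.reset(seed=42)
obs, extras = env.step([[0.0]] * env.num_envs)
print(len(obs["policy"]))  # 4: one observation row per sub-environment
```

The key point is that both methods operate on batches: every observation group and every action carries a leading num_envs dimension.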
- class omni.isaac.orbit.envs.BaseEnvCfg[source]#
Base configuration of the environment.
Attributes:
viewer: Viewer configuration.
sim: Physics simulation configuration.
decimation: Number of control action updates @ sim dt per policy dt.
scene: Scene settings.
observations: Observation space settings.
actions: Action space settings.
events: Event settings.
randomization: Randomization settings.
Classes:
ui_window_class_type: alias of BaseEnvWindow
- sim: SimulationCfg#
Physics simulation configuration. Default is SimulationCfg().
- ui_window_class_type#
alias of BaseEnvWindow
- decimation: int#
Number of control action updates @ sim dt per policy dt.
For instance, if the simulation dt is 0.01s and the policy dt is 0.1s, then the decimation is 10. This means that the control action is updated every 10 simulation steps.
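The relationship between the two time-steps can be checked with a few lines of plain Python (the values are illustrative, not library defaults):

```python
# Illustrative values, not library defaults.
sim_dt = 0.01      # physics time-step in seconds, i.e. BaseEnvCfg.sim.dt
decimation = 10    # simulation steps per environment (policy) step

# The environment time-step is the product of the two.
step_dt = decimation * sim_dt
print(step_dt)  # 0.1: the control action is updated every 10 sim steps
```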
- scene: InteractiveSceneCfg#
Scene settings.
Please refer to the omni.isaac.orbit.scene.InteractiveSceneCfg class for more details.
- observations: object#
Observation space settings.
Please refer to the omni.isaac.orbit.managers.ObservationManager class for more details.
- actions: object#
Action space settings.
Please refer to the omni.isaac.orbit.managers.ActionManager class for more details.
- events: object#
Event settings. Defaults to the basic configuration that resets the scene to its default state.
Please refer to the omni.isaac.orbit.managers.EventManager class for more details.
- class omni.isaac.orbit.envs.ViewerCfg[source]#
Configuration of the scene viewport camera.
Attributes:
eye: Initial camera position (in m).
lookat: Initial camera target position (in m).
cam_prim_path: The camera prim path to record images from.
resolution: The resolution (width, height) of the camera specified using cam_prim_path.
origin_type: The frame in which the camera position (eye) and target (lookat) are defined.
env_index: The environment index for frame origin.
asset_name: The asset name in the interactive scene for the frame origin.
- lookat: tuple[float, float, float]#
Initial camera target position (in m). Default is (0.0, 0.0, 0.0).
- cam_prim_path: str#
The camera prim path to record images from. Default is “/OmniverseKit_Persp”, which is the default camera in the viewport.
- resolution: tuple[int, int]#
The resolution (width, height) of the camera specified using cam_prim_path. Default is (1280, 720).
- origin_type: Literal['world', 'env', 'asset_root']#
The frame in which the camera position (eye) and target (lookat) are defined. Default is “world”.
Available options are:
"world"
: The origin of the world."env"
: The origin of the environment defined byenv_index
."asset_root"
: The center of the asset defined byasset_name
in environmentenv_index
.
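A sketch of a viewer configuration using the attributes documented above (the values are hypothetical, chosen only for illustration):

```python
from omni.isaac.orbit.envs import ViewerCfg

# Hypothetical values for illustration.
viewer = ViewerCfg(
    eye=(7.5, 7.5, 7.5),      # initial camera position (m)
    lookat=(0.0, 0.0, 0.0),   # initial camera target (m)
    resolution=(1280, 720),   # width, height of recorded images
    origin_type="world",      # interpret eye/lookat in the world frame
)
```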
RL Task Environment#
- class omni.isaac.orbit.envs.RLTaskEnv[source]#
The superclass for reinforcement learning-based environments.

This class inherits from BaseEnv and implements the core functionality for reinforcement learning-based environments. It is designed to be used with any RL library. The class is designed to be used with vectorized environments, i.e., the environment is expected to be run in parallel with multiple sub-environments. The number of sub-environments is specified using num_envs.

Each observation from the environment is a batch of observations for each sub-environment. The method step() is also expected to receive a batch of actions for each sub-environment.

While the environment itself is implemented as a vectorized environment, we do not inherit from gym.vector.VectorEnv. This is mainly because the class adds various methods (for wait and asynchronous updates) which are not required. Additionally, each RL library typically has its own definition for a vectorized environment. Thus, to reduce complexity, we directly use gym.Env here and leave it up to library-defined wrappers to take care of wrapping this environment for their agents.

Note

For vectorized environments, it is recommended to only call the reset() method once before the first call to step(), i.e. after the environment is created. After that, the step() function handles the reset of terminated sub-environments. This is because the simulator does not support resetting individual sub-environments in a vectorized environment.

Attributes:
is_vector_env: Whether the environment is a vectorized environment.
metadata: Metadata for the environment.
cfg: Configuration for the environment.
max_episode_length_s: Maximum episode length in seconds.
max_episode_length: Maximum episode length in environment steps.
device: The device on which the environment is running.
np_random: Returns the environment's internal _np_random that, if not set, will initialise with a random seed.
num_envs: The number of instances of the environment that are running.
physics_dt: The physics time-step (in s).
step_dt: The environment stepping time-step (in s).
unwrapped: Returns the base non-wrapped environment.
Methods:
__init__(cfg[, render_mode]): Initialize the environment.
load_managers(): Load the managers for the environment.
step(action): Execute one time-step of the environment's dynamics and reset terminated environments.
get_wrapper_attr(name): Gets the attribute name from the environment.
render([recompute]): Run rendering without stepping through the physics.
reset([seed, options]): Resets all the environments and returns observations.
seed([seed]): Set the seed for the environment.
close(): Cleanup for the environment.
- metadata: ClassVar[dict[str, Any]] = {'isaac_sim_version': omni.isaac.version.get_version, 'render_modes': [None, 'human', 'rgb_array']}#
Metadata for the environment.
- cfg: RLTaskEnvCfg#
Configuration for the environment.
- __init__(cfg: RLTaskEnvCfg, render_mode: str | None = None, **kwargs)[source]#
Initialize the environment.
- Parameters:
cfg – The configuration for the environment.
render_mode – The render mode for the environment. Defaults to None, which is similar to "human".
- load_managers()[source]#
Load the managers for the environment.
This function is responsible for creating the various managers (action, observation, events, etc.) for the environment. Since the managers require access to physics handles, they can only be created after the simulator is reset (i.e. played for the first time).
Note
In the case of a standalone application (when running the simulator from Python), this function is called automatically when the class is initialized.

However, in extension mode, the user must call this function manually after the simulator is reset. This is because the simulator is only reset when the user calls SimulationContext.reset_async(), and it isn’t possible to call async functions in the constructor.
- step(action: torch.Tensor) tuple[Dict[str, Union[torch.Tensor, Dict[str, torch.Tensor]]], torch.Tensor, torch.Tensor, torch.Tensor, dict] [source]#
Execute one time-step of the environment’s dynamics and reset terminated environments.
Unlike the BaseEnv.step method, this function performs the following operations:

1. Process the actions.
2. Perform physics stepping.
3. Perform rendering if the GUI is enabled.
4. Update the environment counters and compute the rewards and terminations.
5. Reset the environments that terminated.
6. Compute the observations.
7. Return the observations, rewards, resets and extras.
- Parameters:
action – The actions to apply on the environment. Shape is (num_envs, action_dim).
- Returns:
A tuple containing the observations, rewards, resets (terminated and truncated) and extras.
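The batched five-tuple signature and the reset-inside-step behaviour can be sketched with a small stand-in (hypothetical, plain Python; a real RLTaskEnv needs the simulator and returns torch tensors):

```python
class MiniRLEnv:
    """Hypothetical stand-in mirroring RLTaskEnv's batched step signature."""

    def __init__(self, num_envs: int, max_steps: int = 3):
        self.num_envs = num_envs
        self.max_steps = max_steps
        self.t = 0

    def reset(self, seed=None, options=None):
        self.t = 0
        return {"policy": [[0.0]] * self.num_envs}, {}

    def step(self, action):
        self.t += 1
        obs = {"policy": [[0.0]] * self.num_envs}
        rewards = [1.0] * self.num_envs
        terminated = [False] * self.num_envs
        # time-out (truncated) signal once the episode length is reached
        truncated = [self.t >= self.max_steps] * self.num_envs
        # a real env would reset the terminated sub-environments here
        return obs, rewards, terminated, truncated, {}


env = MiniRLEnv(num_envs=2)
obs, _ = env.reset(seed=0)  # call reset once, before the first step
returns = [0.0] * env.num_envs
done = [False] * env.num_envs
while not all(done):
    actions = [[0.0]] * env.num_envs  # batched: (num_envs, action_dim)
    obs, rew, term, trunc, extras = env.step(actions)
    returns = [r + x for r, x in zip(returns, rew)]
    done = [a or b for a, b in zip(term, trunc)]
print(returns)  # [3.0, 3.0] after 3 steps of reward 1.0
```

Note that the loop never calls reset() again: per the note above, terminated sub-environments are reset inside step().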
- property device#
The device on which the environment is running.
- property np_random: numpy.random.Generator#
Returns the environment’s internal _np_random that, if not set, will initialise with a random seed.

- Returns:
Instances of np.random.Generator
- property physics_dt: float#
The physics time-step (in s).
This is the lowest time-decimation at which the simulation is happening.
- render(recompute: bool = False) np.ndarray | None [source]#
Run rendering without stepping through the physics.
By convention, if mode is:
human: Render to the current display and return nothing. Usually for human consumption.
rgb_array: Return a numpy.ndarray with shape (x, y, 3), representing RGB values for an x-by-y pixel image, suitable for turning into a video.
- Parameters:
recompute – Whether to force a render even if the simulator has already rendered the scene. Defaults to False.
- Returns:
The rendered image as a numpy array if mode is “rgb_array”. Otherwise, returns None.
- Raises:
RuntimeError – If mode is set to "rgb_array" and the simulation render mode does not support it. In this case, the simulation render mode must be set to RenderMode.PARTIAL_RENDERING or RenderMode.FULL_RENDERING.

NotImplementedError – If an unsupported rendering mode is specified.
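Recording frames with render() might look like the following sketch (hypothetical; cfg and policy are user-provided placeholders, and the environment must be created with render_mode="rgb_array" per the metadata above):

```python
from omni.isaac.orbit.envs import RLTaskEnv

# cfg and policy are user-provided placeholders.
env = RLTaskEnv(cfg=cfg, render_mode="rgb_array")
obs, _ = env.reset()
frames = []
for _ in range(100):
    obs, rew, terminated, truncated, extras = env.step(policy(obs["policy"]))
    frames.append(env.render())  # numpy array of RGB values, or None
```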
- reset(seed: int | None = None, options: dict[str, Any] | None = None) tuple[Dict[str, Union[torch.Tensor, Dict[str, torch.Tensor]]], dict] #
Resets all the environments and returns observations.
- Parameters:
seed – The seed to use for randomization. Defaults to None, in which case the seed is not set.
options –
Additional information to specify how the environment is reset. Defaults to None.
Note
This argument is used for compatibility with Gymnasium environment definition.
- Returns:
A tuple containing the observations and extras.
- static seed(seed: int = -1) int #
Set the seed for the environment.
- Parameters:
seed – The seed for random generator. Defaults to -1.
- Returns:
The seed used for random generator.
- property step_dt: float#
The environment stepping time-step (in s).
This is the time-step at which the environment steps forward.
- property unwrapped: Env[ObsType, ActType]#
Returns the base non-wrapped environment.
- Returns:
The base non-wrapped gymnasium.Env instance.

- Return type:
Env
- class omni.isaac.orbit.envs.RLTaskEnvCfg[source]#
Bases: BaseEnvCfg
Configuration for a reinforcement learning environment.
Classes:
ui_window_class_type: alias of RLTaskEnvWindow
Attributes:
is_finite_horizon: Whether the learning task is treated as a finite or infinite horizon problem for the agent.
episode_length_s: Duration of an episode (in seconds).
rewards: Reward settings.
terminations: Termination settings.
curriculum: Curriculum settings.
commands: Command settings.
- ui_window_class_type#
alias of RLTaskEnvWindow
- is_finite_horizon: bool#
Whether the learning task is treated as a finite or infinite horizon problem for the agent. Defaults to False, which means the task is treated as an infinite horizon problem.
This flag handles the subtleties of finite and infinite horizon tasks:
Finite horizon: no penalty or bootstrapping value is required by the agent for running out of time. However, the environment still needs to terminate the episode after the time limit is reached.
Infinite horizon: the agent needs to bootstrap the value of the state at the end of the episode. This is done by sending a time-limit (or truncated) done signal to the agent, which triggers this bootstrapping calculation.
If True, then the environment is treated as a finite horizon problem and no time-out (or truncated) done signal is sent to the agent. If False, then the environment is treated as an infinite horizon problem and a time-out (or truncated) done signal is sent to the agent.
Note
The base RLTaskEnv class does not use this flag directly. It is used by the environment wrappers to determine what type of done signal to send to the corresponding learning agent.
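A wrapper's use of this flag might look like the following sketch (hypothetical logic written for illustration, not taken from orbit's wrappers):

```python
def done_signals(terminated: bool, truncated: bool, is_finite_horizon: bool):
    """Map the environment's done flags to what the learning agent sees."""
    if is_finite_horizon:
        # Finite horizon: a time-out counts as a plain termination; no
        # separate truncated signal is sent, so the agent does not bootstrap.
        return terminated or truncated, False
    # Infinite horizon: pass the time-out through as a truncated signal so
    # the agent bootstraps the value of the final state.
    return terminated, truncated


print(done_signals(False, True, is_finite_horizon=True))   # (True, False)
print(done_signals(False, True, is_finite_horizon=False))  # (False, True)
```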
- episode_length_s: float#
Duration of an episode (in seconds).
Based on the decimation rate and physics time step, the episode length is calculated as:
episode_length_steps = ceil(episode_length_s / (decimation_rate * physics_time_step))
For example, if the decimation rate is 10, the physics time step is 0.01, and the episode length is 10 seconds, then the episode length in steps is 100.
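The worked example above can be reproduced directly:

```python
import math

decimation = 10          # simulation steps per control step
physics_dt = 0.01        # physics time-step (s)
episode_length_s = 10.0  # desired episode duration (s)

episode_length_steps = math.ceil(episode_length_s / (decimation * physics_dt))
print(episode_length_steps)  # 100 environment steps
```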
- rewards: object#
Reward settings.
Please refer to the omni.isaac.orbit.managers.RewardManager class for more details.
- sim: SimulationCfg#
Physics simulation configuration. Default is SimulationCfg().
- decimation: int#
Number of control action updates @ sim dt per policy dt.
For instance, if the simulation dt is 0.01s and the policy dt is 0.1s, then the decimation is 10. This means that the control action is updated every 10 simulation steps.
- scene: InteractiveSceneCfg#
Scene settings.
Please refer to the omni.isaac.orbit.scene.InteractiveSceneCfg class for more details.
- observations: object#
Observation space settings.
Please refer to the omni.isaac.orbit.managers.ObservationManager class for more details.
- actions: object#
Action space settings.
Please refer to the omni.isaac.orbit.managers.ActionManager class for more details.
- events: object#
Event settings. Defaults to the basic configuration that resets the scene to its default state.
Please refer to the omni.isaac.orbit.managers.EventManager class for more details.
- randomization: object | None#
Randomization settings. Default is None.
Deprecated since version 0.3.0: This attribute is deprecated and will be removed in v0.4.0. Please use the events attribute to configure the randomization settings.
- terminations: object#
Termination settings.
Please refer to the omni.isaac.orbit.managers.TerminationManager class for more details.
- curriculum: object#
Curriculum settings.
Please refer to the omni.isaac.orbit.managers.CurriculumManager class for more details.
- commands: object#
Command settings.
Please refer to the omni.isaac.orbit.managers.CommandManager class for more details.