Registering an Environment#
In the previous tutorial, we learned how to create a custom cartpole environment. We manually created an instance of the environment by importing the environment class and its configuration class.
Environment creation in the previous tutorial
# create environment configuration
env_cfg = CartpoleEnvCfg()
env_cfg.scene.num_envs = args_cli.num_envs
# setup RL environment
env = RLTaskEnv(cfg=env_cfg)
While straightforward, this approach does not scale as the suite of environments grows.
In this tutorial, we will show how to use the gymnasium.register() method to register environments with the gymnasium registry. This allows us to create the environment through the gymnasium.make() function.
Environment creation in this tutorial
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils import parse_env_cfg
def main():
    """Random actions agent with Orbit environment."""
    # create environment configuration
    env_cfg = parse_env_cfg(
        args_cli.task, use_gpu=not args_cli.cpu, num_envs=args_cli.num_envs, use_fabric=not args_cli.disable_fabric
    )
    # create environment
    env = gym.make(args_cli.task, cfg=env_cfg)
The Code#
The tutorial corresponds to the random_agent.py script in the orbit/source/standalone/environments directory.
Code for random_agent.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

"""Script to run an environment with a random-action agent."""

"""Launch Isaac Sim Simulator first."""

import argparse

from omni.isaac.orbit.app import AppLauncher

# add argparse arguments
parser = argparse.ArgumentParser(description="Random agent for Orbit environments.")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument(
    "--disable_fabric", action="store_true", default=False, help="Disable fabric and use USD I/O operations."
)
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()

# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app

"""Rest everything follows."""

import gymnasium as gym
import torch

import omni.isaac.orbit_tasks  # noqa: F401
from omni.isaac.orbit_tasks.utils import parse_env_cfg


def main():
    """Random actions agent with Orbit environment."""
    # create environment configuration
    env_cfg = parse_env_cfg(
        args_cli.task, use_gpu=not args_cli.cpu, num_envs=args_cli.num_envs, use_fabric=not args_cli.disable_fabric
    )
    # create environment
    env = gym.make(args_cli.task, cfg=env_cfg)

    # print info (this is vectorized environment)
    print(f"[INFO]: Gym observation space: {env.observation_space}")
    print(f"[INFO]: Gym action space: {env.action_space}")
    # reset environment
    env.reset()
    # simulate environment
    while simulation_app.is_running():
        # run everything in inference mode
        with torch.inference_mode():
            # sample actions from -1 to 1
            actions = 2 * torch.rand(env.action_space.shape, device=env.unwrapped.device) - 1
            # apply actions
            env.step(actions)

    # close the simulator
    env.close()


if __name__ == "__main__":
    # run the main function
    main()
    # close sim app
    simulation_app.close()
The Code Explained#
The envs.RLTaskEnv class inherits from the gymnasium.Env class to follow a standard interface. However, unlike traditional Gym environments, the envs.RLTaskEnv implements a vectorized environment. This means that multiple environment instances run simultaneously in the same process, and all the data is returned in a batched fashion.
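To make the batched convention concrete, here is a small NumPy sketch (NumPy stands in for PyTorch here, and the dimensions are illustrative rather than Orbit's exact spaces) of what "vectorized" means: with 32 environments, observations arrive as one array with a row per instance, and one action is sampled per instance at every step:

```python
import numpy as np

num_envs = 32  # number of parallel environment instances
obs_dim = 4    # illustrative cartpole observation: cart pos/vel, pole angle/vel
act_dim = 1    # illustrative cartpole action: force on the cart

# a batched observation: one row per environment instance
obs = np.zeros((num_envs, obs_dim))

# sample one action per environment, uniform in [-1, 1),
# mirroring the `2 * torch.rand(...) - 1` line in the script above
actions = 2 * np.random.rand(num_envs, act_dim) - 1
```

A step with such a batch returns observations, rewards, and termination flags that are likewise batched along the first dimension.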
Using the gym registry#
To register an environment, we use the gymnasium.register() method. This method takes in the environment name, the entry point to the environment class, and the entry point to the environment configuration class. For the cartpole environment, the following shows the registration call in the omni.isaac.orbit_tasks.classic.cartpole sub-package:
import gymnasium as gym
from . import agents
from .cartpole_env_cfg import CartpoleEnvCfg
##
# Register Gym environments.
##
gym.register(
    id="Isaac-Cartpole-v0",
    entry_point="omni.isaac.orbit.envs:RLTaskEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": CartpoleEnvCfg,
        "rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
        "rsl_rl_cfg_entry_point": agents.rsl_rl_ppo_cfg.CartpolePPORunnerCfg,
        "skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
        "sb3_cfg_entry_point": f"{agents.__name__}:sb3_ppo_cfg.yaml",
    },
)
The id argument is the name of the environment. As a convention, we name all the environments with the prefix Isaac- to make it easier to search for them in the registry. The name of the environment is typically followed by the name of the task, and then the name of the robot. For instance, for legged locomotion with ANYmal C on flat terrain, the environment is called Isaac-Velocity-Flat-Anymal-C-v0. The version number v&lt;N&gt; is typically used to specify different variations of the same environment. Otherwise, the names of the environments can become too long and difficult to read.
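The naming convention can be sketched as a small helper. Note that make_env_id is a hypothetical function, not part of Orbit; it only illustrates the Isaac-&lt;Task&gt;[-&lt;Robot&gt;]-v&lt;N&gt; pattern:

```python
from typing import Optional


def make_env_id(task: str, robot: Optional[str] = None, version: int = 0) -> str:
    """Compose an environment id following the Isaac-<Task>[-<Robot>]-v<N> convention."""
    parts = ["Isaac", task]
    if robot is not None:
        parts.append(robot)
    return "-".join(parts) + f"-v{version}"
```

For example, make_env_id("Cartpole") yields "Isaac-Cartpole-v0", and make_env_id("Velocity-Flat", "Anymal-C") yields "Isaac-Velocity-Flat-Anymal-C-v0".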
The entry_point argument is the entry point to the environment class. The entry point is a string of the form &lt;module&gt;:&lt;class&gt;. In the case of the cartpole environment, the entry point is omni.isaac.orbit.envs:RLTaskEnv. The entry point is used to import the environment class when creating the environment instance.
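Resolving a &lt;module&gt;:&lt;class&gt; string boils down to an import plus an attribute lookup. A minimal sketch of the core idea (gymnasium's actual loader is more involved, and load_entry_point is a hypothetical helper):

```python
import importlib


def load_entry_point(entry_point: str):
    """Resolve a '<module>:<attribute>' entry-point string to the object it names."""
    module_name, _, attr_name = entry_point.partition(":")
    module = importlib.import_module(module_name)
    return getattr(module, attr_name)


# the same mechanism works for any importable class, e.g. from the stdlib:
ordered_dict_cls = load_entry_point("collections:OrderedDict")
```

Deferring the import to creation time means the registry can hold entries for many environments without paying the cost of importing every environment class up front.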
The env_cfg_entry_point argument specifies the default configuration for the environment. The default configuration is loaded using the omni.isaac.orbit_tasks.utils.parse_env_cfg() function. It is then passed to the gymnasium.make() function to create the environment instance. The configuration entry point can be either a YAML file or a Python configuration class.
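As a rough illustration of "either a YAML file or a Python configuration class", a loader might dispatch on the entry-point type like this. This is a hypothetical sketch, not the actual parse_env_cfg() implementation, and DummyEnvCfg is a stand-in for a real configuration class:

```python
import dataclasses


@dataclasses.dataclass
class DummyEnvCfg:
    """Stand-in for an environment configuration class (illustrative only)."""
    num_envs: int = 1


def load_cfg(entry_point):
    """Dispatch on the type of the configuration entry point (hypothetical helper)."""
    if isinstance(entry_point, str) and entry_point.endswith(".yaml"):
        # a "<module>:<file>.yaml" string would be located relative to the
        # module and parsed with a YAML loader (omitted in this sketch)
        raise NotImplementedError("YAML loading omitted in this sketch")
    if isinstance(entry_point, type):
        # a configuration class is simply instantiated with its defaults
        return entry_point()
    raise ValueError(f"unsupported config entry point: {entry_point!r}")
```

Command-line overrides (such as the number of environments) can then be applied to the returned configuration object before it is handed to gymnasium.make().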
Note

The gymnasium registry is a global registry. Hence, it is important to ensure that the environment names are unique. Otherwise, the registry will throw an error when registering the environment.
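The uniqueness requirement can be illustrated with a toy registry, a deliberately simplified stand-in for gymnasium's actual global registry:

```python
class ToyRegistry:
    """Minimal illustration of a global environment registry (not gymnasium's)."""

    def __init__(self):
        self._specs = {}

    def register(self, env_id: str, entry_point: str):
        # a duplicate id is rejected instead of silently overwriting the entry
        if env_id in self._specs:
            raise ValueError(f"Environment id '{env_id}' is already registered.")
        self._specs[env_id] = entry_point


registry = ToyRegistry()
registry.register("Isaac-Cartpole-v0", "omni.isaac.orbit.envs:RLTaskEnv")
```

A second registry.register("Isaac-Cartpole-v0", ...) call would raise, which is why each sub-package must choose names that do not collide with those registered elsewhere.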
Creating the environment#
To inform the gym registry of all the environments provided by the omni.isaac.orbit_tasks extension, we must import the module at the start of the script. This executes the __init__.py file, which iterates over all the sub-packages and registers their respective environments.
import omni.isaac.orbit_tasks # noqa: F401
In this tutorial, the task name is read from the command line. The task name is used to parse the default configuration as well as to create the environment instance. In addition, other parsed command line arguments such as the number of environments, the simulation device, and whether to render, are used to override the default configuration.
# create environment configuration
env_cfg = parse_env_cfg(
    args_cli.task, use_gpu=not args_cli.cpu, num_envs=args_cli.num_envs, use_fabric=not args_cli.disable_fabric
)
# create environment
env = gym.make(args_cli.task, cfg=env_cfg)
Once the environment is created, the rest of the execution follows the standard reset-and-step loop.
The Code Execution#
Now that we have gone through the code, let’s run the script and see the result:
./orbit.sh -p source/standalone/environments/random_agent.py --task Isaac-Cartpole-v0 --num_envs 32
This should open a stage similar to the one from the previous Creating an RL Environment tutorial. To stop the simulation, you can either close the window or press Ctrl+C in the terminal.
In addition, you can change the simulation device from GPU to CPU by adding the --cpu flag:
./orbit.sh -p source/standalone/environments/random_agent.py --task Isaac-Cartpole-v0 --num_envs 32 --cpu
With the --cpu flag, the simulation will run on the CPU. This is useful for debugging the simulation. However, the simulation will run much slower than on the GPU.