
Training with an RL Agent#

In the previous tutorials, we covered how to define an RL task environment, register it into the gym registry, and interact with it using a random agent. We now move on to the next step: training an RL agent to solve the task.

Although the envs.RLTaskEnv conforms to the gymnasium.Env interface, it is not exactly a gym environment. The inputs and outputs of the environment are not numpy arrays, but torch tensors whose first dimension is the number of environment instances.
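
For illustration, the snippet below steps the cartpole environment from the previous tutorials directly. This is a rough, hypothetical sketch: the headless launcher arguments, the single cart-effort action, and the "policy" observation group are assumptions carried over from the previous tutorials.

import gymnasium as gym
import torch

from omni.isaac.orbit.app import AppLauncher

# launch the simulator before importing any task modules (headless here for brevity)
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app

import omni.isaac.orbit_tasks  # noqa: F401
from omni.isaac.orbit_tasks.utils import parse_env_cfg

# create 16 parallel environment instances
env_cfg = parse_env_cfg("Isaac-Cartpole-v0", num_envs=16)
env = gym.make("Isaac-Cartpole-v0", cfg=env_cfg)

# observations and rewards are torch tensors batched over the environment instances
obs, _ = env.reset()
print(obs["policy"].shape)  # -> torch.Size([16, obs_dim]) -- a batched torch tensor
actions = torch.zeros(env.unwrapped.num_envs, 1, device=env.unwrapped.device)  # single cart-effort action
obs, rew, terminated, truncated, info = env.step(actions)
print(rew.shape)  # -> torch.Size([16]) -- one reward per environment instance

env.close()
simulation_app.close()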

Additionally, most RL libraries expect their own variation of an environment interface. For example, Stable-Baselines3 expects the environment to conform to its VecEnv API, which expects a list of numpy arrays instead of a single tensor. Similarly, RSL-RL and RL-Games expect different interfaces. Since there is no one-size-fits-all solution, we do not base the envs.RLTaskEnv on any particular learning library. Instead, we implement wrappers that convert the environment into the interface each library expects. These are specified in the omni.isaac.orbit_tasks.utils.wrappers module.
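
For instance, each supported library has a dedicated wrapper in that module. The class names below follow the Orbit source, but treat them as indicative rather than exhaustive:

from omni.isaac.orbit_tasks.utils.wrappers.sb3 import Sb3VecEnvWrapper
from omni.isaac.orbit_tasks.utils.wrappers.rsl_rl import RslRlVecEnvWrapper
from omni.isaac.orbit_tasks.utils.wrappers.rl_games import RlGamesVecEnvWrapper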

In this tutorial, we will use Stable-Baselines3 to train an RL agent to solve the cartpole balancing task.

Caution

Wrapping the environment with the respective learning framework’s wrapper should happen last, i.e. after all other wrappers have been applied. This is because the learning framework’s wrapper modifies the interpretation of the environment’s APIs, which may then no longer be compatible with gymnasium.Env.
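
For example, in the training script below the video-recording wrapper is applied while the environment is still a gymnasium.Env, and only then is the result handed to the Stable-Baselines3 wrapper. A minimal sketch of that ordering (env stands for an environment created with gym.make, as in the script below):

import gymnasium as gym

from omni.isaac.orbit_tasks.utils.wrappers.sb3 import Sb3VecEnvWrapper

# gymnasium-style wrappers first: at this point the environment still conforms to gymnasium.Env
env = gym.wrappers.RecordVideo(env, video_folder="videos")
# framework-specific wrapper last: the result is an SB3 VecEnv and no longer a gymnasium.Env
env = Sb3VecEnvWrapper(env)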

The Code#

For this tutorial, we use the training script from the Stable-Baselines3 workflow in the orbit/source/standalone/workflows/sb3 directory.

Code for train.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

"""Script to train RL agent with Stable Baselines3.

Since Stable-Baselines3 does not support buffers living on GPU directly,
we recommend using smaller number of environments. Otherwise,
there will be significant overhead in GPU->CPU transfer.
"""

"""Launch Isaac Sim Simulator first."""

import argparse

from omni.isaac.orbit.app import AppLauncher

# add argparse arguments
parser = argparse.ArgumentParser(description="Train an RL agent with Stable-Baselines3.")
parser.add_argument("--video", action="store_true", default=False, help="Record videos during training.")
parser.add_argument("--video_length", type=int, default=200, help="Length of the recorded video (in steps).")
parser.add_argument("--video_interval", type=int, default=2000, help="Interval between video recordings (in steps).")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument(
    "--disable_fabric", action="store_true", default=False, help="Disable fabric and use USD I/O operations."
)
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()

# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app

"""Rest everything follows."""

import gymnasium as gym
import numpy as np
import os
from datetime import datetime

from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import CheckpointCallback
from stable_baselines3.common.logger import configure
from stable_baselines3.common.vec_env import VecNormalize

from omni.isaac.orbit.utils.dict import print_dict
from omni.isaac.orbit.utils.io import dump_pickle, dump_yaml

import omni.isaac.orbit_tasks  # noqa: F401
from omni.isaac.orbit_tasks.utils import load_cfg_from_registry, parse_env_cfg
from omni.isaac.orbit_tasks.utils.wrappers.sb3 import Sb3VecEnvWrapper, process_sb3_cfg


def main():
    """Train with stable-baselines agent."""
    # parse configuration
    env_cfg = parse_env_cfg(
        args_cli.task, use_gpu=not args_cli.cpu, num_envs=args_cli.num_envs, use_fabric=not args_cli.disable_fabric
    )
    agent_cfg = load_cfg_from_registry(args_cli.task, "sb3_cfg_entry_point")

    # override configuration with command line arguments
    if args_cli.seed is not None:
        agent_cfg["seed"] = args_cli.seed

    # directory for logging into
    log_dir = os.path.join("logs", "sb3", args_cli.task, datetime.now().strftime("%Y-%m-%d_%H-%M-%S"))
    # dump the configuration into log-directory
    dump_yaml(os.path.join(log_dir, "params", "env.yaml"), env_cfg)
    dump_yaml(os.path.join(log_dir, "params", "agent.yaml"), agent_cfg)
    dump_pickle(os.path.join(log_dir, "params", "env.pkl"), env_cfg)
    dump_pickle(os.path.join(log_dir, "params", "agent.pkl"), agent_cfg)

    # post-process agent configuration
    agent_cfg = process_sb3_cfg(agent_cfg)
    # read configurations about the agent-training
    policy_arch = agent_cfg.pop("policy")
    n_timesteps = agent_cfg.pop("n_timesteps")

    # create isaac environment
    env = gym.make(args_cli.task, cfg=env_cfg, render_mode="rgb_array" if args_cli.video else None)
    # wrap for video recording
    if args_cli.video:
        video_kwargs = {
            "video_folder": os.path.join(log_dir, "videos"),
            "step_trigger": lambda step: step % args_cli.video_interval == 0,
            "video_length": args_cli.video_length,
            "disable_logger": True,
        }
        print("[INFO] Recording videos during training.")
        print_dict(video_kwargs, nesting=4)
        env = gym.wrappers.RecordVideo(env, **video_kwargs)
    # wrap around environment for stable baselines
    env = Sb3VecEnvWrapper(env)
    # set the seed
    env.seed(seed=agent_cfg["seed"])

    if "normalize_input" in agent_cfg:
        env = VecNormalize(
            env,
            training=True,
            norm_obs="normalize_input" in agent_cfg and agent_cfg.pop("normalize_input"),
            norm_reward="normalize_value" in agent_cfg and agent_cfg.pop("normalize_value"),
            clip_obs="clip_obs" in agent_cfg and agent_cfg.pop("clip_obs"),
            gamma=agent_cfg["gamma"],
            clip_reward=np.inf,
        )

    # create agent from stable baselines
    agent = PPO(policy_arch, env, verbose=1, **agent_cfg)
    # configure the logger
    new_logger = configure(log_dir, ["stdout", "tensorboard"])
    agent.set_logger(new_logger)

    # callbacks for agent
    checkpoint_callback = CheckpointCallback(save_freq=1000, save_path=log_dir, name_prefix="model", verbose=2)
    # train the agent
    agent.learn(total_timesteps=n_timesteps, callback=checkpoint_callback)
    # save the final model
    agent.save(os.path.join(log_dir, "model"))

    # close the simulator
    env.close()


if __name__ == "__main__":
    # run the main function
    main()
    # close sim app
    simulation_app.close()

The Code Explained#

Most of the code above is boilerplate for creating logging directories, saving the parsed configurations, and setting up different Stable-Baselines3 components. For this tutorial, the important part is creating the environment and wrapping it with the Stable-Baselines3 wrapper.

There are three wrappers used in the code above:

  1. gymnasium.wrappers.RecordVideo: This wrapper records a video of the environment and saves it to the specified directory. This is useful for visualizing the agent’s behavior during training.

  2. wrappers.sb3.Sb3VecEnvWrapper: This wrapper converts the environment into a Stable-Baselines3 compatible environment.

  3. stable_baselines3.common.vec_env.VecNormalize: This wrapper normalizes the environment’s observations and rewards.

Each of these wrappers wraps around the previous one, following the pattern env = wrapper(env, *args, **kwargs) repeatedly. The final environment is then used to train the agent. For more information on how these wrappers work, please refer to the Wrapping environments documentation.
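
After the last wrapper is applied, the object handed to the agent follows the Stable-Baselines3 VecEnv API, so interacting with it uses batched numpy arrays rather than torch tensors. A rough sketch (the random actions and shapes are illustrative, not part of the script):

import numpy as np

# `env` is the fully wrapped environment from the script above
obs = env.reset()  # numpy array of shape (num_envs, obs_dim)
actions = np.random.uniform(-1.0, 1.0, size=(env.num_envs,) + env.action_space.shape)
obs, rewards, dones, infos = env.step(actions)  # numpy arrays; `infos` is a list with one dict per environment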

The Code Execution#

We train a PPO agent from Stable-Baselines3 to solve the cartpole balancing task.

Training the agent#

There are three main ways to train the agent. Each has its own advantages and disadvantages, and it is up to you to decide which one you prefer based on your use case.

Headless execution#

If the --headless flag is set, the simulation is not rendered during training. This is useful when training on a remote server or when you do not want to see the simulation. Typically, it speeds up the training process since only the physics simulation step is performed.

./orbit.sh -p source/standalone/workflows/sb3/train.py --task Isaac-Cartpole-v0 --num_envs 64 --headless

Headless execution with off-screen render#

Since the above command does not render the simulation, it is not possible to visualize the agent’s behavior during training. To visualize the agent’s behavior, we pass the --offscreen_render flag, which enables off-screen rendering. Additionally, we pass the --video flag, which records a video of the agent’s behavior during training.

./orbit.sh -p source/standalone/workflows/sb3/train.py --task Isaac-Cartpole-v0 --num_envs 64 --headless --offscreen_render --video

The videos are saved to the logs/sb3/Isaac-Cartpole-v0/<run-dir>/videos directory. You can open these videos using any video player.

Interactive execution#

While the above two methods are useful for training the agent, they don’t allow you to interact with the simulation to see what is happening. In that case, you can omit the --headless flag and run the training script as follows:

./orbit.sh -p source/standalone/workflows/sb3/train.py --task Isaac-Cartpole-v0 --num_envs 64

This will open the Isaac Sim window, and you can see the agent being trained in the environment. However, this will slow down the training process since the simulation is rendered on the screen. As a workaround, you can switch between different render modes in the "Orbit" window that is docked in the bottom-right corner of the screen. To learn more about these render modes, please check the sim.SimulationContext.RenderMode class.

Viewing the logs#

On a separate terminal, you can monitor the training progress by executing the following command:

# execute from the root directory of the repository
./orbit.sh -p -m tensorboard.main --logdir logs/sb3/Isaac-Cartpole-v0

Playing the trained agent#

Once the training is complete, you can visualize the trained agent by executing the following command:

# execute from the root directory of the repository
./orbit.sh -p source/standalone/workflows/sb3/play.py --task Isaac-Cartpole-v0 --num_envs 32 --use_last_checkpoint

The above command will load the latest checkpoint from the logs/sb3/Isaac-Cartpole-v0 directory. You can also load a specific checkpoint by passing the --checkpoint flag.
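
For instance, to play a particular checkpoint saved by the CheckpointCallback during training (the file name below is only a placeholder; pick an actual checkpoint from your run directory):

# execute from the root directory of the repository
./orbit.sh -p source/standalone/workflows/sb3/play.py --task Isaac-Cartpole-v0 --num_envs 32 --checkpoint logs/sb3/Isaac-Cartpole-v0/<run-dir>/model_1000_steps.zip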