Orbit: A Unified Simulation Framework for Interactive Robot Learning Environments

Mayank Mittal1,2, Calvin Yu3, Qinxi Yu3, Jingzhou Liu3, Nikita Rudin1,2, David Hoeller1,2, Jia Lin Yuan3, Ritvik Singh3, Yunrong Guo2, Hammad Mazhar2, Ajay Mandlekar2, Buck Babich2, Gavriel State2, Marco Hutter1, Animesh Garg2,3
1ETH Zurich, 2NVIDIA, 3University of Toronto
Update [03.06.2024]: Isaac Lab is now officially released! Please visit the new website for the latest updates.
Update [18.03.2024]: Orbit will continue to evolve as Isaac Lab to become an even lighter application on Isaac Sim for robot learning. Stay tuned for more updates!

Orbit exploits the latest simulation capabilities to facilitate robot learning research.

Abstract

We present ORBIT, a unified and modular framework for robotics and robot learning, powered by NVIDIA Isaac Sim. It offers a modular design to easily and efficiently create robotic environments with photo-realistic scenes, and fast and accurate rigid and soft body simulation.

With ORBIT, we provide a suite of benchmark tasks of varying difficulty, from single-stage cabinet opening and cloth folding to multi-stage tasks such as room reorganization. The tasks include variations in objects' physical properties and placements, material textures, and scene lighting. To support working with diverse observation and action spaces, we include various fixed-arm and mobile manipulators with different controller implementations and physics-based sensors. ORBIT allows training reinforcement learning policies and collecting large demonstration datasets from hand-crafted or expert solutions in a matter of minutes by leveraging GPU-based parallelization. In summary, we offer fourteen robot articulations, three different physics-based sensors, twenty learning environments, wrappers to four different learning frameworks, and interfaces to help connect to a real robot.

With this framework, we aim to support various research areas, including representation learning, reinforcement learning, imitation learning, and motion planning. We hope it helps establish interdisciplinary collaborations between these communities, and that its modularity makes it easily extensible to more tasks and applications in the future.

Video

Supported Workflows

Reinforcement Learning

With Orbit, you can use an RL framework of your choice and focus on algorithmic research. With RSL-RL and RL-Games, you can train a policy at up to 100k FPS, while with stable-baselines3, you can train a policy at up to 10k FPS.
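These throughput numbers come from stepping thousands of environment instances in parallel and counting every instance as one frame. The bookkeeping can be sketched with a toy vectorized environment (all names here are hypothetical, for illustration only; a real Orbit environment steps its instances on the GPU):

```python
import time

# Toy stand-in for a vectorized environment with num_envs parallel instances.
class ToyVecEnv:
    def __init__(self, num_envs):
        self.num_envs = num_envs
        self.state = [0.0] * num_envs

    def step(self, actions):
        # One integration step per environment instance.
        self.state = [s + a for s, a in zip(self.state, actions)]
        return self.state

def measure_fps(env, num_steps):
    """FPS counts every environment instance stepped per wall-clock second."""
    start = time.perf_counter()
    for _ in range(num_steps):
        env.step([0.01] * env.num_envs)
    elapsed = time.perf_counter() - start
    return num_steps * env.num_envs / elapsed

env = ToyVecEnv(num_envs=1024)
print(f"{measure_fps(env, num_steps=100):.0f} FPS")
```

Because FPS is multiplied by the number of parallel instances, GPU-based parallelization dominates the comparison even when a single step is no faster.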

Imitation Learning

Orbit includes out-of-the-box support for various peripheral devices such as keyboard, spacemouse and gamepad. You can use these devices to teleoperate the robot and collect demonstrations for behavior cloning.
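Each peripheral maps its raw inputs to the same command interface, so devices are interchangeable from the environment's point of view. A minimal sketch of the idea (class name and key bindings are hypothetical, not Orbit's actual API):

```python
# Hypothetical keyboard teleoperation device: maps pressed keys to
# end-effector translation deltas, accumulated per control step.
class KeyboardTeleop:
    # key -> (dx, dy, dz) translation command for the end-effector
    BINDINGS = {
        "w": (0.01, 0.0, 0.0),
        "s": (-0.01, 0.0, 0.0),
        "a": (0.0, 0.01, 0.0),
        "d": (0.0, -0.01, 0.0),
        "q": (0.0, 0.0, 0.01),
        "e": (0.0, 0.0, -0.01),
    }

    def advance(self, pressed_keys):
        """Sum the deltas of all currently pressed keys."""
        dx = dy = dz = 0.0
        for key in pressed_keys:
            kx, ky, kz = self.BINDINGS.get(key, (0.0, 0.0, 0.0))
            dx += kx
            dy += ky
            dz += kz
        return (dx, dy, dz)

device = KeyboardTeleop()
print(device.advance({"w", "q"}))  # move forward and up
```

A spacemouse or gamepad would implement the same `advance`-style interface but read continuous axes instead of discrete key presses.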

Motion Generation

Sense-Model-Plan-Act (SMPA) decomposes the complex problem of reasoning and control into sub-components. With Orbit, you can define and evaluate your own hand-crafted state machines and motion generators.
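A hand-crafted state machine encodes the task as a small set of states with sensed transition conditions, each state driving a motion generator. A minimal sketch for a pick task (states and transitions are illustrative, not taken from Orbit's codebase):

```python
# Hypothetical state machine for a pick task: each state would emit a
# motion-generator command in a full implementation; here we only show
# the transition logic driven by sensed conditions.
class PickStateMachine:
    STATES = ("APPROACH", "GRASP", "LIFT", "DONE")

    def __init__(self):
        self.state = "APPROACH"

    def step(self, gripper_at_object, object_grasped, object_lifted):
        if self.state == "APPROACH" and gripper_at_object:
            self.state = "GRASP"
        elif self.state == "GRASP" and object_grasped:
            self.state = "LIFT"
        elif self.state == "LIFT" and object_lifted:
            self.state = "DONE"
        return self.state

sm = PickStateMachine()
print(sm.step(gripper_at_object=True, object_grasped=False, object_lifted=False))
```

Beyond evaluation, such scripted solutions can also serve as the expert policies mentioned above for collecting demonstration datasets at scale.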

Deployment on Physical Robots

It is possible to connect a physical Franka Emika arm to Orbit over ZeroMQ. The joint commands computed by the framework are streamed to the robot, and the robot's state is read back into the framework.
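The message layer for such a bridge can be sketched with fixed-size binary payloads; the sketch below uses only the standard-library `struct` module for serialization and is hypothetical (the actual transport in Orbit uses ZeroMQ, whose API is not shown here):

```python
import struct

# Hypothetical wire format for a 7-DoF Franka arm: commands are 7
# little-endian doubles, replies are 7 positions followed by 7 velocities.
NUM_JOINTS = 7

def pack_command(joint_positions):
    """Serialize one joint-position command."""
    assert len(joint_positions) == NUM_JOINTS
    return struct.pack("<7d", *joint_positions)

def unpack_state(payload):
    """Deserialize the robot's reply into (positions, velocities)."""
    values = struct.unpack("<14d", payload)
    return values[:NUM_JOINTS], values[NUM_JOINTS:]

print(len(pack_command([0.0] * NUM_JOINTS)), "bytes per command")
```

A fixed binary layout keeps per-message overhead low, which matters when commands are streamed at the controller's rate.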

Sim-to-real Transfer

To match complex actuator dynamics (e.g., delays and friction), you can easily incorporate different actuator models into the simulation through Orbit. This functionality, along with various domain randomization tools, facilitates training a policy in simulation and transferring it to the real robot. Here we show a legged locomotion policy trained in simulation and deployed on the robot, ANYmal-D. The robot uses series elastic actuators (SEAs), which exhibit non-linear dissipation and hard-to-model delays. Thus, to bridge the sim-to-real gap, we use an MLP-based actuator model in simulation to compensate for the actuator dynamics.
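The shape of such an MLP actuator model can be sketched as follows: it maps a short history of position-tracking errors plus the joint velocity to an output torque. The architecture, input layout, and activation below are illustrative placeholders; in practice the weights are fit to data logged on the real robot.

```python
# Illustrative two-layer MLP actuator model (weights are placeholders,
# not trained values). Inputs: recent position-tracking errors and the
# current joint velocity; output: joint torque.
def mlp_actuator_torque(pos_errors, joint_vel, w1, b1, w2, b2):
    x = list(pos_errors) + [joint_vel]
    hidden = []
    for w_row, b in zip(w1, b1):
        z = sum(wi * xi for wi, xi in zip(w_row, x)) + b
        hidden.append(z / (1.0 + abs(z)))  # softsign activation
    return sum(wi * hi for wi, hi in zip(w2, hidden)) + b2
```

During training, this model replaces the ideal torque source in simulation, so the policy experiences delays and dissipation close to those of the physical SEA.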

Training in Simulation

Trained Policy in Simulation

Deployment on ANYmal-D

Example Tasks

Orbit is a general-purpose framework for learning policies for a variety of tasks. Here we show some example tasks that can be easily implemented in Orbit.

Fixed-arm Manipulation

In-hand Manipulation

Mobile Manipulation

Throughput Comparison

We compare the throughput of environments in Orbit with those in other popular frameworks. We use the same hardware setup for all comparisons: a computer with a 16-core AMD Ryzen 9 5950X, 64 GB RAM, and an NVIDIA RTX 3090. We measure the throughput of the environments as the number of environment frames per second (FPS) that they can generate.

[Throughput plots for the allegro-hand and anymal-flat environments]

* These numbers were computed using Isaac Sim 2022.1.0.

BibTeX

If you use Orbit in your research, please cite our paper:

@article{mittal2023orbit,
  title={Orbit: A Unified Simulation Framework for Interactive Robot Learning Environments}, 
  author={Mittal, Mayank and Yu, Calvin and Yu, Qinxi and Liu, Jingzhou and Rudin, Nikita and Hoeller, David and Yuan, Jia Lin and Singh, Ritvik and Guo, Yunrong and Mazhar, Hammad and Mandlekar, Ajay and Babich, Buck and State, Gavriel and Hutter, Marco and Garg, Animesh},
  journal={IEEE Robotics and Automation Letters}, 
  year={2023},
  volume={8},
  number={6},
  pages={3740-3747},
  doi={10.1109/LRA.2023.3270034}
}