What Is Isaac Lab?
Isaac Lab is NVIDIA's GPU-accelerated robot simulation and learning framework, built on Isaac Sim 4.0 (Omniverse-based). It supports 4,000+ parallel environments on a single A100, compressing RL policy training that would take weeks on CPU-based simulators into hours. It's released under the permissive BSD-3-Clause license, which places no restrictions on commercial use — a significant advantage over some competing platforms.
Isaac Lab is not primarily a physics simulator — it delegates simulation to Isaac Sim's PhysX backend. It's a training framework: it manages environment vectorization, observation/action space definitions, reward computation, and the training loop interface with standard RL libraries (RSL-RL, RL-Games, Stable Baselines3).
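The division of labor described above — simulation delegated to the backend, batching and bookkeeping handled by the framework — comes down to a vectorized environment interface. The toy NumPy class below sketches that pattern; it is a stand-in for illustration, not Isaac Lab's actual API.

```python
import numpy as np

class ToyVectorEnv:
    """Minimal sketch of a batched (vectorized) environment interface,
    illustrating the pattern Isaac Lab manages on the GPU.
    A toy NumPy stand-in, not Isaac Lab's actual API."""

    def __init__(self, num_envs: int, obs_dim: int = 4, act_dim: int = 2):
        self.num_envs = num_envs
        self.obs_dim = obs_dim
        self.act_dim = act_dim
        self.state = np.zeros((num_envs, obs_dim))

    def reset(self) -> np.ndarray:
        self.state = np.random.uniform(-1, 1, (self.num_envs, self.obs_dim))
        return self.state

    def step(self, actions: np.ndarray):
        # One step advances ALL environments at once -- the key idea behind
        # GPU parallelism: observations, rewards, dones are batched arrays.
        assert actions.shape == (self.num_envs, self.act_dim)
        pad = np.pad(actions, ((0, 0), (0, self.obs_dim - self.act_dim)))
        self.state += 0.01 * pad
        rewards = -np.linalg.norm(self.state, axis=1)  # toy reward: stay near origin
        dones = np.abs(self.state).max(axis=1) > 2.0
        return self.state, rewards, dones

env = ToyVectorEnv(num_envs=8)
obs = env.reset()
obs, rew, done = env.step(np.zeros((8, 2)))
print(obs.shape, rew.shape, done.shape)  # (8, 4) (8,) (8,)
```

The real framework does the same batching on the GPU with PhysX providing the physics step, and exposes the result to RSL-RL, RL-Games, or SB3 through their vectorized-env adapters.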
Installation
Prerequisites: Ubuntu 22.04, CUDA 12.1+, NVIDIA driver ≥530. Minimum GPU: RTX 3090 (24GB VRAM) for small-scale experiments; A100 80GB for production training runs.
- Step 1: Install Isaac Sim 4.0 via pip: pip install isaacsim==4.0.0 --extra-index-url https://pypi.nvidia.com. This pulls approximately 15GB of packages. Expect 20–30 minutes on a fast connection.
- Step 2: Clone Isaac Lab and run the installer: git clone https://github.com/isaac-sim/IsaacLab && cd IsaacLab && ./isaaclab.sh --install. This creates a virtual environment with all dependencies and runs a verification test.
- Step 3: Verify with a headless test: ./isaaclab.sh -p source/standalone/tutorials/00_sim/create_empty.py --headless. Total setup time: approximately 30–45 minutes on a fresh machine with fast internet.
Built-In Environments Overview
| Category | Environment | Robot | Task |
|---|---|---|---|
| Manipulation | Isaac-Reach-Franka-v0 | Franka Research 3 | End-effector reach to target pose |
| Manipulation | Isaac-Lift-Cube-Franka-v0 | Franka Research 3 | Grasp and lift cube |
| Manipulation | Isaac-Open-Drawer-Franka-v0 | Franka Research 3 | Pull drawer open to target position |
| Locomotion | Isaac-Velocity-Rough-Anymal-C-v0 | ANYmal C | Velocity tracking on rough terrain |
| Locomotion | Isaac-Walk-Unitree-G1-v0 | Unitree G1 | Forward walking velocity tracking |
| Navigation | Isaac-Navigation-Flat-v0 | Generic diff-drive | Goal-conditioned navigation |
RL Training Walkthrough
Training a lift-cube policy from scratch with PPO on 2,048 parallel environments:
- Command: ./isaaclab.sh -p source/standalone/workflows/rsl_rl/train.py --task Isaac-Lift-Cube-Franka-v0 --num_envs 2048 --headless
- Expected runtime: Approximately 8 hours on a single A100 80GB to reach 80% success rate. On RTX 4090 (24GB), reduce num_envs to 512 and expect 20–30 hours.
- What to watch: Monitor episode_rew_mean (should trend upward after roughly 500K steps — expect noise, not monotonic improvement), success_rate (target >70% before exporting), and value_loss (should stabilize; if it diverges, reduce the learning rate).
Key Hyperparameters
| Parameter | Default Value | Effect of Increasing |
|---|---|---|
| num_envs | 2048 | Better gradient estimates, higher GPU memory use |
| learning_rate | 1e-3 | Faster early learning, instability risk |
| gamma (discount) | 0.99 | Longer horizon planning, slower propagation |
| clip_param (PPO) | 0.2 | Less conservative updates, instability risk |
| num_mini_batches | 4 | Smaller batches, noisier gradients |
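The table above can be collected into a config dict for experiment tracking. The field names below are hypothetical — check your RSL-RL / Isaac Lab task config for the exact schema — and the VRAM scaling helper is just a heuristic mirroring the RTX 4090 guidance in this guide.

```python
# Illustrative PPO configuration mirroring the hyperparameter table.
# Field names are hypothetical; consult the actual task config schema.
ppo_cfg = {
    "num_envs": 2048,
    "learning_rate": 1.0e-3,
    "gamma": 0.99,
    "clip_param": 0.2,
    "num_mini_batches": 4,
}

def scale_for_gpu(cfg: dict, vram_gb: int) -> dict:
    """Toy heuristic: quarter num_envs below 40 GB VRAM, matching the
    guidance above of dropping 2048 -> 512 on a 24 GB card."""
    out = dict(cfg)
    if vram_gb < 40:
        out["num_envs"] = max(512, cfg["num_envs"] // 4)
    return out

print(scale_for_gpu(ppo_cfg, 24)["num_envs"])  # 512
print(scale_for_gpu(ppo_cfg, 80)["num_envs"])  # 2048
```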
Importing Your Own Robot (URDF/USD)
Isaac Lab supports importing custom robot models in URDF (Unified Robot Description Format) or USD (Universal Scene Description) format. For most teams, the workflow is: export URDF from your CAD tool or ROS package, convert to USD, and register as an Isaac Lab asset.
```bash
# Convert URDF to USD using Isaac Sim's converter
./isaaclab.sh -p source/standalone/tools/convert_urdf.py \
  --input_path /path/to/your_robot.urdf \
  --output_path /path/to/your_robot.usd \
  --fix_base  # include for fixed-base arms; omit for mobile robots

# Verify the import visually
./isaaclab.sh -p source/standalone/tutorials/00_sim/spawn_usd.py \
  --asset_path /path/to/your_robot.usd
```
Common pitfalls in URDF import:
- Mesh scale mismatches: URDF uses meters; some CAD tools export in millimeters.
- Incorrect joint limits: PhysX enforces joint limits strictly -- make sure they match your real robot.
- Missing collision meshes: Isaac Lab requires collision geometry separate from visual geometry for physics simulation.
The SVRC team has validated URDF imports for OpenArm, UR5e, Franka FR3, Kinova Gen3, and Unitree G1 -- contact us if you need a pre-configured USD asset for these platforms.
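Two of these pitfalls — missing collision geometry and missing joint limits — can be caught before conversion with a quick lint pass over the URDF XML. A rough sketch using only the standard library (a pre-import sanity check, not a substitute for Isaac Sim's converter):

```python
import xml.etree.ElementTree as ET

def check_urdf(urdf_xml: str) -> list:
    """Lint a URDF string for two common import pitfalls: links with
    visual but no collision geometry, and revolute/prismatic joints
    missing a <limit> element."""
    warnings = []
    root = ET.fromstring(urdf_xml)
    for link in root.iter("link"):
        if link.find("visual") is not None and link.find("collision") is None:
            warnings.append(f"link '{link.get('name')}' has visual but no collision mesh")
    for joint in root.iter("joint"):
        if joint.get("type") in ("revolute", "prismatic") and joint.find("limit") is None:
            warnings.append(f"joint '{joint.get('name')}' missing <limit>")
    return warnings

demo = """
<robot name="demo">
  <link name="base"><visual/></link>
  <joint name="j1" type="revolute"/>
</robot>
"""
for w in check_urdf(demo):
    print(w)
# link 'base' has visual but no collision mesh
# joint 'j1' missing <limit>
```

Scale mismatches are harder to lint automatically; the visual spawn check above is the reliable way to catch a robot that imports at 1000x its real size.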
Domain Randomization Setup
Domain randomization is the primary technique for bridging the sim-to-real gap. By randomizing visual and physical properties during training, the policy learns to be robust to variations it will encounter on the real robot. Isaac Lab provides a built-in randomization framework with configurable distributions for every parameter.
Key parameters to randomize and their recommended ranges:
| Parameter | Range | Distribution | Impact on Transfer |
|---|---|---|---|
| Object mass | 0.5x-2x nominal | Log-uniform | High |
| Friction coefficient | 0.3-1.2 | Uniform | High (contact tasks) |
| Joint damping | 0.8x-1.5x nominal | Uniform | Medium |
| Actuator strength | 0.7x-1.3x nominal | Uniform | High (locomotion) |
| Observation noise | sigma 0.01-0.05 | Gaussian | Medium |
| Action delay | 0-3 timesteps | Uniform integer | High (real-time control) |
| Gravity direction | +/- 5 degrees from vertical | Uniform | Low-medium |
Start with conservative randomization ranges and gradually widen them. Over-aggressive randomization makes training harder without improving transfer. A good heuristic: if your sim policy achieves less than 60% success with randomization enabled, the ranges are too wide. Reduce them until sim performance reaches 80%+, then gradually widen while monitoring real-world performance.
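As a concrete sketch, the table's distributions can be sampled per environment like this. It is a standalone illustration — Isaac Lab's own randomization/event manager applies these inside the simulator, and the function name here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_randomization(nominal_mass: float, num_envs: int) -> dict:
    """Sample per-env physics parameters from the distributions in the
    table above. Standalone sketch; Isaac Lab applies the equivalent
    inside its randomization framework."""
    return {
        # log-uniform in [0.5x, 2x] nominal: uniform in log-space, then exp
        "mass": nominal_mass * np.exp(rng.uniform(np.log(0.5), np.log(2.0), num_envs)),
        "friction": rng.uniform(0.3, 1.2, num_envs),
        "damping_scale": rng.uniform(0.8, 1.5, num_envs),
        "actuator_scale": rng.uniform(0.7, 1.3, num_envs),
        "action_delay": rng.integers(0, 4, num_envs),  # 0-3 timesteps inclusive
    }

params = sample_randomization(nominal_mass=0.2, num_envs=1024)
print(round(params["mass"].min(), 3), round(params["mass"].max(), 3))
```

Note the log-uniform draw for mass: sampling uniformly in log-space makes "half nominal" and "double nominal" equally likely, which a plain uniform draw over [0.1, 0.4] would not.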
GPU Performance: A100 vs. RTX 4090 vs. RTX 3090
| GPU | VRAM | Max Envs (6-DOF arm) | Steps/sec (PPO) | Time to 80% (Lift-Cube) | Cloud Cost/hr |
|---|---|---|---|---|---|
| A100 80GB | 80 GB | 4,096 | ~180K | 6-8 hr | $3.50-4.50 |
| RTX 4090 | 24 GB | 1,024 | ~120K | 16-22 hr | $1.00-1.50 |
| RTX 3090 | 24 GB | 512 | ~65K | 28-36 hr | $0.60-0.80 |
| H100 80GB | 80 GB | 4,096+ | ~250K | 4-6 hr | $5.00-8.00 |
The cost-efficiency sweet spot depends on your iteration speed requirements. For research with frequent hyperparameter sweeps, A100 or H100 cloud instances save calendar time despite higher hourly cost. For production training runs where you know the configuration works, RTX 4090 offers the best cost per training run. SVRC's RL environment service includes GPU compute -- contact us for bulk training pricing.
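The cost-per-run claim is easy to verify from the table's midpoints. Illustrative arithmetic only, using rough midpoint estimates of the hour and price ranges above:

```python
def cost_per_run(hours: float, usd_per_hour: float) -> float:
    """Total cloud cost of one training run to ~80% success."""
    return hours * usd_per_hour

# Midpoint estimates from the GPU table (illustrative only)
runs = {
    "A100 80GB": cost_per_run(7.0, 4.00),   # ~6-8 hr at $3.50-4.50/hr
    "RTX 4090":  cost_per_run(19.0, 1.25),  # ~16-22 hr at $1.00-1.50/hr
    "H100 80GB": cost_per_run(5.0, 6.50),   # ~4-6 hr at $5.00-8.00/hr
}
for gpu, usd in sorted(runs.items(), key=lambda kv: kv[1]):
    print(f"{gpu}: ${usd:.2f} per run")
```

At these midpoints the RTX 4090 comes out cheapest per run while the H100 is cheapest per calendar hour saved — which is exactly the research-vs-production trade-off described above.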
Isaac Lab vs. Other Simulation Frameworks
| Feature | Isaac Lab | MuJoCo + Gymnasium | RoboCasa |
|---|---|---|---|
| GPU parallelism | Native (4000+ envs) | MJX (limited), CPU otherwise | Limited (MuJoCo backend) |
| Rendering quality | RTX path tracing | Basic OpenGL | MuJoCo renderer |
| Contact physics | PhysX (fast, approximate) | MuJoCo (accurate, slower) | MuJoCo |
| License | BSD-3-Clause | Apache 2.0 | MIT |
| RL integration | RSL-RL, RL-Games, SB3 | SB3, CleanRL, any Gym-compatible | SB3, robosuite API |
| Best for | High-speed RL training, sim-to-real | Contact-rich tasks, research | Home/kitchen environments |
Isaac Lab is the right choice when training speed is the bottleneck -- which is the case for most RL-based manipulation and locomotion tasks. MuJoCo is preferred when contact physics accuracy is critical (deformable objects, tight insertion tasks) and throughput requirements are manageable on CPU. For teams that need both, SVRC recommends prototyping reward functions in MuJoCo (faster iteration on reward design) and scaling to Isaac Lab for production training (faster wall-clock convergence).
Sim-to-Real Export
Isaac Lab supports ONNX export for trained policies: ./isaaclab.sh -p source/standalone/workflows/rsl_rl/export.py --task Isaac-Lift-Cube-Franka-v0 --checkpoint path/to/model.pth. The exported ONNX model can be converted to TensorRT for deployment on Jetson AGX Orin using TensorRT's trtexec tool.
Typical inference latency on Jetson AGX Orin after TensorRT conversion: 5–15ms for policies up to 10M parameters — well within real-time control requirements. For policies with image observations (ResNet encoder + MLP policy), expect 30–80ms depending on image resolution.
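Before committing to a deployment budget, it is worth measuring inference latency the same way on your own hardware: warm up, then average over many calls. The sketch below times a NumPy MLP as a stand-in for the exported policy (on the Jetson you would time the ONNX/TensorRT engine instead; the layer sizes here are hypothetical).

```python
import time
import numpy as np

def mlp_policy(obs: np.ndarray, weights: list) -> np.ndarray:
    """Stand-in for an exported policy: a plain NumPy MLP forward pass."""
    x = obs
    for w, b in weights:
        x = np.tanh(x @ w + b)
    return x

rng = np.random.default_rng(0)
dims = [48, 256, 256, 7]  # hypothetical obs dim, hidden sizes, action dim
weights = [(rng.standard_normal((i, o)) * 0.1, np.zeros(o))
           for i, o in zip(dims, dims[1:])]

obs = rng.standard_normal((1, dims[0]))
mlp_policy(obs, weights)  # warm-up call before timing

t0 = time.perf_counter()
n = 1000
for _ in range(n):
    mlp_policy(obs, weights)
latency_ms = (time.perf_counter() - t0) / n * 1e3
print(f"mean latency: {latency_ms:.3f} ms")  # desktop CPU; Jetson numbers differ
```

The warm-up call matters more for TensorRT than for NumPy: the first engine execution can be an order of magnitude slower than steady state, so never include it in the average.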
Performance Comparison
| Simulator | Max Parallel Envs (A100 80GB) | Physics Accuracy | RL Training Speed |
|---|---|---|---|
| Isaac Lab (PhysX) | 4,000+ | Medium-high | Fastest |
| MuJoCo (CPU) | 50–100 | High (contact) | Slowest |
| PyBullet | 10–20 | Medium | Slow |
| Isaac Gym (legacy) | 8,192 | Medium | Fast (deprecated) |
SVRC provides pre-configured Isaac Lab environments for custom manipulation tasks as part of our simulation services. See the RL environment documentation for available configurations.
Related Reading
- RL vs. Imitation Learning Decision Guide -- When to use Isaac Lab (RL) vs. data collection (IL)
- Zero-Shot vs. Few-Shot Robot Policies -- Foundation model fine-tuning as an alternative to sim RL
- LeRobot Guide -- Training IL policies on real demonstration data
- Robot Arm Buying Guide 2026 -- Hardware platforms with validated Isaac Lab URDF models
- OpenArm Setup Guide -- Deploying sim-trained policies on OpenArm hardware
- SVRC RL Environment Service -- Pre-configured Isaac Lab environments with GPU compute
- SVRC Platform -- Model management and deployment infrastructure