SO-ARM101 Reaching Policy

This model is a reinforcement learning policy trained for the SO-ARM101 robot arm to perform end-effector reaching tasks in Isaac Lab.

Model Description

  • Task: Move the end-effector to randomly sampled target poses in 3D space
  • Robot: SO-ARM101 (6-DOF robotic arm)
  • Framework: Isaac Lab 2.3.0 (on Isaac Sim 5.1.0)
  • Algorithm: PPO, trained with the RSL-RL library (Robotic Systems Lab Reinforcement Learning)
  • Environment: Isaac-SO-ARM101-Reach-v0
  • Training: 999 iterations with 4096 parallel environments

Model Overview

This policy controls the SO-ARM101 robot arm's joint positions to reach commanded end-effector poses. In effect, it learns an inverse-kinematics-like behavior through reinforcement learning, allowing the robot to position its end-effector accurately at desired 3D locations.

Training Details

Environment Configuration

  • Observation Space: Joint positions, velocities, and target pose relative to end-effector
  • Action Space: Joint position commands (6 DOF)
  • Reward Function: Negative distance between the end-effector and the target pose (a minimal sketch follows this list)
  • Episode Length: Variable (resets on success or timeout)
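
As a minimal illustration of the reward term described above, the sketch below implements a plain-PyTorch negative-distance reward. It is not this project's actual Isaac Lab reward implementation, which may combine additional weighted terms (e.g. orientation error or action penalties) that are not documented here.

# Hedged sketch of a distance-based reaching reward (plain PyTorch, not the
# project's actual Isaac Lab reward term). Shapes are (num_envs, 3).
import torch

def position_error_reward(ee_pos: torch.Tensor, target_pos: torch.Tensor) -> torch.Tensor:
    """Negative Euclidean distance between end-effector and target positions."""
    return -torch.norm(target_pos - ee_pos, dim=-1)

# Example with 4096 parallel environments, as used in training.
reward = position_error_reward(torch.rand(4096, 3), torch.rand(4096, 3))  # shape (4096,)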

Training Parameters

  • Parallel Environments: 4096
  • Total Iterations: 999
  • Training Time: ~1.5 hours on NVIDIA RTX 4080 Super (16GB VRAM)
  • Framework: Isaac Lab with the RSL-RL runner (a hedged config sketch follows this list)
  • Simulator: Isaac Sim 5.1.0
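
For orientation, an RSL-RL runner configuration in Isaac Lab has roughly the shape sketched below. The class name is hypothetical and the numeric values are placeholders, not the agent settings actually used to train the published checkpoint; field names follow Isaac Lab's RSL-RL wrapper configs and may vary slightly between releases.

# Hedged sketch of an Isaac Lab RSL-RL runner configuration.
# All values are placeholders unless noted otherwise.
from isaaclab.utils import configclass
from isaaclab_rl.rsl_rl import (
    RslRlOnPolicyRunnerCfg,
    RslRlPpoActorCriticCfg,
    RslRlPpoAlgorithmCfg,
)


@configclass
class SoArm101ReachRunnerCfg(RslRlOnPolicyRunnerCfg):
    num_steps_per_env = 24                   # placeholder
    max_iterations = 999                     # as reported for the published run
    save_interval = 50                       # placeholder
    experiment_name = "so_arm101_reach"      # placeholder
    policy = RslRlPpoActorCriticCfg(
        init_noise_std=1.0,
        actor_hidden_dims=[256, 128, 64],    # placeholder
        critic_hidden_dims=[256, 128, 64],   # placeholder
        activation="elu",
    )
    algorithm = RslRlPpoAlgorithmCfg(
        value_loss_coef=1.0,
        use_clipped_value_loss=True,
        clip_param=0.2,
        entropy_coef=0.005,
        num_learning_epochs=5,
        num_mini_batches=4,
        learning_rate=1.0e-3,
        schedule="adaptive",
        gamma=0.99,
        lam=0.95,
        desired_kl=0.01,
        max_grad_norm=1.0,
    )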

Hardware Used

  • GPU: NVIDIA RTX 4080 Super (16GB VRAM)
  • OS: Ubuntu 24.04 LTS
  • CUDA: 13.0

Usage

Prerequisites

# Install Isaac Lab (the Docker setup is assumed below)
# See: https://isaac-sim.github.io/IsaacLab/

# Clone the SO-ARM101 external project; the evaluation and training commands
# below expect it to be visible inside the container at /workspace/isaac_so_arm101
git clone https://github.com/MuammerBay/isaac_so_arm101.git
cd isaac_so_arm101
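
Depending on how the project is packaged, it may also need to be installed into Isaac Lab's Python environment (for example with an editable pip install) before the Isaac-SO-ARM101 task IDs resolve; check the project's README for the exact step.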

Evaluation

# Inside Isaac Lab container
cd /workspace/isaaclab

# Run the trained policy
./isaaclab.sh -p /workspace/isaac_so_arm101/src/isaac_so_arm101/scripts/rsl_rl/play.py \
    --task Isaac-SO-ARM101-Reach-Play-v0 \
    --checkpoint /path/to/model_999.pt
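
The checkpoint itself can also be inspected outside Isaac Lab with plain PyTorch. This is a minimal sketch assuming model_999.pt follows RSL-RL's standard OnPolicyRunner save format; the key names are an assumption, not something verified against this file.

# Hedged sketch: inspect the RSL-RL checkpoint with plain PyTorch.
import torch

# weights_only=False because RSL-RL checkpoints also store run metadata (e.g. iteration count)
ckpt = torch.load("model_999.pt", map_location="cpu", weights_only=False)
print(list(ckpt.keys()))  # expected: model_state_dict, optimizer_state_dict, iter, infos

# Layer shapes reveal the I/O sizes: the first actor layer's input dimension is the
# observation size and the final layer's output dimension is the number of joint commands.
for name, tensor in ckpt["model_state_dict"].items():
    print(f"{name}: {tuple(tensor.shape)}")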

Training From Scratch

# Train the policy
./isaaclab.sh -p /workspace/isaac_so_arm101/src/isaac_so_arm101/scripts/rsl_rl/train.py \
    --task Isaac-SO-ARM101-Reach-v0 \
    --num_envs 4096 \
    --headless
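
By default the RSL-RL workflow writes checkpoints and TensorBoard logs under a logs/rsl_rl/<experiment_name>/ directory relative to where the script is launched, with the final checkpoint of this run saved as model_999.pt; if the script mirrors Isaac Lab's standard train.py, the iteration count can be overridden with --max_iterations.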

Performance

The trained policy demonstrates accurate reaching behavior with the SO-ARM101 arm, moving the end-effector to target poses sampled across the reachable workspace; run the play command above to evaluate the behavior qualitatively.

Use Cases

This reaching policy serves as a foundation for:

  • Inverse Kinematics: Learned IK controller for end-effector positioning (see the sketch after this list)
  • Manipulation Tasks: Base controller for pick-and-place, assembly, etc.
  • Trajectory Following: Can be extended for path planning applications
  • Sim-to-Real Transfer: Ready for deployment on real SO-ARM101 hardware
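
As a concrete illustration of the "learned IK controller" idea, the sketch below wraps the policy in a closed control loop. RobotInterface and its method names are placeholders for this card, not APIs from this project or from Isaac Lab; with RSL-RL's ActorCritic, the forward pass would go through its act_inference method rather than a plain call.

# Conceptual sketch of using the policy as a learned IK / low-level reach controller.
from typing import Protocol

import torch


class RobotInterface(Protocol):
    def get_observation(self) -> torch.Tensor: ...                 # joint pos, joint vel, relative target pose
    def set_joint_position_targets(self, q: torch.Tensor) -> None: ...


def reach_step(policy: torch.nn.Module, robot: RobotInterface) -> torch.Tensor:
    """One closed-loop control step: observation in, 6 joint position targets out."""
    obs = robot.get_observation().unsqueeze(0)   # add batch dimension
    with torch.no_grad():
        q_target = policy(obs).squeeze(0)
    robot.set_joint_position_targets(q_target)
    return q_target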

Citation

If you use this model, please cite:

@misc{so-arm101-reach-isaaclab,
  title={SO-ARM101 Reaching Policy trained with Isaac Lab},
  author={PathOn AI},
  year={2026},
  howpublished={\url{https://huggingface.co/}},
}

@software{isaaclab,
  author = {Mittal, Mayank and others},
  title = {Isaac Lab: A Unified Framework for Robot Learning},
  url = {https://isaac-sim.github.io/IsaacLab/},
  year = {2024},
}

License

MIT License
