OpenDriveLab/AgiBot-World

[IROS 2025 Best Paper Award Finalist & IEEE TRO 2026] The Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems

AgiBot World Colosseo is a full-stack, large-scale robot learning platform curated to advance bimanual manipulation in scalable and intelligent embodied systems. It is accompanied by foundation models, benchmarks, and an ecosystem that democratizes access to high-quality robot data for academia and industry, paving the way toward an "ImageNet moment" for Embodied AI.

We have released:

  • GO-1: Our robotic foundation model, pretrained on the AgiBot World dataset
  • GO-1 Air: A lightweight, high-performance variant of GO-1 without the Latent Planner
  • Task Catalog: A reference sheet outlining the tasks in our dataset, including robot end-effector types, sample action-text descriptions, and more
  • AgiBot World Beta: Our complete dataset, featuring 1,003,672 trajectories (~43.8 TB)
  • AgiBot World Alpha: A curated subset of AgiBot World Beta containing 92,214 trajectories (~8.5 TB)

NewsπŸ“°

Important

🌟 Stay up to date at opendrivelab.com!

  • [2025εΉ΄09月19ζ—₯] πŸš€ Our robotic foundation model GO-1 open-sourced.
  • [2025εΉ΄03月10ζ—₯] πŸ“„ Research Blog and Technical Report released.
  • [2025εΉ΄03月01ζ—₯] Agibot World Beta released.
  • [2025εΉ΄01月03ζ—₯] Agibot World Alpha Sample Dataset released.
  • [2024εΉ΄12月30ζ—₯] πŸ€– Agibot World Alpha released.

TODO List πŸ“…

  • AgiBot World Alpha
  • AgiBot World Beta
    • ~1,000,000 trajectories of high-quality robot data
  • AgiBot World Foundation Model: GO-1
    • GO-1 fine-tuning script
    • GO-1 Air pre-trained checkpoint
    • GO-1 pre-trained checkpoint
    • Examples of using GO-1 model
  • 2025 AgiBot World Challenge

Key Features πŸ”‘

  • 1 million+ trajectories from 100 robots
  • 100+ 1:1 replicated real-life scenarios across 5 target domains
  • Cutting-edge hardware: visual tactile sensors / 6-DoF dexterous hand / mobile dual-arm robots
  • Wide-spectrum, versatile, and challenging tasks
  • A general robotic policy pretrained on AgiBot World

Demo videos: Contact-rich Manipulation · Long-horizon Planning · Multi-robot Collaboration · Fold Shirt (AgileX) · Fold Shirt (AgiBot G1) · Fold Shirt (Dual Franka)

Getting started πŸ”₯

Installation

  1. Download our source code:

git clone https://github.com/OpenDriveLab/AgiBot-World.git
cd AgiBot-World

  2. Create a new conda environment:

conda create -n go1 python=3.10 -y
conda activate go1

  3. Install dependencies:

This project is built on LeRobot (dataset v2.1, commit 2b71789).
⚡️ Our environment has been tested with CUDA 12.4.

pip install -e .
pip install --no-build-isolation flash-attn==2.4.2

If you run out of RAM while installing flash-attn, set the MAX_JOBS environment variable to limit the number of parallel compilation jobs:

MAX_JOBS=4 pip install --no-build-isolation flash-attn==2.4.2
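
Once the build finishes, a quick sanity check inside the go1 environment (assuming flash-attn exposes its version string, which recent releases do):

# Verify that the flash-attn build imports cleanly.
import flash_attn

print(flash_attn.__version__)  # expected: 2.4.2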

How to Get Started with Our AgiBot World Data

Download Datasets

pip install openxlab # install the OpenXLab CLI
openxlab dataset get --dataset-repo OpenDriveLab/AgiBot-World # download via OpenXLab

# Alternatively, download AgiBotWorld-Alpha from Hugging Face:
huggingface-cli download --resume-download --repo-type dataset agibot-world/AgiBotWorld-Alpha --local-dir ./AgiBotWorld-Alpha
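
The Hugging Face download can also be done programmatically; a minimal sketch using huggingface_hub (installed alongside huggingface-cli), mirroring the repo id and target directory above:

from huggingface_hub import snapshot_download

# Download the AgiBotWorld-Alpha dataset snapshot; interrupted downloads
# resume automatically when the call is re-run.
snapshot_download(
    repo_id="agibot-world/AgiBotWorld-Alpha",
    repo_type="dataset",
    local_dir="./AgiBotWorld-Alpha",
)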

Convert the data to LeRobot Dataset format following any4lerobot.

Visualize Datasets

We adapt and extend the dataset visualization script from the LeRobot project:

python scripts/visualize_dataset.py --task-id 390 --dataset-path /path/to/lerobot/format/dataset

It will open a rerun.io viewer and display the camera streams, robot states, and actions.

How to Get Started with Our GO-1 Model

Requirements

We strongly recommend full fine-tuning for the best performance. However, if GPU memory is limited, you can alternatively fine-tune only the Action Expert.

Usage                 | GPU Memory Required    | Example GPU
--------------------- | ---------------------- | -------------------
Inference             | ~7 GB                  | RTX 4090
Fine-tuning (full)    | ~70 GB (batch size 16) | A100 80GB, H100
Fine-tuning (AE only) | ~24 GB (batch size 16) | RTX 4090, A100 40GB
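
To see which row of the table your hardware matches, a quick probe (assuming the PyTorch install pulled in by the dependencies above, with CUDA available):

import torch

# Print the total memory of each visible GPU in GiB.
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GiB")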

Model Zoo

Model    | HF Link                                      | Description
-------- | -------------------------------------------- | -------------------------------------------------------------------------
GO-1 Air | https://huggingface.co/agibot-world/GO-1-Air | GO-1 without the Latent Planner, pre-trained on the AgiBot World dataset
GO-1     | https://huggingface.co/agibot-world/GO-1     | GO-1 pre-trained on the AgiBot World dataset

Fine-tuning on Your Own Dataset

Here we provide an example of fine-tuning the GO-1 model on the LIBERO dataset. You can easily adapt it for your own data.

1. Prepare Data

We use the LeRobot dataset format for our default dataset and dataloader. We provide a script for converting LIBERO to LeRobot format in evaluate/libero/convert_libero_data_to_lerobot.py.

Since TensorFlow is required to read the RLDS format, we recommend creating a separate conda environment to avoid package conflicts:

conda create -n libero_data python=3.10 -y
conda activate libero_data
pip install -e ".[libero_data]"

Download the raw LIBERO dataset from OpenVLA, then run the script to convert it into a LeRobot dataset:

# Optional: Change the LeRobot home directory
export HF_LEROBOT_HOME=/path/to/your/lerobot
python evaluate/libero/convert_libero_data_to_lerobot.py --data_dir /path/to/your/libero/data

2. Prepare Configs

We provide an example config for fine-tuning GO-1 on LIBERO in go1/configs/go1_sft_libero.py.

Key sections in the config:

  • DatasetArguments - path or repo for the LeRobot dataset.
  • GOModelArguments - model settings: architecture (GO-1 Air or GO-1), action chunk size, diffusion scheduler, parameter freezing, etc.
  • GOTrainingArguments - training hyper-parameters; see the transformers docs for more details.
  • SpaceArguments - state/action dimensions, data keys in the LeRobot dataset, default language prompt, control frequency.

See go1/configs/go1_base_cfg.py for all available config options.
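
For orientation, here is a sketch of how these pieces might fit together. The four argument-group names come from the list above, but the import path and every field name below are hypothetical; treat go1/configs/go1_sft_libero.py and go1/configs/go1_base_cfg.py as ground truth.

# Hypothetical config sketch; import path and field names are illustrative only.
from go1.configs.go1_base_cfg import (  # assumed location of the argument classes
    DatasetArguments,
    GOModelArguments,
    GOTrainingArguments,
    SpaceArguments,
)

dataset_args = DatasetArguments(
    repo_id="your-name/libero-lerobot",  # hypothetical: LeRobot dataset path or hub repo
)
model_args = GOModelArguments(
    use_latent_planner=True,  # hypothetical flag: True for GO-1, False for GO-1 Air
    action_chunk_size=10,     # 10 for LIBERO (10 Hz); 30 for AgiBot World (30 Hz)
)
training_args = GOTrainingArguments(
    per_device_train_batch_size=16,  # the memory table above assumes batch size 16
    learning_rate=1e-4,              # hypothetical value
)
space_args = SpaceArguments(
    control_freq=10,  # matches the dataset's control frequency
)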

3. Start Fine-tuning

Start fine-tuning with the following command; set environment variables in the shell script as needed.

RUNNAME=<YOUR_RUNNAME> bash go1/shell/train.sh /path/to/your/config

Checkpoints will be saved in experiment/<YOUR_RUNNAME> and logs will be saved in experiment/<YOUR_RUNNAME>/logs.

Notes:

  • We also provide a debugging shell script that can run on a single RTX 4090. It also sets DEBUG_MODE to true for faster initialization.
  • You do not need to precompute the normalization statistics for the training data; LeRobot computes them when loading the dataset. The statistics are saved to experiment/<YOUR_RUNNAME>/dataset_stats.json.
  • We set the action chunk size and the control-frequency input to 30 for GO-1 pre-training, since the AgiBot World dataset is collected at 30 Hz, and change them to 10 for LIBERO fine-tuning, since LIBERO is collected at 10 Hz. Adjust them accordingly in the config file.

Testing Your Model

Local Inference

After fine-tuning, you can test your model locally using the example script in evaluate/deploy.py. Build a GO1Infer object to load the model and dataset statistics, then call its inference method:

import numpy as np
from evaluate.deploy import GO1Infer

model = GO1Infer(
    model_path="/path/to/your/checkpoint",
    data_stats_path="/path/to/your/dataset_stats.json",
)
payload = {
    "top": ...,      # camera image (top view)
    "right": ...,    # camera image (right view)
    "left": ...,     # camera image (left view)
    "instruction": "example instruction",  # language prompt
    "state": ...,    # robot proprioceptive state
    "ctrl_freqs": np.array([30]),  # control frequency in Hz
}
actions = model.inference(payload)

We also provide a script for open-loop evaluation with training data in evaluate/openloop_eval.py.

Remote Inference

Considering that (1) real robots may not have powerful GPUs and (2) different robots and simulation benchmarks often require different package dependencies, we also provide a policy server for GO-1. A client in another environment or on another machine sends observations to the server for remote inference.

Start the server; it will listen on port <PORT> and wait for observations:

python evaluate/deploy.py --model_path /path/to/your/checkpoint --data_stats_path /path/to/your/dataset_stats.json --port <PORT>

For the client, we provide a GO1Client class to send requests to the server and receive actions:

from typing import Any, Dict

import json_numpy
import numpy as np
import requests

json_numpy.patch()  # lets numpy arrays pass through JSON (de)serialization


class GO1Client:
    def __init__(self, host: str, port: int):
        self.host = host
        self.port = port

    def predict_action(self, payload: Dict[str, Any]) -> np.ndarray:
        # POST the observation payload to the server's /act endpoint.
        response = requests.post(
            f"http://{self.host}:{self.port}/act",
            json=payload,
            headers={"Content-Type": "application/json"},
        )
        if response.status_code == 200:
            return np.array(response.json())
        print(f"Request failed, status code: {response.status_code}")
        print(f"Error message: {response.text}")
        return None
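
For example (a minimal sketch; the host and port are placeholders for wherever you started the server, and the observation fields mirror the local-inference payload above):

import numpy as np

# Query a policy server started with evaluate/deploy.py.
client = GO1Client(host="localhost", port=8000)
payload = {
    "top": ...,      # camera image (top view)
    "right": ...,    # camera image (right view)
    "left": ...,     # camera image (left view)
    "instruction": "example instruction",
    "state": ...,    # robot proprioceptive state
    "ctrl_freqs": np.array([30]),
}
actions = client.predict_action(payload)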

We can then run the LIBERO evaluation script to query the server; see the LIBERO README for details.

More Examples

We will provide more examples of fine-tuning and running inference with GO-1 models on real robots and simulation platforms.

Currently we have:

  • Genie Studio: AgiBot G1 with an out-of-the-box GO-1 model and an integrated data collection, fine-tuning, and deployment pipeline.
  • AgileX: AgileX Cobot Magic (Aloha)
  • LIBERO: LIBERO Simulation (Franka)
  • RoboTwin: RoboTwin Simulation (Aloha)

πŸ“„ License and Citation

All the data and code within this repo are under CC BY-NC-SA 4.0.

  • Please consider citing our work if it helps your research.
  • For the full authorship and detailed contributions, please refer to contributions.
  • In alphabetical order by surname:
@article{bu2025agibot_arxiv,
 title={Agibot world colosseo: A large-scale manipulation platform for scalable and intelligent embodied systems},
 author={Bu, Qingwen and Cai, Jisong and Chen, Li and Cui, Xiuqi and Ding, Yan and Feng, Siyuan and Gao, Shenyuan and He, Xindong and Huang, Xu and Jiang, Shu and others},
 journal={arXiv preprint arXiv:2503.06669},
 year={2025}
}
@inproceedings{bu2025agibot_iros,
 title={Agibot world colosseo: A large-scale manipulation platform for scalable and intelligent embodied systems},
 author={Bu, Qingwen and Cai, Jisong and Chen, Li and Cui, Xiuqi and Ding, Yan and Feng, Siyuan and He, Xindong and Huang, Xu and others},
 booktitle={2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
 year={2025},
 organization={IEEE}
}
@article{shi2025diversity,
 title={Is Diversity All You Need for Scalable Robotic Manipulation?},
 author={Shi, Modi and Chen, Li and Chen, Jin and Lu, Yuxiang and Liu, Chiming and Ren, Guanghui and Luo, Ping and Huang, Di and Yao, Maoqing and Li, Hongyang},
 journal={arXiv preprint arXiv:2507.06219},
 year={2025}
}

πŸ“ Blogs

@misc{AgiBotWorldTeam2025agibot-world-colosseo,
 title = {Introducing AgiBot World Colosseo: A Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems},
 author = {Shi, Modi and Lu, Yuxiang and Wang, Huijie and Xie, Chengen and Bu, Qingwen},
 year = {2025},
 month = {March},
 howpublished = {\url{https://opendrivelab.com/AgiBot-World/}},
 note = {Blog post},
 }
@misc{AgiBotWorldTeam2025open-sourcing-go1,
 title = {Open-sourcing GO-1: The Bitter Lessons of Building VLA Systems at Scale},
 author = {Shi, Modi and Lu, Yuxiang and Wang, Huijie and Yang, Shaoze},
 year = {2025},
 month = {September},
 howpublished = {\url{https://opendrivelab.com/OpenGO1/}},
 note = {Blog post},
 }
