An integrated large-scale model training system with efficient parallelization techniques.
Paper: Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training
Blog: Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training
Install from PyPI:

```bash
pip install colossalai
```
Alternatively, install from source:

```bash
git clone git@github.com:hpcaitech/ColossalAI.git
cd ColossalAI
# install dependencies
pip install -r requirements/requirements.txt
# install colossalai
pip install .
```
To install with CUDA kernel fusion enabled (required when using a fused optimizer):
```bash
pip install -v --no-cache-dir --global-option="--cuda_ext" .
```
Start distributed training in a few lines:

```python
import colossalai
from colossalai.trainer import Trainer
from colossalai.core import global_context as gpc

# initialize the distributed engine and dataloaders from the user config
engine, train_dataloader, test_dataloader = colossalai.initialize()

trainer = Trainer(engine=engine, verbose=True)
trainer.fit(
    train_dataloader=train_dataloader,
    test_dataloader=test_dataloader,
    epochs=gpc.config.num_epochs,
    hooks_cfg=gpc.config.hooks,
    display_progress=True,
    test_interval=5,
)
```
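The snippet above reads `num_epochs` and `hooks` from a configuration object (`gpc.config`). A minimal sketch of what such a config file might contain is shown below; the file name and the specific hook type names are illustrative assumptions, not a guaranteed schema, so consult the documentation of your Colossal-AI version for the exact fields.

```python
# config.py -- hypothetical Colossal-AI config consumed via gpc.config.
# Only `num_epochs` and `hooks` are implied by the quickstart above;
# the hook type names here are illustrative placeholders.
num_epochs = 10

# Hook configurations passed to Trainer.fit via hooks_cfg=gpc.config.hooks.
hooks = [
    dict(type='LossHook'),
    dict(type='LogMetricByEpochHook'),
]
```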
Suppose we have a huge MLP model whose very large hidden size makes it difficult to fit on a single GPU. We can distribute the model weights across GPUs in a 2D mesh while still writing the model in a familiar way.
```python
import torch.nn as nn
from colossalai.nn import Linear2D

class MLP_2D(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear_1 = Linear2D(in_features=1024, out_features=16384)
        self.linear_2 = Linear2D(in_features=16384, out_features=1024)

    def forward(self, x):
        x = self.linear_1(x)
        x = self.linear_2(x)
        return x
```
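To make the 2D layout concrete, here is a minimal, framework-free sketch (plain PyTorch, illustrative only, not Colossal-AI's internals) of how one `Linear2D` weight could be tiled across a 2x2 device mesh:

```python
import torch

# Sketch of 2D tensor partitioning: tile a (1024 x 16384) weight across a
# 2x2 mesh so each of the 4 devices holds one (512 x 8192) shard.
q = 2  # mesh side; a real run would place shards on q*q GPUs
weight = torch.randn(1024, 16384)

row_blocks = torch.chunk(weight, q, dim=0)                  # split rows
shards = [torch.chunk(rb, q, dim=1) for rb in row_blocks]   # then columns

for i in range(q):
    for j in range(q):
        # shard (i, j) would live on the device at mesh coordinate (i, j)
        assert shards[i][j].shape == (1024 // q, 16384 // q)
```

Each device then computes with only its shard, and partial results are combined with collective communication along the rows and columns of the mesh.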
Colossal-AI provides a collection of parallel training components. We aim to let you write distributed deep learning models just as you would write a single-GPU model, and we provide friendly tools to kickstart distributed training in a few lines:
- Data Parallelism
- Pipeline Parallelism
- 1D, 2D, 2.5D, 3D and sequence parallelism
- Friendly trainer and engine
- Extensible for new parallelism
- Mixed Precision Training (see the sketch after this list)
- Zero Redundancy Optimizer (ZeRO)
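For background on the mixed-precision item above, the sketch below shows the standard PyTorch AMP training pattern that such a feature automates; this is generic PyTorch, not Colossal-AI's API.

```python
import torch
from torch.cuda.amp import autocast, GradScaler

# Generic PyTorch mixed-precision loop (requires a CUDA GPU); shown only
# as background for what a mixed-precision engine handles for you.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = GradScaler()  # rescales the loss to avoid fp16 gradient underflow

for _ in range(10):
    x = torch.randn(32, 1024, device='cuda')
    optimizer.zero_grad()
    with autocast():                  # run the forward pass in fp16 where safe
        loss = model(x).float().pow(2).mean()
    scaler.scale(loss).backward()     # backprop through the scaled loss
    scaler.step(optimizer)            # unscales gradients, then steps
    scaler.update()                   # adjusts the loss scale for next step
```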
```bibtex
@article{bian2021colossal,
  title={Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training},
  author={Bian, Zhengda and Liu, Hongxin and Wang, Boxiang and Huang, Haichen and Li, Yongbin and Wang, Chuanrui and Cui, Fan and You, Yang},
  journal={arXiv preprint arXiv:2110.14883},
  year={2021}
}
```