
Colossal-AI

An integrated large-scale model training system with efficient parallelization techniques.

Paper: Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training

Blog: Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training

Installation

PyPI

pip install colossalai

Install From Source

git clone git@github.com:hpcaitech/ColossalAI.git
cd ColossalAI
# install dependencies
pip install -r requirements/requirements.txt
# install colossalai
pip install .

Install with CUDA kernel fusion enabled (required when using the fused optimizer):

pip install -v --no-cache-dir --global-option="--cuda_ext" .

Documentation

Quick View

Start Distributed Training in Lines

import colossalai
from colossalai.trainer import Trainer
from colossalai.core import global_context as gpc

engine, train_dataloader, test_dataloader = colossalai.initialize()
trainer = Trainer(engine=engine, verbose=True)
trainer.fit(
    train_dataloader=train_dataloader,
    test_dataloader=test_dataloader,
    epochs=gpc.config.num_epochs,
    hooks_cfg=gpc.config.hooks,
    display_progress=True,
    test_interval=5
)
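
The call to trainer.fit above reads num_epochs and hooks from the user-provided configuration via gpc.config. Below is a minimal sketch of what such a config file might contain, assuming the Python-file config style accepted by colossalai.initialize(); the field values and hook names are illustrative assumptions, not documented defaults.

# config.py -- minimal sketch (field values and hook names are assumptions)
num_epochs = 10   # consumed as gpc.config.num_epochs in the training script

# hooks_cfg expects a list of hook configurations; the type names below are
# placeholders to illustrate the structure, not guaranteed built-in hooks
hooks = [
    dict(type='LossHook'),
    dict(type='LogMetricByEpochHook'),
]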

Write a Simple 2D Parallel Model

Let's say we have a huge MLP model whose hidden size is too large to fit on a single GPU. We can distribute the model weights across GPUs in a 2D mesh while still writing the model in the familiar way.

from colossalai.nn import Linear2D
import torch.nn as nn


class MLP_2D(nn.Module):

    def __init__(self):
        super().__init__()
        self.linear_1 = Linear2D(in_features=1024, out_features=16384)
        self.linear_2 = Linear2D(in_features=16384, out_features=1024)

    def forward(self, x):
        x = self.linear_1(x)
        x = self.linear_2(x)
        return x
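
Linear2D layers need a matching tensor-parallel setting in the configuration so that the processes form a square 2D mesh. A minimal sketch follows, assuming the dict-style parallel settings read by colossalai.initialize(); the exact keys are assumptions for illustration.

# config.py -- minimal sketch (keys are assumptions); 2D mode needs a square
# number of tensor-parallel workers, e.g. 4 GPUs arranged as a 2 x 2 mesh
parallel = dict(
    data=1,
    tensor=dict(size=4, mode='2d'),
)

With a 2 x 2 mesh, each GPU holds roughly one quarter of every weight matrix (a 512 x 8192 shard of the 1024 x 16384 layer above), which is what lets the oversized hidden dimension fit in memory.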

Features

Colossal-AI provides a collection of parallel training components. We aim to let you write distributed deep learning models the same way you write single-GPU models, and we provide friendly tools to kickstart distributed training in a few lines.

Cite Us

@article{bian2021colossal,
 title={Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training},
 author={Bian, Zhengda and Liu, Hongxin and Wang, Boxiang and Huang, Haichen and Li, Yongbin and Wang, Chuanrui and Cui, Fan and You, Yang},
 journal={arXiv preprint arXiv:2110.14883},
 year={2021}
}
