
InfoBatch

ICLR 2024 Oral | [Paper] | [Code]

InfoBatch is a tool for lossless deep learning training acceleration. It achieves lossless training speed-up by unbiased dynamic data pruning.
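The core idea, roughly: samples whose recent loss is low (already well learned) are pruned with some probability each epoch, and the loss of the remaining low-loss samples is scaled up so the expected gradient matches full-dataset training. The snippet below is an illustrative sketch of that rescaling, not the package's actual implementation; the function name and the batch-level formulation are our assumptions.

```python
import torch

def rescaled_batch_loss(per_sample_loss, scores, prune_ratio=0.5):
    """Illustrative sketch: samples whose score (historical loss) is below the
    mean are pruning candidates; each is dropped with probability `prune_ratio`,
    and the kept candidates have their loss scaled by 1 / (1 - prune_ratio) so
    the expected gradient stays unbiased."""
    well_learned = scores < scores.mean()                      # pruning candidates
    dropped = well_learned & (torch.rand_like(scores) < prune_ratio)
    kept = ~dropped
    scale = torch.ones_like(per_sample_loss)
    scale[well_learned] = 1.0 / (1.0 - prune_ratio)            # compensate pruned mass
    return (per_sample_loss * scale)[kept].mean()

# toy usage: 8 samples with per-sample losses and running scores
loss = rescaled_batch_loss(torch.rand(8), torch.rand(8))
```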


News

[2024/09/24] πŸ”₯ Ultralytics-InfoBatch is in testing at https://github.com/henryqin1997/ultralytics-infobatch! A latent diffusion example has been added to examples!

[2024/01/17] πŸ”₯ A new version requiring only 3 lines of change is out! Note that one should use a per-sample loss (to update the scores and compute the batch loss).

[2024/01/16] πŸ”₯ Our work was accepted to ICLR 2024 (oral)! A new version with only 3 lines of change will be released soon. Experiments included in the paper (and beyond) will be added gradually, with details.

[2023/08/01] πŸ”₯ InfoBatch can now losslessly save 40.9% of training cost on CIFAR-100 and ImageNet. We are updating the paper and preparing the public code.

TODO List

  • Plug-and-Play Implementation of InfoBatch
  • PyPI Registration
  • Experiment: Classification on Cifar
  • Experiment: Classification on ImageNet
  • Experiment: Segmentation
  • Experiment: Diffusion
  • Experiment: Instruction Finetuning
  • Experiment: Detection (YOLOv8) on COCO
  • Paper: Updated on Openreview

Contents

  β€’ Install
  β€’ Get Started
  β€’ Experiments
  β€’ Citation

Install

Install InfoBatch via

pip install git+https://github.com/NUS-HPC-AI-Lab/InfoBatch

Or you can clone this repo and install it locally.

git clone https://github.com/NUS-HPC-AI-Lab/InfoBatch
cd InfoBatch
pip install -e .

Get Started

To adapt your code to InfoBatch, simply install and import it, then change the following three lines:

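Below is a rough sketch of what that integration looks like in a standard PyTorch training loop. The constructor arguments, the `sampler` attribute, and the `update()` method name are assumptions based on the description here and may differ from the actual API; consult the examples directory for the exact usage.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from infobatch import InfoBatch  # assumed import path

# toy stand-ins for a real dataset and model
base_dataset = TensorDataset(torch.randn(1024, 32), torch.randint(0, 10, (1024,)))
model = nn.Linear(32, 10)
optimizer = optim.SGD(model.parameters(), lr=0.1)
num_epochs = 10

# 1) wrap the dataset (argument names are assumptions; see the examples for the exact API)
train_dataset = InfoBatch(base_dataset, num_epochs, prune_ratio=0.5, delta=0.875)

# 2) use the sampler exposed by the wrapped dataset
train_loader = DataLoader(train_dataset, batch_size=128, sampler=train_dataset.sampler)

criterion = nn.CrossEntropyLoss(reduction='none')  # per-sample loss is required

for epoch in range(num_epochs):
    # if the LR scheduler is step-based, adjust its steps-per-epoch here,
    # since the pruned epoch length changes from epoch to epoch
    for inputs, targets in train_loader:
        per_sample_loss = criterion(model(inputs), targets)
        # 3) update() records per-sample scores and returns the rescaled batch loss
        loss = train_dataset.update(per_sample_loss)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```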

Note that one should use a per-sample loss to update the scores and compute the batch loss; if the learning rate scheduler is step-based, adjust its steps-per-epoch accordingly at the beginning of each epoch.

For research studies and more flexible code, refer to the code in the research folder.

Experiments

CIFAR

To run the CIFAR-100 example as a baseline (no pruning), run with delta=0:

python3 examples/cifar_example.py \
 --model r50 --optimizer lars --max-lr 5.2 --delta 0.0

To run the CIFAR-100 example with InfoBatch, run the following:

python3 examples/cifar_example.py \
 --model r50 --optimizer lars --max-lr 5.2 --delta 0.875 --ratio 0.5 --use_info_batch

Our example also supports mixed precision training and distributed data parallelism with the following command:

CUDA_VISIBLE_DEVICES=0,1 python3 -m torch.distributed.launch --use_env --nnodes=1 --nproc_per_node=2 \
 --master_addr=127.0.0.1 --master_port=23456 --node_rank=0 cifar_example.py \
 --use_ddp --use_info_batch --fp16 \
 --model r50 --optimizer lars --max-lr 5.2 --delta 0.875 --ratio 0.5

You may observe a performance drop when using Distributed Data Parallel (DDP) training compared to Data Parallel (DP) on multiple GPUs, especially with PyTorch versions prior to 1.11. However, this is not specific to our algorithm.

Citation

@inproceedings{
  qin2024infobatch,
  title={InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning},
  author={Qin, Ziheng and Wang, Kai and Zheng, Zangwei and Gu, Jianyang and Peng, Xiangyu and Xu, Zhaopan and Zhou, Daquan and Shang, Lei and Sun, Baigui and Xie, Xuansong and You, Yang},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=C61sk5LsK6}
}
