
⚡ Reconstruction vs. Generation: Taming Optimization Dilemma in Latent Diffusion Models

FID=1.35 on ImageNet-256 & 21.8x faster training than DiT!

Jingfeng Yao¹, Bin Yang², Xinggang Wang¹*

¹ Huazhong University of Science and Technology (HUST)
² Independent Researcher

*Corresponding author: xgwang@hust.edu.cn


✨ Highlights

  • Latent diffusion system with 0.28 rFID and 1.35 FID on ImageNet-256 generation!

  • More than 21.8x faster convergence with VA-VAE and LightningDiT compared to the original DiT!

  • Surpasses DiT (FID=2.11) with only 8 GPUs in about 10 hours. Let's make diffusion transformer research more affordable!

📰 News

  • [2025年12月16日] Check out our new work VTP, a brand-new scaling law for visual tokenizers!

  • [2025年04月04日] VA-VAE has been selected as an Oral Presentation!

  • [2025年02月27日] VA-VAE has been accepted by CVPR 2025! 🎉🎉🎉

  • [2025年02月25日] We have released the training code for VA-VAE!

  • [2025年01月16日] More experimental tokenizer variants have been released! You can check them here.

  • [2025年01月02日] We have released the pre-trained weights.

  • [2025年01月01日] We have released the code and paper for VA-VAE and LightningDiT! The weights and pre-extracted latents will be released soon.

📄 Introduction

Latent diffusion models (LDMs) with Transformer architectures excel at generating high-fidelity images. However, recent studies reveal an optimization dilemma in this two-stage design: while increasing the per-token feature dimension in visual tokenizers improves reconstruction quality, it requires substantially larger diffusion models and more training iterations to achieve comparable generation performance. Consequently, existing systems often settle for sub-optimal solutions, either producing visual artifacts due to information loss within tokenizers or failing to converge fully due to expensive computation costs.

We argue that this dilemma stems from the inherent difficulty in learning unconstrained high-dimensional latent spaces. To address this, we propose aligning the latent space with pre-trained vision foundation models when training the visual tokenizers. Our proposed VA-VAE (Vision foundation model Aligned Variational AutoEncoder) significantly expands the reconstruction-generation frontier of latent diffusion models, enabling faster convergence of Diffusion Transformers (DiT) in high-dimensional latent spaces. To exploit the full potential of VA-VAE, we build an enhanced DiT baseline with improved training strategies and architecture designs, termed LightningDiT. The integrated system demonstrates remarkable training efficiency by reaching FID=2.11 in just 64 epochs, an over 21.8x convergence speedup over the original DiT implementation, while achieving state-of-the-art performance on ImageNet-256 image generation with FID=1.35.
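At its core, VA-VAE adds an alignment objective during tokenizer training that pulls the VAE's latent features toward those of a frozen vision foundation model. The snippet below is a minimal, illustrative sketch of such an alignment term, written as a hinged cosine-similarity loss between projected latents and frozen foundation-model patch features; the class name, projection layer, tensor shapes, and margin value are assumptions for illustration and do not reproduce the repository's exact formulation (see the released VA-VAE training code for that).

import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentAlignmentLoss(nn.Module):
    """Illustrative sketch (not the repo's API): align VAE latents with
    features from a frozen vision foundation model (e.g., DINOv2).

    Assumed shapes: latents [B, C_z, H, W]; foundation patch features
    [B, H*W, C_f] at the same spatial resolution.
    """

    def __init__(self, latent_dim: int, feature_dim: int, margin: float = 0.5):
        super().__init__()
        self.proj = nn.Linear(latent_dim, feature_dim)  # learnable projection into the feature space
        self.margin = margin

    def forward(self, latents: torch.Tensor, vfm_features: torch.Tensor) -> torch.Tensor:
        # [B, C_z, H, W] -> [B, H*W, C_z], then project to the foundation feature dim.
        z = self.proj(latents.flatten(2).transpose(1, 2))
        z = F.normalize(z, dim=-1)
        f = F.normalize(vfm_features.detach(), dim=-1)  # the foundation model stays frozen

        # Hinged cosine-similarity term: only tokens whose similarity falls
        # below the margin contribute to the loss.
        cos = (z * f).sum(dim=-1)
        return F.relu(self.margin - cos).mean()

In a full system, a term like this would be added with a tunable weight to the usual VAE reconstruction, KL, and adversarial losses.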

📝 Results

  • State-of-the-art performance on ImageNet 256x256 with FID=1.35.
  • Surpasses DiT within only 64 training epochs, achieving a 21.8x speedup.

🎯 How to Use

Installation

conda create -n lightningdit python=3.10.12
conda activate lightningdit
pip install -r requirements.txt
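
Optionally, you can confirm that PyTorch and CUDA are visible inside the new environment before sampling or training (a generic sanity check, not part of the repository's instructions):

# Generic check that the environment is GPU-ready; assumes torch is installed
# via requirements.txt.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("visible GPUs:", torch.cuda.device_count())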

Inference with Pre-trained Models

  • Download the pre-trained weights and data information files:

  • Fast sample demo images:

    Run:

    bash run_fast_inference.sh ${config_path}
    

    Images will be saved to demo_images/demo_samples.png.

  • Sample for FID-50k evaluation:

    Run:

    bash run_inference.sh ${config_path}
    

    NOTE: The FID reported by this script is only a reference value. The final FID-50k reported in the paper is evaluated with the ADM evaluation suite (a packing sketch follows these commands):

    git clone https://github.com/openai/guided-diffusion.git
    # save your npz file with tools/save_npz.py
    bash run_fid_eval.sh /path/to/your.npz
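
If you are not using tools/save_npz.py, the sketch below shows one plausible way to pack sampled images into the .npz layout that the guided-diffusion (ADM) evaluator commonly expects: a single uint8 array of shape [N, 256, 256, 3] stored under the arr_0 key. The sample directory, key name, and image size here are assumptions; check tools/save_npz.py and run_fid_eval.sh for the repository's exact format.

import glob

import numpy as np
from PIL import Image

# Hypothetical helper: pack sampled PNGs into an ADM-style .npz file.
def pack_samples(sample_dir: str, out_path: str, size: int = 256) -> None:
    paths = sorted(glob.glob(f"{sample_dir}/*.png"))
    imgs = [
        np.asarray(Image.open(p).convert("RGB").resize((size, size)), dtype=np.uint8)
        for p in paths
    ]
    arr = np.stack(imgs, axis=0)   # [N, size, size, 3]
    np.savez(out_path, arr_0=arr)  # store under arr_0, the key the ADM evaluator typically reads
    print(f"packed {arr.shape[0]} samples into {out_path}")

if __name__ == "__main__":
    pack_samples("samples", "samples_50k.npz")  # hypothetical paths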
    

🎮 Train Your Own Models

  • We provide a 👆 detailed tutorial for training your own model to a 2.1 FID score within only 64 epochs. Training takes only about 10 hours with 8 H800 GPUs.

❤️ Acknowledgements

This repo is mainly built on DiT, FastDiT, and SiT. Our VA-VAE code is mainly built on LDM and MAR. Thanks for all these great works.

📝 Citation

If you find our work useful, please cite our related papers:

# CVPR 2025
@inproceedings{yao2025vavae,
 title={Reconstruction vs. generation: Taming optimization dilemma in latent diffusion models},
 author={Yao, Jingfeng and Yang, Bin and Wang, Xinggang},
 booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
 year={2025}
}
# NeurIPS 2024
@article{yao2024fasterdit,
 title={Fasterdit: Towards faster diffusion transformers training without architecture modification},
 author={Yao, Jingfeng and Wang, Cheng and Liu, Wenyu and Wang, Xinggang},
 journal={Advances in Neural Information Processing Systems},
 volume={37},
 pages={56166--56189},
 year={2024}
}
