Add Dream 7B Diffusion Large Language Model Pipeline #12091
Conversation
@yiyixuxu @a-r-r-o-w the Dream 7B model uses a `transformers`-style custom tokenizer, which I believe is based on `GPT2Tokenizer` but with different pre-tokenization rules. Since this tokenizer is not in `transformers`, should I open a PR there to add it? And if so, do you think the Dream transformer should also be added to `transformers`? The original transformer implementation is also `transformers`-compatible.
For certain custom implementations, we prefer to implement and keep the relevant files within diffusers. Some examples I could quickly find are:
- https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/modeling_latent_upsampler.py
- https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/modeling_flux.py
- https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/consisid/consisid_utils.py
Maybe in this case you could create a `tokenizer_gpt.py` file within the pipeline directory to use it? WDYT @yiyixuxu?
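If it helps to visualize the suggestion, here is a minimal sketch of what a `tokenizer_gpt.py` kept inside the pipeline directory could look like, assuming the only change relative to GPT-2 is the pre-tokenization regex. The class name `DreamGPT2Tokenizer`, the file location, and the pattern are placeholders, not the actual Dream tokenizer.

```python
# Hypothetical src/diffusers/pipelines/dream/tokenizer_gpt.py (sketch only).
import regex as re

from transformers import GPT2Tokenizer


class DreamGPT2Tokenizer(GPT2Tokenizer):
    """GPT2-style BPE tokenizer with custom pre-tokenization rules (illustrative)."""

    def __init__(self, vocab_file, merges_file, **kwargs):
        super().__init__(vocab_file, merges_file, **kwargs)
        # Replace GPT-2's default splitting pattern; the pattern below is a
        # placeholder, the real rules would come from the upstream Dream tokenizer.
        self.pat = re.compile(
            r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+"""
        )
```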
I don't think the model implementation can live in `transformers` if we're using it for diffusion sampling. For example, Cosmos 1.0 was released with both an autoregressive and a diffusion version, but we have two different implementations and PRs to support it in both libraries. So, let's maintain it here :)
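For context on why the sampling loop fits diffusers better than an autoregressive `generate()`, here is a schematic sketch of masked discrete diffusion sampling: all answer positions start as mask tokens and are filled in over several parallel denoising steps. This is a generic illustration under assumptions (confidence-based unmasking, an HF-style `model(x).logits` interface, a single `mask_id`), not Dream's actual algorithm or this PR's implementation.

```python
import torch


def masked_diffusion_sample(model, prompt_ids, gen_len, num_steps, mask_id):
    # prompt_ids: (1, prompt_len) conditioning tokens; the answer starts fully masked.
    x = torch.cat(
        [prompt_ids, torch.full((1, gen_len), mask_id, dtype=torch.long)], dim=1
    )
    tokens_per_step = max(1, gen_len // num_steps)
    for _ in range(num_steps):
        logits = model(x).logits                  # (1, seq_len, vocab_size)
        probs = logits.softmax(dim=-1)
        conf, pred = probs.max(dim=-1)            # most likely token per position
        still_masked = x == mask_id
        if not still_masked.any():
            break
        # Unmask only the most confident still-masked positions this step.
        conf = conf.masked_fill(~still_masked, float("-inf"))
        k = min(tokens_per_step, int(still_masked.sum()))
        idx = conf.topk(k, dim=-1).indices
        x[0, idx[0]] = pred[0, idx[0]]
    return x[:, prompt_ids.shape[1]:]
```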
Hi @a-r-r-o-w, I think this PR is ready for an initial design review :).
What does this PR do?
This PR implements a pipeline for the Dream 7B diffusion large language model (blog post, weights and code, repo). Dream is a masked (discrete) diffusion model for text that claims to perform comparably to similarly sized SOTA autoregressive LLMs such as Qwen 2.5 7B on NLP tasks and to have superior performance on planning tasks.
Fixes #12017.
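For reviewers, a rough usage sketch of how the pipeline might be called is below. The checkpoint id, the argument names (`max_new_tokens`, `num_inference_steps`), and the assumption that `DiffusionPipeline.from_pretrained` resolves to the new pipeline class are illustrative, not the final API of this PR.

```python
# Hypothetical usage sketch of the proposed pipeline (names are assumptions).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Dream-org/Dream-v0-Instruct-7B",  # placeholder checkpoint id
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Masked-diffusion text generation: the continuation starts fully masked and is
# unmasked over `num_inference_steps` denoising steps.
out = pipe(
    prompt="Plan a three-day trip to Kyoto.",
    max_new_tokens=256,       # assumed argument name
    num_inference_steps=256,  # assumed argument name
)
print(out)
```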
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
@yiyixuxu
@a-r-r-o-w
@ntoxeg