
Training scripts for Flux/Flux Kontext (non-DreamBooth) #12274

aungsiminhtet started this conversation in General
While reviewing the training examples in diffusers, I noticed that for Stable Diffusion we have several well-structured scripts covering different scenarios:

- examples/text_to_image/train_text_to_image.py — fine-tunes the UNet parameters
- examples/text_to_image/train_text_to_image_lora.py — trains LoRA layers in the UNet
- examples/dreambooth/train_dreambooth.py — fine-tunes the UNet and optionally the text encoder
- examples/dreambooth/train_dreambooth_lora.py — same as above, but with LoRA layers instead of full parameter tuning

However, for Flux and Flux Kontext, I don’t see equivalent training scripts outside of DreamBooth. From what I understand, this means that if someone wants to fine-tune these models directly (e.g., for text-to-image tasks without DreamBooth personalization), they currently don’t have a ready-made example.

My questions are:

  1. Is this omission intentional (e.g., because DreamBooth covers most practical fine-tuning cases for Flux/Flux Kontext)?
  2. Or is it simply that non-DreamBooth training scripts have not yet been added and would be open for contribution?

If the latter, I'd be interested in contributing scripts like train_text_to_image_flux.py and train_text_to_image_lora_flux.py, adapted from the Stable Diffusion versions. While I know the difference between these scripts is not huge, having non-DreamBooth training examples for Flux/Flux Kontext could be useful for some practitioners who want to fine-tune directly on new datasets without the DreamBooth setup.

Thanks for your insights!

Replies: 1 comment

If you look at the README for DreamBooth Flux, you will notice that its command to train Kontext on the kontext-community/relighting image dataset is not actually doing DreamBooth. It uses the same script, but with no special token, no prior preservation, or any of the rest of the DreamBooth setup. So yes, they are effectively giving you a generic training script, and the default training example they provide doesn't use DreamBooth at all.
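For illustration, here is a minimal sketch of what such a non-DreamBooth-style invocation looks like. The flag names and values below are assumptions based on the other diffusers example scripts, not the exact README command, so check the DreamBooth Flux README for the authoritative version:

```shell
# Hypothetical sketch (flag names assumed from other diffusers training scripts).
# Note what is ABSENT: no --instance_prompt special token and no
# --with_prior_preservation, i.e. none of the DreamBooth-specific machinery.
accelerate launch train_dreambooth_lora_flux_kontext.py \
  --pretrained_model_name_or_path="black-forest-labs/FLUX.1-Kontext-dev" \
  --dataset_name="kontext-community/relighting" \
  --resolution=1024 \
  --train_batch_size=1 \
  --learning_rate=1e-4 \
  --max_train_steps=1000 \
  --output_dir="flux-kontext-relighting"
```

In other words, the "DreamBooth" script degrades gracefully into a plain fine-tuning script when the DreamBooth-specific options are simply left out.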

