Description
Is your feature request related to a problem? Please describe.
Text-to-image models require generating text embeddings as a first step. This step is relatively quick when VRAM is not a constraint, but it often is, so the text encoder(s) have to be loaded, then unloaded, and only then the transformer loaded, which adds significant overhead and slowdown. If a user wants to generate multiple images from the same prompt with different seeds, this process repeats over and over (increasing the batch size is usually not an option).
Describe the solution you'd like.
We could have an option to cache embeddings either in RAM or on disk, where a hash of the prompt text is the key and the embedding tensor is the value. When disk caching is used, the values would have to live in a folder specific to the model.
When caching is enabled, the diffusers pipeline would check the cache first and, on a hit, use the cached embeddings.
On a miss, the embeddings are generated as usual and then saved to the cache.
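A minimal sketch of what such a cache could look like (everything below is an illustration, not an existing diffusers API: the `EmbeddingCache` class, the per-model folder layout, and the SHA-256 key are all assumptions):

```python
import hashlib
from pathlib import Path

import torch


class EmbeddingCache:
    """Hypothetical disk cache: key = hash of the prompt, value = the embedding tensor(s)."""

    def __init__(self, cache_dir: str, model_id: str):
        # One sub-folder per model so embeddings from different text encoders never mix.
        self.cache_dir = Path(cache_dir) / model_id.replace("/", "--")
        self.cache_dir.mkdir(parents=True, exist_ok=True)

    def _path(self, prompt: str) -> Path:
        key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
        return self.cache_dir / f"{key}.pt"

    def get(self, prompt: str):
        path = self._path(prompt)
        return torch.load(path) if path.exists() else None  # hit -> reuse, miss -> None

    def put(self, prompt: str, embeddings) -> None:
        torch.save(embeddings, self._path(prompt))
```

With something like this, the pipeline (or user code) would call `cache.get(prompt)` before touching the text encoders and `cache.put(prompt, embeddings)` after a miss; the same class could also keep an in-RAM dict in front of the on-disk folder.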
Describe alternatives you've considered.
I have custom code that performs embedding generation as a separate step
https://github.com/xhinker/sd_embed/blob/main/src/sd_embed/embedding_funcs.py
and then feeds the embeddings into the pipeline in a second step.
This is effectively the approach popularised by @sayakpaul, except that I also cache the embeddings.
This method works up to Flux.1, but new models keep appearing (e.g. Qwen Image) for which there is currently no such support.
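For reference, a minimal sketch of that two-step workflow, using StableDiffusionXLPipeline purely as an example (the `encode_prompt` return values differ between pipelines, and the model id, prompt, and seeds here are illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# Step 1: run the text encoders once and keep the resulting embeddings.
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(prompt="a photo of an astronaut riding a horse", device="cuda")

# Step 2: reuse the same embeddings for several seeds; because prompt_embeds are
# passed in, the text encoders are never run again (and not re-loaded onto the GPU).
for seed in (0, 1, 2):
    image = pipe(
        prompt_embeds=prompt_embeds,
        negative_prompt_embeds=negative_prompt_embeds,
        pooled_prompt_embeds=pooled_prompt_embeds,
        negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
        generator=torch.Generator("cpu").manual_seed(seed),
    ).images[0]
    image.save(f"astronaut_{seed}.png")
```

The proposed cache would simply sit in front of step 1, so repeated prompts skip the text encoders entirely.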
Additional context.
Adding this should not break anything, since the cache can always be disabled (the default) and wiped.