Thoughts on LoRA/etc. implementation? I know what they are and how they work conceptually. What I don't know is what math goes on, with which tensors, at what stage? I'm guessing the UNet?
-
LoRA is on my todo list, but I still have the upscaler to get done first, so if you wanna give it a shot that would be awesome.
I think LoRA application is fairly inexpensive; I assume we need the tokenizer and the UNet.
Might be worth a looky in here
https://github.com/huggingface/diffusers/blob/1328aeb274610f492c10a246ffba0bc4de8f689b/src/diffusers/loaders.py#L1173
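To the original question about what math happens with which tensors: as far as I can tell, each LoRA entry is a low-rank pair of matrices (usually called down/up or A/B) targeting one linear or conv weight in the UNet (and sometimes the text encoder), and applying it just means adding a scaled `up @ down` product to that base weight before inference. A minimal numpy sketch of that merge step, assuming the usual alpha/rank scaling (the names here are mine, not diffusers or OnnxStack code):

```python
# Minimal sketch of the LoRA merge math (illustrative, not library code).
import numpy as np

def merge_lora_weight(w0: np.ndarray, down: np.ndarray, up: np.ndarray,
                      alpha: float, scale: float = 1.0) -> np.ndarray:
    """Return W0 + scale * (alpha / rank) * (up @ down).

    w0:   base weight from the UNet/text encoder, shape (out_features, in_features)
    down: LoRA "A" matrix, shape (rank, in_features)
    up:   LoRA "B" matrix, shape (out_features, rank)
    """
    rank = down.shape[0]
    delta = (alpha / rank) * (up @ down)  # low-rank update, same shape as w0
    return w0 + scale * delta             # merged weight used at inference time
```

So the heavy lifting is done once at load time; the sampling loop itself doesn't change.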
-
Having LoRA will be amazing; a simple trainer/LoRA maker would be great too.
-
Wow, what timing. An LCM for SDXL as a LoRA just came out: https://huggingface.co/blog/lcm_lora https://github.com/huggingface/diffusers/releases/tag/v0.23.0
-
Very, very good, LoRA LCM will be great. I was also thinking that if you guys connect AnimateDiff with LCM it will be a game changer too, we'll have much, much faster animations!
-
I have opened a PR for this:
#20
If someone knows of a good but small SD 1.5 LoRA for ONNX, let me know, I need one for testing (not LCM; it will be easier to support SD first).
-
Oh, my bad then :D Let's wait for someone to convert one to ONNX.
-
Good to hear the implementation looks trivial. I've been looking around the diffusers loaders code you linked; load_lora_into_unet looked promising, but it's rough going. I was expecting to see tensor math or something, but all I see is various dictionary entries being replaced.
-
As far as I can tell it's as simple as that: the LoRA is just a dictionary of keys and weights to be applied to existing weights in the model.
Unless I am missing a big piece; if I am, I will soon find out, lol.
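For anyone following along, this is roughly what that dictionary looks like once loaded: the state dict is flat, and you pair the down/up (and optional alpha) entries by their shared key prefix, where each prefix maps back to one base weight in the UNet or text encoder. A hedged sketch assuming kohya-style key suffixes and a .safetensors file (other LoRA formats name the keys differently):

```python
# Sketch: grouping a flat LoRA state dict into down/up/alpha triples.
# Key suffixes below are the kohya/A1111 convention; treat them as an assumption.
import numpy as np
from safetensors.numpy import load_file

def collect_lora_pairs(path: str):
    """Group entries into {prefix: {"down": ..., "up": ..., "alpha": ...}}."""
    tensors = load_file(path)
    pairs = {}
    for key, value in tensors.items():
        if key.endswith(".lora_down.weight"):
            prefix, part = key[: -len(".lora_down.weight")], "down"
        elif key.endswith(".lora_up.weight"):
            prefix, part = key[: -len(".lora_up.weight")], "up"
        elif key.endswith(".alpha"):
            prefix, part = key[: -len(".alpha")], "alpha"
        else:
            continue
        pairs.setdefault(prefix, {})[part] = np.asarray(value)
    return pairs  # each prefix identifies one base weight to merge into
```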
-
I think it's similar to prompt weighting, which is another thing we need to add.
Using () and [] to boost/lower keywords in a prompt, like other apps do.
Find all () [] groups, extract keyword -> tokenize text -> match tokens to keywords -> multiply those tokens' weights by 'n'.
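A rough sketch of that parsing step, assuming flat (un-nested) groups and fixed 1.1/0.9 multipliers per group like other UIs use (real implementations also handle nesting and explicit `(word:1.3)` weights):

```python
# Sketch of the ()/[] prompt-weighting parse; multipliers are illustrative.
import re

BOOST, LOWER = 1.1, 0.9  # assumed per-group multipliers

def parse_prompt_weights(prompt: str):
    """Return (clean_prompt, [(fragment, weight), ...])."""
    weights = []

    def repl(match):
        text = match.group(1) or match.group(2)
        weight = BOOST if match.group(1) is not None else LOWER
        weights.append((text, weight))
        return text  # strip the brackets, keep the keyword in place

    clean = re.sub(r"\(([^()]+)\)|\[([^\[\]]+)\]", repl, prompt)
    return clean, weights

# Usage: tokenize `clean`, find the token spans for each fragment,
# then multiply those tokens' embedding weights by the matching value.
clean, weights = parse_prompt_weights("a (red) car on a [rainy] street")
print(clean)    # "a red car on a rainy street"
print(weights)  # [("red", 1.1), ("rainy", 0.9)]
```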
-
@saddam213 if you still need a LoRA for this, I can make one for you or send you one.