
Commit 673d435

add attentionmixin to qwen image (#12219)
1 parent 561ab54 commit 673d435

File tree

1 file changed: +4 additions, −2 deletions


src/diffusers/models/transformers/transformer_qwenimage.py

Lines changed: 4 additions & 2 deletions

@@ -25,7 +25,7 @@
 from ...loaders import FromOriginalModelMixin, PeftAdapterMixin
 from ...utils import USE_PEFT_BACKEND, logging, scale_lora_layers, unscale_lora_layers
 from ...utils.torch_utils import maybe_allow_in_graph
-from ..attention import FeedForward
+from ..attention import AttentionMixin, FeedForward
 from ..attention_dispatch import dispatch_attention_fn
 from ..attention_processor import Attention
 from ..cache_utils import CacheMixin

@@ -470,7 +470,9 @@ def forward(
         return encoder_hidden_states, hidden_states


-class QwenImageTransformer2DModel(ModelMixin, ConfigMixin, PeftAdapterMixin, FromOriginalModelMixin, CacheMixin):
+class QwenImageTransformer2DModel(
+    ModelMixin, ConfigMixin, PeftAdapterMixin, FromOriginalModelMixin, CacheMixin, AttentionMixin
+):
     """
     The Transformer model introduced in Qwen.

0 commit comments
