Commit cc5b31f

[docs] Migrate syntax (#12390)
* change syntax
* make style
1 parent d7a1a03 commit cc5b31f

239 files changed: +1948 / -2657 lines changed


docs/source/en/api/configuration.md

Lines changed: 2 additions & 5 deletions

@@ -14,11 +14,8 @@ specific language governing permissions and limitations under the License.
 
 Schedulers from [`~schedulers.scheduling_utils.SchedulerMixin`] and models from [`ModelMixin`] inherit from [`ConfigMixin`] which stores all the parameters that are passed to their respective `__init__` methods in a JSON-configuration file.
 
-<Tip>
-
-To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log-in with `hf auth login`.
-
-</Tip>
+> [!TIP]
+> To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log-in with `hf auth login`.
 
 ## ConfigMixin
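For context on what this page documents: `ConfigMixin` records every argument passed to a model's or scheduler's `__init__` in a JSON config. A minimal sketch of that behavior (the checkpoint id is an illustrative assumption, not part of this commit):

```python
from diffusers import DDIMScheduler

# Any ConfigMixin subclass stores its __init__ arguments in a JSON config file.
scheduler = DDIMScheduler.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # assumed example checkpoint
    subfolder="scheduler",
)

# The stored parameters are exposed as a frozen, dict-like `config` attribute.
print(scheduler.config.num_train_timesteps)

# save_config writes the same parameters back out as scheduler_config.json.
scheduler.save_config("./my-scheduler")
```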

docs/source/en/api/loaders/ip_adapter.md

Lines changed: 2 additions & 5 deletions

@@ -14,11 +14,8 @@ specific language governing permissions and limitations under the License.
 
 [IP-Adapter](https://hf.co/papers/2308.06721) is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder.
 
-<Tip>
-
-Learn how to load an IP-Adapter checkpoint and image in the IP-Adapter [loading](../../using-diffusers/loading_adapters#ip-adapter) guide, and you can see how to use it in the [usage](../../using-diffusers/ip_adapter) guide.
-
-</Tip>
+> [!TIP]
+> Learn how to load an IP-Adapter checkpoint and image in the IP-Adapter [loading](../../using-diffusers/loading_adapters#ip-adapter) guide, and you can see how to use it in the [usage](../../using-diffusers/ip_adapter) guide.
 
 ## IPAdapterMixin
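A rough usage sketch of the `IPAdapterMixin` API described above (the checkpoint ids and the reference-image URL are assumptions for illustration):

```python
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # assumed base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# load_ip_adapter attaches the image-prompt adapter and its image encoder to the pipeline.
pipeline.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipeline.set_ip_adapter_scale(0.6)

# The reference image conditions generation alongside the text prompt.
image = load_image("https://example.com/reference.png")  # hypothetical URL
result = pipeline(prompt="a cat, best quality", ip_adapter_image=image).images[0]
```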

docs/source/en/api/loaders/lora.md

Lines changed: 2 additions & 5 deletions

@@ -33,11 +33,8 @@ LoRA is a fast and lightweight training method that inserts and trains a signifi
 - [`QwenImageLoraLoaderMixin`] provides similar functions for [Qwen Image](https://huggingface.co/docs/diffusers/main/en/api/pipelines/qwen)
 - [`LoraBaseMixin`] provides a base class with several utility methods to fuse, unfuse, unload, LoRAs and more.
 
-<Tip>
-
-To learn more about how to load LoRA weights, see the [LoRA](../../using-diffusers/loading_adapters#lora) loading guide.
-
-</Tip>
+> [!TIP]
+> To learn more about how to load LoRA weights, see the [LoRA](../../using-diffusers/loading_adapters#lora) loading guide.
 
 ## LoraBaseMixin
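A short sketch of the loader and `LoraBaseMixin` utilities this page covers (the base checkpoint, LoRA repo id, and adapter name are placeholders):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # assumed base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The pipeline-specific LoRA loader mixins expose load_lora_weights/unload_lora_weights.
pipeline.load_lora_weights("some-user/some-lora", adapter_name="my_lora")  # placeholder repo id

# LoraBaseMixin utilities: merge the LoRA into the base weights, undo it, or drop it entirely.
pipeline.fuse_lora()
pipeline.unfuse_lora()
pipeline.unload_lora_weights()
```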

docs/source/en/api/loaders/peft.md

Lines changed: 2 additions & 5 deletions

@@ -14,11 +14,8 @@ specific language governing permissions and limitations under the License.
 
 Diffusers supports loading adapters such as [LoRA](../../using-diffusers/loading_adapters) with the [PEFT](https://huggingface.co/docs/peft/index) library with the [`~loaders.peft.PeftAdapterMixin`] class. This allows modeling classes in Diffusers like [`UNet2DConditionModel`], [`SD3Transformer2DModel`] to operate with an adapter.
 
-<Tip>
-
-Refer to the [Inference with PEFT](../../tutorials/using_peft_for_inference.md) tutorial for an overview of how to use PEFT in Diffusers for inference.
-
-</Tip>
+> [!TIP]
+> Refer to the [Inference with PEFT](../../tutorials/using_peft_for_inference.md) tutorial for an overview of how to use PEFT in Diffusers for inference.
 
 ## PeftAdapterMixin
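A sketch of the PEFT-backed adapter handling described above, assuming two already-trained LoRAs (repo ids and adapter names are placeholders):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # assumed base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Each load_lora_weights call registers a named PEFT adapter on the underlying models.
pipeline.load_lora_weights("some-user/style-lora", adapter_name="style")      # placeholder
pipeline.load_lora_weights("some-user/subject-lora", adapter_name="subject")  # placeholder

# Mix adapters with per-adapter weights, or temporarily switch them off and back on.
pipeline.set_adapters(["style", "subject"], adapter_weights=[0.8, 0.6])
pipeline.disable_lora()
pipeline.enable_lora()
```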

docs/source/en/api/loaders/textual_inversion.md

Lines changed: 2 additions & 5 deletions

@@ -16,11 +16,8 @@ Textual Inversion is a training method for personalizing models by learning new
 
 [`TextualInversionLoaderMixin`] provides a function for loading Textual Inversion embeddings from Diffusers and Automatic1111 into the text encoder and loading a special token to activate the embeddings.
 
-<Tip>
-
-To learn more about how to load Textual Inversion embeddings, see the [Textual Inversion](../../using-diffusers/loading_adapters#textual-inversion) loading guide.
-
-</Tip>
+> [!TIP]
+> To learn more about how to load Textual Inversion embeddings, see the [Textual Inversion](../../using-diffusers/loading_adapters#textual-inversion) loading guide.
 
 ## TextualInversionLoaderMixin
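A minimal sketch of `TextualInversionLoaderMixin` usage (the embedding repo and its `<cat-toy>` activation token follow the common sd-concepts-library example and are assumptions here, not part of this commit):

```python
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # assumed base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# load_textual_inversion adds the learned embedding and its activation token
# to the tokenizer and text encoder.
pipeline.load_textual_inversion("sd-concepts-library/cat-toy")  # assumed example embedding

# The special token in the prompt activates the learned concept.
image = pipeline("a photo of a <cat-toy> on a beach").images[0]
```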

docs/source/en/api/loaders/transformer_sd3.md

Lines changed: 2 additions & 5 deletions

@@ -16,11 +16,8 @@ This class is useful when *only* loading weights into a [`SD3Transformer2DModel`
 
 The [`SD3Transformer2DLoadersMixin`] class currently only loads IP-Adapter weights, but will be used in the future to save weights and load LoRAs.
 
-<Tip>
-
-To learn more about how to load LoRA weights, see the [LoRA](../../using-diffusers/loading_adapters#lora) loading guide.
-
-</Tip>
+> [!TIP]
+> To learn more about how to load LoRA weights, see the [LoRA](../../using-diffusers/loading_adapters#lora) loading guide.
 
 ## SD3Transformer2DLoadersMixin

docs/source/en/api/loaders/unet.md

Lines changed: 2 additions & 5 deletions

@@ -16,11 +16,8 @@ Some training methods - like LoRA and Custom Diffusion - typically target the UN
 
 The [`UNet2DConditionLoadersMixin`] class provides functions for loading and saving weights, fusing and unfusing LoRAs, disabling and enabling LoRAs, and setting and deleting adapters.
 
-<Tip>
-
-To learn more about how to load LoRA weights, see the [LoRA](../../using-diffusers/loading_adapters#lora) loading guide.
-
-</Tip>
+> [!TIP]
+> To learn more about how to load LoRA weights, see the [LoRA](../../using-diffusers/loading_adapters#lora) loading guide.
 
 ## UNet2DConditionLoadersMixin

docs/source/en/api/models/consistency_decoder_vae.md

Lines changed: 2 additions & 5 deletions

@@ -16,11 +16,8 @@ Consistency decoder can be used to decode the latents from the denoising UNet in
 
 The original codebase can be found at [openai/consistencydecoder](https://github.com/openai/consistencydecoder).
 
-<Tip warning={true}>
-
-Inference is only supported for 2 iterations as of now.
-
-</Tip>
+> [!WARNING]
+> Inference is only supported for 2 iterations as of now.
 
 The pipeline could not have been contributed without the help of [madebyollin](https://github.com/madebyollin) and [mrsteyk](https://github.com/mrsteyk) from [this issue](https://github.com/openai/consistencydecoder/issues/1).
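A sketch of how the consistency decoder is typically dropped in as the pipeline's VAE (the checkpoint ids are assumptions for illustration):

```python
import torch
from diffusers import ConsistencyDecoderVAE, StableDiffusionPipeline

# Swap the consistency decoder in as the VAE so it decodes the UNet's denoised latents.
vae = ConsistencyDecoderVAE.from_pretrained(
    "openai/consistency-decoder", torch_dtype=torch.float16  # assumed checkpoint id
)
pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # assumed base checkpoint
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

image = pipeline("a horse in a field", generator=torch.manual_seed(0)).images[0]
```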

docs/source/en/api/models/transformer2d.md

Lines changed: 2 additions & 5 deletions

@@ -22,11 +22,8 @@ When the input is **continuous**:
 
 When the input is **discrete**:
 
-<Tip>
-
-It is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised image don't contain a prediction for the masked pixel because the unnoised image cannot be masked.
-
-</Tip>
+> [!TIP]
+> It is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised image don't contain a prediction for the masked pixel because the unnoised image cannot be masked.
 
 1. Convert input (classes of latent pixels) to embeddings and apply positional embeddings.
 2. Apply the Transformer blocks in the standard way.

docs/source/en/api/outputs.md

Lines changed: 2 additions & 5 deletions

@@ -39,11 +39,8 @@ For instance, retrieving an image by indexing into it returns the tuple `(output
 outputs[:1]
 ```
 
-<Tip>
-
-To check a specific pipeline or model output, refer to its corresponding API documentation.
-
-</Tip>
+> [!TIP]
+> To check a specific pipeline or model output, refer to its corresponding API documentation.
 
 ## BaseOutput
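A sketch of the output behavior this page describes, using a text-to-image pipeline as an assumed example (the checkpoint id is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

outputs = pipeline("an astronaut riding a horse")

# BaseOutput subclasses allow attribute and key access to their fields...
image = outputs.images[0]
same_image = outputs["images"][0]

# ...while integer indexing or slicing returns a plain tuple of the non-None fields.
first_field = outputs[0]   # the images field
subset = outputs[:1]       # a tuple containing only the first field
```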
