feat(scheduler): Add scale_betas_for_timesteps to DDPMScheduler #12341

Open
seotaekkong wants to merge 1 commit into huggingface:main from seotaekkong:feature/scale-betas-for-timesteps

Conversation


@seotaekkong commented Sep 17, 2025

What does this PR do?

This PR introduces a new boolean flag, scale_betas_for_timesteps, to the DDPMScheduler. This flag provides an optional, more robust way to handle the beta schedule when num_train_timesteps is set to a value other than the default of 1000.

Motivation and Context

The default parameters for the DDPMScheduler (beta_start=0.0001, beta_end=0.02) are implicitly tuned for num_train_timesteps=1000. This creates a potential "usability trap" for practitioners who may change the number of training timesteps without realizing they should also adjust the beta range.

  • If a user sets num_train_timesteps to a large value (e.g., 4000), the linear beta schedule becomes too shallow, and noise is added too slowly.
  • If num_train_timesteps is set to a small value (e.g., 200), the schedule becomes too steep, and noise is added too aggressively.

Both scenarios can lead to suboptimal training performance that is difficult to debug.

Proposed Solution

This PR introduces an opt-in solution to this problem.

  • A new flag, scale_betas_for_timesteps, is added to the scheduler's __init__ method.
  • It defaults to False to ensure 100% backward compatibility with existing code.
  • When set to True, it automatically scales the beta_end parameter using a simple heuristic (beta_end * (1000 / num_train_timesteps)). This ensures that the overall noise schedule remains sensible and robust, regardless of the number of training steps chosen by the user.
  • The scaled beta_end is used by schedules dependent on it (e.g., linear, scaled_linear), while schedules that do not use this parameter (e.g., squaredcos_cap_v2) are naturally unaffected.
This change makes the scheduler more intuitive and helps prevent common configuration errors; a minimal sketch of the scaling logic is shown below.
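To make the proposal concrete, here is a minimal sketch of the scaling logic, written as a standalone helper rather than the actual DDPMScheduler code. Only the scale_betas_for_timesteps branch is new; the linear and scaled_linear construction mirrors the existing scheduler defaults.

```python
# Minimal sketch (not the actual diffusers code) of how the proposed flag could
# adjust the beta schedule before it is built. Parameter names other than
# scale_betas_for_timesteps follow the existing DDPMScheduler defaults.
import torch

def build_betas(
    num_train_timesteps: int = 1000,
    beta_start: float = 0.0001,
    beta_end: float = 0.02,
    beta_schedule: str = "linear",
    scale_betas_for_timesteps: bool = False,  # proposed flag, off by default
) -> torch.Tensor:
    if scale_betas_for_timesteps:
        # Heuristic from this PR: keep the total amount of injected noise
        # roughly constant when the number of timesteps changes.
        beta_end = beta_end * (1000 / num_train_timesteps)
    if beta_schedule == "linear":
        return torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32)
    if beta_schedule == "scaled_linear":
        return torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2
    # Schedules that do not depend on beta_end (e.g., squaredcos_cap_v2) are unaffected.
    raise NotImplementedError(f"{beta_schedule} is not covered by this sketch")
```

Under the proposal, a user would opt in with something like DDPMScheduler(num_train_timesteps=200, scale_betas_for_timesteps=True) and keep the default beta range untouched.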

Fixes # (issue)

Before submitting

Who can review?

As suggested by the contribution guide for schedulers: @yiyixuxu

@seotaekkong (Author) commented

Hi, I wanted to add a quick comment with some further justification for this change. The PR addresses a subtle but critical issue where changing num_train_timesteps can cause the asymptotic variance of the forward process to deviate from unity, breaking the assumption that $x_T$ matches the standard Gaussian prior that the reverse process starts from.

Problem

Using the standard notation

$$\bar{\alpha}_t = \prod_{i=1}^t (1 - \beta_i)$$

the variance of the final state $x_T$ (conditioned on a data sample $x_0$) is given by $\operatorname{Var}(x_T \mid x_0) = 1 - \bar{\alpha}_T$. For the sampling process to match the standard Gaussian prior $\mathcal{N}(0, 1)$, we require $\operatorname{Var}(x_T \mid x_0) \approx 1$. The value $\bar{\alpha}_T$ is controlled by the sum of betas, since $-\log \bar{\alpha}_T \approx \sum_{i=1}^T \beta_i$. If this sum is too small, $\bar{\alpha}_T$ will not be close to zero and the variance will be incorrect.
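For completeness, the approximation above follows from the first-order expansion $\log(1 - \beta) \approx -\beta$ for small $\beta$:

$$\log \bar{\alpha}_T = \sum_{i=1}^{T} \log(1 - \beta_i) \approx -\sum_{i=1}^{T} \beta_i \quad\Longrightarrow\quad \bar{\alpha}_T \approx \exp\!\Big(-\sum_{i=1}^{T} \beta_i\Big).$$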

How the Current Implementation Fails

The current implementation leads to an inconsistent $\sum_i \beta_i$ when $T$ is changed, which breaks the unit variance assumption.

  • Default T = 1000: $\sum_i \beta_i \approx 10.05$ and $\operatorname{Var}(x_T \mid x_0) \approx 1$.
  • Naive change to T = 200: $\sum_i \beta_i \approx 2.01$, which is far too small. This results in $\operatorname{Var}(x_T \mid x_0) \approx 0.87$, so the prior is incorrect, which can cause significant issues during the reverse sampling process.
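These figures are easy to reproduce. Here is a small standalone check (plain NumPy, not scheduler code) that computes the sum of betas and the resulting $\operatorname{Var}(x_T \mid x_0)$ for a linear schedule with the default beta range:

```python
# Standalone check of the numbers above for a plain linear beta schedule.
import numpy as np

def summarize(T: int, beta_start: float = 0.0001, beta_end: float = 0.02):
    betas = np.linspace(beta_start, beta_end, T)
    alpha_bar_T = np.prod(1.0 - betas)
    return betas.sum(), 1.0 - alpha_bar_T  # (sum of betas, Var(x_T | x_0))

print(summarize(1000))  # ~ (10.05, 1.00): prior is effectively standard Gaussian
print(summarize(200))   # ~ (2.01, 0.87): prior variance is noticeably off
```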

How the Proposed Fix Works

The proposed fix of scaling beta_end ensures the sum of betas remains approximately constant, thereby preserving the unit variance of the final state.

  • With T = 200, beta_end is scaled to 0.1 and $\sum_i \beta_i \approx 10.01$.
  • This ensures $\bar{\alpha}_T \approx 0$ and in turn $\operatorname{Var}(x_T \mid x_0) \approx 1$, preserving the mathematical integrity of the diffusion process.

When the flag is enabled, the scheduler therefore produces a theoretically sound noise schedule, preventing users from having to manually correct for variance issues when experimenting with different numbers of training timesteps.
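Running the same kind of check with the scaled beta_end confirms the numbers above (again a standalone snippet, not scheduler code):

```python
# With the proposed scaling, T = 200 gives beta_end = 0.02 * (1000 / 200) = 0.1.
import numpy as np

betas = np.linspace(0.0001, 0.1, 200)
print(betas.sum(), 1.0 - np.prod(1.0 - betas))  # ~ 10.01 and ~ 1.00
```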
