tfm.optimization.PowerAndLinearDecay
Learning rate schedule with a power decay, multiplied by a linear decay at the end.
```
tfm.optimization.PowerAndLinearDecay(
    initial_learning_rate: float,
    total_decay_steps: int,
    power: float = 1.0,
    linear_decay_fraction: float = 0.1,
    offset: int = 0,
    name: str = 'PowerAndLinearDecay'
)
```
The schedule has the following behavior. Let `offset_step = step - offset`, and write `lr` for `initial_learning_rate`. (A plain-Python sketch of this rule follows the list.)

1) If `offset_step < 0`, the actual learning rate equals `initial_learning_rate`.
2) If `offset_step <= total_decay_steps * (1 - linear_decay_fraction)`, the actual learning rate equals `lr * offset_step^power`.
3) If `total_decay_steps * (1 - linear_decay_fraction) < offset_step < total_decay_steps`, the actual learning rate equals `lr * offset_step^power * (total_decay_steps - offset_step) / (total_decay_steps * linear_decay_fraction)`.
4) If `offset_step >= total_decay_steps`, the actual learning rate equals zero.
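The four-case rule above can be written as a minimal plain-Python sketch (illustrative only, not the class's actual TensorFlow implementation; the function name is hypothetical). Note that `power` is commonly negative in practice, e.g. `-0.5` for inverse-square-root decay:

```python
def power_and_linear_decay_lr(step, initial_learning_rate, total_decay_steps,
                              power=1.0, linear_decay_fraction=0.1, offset=0):
    """Pure-Python version of the four-case rule documented above."""
    offset_step = step - offset
    if offset_step < 0:                    # case 1: before the offset
        return initial_learning_rate
    if offset_step >= total_decay_steps:   # case 4: fully decayed
        return 0.0
    lr = initial_learning_rate * offset_step ** power  # cases 2 and 3
    linear_start = total_decay_steps * (1.0 - linear_decay_fraction)
    if offset_step > linear_start:         # case 3: linear tail to zero
        lr *= (total_decay_steps - offset_step) / (
            total_decay_steps * linear_decay_fraction)
    return lr
```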
| Args | |
|---|---|
| `initial_learning_rate` | The initial learning rate. |
| `total_decay_steps` | The total number of steps for the power + linear decay. |
| `power` | The order of the polynomial. |
| `linear_decay_fraction` | Over the final `linear_decay_fraction` of the total decay steps, the learning rate is additionally multiplied by a linear decay. |
| `offset` | The offset applied to steps. |
| `name` | Optional name of the learning rate schedule. |
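A usage sketch (the hyperparameter values are illustrative only; `tfm` is assumed to be the `tensorflow_models` package, and any Keras optimizer that accepts a `LearningRateSchedule` would work):

```python
import tensorflow as tf
import tensorflow_models as tfm

# Illustrative: inverse-square-root decay over 10,000 steps, with a
# linear ramp to zero over the final 10% of those steps.
schedule = tfm.optimization.PowerAndLinearDecay(
    initial_learning_rate=1.0,
    total_decay_steps=10_000,
    power=-0.5,
    linear_decay_fraction=0.1,
)
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule)
```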
Methods
from_config
```
@classmethod
from_config(
    config
)
```
Instantiates a LearningRateSchedule from its config.
| Args | |
|---|---|
| `config` | Output of `get_config()`. |
| Returns | |
|---|---|
| A `LearningRateSchedule` instance. | |
get_config
```
get_config()
```
Get the configuration of the learning rate schedule.
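A brief round-trip sketch (reusing the `schedule` object from the usage example above; this assumes the config is a plain dict of constructor arguments):

```python
# Serialize the schedule to a config dict, then rebuild it.
config = schedule.get_config()
restored = tfm.optimization.PowerAndLinearDecay.from_config(config)
assert restored.get_config() == config
```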
__call__
```
__call__(
    step
)
```
Call self as a function.
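For example, reusing the `schedule` object from above (the step may be a Python int or a scalar tensor):

```python
# Evaluate the schedule at a given training step.
lr_at_step = schedule(500)
print(float(lr_at_step))
```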