tfm.optimization.PolynomialDecayWithOffset


A LearningRateSchedule that uses a polynomial decay schedule.

Inherits From: base_lr_class

Main aliases: tfm.optimization.lr_schedule.PolynomialDecayWithOffset

tfm.optimization.PolynomialDecayWithOffset(
    offset=0, **kwargs
)
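
The base_lr_class it wraps is, going by the inherited docstring and the example below, tf.keras.optimizers.schedules.PolynomialDecay; everything in **kwargs is passed through to it. The only new argument is offset. As a sketch of the assumed semantics (this page does not spell them out), the decay clock starts at offset rather than at step 0, i.e. the base schedule is evaluated at the shifted step:

# Sketch of the assumed offset semantics: the base schedule is evaluated at
# step - offset, so the decay effectively begins once the optimizer step
# passes `offset`.
def with_offset(base_schedule, offset):
  def schedule(step):
    return base_schedule(step - offset)
  return schedule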

It is commonly observed that a monotonically decreasing learning rate, whose degree of change is carefully chosen, results in a better-performing model. This schedule applies a polynomial decay function to an optimizer step, given a provided initial_learning_rate, to reach an end_learning_rate in the given decay_steps.

It requires a step value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step.

The schedule is a 1-arg callable that produces a decayed learning rate when passed the current optimizer step. This can be useful for changing the learning rate value across different invocations of optimizer functions. It is computed as:

def decayed_learning_rate(step):
  step = min(step, decay_steps)
  return ((initial_learning_rate - end_learning_rate) *
          (1 - step / decay_steps) ** power
         ) + end_learning_rate
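
For concreteness, here is the formula evaluated halfway through the decay used in the fit example below (plain Python; all names are local stand-ins, not part of the API):

initial_learning_rate, end_learning_rate = 0.1, 0.01
decay_steps, power = 10000, 0.5
step = min(5000, decay_steps)  # halfway through the decay
lr = ((initial_learning_rate - end_learning_rate) *
      (1 - step / decay_steps) ** power) + end_learning_rate
print(lr)  # ~0.0736: sqrt(1/2) of the 0.09 gap still sits above 0.01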

If cycle is True, then a multiple of decay_steps is used: the first one that is larger than step.

def decayed_learning_rate(step):
  effective_decay_steps = decay_steps * ceil(step / decay_steps)
  return ((initial_learning_rate - end_learning_rate) *
          (1 - step / effective_decay_steps) ** power
         ) + end_learning_rate
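
A quick check of the cycle behaviour with the same stand-in values: at step 15000 with decay_steps=10000, the effective horizon stretches to 20000, so the rate is partway down a fresh decay instead of being clamped at end_learning_rate:

from math import ceil

initial_learning_rate, end_learning_rate = 0.1, 0.01
decay_steps, power = 10000, 0.5
step = 15000
effective_decay_steps = decay_steps * ceil(step / decay_steps)  # 20000
lr = ((initial_learning_rate - end_learning_rate) *
      (1 - step / effective_decay_steps) ** power) + end_learning_rate
print(lr)  # 0.055, versus the 0.01 floor with cycle=False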

You can pass this schedule directly into a tf.keras.optimizers.Optimizer as the learning rate. Example: fit a model while decaying from 0.1 to 0.01 over 10000 steps using a square-root decay (i.e. power=0.5):

starter_learning_rate = 0.1
end_learning_rate = 0.01
decay_steps = 10000
learning_rate_fn = tf.keras.optimizers.schedules.PolynomialDecay(
    starter_learning_rate,
    decay_steps,
    end_learning_rate,
    power=0.5)
model.compile(optimizer=tf.keras.optimizers.SGD(
                  learning_rate=learning_rate_fn),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(data, labels, epochs=5)
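
The example above uses the plain Keras schedule whose docstring this page inherits. A hedged sketch of the same setup through this class, assuming the keyword arguments are forwarded to the wrapped polynomial decay and that the package is imported as import tensorflow_models as tfm:

learning_rate_fn = tfm.optimization.PolynomialDecayWithOffset(
    offset=1000,                 # assumed: hold the decay back 1000 steps
    initial_learning_rate=0.1,
    decay_steps=10000,
    end_learning_rate=0.01,
    power=0.5)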

The learning rate schedule is also serializable and deserializable using tf.keras.optimizers.schedules.serialize and tf.keras.optimizers.schedules.deserialize.
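
As an illustration of that round trip (a sketch; learning_rate_fn is the schedule built above, and a non-built-in class like this one presumably has to be supplied via custom_objects):

config = tf.keras.optimizers.schedules.serialize(learning_rate_fn)
restored = tf.keras.optimizers.schedules.deserialize(
    config,
    custom_objects={'PolynomialDecayWithOffset':
                        tfm.optimization.PolynomialDecayWithOffset})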

Returns

A 1-arg callable learning rate schedule that takes the current optimizer step and outputs the decayed learning rate, a scalar Tensor of the same type as initial_learning_rate.

Child Classes

class base_lr_class

Methods

from_config

@classmethod
from_config(
    config
)

Instantiates a LearningRateSchedule from its config.

Args
config: Output of get_config().

Returns
A LearningRateSchedule instance.

get_config

get_config()

Returns the config of the learning rate schedule as a dictionary; this is the dictionary that from_config accepts.

__call__


__call__(
    step
)
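
Calls the schedule with the current optimizer step and returns the learning rate. Continuing the hedged offset sketch above (offset=1000, decay from 0.1 to 0.01 over 10000 steps):

lr_start = learning_rate_fn(1000)  # shifted step 0 -> 0.1 (assumed semantics)
lr_mid = learning_rate_fn(6000)    # shifted step 5000 -> ~0.0736 (assumed)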
