tfm.optimization.PowerDecayWithOffset

Power learning rate decay with offset.

Main aliases

tfm.optimization.lr_schedule.PowerDecayWithOffset

tfm.optimization.PowerDecayWithOffset(
 initial_learning_rate: float,
 power: float = 1.0,
 offset: int = 0,
 pre_offset_learning_rate: float = 1000000.0,
 name: str = 'PowerDecayWithOffset'
)

The learning rate equals pre_offset_learning_rate while step < offset. Otherwise, it equals initial_learning_rate * (step - offset)^power.

Args

initial_learning_rate The initial learning rate.
power The order of the polynomial.
offset The offset when computing the power decay.
pre_offset_learning_rate The constant learning rate returned for steps before the offset.
name Optional name of the learning rate schedule.
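The piecewise rule above can be sketched in plain Python. This is a minimal reimplementation for illustration only; the power_decay_with_offset helper below is hypothetical and not part of the library, which computes the same quantity inside the schedule's __call__ using TensorFlow ops:

```python
def power_decay_with_offset(step: int,
                            initial_learning_rate: float,
                            power: float = 1.0,
                            offset: int = 0,
                            pre_offset_learning_rate: float = 1000000.0) -> float:
    """Mirrors the schedule's piecewise rule for a single step."""
    if step < offset:
        # Before the offset, the schedule returns the constant
        # pre_offset_learning_rate.
        return pre_offset_learning_rate
    # From the offset onward, decay as lr * (step - offset)^power.
    return initial_learning_rate * float(step - offset) ** power

# With power = -0.5 the rate decays like 1/sqrt(step - offset),
# an inverse-square-root schedule.
print(power_decay_with_offset(step=100, initial_learning_rate=1.0,
                              power=-0.5, offset=0))  # 0.1
```

Note that with a negative power the value at step == offset is undefined (0 raised to a negative exponent), so in practice the offset is paired with a pre_offset_learning_rate or a warmup that covers the early steps.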

Methods

from_config

@classmethod
from_config(
 config
)

Instantiates a LearningRateSchedule from its config.

Args
config Output of get_config().

Returns
A LearningRateSchedule instance.

get_config

get_config()

Get the configuration of the learning rate schedule.

__call__

__call__(
 step
)

Computes and returns the learning rate for the given training step.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Some content is licensed under the numpy license.

Last updated 2024-02-02 UTC.