
tensorplay.optim.lr_scheduler

Classes

class CosineAnnealingLR [source]

```python
CosineAnnealingLR(optimizer, t_max, eta_min=0, last_epoch=-1, verbose=False)
```

Bases: _LRScheduler

Set the learning rate of each parameter group using a cosine annealing schedule, where η_max is set to the initial lr and T_max is the number of epochs over which to decay:

    η_t = η_min + (1/2) · (η_max − η_min) · (1 + cos(π · t / T_max))

When last_epoch=-1, the schedule starts from the optimizer's initial lr.

Args

  • optimizer (Optimizer): Wrapped optimizer.
  • t_max (int): Maximum number of epochs.
  • eta_min (float): Minimum learning rate. Default: 0.
  • last_epoch (int): The index of last epoch. Default: -1.
  • verbose (bool): If True, prints a message to stdout for each update. Default: False.
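
To make the schedule concrete, here is a small, self-contained sketch of the closed-form formula above in plain Python; the values are illustrative, and eta_max plays the role of the initial lr.

```python
import math

def cosine_lr(t, t_max, eta_max, eta_min=0.0):
    # Closed form of the cosine annealing schedule described above.
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t / t_max))

# The lr starts at eta_max and decays smoothly to eta_min over t_max epochs.
for t in (0, 25, 50, 75, 100):
    print(t, round(cosine_lr(t, t_max=100, eta_max=0.1, eta_min=0.001), 5))
```
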
Methods

__init__(self, optimizer, t_max, eta_min=0, last_epoch=-1, verbose=False) [source]

Initialize self. See help(type(self)) for accurate signature.


get_last_lr(self) -> List[float] [source]

Return the learning rates last computed by the scheduler.

Raises

  • RuntimeError: If the scheduler has not stepped yet.

get_lr(self) [source]

Compute the learning rate for the current epoch.

Returns

List[float]: A learning rate for each parameter group (same length as param_groups).

load_state_dict(self, state_dict: dict[str, typing.Any]) -> None [source]

Loads the scheduler state.

Args

  • state_dict (dict): Scheduler state. Should be an object returned from a call to state_dict.

Raises

  • ValueError: If the number of base_lrs in state_dict does not match the current optimizer's param_groups.

state_dict(self) -> dict[str, typing.Any] [source]

Returns the state of the scheduler as a dict.

Includes all attributes except 'optimizer' to avoid circular references.
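
Because the optimizer is excluded, restoring from a checkpoint means rebuilding the scheduler around an already-restored optimizer and then loading the saved state. A minimal sketch of the round trip, using hypothetical names (`opt` is a previously constructed tensorplay optimizer; `sched.pkl` is an arbitrary path):

```python
import pickle

# Save: the state is a plain dict, so any serializer works.
with open("sched.pkl", "wb") as f:
    pickle.dump(scheduler.state_dict(), f)

# Restore: recreate the scheduler first, then load the saved state so that
# last_epoch and the stored base lrs line up with the wrapped optimizer.
scheduler = CosineAnnealingLR(opt, t_max=100)  # `opt` is hypothetical
with open("sched.pkl", "rb") as f:
    scheduler.load_state_dict(pickle.load(f))
```
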


step(self, epoch: Optional[int] = None) -> None [source]

Step the scheduler to update learning rates.

Args

  • epoch (Optional[int]): The epoch index to set. If None, increment last_epoch by 1.

Raises

  • ValueError: If epoch is not an integer, is negative, or is less than the current last_epoch.

class ExponentialLR [source]

```python
ExponentialLR(optimizer, gamma, last_epoch=-1, verbose=False)
```

Bases: _LRScheduler

Set the learning rate of each parameter group to the initial lr decayed by gamma every epoch. When last_epoch=-1, the schedule starts from the optimizer's initial lr.

Args

  • optimizer (Optimizer): Wrapped optimizer.
  • gamma (float): Multiplicative factor of learning rate decay.
  • last_epoch (int): The index of last epoch. Default: -1.
  • verbose (bool): If True, prints a message to stdout for each update. Default: False.
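
The decay has a simple closed form, lr_t = initial_lr · gamma^t, sketched below in plain Python with illustrative values:

```python
base_lr, gamma = 0.1, 0.95

# ExponentialLR closed form: the lr at epoch t is base_lr * gamma ** t.
for t in range(5):
    print(t, round(base_lr * gamma ** t, 6))
```
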
Methods

__init__(self, optimizer, gamma, last_epoch=-1, verbose=False) [source]

Initialize self. See help(type(self)) for accurate signature.


get_last_lr(self) -> List[float] [source]

Return the learning rates last computed by the scheduler.

Raises

  • RuntimeError: If the scheduler has not stepped yet.

get_lr(self) [source]

Compute the learning rate for the current epoch.

Returns

List[float]: A learning rate for each parameter group (same length as param_groups).

load_state_dict(self, state_dict: dict[str, typing.Any]) -> None [source]

Loads the scheduler state.

Args

  • state_dict (dict): Scheduler state. Should be an object returned from a call to state_dict.

Raises

  • ValueError: If the number of base_lrs in state_dict does not match the current optimizer's param_groups.

state_dict(self) -> dict[str, typing.Any] [source]

Returns the state of the scheduler as a dict.

Includes all attributes except 'optimizer' to avoid circular references.


step(self, epoch: Optional[int] = None) -> None [source]

Step the scheduler to update learning rates.

Args

  • epoch (Optional[int]): The epoch index to set. If None, increment last_epoch by 1.

Raises

  • ValueError: If epoch is not an integer, is negative, or is less than the current last_epoch.

class MultiStepLR [source]

```python
MultiStepLR(optimizer, milestones, gamma=0.1, last_epoch=-1, verbose=False)
```

Bases: _LRScheduler

Set the learning rate of each parameter group to the initial lr decayed by gamma once the epoch count reaches one of the milestones. When last_epoch=-1, the schedule starts from the optimizer's initial lr.

Args

  • optimizer (Optimizer): Wrapped optimizer.
  • milestones (set of int): Set of epoch indices. Must be increasing.
  • gamma (float): Multiplicative factor of learning rate decay. Default: 0.1.
  • last_epoch (int): The index of last epoch. Default: -1.
  • verbose (bool): If True, prints a message to stdout for each update. Default: False.
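
The resulting schedule is piecewise constant: the lr is multiplied by gamma each time a milestone is reached. A self-contained sketch of the equivalent closed form, with illustrative values:

```python
from bisect import bisect_right

base_lr, gamma = 0.1, 0.1
milestones = [30, 80]

# The lr at epoch t is base_lr * gamma ** (number of milestones reached by t).
for t in (0, 29, 30, 79, 80, 100):
    print(t, base_lr * gamma ** bisect_right(milestones, t))
```
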
Methods

__init__(self, optimizer, milestones, gamma=0.1, last_epoch=-1, verbose=False) [source]

Initialize self. See help(type(self)) for accurate signature.


get_last_lr(self) -> List[float] [source]

Return the learning rates last computed by the scheduler.

Raises

  • RuntimeError: If the scheduler has not stepped yet.

get_lr(self) [source]

Compute the learning rate for the current epoch.

Returns

List[float]: A learning rate for each parameter group (same length as param_groups).

load_state_dict(self, state_dict: dict[str, typing.Any]) -> None [source]

Loads the scheduler state.

Args

  • state_dict (dict): Scheduler state. Should be an object returned from a call to state_dict.

Raises

  • ValueError: If the number of base_lrs in state_dict does not match the current optimizer's param_groups.

state_dict(self) -> dict[str, typing.Any] [source]

Returns the state of the scheduler as a dict.

Includes all attributes except 'optimizer' to avoid circular references.


step(self, epoch: Optional[int] = None) -> None [source]

Step the scheduler to update learning rates.

Args

  • epoch (Optional[int]): The epoch index to set. If None, increment last_epoch by 1.

Raises

  • ValueError: If epoch is not an integer, is negative, or is less than the current last_epoch.

class Optimizer [source]

```python
Optimizer(params, defaults)
```

Base class for optimizers.

Args

  • params (iterable): an iterable of Tensors or dicts. Specifies which Tensors should be optimized.
  • defaults (dict): a dict containing default values of optimization options (used when a parameter group doesn't specify them); see the sketch below.
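
To illustrate how defaults interact with per-group options, here is a toy, framework-free sketch of the param-group merging described above (plain strings stand in for Tensors to keep it runnable):

```python
defaults = {"lr": 0.1, "momentum": 0.9}
groups_in = [
    {"params": ["w1", "w2"]},        # inherits lr and momentum from defaults
    {"params": ["w3"], "lr": 0.01},  # overrides lr, inherits momentum
]

# Options missing from a group are filled in from `defaults`.
param_groups = [{**defaults, **group} for group in groups_in]
for g in param_groups:
    print(g["params"], g["lr"], g["momentum"])
```
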
Methods

__init__(self, params, defaults) [source]

Initialize self. See help(type(self)) for accurate signature.


add_param_group(self, param_group) [source]


load_state_dict(self, state_dict) [source]


state_dict(self) [source]


step(self, closure=None) [source]


zero_grad(self, set_to_none=False) [source]


class ReduceLROnPlateau [source]

```python
ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08, verbose=False)
```

Reduce the learning rate when a metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This scheduler monitors a metric and, if no improvement is seen for a 'patience' number of epochs, reduces the learning rate.
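
Unlike the epoch-based schedulers in this module, step() is called with the monitored metric (see step(self, metrics, epoch=None) below). A typical loop, sketched with hypothetical names (`opt` is a tensorplay optimizer; `train_one_epoch` and `validate` are placeholders):

```python
scheduler = ReduceLROnPlateau(opt, mode="min", factor=0.1, patience=10)

for epoch in range(100):
    train_one_epoch()         # hypothetical training step
    val_loss = validate()     # the quantity being monitored
    scheduler.step(val_loss)  # lr is multiplied by `factor` after `patience`
                              # epochs without sufficient improvement
```
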

Methods

__init__(self, optimizer, mode='min', factor=0.1, patience=10, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08, verbose=False) [source]

Initialize self. See help(type(self)) for accurate signature.


is_better(self, a, best) [source]


load_state_dict(self, state_dict) [source]


state_dict(self) [source]


step(self, metrics, epoch=None) [source]


class StepLR [source]

```python
StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1, verbose=False)
```

Bases: _LRScheduler

Set the learning rate of each parameter group to the initial lr decayed by gamma every step_size epochs. When last_epoch=-1, the schedule starts from the optimizer's initial lr.

Args

  • optimizer (Optimizer): Wrapped optimizer.
  • step_size (int): Period of learning rate decay.
  • gamma (float): Multiplicative factor of learning rate decay. Default: 0.1.
  • last_epoch (int): The index of last epoch. Default: -1.
  • verbose (bool): If True, prints a message to stdout for each update. Default: False.
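
The schedule is a staircase: the lr drops by a factor of gamma every step_size epochs. Equivalent closed form, sketched with illustrative values:

```python
base_lr, gamma, step_size = 0.1, 0.1, 30

# The lr at epoch t is base_lr * gamma ** (t // step_size).
for t in (0, 29, 30, 59, 60, 90):
    print(t, base_lr * gamma ** (t // step_size))
```
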
Methods

__init__(self, optimizer, step_size, gamma=0.1, last_epoch=-1, verbose=False) [source]

Initialize self. See help(type(self)) for accurate signature.


get_last_lr(self) -> List[float] [source]

Return the learning rates last computed by the scheduler.

Raises

  • RuntimeError: If the scheduler has not stepped yet.

get_lr(self) [source]

Compute the learning rate for the current epoch.

Returns

List[float]: A learning rate for each parameter group (same length as param_groups).

load_state_dict(self, state_dict: dict[str, typing.Any]) -> None [source]

Loads the scheduler state.

Args

  • state_dict (dict): Scheduler state. Should be an object returned from a call to state_dict.

Raises

  • ValueError: If the number of base_lrs in state_dict does not match the current optimizer's param_groups.

state_dict(self) -> dict[str, typing.Any] [source]

Returns the state of the scheduler as a dict.

Includes all attributes except 'optimizer' to avoid circular references.


step(self, epoch: Optional[int] = None) -> None [source]

Step the scheduler to update learning rates.

Args

  • epoch (Optional[int]): The epoch index to set. If None, increment last_epoch by 1.

Raises

  • ValueError: If epoch is not an integer, is negative, or is less than the current last_epoch.
