core.modules.scheduler#

Copyright (c) Meta Platforms, Inc. and affiliates.

This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.

Classes#

  • CosineLRLambda

  • LRScheduler – Learning rate scheduler class for torch.optim learning rate schedulers

Functions#

  • warmup_lr_lambda(current_step, optim_config) – Returns a learning rate multiplier.

Module Contents#

core.modules.scheduler.warmup_lr_lambda(current_step: int, optim_config)#

Returns a learning rate multiplier. Until warmup_steps, the learning rate increases linearly to initial_lr; after that, it is multiplied by lr_gamma each time a milestone is crossed.
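A minimal sketch of this kind of multiplier, assuming optim_config carries warmup_steps, warmup_factor, lr_gamma, and lr_milestones (these key names are assumptions for illustration, not taken from this page):

```python
from bisect import bisect


def warmup_lr_lambda_sketch(current_step: int, optim_config: dict) -> float:
    """Illustrative multiplier: linear warmup, then step decay at milestones.

    Key names (warmup_steps, warmup_factor, lr_gamma, lr_milestones) are
    assumptions for this sketch and may differ from the real config.
    """
    warmup_steps = optim_config["warmup_steps"]
    if current_step <= warmup_steps:
        # Linearly ramp the multiplier from warmup_factor up to 1.0,
        # so the learning rate reaches initial_lr at the end of warmup.
        alpha = current_step / float(warmup_steps)
        return optim_config["warmup_factor"] * (1.0 - alpha) + alpha
    # After warmup, apply lr_gamma once for every milestone already crossed.
    n_crossed = bisect(optim_config["lr_milestones"], current_step)
    return optim_config["lr_gamma"] ** n_crossed
```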

class core.modules.scheduler.CosineLRLambda(warmup_epochs: int, warmup_factor: float, epochs: int, lr_min_factor: float)#
warmup_epochs#
lr_warmup_factor#
max_epochs#
lr_min_factor#
__call__(current_step: int) → float#
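The attributes above suggest a linear warmup followed by a cosine decay toward lr_min_factor. Below is a rough sketch of such a lambda paired with torch.optim.lr_scheduler.LambdaLR; the decay formula is an assumption inferred from the attribute names, not the verified implementation.

```python
import math

import torch


class CosineWarmupLambda:
    """Sketch of a warmup + cosine-decay multiplier with the same attributes."""

    def __init__(self, warmup_epochs: int, warmup_factor: float,
                 epochs: int, lr_min_factor: float) -> None:
        self.warmup_epochs = warmup_epochs
        self.lr_warmup_factor = warmup_factor
        self.max_epochs = epochs
        self.lr_min_factor = lr_min_factor

    def __call__(self, current_step: int) -> float:
        if current_step <= self.warmup_epochs:
            # Linear warmup from lr_warmup_factor to 1.0.
            alpha = current_step / float(self.warmup_epochs)
            return self.lr_warmup_factor * (1.0 - alpha) + alpha
        # Cosine decay from 1.0 down to lr_min_factor over the remaining steps.
        progress = (current_step - self.warmup_epochs) / max(
            1, self.max_epochs - self.warmup_epochs
        )
        progress = min(progress, 1.0)
        return self.lr_min_factor + 0.5 * (1.0 - self.lr_min_factor) * (
            1.0 + math.cos(math.pi * progress)
        )


# Usage with a plain torch optimizer; LambdaLR multiplies the base lr by the
# value returned for each step.
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=CosineWarmupLambda(10, 0.1, 100, 0.01)
)
```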
class core.modules.scheduler.LRScheduler(optimizer, config)#

Learning rate scheduler class for torch.optim learning rate schedulers

Notes

If no learning rate scheduler is specified in the config, the default scheduler is warmup_lr_lambda (fairchem.core.common.utils) rather than no scheduler; this is for backward-compatibility reasons. To run without a learning rate scheduler, specify scheduler: "Null" in the optim section of the config.

Parameters:
  • optimizer (obj) – torch optim object

  • config (dict) – Optim dict from the input config

optimizer#
config#
step(metrics=None, epoch=None) → None#
filter_kwargs(config)#
get_lr()#
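A hedged usage sketch for this class, wrapping a torch optimizer with an optim config dict. Only the "scheduler" key and the step/get_lr signatures come from this page; the remaining config keys are illustrative assumptions, so consult the fairchem config schema for exact names.

```python
import torch

from core.modules.scheduler import LRScheduler

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Illustrative optim config; key names other than "scheduler" are assumptions.
# Setting "scheduler" to "Null" would run without a learning rate scheduler.
optim_config = {
    "scheduler": "ReduceLROnPlateau",
    "mode": "min",
    "factor": 0.8,
    "patience": 3,
}

scheduler = LRScheduler(optimizer, optim_config)

for epoch in range(10):
    val_loss = 0.0  # placeholder for a real validation metric
    # Plateau-style schedulers need the monitored metric passed to step().
    scheduler.step(metrics=val_loss, epoch=epoch)
    print(scheduler.get_lr())
```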