core.models.equiformer_v2.trainers.lr_scheduler#

Classes#

CosineLRLambda(scheduler_params)

MultistepLRLambda(scheduler_params)

LRScheduler(optimizer, config)

Functions#

multiply(obj, num)

cosine_lr_lambda(current_step, scheduler_params)

multistep_lr_lambda(current_step, scheduler_params) → float

Module Contents#

core.models.equiformer_v2.trainers.lr_scheduler.multiply(obj, num)#
core.models.equiformer_v2.trainers.lr_scheduler.cosine_lr_lambda(current_step: int, scheduler_params)#
class core.models.equiformer_v2.trainers.lr_scheduler.CosineLRLambda(scheduler_params)#
warmup_epochs#
lr_warmup_factor#
max_epochs#
lr_min_factor#
__call__(current_step: int)#
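
The cosine lambda combines a linear warmup with a cosine decay toward lr_min_factor. A minimal sketch of what __call__ computes, assuming current_step and the epoch parameters above are expressed in the same (step) units; this is an illustration of the schedule, not the verbatim implementation:

    import math

    def cosine_lambda_sketch(current_step: int, warmup_epochs: int,
                             lr_warmup_factor: float, max_epochs: int,
                             lr_min_factor: float) -> float:
        # Linear warmup from lr_warmup_factor up to 1.0.
        if current_step <= warmup_epochs:
            alpha = current_step / float(warmup_epochs)
            return lr_warmup_factor * (1.0 - alpha) + alpha
        # Cosine decay from 1.0 down to lr_min_factor.
        if current_step >= max_epochs:
            return lr_min_factor
        return lr_min_factor + 0.5 * (1.0 - lr_min_factor) * (
            1.0 + math.cos(math.pi * current_step / max_epochs)
        )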
core.models.equiformer_v2.trainers.lr_scheduler.multistep_lr_lambda(current_step: int, scheduler_params) float#
class core.models.equiformer_v2.trainers.lr_scheduler.MultistepLRLambda(scheduler_params)#
warmup_epochs#
lr_warmup_factor#
lr_decay_epochs#
lr_gamma#
__call__(current_step: int) float#
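
The multistep lambda applies the same linear warmup and then scales the rate by lr_gamma once for each entry of lr_decay_epochs that current_step has already passed. A minimal sketch under that assumption (lr_decay_epochs taken to be a sorted list of step indices):

    import bisect

    def multistep_lambda_sketch(current_step: int, warmup_epochs: int,
                                lr_warmup_factor: float,
                                lr_decay_epochs: list, lr_gamma: float) -> float:
        # Linear warmup from lr_warmup_factor up to 1.0.
        if current_step <= warmup_epochs:
            alpha = current_step / float(warmup_epochs)
            return lr_warmup_factor * (1.0 - alpha) + alpha
        # Multiply by lr_gamma for every decay milestone already reached.
        passed = bisect.bisect_right(sorted(lr_decay_epochs), current_step)
        return lr_gamma ** passed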
class core.models.equiformer_v2.trainers.lr_scheduler.LRScheduler(optimizer, config)#

Notes

  1. scheduler.step() is called at every step during OC20 training.

  2. We use “scheduler_params” in .yml to specify scheduler parameters.

  3. For a cosine learning rate, we use LambdaLR with the lambda function being cosine:

    scheduler: LambdaLR
    scheduler_params:
      lambda_type: cosine
      …

  4. Following 3., if cosine is used, scheduler_params in .yml looks like:

    scheduler: LambdaLR
    scheduler_params:
      lambda_type: cosine
      warmup_epochs: …
      warmup_factor: …
      lr_min_factor: …

  5. Following 3., if multistep is used, scheduler_params in .yml looks like:

    scheduler: LambdaLR
    scheduler_params:
      lambda_type: multistep
      warmup_epochs: …
      warmup_factor: …
      decay_epochs: … (list)
      decay_rate: …

Parameters:
  • optimizer (obj) – torch optim object

  • config (dict) – Optim dict from the input config

optimizer#
config#
scheduler_type#
scheduler_params#
step(metrics=None, epoch=None)#
filter_kwargs(config)#
get_lr() float | None#
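
A usage sketch of LRScheduler with the cosine configuration from the notes above. The import path follows this page; the total-epoch key name ("epochs") and the concrete parameter values are assumptions for illustration only:

    import torch

    from core.models.equiformer_v2.trainers.lr_scheduler import LRScheduler

    model = torch.nn.Linear(8, 1)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

    # Optim dict passed as `config`; keys mirror notes 2-4 above.
    config = {
        "scheduler": "LambdaLR",
        "scheduler_params": {
            "lambda_type": "cosine",
            "warmup_epochs": 100,   # assumed example value
            "warmup_factor": 0.2,   # assumed example value
            "epochs": 1000,         # key name for the total step budget is an assumption
            "lr_min_factor": 0.01,  # assumed example value
        },
    }

    scheduler = LRScheduler(optimizer, config)

    for step in range(1000):
        # ... forward / backward / optimizer.step() ...
        scheduler.step()            # called at every step (note 1 above)
        current_lr = scheduler.get_lr()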