core.models.equiformer_v2.trainers.lr_scheduler#
Classes#
- CosineLRLambda
- MultistepLRLambda
- LRScheduler

Functions#
- multiply
- cosine_lr_lambda
- multistep_lr_lambda
Module Contents#
- core.models.equiformer_v2.trainers.lr_scheduler.multiply(obj, num)#
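  Only the signature of multiply is documented here. As a hedged sketch of its likely role (the real body is not shown on this page), it scales a scalar, or each element of a list, by num, which is how epoch-based settings such as warmup_epochs can be converted into step counts:

  ```python
  def multiply_sketch(obj, num):
      # Sketch only: assumed behavior, not the verified implementation.
      # Scale a single number, or every element of a list, by `num`.
      if isinstance(obj, list):
          return [x * num for x in obj]
      return obj * num
  ```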
- core.models.equiformer_v2.trainers.lr_scheduler.cosine_lr_lambda(current_step: int, scheduler_params)#
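  cosine_lr_lambda is likewise documented only by its signature. Below is a minimal, self-contained sketch of the usual warmup-plus-cosine rule such a lambda implements; the key names (warmup_epochs, warmup_factor, epochs, lr_min_factor) are assumptions based on the attributes and YAML notes elsewhere on this page, and all counts are treated as optimizer steps:

  ```python
  import math

  def cosine_lr_lambda_sketch(current_step: int, scheduler_params) -> float:
      # Assumed keys; all counts are in optimizer steps, not epochs.
      warmup_steps = scheduler_params["warmup_epochs"]
      warmup_factor = scheduler_params["warmup_factor"]
      max_steps = scheduler_params["epochs"]
      lr_min_factor = scheduler_params["lr_min_factor"]

      if current_step <= warmup_steps:
          # Linear warmup from `warmup_factor` up to 1.0.
          alpha = current_step / float(warmup_steps)
          return warmup_factor * (1.0 - alpha) + alpha
      if current_step >= max_steps:
          return lr_min_factor
      # Cosine decay from 1.0 down to `lr_min_factor`.
      return lr_min_factor + 0.5 * (1.0 - lr_min_factor) * (
          1.0 + math.cos(math.pi * current_step / max_steps)
      )
  ```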
- class core.models.equiformer_v2.trainers.lr_scheduler.CosineLRLambda(scheduler_params)#
- warmup_epochs#
- lr_warmup_factor#
- max_epochs#
- lr_min_factor#
- __call__(current_step: int)#
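  The attributes above mirror the scheduler_params entries, and __call__ makes an instance usable as the lr_lambda of torch.optim.lr_scheduler.LambdaLR. A hedged usage sketch follows; the import path and the exact parameter keys are assumptions modeled on the YAML notes later on this page:

  ```python
  import torch
  from torch import nn

  # Hypothetical import path; adjust to your installed package layout.
  from fairchem.core.models.equiformer_v2.trainers.lr_scheduler import CosineLRLambda

  model = nn.Linear(8, 1)
  optimizer = torch.optim.AdamW(model.parameters(), lr=4e-4)

  # Assumed keys, expressed in optimizer steps rather than epochs.
  scheduler_params = {
      "warmup_epochs": 100,
      "warmup_factor": 0.2,
      "epochs": 10_000,
      "lr_min_factor": 0.01,
  }
  scheduler = torch.optim.lr_scheduler.LambdaLR(
      optimizer, lr_lambda=CosineLRLambda(scheduler_params)
  )

  for step in range(1_000):
      optimizer.step()
      scheduler.step()  # called every step, per the LRScheduler notes below
  ```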
- core.models.equiformer_v2.trainers.lr_scheduler.multistep_lr_lambda(current_step: int, scheduler_params) → float#
- class core.models.equiformer_v2.trainers.lr_scheduler.MultistepLRLambda(scheduler_params)#
- warmup_epochs#
- lr_warmup_factor#
- lr_decay_epochs#
- lr_gamma#
- __call__(current_step: int) → float#
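  MultistepLRLambda follows the same pattern as CosineLRLambda, except that after warmup the factor is multiplied by lr_gamma at each boundary in lr_decay_epochs. A minimal, self-contained sketch of that multistep rule; the key names are assumptions drawn from the attributes above and the YAML notes below:

  ```python
  import bisect

  def multistep_lr_lambda_sketch(current_step: int, scheduler_params) -> float:
      # Assumed keys; `decay_epochs` is a sorted list of step boundaries.
      warmup_steps = scheduler_params["warmup_epochs"]
      warmup_factor = scheduler_params["warmup_factor"]
      decay_steps = scheduler_params["decay_epochs"]
      gamma = scheduler_params["decay_rate"]

      if current_step <= warmup_steps:
          # Linear warmup from `warmup_factor` up to 1.0.
          alpha = current_step / float(warmup_steps)
          return warmup_factor * (1.0 - alpha) + alpha
      # Multiply by `gamma` once for every boundary already passed.
      return gamma ** bisect.bisect_right(decay_steps, current_step)
  ```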
- class core.models.equiformer_v2.trainers.lr_scheduler.LRScheduler(optimizer, config)#
Notes

1. scheduler.step() is called for every step for OC20 training.
2. We use "scheduler_params" in .yml to specify scheduler parameters.
3. For cosine learning rate, we use LambdaLR with the lambda function being cosine:

       scheduler: LambdaLR
       scheduler_params:
         lambda_type: cosine
         …

4. Following 3., if cosine is used, scheduler_params in .yml looks like:

       scheduler: LambdaLR
       scheduler_params:
         lambda_type: cosine
         warmup_epochs: …
         warmup_factor: …
         lr_min_factor: …

5. Following 3., if multistep is used, scheduler_params in .yml looks like:

       scheduler: LambdaLR
       scheduler_params:
         lambda_type: multistep
         warmup_epochs: …
         warmup_factor: …
         decay_epochs: … (list)
         decay_rate: …

- Parameters:
  - optimizer (obj) – torch optim object
  - config (dict) – Optim dict from the input config
- optimizer#
- config#
- scheduler_type#
- scheduler_params#
- step(metrics=None, epoch=None)#
- filter_kwargs(config)#
- get_lr() → float | None#
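  Putting it together, LRScheduler wraps a torch optimizer and builds the underlying scheduler from the optim section of the config; step() is then called once per training step and get_lr() reports the current learning rate. A hedged usage sketch, where the import path and the config keys are assumptions modeled on the notes above:

  ```python
  import torch
  from torch import nn

  # Hypothetical import path; adjust to your installed package layout.
  from fairchem.core.models.equiformer_v2.trainers.lr_scheduler import LRScheduler

  model = nn.Linear(8, 1)
  optimizer = torch.optim.AdamW(model.parameters(), lr=4e-4)

  # Assumed shape of the optim dict; mirrors the YAML in the notes above.
  config = {
      "scheduler": "LambdaLR",
      "scheduler_params": {
          "lambda_type": "cosine",
          "warmup_epochs": 100,
          "warmup_factor": 0.2,
          "epochs": 10_000,
          "lr_min_factor": 0.01,
      },
  }
  scheduler = LRScheduler(optimizer, config)

  for step in range(1_000):
      optimizer.step()
      scheduler.step()              # stepped on every training step
      current_lr = scheduler.get_lr()
  ```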