2024-11-19 06:22:11 (INFO): Running in local mode without elastic launch (single gpu only)
2024-11-19 06:22:11 (INFO): Setting env PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
2024-11-19 06:22:11 (INFO): Project root: /home/runner/work/fairchem/fairchem/src/fairchem
/home/runner/work/fairchem/fairchem/src/fairchem/core/models/escn/so3.py:23: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  _Jd = torch.load(os.path.join(os.path.dirname(__file__), "Jd.pt"))
/home/runner/work/fairchem/fairchem/src/fairchem/core/models/scn/spherical_harmonics.py:23: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly.
  _Jd = torch.load(os.path.join(os.path.dirname(__file__), "Jd.pt"))
/home/runner/work/fairchem/fairchem/src/fairchem/core/models/equiformer_v2/wigner.py:10: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly.
  _Jd = torch.load(os.path.join(os.path.dirname(__file__), "Jd.pt"))
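The three FutureWarnings above all come from the same pattern: module-level `torch.load` calls that rely on the default `weights_only=False`. A minimal sketch of the opt-in the warning text itself recommends, reusing the `Jd.pt` load from the traceback (an illustration, not fairchem's actual patch):

    import os
    import torch

    # Opt in now so the future default flip to weights_only=True changes nothing.
    jd_path = os.path.join(os.path.dirname(__file__), "Jd.pt")
    _Jd = torch.load(jd_path, weights_only=True)

    # If a file pickles custom classes, weights_only=True will refuse to load them;
    # allowlist the specific classes instead of reverting to weights_only=False:
    # torch.serialization.add_safe_globals([SomeCustomClass])  # hypothetical class

This works here because `Jd.pt` holds plain tensors; the allowlist fallback is only needed for checkpoints carrying arbitrary Python objects.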
/home/runner/work/fairchem/fairchem/src/fairchem/core/models/equiformer_v2/layer_norm.py:75: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  @torch.cuda.amp.autocast(enabled=False)
/home/runner/work/fairchem/fairchem/src/fairchem/core/models/equiformer_v2/layer_norm.py:175: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  @torch.cuda.amp.autocast(enabled=False)
/home/runner/work/fairchem/fairchem/src/fairchem/core/models/equiformer_v2/layer_norm.py:263: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  @torch.cuda.amp.autocast(enabled=False)
/home/runner/work/fairchem/fairchem/src/fairchem/core/models/equiformer_v2/layer_norm.py:357: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  @torch.cuda.amp.autocast(enabled=False)
2024-11-19 06:22:12 (INFO): amp: false
cmd:
  checkpoint_dir: /home/runner/work/fairchem/fairchem/docs/core/checkpoints/2024-11-19-06-21-52
  commit: aa298ac
  identifier: ''
  logs_dir: /home/runner/work/fairchem/fairchem/docs/core/logs/tensorboard/2024-11-19-06-21-52
  print_every: 10
  results_dir: /home/runner/work/fairchem/fairchem/docs/core/results/2024-11-19-06-21-52
  seed: 0
  timestamp_id: 2024-11-19-06-21-52
  version: 0.1.dev1+gaa298ac
dataset: {}
evaluation_metrics:
  metrics:
    energy:
    - mae
    forces:
    - forcesx_mae
    - forcesy_mae
    - forcesz_mae
    - mae
    - cosine_similarity
    - magnitude_error
    misc:
    - energy_forces_within_threshold
  primary_metric: forces_mae
gp_gpus: null
gpus: 0
logger: tensorboard
loss_functions:
- energy:
    coefficient: 1
    fn: mae
- forces:
    coefficient: 1
    fn: l2mae
model:
  activation: silu
  cbf:
    name: spherical_harmonics
  cutoff: 6.0
  direct_forces: true
  emb_size_atom: 512
  emb_size_bil_trip: 64
  emb_size_cbf: 16
  emb_size_edge: 512
  emb_size_rbf: 16
  emb_size_trip: 64
  envelope:
    exponent: 5
    name: polynomial
  extensive: true
  max_neighbors: 50
  name: gemnet_t
  num_after_skip: 2
  num_atom: 3
  num_before_skip: 1
  num_blocks: 3
  num_concat: 1
  num_radial: 128
  num_spherical: 7
  otf_graph: true
  output_init: HeOrthogonal
  rbf:
    name: gaussian
  regress_forces: true
optim:
  batch_size: 16
  clip_grad_norm: 10
  ema_decay: 0.999
  energy_coefficient: 1
  eval_batch_size: 16
  eval_every: 5000
  force_coefficient: 1
  loss_energy: mae
  loss_force: atomwisel2
  lr_gamma: 0.8
  lr_initial: 0.0005
  lr_milestones:
  - 64000
  - 96000
  - 128000
  - 160000
  - 192000
  max_epochs: 80
  num_workers: 2
  optimizer: AdamW
  optimizer_params:
    amsgrad: true
  warmup_steps: -1
outputs:
  energy:
    level: system
  forces:
    eval_on_free_atoms: true
    level: atom
    train_on_free_atoms: true
relax_dataset: {}
slurm: {}
task:
  prediction_dtype: float32
test_dataset:
  a2g_args:
    r_energy: false
    r_forces: false
  format: ase_db
  select_args:
    selection: natoms>5,xc=PBE
  src: data.db
trainer: ocp
val_dataset: {}
2024-11-19 06:22:12 (INFO): Loading model: gemnet_t
2024-11-19 06:22:13 (INFO): Loaded GemNetT with 31671825 parameters.
2024-11-19 06:22:13 (WARNING): log_summary for Tensorboard not supported
2024-11-19 06:22:13 (WARNING): Could not find dataset metadata.npz files in '[PosixPath('data.db')]'
2024-11-19 06:22:13 (WARNING): Disabled BalancedBatchSampler because num_replicas=1.
2024-11-19 06:22:13 (WARNING): Failed to get data sizes, falling back to uniform partitioning. BalancedBatchSampler requires a dataset that has a metadata attributed with number of atoms.
2024-11-19 06:22:13 (INFO): rank: 0: Sampler created...
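The four layer_norm.py warnings above are all the same one-line migration to the device-agnostic `torch.amp` API. A sketch of the replacement decorator (the function below is a stand-in, not fairchem code):

    import torch

    # Deprecated form, as shown in the traceback:
    #   @torch.cuda.amp.autocast(enabled=False)
    # Recommended form per the warning:
    @torch.amp.autocast("cuda", enabled=False)
    def layer_norm_fp32(x: torch.Tensor) -> torch.Tensor:
        # enabled=False keeps this in full precision even inside an outer
        # autocast region, matching the intent of the original decorator.
        return torch.nn.functional.layer_norm(x, x.shape[-1:])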
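On the metadata.npz warnings: BalancedBatchSampler needs per-sample atom counts up front, and without them it falls back to uniform partitioning as logged. A hedged sketch of generating such a file for the `data.db` used here, assuming the convention of a `natoms` array saved in a metadata.npz beside the dataset (verify the key name and location against your fairchem version):

    import numpy as np
    from ase.db import connect

    # Per-row atom counts for every structure in the ASE database.
    natoms = np.array([row.natoms for row in connect("data.db").select()])

    # Assumed convention: a "natoms" array in metadata.npz next to the dataset.
    np.savez("metadata.npz", natoms=natoms)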
2024-11-19 06:22:13 (INFO): Created BalancedBatchSampler with sampler=, batch_size=16, drop_last=False
2024-11-19 06:22:13 (INFO): Attemping to load user specified checkpoint at /tmp/fairchem_checkpoints/gndt_oc22_all_s2ef.pt
2024-11-19 06:22:13 (INFO): Loading checkpoint from: /tmp/fairchem_checkpoints/gndt_oc22_all_s2ef.pt
/home/runner/work/fairchem/fairchem/src/fairchem/core/trainers/base_trainer.py:602: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly.
  checkpoint = torch.load(checkpoint_path, map_location=map_location)
2024-11-19 06:22:14 (INFO): Overwriting scaling factors with those loaded from checkpoint. If you're generating predictions with a pretrained checkpoint, this is the correct behavior. To disable this, delete `scale_dict` from the checkpoint.
2024-11-19 06:22:14 (WARNING): Scale factor comment not found in model
2024-11-19 06:22:14 (INFO): Predicting on test. device 0: 0%| | 0/3 [00:00
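The test split being predicted on is an ASE database filtered by the config's `select_args` (`selection: natoms>5,xc=PBE` applied to `src: data.db`). The same filter can be inspected directly with ASE's query syntax; a small sketch:

    from ase.db import connect

    db = connect("data.db")  # src from the config above

    # Same selection string the ase_db dataset applies: structures with more
    # than 5 atoms whose stored xc key-value pair equals PBE.
    for row in db.select("natoms>5,xc=PBE"):
        atoms = row.toatoms()
        print(row.id, row.formula, row.natoms)

With batch_size 16 and 3 test batches, this selection matched on the order of a few dozen structures, consistent with the 0/3 progress bar before the output was truncated.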