core.models.base#

Copyright (c) Meta, Inc. and its affiliates.

This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.

Classes#

GraphData

Class to keep graph attributes nicely packaged.

GraphModelMixin

Mixin Model class implementing some general convenience properties and methods.

HeadInterface

Interface for model heads, which map backbone embeddings to one or more output targets.

BackboneInterface

Interface for model backbones, which map atomic systems to embedding dictionaries.

HydraModel

Base class for all neural network modules.

Module Contents#

class core.models.base.GraphData#

Class to keep graph attributes nicely packaged.

edge_index: torch.Tensor#
edge_distance: torch.Tensor#
edge_distance_vec: torch.Tensor#
cell_offsets: torch.Tensor#
offset_distances: torch.Tensor#
neighbors: torch.Tensor#
batch_full: torch.Tensor#
atomic_numbers_full: torch.Tensor#
node_offset: int = 0#
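The fields above can be packaged as in this minimal stand-in sketch. Plain Python lists replace `torch.Tensor`s, the shape comments are assumptions inferred from the field names, and `GraphDataSketch` is an illustration of the dataclass layout, not the library's implementation:

```python
from dataclasses import dataclass
from typing import Any

# Stand-in sketch: in the real class every field except node_offset
# is a torch.Tensor. Shape comments are assumptions from field names.
@dataclass
class GraphDataSketch:
    edge_index: Any          # (2, num_edges) source/target node indices
    edge_distance: Any       # (num_edges,) scalar edge lengths
    edge_distance_vec: Any   # (num_edges, 3) displacement vectors
    cell_offsets: Any        # periodic-image offsets per edge
    offset_distances: Any
    neighbors: Any           # neighbor counts per system in the batch
    batch_full: Any          # node -> system index mapping
    atomic_numbers_full: Any
    node_offset: int = 0     # default mirrors the documented value

# A two-atom toy graph with a single bidirectional edge.
g = GraphDataSketch(
    edge_index=[[0, 1], [1, 0]],
    edge_distance=[1.5, 1.5],
    edge_distance_vec=[[1.5, 0.0, 0.0], [-1.5, 0.0, 0.0]],
    cell_offsets=[[0, 0, 0], [0, 0, 0]],
    offset_distances=[0.0, 0.0],
    neighbors=[2],
    batch_full=[0, 0],
    atomic_numbers_full=[1, 1],
)
print(g.node_offset)  # -> 0
```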
class core.models.base.GraphModelMixin#

Mixin Model class implementing some general convenience properties and methods.

generate_graph(data, cutoff=None, max_neighbors=None, use_pbc=None, otf_graph=None, enforce_max_neighbors_strictly=None, use_pbc_single=False)#
property num_params: int#
no_weight_decay() → list#

Returns a list of parameters with no weight decay.
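A common use of `no_weight_decay()` is to split a model's parameters into optimizer groups with and without weight decay. The sketch below shows that pattern with hypothetical stand-ins: `named_params` stands in for `model.named_parameters()` and `no_decay` for the returned list; whether the real list holds parameter names or parameter objects should be checked against the source:

```python
# Hypothetical stand-ins for model.named_parameters() and
# model.no_weight_decay(); names here are illustrative only.
named_params = {"linear.weight": "w", "linear.bias": "b", "embedding.weight": "e"}
no_decay = ["linear.bias", "embedding.weight"]

# Split into decay / no-decay optimizer parameter groups.
param_groups = [
    {"params": [p for n, p in named_params.items() if n not in no_decay],
     "weight_decay": 0.01},
    {"params": [p for n, p in named_params.items() if n in no_decay],
     "weight_decay": 0.0},
]
print([len(grp["params"]) for grp in param_groups])  # -> [1, 2]
```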

class core.models.base.HeadInterface#
property use_amp#
abstract forward(data: torch_geometric.data.Batch, emb: dict[str, torch.Tensor]) → dict[str, torch.Tensor]#

Head forward.

Parameters:
  • data (DataBatch) – Atomic systems as input

  • emb (dict[str->torch.Tensor]) – Embeddings of the input as generated by the backbone

Returns:

outputs – Return one or more targets generated by this head

Return type:

dict[str->torch.Tensor]
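The contract above (take the batch plus the backbone's embedding dict, return a dict of targets) can be sketched in pure Python. `EnergyHeadSketch` and the `"node_embedding"` key are hypothetical names for illustration, not the library's API:

```python
# Sketch of the HeadInterface contract: forward(data, emb) -> dict of targets.
# The class name and the "node_embedding" key are hypothetical.
class EnergyHeadSketch:
    def forward(self, data, emb):
        # Reduce per-node embeddings to a single scalar target.
        energy = sum(emb["node_embedding"])
        return {"energy": energy}

head = EnergyHeadSketch()
out = head.forward(data=None, emb={"node_embedding": [0.5, 1.5]})
print(out)  # -> {'energy': 2.0}
```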

class core.models.base.BackboneInterface#
abstract forward(data: torch_geometric.data.Batch) → dict[str, torch.Tensor]#

Backbone forward.

Parameters:

data (DataBatch) – Atomic systems as input

Returns:

embedding – Return backbone embeddings for the given input

Return type:

dict[str->torch.Tensor]
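The backbone side of the contract (take the batch, return an embedding dict for heads to consume) can be sketched the same way. `BackboneSketch`, the input dict layout, and the `"node_embedding"` key are all hypothetical:

```python
# Sketch of the BackboneInterface contract: forward(data) -> embedding dict.
# The class name, input layout, and embedding key are hypothetical.
class BackboneSketch:
    def forward(self, data):
        # One embedding value per "atom" in the input system.
        return {"node_embedding": [1.0 for _ in data["atomic_numbers"]]}

backbone = BackboneSketch()
emb = backbone.forward({"atomic_numbers": [1, 8, 1]})
print(len(emb["node_embedding"]))  # -> 3
```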

class core.models.base.HydraModel(backbone: dict | None = None, heads: dict | None = None, finetune_config: dict | None = None, otf_graph: bool = True, pass_through_head_outputs: bool = False, freeze_backbone: bool = False)#

Bases: torch.nn.Module, GraphModelMixin

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Boolean representing whether this module is in training or evaluation mode.

device = None#
otf_graph#
pass_through_head_outputs#
forward(data: torch_geometric.data.Batch)#
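The constructor signature (a `backbone` dict plus a `heads` dict) suggests the hydra pattern: one backbone feeds several heads, and the head outputs are merged into a single dict. A minimal sketch of that flow, with all class names and keys hypothetical rather than taken from the library:

```python
# Hypothetical minimal backbone and head; names and keys are illustrative.
class Backbone:
    def forward(self, data):
        return {"emb": [1.0] * len(data)}

class EnergyHead:
    def forward(self, data, emb):
        return {"energy": sum(emb["emb"])}

# Sketch of the hydra pattern: run the backbone once, then pass its
# embeddings to every head and merge the per-head output dicts.
class HydraSketch:
    def __init__(self, backbone, heads):
        self.backbone = backbone
        self.heads = heads  # dict: head name -> head object

    def forward(self, data):
        emb = self.backbone.forward(data)
        outputs = {}
        for name, head in self.heads.items():
            outputs.update(head.forward(data, emb))
        return outputs

model = HydraSketch(Backbone(), {"energy": EnergyHead()})
print(model.forward([1, 8, 1]))  # -> {'energy': 3.0}
```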