core.models.escaip.EScAIP#

Classes#

EScAIPBackbone

Efficiently Scaled Attention Interatomic Potential (EScAIP) backbone model.

EScAIPHeadBase

Base class for all neural network modules.

EScAIPDirectForceHead

Base class for all neural network modules.

EScAIPEnergyHead

Base class for all neural network modules.

EScAIPGradientEnergyForceStressHead

Does not support torch.compile.

Module Contents#

class core.models.escaip.EScAIP.EScAIPBackbone(**kwargs)#

Bases: torch.nn.Module, fairchem.core.models.base.BackboneInterface

Efficiently Scaled Attention Interatomic Potential (EScAIP) backbone model.

global_cfg#
molecular_graph_cfg#
gnn_cfg#
reg_cfg#
regress_forces#
direct_forces#
regress_stress#
dataset_list#
max_num_elements#
max_neighbors#
cutoff#
data_preprocess#
input_block#
transformer_blocks#
readout_layers#
output_projection#
compiled_forward(data: fairchem.core.models.escaip.custom_types.GraphAttentionData)#
forward(data: fairchem.core.datasets.atomic_data.AtomicData)#

Backbone forward.

Parameters:

data (AtomicData) – Atomic systems as input

Returns:

embedding – Return backbone embeddings for the given input

Return type:

dict[str->torch.Tensor]

no_weight_decay()#
init_weights()#
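
A minimal usage sketch of the backbone/head pattern (illustrative only, assuming the classes are importable from fairchem.core.models.escaip.EScAIP; model_config and atomic_data are hypothetical placeholders for an EScAIP model configuration and an AtomicData batch from the fairchem data pipeline):

from fairchem.core.models.escaip.EScAIP import EScAIPBackbone, EScAIPEnergyHead

backbone = EScAIPBackbone(**model_config)   # model_config: hypothetical dict of EScAIP settings
energy_head = EScAIPEnergyHead(backbone)    # heads are built from the backbone and reuse its configuration

emb = backbone(atomic_data)                 # atomic_data: hypothetical AtomicData batch; returns dict[str, torch.Tensor]
outputs = energy_head(atomic_data, emb)     # dict of predicted targets
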
class core.models.escaip.EScAIP.EScAIPHeadBase(backbone: EScAIPBackbone)#

Bases: torch.nn.Module, fairchem.core.models.base.HeadInterface

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will also have their parameters converted when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Boolean represents whether this module is in training or evaluation mode.

global_cfg#
molecular_graph_cfg#
gnn_cfg#
reg_cfg#
regress_forces#
direct_forces#
post_init(gain=1.0)#
no_weight_decay()#
class core.models.escaip.EScAIP.EScAIPDirectForceHead(backbone: EScAIPBackbone)#

Bases: EScAIPHeadBase

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will also have their parameters converted when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Boolean represents whether this module is in training or evaluation mode.

force_direction_layer#
force_magnitude_layer#
node_norm#
edge_norm#
compiled_forward(edge_features, node_features, data: fairchem.core.models.escaip.custom_types.GraphAttentionData)#
forward(data, emb: dict[str, torch.Tensor]) → dict[str, torch.Tensor]#

Head forward.

Parameters:
  • data (AtomicData) – Atomic systems as input

  • emb (dict[str->torch.Tensor]) – Embeddings of the input as generated by the backbone

Returns:

outputs – Return one or more targets generated by this head

Return type:

dict[str->torch.Tensor]
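
The attribute names force_direction_layer and force_magnitude_layer suggest that direct forces are assembled from a predicted direction and a predicted magnitude. A standalone sketch of that decomposition (illustrative only; the real layers, shapes, and normalization are EScAIP internals not shown here):

import torch
import torch.nn.functional as F

direction = F.normalize(torch.randn(10, 3), dim=-1)  # hypothetical per-node unit direction vectors
magnitude = torch.rand(10, 1)                         # hypothetical per-node force magnitudes
forces = direction * magnitude                        # predicted per-node force vectors, shape (10, 3)
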

class core.models.escaip.EScAIP.EScAIPEnergyHead(backbone: EScAIPBackbone)#

Bases: EScAIPHeadBase

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will also have their parameters converted when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Boolean represents whether this module is in training or evaluation mode.

energy_layer#
energy_reduce#
use_global_readout#
node_norm#
compiled_forward(emb)#
forward(data, emb: dict[str, torch.Tensor]) → dict[str, torch.Tensor]#

Head forward.

Parameters:
  • data (AtomicData) – Atomic systems as input

  • emb (dict[str->torch.Tensor]) – Embeddings of the input as generated by the backbone

Returns:

outputs – Return one or more targets generated by this head

Return type:

dict[str->torch.Tensor]
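
A common pattern for an energy readout is to reduce per-node energies into one energy per structure, presumably governed here by the energy_reduce and use_global_readout attributes listed above. A standalone sketch of a sum reduction (illustrative only, not the EScAIP implementation):

import torch

node_energy = torch.randn(10)                  # hypothetical per-node energy contributions
batch_index = torch.tensor([0] * 6 + [1] * 4)  # structure index for each node (two structures)
system_energy = torch.zeros(2).index_add_(0, batch_index, node_energy)  # per-structure sum
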

class core.models.escaip.EScAIP.EScAIPGradientEnergyForceStressHead(backbone: EScAIPBackbone, prefix: str | None = None, wrap_property: bool = True)#

Bases: EScAIPEnergyHead

Does not support torch.compile.

regress_stress#
regress_forces#
prefix#
wrap_property#
forward(data, emb: dict[str, torch.Tensor]) → dict[str, torch.Tensor]#

Head forward.

Parameters:
  • data (AtomicData) – Atomic systems as input

  • emb (dict[str->torch.Tensor]) – Embeddings of the input as generated by the backbone

Returns:

outputs – Return one or more targets generated by this head

Return type:

dict[str->torch.Tensor]
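
The head name suggests that forces and stress are obtained by differentiating the predicted energy, which relies on autograd and is consistent with the note that torch.compile is not supported. A conceptual sketch of gradient-based forces (not the EScAIP implementation):

import torch

positions = torch.randn(8, 3, requires_grad=True)    # hypothetical atomic positions
energy = (positions ** 2).sum()                       # stand-in for a predicted energy
forces = -torch.autograd.grad(energy, positions)[0]   # F = -dE/dr via autograd
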