core.models.uma.nn.embedding_dev
Copyright (c) Meta Platforms, Inc. and affiliates.
This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.
Classes
- EdgeDegreeEmbedding
- ChgSpinEmbedding
- DatasetEmbedding
Module Contents
- class core.models.uma.nn.embedding_dev.EdgeDegreeEmbedding(sphere_channels: int, lmax: int, mmax: int, max_num_elements: int, edge_channels_list, rescale_factor, cutoff, mappingReduced, activation_checkpoint_chunk_size: int | None)
Bases: torch.nn.Module
- Parameters:
sphere_channels (int) – Number of spherical channels
lmax (int) – Maximum degree (l) of the spherical harmonic representation
mmax (int) – Maximum order (m) of the spherical harmonic representation
max_num_elements (int) – Maximum number of atomic numbers
edge_channels_list (list of int) – Sizes of the invariant edge embedding, for example [input_channels, hidden_channels, hidden_channels]. The last entry is used as the hidden size when use_atom_edge_embedding is True.
use_atom_edge_embedding (bool) – Whether to use atomic embeddings along with the relative distance for edge scalar features
rescale_factor (float) – Factor by which the sum aggregation is rescaled
cutoff (float) – Cutoff distance for the radial function
mappingReduced (CoefficientMapping) – Class to convert l and m indices once the node embedding is rotated
- sphere_channels
- lmax
- mmax
- mappingReduced
- activation_checkpoint_chunk_size
- m_0_num_coefficients: int
- m_all_num_coefficents: int
- max_num_elements
- edge_channels_list
- rad_func
- rescale_factor
- cutoff
- envelope
- forward_chunk(x, x_edge, edge_distance, edge_index, wigner_and_M_mapping_inv, node_offset=0)
- forward(x, x_edge, edge_distance, edge_index, wigner_and_M_mapping_inv, node_offset=0)
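The cutoff and envelope attributes indicate that the radial function is smoothly damped to zero at the cutoff distance. As an illustration only (an assumption, not necessarily the envelope used in this module), a common choice is the DimeNet-style polynomial envelope, which decays from 1 at zero distance to 0 at the cutoff with vanishing first and second derivatives there:

```python
def polynomial_envelope(d: float, cutoff: float, p: int = 5) -> float:
    """Smooth cutoff envelope (DimeNet-style; the exponent p is an assumption).

    Returns 1 at d = 0, 0 at d >= cutoff, with zero first and second
    derivatives at the cutoff so the radial function vanishes smoothly.
    """
    x = d / cutoff
    if x >= 1.0:
        return 0.0
    return (
        1.0
        - (p + 1) * (p + 2) / 2.0 * x ** p
        + p * (p + 2) * x ** (p + 1)
        - p * (p + 1) / 2.0 * x ** (p + 2)
    )

print(polynomial_envelope(0.0, 6.0))  # 1.0
print(polynomial_envelope(6.0, 6.0))  # 0.0
```

Multiplying the output of rad_func by such an envelope keeps edge messages continuous as atoms cross the cutoff radius.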
- class core.models.uma.nn.embedding_dev.ChgSpinEmbedding(embedding_type, embedding_target, embedding_size, grad, scale=1.0)
Bases: torch.nn.Module
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

```python
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))
```

Submodules assigned in this way will be registered, and will also have their parameters converted when you call to(), etc.

Note: As per the example above, an __init__() call to the parent class must be made before assignment on the child.

- Variables:
training (bool) – Boolean represents whether this module is in training or evaluation mode.
- embedding_type
- embedding_target
- forward(x)
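To illustrate the idea behind a charge/spin embedding, here is a minimal sketch, assuming one plausible realization (the class name, the linear projection, and the use of the grad flag to freeze parameters are assumptions, not the UMA implementation): a per-system scalar such as total charge or spin is scaled and projected into a learnable feature vector.

```python
import torch
import torch.nn as nn

class ScalarEmbeddingSketch(nn.Module):
    """Hypothetical sketch: embed a scalar (e.g. charge or spin) per system."""

    def __init__(self, embedding_size: int, grad: bool = True, scale: float = 1.0):
        super().__init__()
        self.scale = scale
        # Project the (scaled) scalar to an embedding_size-dim feature vector.
        self.proj = nn.Linear(1, embedding_size)
        if not grad:
            # Freeze the embedding parameters when gradients are disabled.
            for p in self.proj.parameters():
                p.requires_grad_(False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_systems,) scalar values -> (num_systems, embedding_size)
        return self.proj(self.scale * x.unsqueeze(-1))

emb = ScalarEmbeddingSketch(embedding_size=8, grad=False)
out = emb(torch.tensor([0.0, 1.0, -2.0]))
print(tuple(out.shape))  # (3, 8)
```

The actual class also takes embedding_type and embedding_target arguments, which presumably select between discrete (lookup-table) and continuous embedding strategies; the sketch above covers only the continuous case.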
- class core.models.uma.nn.embedding_dev.DatasetEmbedding(embedding_size, grad, dataset_list)
Bases: torch.nn.Module
- embedding_size
- dataset_emb_dict
- forward(dataset_list)
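The dataset_emb_dict attribute suggests one learnable vector per dataset name, looked up by name at forward time. A minimal sketch of that pattern, assuming an nn.ParameterDict keyed by dataset name (the class body and dataset names here are assumptions, not the UMA code):

```python
import torch
import torch.nn as nn

class DatasetEmbeddingSketch(nn.Module):
    """Hypothetical sketch: one learnable embedding vector per dataset name."""

    def __init__(self, embedding_size: int, grad: bool, dataset_list):
        super().__init__()
        self.embedding_size = embedding_size
        # One trainable (or frozen, if grad=False) vector per dataset.
        self.dataset_emb_dict = nn.ParameterDict({
            name: nn.Parameter(torch.randn(embedding_size), requires_grad=grad)
            for name in dataset_list
        })

    def forward(self, dataset_list):
        # Stack one embedding row per sample's dataset name.
        return torch.stack([self.dataset_emb_dict[name] for name in dataset_list])

emb = DatasetEmbeddingSketch(4, grad=True, dataset_list=["omol", "oc20"])
out = emb(["oc20", "omol", "oc20"])
print(tuple(out.shape))  # (3, 4)
```

Samples from the same dataset receive the same embedding row, which lets a multi-dataset model condition its predictions on the data source.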