core.models.gemnet.layers.efficient#
Copyright (c) Meta, Inc. and its affiliates.
This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.
Classes#
| Class | Description |
| --- | --- |
| EfficientInteractionDownProjection | Down projection in the efficient reformulation. |
| EfficientInteractionBilinear | Efficient reformulation of the bilinear layer and subsequent summation. |
Module Contents#
- class core.models.gemnet.layers.efficient.EfficientInteractionDownProjection(num_spherical: int, num_radial: int, emb_size_interm: int)#
Bases:
torch.nn.Module
Down projection in the efficient reformulation.
- Parameters:
num_spherical (int) – Number of spherical basis functions.
num_radial (int) – Number of radial basis functions.
emb_size_interm (int) – Intermediate embedding size (down-projection size).
- num_spherical#
- num_radial#
- emb_size_interm#
- reset_parameters() → None#
- forward(rbf, sph, id_ca, id_ragged_idx)#
- Parameters:
rbf (torch.Tensor, shape=(1, nEdges, num_radial))
sph (torch.Tensor, shape=(nEdges, Kmax, num_spherical))
id_ca (torch.Tensor) – Index of the target edge c→a for each triplet; used to scatter the spherical basis into the padded per-edge layout.
id_ragged_idx (torch.Tensor) – Position of each triplet within its edge's neighbor list (index along the Kmax dimension).
- Returns:
rbf_W1 (torch.Tensor, shape=(nEdges, emb_size_interm, num_spherical))
sph (torch.Tensor, shape=(nEdges, Kmax, num_spherical)) – Kmax = maximum number of neighbors of the edges
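To make the shapes above concrete, the following is a minimal sketch of the down-projection data flow, not the library's verbatim code: the weight layout (num_spherical, num_radial, emb_size_interm) and the per-triplet layout of the incoming sph tensor are assumptions, with id_ca and id_ragged_idx scattering the ragged spherical basis into a zero-padded per-edge tensor.

```python
import torch

def down_projection_sketch(rbf, sph, id_ca, id_ragged_idx, weight):
    # rbf:    (1, nEdges, num_radial)
    # sph:    (nTriplets, num_spherical)  -- assumed ragged, one row per triplet
    # weight: (num_spherical, num_radial, emb_size_interm)  -- assumed layout
    n_edges = rbf.shape[1]
    num_spherical = weight.shape[0]

    # Contract num_radial via a broadcasted matmul: (num_spherical, nEdges, emb_size_interm)
    rbf_W1 = torch.matmul(rbf, weight)
    # Reorder to (nEdges, emb_size_interm, num_spherical)
    rbf_W1 = rbf_W1.permute(1, 2, 0)

    # Scatter the ragged spherical basis into a zero-padded dense layout.
    # Kmax = maximum number of neighbors over all edges.
    Kmax = int(id_ragged_idx.max()) + 1 if sph.shape[0] > 0 else 0
    sph_padded = sph.new_zeros(n_edges, Kmax, num_spherical)
    sph_padded[id_ca, id_ragged_idx] = sph  # (nEdges, Kmax, num_spherical)

    return rbf_W1, sph_padded

# Toy shapes: 5 edges, 3 triplets, 6 radial / 7 spherical basis functions, 16 interm dims.
rbf = torch.rand(1, 5, 6)
sph = torch.rand(3, 7)
id_ca = torch.tensor([0, 0, 2])          # target edge of each triplet
id_ragged_idx = torch.tensor([0, 1, 0])  # position within that edge's neighbor list
weight = torch.rand(7, 6, 16)
rbf_W1, sph_pad = down_projection_sketch(rbf, sph, id_ca, id_ragged_idx, weight)
print(rbf_W1.shape, sph_pad.shape)  # torch.Size([5, 16, 7]) torch.Size([5, 2, 7])
```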
- class core.models.gemnet.layers.efficient.EfficientInteractionBilinear(emb_size: int, emb_size_interm: int, units_out: int)#
Bases:
torch.nn.Module
Efficient reformulation of the bilinear layer and subsequent summation.
- Parameters:
emb_size (int) – Embedding size of the input embeddings m.
emb_size_interm (int) – Intermediate embedding size (down-projection size).
units_out (int) – Embedding output size of the bilinear layer.
- emb_size#
- emb_size_interm#
- units_out#
- reset_parameters() → None#
- forward(basis, m, id_reduce, id_ragged_idx) → torch.Tensor#
- Parameters:
basis – Tuple (rbf_W1, sph) of down-projected radial basis and padded spherical basis, e.g. as returned by EfficientInteractionDownProjection.
m (torch.Tensor) – Input embeddings; quadruplets: m = m_db, triplets: m = m_ba.
id_reduce (torch.Tensor) – Index of the output edge that each input embedding is summed into.
id_ragged_idx (torch.Tensor) – Position of each input within its output edge's neighbor list (index along the Kmax dimension).
- Returns:
m_ca – Edge embeddings.
- Return type:
torch.Tensor, shape=(nEdges, units_out)
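Below is a hedged sketch of the contraction pattern implied by the shapes above, a reconstruction under an assumed bilinear weight layout of (emb_size_interm, emb_size, units_out) rather than the library's exact code, followed by a continuation of the toy example from the down-projection sketch.

```python
import torch

def efficient_bilinear_sketch(basis, m, id_reduce, id_ragged_idx, weight):
    rbf_W1, sph = basis
    # rbf_W1: (nEdges, emb_size_interm, num_spherical)
    # sph:    (nEdges, Kmax, num_spherical)
    # m:      (nTriplets, emb_size)
    # weight: (emb_size_interm, emb_size, units_out)  -- assumed layout
    n_edges, Kmax = sph.shape[0], sph.shape[1]
    emb_size = m.shape[-1]

    # Zero-pad the triplet embeddings into a dense (nEdges, Kmax, emb_size) layout.
    m_padded = m.new_zeros(n_edges, Kmax, emb_size)
    m_padded[id_reduce, id_ragged_idx] = m

    # Sum over the neighbor dimension Kmax: (nEdges, num_spherical, emb_size)
    sum_k = torch.matmul(sph.transpose(1, 2), m_padded)

    # Combine with the down-projected radial basis: (nEdges, emb_size_interm, emb_size)
    rbf_sum = torch.matmul(rbf_W1, sum_k)

    # Bilinear contraction over (emb_size_interm, emb_size) -> (nEdges, units_out)
    return torch.einsum("nij,ijo->no", rbf_sum, weight)

# Continuing the toy example from the down-projection sketch:
emb_size, units_out = 32, 64
m = torch.rand(3, emb_size)                            # one embedding per triplet (m = m_ba)
weight_bilinear = torch.rand(16, emb_size, units_out)  # (emb_size_interm, emb_size, units_out)
m_ca = efficient_bilinear_sketch((rbf_W1, sph_pad), m, id_ca, id_ragged_idx, weight_bilinear)
print(m_ca.shape)  # torch.Size([5, 64])
```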