core.models.equiformer_v2.layer_norm#
1. Normalize features of shape (N, sphere_basis, C), with sphere_basis = (lmax + 1) ** 2.
2. The difference from layer_norm.py is that all type-L vectors have the same number of channels and input features are of shape (N, sphere_basis, C).
Classes#
- EquivariantLayerNormArray: Base class for all neural network modules.
- EquivariantLayerNormArraySphericalHarmonics: Normalize over L = 0 and across all m components from degrees L > 0.
- EquivariantRMSNormArraySphericalHarmonics: Normalize across all m components from degrees L >= 0.
- EquivariantRMSNormArraySphericalHarmonicsV2: Normalize across all m components from degrees L >= 0, with expanded weights.
- EquivariantDegreeLayerScale: Degree-wise layer scale, similar to the Layer Scale used in CaiT.
Functions#
- get_normalization_layer
- get_l_to_all_m_expand_index
Module Contents#
- core.models.equiformer_v2.layer_norm.get_normalization_layer(norm_type: str, lmax: int, num_channels: int, eps: float = 1e-05, affine: bool = True, normalization: str = 'component')#
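A minimal usage sketch for this factory. The import path mirrors the module name above, and the "layer_norm" value for norm_type is an assumption; check the source of get_normalization_layer for the accepted strings.
```python
import torch

# Import path assumed from the documented module name; adjust to how the
# package is installed in your environment.
from core.models.equiformer_v2.layer_norm import get_normalization_layer

lmax, num_channels = 4, 128

# "layer_norm" is an assumed norm_type value; see the source for valid strings.
norm = get_normalization_layer("layer_norm", lmax=lmax, num_channels=num_channels)

# Features of shape (N, sphere_basis, C) with sphere_basis = (lmax + 1) ** 2.
x = torch.randn(32, (lmax + 1) ** 2, num_channels)
y = norm(x)  # same shape as x
```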
- core.models.equiformer_v2.layer_norm.get_l_to_all_m_expand_index(lmax: int)#
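A sketch of the kind of index this helper is described to produce: every position of the sphere basis is mapped back to its degree L, so a per-degree quantity can be expanded to all of its m components in a single gather. This mirrors the described behavior, not necessarily the exact implementation.
```python
import torch

def l_to_all_m_expand_index_sketch(lmax: int) -> torch.Tensor:
    # Degree L occupies 2L + 1 consecutive slots (one per m) in the sphere
    # basis, so the index has length (lmax + 1) ** 2 and stores each slot's L.
    expand_index = torch.zeros((lmax + 1) ** 2, dtype=torch.long)
    start = 0
    for l in range(lmax + 1):
        expand_index[start : start + 2 * l + 1] = l
        start += 2 * l + 1
    return expand_index

print(l_to_all_m_expand_index_sketch(2))  # tensor([0, 1, 1, 1, 2, 2, 2, 2, 2])
```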
- class core.models.equiformer_v2.layer_norm.EquivariantLayerNormArray(lmax: int, num_channels: int, eps: float = 1e-05, affine: bool = True, normalization: str = 'component')#
Bases: torch.nn.Module
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes:
```python
import torch.nn as nn
import torch.nn.functional as F


class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))
```
Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.
Note: As per the example above, an __init__() call to the parent class must be made before assignment on the child.
- Variables:
training (bool) – Boolean represents whether this module is in training or evaluation mode.
- lmax#
- num_channels#
- eps#
- affine#
- normalization#
- __repr__() → str#
- forward(node_input)#
Assume input is of shape [N, sphere_basis, C]
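A minimal usage sketch, assuming the import path matches the module name above:
```python
import torch
from core.models.equiformer_v2.layer_norm import EquivariantLayerNormArray

lmax, num_channels = 2, 64
ln = EquivariantLayerNormArray(lmax=lmax, num_channels=num_channels)

x = torch.randn(8, (lmax + 1) ** 2, num_channels)  # [N, sphere_basis, C]
y = ln(x)
assert y.shape == x.shape  # normalization preserves the input shape
```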
- class core.models.equiformer_v2.layer_norm.EquivariantLayerNormArraySphericalHarmonics(lmax: int, num_channels: int, eps: float = 1e-05, affine: bool = True, normalization: str = 'component', std_balance_degrees: bool = True)#
Bases: torch.nn.Module
1. Normalize over L = 0.
2. Normalize across all m components from degrees L > 0.
3. Do not normalize separately for different L (L > 0).
- lmax#
- num_channels#
- eps#
- affine#
- std_balance_degrees#
- norm_l0#
- normalization#
- __repr__() → str#
- forward(node_input)#
Assume input is of shape [N, sphere_basis, C]
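A minimal usage sketch (import path assumed from the module name above). The comment on std_balance_degrees interprets the flag's name and is not a statement about the implementation.
```python
import torch
from core.models.equiformer_v2.layer_norm import (
    EquivariantLayerNormArraySphericalHarmonics,
)

lmax, num_channels = 3, 32
ln_sh = EquivariantLayerNormArraySphericalHarmonics(
    lmax=lmax,
    num_channels=num_channels,
    std_balance_degrees=True,  # presumably balances each degree's contribution to the statistics
)

x = torch.randn(16, (lmax + 1) ** 2, num_channels)  # [N, sphere_basis, C]
y = ln_sh(x)  # L = 0 and L > 0 parts are normalized as described above
```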
- class core.models.equiformer_v2.layer_norm.EquivariantRMSNormArraySphericalHarmonics(lmax: int, num_channels: int, eps: float = 1e-05, affine: bool = True, normalization: str = 'component')#
Bases: torch.nn.Module
Normalize across all m components from degrees L >= 0.
- lmax#
- num_channels#
- eps#
- affine#
- normalization#
- __repr__() → str#
- forward(node_input)#
Assume input is of shape [N, sphere_basis, C]
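A rough, self-contained sketch of what component-wise RMS normalization over an [N, sphere_basis, C] tensor can look like; the class's actual computation (per-degree handling, the "norm" option, affine weights) lives in the source.
```python
import torch

def rms_norm_component_sketch(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # x: [N, sphere_basis, C]. Divide by the root mean square taken over all
    # (L, m) components and channels; no mean is subtracted, which is what
    # distinguishes an RMS norm from a layer norm.
    rms = x.pow(2).mean(dim=(1, 2), keepdim=True).add(eps).sqrt()
    return x / rms
```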
- class core.models.equiformer_v2.layer_norm.EquivariantRMSNormArraySphericalHarmonicsV2(lmax: int, num_channels: int, eps: float = 1e-05, affine: bool = True, normalization: str = 'component', centering: bool = True, std_balance_degrees: bool = True)#
Bases: torch.nn.Module
1. Normalize across all m components from degrees L >= 0.
2. Expand the weights and multiply them with the normalized features to avoid slicing and concatenation.
- lmax#
- num_channels#
- eps#
- affine#
- centering#
- std_balance_degrees#
- normalization#
- __repr__() → str#
- forward(node_input)#
Assume input is of shape [N, sphere_basis, C]
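A minimal usage sketch showing the extra constructor flags (import path assumed from the module name above; the comments interpret the flag names, not the implementation).
```python
import torch
from core.models.equiformer_v2.layer_norm import (
    EquivariantRMSNormArraySphericalHarmonicsV2,
)

lmax, num_channels = 4, 96
rms_v2 = EquivariantRMSNormArraySphericalHarmonicsV2(
    lmax=lmax,
    num_channels=num_channels,
    centering=True,            # presumably also centers the features
    std_balance_degrees=True,  # presumably balances each degree's contribution
)

x = torch.randn(10, (lmax + 1) ** 2, num_channels)  # [N, sphere_basis, C]
y = rms_v2(x)  # same shape; weights are expanded internally, as described above
```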
- class core.models.equiformer_v2.layer_norm.EquivariantDegreeLayerScale(lmax: int, num_channels: int, scale_factor: float = 2.0)#
Bases: torch.nn.Module
Similar to the Layer Scale used in CaiT (Going Deeper With Image Transformers, ICCV'21), we scale the output of both the attention and FFN blocks.
For degree L > 0, we scale the output down by the square root of 2 * L, which emulates halving the number of channels when using higher degrees L.
- lmax#
- num_channels#
- scale_factor#
- affine_weight#
- __repr__() → str#
- forward(node_input)#
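A sketch of the degree-wise scaling pattern described above: the L = 0 block is left at 1.0 and each degree L > 0 block is scaled down by sqrt(scale_factor * L), i.e. sqrt(2 * L) with the default scale_factor. This illustrates one plausible reading of the docstring, not the class's exact initialization.
```python
import math
import torch

def degree_layer_scale_weights_sketch(
    lmax: int, num_channels: int, scale_factor: float = 2.0
) -> torch.Tensor:
    # One scale per (L, m) slot and channel, shaped like a per-degree
    # affine weight for features of shape [N, (lmax + 1) ** 2, C].
    w = torch.ones(1, (lmax + 1) ** 2, num_channels)
    start = 1  # skip the single L = 0 slot
    for l in range(1, lmax + 1):
        w[:, start : start + 2 * l + 1, :] = 1.0 / math.sqrt(scale_factor * l)
        start += 2 * l + 1
    return w

print(degree_layer_scale_weights_sketch(lmax=2, num_channels=1).flatten())
```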