core.models.equiformer_v2.drop#

Adds an extra_repr to the DropPath implementation from timm so that printing a module displays its drop probability.

Classes#

DropPath

Drop paths (Stochastic Depth) per sample (when applied in the main path of residual blocks).

GraphDropPath

Batch-aware drop path for graph data: paths are dropped per graph rather than per node.

EquivariantDropout

Equivariant dropout for irreps features: one dropout mask per irrep, shared across its components.

EquivariantScalarsDropout

Dropout applied only to the scalar (l = 0) channels of an irreps feature.

EquivariantDropoutArraySphericalHarmonics

Dropout for spherical-harmonics feature arrays, optionally applied per graph.

Functions#

drop_path(→ torch.Tensor)

Drop paths (Stochastic Depth) per sample (when applied in the main path of residual blocks).

Module Contents#

core.models.equiformer_v2.drop.drop_path(x: torch.Tensor, drop_prob: float = 0.0, training: bool = False) torch.Tensor#

Drop paths (Stochastic Depth) per sample (when applied in the main path of residual blocks). This is the same operation as the DropConnect implementation created for EfficientNet and similar networks, but that original name is misleading: 'Drop Connect' is a different form of dropout from a separate paper (see the discussion in tensorflow/tpu#494). The layer and argument names therefore use 'drop path' instead of mixing DropConnect as a layer name with 'survival rate' as the argument name.
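
For reference, the standard stochastic-depth computation looks roughly like the sketch below. It mirrors the timm implementation this docstring refers to; treat it as illustrative rather than as the exact source.

import torch

def drop_path_sketch(x: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor:
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1.0 - drop_prob
    # One Bernoulli(keep_prob) draw per sample, broadcast over the remaining dims.
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)
    mask = (torch.rand(shape, dtype=x.dtype, device=x.device) + keep_prob).floor_()
    # Rescale kept samples by 1/keep_prob so the expected activation is unchanged.
    return x / keep_prob * mask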

class core.models.equiformer_v2.drop.DropPath(drop_prob: float)#

Bases: torch.nn.Module

Drop paths (Stochastic Depth) per sample (when applied in the main path of residual blocks).

drop_prob#
forward(x: torch.Tensor) torch.Tensor#
extra_repr() str#

Return the extra representation of the module; here it reports drop_prob, so printing the module shows its drop probability.
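
A minimal usage sketch; the fairchem import prefix is an assumption and may differ in your install.

import torch
from fairchem.core.models.equiformer_v2.drop import DropPath  # assumed import path

layer = DropPath(drop_prob=0.1)
print(layer)             # extra_repr makes this print the drop probability

layer.train()            # stochastic depth is active only in training mode
x = torch.randn(8, 64)   # 8 samples flowing through a residual branch
out = layer(x)           # each sample is zeroed with prob 0.1, survivors scaled by 1/0.9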

class core.models.equiformer_v2.drop.GraphDropPath(drop_prob: float)#

Bases: torch.nn.Module

Batch-aware drop path for graph data: paths are dropped per graph rather than per node, so all nodes of a graph are kept or dropped together.

drop_prob#
forward(x: torch.Tensor, batch) torch.Tensor#
extra_repr() str#

Return the extra representation of the module; here it reports drop_prob, so printing the module shows its drop probability.
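
The per-graph behavior can be illustrated with the equivalent masking below (a self-contained sketch, assuming batch is the usual node-to-graph index vector as in PyTorch Geometric).

import torch

x = torch.randn(6, 64)                    # node features for 3 graphs
batch = torch.tensor([0, 0, 1, 1, 1, 2])  # node-to-graph assignment

drop_prob = 0.2
keep_prob = 1.0 - drop_prob
num_graphs = int(batch.max()) + 1
# One keep/drop decision per graph, broadcast to that graph's nodes.
mask = (torch.rand(num_graphs, 1) + keep_prob).floor_()
out = x / keep_prob * mask[batch]         # whole graphs are kept or dropped together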

class core.models.equiformer_v2.drop.EquivariantDropout(irreps, drop_prob: float)#

Bases: torch.nn.Module

Equivariant dropout for irreps features. One dropout mask is sampled per irrep and broadcast across all of that irrep's components (applied via an elementwise tensor product with a vector of scalar masks, per the mul attribute below), so the components of an irrep are always kept or dropped together and equivariance is preserved.

irreps#
num_irreps#
drop_prob#
drop#
mul#
forward(x: torch.Tensor) torch.Tensor#
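
A hedged usage sketch with e3nn irreps; the fairchem import prefix and the choice of irreps are assumptions, while the constructor arguments follow the signature above.

import torch
from e3nn import o3
from fairchem.core.models.equiformer_v2.drop import EquivariantDropout  # assumed import path

irreps = o3.Irreps("8x0e + 4x1e")  # hypothetical feature irreps: 8 scalars, 4 vectors
layer = EquivariantDropout(irreps, drop_prob=0.1)
layer.train()

x = irreps.randn(10, -1)           # (batch, irreps.dim) feature tensor
out = layer(x)                     # the mask is constant across each irrep's components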
class core.models.equiformer_v2.drop.EquivariantScalarsDropout(irreps, drop_prob: float)#

Bases: torch.nn.Module

Equivariant dropout that applies standard dropout only to the scalar (l = 0) channels of an irreps feature; all higher-l channels pass through unchanged, which preserves equivariance.

irreps#
drop_prob#
forward(x: torch.Tensor) torch.Tensor#
extra_repr() str#

Return the extra representation of the module, so that printing it shows the module's drop configuration.
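
A minimal sketch of the equivalent per-irrep computation, assuming e3nn irreps; the names and the choice of irreps are illustrative.

import torch
import torch.nn.functional as F
from e3nn import o3

irreps = o3.Irreps("8x0e + 4x1e")  # hypothetical: 8 scalars, 4 vectors
x = irreps.randn(5, -1)

chunks, start = [], 0
for mul, ir in irreps:             # iterate (multiplicity, irrep) pairs
    dim = mul * ir.dim
    chunk = x.narrow(-1, start, dim)
    # dropout touches only the invariant (l == 0) channels
    chunks.append(F.dropout(chunk, p=0.5, training=True) if ir.l == 0 else chunk)
    start += dim
out = torch.cat(chunks, dim=-1)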

class core.models.equiformer_v2.drop.EquivariantDropoutArraySphericalHarmonics(drop_prob: float, drop_graph: bool = False)#

Bases: torch.nn.Module

Dropout for features stored as arrays of spherical-harmonic coefficients. One mask is shared across the coefficient dimension of each sample, which keeps the operation equivariant; when drop_graph is True, masks are sampled per graph (using the batch index passed to forward()) rather than per node.

drop_prob#
drop#
drop_graph#
forward(x: torch.Tensor, batch=None) torch.Tensor#
extra_repr() str#

Return the extra representation of the module, so that printing it shows the module's drop configuration.
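
A usage sketch; the fairchem import prefix and the (nodes, coefficients, channels) feature layout are assumptions based on how EquiformerV2 stores spherical-harmonic embeddings.

import torch
from fairchem.core.models.equiformer_v2.drop import (  # assumed import path
    EquivariantDropoutArraySphericalHarmonics,
)

layer = EquivariantDropoutArraySphericalHarmonics(drop_prob=0.1, drop_graph=True)
layer.train()

x = torch.randn(6, 9, 32)                 # (nodes, (L+1)^2 coefficients for L=2, channels)
batch = torch.tensor([0, 0, 1, 1, 1, 2])  # node-to-graph index, used when drop_graph=True
out = layer(x, batch)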