core.models.escaip.utils.nn_utils#

Classes#

Activation

String-valued enum of the activation functions available in this module.

SquaredReLU

Squared ReLU activation module.

StarReLU

StarReLU activation module (a scaled and shifted squared ReLU).

SmeLU

Smooth ReLU (SmeLU) activation module with smoothing parameter beta.

NormalizationType

String-valued enum of the supported normalization layers.

Skip

No-op module that returns its input unchanged.

Functions#

build_activation(activation)

Instantiate the activation module for a given Activation value.

get_linear(in_features, out_features[, bias, ...])

Build a linear layer with optional activation and dropout.

get_feedforward(hidden_dim, activation, ...[, bias, ...])

Build a feedforward layer with optional activation function.

no_weight_decay(model)

Collect the model parameters to exclude from weight decay.

init_linear_weights(module[, gain])

Initialize the weights of linear submodules, scaled by gain.

get_normalization_layer(normalization_type)

Return the normalization layer for a given NormalizationType.

Module Contents#

class core.models.escaip.utils.nn_utils.Activation#

Bases: str, enum.Enum

String-valued enumeration of the activation functions that build_activation can instantiate. Subclassing str makes each member compare equal to its string value, which keeps the enum friendly to string-based configs.

SquaredReLU = 'squared_relu'#
GeLU = 'gelu'#
LeakyReLU = 'leaky_relu'#
ReLU = 'relu'#
SmeLU = 'smelu'#
StarReLU = 'star_relu'#
class core.models.escaip.utils.nn_utils.SquaredReLU#

Bases: torch.nn.Module

Squared ReLU activation: applies relu(x) ** 2 elementwise.

forward(x: torch.Tensor) → torch.Tensor#
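
SquaredReLU carries no docstring of its own, but the name fixes the computation. An equivalent functional sketch:

import torch
import torch.nn.functional as F

def squared_relu(x: torch.Tensor) -> torch.Tensor:
    # relu(x) ** 2: zero for non-positive inputs, x squared otherwise.
    return F.relu(x) ** 2
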
class core.models.escaip.utils.nn_utils.StarReLU#

Bases: torch.nn.Module

StarReLU activation: a scaled and shifted squared ReLU of the form s * relu(x) ** 2 + b, introduced with the MetaFormer baselines.

forward(x: torch.Tensor) → torch.Tensor#
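
A sketch of the conventional StarReLU formulation; the default scale and bias values below follow the MetaFormer reference implementation and are an assumption about this class, which exposes no constructor arguments:

import torch
import torch.nn as nn
import torch.nn.functional as F

class StarReLUSketch(nn.Module):
    # s * relu(x) ** 2 + b with learnable scale s and bias b.
    def __init__(self, scale: float = 0.8944, bias: float = -0.4472) -> None:
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(scale))
        self.bias = nn.Parameter(torch.tensor(bias))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scale * F.relu(x) ** 2 + self.bias
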
class core.models.escaip.utils.nn_utils.SmeLU(beta: float = 2.0)#

Bases: torch.nn.Module

Smooth ReLU (SmeLU) activation: zero for x <= -beta, the identity for x >= beta, and a quadratic interpolation between the two on [-beta, beta].

Parameters:

beta (float) – half-width of the quadratic smoothing region (default 2.0).

beta#
forward(x: torch.Tensor) → torch.Tensor#
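
A functional sketch of the standard SmeLU definition, which the beta attribute suggests this class implements:

import torch

def smelu(x: torch.Tensor, beta: float = 2.0) -> torch.Tensor:
    # 0 for x <= -beta, x for x >= beta,
    # (x + beta) ** 2 / (4 * beta) on the smoothing interval in between.
    return torch.where(
        x <= -beta,
        torch.zeros_like(x),
        torch.where(x >= beta, x, (x + beta) ** 2 / (4.0 * beta)),
    )
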
core.models.escaip.utils.nn_utils.build_activation(activation: Activation | None)#
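
A usage sketch; the return type is not documented here, but building an nn.Module activation is the natural reading:

import torch
from core.models.escaip.utils.nn_utils import Activation, build_activation

act = build_activation(Activation.SquaredReLU)  # presumably an nn.Module
y = act(torch.randn(4, 8))
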
core.models.escaip.utils.nn_utils.get_linear(in_features: int, out_features: int, bias: bool = False, activation: Activation | None = None, dropout: float = 0.0)#

Build a linear layer with optional activation and dropout.
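
For illustration, one call with the defaults and one exercising the optional arguments:

from core.models.escaip.utils.nn_utils import Activation, get_linear

plain = get_linear(128, 256)
fancy = get_linear(
    128, 256, bias=True, activation=Activation.SquaredReLU, dropout=0.1
)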

core.models.escaip.utils.nn_utils.get_feedforward(hidden_dim: int, activation: Activation | None, hidden_layer_multiplier: int, bias: bool = False, dropout: float = 0.0)#

Build a feedforward layer with optional activation function.
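
A usage sketch, reading hidden_layer_multiplier as the inner expansion factor (the common transformer convention, assumed here, so this block would map 256 -> 1024 -> 256):

from core.models.escaip.utils.nn_utils import Activation, get_feedforward

ffn = get_feedforward(
    hidden_dim=256,
    activation=Activation.GeLU,
    hidden_layer_multiplier=4,
    bias=False,
    dropout=0.0,
)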

core.models.escaip.utils.nn_utils.no_weight_decay(model)#
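
The helper is undocumented; by the usual convention it would return the parameters (or their names) to exclude from weight decay, typically biases and normalization weights. A hedged usage sketch, assuming it yields parameter names:

import torch
import torch.nn as nn
from core.models.escaip.utils.nn_utils import no_weight_decay

model = nn.Linear(16, 16)
skip = set(no_weight_decay(model))  # assumption: iterable of parameter names
decay = [p for n, p in model.named_parameters() if n not in skip]
no_decay = [p for n, p in model.named_parameters() if n in skip]
optimizer = torch.optim.AdamW(
    [{"params": decay, "weight_decay": 0.01},
     {"params": no_decay, "weight_decay": 0.0}],
    lr=1e-3,
)
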
core.models.escaip.utils.nn_utils.init_linear_weights(module, gain=1.0)#
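
The (module, gain) signature matches the usual Module.apply idiom, so a plausible usage sketch is:

import torch.nn as nn
from core.models.escaip.utils.nn_utils import init_linear_weights

model = nn.Sequential(nn.Linear(16, 32), nn.Linear(32, 8))
model.apply(init_linear_weights)  # visits every submodule; default gain=1.0
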
class core.models.escaip.utils.nn_utils.NormalizationType#

Bases: str, enum.Enum

String-valued enumeration of the normalization layers that get_normalization_layer can instantiate.

LayerNorm = 'layernorm'#
Skip = 'skip'#
RMSNorm = 'rmsnorm'#
class core.models.escaip.utils.nn_utils.Skip(*_, **__)#

Bases: torch.nn.Module

No-op module: the constructor accepts and ignores arbitrary arguments, and forward returns its input unchanged, so it can stand in for a normalization layer when NormalizationType.Skip is selected.

forward(x, **_)#
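
Given the permissive signatures above, an equivalent sketch:

import torch.nn as nn

class SkipSketch(nn.Module):
    # Accepts and ignores any constructor arguments, so it can be built
    # with the same arguments as the layer it replaces.
    def __init__(self, *_, **__) -> None:
        super().__init__()

    def forward(self, x, **_):
        # Identity: extra keyword arguments are accepted and ignored.
        return x
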
core.models.escaip.utils.nn_utils.get_normalization_layer(normalization_type: NormalizationType)#
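
A usage sketch; whether the function returns the layer class or a constructed instance is not documented, so the comment flags that assumption:

from core.models.escaip.utils.nn_utils import (
    NormalizationType,
    get_normalization_layer,
)

norm_cls = get_normalization_layer(NormalizationType.LayerNorm)
norm = norm_cls(256)  # assumption: the function returns the class, not an instance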