core.models.dimenet_plus_plus#
Copyright (c) Meta, Inc. and its affiliates.
This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.
—
This code borrows heavily from the DimeNet implementation as part of pytorch-geometric: rusty1s/pytorch_geometric. License:
—
Copyright (c) 2020 Matthias Fey <matthias.fey@tu-dortmund.de>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Attributes#

- sym

Classes#

| Class | Description |
| --- | --- |
| InteractionPPBlock | Base class for all neural network modules. |
| OutputPPBlock | Base class for all neural network modules. |
| DimeNetPlusPlus | DimeNet++ implementation based on klicperajo/dimenet. |
| DimeNetPlusPlusWrapEnergyAndForceHead | Base class for all neural network modules. |
| DimeNetPlusPlusWrap | DimeNet++ implementation based on klicperajo/dimenet. |
| DimeNetPlusPlusWrapBackbone | DimeNet++ implementation based on klicperajo/dimenet. |
Module Contents#
- core.models.dimenet_plus_plus.sym = None#
- class core.models.dimenet_plus_plus.InteractionPPBlock(hidden_channels: int, int_emb_size: int, basis_emb_size: int, num_spherical: int, num_radial: int, num_before_skip: int, num_after_skip: int, act='silu')#
Bases: torch.nn.Module
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes:
```python
import torch.nn as nn
import torch.nn.functional as F


class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))
```
Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note: As per the example above, an __init__() call to the parent class must be made before assignment on the child.

- Variables:
  training (bool) – Boolean representing whether this module is in training or evaluation mode.
- act#
- lin_rbf1#
- lin_rbf2#
- lin_sbf1#
- lin_sbf2#
- lin_kj#
- lin_ji#
- lin_down#
- lin_up#
- layers_before_skip#
- lin#
- layers_after_skip#
- reset_parameters() → None#
- forward(x, rbf, sbf, idx_kj, idx_ji)#
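A shape-level sketch of driving this block in isolation, following the constructor signature above. The tensor sizes below (10 directed edges, 30 angle triplets) are illustrative assumptions, not values from the source, and fairchem is assumed to be installed:

```python
import torch

from fairchem.core.models.dimenet_plus_plus import InteractionPPBlock

# Illustrative sizes (assumptions): E directed edges, T angle triplets.
E, T = 10, 30
hidden_channels, num_radial, num_spherical = 128, 6, 7

block = InteractionPPBlock(
    hidden_channels=hidden_channels,
    int_emb_size=64,
    basis_emb_size=8,
    num_spherical=num_spherical,
    num_radial=num_radial,
    num_before_skip=1,
    num_after_skip=2,
)

x = torch.randn(E, hidden_channels)               # per-edge message embeddings
rbf = torch.randn(E, num_radial)                  # radial basis values per edge
sbf = torch.randn(T, num_spherical * num_radial)  # spherical basis per triplet
idx_kj = torch.randint(0, E, (T,))                # triplet -> edge k->j
idx_ji = torch.randint(0, E, (T,))                # triplet -> edge j->i

out = block(x, rbf, sbf, idx_kj, idx_ji)          # refined messages, same shape as x
```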
- class core.models.dimenet_plus_plus.OutputPPBlock(num_radial: int, hidden_channels: int, out_emb_channels: int, out_channels: int, num_layers: int, act: str = 'silu')#
Bases: torch.nn.Module
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes:
```python
import torch.nn as nn
import torch.nn.functional as F


class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))
```
Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note: As per the example above, an __init__() call to the parent class must be made before assignment on the child.

- Variables:
  training (bool) – Boolean representing whether this module is in training or evaluation mode.
- act#
- lin_rbf#
- lin_up#
- lins#
- lin#
- reset_parameters() → None#
- forward(x, rbf, i, num_nodes: int | None = None)#
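As above, a hedged sketch of the output block on its own: it weights per-edge messages by the radial basis, scatters them onto their target atoms, and projects to out_channels. All sizes here are toy assumptions:

```python
import torch

from fairchem.core.models.dimenet_plus_plus import OutputPPBlock

E, N = 10, 4  # assumed toy sizes: 10 directed edges over 4 atoms
num_radial, hidden_channels = 6, 128

block = OutputPPBlock(
    num_radial=num_radial,
    hidden_channels=hidden_channels,
    out_emb_channels=256,
    out_channels=1,
    num_layers=3,
)

x = torch.randn(E, hidden_channels)  # per-edge message embeddings
rbf = torch.randn(E, num_radial)     # radial basis values per edge
i = torch.randint(0, N, (E,))        # target atom index of each edge

per_atom = block(x, rbf, i, num_nodes=N)  # aggregated per-atom output, [N, out_channels]
```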
- class core.models.dimenet_plus_plus.DimeNetPlusPlus(hidden_channels: int, out_channels: int, num_blocks: int, int_emb_size: int, basis_emb_size: int, out_emb_channels: int, num_spherical: int, num_radial: int, cutoff: float = 5.0, envelope_exponent: int = 5, num_before_skip: int = 1, num_after_skip: int = 2, num_output_layers: int = 3, act: str = 'silu')#
Bases: torch.nn.Module
DimeNet++ implementation based on klicperajo/dimenet.
- Parameters:
  - hidden_channels (int) – Hidden embedding size.
  - out_channels (int) – Size of each output sample.
  - num_blocks (int) – Number of building blocks.
  - int_emb_size (int) – Embedding size used for interaction triplets.
  - basis_emb_size (int) – Embedding size used in the basis transformation.
  - out_emb_channels (int) – Embedding size used for atoms in the output block.
  - num_spherical (int) – Number of spherical harmonics.
  - num_radial (int) – Number of radial basis functions.
  - cutoff (float, optional) – Cutoff distance for interatomic interactions. (default: 5.0)
  - envelope_exponent (int, optional) – Shape of the smooth cutoff. (default: 5)
  - num_before_skip (int, optional) – Number of residual layers in the interaction blocks before the skip connection. (default: 1)
  - num_after_skip (int, optional) – Number of residual layers in the interaction blocks after the skip connection. (default: 2)
  - num_output_layers (int, optional) – Number of linear layers for the output blocks. (default: 3)
  - act (str, optional) – The activation function. (default: "silu")
- url = 'https://github.com/klicperajo/dimenet/raw/master/pretrained'#
- act#
- cutoff#
- num_blocks#
- rbf#
- sbf#
- emb#
- output_blocks#
- interaction_blocks#
- reset_parameters() → None#
- triplets(edge_index, cell_offsets, num_nodes: int)#
- abstract forward(z, pos, batch=None)#
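forward() here is abstract; the concrete model is DimeNetPlusPlusWrap below. The triplets() helper can still be exercised directly. A small sketch, assuming fairchem is installed, a hand-built 4-atom chain graph, and zero cell offsets (no periodic images):

```python
import torch

from fairchem.core.models.dimenet_plus_plus import DimeNetPlusPlusWrap

model = DimeNetPlusPlusWrap(otf_graph=True, use_pbc=False)

# A 4-atom chain with edges in both directions (j -> i convention).
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
cell_offsets = torch.zeros(edge_index.size(1), 3)  # no PBC images

# Expands edges into angle triplets k -> j -> i; the returned idx_kj / idx_ji
# entries index back into the edge list and feed InteractionPPBlock.forward.
triplet_info = model.triplets(edge_index, cell_offsets, num_nodes=4)
```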
- class core.models.dimenet_plus_plus.DimeNetPlusPlusWrapEnergyAndForceHead(backbone)#
Bases: torch.nn.Module, fairchem.core.models.base.HeadInterface
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes:
```python
import torch.nn as nn
import torch.nn.functional as F


class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))
```
Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note: As per the example above, an __init__() call to the parent class must be made before assignment on the child.

- Variables:
  training (bool) – Boolean representing whether this module is in training or evaluation mode.
- regress_forces#
- forward(data: torch_geometric.data.batch.Batch, emb: dict[str, torch.Tensor]) → dict[str, torch.Tensor]#
Head forward.
- Parameters:
  - data (DataBatch) – Atomic systems as input.
  - emb (dict[str, torch.Tensor]) – Embeddings of the input as generated by the backbone.
- Returns:
  outputs – One or more targets generated by this head.
- Return type:
  dict[str, torch.Tensor]
- class core.models.dimenet_plus_plus.DimeNetPlusPlusWrap(use_pbc: bool = True, use_pbc_single: bool = False, regress_forces: bool = True, hidden_channels: int = 128, num_blocks: int = 4, int_emb_size: int = 64, basis_emb_size: int = 8, out_emb_channels: int = 256, num_spherical: int = 7, num_radial: int = 6, otf_graph: bool = False, cutoff: float = 10.0, envelope_exponent: int = 5, num_before_skip: int = 1, num_after_skip: int = 2, num_output_layers: int = 3)#
Bases: DimeNetPlusPlus, fairchem.core.models.base.GraphModelMixin
DimeNet++ implementation based on klicperajo/dimenet.
- Parameters:
  - hidden_channels (int) – Hidden embedding size.
  - out_channels (int) – Size of each output sample.
  - num_blocks (int) – Number of building blocks.
  - int_emb_size (int) – Embedding size used for interaction triplets.
  - basis_emb_size (int) – Embedding size used in the basis transformation.
  - out_emb_channels (int) – Embedding size used for atoms in the output block.
  - num_spherical (int) – Number of spherical harmonics.
  - num_radial (int) – Number of radial basis functions.
  - cutoff (float, optional) – Cutoff distance for interatomic interactions. (default: 10.0)
  - envelope_exponent (int, optional) – Shape of the smooth cutoff. (default: 5)
  - num_before_skip (int, optional) – Number of residual layers in the interaction blocks before the skip connection. (default: 1)
  - num_after_skip (int, optional) – Number of residual layers in the interaction blocks after the skip connection. (default: 2)
  - num_output_layers (int, optional) – Number of linear layers for the output blocks. (default: 3)
  - act (str, optional) – The activation function. (default: "silu")
- regress_forces#
- use_pbc#
- use_pbc_single#
- cutoff#
- otf_graph#
- max_neighbors = 50#
- _forward(data)#
- forward(data)#
- property num_params: int#
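An end-to-end inference sketch under stated assumptions: the fairchem convention of atomic_numbers / pos / natoms / cell fields on a torch_geometric Batch, a non-periodic molecule, and on-the-fly graph construction. The field names and the exact output keys are assumptions here, not guarantees of the API:

```python
import torch
from torch_geometric.data import Batch, Data

from fairchem.core.models.dimenet_plus_plus import DimeNetPlusPlusWrap

model = DimeNetPlusPlusWrap(
    use_pbc=False,        # treat the system as a gas-phase molecule
    otf_graph=True,       # build the radius graph on the fly
    regress_forces=True,
    cutoff=6.0,
)

# A toy water-like geometry; field names follow fairchem's data conventions
# (atomic_numbers, pos, natoms, cell) and are assumptions here.
water = Data(
    atomic_numbers=torch.tensor([8, 1, 1]),
    pos=torch.tensor([[0.00, 0.00, 0.0],
                      [0.96, 0.00, 0.0],
                      [-0.24, 0.93, 0.0]]),
    natoms=torch.tensor([3]),
    cell=torch.zeros(1, 3, 3),
)
batch = Batch.from_data_list([water])

out = model(batch)  # expected: a dict with energy (and forces when regress_forces=True)
```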
- class core.models.dimenet_plus_plus.DimeNetPlusPlusWrapBackbone(use_pbc: bool = True, use_pbc_single: bool = False, regress_forces: bool = True, hidden_channels: int = 128, num_blocks: int = 4, int_emb_size: int = 64, basis_emb_size: int = 8, out_emb_channels: int = 256, num_spherical: int = 7, num_radial: int = 6, otf_graph: bool = False, cutoff: float = 10.0, envelope_exponent: int = 5, num_before_skip: int = 1, num_after_skip: int = 2, num_output_layers: int = 3)#
Bases: DimeNetPlusPlusWrap, fairchem.core.models.base.BackboneInterface
DimeNet++ implementation based on klicperajo/dimenet.
- Parameters:
  - hidden_channels (int) – Hidden embedding size.
  - out_channels (int) – Size of each output sample.
  - num_blocks (int) – Number of building blocks.
  - int_emb_size (int) – Embedding size used for interaction triplets.
  - basis_emb_size (int) – Embedding size used in the basis transformation.
  - out_emb_channels (int) – Embedding size used for atoms in the output block.
  - num_spherical (int) – Number of spherical harmonics.
  - num_radial (int) – Number of radial basis functions.
  - cutoff (float, optional) – Cutoff distance for interatomic interactions. (default: 10.0)
  - envelope_exponent (int, optional) – Shape of the smooth cutoff. (default: 5)
  - num_before_skip (int, optional) – Number of residual layers in the interaction blocks before the skip connection. (default: 1)
  - num_after_skip (int, optional) – Number of residual layers in the interaction blocks after the skip connection. (default: 2)
  - num_output_layers (int, optional) – Number of linear layers for the output blocks. (default: 3)
  - act (str, optional) – The activation function. (default: "silu")
- forward(data: torch_geometric.data.batch.Batch) → dict[str, torch.Tensor]#
Backbone forward.
- Parameters:
  - data (DataBatch) – Atomic systems as input.
- Returns:
  embedding – Backbone embeddings for the given input.
- Return type:
  dict[str, torch.Tensor]
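The backbone/head split mirrors fairchem's BackboneInterface / HeadInterface pattern: the backbone produces a dict of embeddings and the head turns it into targets. A minimal composition sketch, using the same assumed toy system and field names as the DimeNetPlusPlusWrap example above:

```python
import torch
from torch_geometric.data import Batch, Data

from fairchem.core.models.dimenet_plus_plus import (
    DimeNetPlusPlusWrapBackbone,
    DimeNetPlusPlusWrapEnergyAndForceHead,
)

backbone = DimeNetPlusPlusWrapBackbone(use_pbc=False, otf_graph=True, regress_forces=True)
head = DimeNetPlusPlusWrapEnergyAndForceHead(backbone)

# Same assumed water-like toy system as before.
batch = Batch.from_data_list([Data(
    atomic_numbers=torch.tensor([8, 1, 1]),
    pos=torch.tensor([[0.00, 0.00, 0.0],
                      [0.96, 0.00, 0.0],
                      [-0.24, 0.93, 0.0]]),
    natoms=torch.tensor([3]),
    cell=torch.zeros(1, 3, 3),
)])

emb = backbone(batch)   # embedding dict produced by the backbone forward
out = head(batch, emb)  # target dict, e.g. energy and forces
```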