core.models.gemnet.initializers#

Copyright (c) Meta, Inc. and its affiliates.

This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.

Functions#

_standardize(kernel)

Makes sure that N*Var(W) = 1 and E[W] = 0

he_orthogonal_init(tensor: torch.Tensor) → torch.Tensor

Generate a weight matrix with variance according to He (Kaiming) initialization.

Module Contents#

core.models.gemnet.initializers._standardize(kernel)#

Makes sure that N*Var(W) = 1 and E[W] = 0, where N is the fan-in of the weight tensor W.
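The implementation itself is not reproduced on this page; the following is a minimal PyTorch sketch of such a standardization step, assuming a 2D weight of shape (out_features, in_features) (the torch.nn.Linear convention), so that the fan-in N is the last dimension:

```python
import torch


def _standardize(kernel: torch.Tensor) -> torch.Tensor:
    """Rescale a weight tensor so that E[W] = 0 and N * Var(W) = 1 (sketch)."""
    eps = 1e-6  # guards against division by zero for near-constant kernels
    fan_in = kernel.shape[-1]  # assumption: last dimension is the fan-in N
    var, mean = torch.var_mean(kernel, dim=-1, unbiased=True, keepdim=True)
    # Zero-mean each output row, then scale so that fan_in * Var(W) = 1.
    return (kernel - mean) / torch.sqrt(fan_in * (var + eps))
```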

core.models.gemnet.initializers.he_orthogonal_init(tensor: torch.Tensor) → torch.Tensor#

Generate a weight matrix with variance according to He (Kaiming) initialization, based on a random (semi-)orthogonal matrix. Neural networks are expected to learn better when features are decorrelated, as stated e.g. in “Reducing overfitting in deep networks by decorrelating representations”, “Dropout: a simple way to prevent neural networks from overfitting”, and “Exact solutions to the nonlinear dynamics of learning in deep linear neural networks”.
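As with _standardize, only the docstring is shown here. The sketch below illustrates how such an initializer could combine torch.nn.init.orthogonal_ with the standardization step above; the exact composition of the two steps and the in-place update are assumptions, not the verbatim implementation:

```python
import torch


def he_orthogonal_init(tensor: torch.Tensor) -> torch.Tensor:
    """Fill `tensor` with a (semi-)orthogonal matrix rescaled to He (Kaiming) variance (sketch)."""
    # Start from a random (semi-)orthogonal matrix so that the rows are decorrelated.
    torch.nn.init.orthogonal_(tensor)
    with torch.no_grad():
        # Re-standardize so that E[W] = 0 and N * Var(W) = 1, as documented above.
        tensor.copy_(_standardize(tensor))
    return tensor


# Example usage (hypothetical): initialize the weight of a bias-free linear layer.
linear = torch.nn.Linear(128, 64, bias=False)
he_orthogonal_init(linear.weight)
```

In this sketch the orthogonal starting point provides decorrelated rows, and the subsequent standardization only adjusts the mean and overall scale, so the decorrelation property is largely preserved while the He variance condition is enforced.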