Simple simulations using the OCP ASE calculator#

To introduce OCP we start by using it to calculate adsorption energies for a simple, atomic adsorbate at a site we specify. Conceptually, you do this the same way you would with density functional theory: you create a slab model for the surface, place an adsorbate on it as an initial guess, run a relaxation to find the lowest-energy geometry, and then compute the adsorption energy using reference states for the adsorbate.

You do have to be careful with the details, though. Some OCP model/checkpoint combinations return a total energy, as density functional theory would, while others return an “adsorption energy” directly. You have to know which one you are using. The model in this example returns an “adsorption energy”.

Calculating adsorption energies#

Adsorption energies in OCP are defined differently than you might be used to. For example, you may want the adsorption energy of oxygen, which conventionally you would compute from this reaction:

1/2 O2 + slab -> slab-O

This is not what is done in OCP. Instead, adsorption energies are referenced to a different reaction:

x CO + (x + y/2 - z) H2 + (z-x) H2O + w/2 N2 + * -> CxHyOzNw*

Here, x=y=w=0, z=1, so the reaction ends up as

-H2 + H2O + * -> O*

or alternatively,

H2O + * -> O* + H2
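The stoichiometric coefficients follow directly from the formula above. As an illustration, here is a small helper function (hypothetical, not part of fairchem) that computes the reference reaction coefficients for any CxHyOzNw adsorbate:

def reference_coefficients(x, y, z, w):
    """Coefficients in: x CO + (x + y/2 - z) H2 + (z - x) H2O + w/2 N2 + * -> CxHyOzNw*."""
    return {'CO': x, 'H2': x + y / 2 - z, 'H2O': z - x, 'N2': w / 2}

print(reference_coefficients(0, 0, 1, 0))  # O*:  {'CO': 0, 'H2': -1.0, 'H2O': 1, 'N2': 0.0}
print(reference_coefficients(0, 1, 1, 0))  # OH*: {'CO': 0, 'H2': -0.5, 'H2O': 1, 'N2': 0.0}

A negative coefficient means that species appears on the product side, as with the -H2 above.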

It is possible to compute other reactions through thermodynamic cycles. If we can look up re1 below and compute re2 with OCP:

H2 + 1/2 O2 -> H2O  re1 = -3.03 eV
H2O + * -> O* + H2  re2  # Get from OCP as a direct calculation

Then, the adsorption energy for

1/2 O2 + * -> O*

is just re1 + re2.

Based on https://atct.anl.gov/Thermochemical%20Data/version%201.118/species/?species_number=986, the formation energy of water is about -3.03 eV at standard state. You could also compute this using DFT.
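As a minimal sketch of the cycle arithmetic (re2 here is a placeholder; the relaxation later in this tutorial gives ~0.76 eV):

re1 = -3.03        # eV, H2 + 1/2 O2 -> H2O, from ATcT
re2 = 0.76         # eV, H2O + * -> O* + H2 (placeholder for the OCP prediction)
E_ads = re1 + re2  # eV, 1/2 O2 + * -> O*
print(f'{E_ads:.2f} eV')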

The first step is getting a checkpoint for the model we want to use. eSCN is currently the state-of-the-art model (see the arXiv paper). This next cell will download the checkpoint if you don’t have it already. To keep the resource requirements of this tutorial low, however, we use a smaller EquiformerV2 checkpoint (which still works well!).

The different models have different compute requirements. If you find your kernel is crashing, it probably means you have exceeded the allowed amount of memory. This checkpoint works fine in this example, but it may crash your kernel if you use it in the NRR example.

from fairchem.core.models.model_registry import model_name_to_local_file

checkpoint_path = model_name_to_local_file('EquiformerV2-31M-S2EF-OC20-All+MD', local_cache='/tmp/fairchem_checkpoints/')
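If you are not sure which checkpoints exist, the model registry also exposes the list of pretrained model names (assuming your fairchem version provides available_pretrained_models):

from fairchem.core.models.model_registry import available_pretrained_models

# Any name printed here can be passed to model_name_to_local_file
print(available_pretrained_models)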

Next we load the checkpoint. The output is somewhat verbose, but it can be informative for debugging purposes.

from fairchem.core.common.relaxation.ase_utils import OCPCalculator

# cpu=False runs on the GPU; set cpu=True if you do not have a GPU available
calc = OCPCalculator(checkpoint_path=checkpoint_path, cpu=False)
# calc = OCPCalculator(checkpoint_path=checkpoint_path, cpu=True)
/home/runner/work/fairchem/fairchem/src/fairchem/core/models/escn/so3.py:23: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. ... We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file.
  _Jd = torch.load(os.path.join(os.path.dirname(__file__), "Jd.pt"))
(The same FutureWarning repeats for scn/spherical_harmonics.py:23, equiformer_v2/wigner.py:10, and common/relaxation/ase_utils.py:150.)
/home/runner/work/fairchem/fairchem/src/fairchem/core/models/equiformer_v2/layer_norm.py:75: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  @torch.cuda.amp.autocast(enabled=False)
(The same FutureWarning repeats at layer_norm.py:175, 263, and 357.)
WARNING:root:Detected old config, converting to new format. Consider updating to avoid potential incompatibilities.
WARNING:root:equiformer_v2 (EquiformerV2) class is deprecated in favor of equiformer_v2_backbone_and_heads (EquiformerV2BackboneAndHeads)
/home/runner/work/fairchem/fairchem/src/fairchem/core/modules/normalization/normalizer.py:69: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  "mean": torch.tensor(state_dict["mean"]),
WARNING:root:No seed has been set in modelcheckpoint or OCPCalculator! Results may not be reproducible on re-run

Next we can build a slab with an adsorbate on it. Here we use ASE to build a Pt(111) slab, with the experimental lattice constant that is the default. This can introduce small errors relative to DFT, since DFT lattice constants can differ from experiment by a few percent and it is common to use DFT lattice constants. In this example, we do not constrain any layers.

from ase.build import fcc111, add_adsorbate
from ase.optimize import BFGS

re1 = -3.03  # eV, H2 + 1/2 O2 -> H2O (formation energy of water)

# Build a 5-layer Pt(111) slab and place an O atom at an fcc hollow site
slab = fcc111('Pt', size=(2, 2, 5), vacuum=10.0)
add_adsorbate(slab, 'O', height=1.2, position='fcc')

slab.calc = calc
opt = BFGS(slab)
opt.run(fmax=0.05, steps=100)
slab_e = slab.get_potential_energy()  # energy for H2O + * -> O* + H2
slab_e + re1                          # adsorption energy for 1/2 O2 + * -> O*
      Step     Time          Energy          fmax
BFGS:    0 21:46:36        1.212761        1.631530
BFGS:    1 21:46:37        1.071970        0.906286
BFGS:    2 21:46:38        0.981418        0.637568
BFGS:    3 21:46:38        0.960159        0.689580
BFGS:    4 21:46:39        0.881591        0.561220
BFGS:    5 21:46:40        0.835982        0.376775
BFGS:    6 21:46:40        0.824549        0.420579
BFGS:    7 21:46:41        0.814234        0.512919
BFGS:    8 21:46:42        0.827346        0.902131
BFGS:    9 21:46:42        0.779740        0.358879
BFGS:   10 21:46:43        0.770880        0.205400
BFGS:   11 21:46:44        0.757477        0.122984
BFGS:   12 21:46:44        0.755318        0.112765
BFGS:   13 21:46:45        0.754121        0.105082
BFGS:   14 21:46:46        0.755007        0.087168
BFGS:   15 21:46:46        0.757398        0.047700
-2.2726015734672544

It is good practice to look at your geometries to make sure they are what you expect.

import matplotlib.pyplot as plt
from ase.visualize.plot import plot_atoms

# Top view (left) and side view (right) of the relaxed slab
fig, axs = plt.subplots(1, 2)
plot_atoms(slab, axs[0])
plot_atoms(slab, axs[1], rotation='-90x')
axs[0].set_axis_off()
axs[1].set_axis_off()
(Figure: top and side views of the relaxed O/Pt(111) slab.)

How did we do? We need a reference point. One published DFT study reports an atomic adsorption energy for O on Pt(111) of about -4.264 eV, for the reaction O + * -> O*. To convert this to the dissociative adsorption energy, we have to add the reaction:

1/2 O2 -> O   D = 2.58 eV (expt)

to get a comparable energy of about -1.68 eV. That leaves a difference of about 0.6 eV to account for (we predicted -2.27 eV above, versus the -1.68 eV reference); the arithmetic is sketched in the code below. The biggest contribution is likely the difference in exchange-correlation functional: the reference data used the PBE functional, while this model was trained on RPBE data (the OC20 dataset). Additional places where differences arise include:

  1. Differences in the lattice constant

  2. The reference energies used for the experimental comparisons, which can differ by up to 0.5 eV from comparable DFT calculations

  3. How many layers are relaxed in the calculation

Some of these differences tend to be systematic, and you can calibrate and correct for them, especially if you augment the predictions with your own DFT calculations.
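Here is the comparison arithmetic as a small sketch, with the values taken from the discussion above:

# Thermodynamic cycle for the reference value:
#   O + * -> O*        E_atomic = -4.264 eV  (DFT literature value)
#   1/2 O2 -> O        D = 2.58 eV           (experimental dissociation energy)
#   1/2 O2 + * -> O*   E_ref = E_atomic + D
E_ref = -4.264 + 2.58  # ~ -1.68 eV
E_ocp = -2.27          # eV, our prediction above
print(f'Discrepancy: {E_ocp - E_ref:.2f} eV')  # ~ -0.59 eV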

See the convergence study below for additional analysis of the factors that influence this number.

Exercises#

  1. Explore the effect of the lattice constant on the adsorption energy.

  2. Try different sites, including the bridge and top sites. Compare the energies, and inspect the resulting geometries.

Next steps#

In the next step, we consider more complex adsorbates relevant to nitrogen reduction, and show how to leverage OCP to automate the search for the most stable adsorbate geometry. See the next step.

Convergence study#

In Calculating adsorption energies we discussed some possible reasons we might see a discrepancy. Here we investigate some factors that impact the computed energies.

In this section, the energies refer to the reaction 1/2 O2 + * -> O*.

Effects of number of layers#

Slab thickness could be a factor. Here we relax the whole slab and see that by about four layers the energy is converged to within ~0.02 eV.

# Converge the adsorption energy with respect to the number of slab layers
for nlayers in [3, 4, 5, 6, 7, 8]:
    slab = fcc111('Pt', size=(2, 2, nlayers), vacuum=10.0)
    add_adsorbate(slab, 'O', height=1.2, position='fcc')

    slab.calc = calc
    opt = BFGS(slab, logfile=None)
    opt.run(fmax=0.05, steps=100)
    slab_e = slab.get_potential_energy()
    print(f'nlayers = {nlayers}: {slab_e + re1:1.2f} eV')
nlayers = 3: -2.38 eV
nlayers = 4: -2.26 eV
nlayers = 5: -2.27 eV
nlayers = 6: -2.26 eV
nlayers = 7: -2.26 eV
nlayers = 8: -2.27 eV

Effects of relaxation#

It is common to relax only a few layers and constrain the lower layers to their bulk coordinates. We do that here, relaxing only the adsorbate and the top layer.

This has a small effect (about 0.2 eV here).

from ase.constraints import FixAtoms

for nlayers in [3, 4, 5, 6, 7, 8]:
    slab = fcc111('Pt', size=(2, 2, nlayers), vacuum=10.0)
    add_adsorbate(slab, 'O', height=1.2, position='fcc')

    # Fix every layer except the top one (tag 1); the adsorbate has tag 0
    slab.set_constraint(FixAtoms(mask=[atom.tag > 1 for atom in slab]))

    slab.calc = calc
    opt = BFGS(slab, logfile=None)
    opt.run(fmax=0.05, steps=100)
    slab_e = slab.get_potential_energy()
    print(f'nlayers = {nlayers}: {slab_e + re1:1.2f} eV')
nlayers = 3: -2.22 eV
nlayers = 4: -2.07 eV
nlayers = 5: -2.09 eV
nlayers = 6: -2.08 eV
nlayers = 7: -2.09 eV
nlayers = 8: -2.10 eV

Unit cell size#

Coverage effects are quite noticeable with oxygen, so here we consider larger unit cells. The effect is large, and the results don’t look right: adsorption energies usually get more favorable at lower coverage, not less favorable. This suggests fine-tuning could be important even at low coverages.

# Vary the lateral cell size to lower the O coverage from 1 ML (1x1) to 1/25 ML (5x5)
for size in [1, 2, 3, 4, 5]:
    slab = fcc111('Pt', size=(size, size, 5), vacuum=10.0)
    add_adsorbate(slab, 'O', height=1.2, position='fcc')

    slab.set_constraint(FixAtoms(mask=[atom.tag > 1 for atom in slab]))

    slab.calc = calc
    opt = BFGS(slab, logfile=None)
    opt.run(fmax=0.05, steps=100)
    slab_e = slab.get_potential_energy()
    print(f'({size}x{size}): {slab_e + re1:1.2f} eV')
(1x1): -1.00 eV
(2x2): -2.09 eV
(3x3): -1.44 eV
(4x4): -1.48 eV
(5x5): -1.37 eV

Summary#

As with DFT, you should take care to examine how these kinds of decisions affect your results, and determine whether they would change any of your interpretations.