# Open Catalyst 2020 Nudged Elastic Band (OC20NEB)

## Overview
This validation dataset was used to assess model performance in CatTSunami: Accelerating Transition State Energy Calculations with Pre-trained Graph Neural Networks. It comprises 932 NEB relaxation trajectories spanning three reaction classes: desorptions, dissociations, and transfers. NEB calculations allow us to find transition states, and because the rate of reaction is determined by the transition state energy, access to transition states is very important for catalysis research. For more information, check out the paper.
## File Structure and Contents
The tar file contains 3 subdirectories: dissociations, desorptions, and transfers. As the names imply, these directories contain the converged DFT trajectories for each of the reaction classes. Within these directories, the trajectories are named to identify their contents. Here is an example and the anatomy of the name:
`desorption_id_83_2409_9_111-4_neb1.0.traj`

- `desorption` indicates the reaction type (dissociation and transfer are the other possibilities)
- `id` identifies that the material belongs to the validation in-domain split (`ood`, out of domain, is the other possibility)
- `83` is the task id. This does not provide relevant information
- `2409` is the bulk index of the bulk used in the ocdata bulk pickle file
- `9` is the reaction index. For each reaction type there is a reaction pickle file in the repository; in this case it is the 9th entry in that pickle file
- `111-4`: the first 3 numbers are the Miller indices (i.e. the (1,1,1) surface), and the last number corresponds to the shift value. In this case the 4th shift enumerated was the one used
- `neb1.0`: the number here indicates the k value used. For the full dataset, 1.0 was used, so this does not distinguish any of the trajectories from one another
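The naming scheme above can be parsed programmatically. Here is a minimal sketch; the function and field names are our own labels for illustration, not an official API, and it assumes non-negative Miller indices as in this example:

```python
def parse_neb_filename(name: str) -> dict:
    """Split an OC20NEB trajectory filename into its labeled fields."""
    stem = name.removesuffix(".traj")
    # e.g. desorption_id_83_2409_9_111-4_neb1.0
    parts = stem.split("_")
    miller, shift = parts[5].rsplit("-", 1)
    return {
        "reaction_type": parts[0],   # desorption / dissociation / transfer
        "split": parts[1],           # id or ood
        "task_id": int(parts[2]),
        "bulk_index": int(parts[3]),
        "reaction_index": int(parts[4]),
        "miller_indices": tuple(int(c) for c in miller),
        "shift_index": int(shift),
        "k": float(parts[6].removeprefix("neb")),
    }

info = parse_neb_filename("desorption_id_83_2409_9_111-4_neb1.0.traj")
# info["miller_indices"] is (1, 1, 1), info["shift_index"] is 4
```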
Each trajectory file contains repeating sets of frames. Although the initial and final frames are not optimized during the NEB, they are saved at every iteration. For this dataset, 10 frames were used per set, 8 of which were optimized over the NEB, so the length of the trajectory is the number of iterations (N) * 10. If you wanted to look at the frame set prior to optimization and the optimized frame set, you could get them like this:
```python
from __future__ import annotations

!wget https://dl.fbaipublicfiles.com/opencatalystproject/data/large_files/desorption_id_83_2409_9_111-4_neb1.0.traj

from ase.io import read

traj = read("desorption_id_83_2409_9_111-4_neb1.0.traj", ":")
unrelaxed_frames = traj[0:10]  # frame set before any NEB optimization
relaxed_frames = traj[-10:]    # frame set after the final iteration
```
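Because every iteration writes the full 10-frame set, the iteration count can be recovered from the trajectory length. A small sanity-check sketch, assuming the 10-frame layout described above:

```python
FRAMES_PER_ITERATION = 10

def count_neb_iterations(trajectory: list) -> int:
    """Return the number of NEB iterations stored in a repeating-frame trajectory."""
    n_frames = len(trajectory)
    if n_frames % FRAMES_PER_ITERATION != 0:
        raise ValueError("trajectory length is not a multiple of the frame-set size")
    return n_frames // FRAMES_PER_ITERATION

# e.g. a trajectory with 420 frames corresponds to 42 iterations
assert count_neb_iterations(list(range(420))) == 42
```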
--2025-11-21 23:38:37-- https://dl.fbaipublicfiles.com/opencatalystproject/data/large_files/desorption_id_83_2409_9_111-4_neb1.0.traj
Resolving dl.fbaipublicfiles.com (dl.fbaipublicfiles.com)...
3.167.112.66, 3.167.112.129, 3.167.112.51, ...
Connecting to dl.fbaipublicfiles.com (dl.fbaipublicfiles.com)|3.167.112.66|:443... connected.
HTTP request sent, awaiting response...
200 OK
Length: 10074935 (9.6M) [binary/octet-stream]
Saving to: ‘desorption_id_83_2409_9_111-4_neb1.0.traj’
2025-11-21 23:38:38 (18.9 MB/s) - ‘desorption_id_83_2409_9_111-4_neb1.0.traj’ saved [10074935/10074935]
## Download
| Splits | Size of compressed version (in bytes) | Size of uncompressed version (in bytes) | MD5 checksum (download link) |
|---|---|---|---|
| ASE Trajectories | 1.5G | 6.3G | |
## Use
One more note: we have not prepared an LMDB for this dataset, because NEB calculations are not supported directly in the repository. You must use the ASE-compatible calculator along with ASE infrastructure to run NEB calculations. Here is an example of its use:
```python
import os

from ase.io import read
from ase.mep import DyNEB
from ase.optimize import BFGS
from fairchem.core import FAIRChemCalculator, pretrained_mlip

traj = read("desorption_id_83_2409_9_111-4_neb1.0.traj", ":")
images = traj[0:10]  # frame set prior to optimization

predictor = pretrained_mlip.get_predict_unit("uma-s-1p1")

neb = DyNEB(images, k=1)
for image in images:
    image.calc = FAIRChemCalculator(predictor, task_name="oc20")

optimizer = BFGS(
    neb,
    trajectory="neb.traj",
)

# Use a small number of steps here to keep the docs fast during CI,
# but otherwise use quite reasonable settings.
fast_docs = os.environ.get("FAST_DOCS", "false").lower() == "true"
optimization_steps = 20 if fast_docs else 300

conv = optimizer.run(fmax=0.45, steps=optimization_steps)
if conv:
    neb.climb = True  # switch to climbing-image NEB once the band is roughly converged
    conv = optimizer.run(fmax=0.05, steps=optimization_steps)
```
WARNING:root:device was not explicitly set, using device='cuda'.
Step Time Energy fmax
BFGS: 0 23:38:52 -305.763008 5.169706
BFGS: 1 23:38:53 -305.691698 11.366598
BFGS: 2 23:38:54 -305.916311 1.889962
BFGS: 3 23:38:55 -305.932505 2.616030
BFGS: 4 23:38:56 -306.010363 2.264344
BFGS: 5 23:38:57 -306.003679 6.892219
BFGS: 6 23:38:58 -306.254759 9.617148
BFGS: 7 23:38:59 -306.224756 3.371028
BFGS: 8 23:39:00 -306.290783 4.665928
BFGS: 9 23:39:01 -306.315119 0.727081
BFGS: 10 23:39:02 -306.329416 0.653817
BFGS: 11 23:39:03 -306.357723 1.619395
BFGS: 12 23:39:04 -306.412172 1.941202
BFGS: 13 23:39:05 -306.441252 0.604925
BFGS: 14 23:39:06 -306.471014 0.559246
BFGS: 15 23:39:07 -306.495130 2.147982
BFGS: 16 23:39:08 -306.497898 0.480617
BFGS: 17 23:39:09 -306.504509 0.516031
BFGS: 18 23:39:10 -306.511309 0.709043
BFGS: 19 23:39:11 -306.508453 0.834315
BFGS: 20 23:39:12 -306.478359 1.204801
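Once the NEB has converged, the transition state is the highest-energy image along the band. A sketch of extracting the barrier from the image energies; the function name is our own, and with a real converged band you would collect energies via `[image.get_potential_energy() for image in images]`:

```python
def transition_state_info(energies: list[float]) -> tuple[int, float]:
    """Return the index of the highest-energy image and the forward barrier
    (transition state energy minus the initial-image energy)."""
    ts_index = max(range(len(energies)), key=energies.__getitem__)
    barrier = energies[ts_index] - energies[0]
    return ts_index, barrier

# Illustrative energies (eV) for a 5-image band, not taken from the run above
idx, barrier = transition_state_info([-306.5, -306.2, -305.8, -306.0, -306.4])
# idx is 2; barrier is about 0.7 eV
```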