Left: Model forward-pass speed (excluding featurization) compared to MACE on a single NVIDIA A100 GPU. At large system sizes, Orb is between 3 and 6 times faster than MACE.
Right: End-to-end model inference speed for a 100-atom system on a single NVIDIA A100 when implemented as a Calculator object in the Atomic Simulation Environment (ASE) Python library. The D3 dispersion correction adds a substantial cost, which Orb models amortize by incorporating the corrections into their training datasets. All measurements are reported as the median of 50 runs.
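The "median of 50 runs" protocol from the caption can be sketched generically. The `median_runtime` helper below is illustrative and not from the Orb codebase; the dummy workload stands in for what would, in practice, be a call like `atoms.get_potential_energy()` on an ASE `Atoms` object with the model attached as its `Calculator`:

```python
import time
import statistics

def median_runtime(fn, runs=50):
    """Time fn over `runs` calls and return the median wall-clock seconds.

    Mirrors the benchmarking protocol in the caption: report the
    median of 50 runs (robust to warm-up and scheduling outliers).
    """
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()  # in practice: a single-point energy/force evaluation
        times.append(time.perf_counter() - start)
    return statistics.median(times)

# Hypothetical stand-in workload; replace with the real calculator call.
t = median_runtime(lambda: sum(i * i for i in range(10_000)))
```

Using the median rather than the mean keeps one slow outlier (e.g. a CUDA kernel compilation on the first call) from skewing the reported number.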
Notes on Orb
The paper is somewhat basic (and probably still a preprint), but this contribution is nonetheless great!