Figure 3 from the Orb paper
Left: Model forward-pass speed (excluding featurization) compared to MACE on a single NVIDIA A100 GPU. At large system sizes, Orb is 3 to 6 times faster than MACE.
Right: End-to-end model inference speed for a 100-atom system on a single NVIDIA A100 when implemented as a Calculator object in the Atomic Simulation Environment (ASE) Python library. The D3 dispersion correction adds a substantial cost, which Orb models amortize by incorporating the correction into their training datasets. All measurements are reported as the median of 50 runs.
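Since the right panel measures inference through ASE's Calculator interface, the sketch below shows what such a timing harness could look like: it times repeated single-point energy and force evaluations for a ~100-atom system and reports the median over 50 runs, mirroring the protocol in the caption. This is not the paper's benchmark code; ASE's built-in EMT calculator is used as a stand-in for the Orb or MACE calculator under test.

```python
# Minimal ASE Calculator timing sketch (assumption: not the authors' benchmark script).
import time
from statistics import median

from ase.build import bulk
from ase.calculators.emt import EMT

# ~108-atom copper supercell (3x3x3 conventional fcc cells).
atoms = bulk("Cu", "fcc", a=3.6, cubic=True).repeat((3, 3, 3))

# Stand-in calculator; swap in the Orb or MACE ASE calculator being compared.
atoms.calc = EMT()

timings = []
for _ in range(50):
    # Perturb positions slightly so each iteration triggers a fresh evaluation
    # instead of returning cached results.
    atoms.rattle(stdev=1e-4)
    start = time.perf_counter()
    atoms.get_potential_energy()
    atoms.get_forces()
    timings.append(time.perf_counter() - start)

print(f"Median per-call time over 50 runs: {median(timings) * 1e3:.2f} ms")
```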