Not exactly the most rigorous test, but this side-by-side comparison shows the difference between running MD locally (M2 MacBook Air) and on a proper server (g4dn.2xl with a T4 GPU). Each log covers 100 simulation steps.
Table S1 from the MatterSim paper
Left: Model forward-pass speed (excluding featurization) compared to MACE on a single NVIDIA A100 GPU. At large system sizes, Orb is 3 to 6 times faster than MACE. Right: End-to-end model inference speed for a 100-atom system on a single NVIDIA A100 when implemented as a Calculator object in the Atomic Simulation Environment (ASE) Python library. The D3 dispersion correction adds a substantial cost, which Orb models amortize by incorporating the corrections into their training datasets. All measurements are reported as the median of 50 runs.
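The "median of 50 runs" protocol above is worth sketching, since taking the median rather than the mean is what makes inference timings robust to warm-up and scheduling noise. Below is a minimal, stdlib-only sketch of that methodology; `fake_forward_pass` is a hypothetical stand-in for the timed work, and a real benchmark would instead call something like `atoms.get_potential_energy()` through an ASE Calculator.

```python
import statistics
import time

def benchmark(fn, n_runs=50, n_warmup=5):
    """Time fn() n_runs times and return the median wall-clock seconds.

    The median (as reported in the table) is robust to outliers from
    JIT compilation, GC pauses, or OS scheduling jitter; warm-up runs
    are excluded from the measurement entirely.
    """
    for _ in range(n_warmup):
        fn()  # warm-up: not measured
    times = []
    for _ in range(n_runs):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return statistics.median(times)

# Hypothetical stand-in for a model forward pass, so the sketch
# runs without any ML dependencies installed.
def fake_forward_pass():
    sum(i * i for i in range(10_000))

median_s = benchmark(fake_forward_pass)
print(f"median over 50 runs: {median_s * 1e3:.3f} ms")
```

The same harness can wrap a full MD step to capture end-to-end cost (featurization, forward pass, and any dispersion correction) rather than the forward pass alone.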
Table S5 from the MatterSim paper: Comparison of property prediction performance for M3GNet and Graphormer models.