Figure 2 from the "Orb-v3" paper
Speed and maximum GPU memory allocated on an NVIDIA H200 for the computation of energies, forces, and stress. The batch size is fixed to 1, but the authors vary the number of atoms across the subplots. Relative times are computed with respect to the fastest model: orb-v3 Direct (20 neighbors). Times include both model inference and graph construction, with the latter marked by hatched lines. The graph construction method for Orb is a function of the number of atoms, as described in Appendix D. A key takeaway from this figure is that extreme scalability requires a confluence of i) efficient graph construction, ii) a finite maximum number of neighbors, and iii) non-conservative direct predictions. For the baselines, the authors use mace-medium-mpa-0 (v0.3.10, cuequivariance-torch v0.1.0), mattersim-v1.0.0-5m (v1.1.2), and 7net-mf-ompa (v0.11.0). All models are benchmarked using PyTorch v2.6.0+cu124.
This post explains how to pick from the eight Orb-v3 models, which balance accuracy, speed, and memory for atomistic simulations. It breaks down the model names (orb-v3-X-Y-Z), where X is how forces are computed, Y is the neighbor limit, and Z is the training dataset (omat or mpa). It compares conservative vs. direct force calculations, unlimited vs. limited neighbors, and the AIMD-based -omat models versus the MPTraj/Alexandria-based -mpa models. Readers gain practical guidance for phonon calculations, geometry optimization, and molecular dynamics, including which models excel at energy conservation, speed, or large-scale simulations. The piece also covers workflow tips, performance at scale, and licensing (Apache 2.0). Use this guide to choose the right Orb-v3 model for your system size and research goals.
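The orb-v3-X-Y-Z naming scheme can be illustrated with a small parser. This is a hypothetical sketch, not part of the official orb-models package; the exact field tokens (e.g. "inf" for unlimited neighbors) are assumptions based on the description above.

```python
# Hypothetical helper illustrating the orb-v3-X-Y-Z naming scheme.
# The three choice fields follow the post's description; this parser
# is NOT part of the official orb-models package.
from dataclasses import dataclass
from itertools import product

FORCE_METHODS = {"conservative", "direct"}  # X: how forces are computed
NEIGHBOR_LIMITS = {"inf", "20"}             # Y: max neighbors per atom (assumed tokens)
DATASETS = {"omat", "mpa"}                  # Z: training dataset


@dataclass(frozen=True)
class OrbV3Name:
    force_method: str
    neighbor_limit: str
    dataset: str


def parse_orb_v3_name(name: str) -> OrbV3Name:
    """Split e.g. 'orb-v3-direct-20-omat' into its three choice fields."""
    parts = name.split("-")
    if len(parts) != 5 or parts[:2] != ["orb", "v3"]:
        raise ValueError(f"not an orb-v3 model name: {name!r}")
    x, y, z = parts[2:]
    if x not in FORCE_METHODS or y not in NEIGHBOR_LIMITS or z not in DATASETS:
        raise ValueError(f"unrecognized field in {name!r}")
    return OrbV3Name(x, y, z)


# Enumerating the 2 x 2 x 2 grid yields the eight model variants.
ALL_MODELS = [
    f"orb-v3-{x}-{y}-{z}"
    for x, y, z in product(
        sorted(FORCE_METHODS), sorted(NEIGHBOR_LIMITS), sorted(DATASETS)
    )
]
```

For example, `parse_orb_v3_name("orb-v3-direct-20-omat")` identifies a direct-force model with a 20-neighbor cap trained on the omat dataset.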
