Authors introduce Orb, a family of universal interatomic potentials for atomistic modeling of materials. Orb models are 3-6 times faster than existing universal potentials, remain stable in simulation across a range of out-of-distribution materials, and, upon release, represented a 31% reduction in error over other methods on the Matbench Discovery benchmark. https://arxiv.org/abs/2410.22570
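For reference, running Orb through ASE looks roughly like the sketch below. It follows the public orb-models package; the entry points (pretrained.orb_v2, ORBCalculator) are taken from its README and may shift between releases.

```python
# Minimal sketch of running Orb as an ASE calculator, following the
# public orb-models package. Entry points (`pretrained.orb_v2`,
# `ORBCalculator`) come from its README and may change between releases.
from ase.build import bulk
from orb_models.forcefield import pretrained
from orb_models.forcefield.calculator import ORBCalculator

device = "cpu"  # or "cuda" if a GPU is available
orbff = pretrained.orb_v2(device=device)

atoms = bulk("Cu", "fcc", a=3.58, cubic=True)
atoms.calc = ORBCalculator(orbff, device=device)

print(atoms.get_potential_energy())  # eV
print(atoms.get_forces())            # eV/Å
```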
How this file is connected to other assets
The paper is somewhat basic (and probably still a preprint), but the contribution is nonetheless great!
After reading the MatterSim paper, I saw that its authors proposed using the MLFF's latent space as a direct property-prediction feature set (see the sketch below). Earlier, I had been thinking about using a VAE (or s
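A rough sketch of that latent-feature idea: pool a pretrained MLFF's per-atom embeddings into one fixed-length vector per structure, then fit an ordinary regressor on top. The embedding extraction is stubbed out with random numbers here; embed_structure is a hypothetical stand-in for however the model exposes its internal representations.

```python
# Sketch of using an MLFF's latent space as a property-prediction
# feature set. `embed_structure` is a hypothetical hook; with a real
# model it would return the per-atom embeddings (n_atoms, 256).
import numpy as np
from ase.build import bulk
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def embed_structure(atoms) -> np.ndarray:
    """Stand-in for the model's per-atom embeddings, shape (n_atoms, 256)."""
    return rng.normal(size=(len(atoms), 256))  # placeholder values only

def descriptor(atoms) -> np.ndarray:
    return embed_structure(atoms).mean(axis=0)  # mean-pool to one 256-dim vector

# With real data, `structures` would be a list of ase.Atoms and `y` the
# measured property (e.g. formation energy or Tc).
structures = [bulk("Cu", "fcc", a=3.58) for _ in range(100)]
y = rng.normal(size=100)  # placeholder targets

X = np.stack([descriptor(a) for a in structures])
print(cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2"))
```

Mean-pooling is just the simplest permutation-invariant readout; sum-pooling or attention over atoms would be natural variations.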
Discover other files like this one
The 2nd generation of our atoms-in-molecules neural network potential (AIMNet2) is applicable to species composed of up to 14 chemical elements in both neutral and charged states, making it a valuable method for modeling the majority of non-metallic compounds. Using an exhaustive dataset of 2 × 10⁷ quantum chemical calculations at the hybrid DFT level of theory, AIMNet2 combines ML-parameterized short-range and physics-based long-range terms to attain generalizability that reaches from simple organics to diverse molecules with “exotic” element-organic bonding.
Machine-learned force fields have transformed the atomistic modeling of materials by enabling simulations of ab initio quality on unprecedented time and length scales. However, they are currently limited by: (i) the significant computational and human effort that must go into development and validation of potentials for each particular system of interest; and (ii) a general lack of transferability from one chemical system to the next. Here, using the state-of-the-art MACE architecture, we introduce a single general-purpose ML model, trained on a public database of 150k inorganic crystals, that is capable of running stable molecular dynamics on molecules and materials.
Left: Model forward-pass speed (excluding featurization) compared to MACE on a single NVIDIA A100 GPU. At large system sizes, Orb is between 3 and 6 times faster than MACE. Right: End-to-end model inference speed for a 100-atom system on a single NVIDIA A100 when implemented as a Calculator object in the Atomic Simulation Environment (ASE) Python library. The D3 dispersion correction adds a substantial cost that Orb models avoid at inference time, as the correction is incorporated into their training datasets. All measurements are reported as the median of 50 runs.
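The timing protocol (median of 50 end-to-end calculator calls on a ~100-atom system) could be reproduced along the lines below. ASE's built-in EMT potential stands in for Orb so the sketch runs as-is; swap in the ORBCalculator from the earlier snippet to time Orb itself. This is a sketch of the methodology, not the authors' benchmark code.

```python
# Sketch of the caption's protocol: median of 50 end-to-end
# energy/force evaluations through an ASE Calculator.
import statistics
import time
from ase.build import bulk
from ase.calculators.emt import EMT

atoms = bulk("Cu", "fcc", a=3.58, cubic=True) * (3, 3, 3)  # 108 atoms
atoms.calc = EMT()  # stand-in; replace with ORBCalculator to time Orb

timings = []
for _ in range(50):
    atoms.calc.results.clear()  # drop cached results so each run recomputes
    start = time.perf_counter()
    atoms.get_potential_energy()
    atoms.get_forces()
    timings.append(time.perf_counter() - start)

print(f"median: {statistics.median(timings) * 1e3:.2f} ms")
```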
Plot produced by taking the features generated by Orb (256-dimensional output), applying different dimensionality-reduction methods to them, and coloring each point by Tc from the 3DSC database.
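A plot of that kind could be produced roughly as follows. The Orb features and 3DSC Tc values are stubbed out with random arrays so the sketch runs standalone; with real data, features would hold the 256-dim Orb embedding per 3DSC entry and tc its critical temperature.

```python
# Sketch of the visualization: reduce 256-dim features to 2-D with two
# methods and color each point by critical temperature.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Stand-ins for the real Orb embeddings and 3DSC Tc labels.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 256))
tc = rng.uniform(0.0, 120.0, size=500)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, (name, reducer) in zip(
    axes, [("PCA", PCA(n_components=2)), ("t-SNE", TSNE(n_components=2))]
):
    xy = reducer.fit_transform(features)
    sc = ax.scatter(xy[:, 0], xy[:, 1], c=tc, s=5, cmap="viridis")
    ax.set_title(name)
    fig.colorbar(sc, ax=ax, label="Tc (K)")
fig.tight_layout()
plt.show()
```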