
Compiling ABACUS for GPU acceleration in Modal

Sharing a quick guide on how to get ABACUS compiled and running well in Modal. If you want to run DFT calculations with ABACUS in a serverless environment, this is the guide you're looking for.

Why might you want to do something like this? Maybe you don't have access to HPC; Modal offers cheap, on-demand access to a variety of high-end GPUs, like the A100. It's also a great approach if you need first-principles atomistic simulation data and want to offer an API on top of that functionality so others can use it.

I've had a lot of success with this setup, particularly the plane-wave approach with basis_type pw and ks_solver bpcg. As of v3.9.0, that combination seems to have the best GPU support and utilization.

Let's take a look at the config. For the most part, you should be able to copy and paste this Modal image setup into your projects and be good to go. I went with the modal.Image.from_dockerfile approach, but I imagine you could rework it to use Modal's built-in image-building methods to take better advantage of their build caching.

Copy this code block into a file called Dockerfile:

dockerfile
FROM nvidia/cuda:12.1.0-devel-ubuntu22.04
ARG ABACUS_VERSION=latest

ENV DEBIAN_FRONTEND=noninteractive

# Install system dependencies including MPI
RUN apt-get update && apt-get install -y \
    wget \
    git \
    cmake \
    make \
    gcc \
    g++ \
    gfortran \
    libopenblas-dev \
    liblapack-dev \
    libscalapack-mpi-dev \
    libfftw3-dev \
    libfftw3-mpi-dev \
    libcereal-dev \
    libxc-dev \
    libelpa-dev \
    libmpich-dev \
    python3 \
    python-is-python3 \
    python3-pip \
    python3-dev \
    && rm -rf /var/lib/apt/lists/*

# Create working directory
WORKDIR /opt

# Clone and build ABACUS with CUDA support (A100/T4)
RUN git clone https://github.com/deepmodeling/abacus-develop.git abacus && \
    cd abacus && \
    if [ "$ABACUS_VERSION" = "latest" ]; then \
        ABACUS_LATEST_TAG=$(git describe --tags $(git rev-list --tags --max-count=1)); \
        git checkout "$ABACUS_LATEST_TAG"; \
    else \
        git checkout "$ABACUS_VERSION"; \
    fi && \
    mkdir build && cd build && \
    cmake -DENABLE_LCAO=ON \
          -DENABLE_MPI=ON \
          -DUSE_OPENMP=ON \
          -DUSE_CUDA=1 \
          -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc \
          -DCMAKE_INSTALL_PREFIX=/opt/abacus \
          .. && \
    make -j$(nproc) && \
    make install && \
    echo "Verifying ABACUS build..." && \
    abacus --version || true

# Add ABACUS to PATH
ENV PATH="/opt/abacus/bin:${PATH}"

# Note: Pseudopotentials and basis sets are provided via Modal volume mounted at /data
# The volume should contain:
#   /data/pp/ - pseudopotentials directory
#   /data/orb/ - basis orbitals directory

# Install Python packages
RUN pip3 install --no-cache-dir \
    "pymatgen>=2023.10.0" \
    "numpy>=1.24.0" \
    "ase>=3.22.0" \
    "fastapi>=0.104.0" \
    "python-multipart>=0.0.6"

WORKDIR /workspace

Notice the important build flag -DUSE_CUDA=1, which is what gives us GPU support. When you write your INPUT files, make sure to also set device gpu! See the ABACUS docs for details.
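
For reference, a minimal INPUT for a GPU plane-wave SCF run might look something like this. The basis_type, ks_solver, and device lines match the setup described above; the cutoff, convergence threshold, and pseudopotential path are just illustrative values you'll want to adjust for your own system:

plaintext
INPUT_PARAMETERS
calculation     scf
basis_type      pw
ks_solver       bpcg
device          gpu
ecutwfc         60
scf_thr         1e-7
scf_nmax        100
pseudo_dir      /data/pp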

In your Modal app code, build the image like so:

python
"""
Modal app for ABACUS MAE calculations with GPU acceleration.
Provides FastAPI endpoint for CIF upload and MAE calculation.
"""

import io
import asyncio
import json
import os
import time
import uuid
from pathlib import Path
from typing import Optional

import modal
import shutil

# Create Modal app
app = modal.App("abacus-mae-calculator")

# Single shared volume for calculations, pseudopotentials, and basis
# Layout in container:
#   /data/
#     ├── calculations/
#     ├── pp/
#     └── orb/

abacus_volume = modal.Volume.from_name("abacus-calculator", create_if_missing=True)

# Build image from Dockerfile to include ABACUS binary with GPU support
abacus_image = (
    modal.Image.from_dockerfile("Dockerfile")
    .env(
        {
            "ABACUS_PP_PATH": "/data/pp",
            "ABACUS_BASIS_PATH": "/data/orb",
        }
    )
    # Add our Python modules last for fast iteration
    .add_local_file("abacus_mae_calculator.py", "/root/abacus_mae_calculator.py")
    .add_local_file("abacus_parser.py", "/root/abacus_parser.py")
)

ABACUS will also need pseudopotential and orbital files before it can run. I think it's best to put those in a Modal Volume and mount it into the container so you can update them easily without rebuilding the entire image.

Commands to load those up might look something like this (assuming you have local folders called pseudopotentials/ and basis/):

plaintext
# Upload local pseudopotentials and basis orbitals into the shared Modal volume.
# Remote paths are relative to the volume root; every function mounts the
# volume at /data, so these land at /data/pp and /data/orb in the container.

modal volume put abacus-calculator pseudopotentials /pp
modal volume put abacus-calculator basis /orb
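
With the volume populated, you attach the image, a GPU, and the volume to your Modal functions. Here's a rough sketch of what that looks like; the GPU type, timeout, and function name are just example choices, not anything prescribed by Modal or this project:

python
# Sketch: attach the ABACUS image, a GPU, and the shared volume to a function.
# gpu="A100" and the one-hour timeout are example values; pick what fits your jobs.
@app.function(
    image=abacus_image,
    gpu="A100",
    volumes={"/data": abacus_volume},
    timeout=60 * 60,
)
def run_abacus(calc_id: str) -> None:
    # Calculation logic goes here; see the subprocess example below.
    ...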

Once you get the image built, you'll see a warning about the NVIDIA base image being deprecated. I'm sure you could update it to a supported tag, but I haven't gotten around to that yet.

And that's it for setup! To run ABACUS in the container, I usually launch it with Python's subprocess module:

python
import os
import subprocess

# mpirun needs --allow-run-as-root because Modal containers run as root.
# A single MPI rank is enough here since the heavy lifting happens on the GPU.
cmd = [
    "mpirun",
    "--allow-run-as-root",
    "-np",
    "1",
    "abacus",
]

env = os.environ.copy()  # inherit PATH (set in the Dockerfile) and CUDA settings

result = subprocess.run(
    cmd,
    cwd=calc_dir,  # directory containing INPUT, KPT, and STRU
    capture_output=True,
    text=True,
    env=env,
)

Set up the calc_dir to include all your necessary INPUT, KPT, and STRU files as needed, and then you can let ABACUS do its thing. I would make the calc_dir on the Modal Volume for persistence and observability, so that I could debug runs from the web UI even after the container finished.
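
A rough sketch of that setup, assuming the INPUT, KPT, and STRU contents have already been generated as strings upstream (e.g. from a CIF with pymatgen or ASE); the prepare_calc_dir helper and the directory layout are just how I'd organize it, not anything required by ABACUS or Modal:

python
import uuid
from pathlib import Path

def prepare_calc_dir(input_text: str, kpt_text: str, stru_text: str) -> Path:
    """Create a per-run directory on the shared volume and write the input files."""
    # A unique directory on the volume means inputs and outputs persist after
    # the container exits and can be inspected from Modal's web UI.
    calc_dir = Path("/data/calculations") / uuid.uuid4().hex
    calc_dir.mkdir(parents=True, exist_ok=True)
    (calc_dir / "INPUT").write_text(input_text)
    (calc_dir / "KPT").write_text(kpt_text)
    (calc_dir / "STRU").write_text(stru_text)
    return calc_dir

# ... run the mpirun/abacus subprocess shown above with cwd=calc_dir, then:
# abacus_volume.commit()  # flush writes so the results persist on the volume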

And that's about all there is to it. Hopefully this unlocks some awesome use cases for materials science, chemistry, and other atomistic work! Share what you build here so we can all benefit too! Happy building.

