|
---
license: mit
tags:
- Machine Learning Interatomic Potential
---
|
|
|
# Model Card for mace-universal |
|
|
|
[MACE](https://github.com/ACEsuit/mace) (Multiple Atomic Cluster Expansion) is a machine learning interatomic potential (MLIP) with higher-order equivariant message passing. For more information about the MACE formalism, please see the authors' [paper](https://arxiv.org/abs/2206.07697).
|
|
|
|
|
[2023-08-14-mace-universal.model](https://huggingface.co/cyrusyc/mace-universal/blob/main/2023-08-14-mace-universal.model) was trained on the MPTrj dataset, [Materials Project](https://materialsproject.org) relaxation trajectories compiled by the [CHGNet](https://arxiv.org/abs/2302.14231) authors, covering 89 elements and 1.6M configurations. The checkpoint was used for materials stability prediction on [Matbench Discovery](https://matbench-discovery.materialsproject.org/) and in the associated [preprint](https://arXiv.org/abs/2308.14920).
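
If you prefer to fetch the checkpoint programmatically, here is a minimal sketch using the `huggingface_hub` client (the repository and file names match the link above; adjust the revision if you need a specific commit):

```python
from huggingface_hub import hf_hub_download

# Download (and locally cache) the pretrained checkpoint from this repository.
model_path = hf_hub_download(
    repo_id="cyrusyc/mace-universal",
    filename="2023-08-14-mace-universal.model",
)
print(model_path)  # pass this path to MACECalculator (see Usage below)
```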
|
|
|
# Usage |
|
|
|
1. (optional) Install [PyTorch](https://pytorch.org) and [ASE](https://wiki.fysik.dtu.dk/ase/) first if you need specific versions of these prerequisites, for example:
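
```shell
# Illustrative only; pick the torch build that matches your CUDA/CPU setup.
pip install torch ase
```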
|
2. Install [MACE](https://github.com/ACEsuit/mace) from GitHub (not from PyPI):
|
|
|
```shell
pip install git+https://github.com/ACEsuit/mace.git
```
|
3. Use `MACECalculator` with ASE, e.g. for NPT molecular dynamics:
|
|
|
```python
from ase import units
from ase.build import bulk
from ase.md.npt import NPT
from mace.calculators import MACECalculator

# Build a small periodic NaCl cell; NPT dynamics needs a cell with PBC.
atoms = bulk("NaCl", crystalstructure="rocksalt", a=5.64, cubic=True)

calculator = MACECalculator(
    model_paths="/path/to/2023-08-14-mace-universal.model",
    device="cuda",            # or "cpu"
    default_dtype="float32",  # or "float64" for higher precision
)

atoms.calc = calculator

npt = NPT(
    atoms=atoms,
    timestep=2 * units.fs,
    temperature_K=300,
    externalstress=0.0,                        # target pressure (0 = ambient)
    ttime=25 * units.fs,                       # thermostat time constant
    pfactor=(75 * units.fs) ** 2 * units.GPa,  # barostat coupling
)

npt.run(1000)  # number of MD steps
```
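
Before launching long MD runs, a quick single-point evaluation (standard ASE calls) is a useful sanity check that the calculator is wired up correctly:

```python
# Single-point sanity check with the calculator attached above.
print(atoms.get_potential_energy())  # total energy in eV
print(atoms.get_forces())            # forces in eV/Å
```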
|
|
|
# Citing |
|
|
|
If you use the pretrained models in this repository, please cite all the following: |
|
|
|
```
@inproceedings{Batatia2022mace,
  title={{MACE}: Higher Order Equivariant Message Passing Neural Networks for Fast and Accurate Force Fields},
  author={Ilyes Batatia and David Peter Kovacs and Gregor N. C. Simm and Christoph Ortner and Gabor Csanyi},
  booktitle={Advances in Neural Information Processing Systems},
  editor={Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
  year={2022},
  url={https://openreview.net/forum?id=YPpSngE-ZU}
}

@article{riebesell2023matbench,
  title={Matbench Discovery--An evaluation framework for machine learning crystal stability prediction},
  author={Riebesell, Janosh and Goodall, Rhys EA and Jain, Anubhav and Benner, Philipp and Persson, Kristin A and Lee, Alpha A},
  journal={arXiv preprint arXiv:2308.14920},
  year={2023}
}

@misc{yuan_chiang_2023,
  title={mace-universal (Revision e5ebd9b)},
  author={Yuan Chiang and Philipp Benner},
  year={2023},
  url={https://huggingface.co/cyrusyc/mace-universal},
  doi={10.57967/hf/1202},
  publisher={Hugging Face}
}

@article{deng2023chgnet,
  title={CHGNet as a pretrained universal neural network potential for charge-informed atomistic modelling},
  author={Deng, Bowen and Zhong, Peichen and Jun, KyuJung and Riebesell, Janosh and Han, Kevin and Bartel, Christopher J and Ceder, Gerbrand},
  journal={Nature Machine Intelligence},
  pages={1--11},
  year={2023},
  publisher={Nature Publishing Group UK London}
}
```
|
|
|
# Training Guide |
|
|
|
## Training Data |
|
|
|
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> |
|
|
|
For now, please download the MPTrj dataset from [figshare](https://figshare.com/articles/dataset/Materials_Project_Trjectory_MPtrj_Dataset/23713842). We may upload it to HuggingFace Datasets in the future.
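
Once downloaded, the dataset is a single large JSON file of relaxation trajectories keyed by Materials Project ID. The sketch below shows one way to inspect it; the file name and the per-frame key names (`structure`, `uncorrected_total_energy`) are assumptions, so verify them against the figshare description:

```python
import json

from pymatgen.core import Structure

# Hypothetical file name; use the actual name of the file downloaded from figshare.
with open("MPtrj_2022.9_full.json") as f:
    mptrj = json.load(f)  # assumed layout: {material_id: {frame_id: frame_dict}}

n_frames = sum(len(frames) for frames in mptrj.values())
print(f"{len(mptrj)} materials, {n_frames} frames")

# Peek at one frame; key names below are assumptions, verify against the dataset.
first_material = next(iter(mptrj.values()))
first_frame = next(iter(first_material.values()))
structure = Structure.from_dict(first_frame["structure"])
print(structure.composition, first_frame.get("uncorrected_total_energy"))
```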
|
|
|
## Fine-tuning |
|
|
|
<!-- This should link to a Training Procedure Card, perhaps with a short stub of information on what the training procedure is all about as well as documentation related to hyperparameters or additional training details. --> |
|
|
|
We provide an example multi-GPU training script, [2023-08-14-mace-universal.sbatch](https://huggingface.co/cyrusyc/mace-universal/blob/main/2023-08-14-mace-universal.sbatch), which uses 40 A100 GPUs on NERSC Perlmutter. Please see the MACE `multi-gpu` branch for more detailed instructions.
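
For orientation, here is a stripped-down Slurm skeleton in the same spirit. The resource numbers and the exact `mace_run_train` flags are illustrative assumptions and vary by MACE version; treat the linked sbatch file and the MACE `multi-gpu` branch documentation as the authoritative reference:

```shell
#!/bin/bash
#SBATCH --nodes=10
#SBATCH --gpus-per-node=4
#SBATCH --constraint=gpu
#SBATCH --time=24:00:00

# Flags below are illustrative; consult the MACE multi-gpu branch docs
# and the provided 2023-08-14-mace-universal.sbatch for the exact arguments.
srun mace_run_train \
    --name="mace-universal" \
    --train_file="mptrj_train.xyz" \
    --valid_fraction=0.05 \
    --E0s="average" \
    --model="MACE" \
    --r_max=5.0 \
    --batch_size=16 \
    --max_num_epochs=100 \
    --device=cuda
```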
|
|
|
|