cyrusyc committed on
Commit ce2bf3c
1 Parent(s): 2276bf8

remove yaml in md

Files changed (2):
  1. README.md +0 -11
  2. index.md +0 -35
README.md CHANGED
@@ -1,14 +1,3 @@
- ---
- title: MLIP Arena
- emoji: 🤗
- colorFrom: yellow
- colorTo: indigo
- sdk: streamlit
- sdk_version: 1.25.0
- app_file: "serve/app.py"
- pinned: true
- ---
-
  # mlip-arena

  MLIP Arena is an open-source platform for benchmarking machine learning interatomic potentials (MLIPs). It provides a unified interface for evaluating model performance on a variety of tasks, including single-point density functional theory calculations and molecular dynamics simulations, and it is designed to be extensible, allowing users to contribute new models, benchmarks, and training data.
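
For orientation, here is a minimal sketch of what evaluating a potential through such a unified interface might look like, assuming an ASE-style `Calculator` convention; `EMT` stands in for a contributed MLIP and is not part of the platform.

```python
# Illustrative only: EMT stands in for an MLIP. The assumption here is that
# contributed models expose an ASE-compatible Calculator interface.
from ase.build import bulk
from ase.calculators.emt import EMT

atoms = bulk("Cu", "fcc", a=3.6)
atoms.calc = EMT()  # any ASE-compatible MLIP calculator could be swapped in

energy = atoms.get_potential_energy()  # eV
forces = atoms.get_forces()            # eV/Å
print(f"E = {energy:.3f} eV, max |F| = {abs(forces).max():.3e} eV/Å")
```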
index.md DELETED
@@ -1,35 +0,0 @@
- # mlip-arena
-
- MLIP Arena is an open-source platform for benchmarking machine learning interatomic potentials (MLIPs). It provides a unified interface for evaluating model performance on a variety of tasks, including single-point density functional theory calculations and molecular dynamics simulations, and it is designed to be extensible, allowing users to contribute new models, benchmarks, and training data.
-
- ## Contribute
-
- ### Add new MLIP models
-
- If you have pretrained MLIP models that you would like to contribute to the MLIP Arena and benchmark in real time, please follow these steps:
-
- 1. Create a new [Hugging Face Model](https://huggingface.co/new) repository and upload the model file.
- 2. Follow the template to code the I/O interface for your model (see the sketch after this list), and upload the script along with metadata to the MLIP Arena [here]().
- 3. CPU benchmarking will be performed automatically. Due to the limited amount of GPU compute, if you would like to be considered for GPU benchmarking, please create a pull request demonstrating the offline performance of your model (published paper or preprint). We will review and select the models to be benchmarked on GPU.
-
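
The template referenced in step 2 is not shown in this commit, so the following is only a rough sketch of such an I/O interface, assuming the ASE `Calculator` convention; the class name, repo id, checkpoint filename, and `predict()` call are all hypothetical.

```python
import torch
from ase.calculators.calculator import Calculator, all_changes
from huggingface_hub import hf_hub_download


class MyMLIPCalculator(Calculator):
    """Hypothetical MLIP wrapper following the ASE Calculator convention."""

    implemented_properties = ["energy", "forces"]

    def __init__(self, repo_id="user/my-mlip", **kwargs):
        super().__init__(**kwargs)
        # Fetch pretrained weights from the Hugging Face Model repo (step 1).
        ckpt = hf_hub_download(repo_id=repo_id, filename="model.pt")
        self.model = torch.load(ckpt, map_location="cpu")

    def calculate(self, atoms=None, properties=None, system_changes=all_changes):
        super().calculate(atoms, properties, system_changes)
        # Model-specific featurization and inference would go here;
        # predict() is a placeholder for your model's own forward pass.
        energy, forces = self.model.predict(atoms)
        self.results = {"energy": float(energy), "forces": forces}
```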
- ### Add new benchmark tasks
-
- 1. Create a new [Hugging Face Dataset](https://huggingface.co/new-dataset) repository and upload the reference data (e.g. DFT, AIMD, experimental measurements such as RDFs).
- 2. Follow the task template to implement the task class and upload the script along with metadata to the MLIP Arena [here]().
- 3. Code a benchmark script to evaluate the performance of your model on the task. The script should load the model and the dataset and output the evaluation metrics (see the sketch after this list).
-
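
A minimal sketch of the benchmark script described in step 3, assuming an extended-XYZ reference file with DFT energies stored under an `info` key; the file name, the `dft_energy` key, and the use of `EMT` as a stand-in model are assumptions.

```python
import numpy as np
from ase.calculators.emt import EMT  # stand-in for a contributed MLIP calculator
from ase.io import read

calc = EMT()
frames = read("reference_dft.xyz", index=":")  # reference structures (assumed file)

errors = []
for atoms in frames:
    ref_energy = atoms.info["dft_energy"]  # assumed key holding the DFT energy
    atoms.calc = calc
    errors.append(atoms.get_potential_energy() - ref_energy)

print(f"energy MAE: {np.abs(errors).mean():.4f} eV")
```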
- #### Molecular dynamics calculations
-
- - [ ] [MD17](http://www.sgdml.org/#datasets)
- - [ ] [MD22](http://www.sgdml.org/#datasets)
-
- #### Single-point density functional theory calculations
-
- - [ ] MPTrj
- - [ ] QM9
- - [ ] [Alexandria](https://alexandria.icams.rub.de/)
-
- ### Add new training datasets
-
- [Hugging Face Auto-Train](https://huggingface.co/docs/hub/webhooks-guide-auto-retrain)
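
The linked guide describes retraining triggered by Hub webhooks. As a rough illustration, a minimal webhook receiver is sketched below, assuming a FastAPI endpoint and a user-supplied `retrain()` routine; the endpoint path and the payload checks are assumptions, so consult the guide for the actual recommended setup.

```python
import os

from fastapi import FastAPI, Header, HTTPException, Request

app = FastAPI()
WEBHOOK_SECRET = os.environ.get("WEBHOOK_SECRET", "")


def retrain(repo_id: str) -> None:
    # Placeholder: kick off a training job for the updated dataset.
    print(f"retraining triggered for {repo_id}")


@app.post("/webhook")
async def handle_webhook(request: Request, x_webhook_secret: str = Header(None)):
    # Hub webhooks send the configured secret in the X-Webhook-Secret header.
    if x_webhook_secret != WEBHOOK_SECRET:
        raise HTTPException(status_code=401, detail="invalid secret")
    payload = await request.json()
    # Retrain when a watched dataset repo receives new commits (assumed schema).
    if payload.get("repo", {}).get("type") == "dataset":
        retrain(payload["repo"].get("name", ""))
    return {"ok": True}
```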