cyrusyc committed d309ef8 (1 parent: 4414fd1)

rearrange readme

Files changed (1): .github/README.md (+25 −12)
 
If you have pretrained MLIP models that you would like to contribute to the MLIP Arena and have benchmarked in real time, there are two ways:

#### Hugging Face Model (recommended, difficult)

0. Inherit the Hugging Face [ModelHubMixin](https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins) class in your awesome model class definition. We recommend [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin).
1. Create a new [Hugging Face Model](https://huggingface.co/new) repository and upload the model file using the [push_to_hub function](https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins#huggingface_hub.ModelHubMixin.push_to_hub); a minimal sketch of these two steps follows the note below.
2. Follow the template to code the I/O interface for your model [here](../mlip_arena/models/README.md).
3. Update the model [registry](../mlip_arena/models/registry.yaml) with metadata.

> [!NOTE]
> CPU benchmarking will be performed automatically. Due to the limited amount of GPU compute, if you would like to be considered for GPU benchmarking, please create a pull request demonstrating the offline performance of your model (published paper or preprint). We will review and select the models to be benchmarked on GPU.
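
A minimal sketch of steps 0 and 1, assuming a hypothetical `MyAwesomeMLIP` class; the layer sizes, forward pass, and repository id are placeholders, not part of MLIP Arena:

```python
import torch
from torch import nn
from huggingface_hub import PyTorchModelHubMixin


class MyAwesomeMLIP(nn.Module, PyTorchModelHubMixin):
    """Toy stand-in for a real MLIP; only the mixin wiring matters here."""

    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        # Init kwargs are serialized into the repo's config.json by the mixin.
        self.encoder = nn.Linear(3, hidden_dim)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, positions: torch.Tensor) -> torch.Tensor:
        # Map per-atom positions (N x 3) to a single scalar "energy".
        return self.head(self.encoder(positions)).sum()


model = MyAwesomeMLIP()
# Step 1: upload weights and config to a new Hugging Face Model repository.
model.push_to_hub("your-username/my-awesome-mlip")
```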

#### External ASE Calculator (easy)

1. Implement a new ASE Calculator class in [mlip_arena/models/externals.py](../mlip_arena/models/externals.py); see the sketch after the caution below.
2. Name the class after your awesome model and add the same name to the [registry](../mlip_arena/models/registry.yaml) with metadata.

> [!CAUTION]
> Remove unnecessary outputs under the `results` class attribute to avoid errors in MD simulations. Please refer to the other class definitions for examples.
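
A minimal sketch of such a calculator, with placeholder physics (the class name and zero-valued outputs are illustrative only):

```python
import numpy as np
from ase.calculators.calculator import Calculator, all_changes


class MyAwesomeMLIP(Calculator):
    # Expose only the properties you actually compute (see the caution above).
    implemented_properties = ["energy", "forces"]

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # A real implementation would load its pretrained weights here.

    def calculate(self, atoms=None, properties=None, system_changes=all_changes):
        super().calculate(atoms, properties, system_changes)
        # Placeholder physics: zeros, just to show where results belong.
        self.results = {
            "energy": 0.0,
            "forces": np.zeros((len(self.atoms), 3)),
        }
```

Attach it like any other ASE calculator, e.g. `atoms.calc = MyAwesomeMLIP()` followed by `atoms.get_potential_energy()`.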

### Add new benchmark tasks

1. Follow the task template to implement the task class and upload the script along with metadata to the MLIP Arena [here](../mlip_arena/tasks/README.md).
2. Code a benchmark script to evaluate the performance of your model on the task. The script should load the model and the dataset, and output the evaluation metrics (see the sketch below).
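
As an illustration only, a benchmark script for step 2 might look like the following, reusing the hypothetical `MyAwesomeMLIP` Hub mixin sketch above (the dataset repository, column names, and metric are assumptions, not a prescribed interface):

```python
import numpy as np
import torch
from datasets import load_dataset

# Load the hypothetical model and reference data from the Hub.
model = MyAwesomeMLIP.from_pretrained("your-username/my-awesome-mlip")
data = load_dataset("your-username/my-reference-data", split="train")

# Compare predicted energies against stored reference values.
errors = []
for row in data:
    pred = model(torch.tensor(row["positions"], dtype=torch.float32))
    errors.append(abs(pred.item() - row["energy"]))

print(f"Energy MAE: {np.mean(errors):.4f} eV")
```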
 
### Add new datasets

1. Create a new [Hugging Face Dataset](https://huggingface.co/new-dataset) repository and upload the reference data (e.g. DFT, AIMD, experimental measurements such as RDF); see the sketch below.
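For example, a hedged sketch with the `datasets` library (column names, values, and repository id are placeholders):

```python
from datasets import Dataset

# One toy H2 structure: positions in Å, energy in eV (illustrative numbers).
ds = Dataset.from_dict({
    "positions": [[[0.0, 0.0, 0.0], [0.0, 0.0, 0.74]]],
    "energy": [-31.8],
})
ds.push_to_hub("your-username/my-reference-data")
```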
 
#### Single-point density functional theory calculations

- [ ] MPTrj
- [ ] [Alexandria](https://huggingface.co/datasets/atomind/alexandria)
- [ ] QM9
 
#### Molecular dynamics calculations

- [ ] [MD17](http://www.sgdml.org/#datasets)
- [ ] [MD22](http://www.sgdml.org/#datasets)

### [Hugging Face Auto-Train](https://huggingface.co/docs/hub/webhooks-guide-auto-retrain)

Planned but not yet implemented.