cyrusyc committed
Commit feb540a
1 Parent(s): 22f0dbc

streamlit community installation fix; update readme

Files changed (2):
  1. .devcontainer/devcontainer.json +3 -33
  2. .github/README.md +17 -18
.devcontainer/devcontainer.json CHANGED
@@ -1,33 +1,3 @@
- {
-   "name": "Python 3",
-   // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
-   "image": "mcr.microsoft.com/devcontainers/python:1-3.11-bullseye",
-   "customizations": {
-     "codespaces": {
-       "openFiles": [
-         "README.md",
-         "serve/app.py"
-       ]
-     },
-     "vscode": {
-       "settings": {},
-       "extensions": [
-         "ms-python.python",
-         "ms-python.vscode-pylance"
-       ]
-     }
-   },
-   "updateContentCommand": "[ -f packages.txt ] && sudo apt update && sudo apt upgrade -y && sudo xargs apt install -y <packages.txt; [ -f requirements.txt ] && pip3 install --user -r requirements.txt; pip3 install --user streamlit; echo '✅ Packages installed and Requirements met'",
-   "postAttachCommand": {
-     "server": "streamlit run serve/app.py --server.enableCORS false --server.enableXsrfProtection false"
-   },
-   "portsAttributes": {
-     "8501": {
-       "label": "Application",
-       "onAutoForward": "openPreview"
-     }
-   },
-   "forwardPorts": [
-     8501
-   ]
- }
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a525cdb835f1b6c36c5d09b1663e2dc0b2e5a40b97214fc9ee2fc0366b9df622
+ size 986
 
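The replacement content above is a Git LFS pointer rather than the JSON itself: LFS stores only the SHA-256 digest (`oid`) and the byte size of the real file in the repository. A minimal sketch of how such a pointer text is derived from a file's bytes (the `data` value here is illustrative, not the actual devcontainer.json content):

```python
import hashlib

def lfs_pointer(data: bytes) -> str:
    """Build a Git LFS pointer for the given file content.

    The pointer records only the spec version, the SHA-256 digest of the
    content (the "oid"), and the content size in bytes.
    """
    oid = hashlib.sha256(data).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(data)}\n"
    )

# Illustrative content only -- not the real devcontainer.json bytes.
print(lfs_pointer(b"hello"))
```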
.github/README.md CHANGED
@@ -19,13 +19,26 @@ MLIP Arena is now in pre-alpha. If you're interested in joining the effort, plea
 ### Development
 
 ```
- streamlit run serva/app.py
 ```
 
 ### Add new MLIP models
 
 If you have pretrained MLIP models that you would like to contribute to the MLIP Arena and show benchmarks in real time, there are two ways:
 
 #### Hugging Face Model (recommended, difficult)
 
 0. Inherit the Hugging Face [ModelHubMixin](https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins) class in your awesome model class definition. We recommend [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin).
@@ -36,36 +49,22 @@ If you have pretrained MLIP models that you would like to contribute to the MLIP
 > [!NOTE]
 > CPU benchmarking will be performed automatically. Due to the limited amount of GPU compute, if you would like to be considered for GPU benchmarking, please create a pull request to demonstrate the offline performance of your model (published paper or preprint). We will review and select the models to be benchmarked on GPU.
 
- #### External ASE Calculator (easy)
- 
- 1. Implement new ASE Calculator class in [mlip_arena/models/external.py](../mlip_arena/models/externals.py).
- 2. Name your class with awesome model name and add the same name to [registry](../mlip_arena/models/registry.yaml) with metadata.
- 
- > [!CAUTION]
- > Remove unneccessary outputs under `results` class attributes to avoid error for MD simulations. Please refer to other class definition for example.
- 
- ### Add new benchmark tasks
- 
- 1. Follow the task template to implement the task class and upload the script along with metadata to the MLIP Arena [here](../mlip_arena/tasks/README.md).
- 2. Code a benchmark script to evaluate the performance of your model on the task. The script should be able to load the model and the dataset, and output the evaluation metrics.
 
 ### Add new datasets
 
- 1. Create a new [Hugging Face Dataset](https://huggingface.co/new-dataset) repository and upload the reference data (e.g. DFT, AIMD, experimental measurements such as RDF).
 
 #### Single-point density functional theory calculations
 
 - [ ] MPTrj
 - [ ] [Alexandria](https://huggingface.co/datasets/atomind/alexandria)
 - [ ] QM9
 
 #### Molecular dynamics calculations
 
 - [ ] [MD17](http://www.sgdml.org/#datasets)
 - [ ] [MD22](http://www.sgdml.org/#datasets)
- 
- 
- ### [Hugging Face Auto-Train](https://huggingface.co/docs/hub/webhooks-guide-auto-retrain)
- 
- Planned but not yet impelemented.
 
 ### Development
 
 ```
+ streamlit run serve/app.py
 ```
 
+ ### Add new benchmark tasks
+ 
+ 1. Follow the task template to implement the task class and upload the script along with metadata to the MLIP Arena [here](../mlip_arena/tasks/README.md).
+ 2. Code a benchmark script to evaluate the performance of your model on the task. The script should be able to load the model and the dataset, and output the evaluation metrics.
+ 
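The two steps above can be sketched as a minimal benchmark script. Everything below is a hypothetical stand-in: a real script would load an actual MLIP and reference data through the task template, and the toy "model" and metric here are for illustration only.

```python
from statistics import mean

# Hypothetical stand-ins: a real script would load an MLIP model and a
# reference dataset (e.g. DFT energies) via the task template.
def load_model():
    # Toy "energy model": energy is just twice the structure volume.
    return lambda structure: 2.0 * structure["volume"]

def load_dataset():
    # Toy reference data: structures paired with reference energies.
    return [
        {"structure": {"volume": 1.0}, "energy": 2.1},
        {"structure": {"volume": 2.0}, "energy": 3.9},
    ]

def evaluate(model, dataset):
    """Output the evaluation metric: mean absolute error on energies."""
    errors = [abs(model(row["structure"]) - row["energy"]) for row in dataset]
    return mean(errors)

if __name__ == "__main__":
    print(f"energy MAE: {evaluate(load_model(), load_dataset()):.3f}")
```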
 ### Add new MLIP models
 
 If you have pretrained MLIP models that you would like to contribute to the MLIP Arena and show benchmarks in real time, there are two ways:
 
+ #### External ASE Calculator (easy)
+ 
+ 1. Implement a new ASE Calculator class in [mlip_arena/models/external.py](../mlip_arena/models/externals.py).
+ 2. Name your class with an awesome model name and add the same name to the [registry](../mlip_arena/models/registry.yaml) with metadata.
+ 
+ > [!CAUTION]
+ > Remove unnecessary outputs under the `results` class attribute to avoid errors in MD simulations. Please refer to the other class definitions for examples.
+ 
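The External ASE Calculator steps above follow this shape. The stub base class below stands in for `ase.calculators.calculator.Calculator` so the sketch stays self-contained; a real implementation would subclass the ASE class, and the class name and energy values here are invented.

```python
# Stub standing in for ase.calculators.calculator.Calculator, so this
# sketch runs without ASE installed. A real calculator subclasses ASE's.
class Calculator:
    implemented_properties: list = []

    def __init__(self):
        self.results = {}

class MyAwesomeMLIP(Calculator):  # hypothetical model name
    implemented_properties = ["energy", "forces"]

    def calculate(self, atoms):
        # A real calculator would run the MLIP here; values are toy numbers.
        self.results = {
            "energy": -1.23,
            "forces": [[0.0, 0.0, 0.0] for _ in range(len(atoms))],
        }
        # Keep only the properties MD actually needs: extra entries left in
        # `results` are exactly the "unnecessary outputs" the CAUTION warns
        # can break MD simulations.
        self.results = {
            k: v for k, v in self.results.items()
            if k in self.implemented_properties
        }
```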
 #### Hugging Face Model (recommended, difficult)
 
 0. Inherit the Hugging Face [ModelHubMixin](https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins) class in your awesome model class definition. We recommend [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin).
 > [!NOTE]
 > CPU benchmarking will be performed automatically. Due to the limited amount of GPU compute, if you would like to be considered for GPU benchmarking, please create a pull request to demonstrate the offline performance of your model (published paper or preprint). We will review and select the models to be benchmarked on GPU.
 
 ### Add new datasets
 
+ The goal is to compile and keep a copy of all the open-source data in a unified format for lifelong learning with [Hugging Face Auto-Train](https://huggingface.co/docs/hub/webhooks-guide-auto-retrain).
+ 
+ 1. Create a new [Hugging Face Dataset](https://huggingface.co/new-dataset) repository and upload the reference data (e.g. DFT, AIMD, experimental measurements such as RDF).
 
 #### Single-point density functional theory calculations
 
 - [ ] MPTrj
 - [ ] [Alexandria](https://huggingface.co/datasets/atomind/alexandria)
 - [ ] QM9
+ - [ ] SPICE
 
 #### Molecular dynamics calculations
 
 - [ ] [MD17](http://www.sgdml.org/#datasets)
 - [ ] [MD22](http://www.sgdml.org/#datasets)
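Reference data for step 1 above can be packaged in any tabular format before uploading. A minimal sketch with a hypothetical JSON Lines layout; the field names are illustrative, not a prescribed schema.

```python
import json

# Hypothetical record layout for reference data (e.g. DFT single points);
# field names are illustrative, not a prescribed schema.
records = [
    {"formula": "H2O", "energy_ev": -14.22, "source": "DFT"},
    {"formula": "CH4", "energy_ev": -24.05, "source": "DFT"},
]

def to_jsonl(rows):
    """Serialize records to JSON Lines, one record per line."""
    return "\n".join(json.dumps(r) for r in rows)

def from_jsonl(text):
    """Parse JSON Lines back into a list of records."""
    return [json.loads(line) for line in text.splitlines() if line]

# Round-trip check before uploading to a Hugging Face Dataset repo.
assert from_jsonl(to_jsonl(records)) == records
```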