Update src/about.py
src/about.py (CHANGED, +6 −4)
@@ -55,8 +55,10 @@ To reproduce our results, here is the commands you can run:
 
 EVALUATION_QUEUE_TEXT = """
 
-
-
+Important Notes:
+- Right now, the models added **are not automatically evaluated**. We may support automatic evaluation in the future on our own clusters.
+An evaluation framework will be available in the future to help reproduce the results.
+- Right now, we only support models with **a causal language modeling head**.
 
 ## Don't forget to read the FAQ and the About tabs for more information!
 
@@ -64,9 +66,9 @@ An evaluation framework will be available in the future to help reproduce the results.
 
 ### 1) Make sure you can load your model and tokenizer using AutoClasses:
 ```python
-from transformers import AutoConfig,
+from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
 config = AutoConfig.from_pretrained("your model name", revision=revision)
-model =
+model = AutoModelForCausalLM.from_pretrained("your model name", revision=revision)
 tokenizer = AutoTokenizer.from_pretrained("your model name", revision=revision)
 ```
 If this step fails, follow the error messages to debug your model before submitting it. It's likely your model has been improperly uploaded.