Update README.md
README.md
CHANGED
@@ -22,7 +22,7 @@ widget:
 inference:
   parameters:
     no_repeat_ngram_size: 2
-    max_length:
+    max_length: 64
     early_stopping: True
 ---
 
@@ -32,6 +32,7 @@ inference:
 - This model was trained a text-to-text task with input text as a summary of a chapter, and the output text as the analysis of that chapter on the [booksum](https://arxiv.org/abs/2105.08209) dataset.
 - it has somewhat learned how to complete literary analysis on an arbitrary input text.
 - **NOTE: this is fairly intensive computationally and recommended to be run on GPU. please see example usage in [this demo notebook](https://colab.research.google.com/gist/pszemraj/8e9cc5bee5cac7916ef9241b66e01b05/demo-t5-large-for-lexical-analysis.ipynb)**
+- The API is set to return max 64 tokens to avoid timeouts on CPU.
 
 ## Example
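For reference, here is a minimal sketch (not part of this commit) of calling the model locally with the same generation settings the updated frontmatter gives the hosted widget. The checkpoint ID is inferred from the demo-notebook URL and may differ; the input text is illustrative.

```python
# Sketch: map the model-card inference parameters to generate() kwargs.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "pszemraj/t5-large-for-lexical-analysis"  # assumed repo name, inferred from the notebook URL
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "A short chapter summary to analyze ..."  # illustrative input
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_length=64,           # matches the new cap that avoids CPU timeouts on the API
    no_repeat_ngram_size=2,  # as in the frontmatter parameters
    early_stopping=True,     # only takes effect when beam search is enabled
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For longer analyses than the 64-token API cap, raise `max_length` when running locally on GPU, as the note above recommends.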