Update README.md
README.md
@@ -35,4 +35,5 @@ pip uninstall cupy-cuda11x -y
 
 ## how do da inference?
 
-See [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/b46d3e89e631701ef205297435064ab780c4853a/Nougat/Inference_with_Nougat_to_read_scientific_PDFs.ipynb)
+See [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/b46d3e89e631701ef205297435064ab780c4853a/Nougat/Inference_with_Nougat_to_read_scientific_PDFs.ipynb) or [this basic notebook](https://huggingface.co/pszemraj/nougat-small-onnx/blob/main/nougat-small-onnx-example.ipynb) I uploaded. It seems ONNX brings CPU inference times down to "feasible": it took ~15 min for _Attention is All You Meme_ on a Colab free CPU runtime.
+