sharpenb committed on
Commit f20a31f
1 Parent(s): d7423d0

Upload folder using huggingface_hub (#6)


- 862124108feb1c3d0a5bd7be52ea49c7f70a4b8ec0c9aca5958ada5f5656ad87 (2ea8dfb0d211909d500f8e0acbafa3e0caf12359)
- 31ee7d14912d6e9b2f9304144ae388bc283495cad45920e9c18a7898612b6b76 (6602480fd8f8fff548017c313913b08514fb9125)

Files changed (3)
  1. README.md +4 -3
  2. model/optimized_model.pkl +2 -2
  3. plots.png +0 -0
README.md CHANGED
@@ -37,16 +37,17 @@ metrics:
 ![image info](./plots.png)
 
 **Important remarks:**
- - The quality of the model output might slightly vary compared to the base model. There might be minimal quality loss.
+ - The quality of the model output might slightly vary compared to the base model.
 - These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in config.json and are obtained after a hardware warmup. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...).
 - You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
+ - Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
 
 ## Setup
 
 You can run the smashed model with these steps:
 
- 0. Check cuda, torch, packaging requirements are installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. For packaging and torch, run `pip install packaging torch`.
- 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take 15 minutes to install.
+ 0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
+ 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install.
 ```bash
 pip install pruna-engine[gpu]==0.6.0 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
 ```
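Taken together, steps 0 and 1 of the updated README amount to the shell session below. This is only a sketch assembled from the commands shown in the diff (CUDA 12.1.0, Python 3.10, pruna-engine 0.6.0 on Linux); it is not an official install script from the commit.

```bash
# Sketch of the setup described in the README diff above (assumes Linux).
python --version   # step 0: expect Python 3.10.x
nvcc --version     # step 0: expect CUDA 12.1.x
# If nvcc is missing, the README suggests:
# conda install nvidia/label/cuda-12.1.0::cuda

# Step 1: install pruna-engine 0.6.0 with GPU extras (may take up to 15 minutes).
pip install pruna-engine[gpu]==0.6.0 \
  --extra-index-url https://pypi.nvidia.com \
  --extra-index-url https://pypi.ngc.nvidia.com \
  --extra-index-url https://prunaai.pythonanywhere.com/
```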
model/optimized_model.pkl CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:fc3c9e0af6b28697b49382b97036ae4351ff21ab08726a68ca1b215f4dfbc505
- size 2582798701
+ oid sha256:7e1099612b3c76602426c915e2c74e12a7c9e2c7aba25d37819263d0b83ae5f9
+ size 2582799154
plots.png CHANGED
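The model/optimized_model.pkl entry above is a Git LFS pointer file: the diff only changes the sha256 oid and the byte size, while the roughly 2.6 GB binary itself lives in LFS storage. A minimal sketch of fetching the real file after cloning, using standard git-lfs commands (the repository URL is a placeholder, not taken from this commit):

```bash
# Fetch the LFS-tracked binary referenced by the pointer file.
git lfs install                                # one-time setup of the LFS filters
git clone https://huggingface.co/<this-repo>   # <this-repo> is a placeholder
cd <this-repo>
git lfs pull                                   # downloads model/optimized_model.pkl (~2.6 GB)
```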