yangwang92 committed on
Commit
13243b6
1 Parent(s): 30eed04

Update index.html

Files changed (1)
  1. index.html +4 -2
index.html CHANGED
@@ -16,8 +16,10 @@
   <p>
   <b>VPTQ (Vector Post-Training Quantization)</b> is an advanced compression technique that dramatically reduces the size of large language models such as the 70B and 405B Llama models. VPTQ efficiently compresses these models to 1-2 bits within just a few hours, enabling them to run effectively on GPUs with limited memory.
   For more information, visit the following links:
- <p>The current demo runs on a free, shared A100 provided by HUGGINGFACE, which may lead to long load times for model loading and acquiring an available GPU. This demo is intended to showcase the quality of the quantized model, not inference speed.</p>
- <ul>
+ <p style="font-weight: bold; font-size: larger;">
+ The current demo runs on a free, shared A100 provided by HUGGINGFACE, which may lead to long load times for model loading and acquiring an available GPU. This demo is intended to showcase the quality of the quantized model, not inference speed.
+ </p>
+ <ul>
   <li>
   <a href="https://arxiv.org/abs/2409.17066" target="_blank" class="link-styled">
   <img src="arxiv-logo.png" alt="arXiv" width="20" height="20" /> <b>Paper on arXiv</b>
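The "1-2 bits" claim in the page text comes from vector quantization: weights are grouped into short vectors and each vector is replaced by an index into a shared codebook. A minimal sketch of that idea is below; the array shapes, the random codebook, and all names are illustrative assumptions, not VPTQ's actual algorithm (which builds its codebooks via post-training optimization, per the linked paper).

```python
import numpy as np

# Conceptual vector-quantization sketch (NOT the VPTQ algorithm):
# group weights into 4-dim vectors and replace each with the index of
# its nearest codebook centroid. With a 256-entry codebook, each index
# costs 8 bits for 4 weights, i.e. 2 bits per weight.

rng = np.random.default_rng(0)
weights = rng.standard_normal((1024, 4))   # 1024 weight vectors of dim 4
codebook = rng.standard_normal((256, 4))   # 256 centroids (assumed given)

# Assign each weight vector to its nearest centroid (Euclidean distance).
dists = np.linalg.norm(weights[:, None, :] - codebook[None, :, :], axis=-1)
indices = dists.argmin(axis=1).astype(np.uint8)  # one byte per 4 weights

# Dequantization is a plain codebook lookup.
dequantized = codebook[indices]

bits_per_weight = 8 * indices.nbytes / weights.size
print(bits_per_weight)  # 2.0
```

The storage cost depends only on the codebook size and vector dimension, which is why such schemes can reach the 1-2 bit regime the page describes.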