Update README.md
README.md CHANGED
@@ -25,7 +25,7 @@ The model is quantized version of the [haoranxu/ALMA-7B](https://huggingface.co/
 The original model was converted on 2023-12 with the following command:
 
 ```
-ct2-transformers-converter --model haoranxu/ALMA-7B --quantization int8_float16 --output_dir ALMA-7B-
+ct2-transformers-converter --model haoranxu/ALMA-7B --quantization int8_float16 --output_dir ALMA-7B-ct2-int8_float16 \
 --copy_files generation_config.json special_tokens_map.json tokenizer.model tokenizer_config.json
 ```
 
@@ -48,7 +48,7 @@ More detailed information about the `generate_batch` method can be found at [CTr
 import ctranslate2
 import transformers
 
-generator = ctranslate2.Generator("avans06/ALMA-7B-
+generator = ctranslate2.Generator("avans06/ALMA-7B-ct2-int8_float16")
 tokenizer = transformers.AutoTokenizer.from_pretrained("haoranxu/ALMA-7B")
 
 text = "Who is Alan Turing?"
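For context, the Python snippet touched by this change stops at defining the prompt. Below is a minimal sketch of how generation would typically continue with CTranslate2's `generate_batch`, following the standard tokenize-then-generate pattern from the CTranslate2 documentation; the sampling settings are illustrative, and the model path simply mirrors the README snippet (in practice it should point at a local copy of the converted model).

```python
import ctranslate2
import transformers

# Quantized CTranslate2 model directory and the original HF tokenizer,
# mirroring the paths/IDs used in the README snippet.
generator = ctranslate2.Generator("avans06/ALMA-7B-ct2-int8_float16")
tokenizer = transformers.AutoTokenizer.from_pretrained("haoranxu/ALMA-7B")

text = "Who is Alan Turing?"

# CTranslate2 expects string tokens, so encode with the HF tokenizer first.
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(text))

# Illustrative generation settings; tune max_length/sampling_topk as needed.
results = generator.generate_batch([tokens], max_length=256, sampling_topk=10)

# Decode the generated token IDs back into text.
print(tokenizer.decode(results[0].sequences_ids[0]))
```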