---
license: apache-2.0
---

# Dataset

Japanese subset of the [mC4](https://huggingface.co/datasets/mc4) dataset.

# Training

Trained for 3000 steps on top of the MPT-7B checkpoint [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b).

# How to load

Before running this model, please install the following pip package:

```bash
pip install einops
```

To run this model, you may need to load it in a lower precision for it to fit on your GPU. We found that on a T4 GPU, the model must be loaded in 8-bit precision. To load the model in 8-bit, please install the following pip packages:

```bash
pip install bitsandbytes accelerate
```

Caution: you will also need enough RAM to load the model. We estimate that loading this model requires ~30GB.
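
If you want to check that enough memory is free before loading, a minimal sketch using `psutil` (an extra dependency, not part of the requirements above) might look like this:

```python
import psutil

# Rough pre-flight check: loading the weights needs roughly 30GB of free RAM
free_gb = psutil.virtual_memory().available / 1e9
if free_gb < 30:
    print(f"Warning: only {free_gb:.1f}GB of RAM available; loading may fail.")
```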

<details>
<summary><b>Auto type</b></summary>

```python
from transformers import AutoModelForCausalLM

model_name = "lightblue/japanese-mpt-7b"

# torch_dtype='auto' loads the weights in the dtype they were saved in
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype='auto',
    trust_remote_code=True
)
```

</details>

<details>
<summary><b>In 8 bit</b></summary>

```python
from transformers import AutoModelForCausalLM

model_name = "lightblue/japanese-mpt-7b"

# load_in_8bit quantizes the weights at load time
# (requires the bitsandbytes and accelerate packages)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype='auto',
    load_in_8bit=True,
    trust_remote_code=True
)
```

</details>
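
Note that the "Auto type" snippet loads the model onto the CPU. To run it on a GPU, you can move it yourself; a sketch, assuming a CUDA device is available (this applies only to the non-quantized load, since 8-bit models handle device placement themselves and do not support `.to()`):

```python
import torch

# Move the full/half-precision model to the GPU; skip this for the 8-bit load
if torch.cuda.is_available():
    model = model.to("cuda")
```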

# How to use

```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Greedy decoding: with do_sample=False, the temperature setting is unused
pipe("こんにちは", do_sample=False, return_full_text=False, max_new_tokens=32)
```
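
The pipeline returns a list of dictionaries; with `return_full_text=False`, each dictionary holds only the newly generated continuation under the `generated_text` key:

```python
result = pipe("こんにちは", do_sample=False, return_full_text=False, max_new_tokens=32)
print(result[0]["generated_text"])
```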