---
license: apache-2.0
---
# Dataset
Japanese subset of the [mC4](https://huggingface.co/datasets/mc4) dataset
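For reference, here is a minimal sketch of streaming this data with the `datasets` library. The `"mc4"` dataset id, `"ja"` config, and `"text"` field are assumptions based on the dataset card linked above, not part of this model's training code:
```python
from datasets import load_dataset

# Stream the Japanese ("ja") configuration of mC4 so the full corpus
# does not need to be downloaded up front.
mc4_ja = load_dataset("mc4", "ja", split="train", streaming=True)

# Peek at the first document (each record is assumed to have a "text" field).
print(next(iter(mc4_ja))["text"][:200])
```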
# Training
Trained for 3,000 steps on top of the MPT-7B checkpoint [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b).
# How to load
Before running this model, please install the following pip package:
```bash
pip install einops
```
To load the model, run the following code.
```python
from transformers import AutoModelForCausalLM
model_name = "lightblue/japanese-mpt-7b"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype='auto',
trust_remote_code=True
)
```
Depending on your GPU, you may need to load the model in lower precision for it to fit in memory. On a T4 GPU, we found that loading in 8-bit precision was required. To load the model in 8-bit or 4-bit, please install the following pip packages:
```bash
pip install bitsandbytes accelerate
```
Caution: you will also need enough RAM to load the model; we estimate this requires ~30 GB.
<details>
<summary><b>Code to load the model in 8 bit</b></summary>

```python
from transformers import AutoModelForCausalLM
model_name = "lightblue/japanese-mpt-7b"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype='auto',
load_in_8bit=True,
trust_remote_code=True
)
```
</details>

<details>
<summary><b>Code to load the model in 4 bit</b></summary>

```python
from transformers import AutoModelForCausalLM
model_name = "lightblue/japanese-mpt-7b"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype='auto',
load_in_4bit=True,
trust_remote_code=True
)
```
</details>
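As an alternative to the `load_in_8bit`/`load_in_4bit` flags above, recent versions of `transformers` also accept a `BitsAndBytesConfig` object. This is a minimal sketch of that equivalent option, not something required by the snippets above:
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_name = "lightblue/japanese-mpt-7b"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # or BitsAndBytesConfig(load_in_4bit=True)
    trust_remote_code=True
)
```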
<br/>
# How to use
```python
from transformers import AutoTokenizer, pipeline
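# NOTE: `model_name` and `model` are assumed to be defined by the loading code above.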
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
prompt = """A: γγγ«γ‘γ―
B: γγγ«γ‘γ―
A: ε₯½γγͺγΉγγΌγγ―δ½γ§γγοΌ
B: γ΅γγ«γΌγ§γ
A: ε₯½γγͺι£γΉη©γ―δ½γ§γγοΌ
B:"""
pipe(prompt, temperature=0, do_sample=False, return_full_text=False, max_new_tokens=32)
# [{"generated_text": " γ«γ¬γΌγ§γ
# A: ε₯½γγͺθ²γ―δ½γ§γγοΌ
# B: θ΅€γ§γ"}]
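# English gloss of the output: "Curry / A: What's your favorite color? / B: Red"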
```