---
license: openrail
datasets:
- cc100
language:
- ja
pipeline_tag: text-generation
---
# AIBunCho/japanese-novel-gpt-j-6b

This is the model used by [AI BunCho](https://bun-cho.work/). It is a language model for novel writing that was built in 2021.

## Model Details

GPT-J-6B was pre-trained on Japanese data for two weeks on TPUs using a Japanese tokenizer, then transfer-learned on novel data for another two weeks.
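
The original training ran on TPUs and is not reproduced here. Purely as an illustration of the transfer-learning step, a generic causal-LM fine-tuning setup with the Hugging Face `Trainer` might look like the sketch below; the corpus file, hyperparameters, and the use of `Trainer` itself are assumptions, not the authors' actual pipeline (it also requires `pip install datasets`):

```python
# Illustrative sketch only: generic causal-LM fine-tuning with the HF Trainer.
# "novels.txt" and all hyperparameters are placeholders.
from transformers import (
    GPTJForCausalLM,
    AlbertTokenizer,
    Trainer,
    TrainingArguments,
    DataCollatorForLanguageModeling,
)
from datasets import load_dataset

tokenizer = AlbertTokenizer.from_pretrained(
    "AIBunCho/japanese-novel-gpt-j-6b", keep_accents=True, remove_space=False
)
model = GPTJForCausalLM.from_pretrained("AIBunCho/japanese-novel-gpt-j-6b")

dataset = load_dataset("text", data_files={"train": "novels.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1),
    train_dataset=tokenized,
    # mlm=False produces plain next-token (causal) language-modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```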


## Uses

Operation has been verified on Google Colab with a T4 GPU and a High-RAM runtime.

```bash
pip install transformers sentencepiece accelerate
```


```python
from transformers import GPTJForCausalLM, AlbertTokenizer
import torch

tokenizer = AlbertTokenizer.from_pretrained(
    "AIBunCho/japanese-novel-gpt-j-6b",
    keep_accents=True,
    remove_space=False,
)

# torch_dtype=torch.float16 already loads the weights in half precision,
# so a separate model.half() call is not needed.
model = GPTJForCausalLM.from_pretrained(
    "AIBunCho/japanese-novel-gpt-j-6b",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
)
model.eval()

if torch.cuda.is_available():
    model = model.to("cuda")

prompt = """
わたくしといふ現象は
""".strip()

# .to(model.device) works on both CPU and GPU, unlike a bare .cuda(),
# which crashes on machines without CUDA.
input_ids = tokenizer.encode(
    prompt,
    add_special_tokens=False,
    return_tensors="pt"
).to(model.device)

# Fix the seed for reproducibility; change it to get different outputs.
seed = 27
torch.manual_seed(seed)

tokens = model.generate(
    input_ids,
    max_new_tokens=32,
    temperature=0.6,
    top_p=0.9,
    repetition_penalty=1.2,
    do_sample=True,
    pad_token_id=tokenizer.pad_token_id,
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
)

out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
# Example output:
# わたくしといふ現象は、その因果律を断ち切ることができるのです。
```
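
If the full fp16 model is tight on memory (for example on a single T4), `accelerate` (already in the install line above) can place layers automatically. This variant is a sketch, not part of the original card:

```python
from transformers import GPTJForCausalLM, AlbertTokenizer
import torch

tokenizer = AlbertTokenizer.from_pretrained(
    "AIBunCho/japanese-novel-gpt-j-6b", keep_accents=True, remove_space=False
)

# device_map="auto" (backed by accelerate) spreads layers across available
# GPU and CPU memory, so no explicit .to("cuda") is needed.
model = GPTJForCausalLM.from_pretrained(
    "AIBunCho/japanese-novel-gpt-j-6b",
    torch_dtype=torch.float16,
    device_map="auto",
)
```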


## Bias, Risks, and Limitations

The pre-training dataset may contain offensive or inappropriate content even after data-cleansing filters were applied, and this may be reflected in model-generated text. We recommend that users exercise reasonable caution when using these models in production systems. Do not use the model for any application that may cause harm or distress to individuals or groups.
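
As one possible mitigation (a sketch, not a recommendation from the authors), `transformers` can block specific token sequences at generation time via `bad_words_ids`. The blocklist below is a placeholder, and `tokenizer`, `model`, and `input_ids` are reused from the usage example above:

```python
# Placeholder blocklist; curate a real one before production use.
blocked_words = ["REPLACE_WITH_BLOCKED_WORD"]
bad_words_ids = [
    tokenizer.encode(w, add_special_tokens=False) for w in blocked_words
]

tokens = model.generate(
    input_ids,
    max_new_tokens=32,
    do_sample=True,
    bad_words_ids=bad_words_ids,  # standard transformers generate() option
)
```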

### Training Data

- Japanese data from cc100
- Wikipedia
- Other web data

## Author

X (formerly Twitter): [@OsoneHiroyuki](https://twitter.com/OsoneHiroyuki)

## Acknowledgements

Training was carried out with support from the [Google TPU Research Cloud](https://sites.research.google/trc/about/).