---
license: mit
language:
  - ja
library_name: transformers
pipeline_tag: text-generation
tags:
  - japanese
  - llama-2
---

# stockmark/stockmark-13b

This repository provides a Llama-2 based model with 13B parameters, pre-trained on a Japanese corpus of about 220B tokens. This model is developed by Stockmark Inc.
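As a quick sanity check, the sketch below (assuming only the standard Hugging Face `AutoConfig`/`AutoTokenizer` APIs; the attributes printed are illustrative and not quoted from this card) inspects the architecture without downloading the full 13B-parameter weights:

```python
from transformers import AutoConfig, AutoTokenizer

# Download only the configuration and tokenizer files, not the model weights
config = AutoConfig.from_pretrained("stockmark/stockmark-13b")
tokenizer = AutoTokenizer.from_pretrained("stockmark/stockmark-13b")

# For a Llama-2 based model, model_type should report "llama"
print(config.model_type)
print(config.num_hidden_layers, config.hidden_size, config.vocab_size)
print(tokenizer.tokenize("自然言語処理"))  # how the tokenizer splits a sample Japanese phrase
```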

Please see our blog for more details.

This project is supported by the AWS LLM development support program.

## How to use

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model in bfloat16 and place it automatically across the available devices
model = AutoModelForCausalLM.from_pretrained("stockmark/stockmark-13b", device_map="auto", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("stockmark/stockmark-13b")

# Tokenize a Japanese prompt ("自然言語処理とは" = "Natural language processing is ...")
inputs = tokenizer("自然言語処理とは", return_tensors="pt").to(model.device)
with torch.no_grad():
    tokens = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7
    )

output = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(output)
```
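For quick experiments, an equivalent call through the high-level `pipeline` API may be more convenient (a minimal sketch using the same generation settings as above; this variant is not part of the original card):

```python
import torch
from transformers import pipeline

# Wrap model loading and text generation in a single text-generation pipeline
generator = pipeline(
    "text-generation",
    model="stockmark/stockmark-13b",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

result = generator("自然言語処理とは", max_new_tokens=128, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```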

Example:

## Training dataset

We used a Japanese corpus totaling about 220 billion tokens.

| Corpus | Tokens after preprocessing |
| --- | --- |
| Stockmark Web Corpus (this dataset will not be released) | 9.1 billion |
| Patent | 34.8 billion |
| Wikipedia | 1.0 billion |
| CC100 | 10.9 billion |
| mC4 | 53.2 billion |
| CommonCrawl (snapshot: 2023-23, 2022-49, 2022-21, 2021-21) | 112.9 billion |

## Library and Accelerators

## License

MIT license

## Developed by

Stockmark Inc.

## Author

Takahiro Omi