Takeshi Kojima committed
Commit: f60ee9c (1 parent: 5e57878)

Update README.md

Files changed (1): README.md (+7 -7)
README.md CHANGED
@@ -2,7 +2,7 @@
 license: cc-by-nc-4.0
 ---
 
-# tsubaki-10b-instruction-sft
+# weblab-10b-instruction-sft
 
 # Overview
 This repository provides a Japanese-centric multilingual GPT-NeoX model of 10 billion parameters.
@@ -36,8 +36,8 @@ This repository provides a Japanese-centric multilingual GPT-NeoX model of 10 bi
 
 | Variant | Link |
 | :-- | :-- |
-| tsubaki-10b-instruction-sft | https://huggingface.co/Kojima777/tsubaki-10b-instruction-sft |
-| tsubaki-10b | https://huggingface.co/Kojima777/tsubaki-10b |
+| weblab-10b-instruction-sft | https://huggingface.co/Kojima777/weblab-10b-instruction-sft |
+| weblab-10b | https://huggingface.co/Kojima777/weblab-10b |
 
 * **Authors**
 
@@ -53,8 +53,8 @@ This repository provides a Japanese-centric multilingual GPT-NeoX model of 10 bi
 
 | Model | Average | JCommonsenseQA | JNLI | MARC-ja | JSQuAD |
 | :-- | :-- | :-- | :-- | :-- | :-- |
-| tsubaki-10b-instruction-sft | 79.04 | 74.35 | 65.65 | 96.06 | 80.09 |
-| tsubaki-10b | 67.27 | 65.86 | 54.19 | 84.49 | 64.54 |
+| weblab-10b-instruction-sft | 79.04 | 74.35 | 65.65 | 96.06 | 80.09 |
+| weblab-10b | 67.27 | 65.86 | 54.19 | 84.49 | 64.54 |
 
 ---
 
@@ -64,8 +64,8 @@ This repository provides a Japanese-centric multilingual GPT-NeoX model of 10 bi
 import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
-tokenizer = AutoTokenizer.from_pretrained("Kojima777/tsubaki-10b-instruction-sft", use_fast=False)
-model = AutoModelForCausalLM.from_pretrained("Kojima777/tsubaki-10b-instruction-sft")
+tokenizer = AutoTokenizer.from_pretrained("Kojima777/weblab-10b-instruction-sft", use_fast=False)
+model = AutoModelForCausalLM.from_pretrained("Kojima777/weblab-10b-instruction-sft")
 
 if torch.cuda.is_available():
     model = model.to("cuda")
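
For context, the loading snippet carried through this change can be extended into a small generation helper. Only the repo id and `use_fast=False` come from the diff; the prompt handling and sampling parameters (`max_new_tokens`, `temperature`) below are illustrative assumptions, not taken from the model card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Repo id after the rename, as it appears in the diff.
MODEL_ID = "Kojima777/weblab-10b-instruction-sft"

def pick_device() -> str:
    # Mirrors the README's torch.cuda.is_available() check.
    return "cuda" if torch.cuda.is_available() else "cpu"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    # Downloads the full 10B-parameter checkpoint on first use.
    # use_fast=False matches the README snippet.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, use_fast=False)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID).to(pick_device())
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output_ids = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,  # illustrative values
            do_sample=True,
            temperature=0.7,
        )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Instruction-tuned checkpoints often expect a specific prompt template; check the model card for the expected format before relying on raw prompts.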