---
license: apache-2.0
tags:
- japanese-stablelm
- causal-lm
pipeline_tag: text-generation
datasets:
- wikipedia
- mc4
- cc100
- oscar-corpus/OSCAR-2301
- oscar-corpus/OSCAR-2201
- cerebras/SlimPajama-627B
language:
- ja
---

# Japanese Mistral-7B-v0.1 Base

## Model Description

This is a 7B-parameter decoder-only language model focused on maximizing Japanese language modeling performance and Japanese downstream task performance.
We continued pretraining the English language model [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on Japanese data to transfer its knowledge and capabilities to Japanese.

*If you are looking for an instruction-following model, check [Japanese Mistral-7B-v0.1 Instruct](https://huggingface.co/stabilityai/japanese-Mistral-7B-v0.1-instruct).*

## Usage

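Below is a minimal sketch using the standard Hugging Face `transformers` text-generation API. The repository ID is a placeholder inferred from this card's title, not a confirmed model ID.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository ID inferred from this card's title;
# replace it with the actual model ID once published.
model_id = "stabilityai/japanese-Mistral-7B-v0.1-base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a single GPU
    device_map="auto",          # place layers on available devices automatically
)

prompt = "AI で科学研究を加速するには、"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
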
## Model Details

* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `Japanese Mistral-7B-v0.1 Base` is an auto-regressive language model based on the transformer decoder architecture.
* **Language(s)**: Japanese
* **License**: This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`.

### Model Architecture

For details, please see Mistral AI's [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).

### Training Dataset

Around 100B tokens from a mixture of the following corpora were used for the continued pretraining:

- [Japanese/English Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [Japanese mc4](https://huggingface.co/datasets/mc4)
- [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz)
- [Japanese OSCAR](https://oscar-project.github.io/documentation/)
- [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B)

## Use and Limitations

### Intended Use

The model is intended to be used by anyone as a foundational model for application-specific fine-tuning, without strict limitations on commercial use. A fine-tuning sketch follows below.

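As one illustration of application-specific fine-tuning, the sketch below attaches LoRA adapters with the [`peft`](https://github.com/huggingface/peft) library. The repository ID is the same placeholder as in the usage example, and the target modules are the attention projections of the standard Mistral architecture; this is not an official recipe.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Same placeholder repository ID as in the usage example above.
model = AutoModelForCausalLM.from_pretrained("stabilityai/japanese-Mistral-7B-v0.1-base")

# Attach LoRA adapters to the attention projections of the Mistral architecture.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

The adapted model can then be trained with any standard causal language modeling loop (for example, `transformers.Trainer`) on task-specific data.
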
### Limitations and Bias

The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters, and this can be reflected in model-generated text. We recommend that users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.

## Acknowledgements

This model is based on Mistral-7B-v0.1, released by the Mistral AI team. We are grateful to them for providing such an excellent base model.

We are grateful to the EleutherAI Polyglot-JA team for helping us collect a large amount of Japanese pre-training data. Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (who originally started this project when he committed to the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.

We also thank [AI Novelist/Sta (Bit192, Inc.)](https://ai-novel.com/index.php) and the numerous contributors from [Stable Community Japan](https://discord.gg/VPrcE475HB) for helping us gather a large amount of high-quality Japanese text data for model training.