
News

2024.07.21

C3TR-Adapter_ggufのVersion3を公開しました。
Version 3 of C3TR-Adapter_gguf has been released.

Version3はgoogle gemma-2ベースになり、一部の翻訳ベンチマークではgpt-4 turboやcommand-R plusを凌駕する品質です。
Version 3 is based on Google Gemma-2, and in some translation benchmarks, it outperforms GPT-4 Turbo and Command-R Plus.

以前のVersionのユーザーの方は、プロンプトテンプレートに特殊なトークン(<start_of_turn>, <end_of_turn>)が追加されている事に注意してください。
If you have used a previous version, please note that special tokens (<start_of_turn> and <end_of_turn>) have been added to the prompt template.

2024.05.18

C3TR-Adapter_ggufのVersion2を公開しました。
Version 2 of C3TR-Adapter_gguf has been released.

Version2では主にカジュアルな会話に関する翻訳能力が大幅に向上しています。
Version 2 has greatly improved the ability to translate casual conversations.

その反面、フォーマルな文章の翻訳能力が少し落ちてしまっています。フォーマルな文章を対象にする場合、Version 1を引き続きお使いください。
On the other hand, its ability to translate formal text has declined slightly. If you are working with formal text, please continue to use Version 1.

モデルカード(Model Card)

Gemmaベースの日英、英日ニューラル機械翻訳モデルであるwebbigdata/C3TR-AdapterをGPUがないPCでも動かせるようにggufフォーマットに変換したモデルです。
This is webbigdata/C3TR-Adapter, a Gemma-based Japanese-English and English-Japanese neural machine translation model, converted to GGUF format so that it can run on a PC without a GPU.

簡単に試す方法(Easy way to try it)

Googleの無料WebサービスColabを使うとブラウザを使って試す事ができます。
You can try it using your browser with Colab, Google's free web service.

しかし、ColabのCPUは以前と比較して非常に性能が悪くなったため、デモは非常に時間がかかるので注意してください
Please note that the demo takes a very long time, because Colab's CPU performance has become much worse than it used to be.

リンク先で[Open in Colab]ボタンを押してColabを起動してください
Press the [Open in Colab] button at the link below to start Colab.
Colab Sample: C3TR_Adapter_gguf_v3_Free_Colab_sample

利用可能なVersion(Available Versions)

llama.cppを使うと、様々な量子化手法でファイルのサイズを小さくする事が出来ますが、本モデルでは7種類のみを扱います。小さいサイズのモデルは、少ないメモリで高速に動作させることができますが、モデルの性能も低下します。4ビット(Q4_K_M)くらいがバランスが良いと言われています。
llama.cpp can shrink a model file with a variety of quantization methods, but this model is provided in only the seven variants listed below. Smaller models run faster and need less memory, but model quality also drops. 4-bit (Q4_K_M) is said to be a good balance; a download example follows the list.

  • C3TR-Adapter-IQ3_XXS.gguf 3.6GB
  • C3TR-Adapter-Q3_k_m.gguf 4.5GB
  • C3TR-Adapter-Q4_k_m.gguf 5.4GB
  • C3TR-Adapter.f16.Q4_k_m.gguf 6.4GB
  • C3TR-Adapter.f16.Q5_k_m.gguf 7.2GB
  • C3TR-Adapter.f16.Q6_k.gguf 8.1GB
  • C3TR-Adapter.f16.Q8_0.gguf 10GB
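
To run the model locally, you can download one of the files above from this repository, for example with the huggingface_hub CLI (a minimal sketch: it assumes Python and pip are available, and picks the Q4_K_M file as the balanced choice mentioned above):

# install the Hugging Face CLI, then fetch a single gguf file into the current directory
pip install -U "huggingface_hub[cli]"
huggingface-cli download webbigdata/C3TR-Adapter_gguf C3TR-Adapter-Q4_k_m.gguf --local-dir .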

サンプルコード(sample code)

ColabのCPUは遅いので、少し技術的な挑戦が必要ですが皆さんが所有しているPCでllama.cppをコンパイルして動かす方が良いでしょう。
Since Colab's CPU is slow, it is better to compile and run llama.cpp on your own PC, although this requires a bit of technical effort.

Install and compile example(linux)

その他のOSについてはllama.cpp公式サイトを確認してください
For other operating systems, please check the official llama.cpp website.

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
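
If the build succeeds, the llama-cli binary is created in the repository root (in older llama.cpp releases the equivalent binary was named main). As a quick sanity check, you can print the help text:

# verify the build produced a working binary
./llama-cli -h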

推論実行例(Inference execution sample)

C3TRのような小さいモデルを使用する際にはテンプレートの書式を厳密に守ることが大切です
When using a small model like C3TR, it is important to follow the template format strictly.

  • 改行の数と場所
  • 特殊トークン<start_of_turn>が2箇所、<end_of_turn>が1箇所に設置されている事
  • llama-cliのコマンドラインオプション( -e --temp 0 --repeat-penalty 1.0 -n -2)などが意図通りに設定されている事
  • The number and placement of line breaks in the prompt
  • The special token <start_of_turn> appears in exactly two places and <end_of_turn> in exactly one place
  • The llama-cli command line options ( -e --temp 0 --repeat-penalty 1.0 -n -2) are set as intended

The basic structure is as follows:

./llama-cli -m ./C3TR-Adapter.f16.Q4_k_m.gguf -e --temp 0 --repeat-penalty 1.0 -n -2 -p "You are a highly skilled professional Japanese-English and English-Japanese translator. Translate the given text accurately, taking into account the context and specific instructions provided. Steps may include hints enclosed in square brackets [] with the key and value separated by a colon:. Only when the subject is specified in the Japanese sentence, the subject will be added when translating into English. If no additional instructions or context are provided, use your expertise to consider what the most appropriate context is and provide a natural translation that aligns with that context. When translating, strive to faithfully reflect the meaning and tone of the original text, pay attention to cultural nuances and differences in language usage, and ensure that the translation is grammatically correct and easy to read. After completing the translation, review it once more to check for errors or unnatural expressions. For technical terms and proper nouns, either leave them in the original language or use appropriate translations as necessary. Take a deep breath, calm down, and start translating.

<start_of_turn>### Instruction:
<<Translate Japanese to English. or Translate English to Japanese.>>
When translating, please use the following hints:
[writing_style: <<casual or formal or technical or journalistic or web-fiction or business or nsfw or educational-casual or academic-presentation or slang or sns-casual>>]

### Input:
<<YOUR_TEXT>>

<end_of_turn>
<start_of_turn>### Response:
"

What you can change is listed below (an English-to-Japanese sketch also appears after the Japanese-to-English example):

  • <<Translate Japanese to English. or Translate English to Japanese.>>
  • <<casual or formal or technical or journalistic or web-fiction or business or nsfw or educational-casual or academic-presentation or slang or sns-casual>>
  • <<YOUR_TEXT>>

日英翻訳時(Translate Japanese to English)

./llama-cli -m ./C3TR-Adapter.f16.Q4_k_m.gguf -e --temp 0 --repeat-penalty 1.0 -n -2  -p "You are a highly skilled professional Japanese-English and English-Japanese translator. Translate the given text accurately, taking into account the context and specific instructions provided. Steps may include hints enclosed in square brackets [] with the key and value separated by a colon:. Only when the subject is specified in the Japanese sentence, the subject will be added when translating into English. If no additional instructions or context are provided, use your expertise to consider what the most appropriate context is and provide a natural translation that aligns with that context. When translating, strive to faithfully reflect the meaning and tone of the original text, pay attention to cultural nuances and differences in language usage, and ensure that the translation is grammatically correct and easy to read. After completing the translation, review it once more to check for errors or unnatural expressions. For technical terms and proper nouns, either leave them in the original language or use appropriate translations as necessary. Take a deep breath, calm down, and start translating.

<start_of_turn>### Instruction:
Translate Japanese to English.
When translating, please use the following hints:
[writing_style: educational-casual]
[虎視虎子: Torako Koshi]
[鹿乃子のこ: Noko Shikanoko]

### Input:
都立日野南高校の虎視虎子は、近所や学校でも「お淑やかな優等生」と評判の女子高生だが、実は「中学までヤンキーで、高校デビューして優等生になった」ことを隠しながら生活していた。

ある日の登校中、虎子は頭にツノが生えている少女・鹿乃子のこが電線に引っ掛かっていたところを助ける。なぜか、のこに「元ヤン」であることを見破られた虎子は、「ヤンキーのお姉さん」と呼ばれるのが嫌で「こしたん」という愛称を受け入れる。

虎子は自分が「元ヤン」だと周囲にバレないように、虎子のクラスに転校してきたのこを警戒するが、のこの生態に振り回される中で、彼女が立ち上げた部活動「シカ部」の部長として、シカであるのこのお世話係になる。
<end_of_turn>
<start_of_turn>### Response:
"

出力例(output)

Torako Koshi, a high school student at Tokyo's Hino-Minami High School, is known as a "well-behaved honor student" by her neighbors and classmates. However, she's actually hiding the fact that she was a "delinquent" until middle school and became an honor student when she entered high school.

One day, while on her way to school, Torako sees a girl with horns on her head, Noko Shikanoko, hanging from a power line. She helps her, but somehow, Noko sees through Torako's disguise and calls her "a delinquent older sister." Torako hates being called that, so she accepts Noko's nickname, "Koshi-chan."

Torako tries to keep her past as a "delinquent" a secret, so she keeps an eye on Noko, who transferred to her class. However, she ends up taking care of Noko, who is a deer, as the president of the "Deer Club" that Noko started. [end of text]

詳細はwebbigdata/C3TR-Adapterを参照してください。gguf版では一部の指定が動作しません。

For more details, see webbigdata/C3TR-Adapter. Note that some of the options described there do not work with the gguf version.
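
英日翻訳時(Translate English to Japanese)

The reverse direction uses the same template, changing only the Instruction line, the hints, and the input text. A minimal sketch (the input sentence and the writing_style hint here are illustrative):

./llama-cli -m ./C3TR-Adapter.f16.Q4_k_m.gguf -e --temp 0 --repeat-penalty 1.0 -n -2 -p "You are a highly skilled professional Japanese-English and English-Japanese translator. Translate the given text accurately, taking into account the context and specific instructions provided. Steps may include hints enclosed in square brackets [] with the key and value separated by a colon:. Only when the subject is specified in the Japanese sentence, the subject will be added when translating into English. If no additional instructions or context are provided, use your expertise to consider what the most appropriate context is and provide a natural translation that aligns with that context. When translating, strive to faithfully reflect the meaning and tone of the original text, pay attention to cultural nuances and differences in language usage, and ensure that the translation is grammatically correct and easy to read. After completing the translation, review it once more to check for errors or unnatural expressions. For technical terms and proper nouns, either leave them in the original language or use appropriate translations as necessary. Take a deep breath, calm down, and start translating.

<start_of_turn>### Instruction:
Translate English to Japanese.
When translating, please use the following hints:
[writing_style: technical]

### Input:
llama.cpp is a project that makes it possible to run large language models locally on ordinary CPUs.
<end_of_turn>
<start_of_turn>### Response:
"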

パラメーター(Parameters)

現在のgguf版は翻訳後に幻覚を追加出力してしまう傾向があり、パラメーターを適宜調整する必要があります。
The current gguf version tends to append hallucinated text after the translation, so the parameters may need to be adjusted accordingly (a combined example command is shown after the parameter lists below).

必要に応じて下記のパラメーターを調整してください

  • 温度(--temp): この値を下げると、モデルがより確信度の高い(つまり、より一般的な)単語を選択する傾向が強くなります。
  • トップP(--top_p): この値をさらに低く設定することで、モデルが考慮する単語の範囲を狭め、より一貫性のあるテキストを生成するようになります。
  • 生成する単語数(-n): この値を減らすことで、モデルが生成するテキストの長さを短くし、不要な追加テキストの生成を防ぐことができます。-1 = 無限大、-2 = 文脈が満たされるまで。

以下はllama.cppの作者(ggerganov)による推奨パラメーターです

  • -e (改行\nをエスケープ)
  • --temp 0 (最も確率の高いトークンのみを選択)
  • --repeat-penalty 1.0 (繰り返しペナルティをオフ。指示調整済モデルでこれをするのは、決して良い考えとは言えないとの事。)
  • --no-penalize-nl (改行の繰り返しにはペナルティを与えない) 最新のllama.cppではデフォルト動作になったため指定不要です

Adjust the following parameters as needed

  • Temperature (--temp): Lowering this value will make the model more likely to select more confident (i.e., more common) words.
  • Top P (--top_p): Setting this value even lower will narrow the range of words considered by the model and produce more consistent text.
  • Number of words to generate (-n): Reducing this value shortens the text the model generates and prevents unnecessary additional output. -1 = infinity (default), -2 = until the context is filled.

The following are the recommended parameters by the author of llama.cpp(ggerganov)

  • -e (escape newlines (\n))
  • --temp 0 (pick only the most probable token)
  • --repeat-penalty 1.0 (disable the repetition penalty; it is said that enabling it with instruction-tuned models is never a good idea). This is the default in the latest llama.cpp, so the option can be omitted.
  • --no-penalize-nl (do not penalize repeated newlines). This is also the default behavior in the latest llama.cpp, so you no longer need this option.
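
Putting these together, a command that caps the output length and narrows sampling to suppress trailing hallucinations might look like the sketch below. The -n 1000 and --top-p 0.9 values are illustrative, not an official recommendation, and recent llama.cpp builds spell the flag --top-p rather than --top_p:

# $PROMPT holds the full prompt template shown above
./llama-cli -m ./C3TR-Adapter.f16.Q4_k_m.gguf -e --temp 0 --top-p 0.9 --repeat-penalty 1.0 -n 1000 -p "$PROMPT"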