|
--- |
|
library_name: llama.cpp |
|
license: gemma |
|
tags: [] |
|
widget: |
|
- text: '<start_of_turn>user |
|
|
|
How does the brain work?<end_of_turn> |
|
|
|
<start_of_turn>model |
|
|
|
' |
|
inference: |
|
parameters: |
|
max_new_tokens: 200 |
|
extra_gated_heading: Access Gemma on Hugging Face |
|
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and |
|
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
|
Face and click below. Requests are processed immediately. |
|
extra_gated_button_content: Acknowledge license |
|
--- |
|
|
|
# Gemma Model Card |
|
|
|
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs) |
|
|
|
This model card corresponds to the 7B base version of the Gemma model in GGUF format. The weights here are **float32**.
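
If you prefer to fetch the weights from the command line, the sketch below uses `huggingface-cli`; the file name `gemma-7b.gguf` matches the logs later in this card, and access to this gated repo requires accepting the license and logging in first.

```shell
# Hedged download sketch; requires `pip install huggingface_hub` and a logged-in
# account that has accepted the Gemma license for this repo.
huggingface-cli login
huggingface-cli download google/gemma-7b-GGUF gemma-7b.gguf --local-dir .
```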
|
|
|
> [!IMPORTANT] |
|
> |
|
> In llama.cpp, and other related tools such as Ollama and LM Studio, please make sure that the sampling flags are set correctly, especially **`repeat-penalty`** (the example commands below use `--repeat-penalty 1.0`). Georgi Gerganov (llama.cpp's author) shared his experience in https://huggingface.co/google/gemma-7b-it/discussions/38#65d7b14adb51f7c160769fa1.
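
For example, in Ollama the penalty can be pinned in a Modelfile. This is a minimal sketch, assuming a local copy of the GGUF file; the model name `gemma-7b-local` is just an illustration.

```shell
# Minimal Ollama Modelfile sketch; assumes ./gemma-7b.gguf exists locally.
cat > Modelfile <<'EOF'
FROM ./gemma-7b.gguf
PARAMETER repeat_penalty 1.0
EOF
ollama create gemma-7b-local -f Modelfile
ollama run gemma-7b-local "Penguins live in"
```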
|
|
|
You can also visit the model cards of the [2B base model GGUF](https://huggingface.co/google/gemma-2b-GGUF), [2B instruct model GGUF](https://huggingface.co/google/gemma-2b-it-GGUF), and [7B instruct model GGUF](https://huggingface.co/google/gemma-7b-it-GGUF).
|
|
|
**Resources and Technical Documentation**: |
|
|
|
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) |
|
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma) |
|
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-it-gg-hf) |
|
|
|
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-7b-GGUF) |
|
|
|
**Authors**: Google |
|
|
|
## Model Information |
|
|
|
Summary description and brief definition of inputs and outputs. |
|
|
|
### Description |
|
|
|
Gemma is a family of lightweight, state-of-the-art open models from Google, |
|
built from the same research and technology used to create the Gemini models. |
|
They are text-to-text, decoder-only large language models, available in English, |
|
with open weights, pre-trained variants, and instruction-tuned variants. Gemma |
|
models are well-suited for a variety of text generation tasks, including |
|
question answering, summarization, and reasoning. Their relatively small size |
|
makes it possible to deploy them in environments with limited resources such as |
|
a laptop, desktop, or your own cloud infrastructure, democratizing access to

state-of-the-art AI models and helping foster innovation for everyone.
|
|
|
### Usage |
|
|
|
Below are some commands to get started quickly with running the model.
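
If you have not built llama.cpp yet, a typical CMake build looks roughly like the sketch below; exact steps and flags vary by platform and llama.cpp version.

```shell
# Hedged build sketch for llama.cpp (CPU-only by default).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build        # at builds contemporary with the logs below, -DLLAMA_CUBLAS=ON enables CUDA
cmake --build build --config Release
```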
|
|
|
#### Running the model on a CPU |
|
|
|
|
|
```shell |
|
llama.cpp/build$ bin/main -m gemma-7b.gguf -p "Penguins live in" --repeat-penalty 1.0 |
|
Log start |
|
main: build = 2249 (15499eb9) |
|
main: built with cc (Debian 13.2.0-5) 13.2.0 for x86_64-linux-gnu |
|
main: seed = 1708970278 |
|
llama_model_loader: loaded meta data with 19 key-value pairs and 254 tensors from gemma-7b.gguf (version GGUF V3 (latest)) |
|
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. |
|
llama_model_loader: - kv 0: general.architecture str = gemma |
|
llama_model_loader: - kv 1: general.name str = gemma-7b |
|
llama_model_loader: - kv 2: gemma.context_length u32 = 8192 |
|
llama_model_loader: - kv 3: gemma.block_count u32 = 28 |
|
llama_model_loader: - kv 4: gemma.embedding_length u32 = 3072 |
|
llama_model_loader: - kv 5: gemma.feed_forward_length u32 = 24576 |
|
llama_model_loader: - kv 6: gemma.attention.head_count u32 = 16 |
|
llama_model_loader: - kv 7: gemma.attention.head_count_kv u32 = 16 |
|
llama_model_loader: - kv 8: gemma.attention.key_length u32 = 256 |
|
llama_model_loader: - kv 9: gemma.attention.value_length u32 = 256 |
|
llama_model_loader: - kv 10: gemma.attention.layer_norm_rms_epsilon f32 = 0.000001 |
|
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama |
|
llama_model_loader: - kv 12: tokenizer.ggml.bos_token_id u32 = 2 |
|
llama_model_loader: - kv 13: tokenizer.ggml.eos_token_id u32 = 1 |
|
llama_model_loader: - kv 14: tokenizer.ggml.padding_token_id u32 = 0 |
|
llama_model_loader: - kv 15: tokenizer.ggml.unknown_token_id u32 = 3 |
|
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,256128] = ["<pad>", "<eos>", "<bos>", "<unk>", ... |
|
llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,256128] = [0.000000, 0.000000, 0.000000, 0.0000... |
|
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,256128] = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ... |
|
llama_model_loader: - type f32: 254 tensors |
|
llm_load_vocab: mismatch in special tokens definition ( 544/256128 vs 388/256128 ). |
|
llm_load_print_meta: format = GGUF V3 (latest) |
|
llm_load_print_meta: arch = gemma |
|
llm_load_print_meta: vocab type = SPM |
|
llm_load_print_meta: n_vocab = 256128 |
|
llm_load_print_meta: n_merges = 0 |
|
llm_load_print_meta: n_ctx_train = 8192 |
|
llm_load_print_meta: n_embd = 3072 |
|
llm_load_print_meta: n_head = 16 |
|
llm_load_print_meta: n_head_kv = 16 |
|
llm_load_print_meta: n_layer = 28 |
|
llm_load_print_meta: n_rot = 192 |
|
llm_load_print_meta: n_embd_head_k = 256 |
|
llm_load_print_meta: n_embd_head_v = 256 |
|
llm_load_print_meta: n_gqa = 1 |
|
llm_load_print_meta: n_embd_k_gqa = 4096 |
|
llm_load_print_meta: n_embd_v_gqa = 4096 |
|
llm_load_print_meta: f_norm_eps = 0.0e+00 |
|
llm_load_print_meta: f_norm_rms_eps = 1.0e-06 |
|
llm_load_print_meta: f_clamp_kqv = 0.0e+00 |
|
llm_load_print_meta: f_max_alibi_bias = 0.0e+00 |
|
llm_load_print_meta: n_ff = 24576 |
|
llm_load_print_meta: n_expert = 0 |
|
llm_load_print_meta: n_expert_used = 0 |
|
llm_load_print_meta: rope scaling = linear |
|
llm_load_print_meta: freq_base_train = 10000.0 |
|
llm_load_print_meta: freq_scale_train = 1 |
|
llm_load_print_meta: n_yarn_orig_ctx = 8192 |
|
llm_load_print_meta: rope_finetuned = unknown |
|
llm_load_print_meta: model type = 7B |
|
llm_load_print_meta: model ftype = all F32 (guessed) |
|
llm_load_print_meta: model params = 8.54 B |
|
llm_load_print_meta: model size = 31.81 GiB (32.00 BPW) |
|
llm_load_print_meta: general.name = gemma-7b |
|
llm_load_print_meta: BOS token = 2 '<bos>' |
|
llm_load_print_meta: EOS token = 1 '<eos>' |
|
llm_load_print_meta: UNK token = 3 '<unk>' |
|
llm_load_print_meta: PAD token = 0 '<pad>' |
|
llm_load_print_meta: LF token = 227 '<0x0A>' |
|
llm_load_tensors: ggml ctx size = 0.10 MiB |
|
llm_load_tensors: CPU buffer size = 32570.17 MiB |
|
...................................................................................... |
|
llama_new_context_with_model: n_ctx = 512 |
|
llama_new_context_with_model: freq_base = 10000.0 |
|
llama_new_context_with_model: freq_scale = 1 |
|
llama_kv_cache_init: CPU KV buffer size = 224.00 MiB |
|
llama_new_context_with_model: KV self size = 224.00 MiB, K (f16): 112.00 MiB, V (f16): 112.00 MiB |
|
llama_new_context_with_model: CPU input buffer size = 8.01 MiB |
|
llama_new_context_with_model: CPU compute buffer size = 506.25 MiB |
|
llama_new_context_with_model: graph splits (measure): 1 |
|
|
|
system_info: n_threads = 24 / 48 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | |
|
sampling: |
|
repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000 |
|
top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800 |
|
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000 |
|
sampling order: |
|
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature |
|
generate: n_ctx = 512, n_batch = 512, n_predict = -1, n_keep = 1 |
|
|
|
|
|
Penguins live in colonies on ice and snow in Antarctica. These birds live in large colonies that contain thousands of penguins. They do this to keep warm. Their colonies are made up of different types of penguins, all with different sizes. |
|
|
|
Penguins live in colonies because they need to be around other penguins to keep warm. These birds are not able to maintain their body temperature and keep it warm when they are alone. For this reason, they live in large colonies with thousands of other penguins. Penguins are also very social animals and they like to be around other penguins. They like to have company and they like to be able to talk to their friends and family. |
|
|
|
Penguins live in large colonies because they are social animals and they need to be around other penguins in order to stay warm. Penguins live in colonies because they like to be around other penguins. They like to socialize and have fun with their friends and family. [end of text] |
|
|
|
llama_print_timings: load time = 74599.65 ms |
|
llama_print_timings: sample time = 47.84 ms / 184 runs ( 0.26 ms per token, 3845.75 tokens per second) |
|
llama_print_timings: prompt eval time = 523.81 ms / 4 tokens ( 130.95 ms per token, 7.64 tokens per second) |
|
llama_print_timings: eval time = 96373.38 ms / 183 runs ( 526.63 ms per token, 1.90 tokens per second) |
|
llama_print_timings: total time = 97319.89 ms / 187 tokens |
|
Log end |
|
``` |
|
|
|
|
|
#### Running the model on a single / multi GPU |
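
The command below runs a Q8_0-quantized copy of the model with all layers offloaded to the GPU (`-ngl 99`). If you only have the float32 file, a quantized file can first be produced with llama.cpp's `quantize` tool; this is a sketch, with the output name chosen to match the command that follows.

```shell
# Hedged sketch: produce a Q8_0 file from the float32 GGUF.
llama.cpp/build$ bin/quantize gemma-7b.gguf gemma-7b_q8_0.gguf Q8_0
```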
|
|
|
|
|
```shell |
|
llama.cpp/build$ bin/main -m gemma-7b_q8_0.gguf -p "Penguins live in" --repeat-penalty 1.0 -ngl 99 |
|
Log start |
|
main: build = 2234 (973053d8) |
|
main: built with cc (Debian 13.2.0-5) 13.2.0 for x86_64-linux-gnu |
|
main: seed = 1708970331 |
|
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no |
|
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes |
|
ggml_init_cublas: found 1 CUDA devices: |
|
Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes |
|
llama_model_loader: loaded meta data with 21 key-value pairs and 254 tensors from gemma-7b_q8_0.gguf (version GGUF V3 (latest)) |
|
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. |
|
llama_model_loader: - kv 0: general.architecture str = gemma |
|
llama_model_loader: - kv 1: general.name str = gemma-7b |
|
llama_model_loader: - kv 2: gemma.context_length u32 = 8192 |
|
llama_model_loader: - kv 3: gemma.block_count u32 = 28 |
|
llama_model_loader: - kv 4: gemma.embedding_length u32 = 3072 |
|
llama_model_loader: - kv 5: gemma.feed_forward_length u32 = 24576 |
|
llama_model_loader: - kv 6: gemma.attention.head_count u32 = 16 |
|
llama_model_loader: - kv 7: gemma.attention.head_count_kv u32 = 16 |
|
llama_model_loader: - kv 8: gemma.attention.key_length u32 = 256 |
|
llama_model_loader: - kv 9: gemma.attention.value_length u32 = 256 |
|
llama_model_loader: - kv 10: gemma.attention.layer_norm_rms_epsilon f32 = 0.000001 |
|
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama |
|
llama_model_loader: - kv 12: tokenizer.ggml.bos_token_id u32 = 2 |
|
llama_model_loader: - kv 13: tokenizer.ggml.eos_token_id u32 = 1 |
|
llama_model_loader: - kv 14: tokenizer.ggml.padding_token_id u32 = 0 |
|
llama_model_loader: - kv 15: tokenizer.ggml.unknown_token_id u32 = 3 |
|
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,256128] = ["<pad>", "<eos>", "<bos>", "<unk>", ... |
|
llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,256128] = [0.000000, 0.000000, 0.000000, 0.0000... |
|
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,256128] = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ... |
|
llama_model_loader: - kv 19: general.quantization_version u32 = 2 |
|
llama_model_loader: - kv 20: general.file_type u32 = 7 |
|
llama_model_loader: - type f32: 57 tensors |
|
llama_model_loader: - type q8_0: 197 tensors |
|
llm_load_vocab: mismatch in special tokens definition ( 544/256128 vs 388/256128 ). |
|
llm_load_print_meta: format = GGUF V3 (latest) |
|
llm_load_print_meta: arch = gemma |
|
llm_load_print_meta: vocab type = SPM |
|
llm_load_print_meta: n_vocab = 256128 |
|
llm_load_print_meta: n_merges = 0 |
|
llm_load_print_meta: n_ctx_train = 8192 |
|
llm_load_print_meta: n_embd = 3072 |
|
llm_load_print_meta: n_head = 16 |
|
llm_load_print_meta: n_head_kv = 16 |
|
llm_load_print_meta: n_layer = 28 |
|
llm_load_print_meta: n_rot = 192 |
|
llm_load_print_meta: n_embd_head_k = 256 |
|
llm_load_print_meta: n_embd_head_v = 256 |
|
llm_load_print_meta: n_gqa = 1 |
|
llm_load_print_meta: n_embd_k_gqa = 4096 |
|
llm_load_print_meta: n_embd_v_gqa = 4096 |
|
llm_load_print_meta: f_norm_eps = 0.0e+00 |
|
llm_load_print_meta: f_norm_rms_eps = 1.0e-06 |
|
llm_load_print_meta: f_clamp_kqv = 0.0e+00 |
|
llm_load_print_meta: f_max_alibi_bias = 0.0e+00 |
|
llm_load_print_meta: n_ff = 24576 |
|
llm_load_print_meta: n_expert = 0 |
|
llm_load_print_meta: n_expert_used = 0 |
|
llm_load_print_meta: rope scaling = linear |
|
llm_load_print_meta: freq_base_train = 10000.0 |
|
llm_load_print_meta: freq_scale_train = 1 |
|
llm_load_print_meta: n_yarn_orig_ctx = 8192 |
|
llm_load_print_meta: rope_finetuned = unknown |
|
llm_load_print_meta: model type = 7B |
|
llm_load_print_meta: model ftype = Q8_0 |
|
llm_load_print_meta: model params = 8.54 B |
|
llm_load_print_meta: model size = 8.45 GiB (8.50 BPW) |
|
llm_load_print_meta: general.name = gemma-7b |
|
llm_load_print_meta: BOS token = 2 '<bos>' |
|
llm_load_print_meta: EOS token = 1 '<eos>' |
|
llm_load_print_meta: UNK token = 3 '<unk>' |
|
llm_load_print_meta: PAD token = 0 '<pad>' |
|
llm_load_print_meta: LF token = 227 '<0x0A>' |
|
llm_load_tensors: ggml ctx size = 0.19 MiB |
|
llm_load_tensors: offloading 28 repeating layers to GPU |
|
llm_load_tensors: offloading non-repeating layers to GPU |
|
llm_load_tensors: offloaded 29/29 layers to GPU |
|
llm_load_tensors: CPU buffer size = 797.27 MiB |
|
llm_load_tensors: CUDA0 buffer size = 8651.94 MiB |
|
...................................................................................... |
|
llama_new_context_with_model: n_ctx = 512 |
|
llama_new_context_with_model: freq_base = 10000.0 |
|
llama_new_context_with_model: freq_scale = 1 |
|
llama_kv_cache_init: CUDA0 KV buffer size = 224.00 MiB |
|
llama_new_context_with_model: KV self size = 224.00 MiB, K (f16): 112.00 MiB, V (f16): 112.00 MiB |
|
llama_new_context_with_model: CUDA_Host input buffer size = 8.01 MiB |
|
ggml_gallocr_reserve_n: reallocating CUDA0 buffer from size 0.00 MiB to 506.25 MiB |
|
ggml_gallocr_reserve_n: reallocating CUDA_Host buffer from size 0.00 MiB to 6.00 MiB |
|
llama_new_context_with_model: CUDA0 compute buffer size = 506.25 MiB |
|
llama_new_context_with_model: CUDA_Host compute buffer size = 6.00 MiB |
|
llama_new_context_with_model: graph splits (measure): 3 |
|
ggml_gallocr_needs_realloc: graph has different number of nodes |
|
ggml_gallocr_alloc_graph: cannot reallocate multi buffer graph automatically, call reserve |
|
ggml_backend_sched: failed to allocate graph, reserving |
|
|
|
system_info: n_threads = 6 / 12 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | |
|
sampling: |
|
repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000 |
|
top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800 |
|
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000 |
|
sampling order: |
|
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature |
|
generate: n_ctx = 512, n_batch = 512, n_predict = -1, n_keep = 1 |
|
|
|
|
|
Penguins live inggml_gallocr_needs_realloc: node inp_embd is not valid |
|
ggml_gallocr_alloc_graph: cannot reallocate multi buffer graph automatically, call reserve |
|
ggml_backend_sched: failed to allocate graph, reserving |
|
the southern hemisphere, and some travel as far north as 65 degrees south. They are most common in Antarctica and southern South America, New Zealand, and Australia. They prefer colder waters to hunt and breed. |
|
|
|
The species is the second-most populous penguin in the world. With a population of approximately 1.4 million, the little penguin is found mostly in Australia and New Zealand. They live in colonies that number from 1,000 to 3,000 penguins. |
|
|
|
The little penguin is also known as the fairy penguin, blue penguin, and little blue penguin. They are theggml_gallocr_needs_realloc: node CUDA0#KQ_mask is not valid |
|
ggml_gallocr_alloc_graph: cannot reallocate multi buffer graph automatically, call reserve |
|
ggml_backend_sched: failed to allocate graph, reserving |
|
smallest of all living penguins. [end of text] |
|
|
|
llama_print_timings: load time = 10020.90 ms |
|
llama_print_timings: sample time = 154.02 ms / 132 runs ( 1.17 ms per token, 857.06 tokens per second) |
|
llama_print_timings: prompt eval time = 32.97 ms / 4 tokens ( 8.24 ms per token, 121.34 tokens per second) |
|
llama_print_timings: eval time = 3996.86 ms / 131 runs ( 30.51 ms per token, 32.78 tokens per second) |
|
llama_print_timings: total time = 4590.71 ms / 135 tokens |
|
Log end |
|
``` |
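
Beyond the interactive CLI, llama.cpp also ships an HTTP server example. The sketch below is one way to serve and query the quantized model; the flags and the `/completion` endpoint are as of builds contemporary with the logs above.

```shell
# Hedged sketch: serve the model with llama.cpp's server example, then query it.
llama.cpp/build$ bin/server -m gemma-7b_q8_0.gguf -c 2048 -ngl 99 --port 8080
# In another shell:
curl http://localhost:8080/completion \
  -d '{"prompt": "Penguins live in", "n_predict": 64, "repeat_penalty": 1.0}'
```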
|
|
|
### Inputs and outputs |
|
|
|
* **Input:** Text string, such as a question, a prompt, or a document to be |
|
summarized. |
|
* **Output:** Generated English-language text in response to the input, such |
|
as an answer to a question, or a summary of a document. |
|
|
|
## Model Data |
|
|
|
Data used for model training and how the data was processed. |
|
|
|
### Training Dataset |
|
|
|
These models were trained on a dataset of text data that includes a wide variety |
|
of sources, totaling 6 trillion tokens. Here are the key components: |
|
|
|
* Web Documents: A diverse collection of web text ensures the model is exposed |
|
to a broad range of linguistic styles, topics, and vocabulary. Primarily |
|
English-language content. |
|
* Code: Exposing the model to code helps it to learn the syntax and patterns of |
|
programming languages, which improves its ability to generate code or |
|
understand code-related questions. |
|
* Mathematics: Training on mathematical text helps the model learn logical

reasoning and symbolic representation, and to address mathematical queries.
|
|
|
The combination of these diverse data sources is crucial for training a powerful |
|
language model that can handle a wide variety of different tasks and text |
|
formats. |
|
|
|
### Data Preprocessing |
|
|
|
Here are the key data cleaning and filtering methods applied to the training |
|
data: |
|
|
|
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was |
|
applied at multiple stages in the data preparation process to ensure the |
|
exclusion of harmful and illegal content.
|
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and |
|
reliable, automated techniques were used to filter out certain personal |
|
information and other sensitive data from training sets. |
|
* Additional methods: Filtering based on content quality and safety in line with
|
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). |
|
|
|
## Implementation Information |
|
|
|
Details about the model internals. |
|
|
|
### Hardware |
|
|
|
Gemma was trained using the latest generation of |
|
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). |
|
|
|
Training large language models requires significant computational power. TPUs, |
|
designed specifically for matrix operations common in machine learning, offer |
|
several advantages in this domain: |
|
|
|
* Performance: TPUs are specifically designed to handle the massive computations |
|
involved in training LLMs. They can speed up training considerably compared to |
|
CPUs. |
|
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing |
|
for the handling of large models and batch sizes during training. This can |
|
lead to better model quality. |
|
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for |
|
handling the growing complexity of large foundation models. You can distribute |
|
training across multiple TPU devices for faster and more efficient processing. |
|
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective |
|
solution for training large models compared to CPU-based infrastructure, |
|
especially when considering the time and resources saved due to faster |
|
training. |
|
These advantages are aligned with

[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
|
|
|
### Software |
|
|
|
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture). |
|
|
|
JAX allows researchers to take advantage of the latest generation of hardware, |
|
including TPUs, for faster and more efficient training of large models. |
|
|
|
ML Pathways is Google's latest effort to build artificially intelligent systems |
|
capable of generalizing across multiple tasks. This is especially suitable for

[foundation models](https://ai.google/discover/foundation-models/), including large language models like

these.
|
|
|
Together, JAX and ML Pathways are used as described in the |
|
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single |
|
controller' programming model of Jax and Pathways allows a single Python |
|
process to orchestrate the entire training run, dramatically simplifying the |
|
development workflow." |
|
|
|
## Evaluation |
|
|
|
Model evaluation metrics and results. |
|
|
|
### Benchmark Results |
|
|
|
These models were evaluated against a large collection of different datasets and |
|
metrics to cover different aspects of text generation: |
|
|
|
| Benchmark | Metric | 2B Params | 7B Params | |
|
| ------------------------------ | ------------- | ----------- | --------- | |
|
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 | |
|
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 | |
|
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 | |
|
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 | |
|
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
|
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 | |
|
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 | |
|
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 | |
|
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 | |
|
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 | |
|
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 | |
|
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 | |
|
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 | |
|
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 | |
|
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 | |
|
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
|
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 | |
|
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 | |
|
|
| **Average** | | **54.0** | **56.4** | |
|
|
|
## Ethics and Safety |
|
|
|
Ethics and safety evaluation approach and results. |
|
|
|
### Evaluation Approach |
|
|
|
Our evaluation methods include structured evaluations and internal red-teaming |
|
testing of relevant content policies. Red-teaming was conducted by a number of |
|
different teams, each with different goals and human evaluation metrics. These |
|
models were evaluated against a number of different categories relevant to |
|
ethics and safety, including: |
|
|
|
* Text-to-Text Content Safety: Human evaluation on prompts covering safety |
|
policies including child sexual abuse and exploitation, harassment, violence |
|
and gore, and hate speech. |
|
* Text-to-Text Representational Harms: Benchmark against relevant academic |
|
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2). |
|
* Memorization: Automated evaluation of memorization of training data, including |
|
the risk of personally identifiable information exposure. |
|
* Large-scale harm: Tests for "dangerous capabilities," such as chemical, |
|
biological, radiological, and nuclear (CBRN) risks. |
|
|
|
### Evaluation Results |
|
|
|
The results of ethics and safety evaluations are within acceptable thresholds |
|
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child |
|
safety, content safety, representational harms, memorization, and large-scale harms.
|
On top of robust internal evaluations, the results of well-known safety
|
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA |
|
are shown here. |
|
|
|
| Benchmark | Metric | 2B Params | 7B Params | |
|
| ------------------------------ | ------------- | ----------- | --------- | |
|
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 | |
|
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 | |
|
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 | |
|
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 | |
|
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 | |
|
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 | |
|
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 | |
|
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 | |
|
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 | |
|
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 | |
|
|
|
|
|
|
## Usage and Limitations |
|
|
|
These models have certain limitations that users should be aware of. |
|
|
|
### Intended Usage |
|
|
|
Open Large Language Models (LLMs) have a wide range of applications across |
|
various industries and domains. The following list of potential uses is not |
|
comprehensive. The purpose of this list is to provide contextual information |
|
about the possible use-cases that the model creators considered as part of model |
|
training and development. |
|
|
|
* Content Creation and Communication |
|
* Text Generation: These models can be used to generate creative text formats |
|
such as poems, scripts, code, marketing copy, and email drafts. |
|
* Chatbots and Conversational AI: Power conversational interfaces for customer |
|
service, virtual assistants, or interactive applications. |
|
* Text Summarization: Generate concise summaries of a text corpus, research |
|
papers, or reports. |
|
* Research and Education |
|
* Natural Language Processing (NLP) Research: These models can serve as a |
|
foundation for researchers to experiment with NLP techniques, develop |
|
algorithms, and contribute to the advancement of the field. |
|
* Language Learning Tools: Support interactive language learning experiences, |
|
aiding in grammar correction or providing writing practice. |
|
* Knowledge Exploration: Assist researchers in exploring large bodies of text |
|
by generating summaries or answering questions about specific topics. |
|
|
|
### Limitations |
|
|
|
* Training Data |
|
* The quality and diversity of the training data significantly influence the |
|
model's capabilities. Biases or gaps in the training data can lead to |
|
limitations in the model's responses. |
|
* The scope of the training dataset determines the subject areas the model can |
|
handle effectively. |
|
* Context and Task Complexity |
|
* LLMs are better at tasks that can be framed with clear prompts and |
|
instructions. Open-ended or highly complex tasks might be challenging. |
|
* A model's performance can be influenced by the amount of context provided |
|
(longer context generally leads to better outputs, up to a certain point). |
|
* Language Ambiguity and Nuance |
|
* Natural language is inherently complex. LLMs might struggle to grasp subtle |
|
nuances, sarcasm, or figurative language. |
|
* Factual Accuracy |
|
* LLMs generate responses based on information they learned from their |
|
training datasets, but they are not knowledge bases. They may generate |
|
incorrect or outdated factual statements. |
|
* Common Sense |
|
* LLMs rely on statistical patterns in language. They might lack the ability |
|
to apply common sense reasoning in certain situations. |
|
|
|
### Ethical Considerations and Risks |
|
|
|
The development of large language models (LLMs) raises several ethical concerns. |
|
In creating an open model, we have carefully considered the following: |
|
|
|
* Bias and Fairness |
|
* LLMs trained on large-scale, real-world text data can reflect socio-cultural |
|
biases embedded in the training material. These models underwent careful

scrutiny; input data pre-processing is described and posterior evaluations are

reported in this card.
|
* Misinformation and Misuse |
|
* LLMs can be misused to generate text that is false, misleading, or harmful. |
|
* Guidelines are provided for responsible use with the model, see the |
|
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible). |
|
* Transparency and Accountability
|
* This model card summarizes details on the models' architecture, |
|
capabilities, limitations, and evaluation processes. |
|
* A responsibly developed open model offers the opportunity to share |
|
innovation by making LLM technology accessible to developers and researchers |
|
across the AI ecosystem. |
|
|
|
Risks identified and mitigations: |
|
|
|
* Perpetuation of biases: It's encouraged to perform continuous monitoring |
|
(using evaluation metrics, human review) and the exploration of de-biasing |
|
techniques during model training, fine-tuning, and other use cases. |
|
* Generation of harmful content: Mechanisms and guidelines for content safety |
|
are essential. Developers are encouraged to exercise caution and implement |
|
appropriate content safety safeguards based on their specific product policies |
|
and application use cases. |
|
* Misuse for malicious purposes: Technical limitations and developer and |
|
end-user education can help mitigate against malicious applications of LLMs. |
|
Educational resources and reporting mechanisms for users to flag misuse are |
|
provided. Prohibited uses of Gemma models are outlined in the |
|
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). |
|
* Privacy violations: Models were trained on data filtered for removal of PII |
|
(Personally Identifiable Information). Developers are encouraged to adhere to |
|
privacy regulations with privacy-preserving techniques. |
|
|
|
### Benefits |
|
|
|
At the time of release, this family of models provides high-performance open |
|
large language model implementations designed from the ground up for Responsible |
|
AI development, compared to similarly sized models.
|
|
|
Using the benchmark evaluation metrics described in this document, these models |
|
have been shown to provide superior performance to other, comparably-sized open model
|
alternatives. |
|
|
|
|