ipetrukha committed
Commit d820879
1 Parent(s): 2563b9a

7d6464614a2d17e6ee6c3b3e0e0b59edd5ed1fb99e679c3f85f4dbe91825daae

README.md ADDED
@@ -0,0 +1,29 @@
+ ---
+ library_name: transformers
+ license: gemma
+ license_link: https://ai.google.dev/gemma/terms
+ tags:
+ - mlx
+ extra_gated_heading: Access CodeGemma on Hugging Face
+ extra_gated_prompt: To access CodeGemma on Hugging Face, you’re required to review
+   and agree to Google’s usage license. To do this, please ensure you’re logged in
+   to Hugging Face and click below. Requests are processed immediately.
+ extra_gated_button_content: Acknowledge license
+ ---
+
+ # ipetrukha/codegemma-1.1-2b-4bit
+
+ The model [ipetrukha/codegemma-1.1-2b-4bit](https://huggingface.co/ipetrukha/codegemma-1.1-2b-4bit) was converted to MLX format from [google/codegemma-1.1-2b](https://huggingface.co/google/codegemma-1.1-2b) using mlx-lm version **0.16.1**.
+
+ ## Use with mlx
+
+ ```bash
+ pip install mlx-lm
+ ```
+
+ ```python
+ from mlx_lm import load, generate
+
+ model, tokenizer = load("ipetrukha/codegemma-1.1-2b-4bit")
+ response = generate(model, tokenizer, prompt="hello", verbose=True)
+ ```
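Since this is a code model, a code-shaped prompt is a more representative smoke test than `"hello"`. A minimal sketch along the lines of the README snippet above (the prompt string and `max_tokens` value are arbitrary choices for illustration, not taken from the repo):

```python
from mlx_lm import load, generate

# Downloads (or reuses a cached copy of) the quantized weights and tokenizer.
model, tokenizer = load("ipetrukha/codegemma-1.1-2b-4bit")

# A completion-style prompt; CodeGemma should continue the function body.
prompt = "def fibonacci(n):\n"
response = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
print(response)
```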
added_tokens.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "▁<EOT>": 32003,
+   "▁<MID>": 32001,
+   "▁<PRE>": 32000,
+   "▁<SUF>": 32002
+ }
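These are fill-in-the-middle sentinel tokens (CodeLlama-style `<PRE>`/`<MID>`/`<SUF>`/`<EOT>`, with SentencePiece’s `▁` space marker) grafted onto the vocabulary. A quick sanity check that the converted tokenizer resolves them to the IDs above; this assumes mlx-lm’s tokenizer wrapper forwards standard methods such as `convert_tokens_to_ids` to the underlying Hugging Face tokenizer, which the 0.16.x wrapper does via attribute delegation:

```python
from mlx_lm import load

# load() returns (model, tokenizer); we only need the tokenizer here.
_, tokenizer = load("ipetrukha/codegemma-1.1-2b-4bit")

for sentinel in ["▁<PRE>", "▁<MID>", "▁<SUF>", "▁<EOT>"]:
    print(sentinel, "->", tokenizer.convert_tokens_to_ids(sentinel))
# Expected, per added_tokens.json: 32000, 32001, 32002, 32003
```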
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "architectures": [
+     "GemmaForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "bos_token_id": 2,
+   "eos_token_id": 1,
+   "head_dim": 256,
+   "hidden_act": "gelu_pytorch_tanh",
+   "hidden_activation": null,
+   "hidden_size": 2048,
+   "initializer_range": 0.02,
+   "intermediate_size": 16384,
+   "max_position_embeddings": 8192,
+   "model_type": "gemma",
+   "num_attention_heads": 8,
+   "num_hidden_layers": 18,
+   "num_key_value_heads": 1,
+   "pad_token_id": 0,
+   "quantization": {
+     "group_size": 64,
+     "bits": 4
+   },
+   "rms_norm_eps": 1e-06,
+   "rope_theta": 10000.0,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.40.1",
+   "use_cache": true,
+   "vocab_size": 256000
+ }
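The `quantization` block records 4-bit weights grouped 64 at a time, and `num_key_value_heads: 1` against `num_attention_heads: 8` marks this as multi-query attention. A back-of-the-envelope size check from these numbers, assuming one fp16 scale and one fp16 bias per quantization group (MLX’s usual layout) and that every linear layer plus the tied embedding is quantized; both are assumptions, not facts read from this repo:

```python
# Parameter count implied by config.json (norm weights ignored as negligible).
hidden, inter, layers = 2048, 16384, 18
vocab, n_heads, n_kv, head_dim = 256000, 8, 1, 256

embed = vocab * hidden                        # tied input/output embedding
attn = (hidden * n_heads * head_dim           # q projection
        + 2 * hidden * n_kv * head_dim        # k and v (multi-query: 1 kv head)
        + n_heads * head_dim * hidden)        # output projection
mlp = 3 * hidden * inter                      # gate, up, down projections
total = embed + layers * (attn + mlp)

# Effective bits per weight for 4-bit, group-size-64 quantization with an
# fp16 scale and fp16 bias per group: 4 + 32/64 = 4.5.
bits = 4 + (16 + 16) / 64
print(f"~{total / 1e9:.2f}B params, ~{total * bits / 8 / 1e9:.2f} GB if fully quantized")
# -> ~2.51B params, ~1.41 GB; the actual shards are larger, so some tensors
#    are presumably stored in higher precision.
```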
model-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:468ff1658034cb7992005929a6d8796523ce9ab5f4b33c08cd341a54e55c5f9e
+ size 1956312791