Wauplin committed on
Commit 72ca876
1 Parent(s): 0e14376

Push model using huggingface_hub.

Files changed (3):
  1. README.md +1 -2
  2. config.json +12 -8
  3. model.safetensors +3 -0
README.md CHANGED

```diff
@@ -2,10 +2,9 @@
 library_name: mamba-ssm
 tags:
 - arXiv:2312.00752
-- mamba
+- arXiv:2405.21060
 - model_hub_mixin
 - pytorch_model_hub_mixin
-license: apache-2.0
 ---
 
 This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
```
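For context, a commit like this is produced by calling `push_to_hub()` on a model class that inherits from `PyTorchModelHubMixin`. Below is a minimal sketch of that workflow; `TinyMamba`, its layers, and the repo id are placeholders, not the actual mamba-ssm code behind this commit.

```python
# Minimal sketch of the PyTorchModelHubMixin workflow (hypothetical model class;
# the real mamba-ssm model behind this commit is not shown here).
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin


class TinyMamba(
    nn.Module,
    PyTorchModelHubMixin,
    # Class-level metadata kwargs need a recent huggingface_hub; they feed the
    # README's library_name and tags. The mixin adds the model_hub_mixin /
    # pytorch_model_hub_mixin tags on its own.
    library_name="mamba-ssm",
    tags=["arXiv:2312.00752", "arXiv:2405.21060"],
):
    def __init__(self, d_model: int = 768, n_layer: int = 24, vocab_size: int = 50277):
        super().__init__()
        # JSON-serializable __init__ kwargs are recorded and written to config.json.
        self.embedding = nn.Embedding(vocab_size, d_model)
        self.layers = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_layer))


model = TinyMamba()
# Writes config.json and the weights as model.safetensors (stored via Git LFS),
# yielding a commit like the one above. "username/my-mamba-model" is a placeholder.
model.push_to_hub("username/my-mamba-model")

# Anyone can then rebuild the model from the Hub:
reloaded = TinyMamba.from_pretrained("username/my-mamba-model")
```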
config.json CHANGED

```diff
@@ -1,10 +1,14 @@
 {
-    "d_model": 768,
-    "n_layer": 24,
-    "vocab_size": 50277,
-    "ssm_cfg": {},
-    "rms_norm": true,
-    "residual_in_fp32": true,
-    "fused_add_norm": true,
-    "pad_vocab_size_multiple": 8
+    "attn_cfg": {},
+    "attn_layer_idx": [],
+    "d_intermediate": 0,
+    "d_model": 768,
+    "fused_add_norm": true,
+    "n_layer": 24,
+    "pad_vocab_size_multiple": 8,
+    "residual_in_fp32": true,
+    "rms_norm": true,
+    "ssm_cfg": {},
+    "tie_embeddings": true,
+    "vocab_size": 50277
 }
```
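The rewritten config appears to match the `MambaConfig` dataclass shipped with the mamba_ssm package; the new `attn_cfg`, `attn_layer_idx`, `d_intermediate`, and `tie_embeddings` fields arrived alongside Mamba-2, which fits the new arXiv:2405.21060 tag. A sketch of rebuilding the config object, assuming the `mamba_ssm.models.config_mamba` import path (verify against your installed version):

```python
# Sketch: rebuilding a mamba_ssm config object from the committed config.json.
# Assumes MambaConfig lives in mamba_ssm.models.config_mamba and that its field
# names match the JSON keys above; check your installed mamba_ssm version.
import json

from mamba_ssm.models.config_mamba import MambaConfig

with open("config.json") as f:
    cfg = MambaConfig(**json.load(f))

print(cfg.d_model, cfg.n_layer, cfg.tie_embeddings)  # 768 24 True
```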
model.safetensors ADDED

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d2634ae0d405f2482574836b0d036372bd0f917aaeacbe026358a33138e63d6d
+size 516567592
```
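These three lines are only a Git LFS pointer (spec version, SHA-256 oid, byte size, here ~517 MB); the weight file itself lives in LFS storage. A sketch of fetching and reading the real file, with a placeholder repo id:

```python
# Sketch: download the actual LFS-backed weights and load the tensors.
# "username/my-mamba-model" is a placeholder repo id.
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

path = hf_hub_download(repo_id="username/my-mamba-model", filename="model.safetensors")
state_dict = load_file(path)
print(sum(t.numel() for t in state_dict.values()))  # total parameter count
```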