Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Jamba-tiny-random - bnb 8bits

- Model creator: https://huggingface.co/ai21labs/
- Original model: https://huggingface.co/ai21labs/Jamba-tiny-random/
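As background for the "bnb 8bits" label: 8-bit weight quantization schemes such as the one in bitsandbytes build on symmetric "absmax" int8 quantization. A minimal pure-Python sketch of that core idea follows — it is an illustration only, not the actual bitsandbytes implementation (which quantizes per block and handles outlier values separately):

```python
def quantize_absmax_int8(weights):
    # Symmetric "absmax" quantization: map floats onto the int8 range
    # [-127, 127] using a single scale derived from the largest magnitude.
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    # Recover approximate float weights from the int8 values and the scale.
    return [q * scale for q in quantized]

weights = [0.5, -1.0, 0.25, 0.9]
q, scale = quantize_absmax_int8(weights)
restored = dequantize(q, scale)
# Each restored weight differs from the original by at most scale / 2.
```

Storing the int8 values plus one scale instead of full-precision floats is what cuts memory roughly 4x versus fp32 (2x versus fp16), at the cost of the small rounding error shown above.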
Original model description:

---
license: apache-2.0
---

This is a tiny, dummy version of [Jamba](https://huggingface.co/ai21labs/Jamba-v0.1), used for debugging and experimentation with the Jamba architecture.

It has 128M parameters (instead of 52B), **is initialized with random weights, and did not undergo any training.**