---
license: other
license_name: databricks-open-model-license
license_link: https://www.databricks.com/legal/open-model-license
library_name: gguf
pipeline_tag: text-generation
base_model: databricks/dbrx-instruct
---

2024-04-12: Support for this model in llama.cpp is still being worked on in PR #6515. The quants appear to work in testing, and the PR has been approved but has not yet been merged into the main branch.
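As a rough sketch of how one of these quants might be loaded with llama-cpp-python once a build that includes PR #6515 is available; the GGUF filename, system message, and sampling settings below are placeholders, not names taken from this repo:

```python
# Minimal sketch using llama-cpp-python; requires a llama.cpp build that
# includes PR #6515 (DBRX support). The GGUF filename is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="dbrx-instruct-16x12b-q4_k_m.gguf",  # hypothetical quant filename
    n_ctx=32768,      # DBRX was trained with a 32k context
    n_gpu_layers=-1,  # offload all 40 layers if VRAM allows
)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is DBRX?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```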

DBRX is a transformer-based decoder-only large language model (LLM) that was trained using next-token prediction. It uses a fine-grained mixture-of-experts (MoE) architecture with 132B total parameters of which 36B parameters are active on any input. It was pre-trained on 12T tokens of text and code data. Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts. DBRX has 16 experts and chooses 4, while Mixtral-8x7B and Grok-1 have 8 experts and choose 2. This provides 65x more possible combinations of experts and we found that this improves model quality. DBRX uses rotary position encodings (RoPE), gated linear units (GLU), and grouped query attention (GQA). It uses the GPT-4 tokenizer as provided in the tiktoken repository. We made these choices based on exhaustive evaluation and scaling experiments.
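The 65x figure is simply the ratio of possible expert subsets per token; a quick sanity check in plain Python:

```python
# Number of distinct expert subsets the router can choose per token.
from math import comb

dbrx = comb(16, 4)    # 16 experts, 4 active -> 1820 combinations
mixtral = comb(8, 2)  # 8 experts, 2 active  ->   28 combinations
print(dbrx, mixtral, dbrx // mixtral)  # 1820 28 65
```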

| Layers | Context | Template |
| --- | --- | --- |
| 40 | 32768 | `<\|im_start\|>system`<br/>`{system}<\|im_end\|>`<br/>`<\|im_start\|>user`<br/>`{prompt}<\|im_end\|>`<br/>`<\|im_start\|>assistant` |
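For scripting, the template above can be assembled with a small helper; the function name and example strings are only illustrative:

```python
def build_prompt(system: str, prompt: str) -> str:
    """Assemble the ChatML-style template shown in the table above (illustrative helper)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_prompt("You are a helpful assistant.", "Summarize DBRX in one sentence."))
```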
* 16x12B MoE
* 16 experts (12B params per single expert; top_k=4 routing; see the routing sketch after this list)
* 36B active params (132B total params)
* Trained on 12T tokens
* 32k sequence length training
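Purely to illustrate what top_k=4 routing over 16 experts means at the token level, here is a toy sketch with made-up dimensions and random weights (not DBRX's actual implementation):

```python
# Toy fine-grained MoE routing: pick the 4 highest-scoring of 16 experts for a
# token and mix their outputs with renormalized router weights.
# Illustrative only; the dimensions and experts themselves are made up.
import numpy as np

rng = np.random.default_rng(0)
n_experts, top_k, d_model = 16, 4, 8

x = rng.normal(size=d_model)                              # one token's hidden state
router_w = rng.normal(size=(n_experts, d_model))          # router projection
experts = rng.normal(size=(n_experts, d_model, d_model))  # toy expert weights

logits = router_w @ x
top = np.argsort(logits)[-top_k:]             # indices of the 4 chosen experts
weights = np.exp(logits[top] - logits[top].max())
weights /= weights.sum()                      # softmax over the selected experts

y = sum(w * (experts[i] @ x) for w, i in zip(weights, top))
print("chosen experts:", sorted(top.tolist()))
print("output shape:", y.shape)
```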