---
license: apache-2.0
tags:
- moe
- mergekit
- merge
- gagan3012/MetaModel
- jeonsworld/CarbonVillain-en-10.7B-v2
- jeonsworld/CarbonVillain-en-10.7B-v4
- TomGrc/FusionNet_linear
---

# MetaModel_moe_multilingualv1

This model is a Mixture of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit) (mixtral branch). It uses the following base models:
* [gagan3012/MetaModel](https://huggingface.co/gagan3012/MetaModel)
* [jeonsworld/CarbonVillain-en-10.7B-v2](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v2)
* [jeonsworld/CarbonVillain-en-10.7B-v4](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v4)
* [TomGrc/FusionNet_linear](https://huggingface.co/TomGrc/FusionNet_linear)

## 🧩 Configuration

```yaml
base_model: gagan3012/MetaModel
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: gagan3012/MetaModel
  - source_model: jeonsworld/CarbonVillain-en-10.7B-v2
  - source_model: jeonsworld/CarbonVillain-en-10.7B-v4
  - source_model: TomGrc/FusionNet_linear
```
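To reproduce the merge, a minimal sketch, assuming the configuration above is saved as `config.yaml` and that the mixtral branch of mergekit provides the `mergekit-moe` entry point (the install URL, branch name, and output path below are assumptions, written in the same notebook style as the usage snippet):

```python
# Install the mixtral branch of mergekit and run the MoE merge.
# Branch name and CLI entry point are assumptions; check the mergekit repo linked above.
!pip install -qU git+https://github.com/cg123/mergekit.git@mixtral
!mergekit-moe config.yaml ./MetaModel_moe_multilingualv1
```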
## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "gagan3012/MetaModel_moe_multilingualv1"

# Load the tokenizer and build a text-generation pipeline that
# loads the merged MoE in 4-bit to reduce memory usage
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the request with the model's chat template, then generate
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
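If you prefer loading the model directly rather than through a pipeline, a minimal sketch using `AutoModelForCausalLM` with a `BitsAndBytesConfig` (the prompt and generation settings are illustrative; a CUDA GPU with enough free memory for the 4-bit weights is assumed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "gagan3012/MetaModel_moe_multilingualv1"

# 4-bit quantization keeps the memory footprint of the merged MoE manageable
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Greedy decoding for a short, deterministic answer; sampling settings are a matter of taste
inputs = tokenizer("What is a Mixture of Experts?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```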