
Adapt-Goat-Moe-3x7B is an attempt to build a real GOAT. The model consists of four experts:

- GOAT-AI/GOAT-7B-Community
- AdaptLLM: law-LLM, medicine-LLM, and finance-LLM

The merge configuration is shown below.

```yaml
base_model: GOAT-AI/GOAT-7B-Community
experts:
  - source_model: GOAT-AI/GOAT-7B-Community
    positive_prompts:
    - "chat"
    - "assistant"
    - "tell me"
    - "explain"
  - source_model: AdaptLLM/law-LLM
    positive_prompts:
    - "inquiries"
    - "queries"
    - "legal"
    - "concerns"
    - "questioning"
    - "judicial"
  - source_model: AdaptLLM/medicine-LLM
    positive_prompts:
    - "diagnosis"
    - "analysis"
    - "disease"
    - "clinical diagnosis"
    - "medical"
  - source_model: AdaptLLM/finance-LLM
    positive_prompts:
    - "guidance"
    - "provide"
    - "recommendations"
    - "advice"
    - "accounting"
```

This model was made possible by the great work of the GOAT-AI and AdaptLLM researchers.
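
To try the model, a minimal usage sketch with 🤗 Transformers follows. The repository id is a placeholder (this card does not state the final repo path), and BF16 weights plus GPU placement via accelerate's `device_map="auto"` are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/Adapt-Goat-Moe-3x7B"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16 weights assumed
    device_map="auto",           # requires the accelerate package
)

prompt = "Explain the difference between a tort and a contract claim."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```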

Citation

```
@article{adaptllm,
  title   = {Adapting Large Language Models via Reading Comprehension},
  author  = {Daixuan Cheng and Shaohan Huang and Furu Wei},
  journal = {CoRR},
  volume  = {abs/2309.09530},
  year    = {2023}
}
```