

Introduction

APUS-xDAN-4.0-MOE is a transformer-based, decoder-only language model, trained on a large and diverse corpus to ensure robust performance.

This is an enhanced MoE (Mixture of Experts) model built on the LLaMA architecture with continued pre-training, further optimized with human-feedback algorithms to improve reasoning, mathematical, and logical capabilities at inference time.

For more comprehensive information, please see our blog post and GitHub repository: https://github.com/shootime2021/APUS-xDAN-4.0-moe

Model Details

APUS-xDAN-4.0-MOE leverages the Mixture of Experts (MoE) architecture, incorporating components from dense language models. Specifically, it inherits its capabilities from the highly performant xDAN-L2 series. With a total of 136 billion parameters, of which 30 billion are activated at runtime, APUS-xDAN-4.0-MOE is highly efficient. Through advanced quantization techniques, our open-source version occupies a mere 42 GB, making it compatible with consumer-grade GPUs such as the RTX 4090 and RTX 3090. Key specifications:

  • Parameters: 136B
  • Architecture: Mixture of 4 Experts (MoE)
  • Expert Utilization: 2 experts used per token (see the routing sketch after this list)
  • Layers: 60
  • Attention Heads: 56 for queries, 8 for keys/values
  • Embedding Size: 7,168
  • Additional Features:
    • Rotary embeddings (RoPE)
    • Supports activation sharding and 1.5-bit to 4-bit quantization
  • Maximum Sequence Length (context): 32,768 tokens
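
For illustration, here is a minimal sketch of the top-2 routing over 4 experts described above, written in PyTorch. The module structure and feed-forward size are assumptions for demonstration, not the model's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    """Illustrative top-2 Mixture-of-Experts layer (4 experts, 2 active per token).

    Hypothetical sketch only; the expert structure and feed-forward width are
    assumptions, not the actual APUS-xDAN-4.0-MOE implementation.
    """

    def __init__(self, d_model=7168, d_ff=4 * 7168, n_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (n_tokens, d_model)
        # Score every expert for every token, keep only the 2 best.
        logits = self.gate(x)                                  # (n_tokens, n_experts)
        weights, idx = torch.topk(logits, self.top_k, dim=-1)  # (n_tokens, 2)
        weights = F.softmax(weights, dim=-1)                   # renormalize over the chosen 2

        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            token_pos, slot = torch.where(idx == e)            # tokens routed to expert e
            if token_pos.numel() > 0:
                out[token_pos] += weights[token_pos, slot, None] * expert(x[token_pos])
        return out
```

Because only 2 of the 4 experts run per token, roughly 30B of the 136B parameters are active on any forward pass, which is what makes the model practical on consumer GPUs.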

Usage

| Model | Quantization | Size | Context | Hardware Requirement |
| --- | --- | --- | --- | --- |
| APUS-xDAN4.0-MoE-0402.Q2_K.gguf | Q2_K | 39 GB | 32k | 2x 24 GB GPU memory |
| APUS-xDAN4.0-MoE-0402.IQ3_XXS.gguf | IQ3_XXS | 41 GB | 32k | 2x 24 GB GPU memory |
| APUS-xDAN4.0-MoE-0402.Q3_K_M_Matrix.gguf | Q3_K_M | 51 GB | 32k | 2x 24 GB GPU memory |
| APUS-xDAN4.0-MoE-0402.Q4_K_M.gguf | Q4_K_M | 64 GB | 32k | 3x 24 GB GPU memory |
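To fetch one of the quantized files programmatically, you can use the huggingface_hub client. This is a minimal sketch; the repo_id below is an assumption based on the model name, so substitute the actual repository id from this model card.

```python
# Hypothetical download sketch; the repo_id is an assumption and should be
# replaced with the actual Hugging Face repository hosting these GGUF files.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="xDAN-AI/APUS-xDAN-4.0-MOE",  # assumption: replace with the real repo id
    filename="APUS-xDAN4.0-MoE-0402.Q2_K.gguf",
)
print(path)  # local path to the downloaded model file
```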

Initial Setup


```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make LLAMA_CUDA=1
```

Interactive Chat


```bash
./main -m APUS-xDAN4.0-MoE-0402.Q2_K.gguf \
  --prompt "You are a helpful assistant named APUS-xDAN4.0 MoE." --chatml \
  --interactive \
  --temp 0.7 \
  --ctx-size 4096   # can be raised up to 32768
```
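
If you prefer driving the model from Python, the same GGUF file can be loaded with the llama-cpp-python bindings. This is a minimal sketch; the model path and the n_gpu_layers value are assumptions to tune for your hardware.

```python
# Minimal chat sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path and n_gpu_layers are assumptions; adjust them for your setup.
from llama_cpp import Llama

llm = Llama(
    model_path="APUS-xDAN4.0-MoE-0402.Q2_K.gguf",
    n_ctx=4096,        # context window; the model supports up to 32768
    n_gpu_layers=-1,   # offload all layers to GPU(s) if memory allows
)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant named APUS-xDAN4.0 MoE."},
        {"role": "user", "content": "Explain what a Mixture of Experts model is."},
    ],
    temperature=0.7,
)
print(result["choices"][0]["message"]["content"])
```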

License

APUS-xDAN-4.0-MOE is distributed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
