
NSK-128k-7B-slerp-GGUF ⭐️⭐️⭐️⭐️

NSK-128k-7B-slerp is a merge of Nitral-AI/Nyan-Stunna-7B and Nitral-AI/Kunocchini-7b-128k-test, created with mergekit:

🧩 Configuration

```yaml
slices:
  - sources:
      - model: Nitral-AI/Nyan-Stunna-7B
        layer_range: [0, 32]
      - model: Nitral-AI/Kunocchini-7b-128k-test
        layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/Kunocchini-7b-128k-test
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
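To illustrate what this configuration does, here is a minimal, dependency-free sketch of the two ingredients: spherical linear interpolation (SLERP) between two weight vectors, and a piecewise-linear interpolation of the `t` gradient (e.g. `[0, 0.5, 0.3, 0.7, 1]`) across the 32 layers. The helper names `slerp` and `layer_t` are illustrative, not mergekit's actual API, and the per-layer gradient behavior is an assumption based on mergekit's documented gradient semantics.

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    Unlike plain averaging, SLERP interpolates along the arc between
    the vectors, preserving their magnitude characteristics.
    """
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    # Cosine of the angle between the vectors, clamped for acos safety.
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(v0, v1)) / (n0 * n1)))
    theta = math.acos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s = math.sin(theta)
    w0 = math.sin((1 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return [w0 * a + w1 * b for a, b in zip(v0, v1)]

def layer_t(anchors, layer, n_layers):
    """Piecewise-linearly interpolate a t gradient across layers.

    Hypothetical helper: maps a layer index to an interpolation weight,
    so layer 0 gets anchors[0] and the last layer gets anchors[-1].
    """
    pos = layer / (n_layers - 1) * (len(anchors) - 1)
    i = min(int(pos), len(anchors) - 2)
    frac = pos - i
    return anchors[i] * (1 - frac) + anchors[i + 1] * frac
```

With the `self_attn` gradient above, early attention layers stay close to the base model (`t` near 0) while later ones lean toward the other parent (`t` near 1); the `mlp` gradient is mirrored, and unmatched tensors use the flat `t: 0.5`.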

Eval embedding benchmark (with 70 specific questions):

(Benchmark comparison charts: inf, md28g, SK, ks-inf, command-r, NSK, NSMv2, aura, ivanDrogo, KSI, KSI-RPG, llama3, KSIF, d29l38.)

Downloads last month: 313
Format: GGUF
Model size: 7.24B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit.
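A quick back-of-envelope way to choose a quantization is to estimate file size from parameter count and bits per weight. Note this is only an approximation: real GGUF quant types (e.g. K-quants) mix precisions and add metadata, so actual files differ somewhat. The helper name `quant_size_gb` is illustrative.

```python
def quant_size_gb(n_params, bits_per_weight):
    """Rough GGUF file-size estimate in decimal gigabytes.

    Ignores metadata overhead and mixed-precision quant schemes,
    so treat the result as a lower-bound ballpark figure.
    """
    return n_params * bits_per_weight / 8 / 1e9

# For this 7.24B-parameter model, a 4-bit quant is roughly:
print(f"{quant_size_gb(7.24e9, 4):.2f} GB")
```

By the same arithmetic, the 2-bit quant lands near 1.8 GB and the 8-bit quant near 7.2 GB, which is the main trade-off between memory footprint and output quality.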

