
Llama-3.1-Niitorm-8B-LATCOSx2


Quantizations, ordered by quality:

  • q8_0 imatrix
  • q8_0
  • q6_k imatrix
  • q6_k
  • q5_k_m imatrix
  • q5_k_s imatrix
  • q4_k_m imatrix
  • q4_k_s imatrix
  • iq4_xs imatrix
  • q4_0_4_8 imatrix arm
  • q4_0_4_4 imatrix arm

This is a test RP model: "v000000/L3.1-Niitorm-8B-t0.0001" merged one extra time with "akjindal53244/Llama-3.1-Storm-8B", using a new merging algorithm I wrote called "LATCOS". LATCOS performs non-linear interpolation guided by the cosine similarity between corresponding tensors, in both magnitude and direction. The goal is to find the smoothest possible interpolation and make the two models work together more seamlessly by taking into account where their vectors agree in direction. The resulting model seems a lot smarter even though it only adds a bit more of Storm, but it is also more compliant, which could be a negative since it is less "dynamic".
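The LATCOS code itself isn't included in this card, so the following is only a minimal, hypothetical PyTorch sketch of the idea; the function name, blending curve, and magnitude/direction recombination are assumptions, not the actual implementation:

```python
# Hypothetical sketch of a cosine-similarity-guided merge (not the actual LATCOS code).
import torch

def latcos_like_merge(state_a: dict, state_b: dict, base_t: float = 0.5) -> dict:
    """Interpolate two checkpoints, leaning further toward model B
    where the corresponding tensors already point in a similar direction."""
    merged = {}
    for name, a in state_a.items():
        b = state_b[name]
        # Direction agreement of the flattened tensors, in [-1, 1].
        cos = torch.nn.functional.cosine_similarity(
            a.flatten().float(), b.flatten().float(), dim=0
        )
        # Non-linear schedule (assumption): more agreement -> smoother, larger blend weight.
        t = base_t * (0.5 * (cos + 1.0)) ** 0.5
        # Interpolate direction and magnitude separately, then recombine.
        direction = (1 - t) * a.float() + t * b.float()
        norm = (1 - t) * a.float().norm() + t * b.float().norm()
        merged[name] = (direction / (direction.norm() + 1e-8) * norm).to(a.dtype)
    return merged
```

The real algorithm may weight the blend differently; the sketch only shows how direction agreement can modulate a non-linear interpolation of magnitude and direction.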

imatrix calibration data: a randomized mix of bartowski and kalomeze calibration sets, RP snippets, working GPT-4 code, human chat messages, and story text.
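Any of the GGUF quantizations above run in llama.cpp-based runtimes; here is a minimal usage sketch with llama-cpp-python (the filename, context size, and sampling settings are placeholders, not part of this repo):

```python
# Minimal usage sketch with llama-cpp-python; the GGUF filename below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.1-Niitorm-8B-LATCOSx2.Q6_K.gguf",  # pick any quant from the list above
    n_ctx=8192,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short scene set on a night train."}],
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["message"]["content"])
```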

Format: GGUF · Model size: 8.03B params · Architecture: llama
