Experimental model: a LimaRP LoRA trained on top of internlm2-base-20b with 8192 context length, then merged with internlm2-chat-20b.
The prompt format is ChatML.
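As a quick illustration of the ChatML format, the helper below builds a prompt string with the standard `<|im_start|>`/`<|im_end|>` delimiters. The function name and message structure are illustrative, not part of this model's code:

```python
def chatml_prompt(messages):
    """Format a list of {"role", "content"} dicts as a ChatML prompt.

    Each turn is wrapped as: <|im_start|>role\ncontent<|im_end|>
    and the prompt ends with an open assistant turn for generation.
    """
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model completes it.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
prompt = chatml_prompt(messages)
```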
This is a merge of pre-trained language models created using mergekit.
This model was merged with the task arithmetic merge method, using intervitens/internlm2-base-20b-llama as the base.
The following models were included in the merge:
* internlm2-chat-20b-llama
* internlm2-limarp-20b-v0.03
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: ./internlm2-chat-20b-llama
    parameters:
      weight: 1.0
  - model: ./internlm2-limarp-20b-v0.03
    parameters:
      weight: 0.6
merge_method: task_arithmetic
base_model: ./internlm2-base-20b-llama
parameters:
  #normalize: false
  #int8_mask: true
dtype: bfloat16
```
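Conceptually, task arithmetic adds each model's weighted difference from the base back onto the base: merged = base + Σᵢ wᵢ · (modelᵢ − base), with w = 1.0 for the chat model and w = 0.6 for the LimaRP model above. A toy per-element sketch (plain Python lists stand in for the real weight tensors; this is an illustration, not mergekit's implementation):

```python
def task_arithmetic(base, models, weights):
    """Merge flat parameter vectors: base + sum_i w_i * (model_i - base)."""
    merged = list(base)
    for model, w in zip(models, weights):
        for i, (m, b) in enumerate(zip(model, base)):
            merged[i] += w * (m - b)
    return merged


base = [1.0, 2.0]      # stands in for internlm2-base-20b-llama weights
chat = [2.0, 2.0]      # stands in for internlm2-chat-20b-llama
limarp = [1.0, 3.0]    # stands in for internlm2-limarp-20b-v0.03
merged = task_arithmetic(base, [chat, limarp], [1.0, 0.6])
# merged == [2.0, 2.6]: the chat delta is applied in full, the LimaRP delta at 0.6
```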
Base model: internlm/internlm2-base-20b