
# BigAurelian v0.5 120b 32k

A Goliath-120b-style frankenmerge of aurelian-v0.5-70b-32K and WinterGoddess-1.4x-70b. The goal is performance similar to Goliath, but with an extended 32k context. Important: use a positional embeddings compression factor (`compress_pos_emb`) of 8 when loading this model.
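If you load the model through Hugging Face transformers rather than a UI that exposes `compress_pos_emb` directly, a minimal sketch of the equivalent setting is linear RoPE scaling with a factor of 8 (the `compress_pos_emb`-to-`rope_scaling` mapping is an assumption based on how linear positional compression is usually applied):

```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "llmixer/BigAurelian-v0.5-120b-32k"

# Assumption: compress_pos_emb=8 corresponds to linear RoPE scaling
# with factor 8 when loading via transformers.
config = AutoConfig.from_pretrained(model_id)
config.rope_scaling = {"type": "linear", "factor": 8.0}

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype="auto",
    device_map="auto",  # requires the accelerate package
)
```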

## Prompting Format

Both Llama2-chat and Alpaca prompt formats work; minimal examples of each are sketched below.
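The sketches follow the standard Llama2-chat and Alpaca layouts; the system prompt and placeholder strings are illustrative, not prescribed by this model card:

```python
# Llama2-chat format (single turn); the system prompt is a placeholder.
llama2_prompt = (
    "[INST] <<SYS>>\n"
    "You are a helpful assistant.\n"
    "<</SYS>>\n\n"
    "{user_message} [/INST]"
)

# Alpaca format.
alpaca_prompt = (
    "### Instruction:\n"
    "{instruction}\n\n"
    "### Response:\n"
)

print(alpaca_prompt.format(instruction="Write a short scene set at dawn."))
```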

## Merge process

The models used in the merge are aurelian-v0.5-70b-32K and WinterGoddess-1.4x-70b.

The layer mix (a config sketch follows the list):

- layers [0, 16]: aurelian
- layers [8, 24]: WinterGoddess
- layers [17, 32]: aurelian
- layers [25, 40]: WinterGoddess
- layers [33, 48]: aurelian
- layers [41, 56]: WinterGoddess
- layers [49, 64]: aurelian
- layers [57, 72]: WinterGoddess
- layers [65, 80]: aurelian
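For reproducibility, the sketch below generates a mergekit config matching this layer mix. The model paths and output filename are placeholders, and the passthrough merge method is an assumption based on how Goliath-style frankenmerges are typically built:

```python
import yaml  # pip install pyyaml

# Placeholder paths; point these at your local copies of the source models.
AURELIAN = "/models/aurelian-v0.5-70b-32K"
WINTERGODDESS = "/models/WinterGoddess-1.4x-70b"

# The layer ranges exactly as listed above, alternating between the models.
ranges = [
    (AURELIAN, 0, 16),
    (WINTERGODDESS, 8, 24),
    (AURELIAN, 17, 32),
    (WINTERGODDESS, 25, 40),
    (AURELIAN, 33, 48),
    (WINTERGODDESS, 41, 56),
    (AURELIAN, 49, 64),
    (WINTERGODDESS, 57, 72),
    (AURELIAN, 65, 80),
]

config = {
    "slices": [
        {"sources": [{"model": model, "layer_range": [start, end]}]}
        for model, start, end in ranges
    ],
    "merge_method": "passthrough",
    "dtype": "float16",
}

with open("bigaurelian-120b.yml", "w") as f:
    yaml.dump(config, f, sort_keys=False)
```

The resulting file could then be fed to mergekit, e.g. `mergekit-yaml bigaurelian-120b.yml ./BigAurelian-v0.5-120b-32k`.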

## Acknowledgements

- @grimulkan for creating aurelian-v0.5-70b-32K.
- @Sao10K for creating WinterGoddess.
- @alpindale for creating the original Goliath.
- @chargoddard for developing mergekit.
