YACHT-Llama-3-Ko-8B
[JayLee LLMs Signature Tag]: "I need a Jay Jay chat boy"
Navigating the High Seas of Data: Crafting the Ultimate Yacht Insights with Merged LLMs
Merged Model Series Yacht Features
Welcome aboard the merged model series yacht! This section gives an overview of the features and functionality this series brings together, akin to a sleek, modern yacht sailing across the digital ocean.
1. Function Calling & JSON Outputs
- Offers precise function calling and structured JSON outputs via specialized tokens such as `<tools>`, `<tool_call>`, and `<tool_response>`, streamlining system communication for developers (see the prompt sketch after this list).
2. Conversational Interaction
- Avoids excessive "SYSTEM MESSAGE" chatter while delivering seamless, friendly dialogue.
- Specializes in answering questions with precision, handling arithmetic and tabular data effortlessly.
3. Expanded Context Length
- Extends the context length to 256k tokens using PoSE (Positional Skip-wise Training), enabling analysis of much longer inputs.
4. Multilingual Capabilities
- Transfers instruction-following capabilities from English to Korean for reliable interaction across both languages.
5. Optimized Dialogue & Safety
- Aligns with human preferences through supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), ensuring helpful and safe dialogue.
6. Precision Merging
- Merges Korean foundational and preview models via task arithmetic, providing seamless integration.
7. Specialized Biomedical Knowledge
- Specializes in biomedical tasks with accurate responses for healthcare professionals and researchers.
8. Novel Training & Collaboration
- Combines the ORPO (Odds Ratio Preference Optimization) training method with Dolphin preference datasets for high-quality conversation and collaboration.
The merged model series yacht offers unparalleled functionality, drawing together a fleet of specialized models. Whether you need precise function calling, multilingual capabilities, or conversational AI, this yacht has every deck optimized to navigate the digital ocean with style and precision.
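To make the token-based function-calling flow from item 1 concrete, here is a minimal Python sketch. It assumes the Hermes-2-Pro-style prompt convention (tool schemas declared inside `<tools>` tags in the system message, model calls wrapped in `<tool_call>` tags, and results returned in `<tool_response>` tags); the `get_weather` tool and its schema are hypothetical.

```python
import json

# Hypothetical tool schema, declared to the model inside <tools> tags
# (Hermes-2-Pro-style format; adjust to the model's actual chat template).
tools = [{
    "name": "get_weather",
    "description": "Return the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

system_prompt = (
    "You are a function-calling assistant. You may call one of these tools:\n"
    f"<tools>{json.dumps(tools)}</tools>\n"
    "When you call a tool, reply with a JSON object inside <tool_call> tags."
)

# A model reply in the expected format might look like this:
reply = '<tool_call>{"name": "get_weather", "arguments": {"city": "Seoul"}}</tool_call>'

# Extract and parse the structured call from the model output.
start = reply.index("<tool_call>") + len("<tool_call>")
end = reply.index("</tool_call>")
call = json.loads(reply[start:end])
print(call["name"], call["arguments"])  # get_weather {'city': 'Seoul'}
```

In a full loop, the parsed call would be executed and its result fed back to the model inside `<tool_response>` tags for the final answer.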
Merge Method
This model was merged using the DARE TIES merge method, with NousResearch/Meta-Llama-3-8B as the base.
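As a rough intuition for the "drop and rescale" (DARE) step, the toy sketch below sparsifies each fine-tuned model's delta against the base and rescales the survivors so the expected delta is preserved; `density` and `weight` mirror the parameters in the configuration below. This is a simplified illustration only, not mergekit's actual implementation, and it omits the TIES-style sign election that follows.

```python
import torch

def dare_delta(base: torch.Tensor, tuned: torch.Tensor,
               density: float, weight: float) -> torch.Tensor:
    """Toy DARE step: keep each delta element with probability `density`,
    rescale survivors by 1/density, and scale by the merge `weight`."""
    delta = tuned - base                      # task vector of the fine-tune
    mask = torch.rand_like(delta) < density   # randomly keep ~density of entries
    return weight * (delta * mask) / density  # rescale to preserve expectation

# Merged parameter = base + sum of sparsified, weighted deltas
base = torch.randn(4, 4)
tuned_a = base + 0.1 * torch.randn(4, 4)
tuned_b = base + 0.1 * torch.randn(4, 4)
merged = (base
          + dare_delta(base, tuned_a, density=0.60, weight=0.25)
          + dare_delta(base, tuned_b, density=0.55, weight=0.10))
print(merged.shape)
```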
Models Merged
The following models were included in the merge:
- NousResearch/Hermes-2-Pro-Llama-3-8B
- cognitivecomputations/dolphin-2.9-llama3-8b
- winglian/llama-3-8b-256k-PoSE
- maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
- asiansoul/Llama-3-Open-Ko-Linear-8B
- NousResearch/Meta-Llama-3-8B-Instruct
- nvidia/Llama3-ChatQA-1.5-8B
- Danielbrdz/Barcenas-Llama3-8b-ORPO
- aaditya/Llama3-OpenBioLLM-8B
Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # Base model providing a general foundation without specific parameters
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.60
      weight: 0.25
  - model: winglian/llama-3-8b-256k-PoSE
    parameters:
      density: 0.55
      weight: 0.15
  - model: nvidia/Llama3-ChatQA-1.5-8B
    parameters:
      density: 0.55
      weight: 0.1
  - model: asiansoul/Llama-3-Open-Ko-Linear-8B
    parameters:
      density: 0.55
      weight: 0.2
  - model: maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
    parameters:
      density: 0.55
      weight: 0.1
  - model: NousResearch/Hermes-2-Pro-Llama-3-8B
    parameters:
      density: 0.55
      weight: 0.1
  - model: cognitivecomputations/dolphin-2.9-llama3-8b
    parameters:
      density: 0.55
      weight: 0.05
  - model: Danielbrdz/Barcenas-Llama3-8b-ORPO
    parameters:
      density: 0.55
      weight: 0.05
  - model: aaditya/Llama3-OpenBioLLM-8B
    parameters:
      density: 0.55
      weight: 0.1
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
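Once the merge has been produced (e.g., by feeding this YAML to mergekit), the resulting checkpoint loads like any other Llama-3 model with transformers. The local path below is a placeholder; substitute the actual merge output directory or the published Hub repo id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path: point this at the mergekit output directory
# (or the published Hub repo id for this model).
model_path = "./yacht-llama-3-ko-8b"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

# Korean prompt, since the merge targets Korean instruction-following.
inputs = tokenizer("요트 모델에 대해 소개해 주세요.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```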