YACHT-Llama-3-Ko-8B

(Image: DALL-E-generated yacht)

🎡 [JayLee LLMs Signature Tag] : ✍️ "I need a Jay Jay chat boy" 🎡

✨ Navigating the High Seas of Data: Crafting the Ultimate Yacht Insights with Merged LLMs ✨

🏟️ Merged Model Series Yacht Features

Welcome to the merged model series yacht! This card provides an overview of the features and functionality that this merge brings together, akin to a sleek, modern yacht sailing across the digital ocean.

1. Function Calling & JSON Outputs

  • Offers precise function calling and structured JSON outputs via specialized tokens like <tools>, <tool_call>, and <tool_response>. Streamlines system communication for developers.
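As a minimal sketch of how structured outputs like these can be consumed, the snippet below parses JSON payloads out of `<tool_call>` spans in a model response. The surrounding prose and the JSON schema (`name`/`arguments` keys, the `get_weather` function) are illustrative assumptions, not part of this model's documented contract.

```python
import json
import re

def extract_tool_calls(model_output: str) -> list[dict]:
    """Pull JSON payloads out of <tool_call>...</tool_call> spans."""
    pattern = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)
    return [json.loads(match) for match in pattern.findall(model_output)]

# Hypothetical model output using the special tokens described above.
output = (
    "Let me check the weather.\n"
    "<tool_call>\n"
    '{"name": "get_weather", "arguments": {"city": "Seoul"}}\n'
    "</tool_call>"
)

calls = extract_tool_calls(output)
print(calls[0]["name"])               # get_weather
print(calls[0]["arguments"]["city"])  # Seoul
```

A real deployment would dispatch the parsed call to the matching tool and feed the result back inside a `<tool_response>` span.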

2. Conversational Interaction

  • Avoids excessive "SYSTEM MESSAGE" chatter while delivering seamless, friendly dialogue.
  • Specializes in answering questions with precision, handling arithmetic and tabular data effortlessly.

3. Expanded Context Length

  • Extends the context length to 256k tokens using PoSE, offering a broader field of data analysis.

4. Multilingual Capabilities

  • Transfers instruction-following from English to Korean for reliable interaction across languages.

5. Optimized Dialogue & Safety

  • Aligns with human preferences using fine-tuning (SFT) and reinforcement learning (RLHF), ensuring helpful and safe dialogues.

6. Precision Merging

  • Merges foundational and preview models for Korean language through task arithmetic, providing seamless integration.
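Task arithmetic merges models by adding weighted parameter deltas to a shared base: merged = base + Σ wᵢ · (fine-tunedᵢ − base). The toy values and merge weights below are made up purely to illustrate the arithmetic.

```python
# Toy illustration of task arithmetic: each fine-tuned model contributes
# its delta from the base, scaled by a merge weight. Values are made up.
base = [1.0, 2.0, 3.0]
korean_ft = [1.5, 2.0, 2.5]    # hypothetical fine-tuned weights
preview_ft = [1.0, 3.0, 3.5]   # hypothetical fine-tuned weights

weights = {"korean": 0.6, "preview": 0.4}

merged = [
    b + weights["korean"] * (k - b) + weights["preview"] * (p - b)
    for b, k, p in zip(base, korean_ft, preview_ft)
]
print([round(v, 6) for v in merged])  # [1.3, 2.4, 2.9]
```

In practice this runs over full weight tensors rather than three floats, but the per-parameter operation is the same.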

7. Specialized Biomedical Knowledge

  • Specializes in biomedical tasks with accurate responses for healthcare professionals and researchers.

8. Novel Training & Collaboration

  • Combines ORPO method and dolphin preference datasets for high-quality conversation and collaboration.

The merged model series yacht offers unparalleled functionality, drawing together a fleet of specialized models. Whether you need precise function calling, multilingual capabilities, or conversational AI, this yacht has every deck optimized to navigate the digital ocean with style and precision.

👘 Merge Method

This model was merged using the DARE TIES merge method, with NousResearch/Meta-Llama-3-8B as the base.

🩱 Models Merged

The following models were included in the merge:

  • NousResearch/Meta-Llama-3-8B-Instruct
  • winglian/llama-3-8b-256k-PoSE
  • nvidia/Llama3-ChatQA-1.5-8B
  • asiansoul/Llama-3-Open-Ko-Linear-8B
  • maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
  • NousResearch/Hermes-2-Pro-Llama-3-8B
  • cognitivecomputations/dolphin-2.9-llama3-8b
  • Danielbrdz/Barcenas-Llama3-8b-ORPO
  • aaditya/Llama3-OpenBioLLM-8B

🪭 Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # Base model providing a general foundation without specific parameters

  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.60
      weight: 0.25

  - model: winglian/llama-3-8b-256k-PoSE
    parameters:
      density: 0.55
      weight: 0.15

  - model: nvidia/Llama3-ChatQA-1.5-8B
    parameters:
      density: 0.55
      weight: 0.1

  - model: asiansoul/Llama-3-Open-Ko-Linear-8B
    parameters:
      density: 0.55
      weight: 0.2

  - model: maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
    parameters:
      density: 0.55
      weight: 0.1

  - model: NousResearch/Hermes-2-Pro-Llama-3-8B
    parameters:
      density: 0.55
      weight: 0.1

  - model: cognitivecomputations/dolphin-2.9-llama3-8b
    parameters:
      density: 0.55
      weight: 0.05

  - model: Danielbrdz/Barcenas-Llama3-8b-ORPO
    parameters:
      density: 0.55
      weight: 0.05

  - model: aaditya/Llama3-OpenBioLLM-8B
    parameters:
      density: 0.55
      weight: 0.1

merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
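The `density` values in the configuration control the DARE sparsification step: for each contributing model, a random `density` fraction of its parameter deltas is kept and the survivors are rescaled by 1/density so each delta's expected value is unchanged. The sketch below illustrates only that drop-and-rescale step on toy numbers (the sign-election step of TIES is not shown); it is an assumption-laden illustration, not the mergekit implementation.

```python
import random

def dare_drop(delta: list[float], density: float, seed: int = 0) -> list[float]:
    """DARE-style sparsification: randomly keep `density` of the delta
    entries and rescale the survivors by 1/density so the expected
    value of each entry is unchanged."""
    rng = random.Random(seed)
    return [d / density if rng.random() < density else 0.0 for d in delta]

delta = [0.4, -0.2, 0.1, 0.3, -0.5]
sparse = dare_drop(delta, density=0.55)

# Every surviving entry equals the original divided by the density;
# the rest are zeroed out.
for orig, kept in zip(delta, sparse):
    assert kept == 0.0 or abs(kept - orig / 0.55) < 1e-12
```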
