
Update!

  • [2024.08.09] We updated the model to Bllossom-8B, based on Llama 3.1. It shows roughly a 5% average performance gain over the previous Llama 3-based Bllossom. (This update is still in progress.)
  • [2024.06.18] We updated to the Bllossom ELO model, whose pre-training corpus was expanded to 250GB. Note that this version does not include vocabulary expansion; if you want the earlier vocabulary-expanded long-context model, please contact us directly!
  • [2024.06.18] The Bllossom ELO model is newly trained with our in-house ELO pre-training method. On the LogicKor benchmark, it achieved the SOTA score among existing Korean models with 10B or fewer parameters.

LogicKor benchmark results:

| Model | Math | Reasoning | Writing | Coding | Understanding | Grammar | Single ALL | Multi ALL | Overall |
|---|---|---|---|---|---|---|---|---|---|
| gpt-3.5-turbo-0125 | 7.14 | 7.71 | 8.28 | 5.85 | 9.71 | 6.28 | 7.50 | 7.95 | 7.72 |
| gemini-1.5-pro-preview-0215 | 8.00 | 7.85 | 8.14 | 7.71 | 8.42 | 7.28 | 7.90 | 6.26 | 7.08 |
| llama-3-Korean-Bllossom-8B | 5.43 | 8.29 | 9.00 | 4.43 | 7.57 | 6.86 | 6.93 | 6.93 | 6.93 |

Bllossom | Demo | Homepage | Github

Our Bllossom team is pleased to release Bllossom, a Korean-English bilingual language model!
Supported by the Seoultech supercomputing center, the entire model was full-fine-tuned on more than 100GB of Korean text, producing a Korean-enhanced bilingual model!
Looking for a model that speaks Korean well?
 - A first for Korean: vocabulary expansion with over 30,000 Korean tokens
 - Handles Korean context roughly 25% longer than Llama 3
 - Korean-English knowledge linking using a Korean-English parallel corpus (pre-training)
 - Fine-tuning on data crafted by linguists with Korean culture and language in mind
 - Reinforcement learning
All of this comes in a single, commercially usable model, so build your own model on top of Bllossom!
It can even be trained on a free Colab GPU. Or run the quantized model on CPU: [quantized model](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B-4bit)
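
For CPU-only inference, the GGUF quantizations in this repository can be run with llama-cpp-python. The following is a minimal sketch, not an official recipe: the GGUF filename, context size, and thread count are assumptions you should adjust to the file you actually download.

```python
# Minimal CPU inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The GGUF filename below is hypothetical; substitute the quantization you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3-Korean-Bllossom-8B.Q4_K_M.gguf",  # assumed local file
    n_ctx=4096,     # context window
    n_threads=8,    # CPU threads
)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant. Please answer the user's questions kindly."},
    {"role": "user", "content": "μ„œμšΈμ˜ 유λͺ…ν•œ κ΄€κ΄‘ μ½”μŠ€λ₯Ό λ§Œλ“€μ–΄μ€„λž˜?"},
]

# llama-cpp-python applies the chat template embedded in the GGUF metadata.
out = llm.create_chat_completion(messages=messages, max_tokens=512, temperature=0.6, top_p=0.9)
print(out["choices"][0]["message"]["content"])
```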

1. Bllossom-8B is a practically oriented language model built in collaboration with linguists from Seoultech, Teddysum, and the Yonsei University language resources lab! We will maintain it with continuous updates, so please make good use of it πŸ™‚
2. We also have the even stronger Advanced-Bllossom 8B and 70B models, as well as a vision-language model! (If you are curious, please contact us individually!!)
3. Bllossom was accepted for presentation at NAACL 2024 and LREC-COLING 2024 (oral).
4. We will keep releasing better language models!! Anyone interested in joint research to strengthen Korean (especially papers) is always welcome!!
   Teams that can lend even a small number of GPUs are especially encouraged to reach out; we will help you build whatever you want.

The Bllossom language model is a Korean-English bilingual language model based on the open-source Llama 3. It strengthens the connection between Korean and English knowledge and has the following features:

  • Knowledge Linking: Linking Korean and English knowledge through additional training.
  • Vocabulary Expansion: Expansion of the Korean vocabulary to enhance Korean expressiveness (see the tokenizer sketch after this list).
  • Instruction Tuning: Tuning with custom-made instruction-following data specialized for the Korean language and Korean culture.
  • Human Feedback: DPO has been applied.
  • Vision-Language Alignment: Aligning the vision transformer with this language model.
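
The vocabulary expansion is what underlies the "roughly 25% longer Korean context" claim above: with added Korean tokens, Korean text encodes into fewer tokens than under the base Llama 3 tokenizer. Below is a minimal sketch for measuring this yourself; note that the base model ID is gated on Hugging Face, and that the 2024.06.18 release reverted vocabulary expansion, so the ratio you observe depends on the model version.

```python
# Compare Korean token counts between the Bllossom tokenizer and the base
# Llama 3 tokenizer. meta-llama checkpoints are gated and require access approval.
from transformers import AutoTokenizer

bllossom = AutoTokenizer.from_pretrained("MLP-KTLim/llama-3-Korean-Bllossom-8B")
base = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

text = "μ„œμšΈμ€ λ‹€μ–‘ν•œ 문화와 역사, μžμ—°μ„ κ²ΈλΉ„ν•œ λ„μ‹œμž…λ‹ˆλ‹€."
n_bllossom = len(bllossom.encode(text, add_special_tokens=False))
n_base = len(base.encode(text, add_special_tokens=False))
print(f"Bllossom: {n_bllossom} tokens, base Llama 3: {n_base} tokens")
```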

This model was developed by MLP Lab at Seoultech, Teddysum, and Yonsei University.

Demo Video

Bllossom-V Demo

Bllossom Demo (Kakao)

NEWS

  • [2024.06.18] We have reverted to the non-vocab-expansion model. However, we have significantly increased the amount of pre-training data to 250GB.
  • [2024.05.08] Vocab expansion model update.
  • [2024.04.25] We released Bllossom v2.0, based on Llama 3.

Example code

Colab Tutorial

Install Dependencies

```bash
pip install torch transformers==4.40.0 accelerate
```

Python code with Pipeline

```python
import transformers
import torch

model_id = "MLP-KTLim/llama-3-Korean-Bllossom-8B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

pipeline.model.eval()

PROMPT = '''You are a helpful AI assistant. Please answer the user's questions kindly. 당신은 유λŠ₯ν•œ AI μ–΄μ‹œμŠ€ν„΄νŠΈ μž…λ‹ˆλ‹€. μ‚¬μš©μžμ˜ μ§ˆλ¬Έμ— λŒ€ν•΄ μΉœμ ˆν•˜κ²Œ λ‹΅λ³€ν•΄μ£Όμ„Έμš”.'''
instruction = "μ„œμšΈμ˜ 유λͺ…ν•œ κ΄€κ΄‘ μ½”μŠ€λ₯Ό λ§Œλ“€μ–΄μ€„λž˜?"

messages = [
    {"role": "system", "content": PROMPT},
    {"role": "user", "content": instruction},
]

# add_generation_prompt appends the assistant header so the model starts its reply
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# The Llama 3 chat template ends each turn with <|eot_id|>, which may differ
# from the tokenizer's default EOS token, so generation stops on either.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9
)

print(outputs[0]["generated_text"][len(prompt):])
# 물둠이죠! μ„œμšΈμ€ λ‹€μ–‘ν•œ 문화와 역사, μžμ—°μ„ κ²ΈλΉ„ν•œ λ„μ‹œλ‘œ, λ§Žμ€ κ΄€κ΄‘ λͺ…μ†Œλ₯Ό μžλž‘ν•©λ‹ˆλ‹€. μ—¬κΈ° μ„œμšΈμ˜ 유λͺ…ν•œ κ΄€κ΄‘ μ½”μŠ€λ₯Ό μ†Œκ°œν•΄ λ“œλ¦΄κ²Œμš”.

### μ½”μŠ€ 1: 역사와 λ¬Έν™” 탐방

1. **경볡ꢁ**
   - μ„œμšΈμ˜ λŒ€ν‘œμ μΈ ꢁꢐ둜, μ‘°μ„  μ™•μ‘°μ˜ 역사와 λ¬Έν™”λ₯Ό μ²΄ν—˜ν•  수 μžˆλŠ” κ³³μž…λ‹ˆλ‹€.

2. **뢁촌 ν•œμ˜₯λ§ˆμ„**
   - 전톡 ν•œμ˜₯이 잘 보쑴된 λ§ˆμ„λ‘œ, μ‘°μ„ μ‹œλŒ€μ˜ μƒν™œμƒμ„ λŠλ‚„ 수 μžˆμŠ΅λ‹ˆλ‹€.

3. **인사동**
   - 전톡 문화와 ν˜„λŒ€ 예술이 κ³΅μ‘΄ν•˜λŠ” 거리둜, λ‹€μ–‘ν•œ κ°€λŸ¬λ¦¬μ™€ 전톡 μŒμ‹μ μ΄ μžˆμŠ΅λ‹ˆλ‹€.

4. **μ²­κ³„μ²œ**
   - μ„œμšΈμ˜ 쀑심에 μœ„μΉ˜ν•œ 천문으둜, μ‘°κΉ…κ³Ό 산책을 즐길 수 μžˆλŠ” κ³³μž…λ‹ˆλ‹€.

### μ½”μŠ€ 2: μžμ—°κ³Ό μ‡Όν•‘

1. **남산 μ„œμšΈνƒ€μ›Œ**
   - μ„œμšΈμ˜ 전경을 ν•œλˆˆμ— λ³Ό 수 μžˆλŠ” 곳으둜, 특히 저녁 μ‹œκ°„λŒ€μ— 일λͺ°μ„ κ°μƒν•˜λŠ” 것이 μ’‹μŠ΅λ‹ˆλ‹€.

2. **λͺ…동**
   - μ‡Όν•‘κ³Ό μŒμ‹μ μ΄ μ¦λΉ„ν•œ μ§€μ—­μœΌλ‘œ, λ‹€μ–‘ν•œ λΈŒλžœλ“œμ™€ 전톡 μŒμ‹μ„ 맛볼 수 μžˆμŠ΅λ‹ˆλ‹€.

3. **ν•œκ°•κ³΅μ›**
   - μ„œμšΈμ˜ μ£Όμš” 곡원 쀑 ν•˜λ‚˜λ‘œ, μ‘°κΉ…, μžμ „κ±° 타기, λ°°λ‚­ 여행을 즐길 수 μžˆμŠ΅λ‹ˆλ‹€.

4. **ν™λŒ€**
   - μ Šμ€μ΄λ“€μ΄ 즐겨 μ°ΎλŠ” μ§€μ—­μœΌλ‘œ, λ‹€μ–‘ν•œ 카페, λ ˆμŠ€ν† λž‘, 클럽이 μžˆμŠ΅λ‹ˆλ‹€.

### μ½”μŠ€ 3: ν˜„λŒ€μ™€ μ „ν†΅μ˜ μ‘°ν™”

1. **λ™λŒ€λ¬Έ λ””μžμΈ ν”ŒλΌμž (DDP)**
   - ν˜„λŒ€μ μΈ κ±΄μΆ•λ¬Όλ‘œ, λ‹€μ–‘ν•œ μ „μ‹œμ™€ μ΄λ²€νŠΈκ°€ μ—΄λ¦¬λŠ” κ³³μž…λ‹ˆλ‹€.

2. **μ΄νƒœμ›**
   - λ‹€μ–‘ν•œ ꡭ제 μŒμ‹κ³Ό μΉ΄νŽ˜κ°€ μžˆλŠ” μ§€μ—­μœΌλ‘œ, λ‹€μ–‘ν•œ λ¬Έν™”λ₯Ό κ²½ν—˜ν•  수 μžˆμŠ΅λ‹ˆλ‹€.

3. **κ΄‘ν™”λ¬Έ**
   - μ„œμšΈμ˜ 쀑심에 μœ„μΉ˜ν•œ κ΄‘μž₯으둜, λ‹€μ–‘ν•œ 곡연과 행사가 μ—΄λ¦½λ‹ˆλ‹€.

4. **μ„œμšΈλžœλ“œ**
   - μ„œμšΈ 외곽에 μœ„μΉ˜ν•œ ν…Œλ§ˆνŒŒν¬λ‘œ, κ°€μ‘±λ‹¨μœ„ κ΄€κ΄‘κ°λ“€μ—κ²Œ 인기 μžˆλŠ” κ³³μž…λ‹ˆλ‹€.

이 μ½”μŠ€λ“€μ€ μ„œμšΈμ˜ λ‹€μ–‘ν•œ λ©΄λͺ¨λ₯Ό κ²½ν—˜ν•  수 μžˆλ„λ‘ κ΅¬μ„±λ˜μ–΄ μžˆμŠ΅λ‹ˆλ‹€. 각 μ½”μŠ€λ§ˆλ‹€ μ‹œκ°„μ„ μ‘°μ ˆν•˜κ³ , 개인의 관심사에 맞게 μ„ νƒν•˜μ—¬ λ°©λ¬Έν•˜λ©΄ 쒋을 것 κ°™μŠ΅λ‹ˆλ‹€. 즐거운 μ—¬ν–‰ λ˜μ„Έμš”!

Python code with AutoModel


```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = 'MLP-KTLim/llama-3-Korean-Bllossom-8B'

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

model.eval()

PROMPT = '''You are a helpful AI assistant. Please answer the user's questions kindly. 당신은 유λŠ₯ν•œ AI μ–΄μ‹œμŠ€ν„΄νŠΈ μž…λ‹ˆλ‹€. μ‚¬μš©μžμ˜ μ§ˆλ¬Έμ— λŒ€ν•΄ μΉœμ ˆν•˜κ²Œ λ‹΅λ³€ν•΄μ£Όμ„Έμš”.'''
instruction = "μ„œμšΈμ˜ 유λͺ…ν•œ κ΄€κ΄‘ μ½”μŠ€λ₯Ό λ§Œλ“€μ–΄μ€„λž˜?"

messages = [
    {"role": "system", "content": PROMPT},
    {"role": "user", "content": instruction},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9
)

print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
# (Sample output omitted; it is identical to the pipeline example above.)
```

Citation

Language Model

```
@misc{bllossom,
  author = {ChangSu Choi and Yongbin Jeong and Seoyoon Park and InHo Won and HyeonSeok Lim and SangMin Kim and Yejee Kang and Chanhyuk Yoon and Jaewan Park and Yiseul Lee and HyeJin Lee and Younggyun Hahm and Hansaem Kim and KyungTae Lim},
  title = {Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean},
  year = {2024},
  journal = {LREC-COLING 2024},
  url = {https://arxiv.org/pdf/2403.10882}
}
```

Vision-Language Model

```
@misc{bllossom-V,
  author = {Dongjae Shin and Hyunseok Lim and Inho Won and Changsu Choi and Minjun Kim and Seungwoo Song and Hangyeol Yoo and Sangmin Kim and Kyungtae Lim},
  title = {X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment},
  year = {2024},
  journal = {NAACL 2024 findings},
  url = {https://arxiv.org/pdf/2403.11399}
}
```

Contact

  • μž„κ²½νƒœ(KyungTae Lim), Professor at Seoultech. ktlim@seoultech.ac.kr
  • ν•¨μ˜κ· (Younggyun Hahm), CEO of Teddysum. hahmyg@teddysum.ai
  • κΉ€ν•œμƒ˜(Hansaem Kim), Professor at Yonsei. khss@yonsei.ac.kr
