
Model Card for Saxo/Linkbricks-Horizon-AI-Korean-Advanced-8B-COT-boost

Finetuned by Yunsung Ji (Saxo), Director and data scientist at Linkbricks, a company specializing in AI and big data analytics.

This is a Korean language model trained with SFT followed by DPO on eight H100-80G GPUs, starting from the Saxo/Linkbricks-Horizon-AI-Nous-Hermes-3-Llama3.1-Korean-cpt-8b base model. In addition to roughly 10M Korean samples, the training data includes Korean-Chinese-English-Japanese cross-lingual data for a variety of tasks as well as math and logical-reasoning data, so the model supports cross-lingual augmentation across the four languages and can handle complex logic and math problems.

- The tokenizer is taken from the base model as-is, with no vocabulary expansion
- Strengthened math and logical-judgment capabilities
- Boosted Chain-of-Thought (CoT) performance
- 128k context window
- Korean function calling and tool calling support (see the tool-calling sketch below)
- Trained with DeepSpeed Stage 3, rsLoRA, and BAdam layer mode (see the training sketch below)
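
A minimal inference sketch with Hugging Face transformers follows. It assumes the chat template shipped with the tokenizer (the standard Llama 3.1 / Hermes-style template); the Korean prompt and generation parameters are illustrative, not recommended settings from the authors.

```python
# Minimal inference sketch; generation parameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Saxo/Linkbricks-Horizon-AI-Korean-Advanced-8B-COT-boost"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the published weights are BF16
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant that answers in Korean."},
    {"role": "user", "content": "서울에서 부산까지 KTX로 가는 가장 빠른 방법을 단계별로 설명해줘."},
]

# Build the prompt with the tokenizer's chat template and generate a reply.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```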

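The card states that Korean function calling and tool calling are supported, but the exact call format is not documented here. The sketch below assumes the tokenizer's chat template accepts the `tools` argument of `apply_chat_template` (as Llama 3.1 / Hermes-style templates do) and that the model replies with a tool call for the application to parse; `get_weather` is a hypothetical example tool.

```python
# Tool-calling sketch under the assumptions stated above.
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Saxo/Linkbricks-Horizon-AI-Korean-Advanced-8B-COT-boost"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def get_weather(city: str):
    """Get the current weather for a city. (Hypothetical example tool.)

    Args:
        city: Name of the city, e.g. "부산".
    """
    return json.dumps({"city": city, "weather": "맑음", "temp_c": 21}, ensure_ascii=False)

messages = [{"role": "user", "content": "오늘 부산 날씨 어때?"}]

# The chat template serializes the function signature and docstring into a tool schema;
# the model is then expected to answer with a tool call that the application executes.
inputs = tokenizer.apply_chat_template(
    messages, tools=[get_weather], add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```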

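Of the training ingredients listed above (SFT followed by DPO on 8x H100-80G, DeepSpeed ZeRO Stage 3, rsLoRA, BAdam layer mode), the rsLoRA piece can be sketched with the peft library. This is a minimal illustration under assumed hyperparameters (rank, alpha, and target modules are placeholders), not the authors' training script; the DeepSpeed and BAdam settings belong in the trainer/launcher configuration and are not shown.

```python
# rsLoRA configuration sketch; hyperparameters below are assumptions for illustration.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_id = "Saxo/Linkbricks-Horizon-AI-Nous-Hermes-3-Llama3.1-Korean-cpt-8b"  # base model named in the card

lora_config = LoraConfig(
    r=64,                      # assumed rank
    lora_alpha=64,             # assumed scaling
    use_rslora=True,           # rank-stabilized LoRA scaling (alpha / sqrt(r)), as stated in the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)

model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="bfloat16")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```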
www.linkbricks.com, www.linkbricks.vc

Model size: 8.03B parameters · Tensor type: BF16 · Format: Safetensors
