---
library_name: transformers
tags: []
---
# Model Card for Llama-3-instruction-constructionsafety-layertuning
**Llama-3-instruction-constructionsafety-layertuning** is a fine-tuned model based on **beomi/Llama-3-KoEn-8B-Instruct-preview**.
## Model Details
**Llama-3-instruction-constructionsafety-layertuning**
Llama-3-instruction-constructionsafety-layertuning is a continued-pretraining model based on beomi/Llama-3-KoEn-8B-Instruct-preview.
Training was conducted on QA datasets and raw Construction Safety Guidelines provided by the Korea Occupational Safety and Health Agency (KOSHA).
Training used full-parameter tuning on 2×A100 GPUs (80 GB), with approximately 11,000 examples.
After fine-tuning all layers, layers 0, 30, and 31 were replaced with the corresponding parameters from the base model. This was done as a precaution against errors resulting from training on raw data.
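The layer-replacement step can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' published merge script: `swap_layers` and the toy state dicts are hypothetical stand-ins; only the Llama-style parameter naming (`model.layers.<i>.…`) and the layer indices 0, 30, and 31 come from the card.

```python
# Minimal sketch of the layer swap described above (illustrative, not the
# authors' actual merge script). After full fine-tuning, the parameters of
# layers 0, 30 and 31 are copied back from the base model.

def swap_layers(tuned_sd, base_sd, layer_ids):
    """Return a state dict in which the listed transformer layers
    come from the base model and all other parameters stay fine-tuned."""
    prefixes = tuple(f"model.layers.{i}." for i in layer_ids)
    return {name: (base_sd[name] if name.startswith(prefixes) else value)
            for name, value in tuned_sd.items()}

# Toy stand-ins keyed like Llama checkpoints (real values are tensors):
tuned_sd = {f"model.layers.{i}.self_attn.q_proj.weight": "tuned" for i in range(32)}
base_sd = {name: "base" for name in tuned_sd}

merged = swap_layers(tuned_sd, base_sd, layer_ids=[0, 30, 31])
print(merged["model.layers.0.self_attn.q_proj.weight"])   # from the base model
print(merged["model.layers.15.self_attn.q_proj.weight"])  # stays fine-tuned
```

On a real checkpoint, the same key-prefix match would be applied to the tensors of both models' `state_dict()` before saving the merged model with `save_pretrained`.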
## Simple Use
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
model_name = "juungwon/Llama-3-instruction-constructionsafety-layertuning"
# access_token: your Hugging Face access token (only needed for gated or private repos)
tuned_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    token=access_token,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name, token=access_token)
tokenizer.pad_token = tokenizer.eos_token
pipe = pipeline("text-generation", model=tuned_model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {
        "role": "system",
        "content": "친절한 건설안전 전문가로서 상대방의 요청에 최대한 '자세하고' 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.",
    },
    {"role": "user", "content": "흙막이 가시설 공사의 작업자 안전 조치에 대해서 10가지 정도 알려주세요."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=1024, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Output
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
친절한 건설안전 전문가로서 상대방의 요청에 최대한 '자세하고' 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.<|eot_id|><|start_header_id|>user<|end_header_id|>
흙막이 가시설 공사의 작업자 안전 조치에 대해서 10가지 정도 알려주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
1. 공사 중에는 현장사무실을 설치하여 공사 관련 기록, 작업 지휘, 안전 교육, 안전 점검 등의 업무를 수행할 수 있어야 한다.
2. 공사 전·후에는 주변의 지반침하, 지하수위, 지하 매설물, 지하면의 이상, 지반의 이상, 지하면의 균열, 노면의 이상 유무, 도로 시설물 등의 이상 유무를 확인하여야 한다.
3. 설계도서, 시방서, 안전보건규칙 및 관련 법규, 안전보건규칙과 관련된 지침, 산업안전보건기준에 관한 규칙을 검토하여 안전대책을 수립하여야 한다.
4. 흙막이 가시설 공사 시에는 작업자의 추락 방지를 위하여 안전대, 안전모, 안전화 등 개인보호구를 착용하여야 한다.
5. 흙막이 가시설 공사 시에는 근골격계질환 예방을 위하여 적절한 휴식시간을 제공하여야 한다.
6. 흙막이 가시설 공사 시에는 작업자의 건강관리를 위하여 작업환경을 개선하고 적절한 휴식공간을 마련하여야 한다.
7. 흙막이 가시설 공사 시에는 작업자의 산업재해 예방을 위하여 안전교육, 안전시설, 안전장비를 마련하여야 한다.
8. 흙막이 가시설 공사 시에는 작업자의 안전을 위하여 안전작업계획을 수립하여야 한다.
9. 흙막이 가시설 공사 시에는 작업자의 안전을 위하여 토질, 지하수위, 토층, 매설물, 인접 구조물, 지하면의 이상 유무, 도로 시설물 등의 이상 유무를 확인하여야 한다.
10. 흙막이 가시설 공사 시에는 작업자의 안전을 위하여 작업자 1인당 1개의 안전모, 안전화, 안전대 등 개인보호구를 착용하여야 한다.
```
## Training Data
Training data will be provided upon request.
**BibTeX:**
```bibtex
@article{llama3cs-layertuning,
  title={Llama-3-instruction-constructionsafety-layertuning},
  author={L, Jungwon and A, Seungjun},
  year={2024},
  url={https://huggingface.co/juungwon/Llama-3-instruction-constructionsafety-layertuning}
}
@article{llama3koen,
  title={Llama-3-KoEn},
  author={L, Junbum},
  year={2024},
  url={https://huggingface.co/beomi/Llama-3-KoEn-8B}
}
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```