---
library_name: transformers
tags:
- ipex
- intel
- gaudi
- guanaco
- PEFT
- optimum-habana
license: apache-2.0
datasets:
- timdettmers/openassistant-guanaco
language:
- en
---

# Model Card for Meta-Llama-3-8B-Instruct Fine-Tuned on OpenAssistant-Guanaco

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) dataset.


## Model Details

### Model Description

This is a fine-tuned version of the [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) model, trained with Parameter-Efficient Fine-Tuning (PEFT) using Low-Rank Adaptation (LoRA) on the Intel Gaudi 2 AI accelerator. It can be used for a range of text generation tasks, including chatbots, content creation, and other NLP applications.
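In brief, LoRA keeps the pretrained weights frozen and learns a low-rank additive update to selected weight matrices, so only a small fraction of the parameters is trained. For a frozen weight matrix $W_0$, the adapted forward pass is

$$
h = W_0 x + \frac{\alpha}{r} B A x, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k)
$$

The rank $r$ and scaling factor $\alpha$ used for this fine-tune are not recorded on this card.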

- **Developed by:** Migara Amarasinghe
- **Model type:** LLM
- **Language(s) (NLP):** English
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)


## Uses

### Direct Use

This model can be used for text generation tasks such as:
- Chatbots
- Automated content creation
- Text completion and augmentation
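
As a starting point, here is a minimal inference sketch using `transformers` and `peft`. It assumes the LoRA adapter weights are published in this repository (replace the placeholder repo id) and that prompts follow the OpenAssistant-Guanaco format (`### Human: ... ### Assistant:`); both are assumptions, not details confirmed by this card.

```python
# Minimal inference sketch (assumptions: adapter weights live in this repo,
# and the Guanaco "### Human:/### Assistant:" prompt format was used).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "<this-repo-id>"  # placeholder: the Hub id of this adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
model.eval()

prompt = "### Human: Explain parameter-efficient fine-tuning in one paragraph.### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```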

### Out-of-Scope Use

- Use in real-time applications where latency is critical
- Use in highly sensitive domains without thorough evaluation and testing


## Bias, Risks, and Limitations

### Recommendations

Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. No systematic bias or safety evaluation has been reported for this fine-tune, so it should be assessed carefully before deployment in sensitive domains.


## Training Details

### Training Hyperparameters

- Training regime: bf16 mixed precision
- Number of epochs: 3
- Learning rate: 1e-4
- Batch size: 16
- Sequence length: 512
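
A hedged sketch of how these hyperparameters could map onto an `optimum-habana` training run is shown below. The LoRA rank, alpha, dropout, and target modules are illustrative assumptions (the card does not record them), and `Habana/llama` is assumed as the Gaudi configuration.

```python
# Hedged training sketch with optimum-habana's GaudiTrainer.
# LoRA r/alpha/dropout/target_modules are assumed values, not those used here.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, DataCollatorForLanguageModeling
from peft import LoraConfig, get_peft_model
from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token

# Tokenize the Guanaco "text" column, truncating to the 512-token length above.
ds = load_dataset("timdettmers/openassistant-guanaco", split="train")
ds = ds.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=ds.column_names,
)

model = AutoModelForCausalLM.from_pretrained(base_id)
model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,  # assumed values
               target_modules=["q_proj", "v_proj"],    # assumed modules
               task_type="CAUSAL_LM"),
)

args = GaudiTrainingArguments(
    output_dir="llama3-8b-guanaco-lora",
    num_train_epochs=3,
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    bf16=True,          # bf16 mixed precision, as listed above
    use_habana=True,
    use_lazy_mode=True,
)

trainer = GaudiTrainer(
    model=model,
    gaudi_config=GaudiConfig.from_pretrained("Habana/llama"),  # assumed config
    args=args,
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```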


## Technical Specifications

### Compute Infrastructure

#### Hardware

- Intel Gaudi 2 AI Accelerator
- Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz

#### Software

- Transformers library
- Optimum Habana library


## Environmental Impact


Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Intel Gaudi 2 AI Accelerator
- **Hours used:** < 1 hour