# Model Card for Meta-Llama-3-8B-for-bank
This model, **Meta-Llama-3-8B-for-bank**, is a fine-tuned version of the `meta-llama/Meta-Llama-3-8B-Instruct` model.
## Model Details
### Model Description
- **Model Name**: Meta-Llama-3-8B-for-bank
- **Base Model**: `meta-llama/Meta-Llama-3-8B-Instruct`
- **Fine-tuning Data**: Custom bank chat examples
- **Version**: 1.0
- **License**: Free
- **Language**: English
### Model Type
This model has been fine-tuned with a dataset specifically created to simulate financial service interactions, covering a variety of questions related to account management and stock trading.
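For illustration, one interaction in such a dataset might look like the following chat-style record (a hypothetical sketch; the actual dataset schema and contents are not published with this card):

```python
# Hypothetical illustration of a single synthetic training sample
# covering account management (the real dataset format is not documented here).
sample = {
    "messages": [
        {"role": "user", "content": "What's the balance of my checking account?"},
        {"role": "assistant", "content": "Your checking account balance is $2,540.75."},
    ]
}
```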
## Intended Use
This model is intended for integration into financial chatbots, virtual assistants, or other systems requiring automated handling of financial queries.
## Limitations
- **Misinterpretation Risks**: This is a first version; complex or ambiguous queries may be misunderstood and return inconsistent results.
## Ethical Considerations
- **Bias**: Trained on synthetic data, the model may not represent all user demographics.
- **Privacy**: The model should be used in compliance with financial privacy regulations.
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("jeromecondere/Meta-Llama-3-8B-for-bank")
# Merge it first with the meta-llama/Meta-Llama-3-8B-Instruct base model
model = AutoModelForCausalLM.from_pretrained("jeromecondere/Meta-Llama-3-8B-for-bank").to("cuda")

# Example of usage
```
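As a usage sketch, a query can be wrapped in the standard Llama 3 instruct chat markup before generation (in practice `tokenizer.apply_chat_template` builds this string for you; the example query is hypothetical):

```python
# Sketch: build a prompt in the Llama 3 instruct chat format used by the
# base model (special-token names come from Meta-Llama-3-8B-Instruct).
def build_prompt(user_message: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("Can I buy 10 shares of ACME Corp?")

# With tokenizer and model loaded as above (requires a GPU and model access):
# inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
# output_ids = model.generate(**inputs, max_new_tokens=128)
# print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```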