---
base_model: google/gemma-2-2b-it
language:
- en
license: gemma
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
datasets:
- paraloq/json_data_extraction
library_name: peft
---

# Gemma-2 2B Instruct fine-tuned on JSON dataset

This model is Gemma-2 2B Instruct fine-tuned on the paraloq/json_data_extraction dataset.

The model has been fine-tuned to extract structured data from text according to a JSON schema.

## Prompt

The prompt used during training is:
```py
"""Below is a text paired with input that provides further context. Write JSON output that matches the schema to extract information.

### Input:
{input}

### Schema:
{schema}

### Response:
"""
```
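
For example, the template can be filled in like this before generation (the input text and schema below are purely illustrative, not taken from the dataset):

```py
PROMPT_TEMPLATE = """Below is a text paired with input that provides further context. Write JSON output that matches the schema to extract information.

### Input:
{input}

### Schema:
{schema}

### Response:
"""

# Illustrative values only
input_text = "Acme Corp was founded in 1999 in Berlin by Jane Doe."
schema = """{"type": "object", "properties": {
    "company": {"type": "string"},
    "city": {"type": "string"},
    "founded": {"type": "integer"}
}}"""

prompt = PROMPT_TEMPLATE.format(input=input_text, schema=schema)
```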

## Using the Model

You can use the model with the `transformers` library or with the wrapper from [unsloth](https://unsloth.ai/blog/gemma2), which allows faster inference.

```py
import torch
from unsloth import FastLanguageModel

# Raise torch.compile's cache limit to avoid "cache size limit exceeded" errors
torch._dynamo.config.accumulated_cache_size_limit = 2048

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "bastienp/Gemma-2-2B-it-JSON-data-extration",
    max_seq_length = 2048,
    dtype = torch.float16,
    load_in_4bit = False,
    token = HF_TOKEN_READ,  # your Hugging Face read token
)
```
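
A minimal generation sketch, assuming `prompt` holds a filled-in template as in the Prompt section (`FastLanguageModel.for_inference` switches Unsloth to its faster inference mode):

```py
# Enable Unsloth's optimized inference path
FastLanguageModel.for_inference(model)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens (the JSON response)
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(response)
```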

## Using the Quantized Model (llama.cpp)

The model is also supplied in GGUF format, in 4-bit and 8-bit quantizations.

Example code with llama.cpp:
```py
from llama_cpp import Llama

llm = Llama.from_pretrained(
    "bastienp/Gemma-2-2B-it-JSON-data-extration",
    filename="*Q4_K_M.gguf", #*Q8_K_M.gguf for the 8 bit version
    verbose=False,
)
```
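
The quantized model can then be prompted the same way; a minimal sketch, again assuming `prompt` is a filled-in template (the sampling parameters are illustrative):

```py
output = llm(
    prompt,            # formatted with the prompt template above
    max_tokens=512,
    temperature=0.0,   # deterministic decoding suits extraction
)
print(output["choices"][0]["text"])
```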

The base model used for fine-tuning is google/gemma-2-2b-it. This repository is **NOT** affiliated with Google.

Gemma is provided under and subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms. 

- **Developed by:** bastienp
- **License:** gemma
- **Fine-tuned from model:** google/gemma-2-2b-it