sjrhuschlee committed
Commit • f1543e8
Parent(s): 34d5326
Update README.md

README.md CHANGED
@@ -18,6 +18,8 @@ tags:
 
 This is the [deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Extractive Question Answering.
 
+This model was trained using LoRA available through the [PEFT library](https://github.com/huggingface/peft).
+
 ## Overview
 **Language model:** deberta-v3-large
 **Language:** English
@@ -28,13 +30,6 @@ This is the [deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large
 
 ## Model Usage
 
-### Using with Peft
-```python
-from peft import LoraConfig, PeftModelForQuestionAnswering
-from transformers import AutoModelForQuestionAnswering, AutoTokenizer
-model_name = "sjrhuschlee/deberta-v3-large-squad2"
-```
-
 ### Using the Merged Model
 ```python
 from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
@@ -51,4 +46,14 @@ res = nlp(qa_input)
 # b) Load model & tokenizer
 model = AutoModelForQuestionAnswering.from_pretrained(model_name)
 tokenizer = AutoTokenizer.from_pretrained(model_name)
+```
+
+### Using with Peft
+**NOTE**: This requires code in the PR https://github.com/huggingface/peft/pull/473 for the PEFT library.
+```python
+#!pip install peft
+
+from peft import LoraConfig, PeftModelForQuestionAnswering
+from transformers import AutoModelForQuestionAnswering, AutoTokenizer
+model_name = "sjrhuschlee/deberta-v3-large-squad2"
 ```
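The "Using the Merged Model" example is only partially visible in the hunks above; the diff elides the lines between the `transformers` import and `res = nlp(qa_input)`. As a minimal sketch of the elided pipeline usage (the question/context pair is illustrative and not taken from the card):

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "sjrhuschlee/deberta-v3-large-squad2"

# a) Get predictions with the question-answering pipeline
nlp = pipeline("question-answering", model=model_name, tokenizer=model_name)
qa_input = {
    "question": "Where do I live?",  # illustrative input, not from the diff
    "context": "My name is Sarah and I live in London.",
}
res = nlp(qa_input)  # dict with 'answer', 'score', 'start', 'end'

# b) Load model & tokenizer directly
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Because the model is fine-tuned on SQuAD2.0, which contains unanswerable questions, the pipeline call also accepts `handle_impossible_answer=True` so it can return an empty answer when the context does not support one.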
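The added "Using with Peft" snippet stops at `model_name = ...` and does not show how `LoraConfig` and `PeftModelForQuestionAnswering` fit together. The sketch below assumes the question-answering support from the linked PEFT PR; the rank, alpha, dropout, and target modules are illustrative assumptions, not the settings used to train this model:

```python
#!pip install peft

from peft import LoraConfig, PeftModelForQuestionAnswering, TaskType
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "sjrhuschlee/deberta-v3-large-squad2"

# Load the merged QA model and tokenizer as the base to wrap
base_model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Illustrative LoRA configuration (assumed hyperparameters)
lora_config = LoraConfig(
    task_type=TaskType.QUESTION_ANS,  # QA task type introduced with PEFT's QA support
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_proj", "value_proj"],  # attention projections in DeBERTa-v2/v3
)

# Wrap the base model so only the LoRA adapter weights are trainable
peft_model = PeftModelForQuestionAnswering(base_model, lora_config)
peft_model.print_trainable_parameters()
```

The wrapped `peft_model` can then be passed to a standard `Trainer` in place of the base model; only the adapter parameters receive gradients.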
|