Update README.md
README.md CHANGED
@@ -74,11 +74,47 @@ Users (both direct and downstream) should be made aware of the risks, biases and

## How to Get Started with the Model

#### How to use

You can use this model with the Transformers *pipeline* for NER (token classification):

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Load the fine-tuned tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("kaejo98/acronym-definition-detection")
model = AutoModelForTokenClassification.from_pretrained("kaejo98/acronym-definition-detection")

# Token-classification (NER) pipeline over the acronym/definition tag set
nlp = pipeline("ner", model=model, tokenizer=tokenizer)

example = "The smart contract (SC) is a fundamental aspect of deciding which care package to go for when dealing with Fit for Purpose Practice (FFPP)."

# One prediction per token: the predicted tag, its score, and the matched word
acronym_results = nlp(example)
print(acronym_results)
```

The model predicts one of the following tags for each token:

Tag | Description
-|-
B-O | Words that are not part of an acronym or its definition
B-AC | Beginning of an acronym
I-AC | Inside of an acronym
B-LF | Beginning of the long form (definition) of an acronym
I-LF | Inside of the long form
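
To get whole acronym and long-form spans instead of per-token tags, you can let the pipeline merge adjacent tokens that belong to the same group. A minimal sketch using the standard `aggregation_strategy` option of the token-classification pipeline; the grouped labels (`O`, `AC`, `LF`) follow from the tag scheme above with the B-/I- prefixes stripped:

```python
from transformers import pipeline

# "simple" aggregation merges consecutive B-/I- tokens from the same
# group, so acronyms and long forms come back as whole strings.
nlp = pipeline(
    "ner",
    model="kaejo98/acronym-definition-detection",
    aggregation_strategy="simple",
)

example = (
    "The smart contract (SC) is a fundamental aspect of deciding which care "
    "package to go for when dealing with Fit for Purpose Practice (FFPP)."
)

for span in nlp(example):
    # Each span carries a group label (e.g. 'AC', 'LF'), its text, and a score
    print(span["entity_group"], "->", span["word"])
```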

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_train_epochs: 1
- weight_decay: 0.001
- save_steps: 35000
- eval_steps: 7000

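For reference, here is how these settings would map onto `transformers.TrainingArguments`. This is a hedged sketch rather than the exact training script; `output_dir` is an illustrative assumption, not taken from the model card:

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above as TrainingArguments.
training_args = TrainingArguments(
    output_dir="./acronym-definition-detection",  # assumed placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=4,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    weight_decay=0.001,
    save_steps=35000,
    eval_steps=7000,
)
```
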
### Training Data