Tasks: Token Classification · Modalities: Text · Languages: English · Size: 100K - 1M · Tags: abbreviation-detection · License: CC-BY-SA 4.0
dipteshkanojia committed · Commit 7f7b202 · 1 Parent(s): 3571369 · changes

README.md CHANGED
@@ -140,14 +140,29 @@ CC-BY-SA 4.0

### Installation

We use the custom NER pipeline in the [spaCy transformers](https://spacy.io/universe/project/spacy-transformers) library to train our models. This library supports training with any pre-trained language model available at the :rocket: [HuggingFace repository](https://huggingface.co/).<br/>

Please see the instructions at these websites to set up your own custom training with our dataset and reproduce the experiments using spaCy.
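
If it helps, here is a minimal sketch of preparing the dataset for spaCy NER training. It is an illustration under stated assumptions, not the authors' exact pipeline: the dataset ID `surrey-nlp/PLOD-filtered` and the `tokens`/`ner_tags` column names are assumed, and the output is meant to feed spaCy's standard `train` CLI.

```python
# Sketch: convert the BIO-tagged dataset into spaCy's DocBin format for
# NER training. Dataset ID and column names below are assumptions.
import spacy
from spacy.tokens import Doc, DocBin
from spacy.training import biluo_tags_to_spans, iob_to_biluo
from datasets import load_dataset

nlp = spacy.blank("en")
ds = load_dataset("surrey-nlp/PLOD-filtered", split="train")  # assumed dataset ID
label_names = ds.features["ner_tags"].feature.names

db = DocBin()
for row in ds:
    doc = Doc(nlp.vocab, words=row["tokens"])
    tags = [label_names[i] for i in row["ner_tags"]]
    doc.ents = biluo_tags_to_spans(doc, iob_to_biluo(tags))
    db.add(doc)
db.to_disk("./train.spacy")  # usable as a corpus for `python -m spacy train`
```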

OR<br/>

You can also reproduce the experiments via the Python notebook we [provide here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection/blob/main/nbs/fine_tuning_abbr_det.ipynb), which uses the HuggingFace Trainer class to perform the same experiments. The exact hyperparameters can be obtained from the model README cards linked below. Before starting, please perform the following steps:

```bash
git clone https://github.com/surrey-nlp/PLOD-AbbreviationDetection
cd PLOD-AbbreviationDetection
pip install -r requirements.txt
```

Now you can use the notebook to reproduce the experiments.
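
For orientation, below is a condensed sketch of the Trainer-based fine-tuning the notebook performs. The dataset ID, base model, and hyperparameter values are illustrative assumptions (the settings actually used are in the notebook and model cards), so treat it as a sketch rather than the reference implementation.

```python
# Condensed sketch of Trainer-based token-classification fine-tuning.
# Dataset ID, base model, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

ds = load_dataset("surrey-nlp/PLOD-filtered")  # assumed dataset ID
labels = ds["train"].features["ner_tags"].feature.names
tok = AutoTokenizer.from_pretrained("roberta-base", add_prefix_space=True)

def tokenize_and_align(batch):
    # Re-tokenize pre-split words and align BIO labels to sub-word tokens,
    # labelling only the first sub-token of each word (-100 masks the rest).
    enc = tok(batch["tokens"], truncation=True, is_split_into_words=True)
    enc["labels"] = []
    for i, tags in enumerate(batch["ner_tags"]):
        prev, ids = None, []
        for w in enc.word_ids(batch_index=i):
            ids.append(tags[w] if w is not None and w != prev else -100)
            prev = w
        enc["labels"].append(ids)
    return enc

ds = ds.map(tokenize_and_align, batched=True)
model = AutoModelForTokenClassification.from_pretrained(
    "roberta-base", num_labels=len(labels)
)
args = TrainingArguments(
    output_dir="plod-abbr",
    learning_rate=2e-5,               # assumed value
    per_device_train_batch_size=16,   # assumed value
    num_train_epochs=3,               # assumed value
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=ds["train"],
    eval_dataset=ds["validation"],
    data_collator=DataCollatorForTokenClassification(tok),
)
trainer.train()
```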

### Model(s)

The working model(s) are available at the following links:<br/>

- [RoBERTa-Large (Unfiltered)](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr)
- [RoBERTa-Base (Filtered)](https://huggingface.co/surrey-nlp/roberta-base-finetuned-abbr)

Via the links provided above, the model(s) can be used through the Inference API directly in the web browser. We have placed some examples with the API for testing.<br/>
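
The same hosted models can also be queried programmatically. A minimal sketch using the generic Inference API HTTP endpoint follows; the endpoint pattern and token handling are standard HuggingFace Inference API usage rather than anything repo-specific, and the example sentence is our own.

```python
# Query the hosted Inference API over HTTP (sketch; needs a HF access token).
import requests

API_URL = "https://api-inference.huggingface.co/models/surrey-nlp/roberta-large-finetuned-abbr"
headers = {"Authorization": "Bearer hf_xxx"}  # replace with your own token

resp = requests.post(
    API_URL,
    headers=headers,
    json={"inputs": "Light dissolved inorganic carbon (DIC) was measured."},
)
print(resp.json())  # detected spans with labels, scores, and character offsets
```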

### Usage

You can follow the HuggingFace model links above for instructions on using this model locally in Python, via the notebook provided in the Git repo.
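
For a quick local test without the notebook, here is a minimal sketch using the transformers pipeline API; the model ID is the one linked above, while the example sentence and aggregation setting are our own choices.

```python
# Load the fine-tuned model locally and tag a sentence (sketch).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="surrey-nlp/roberta-large-finetuned-abbr",
    aggregation_strategy="simple",  # merge sub-word tokens into word-level spans
)
print(ner("Light dissolved inorganic carbon (DIC) was measured in the samples."))
```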