---
license: apache-2.0
base_model:
- microsoft/deberta-v3-large
library_name: transformers
tags:
- relation extraction
- nlp
model-index:
  - name: iter-genia-deberta-large
    results:
      - task:
          type: relation-extraction
        dataset:
          name: genia
          type: genia
        metrics:
          - name: F1
            type: f1
            value: 80.821
---


# ITER: Iterative Transformer-based Entity Recognition and Relation Extraction

This model checkpoint is part of the collection of models published alongside our paper ITER, 
[accepted at EMNLP 2024](https://aclanthology.org/2024.findings-emnlp.655/).<br>
To ease reproducibility and enable open research, our source code has been published on [GitHub](https://github.com/fleonce/iter).

This model achieved an F1 score of `80.821` on the `genia` dataset.

### Using ITER in your code

First, install ITER in your preferred environment:

```shell
pip install git+https://github.com/fleonce/iter
```

To use our model, refer to the following code:
```python
from iter import ITER

model = ITER.from_pretrained("fleonce/iter-genia-deberta-large")
tokenizer = model.tokenizer

encodings = tokenizer(
  "An art exhibit at the Hakawati Theatre in Arab east Jerusalem was a series of portraits of Palestinians killed in the rebellion .",
  return_tensors="pt"
)

generation_output = model.generate(
    encodings["input_ids"],
    attention_mask=encodings["attention_mask"],
)

# entities
print(generation_output.entities)

# relations between entities
print(generation_output.links)
```
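Since `model.tokenizer` behaves like a standard Hugging Face tokenizer, several sentences can be processed in one batch, and the model can optionally be moved to a GPU first. The sketch below is a minimal, hedged extension of the example above: it assumes the tokenizer supports `padding=True`, that the model can be moved to a device like any PyTorch module, and that `generate` accepts batched inputs with the same signature as in the single-sentence example; the second input sentence is only a sample.

```python
import torch
from iter import ITER

model = ITER.from_pretrained("fleonce/iter-genia-deberta-large")
tokenizer = model.tokenizer

# Assumption: ITER is a regular PyTorch module and can be moved to a GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

sentences = [
    "An art exhibit at the Hakawati Theatre in Arab east Jerusalem was a series "
    "of portraits of Palestinians killed in the rebellion .",
    # Sample biomedical sentence in the style of the GENIA corpus.
    "IL-2 gene expression requires activation of the transcription factor NF-kappa B .",
]

# Assumption: batched inputs can be padded as with other Hugging Face tokenizers.
encodings = tokenizer(sentences, return_tensors="pt", padding=True)
encodings = {k: v.to(device) for k, v in encodings.items()}

with torch.no_grad():
    generation_output = model.generate(
        encodings["input_ids"],
        attention_mask=encodings["attention_mask"],
    )

# entities and relations, as in the single-sentence example above
print(generation_output.entities)
print(generation_output.links)
```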

### Checkpoints

We publish checkpoints for the models performing best on the following datasets:

- **ACE05**:
  1. [fleonce/iter-ace05-deberta-large](https://huggingface.co/fleonce/iter-ace05-deberta-large)
- **CoNLL04**:
  1. [fleonce/iter-conll04-deberta-large](https://huggingface.co/fleonce/iter-conll04-deberta-large)
- **ADE**:
  1. [fleonce/iter-ade-deberta-large](https://huggingface.co/fleonce/iter-ade-deberta-large)
- **SciERC**:
  1. [fleonce/iter-scierc-deberta-large](https://huggingface.co/fleonce/iter-scierc-deberta-large)
  2. [fleonce/iter-scierc-scideberta-full](https://huggingface.co/fleonce/iter-scierc-scideberta-full)
- **CoNLL03**:
  1. [fleonce/iter-conll03-deberta-large](https://huggingface.co/fleonce/iter-conll03-deberta-large)
- **GENIA**:
  1. [fleonce/iter-genia-deberta-large](https://huggingface.co/fleonce/iter-genia-deberta-large)


### Reproducibility

For each dataset, we selected the best-performing checkpoint from the 5 training runs we performed.
This model was trained with the following hyperparameters:

- Seed: `2`
- Config: `genia/small_lr_d_ff_150`
- PyTorch `2.3.0` with CUDA `11.8` and precision `torch.float32`
- GPU: `1 NVIDIA H100 SXM 80 GB GPU`

In our reproducibility tests, varying the GPU, the CUDA version, and the training precision resulted in slightly different end results.

To train this model, refer to the following command:
```shell
python3 train.py --dataset genia/small_lr_d_ff_150 --transformer microsoft/deberta-v3-large --seed 2
```
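To mirror the checkpoint-selection procedure described above (5 runs per dataset, keeping the best one), the same command can simply be repeated with different seeds. This is a hypothetical sketch: the seed values other than `2` are an assumption and not taken from the paper.

```shell
# Sketch: run the same configuration with five seeds and keep the best checkpoint.
# Only seed 2 is documented for this model; the other seed values are assumed.
for seed in 1 2 3 4 5; do
  python3 train.py --dataset genia/small_lr_d_ff_150 \
    --transformer microsoft/deberta-v3-large --seed "$seed"
done
```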

### Citation

```bibtex
@inproceedings{hennen-etal-2024-iter,
    title = "{ITER}: Iterative Transformer-based Entity Recognition and Relation Extraction",
    author = "Hennen, Moritz  and
      Babl, Florian  and
      Geierhos, Michaela",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-emnlp.655",
    doi = "10.18653/v1/2024.findings-emnlp.655",
    pages = "11209--11223",
    abstract = "When extracting structured information from text, recognizing entities and extracting relationships are essential. Recent advances in both tasks generate a structured representation of the information in an autoregressive manner, a time-consuming and computationally expensive approach. This naturally raises the question of whether autoregressive methods are necessary in order to achieve comparable results. In this work, we propose ITER, an efficient encoder-based relation extraction model, that performs the task in three parallelizable steps, greatly accelerating a recent language modeling approach: ITER achieves an inference throughput of over 600 samples per second for a large model on a single consumer-grade GPU. Furthermore, we achieve state-of-the-art results on the relation extraction datasets ADE and ACE05, and demonstrate competitive performance for both named entity recognition with GENIA and CoNLL03, and for relation extraction with SciERC and CoNLL04.",
}
```