{"forum": "BJgrxbqp67", "submission_url": "https://openreview.net/forum?id=BJgrxbqp67", "submission_content": {"title": "Improving Relation Extraction by Pre-trained Language Representations", "authors": ["Christoph Alt", "Marc H\u00fcbner", "Leonhard Hennig"], "authorids": ["christoph.alt@dfki.de", "marc.huebner@dfki.de", "leonhard.hennig@dfki.de"], "keywords": ["relation extraction", "deep language representations", "transformer", "transfer learning", "unsupervised pre-training"], "TL;DR": "We propose a Transformer based relation extraction model that uses pre-trained language representations instead of explicit linguistic features.", "abstract": "Current state-of-the-art relation extraction methods typically rely on a set of lexical, syntactic, and semantic features, explicitly computed in a pre-processing step. Training feature extraction models requires additional annotated language resources, which severely restricts the applicability and portability of relation extraction to novel languages. Similarly, pre-processing introduces an additional source of error. To address these limitations, we introduce TRE, a Transformer for Relation Extraction, extending the OpenAI Generative Pre-trained Transformer [Radford et al., 2018]. Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification and combines it with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions. TRE allows us to learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task. TRE obtains a new state-of-the-art result on the TACRED and SemEval 2010 Task 8 datasets, achieving a test F1 of 67.4 and 87.1, respectively. Furthermore, we observe a significant increase in sample efficiency. With only 20% of the training examples, TRE matches the performance of our baselines and our model trained from scratch on 100% of the TACRED dataset. 
We open-source our trained models, experiments, and source code.", "pdf": "/pdf/e50dda5fadf083812a1b30b98f87672f382acbef.pdf", "archival status": "Archival", "subject areas": ["Natural Language Processing", "Information Extraction"], "paperhash": "alt|improving_relation_extraction_by_pretrained_language_representations", "_bibtex": "@inproceedings{\nalt2019improving,\ntitle={Improving Relation Extraction by Pre-trained Language Representations},\nauthor={Christoph Alt and Marc H{\\\"u}bner and Leonhard Hennig},\nbooktitle={Automated Knowledge Base Construction (AKBC)},\nyear={2019},\nurl={https://openreview.net/forum?id=BJgrxbqp67}\n}"}, "submission_cdate": 1542459628926, "submission_tcdate": 1542459628926, "submission_tmdate": 1580939653340, "submission_ddate": null, "review_id": ["S1ercHpmMV", "BkxqDfEWfE", "Hke1NobVMN"], "review_url": ["https://openreview.net/forum?id=BJgrxbqp67&noteId=S1ercHpmMV", "https://openreview.net/forum?id=BJgrxbqp67&noteId=BkxqDfEWfE", "https://openreview.net/forum?id=BJgrxbqp67&noteId=Hke1NobVMN"], "review_cdate": [1547060621409, 1546891873522, 1547078439499], "review_tcdate": [1547060621409, 1546891873522, 1547078439499], "review_tmdate": [1550269644842, 1550269644622, 1550269644404], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["BJgrxbqp67", "BJgrxbqp67", "BJgrxbqp67"], "review_content": [{"title": "Incremental but solid contribution", "review": "This paper presents a transformer-based relation extraction model that leverages pre-training on unlabeled text with a language modeling objective.\n\nThe proposed approach is essentially an application of the OpenAI GPT to relation extraction. Although this work is rather incremental, the experiments and analysis are thorough, making it a solid contribution.\n\nGiven that the authors have already set up the entire TRE framework, it should be rather easy to adapt the same approach to BERT, and potentially raise the state of the art even further.\n\nIn terms of writing, I think the authors should reframe the paper as a direct adaptation of OpenAI GPT. In its current form, the paper implies much more novelty than it actually has, especially in the abstract and intro; I think the whole story about latent embeddings replacing manually-engineered features is quite obvious in 2019. I think the adaptation story will make the paper shorter and significantly clearer.\n", "rating": "7: Good paper, accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Application of existing method for relation extraction", "review": "This article describes a novel application of Transformer networks for relation extraction.\n\nCONS:\n- Method is heavily supervised. It requires plain text sentences as input, but with clearly marked relation arguments. This information might not always be available, and might be too costly to produce manually. \nDoes this mean that special care has to be taken for sentences in the passive and active voice, as the position of the arguments will be interchanged?\n\n- The method assumes the existence of a labelled dataset. However, this may not always be available. \n\n- There are several other minimally-supervised methods which produce state-of-the-art results on relation extraction. 
These methods, in my opinion, alleviate the need for huge volumes of annotated data. The added value of the proposed method vs. minimally-supervised methods is not clear. \n\nPROS:\n- Extensive evaluation\n- Article well-written\n- Contributions clearly articulated\n", "rating": "5: Marginally below acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Review of Improving Relation Extraction by Pre-trained Language Representations", "review": "The paper presents TRE, a Transformer-based architecture for relation extraction, evaluating on two datasets - TACRED, and a commonly used SemEval dataset.\n\nOverall the paper seems to have made reasonable choices and figured out some important details on how to get this to work in practice. This is a fairly straightforward idea, however, and the paper doesn't make a huge number of innovations on the methodological side (it is mostly just adapting existing methods to the task of relation extraction).\n\nOne point that I think is really important to address: the paper really needs to add numbers from the Position-Aware Attention model of Zhang et al. (e.g. the model used in the original TACRED paper). It appears that the performance of the proposed model is not significantly better than that model. I think that is probably fine, since this is a new-ish approach for relation extraction, getting results that are on par with the state-of-the-art may be sufficient as a first step, but the paper really needs to be more clear about where it stands with respect to the SOTA.", "rating": "6: Marginally above acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["H1gGfCchGN", "HyeSIa92ME", "rJlpR253GE"], "comment_cdate": [1547640330188, 1547640140561, 1547640021225], "comment_tcdate": [1547640330188, 1547640140561, 1547640021225], "comment_tmdate": [1547809596474, 1547809589283, 1547809574845], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2019/Conference/Paper21/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper21/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper21/Authors", "AKBC.ws/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Response to Reviewer 3", "comment": "We would like to thank Reviewer 3 for their review and constructive suggestions.\n\nWe acknowledge that distantly- and semi-supervised methods are a vital part of (large-scale) relation extraction. In this work we explicitly focus on the supervised scenario for the following reasons:\nOur main goal is to show that pre-trained language representations, in combination with a self-attentive architecture, are able to perform comparably to or better than methods relying on explicit syntactic and semantic features, which are common in current state-of-the-art relation extraction methods. Due to the automated annotation process, distantly- and semi-supervised methods introduce a considerable amount of noise. This requires extending the approach to explicitly account for the noisiness of the data, which makes it more difficult to assess the efficacy of our approach in isolation. 
In ongoing work we address the distantly-supervised scenario, extending our approach to account for the noise introduced during the automated annotation process.\n\nNo special care has to be taken for passive and active voice. Our approach implicitly assumes the head entity to be provided first, followed by the tail entity. I.e., if the entities are provided as (Person A, Organisation A) (assuming that Person A is an employee of Organisation A), the system predicts \"per:employee_of\", whereas (Organisation A, Person A) results in the inverse relation \"org:top_members/employees\"."}, {"title": "Response to Reviewer 1", "comment": "We would like to thank Reviewer 1 for their review and constructive suggestions.\n\nWe agree that adapting our approach to BERT is rather easy; in fact, we have already done so for our ongoing experiments.\n\nRegarding the last comment: We agree that our paper is a slightly adapted application of the OpenAI GPT for relation extraction. However, in the introduction, our goal was to motivate this application, because most state-of-the-art approaches (e.g. all competing approaches listed in Tables 4 and 5) still rely on dependency parse information and other manually-engineered features.\n\nWe will rephrase our paper to clearly indicate the adaptation of the OpenAI GPT."}, {"title": "Response to Reviewer 2", "comment": "We would like to thank Reviewer 2 for their review and constructive suggestions.\n\nWe report the best single-model performance (PA-LSTM) from the original TACRED paper in Table 4. Zhang et al. report slightly different results, 65.4 (2017) vs. 65.1 (2018), and we report the latter. The overall best-performing model reported in the original TACRED paper is an ensemble combining 5 independently trained models.\n\nWe will submit a revised version clearly indicating that we compare single-model performance."}], "comment_replyto": ["BkxqDfEWfE", "S1ercHpmMV", "Hke1NobVMN"], "comment_url": ["https://openreview.net/forum?id=BJgrxbqp67&noteId=H1gGfCchGN", "https://openreview.net/forum?id=BJgrxbqp67&noteId=HyeSIa92ME", "https://openreview.net/forum?id=BJgrxbqp67&noteId=rJlpR253GE"], "meta_review_cdate": 1549507321810, "meta_review_tcdate": 1549507321810, "meta_review_tmdate": 1551128235261, "meta_review_ddate ": null, "meta_review_title": "A strong model on TACRED", "meta_review_metareview": "Current SOTA on TACRED uses precomputed syntactic and semantic features. This paper proposes to replace this pipeline with a pretrained Transformer with self-attention. This pretrained model is further fine-tuned for the TACRED relation extraction task. The reviewers like the paper and I am happy with the overall discussion. I believe the pretrained model could be useful for other relation extraction tasks, so I am accepting this with a slight reservation.\n\nAs noted by Reviewer 3, this pretrained model requires supervised annotations. It would be useful if the paper could add a discussion on the following questions:\n\n1. Why is the supervised data required for pretraining a more viable option than syntactic and semantic features? The latter are task-agnostic, so I believe they will be readily available for many languages.\n\n2. How hard is it to create pretraining data vs. supervised relation extraction data?", "meta_review_readers": ["everyone"], "meta_review_writers": [], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=BJgrxbqp67&noteId=HJeMZoMFNN"], "decision": "Accept (Poster)"}
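As an illustration of the input convention described in the authors' response to Reviewer 3 (head entity provided first, tail entity second, with the swapped order yielding the inverse relation), here is a minimal sketch of how such an input could be assembled in a GPT-style fine-tuning setup. The special tokens and the helper function below are assumptions, loosely following the recipe of Radford et al. (2018); they are not taken from the released TRE code.

```python
# Minimal sketch of the head/tail ordering convention from the response to Reviewer 3.
# The token names (<start>, <delim>, <clf>) and build_relation_input are hypothetical,
# not TRE's actual vocabulary or API.

def build_relation_input(tokens, head, tail, start="<start>", delim="<delim>", clf="<clf>"):
    """Concatenate the sentence tokens with the head and tail mentions, head first."""
    return [start] + tokens + [delim] + head + [delim] + tail + [clf]

sentence = "Person A works for Organisation A .".split()

# (head = Person A, tail = Organisation A) -> classified as e.g. "per:employee_of"
forward = build_relation_input(sentence, ["Person", "A"], ["Organisation", "A"])

# (head = Organisation A, tail = Person A) -> the inverse, e.g. "org:top_members/employees"
inverse = build_relation_input(sentence, ["Organisation", "A"], ["Person", "A"])

print(forward)
print(inverse)
```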