paper_id,title,keywords,abstract,meta_title,meta_review,decision 1,"""Improving Relation Extraction by Pre-trained Language Representations""","['relation extraction', 'deep language representations', 'transformer', 'transfer learning', 'unsupervised pre-training']","""Current state-of-the-art relation extraction methods typically rely on a set of lexical, syntactic, and semantic features, explicitly computed in a pre-processing step. Training feature extraction models requires additional annotated language resources, which severely restricts the applicability and portability of relation extraction to novel languages. Similarly, pre-processing introduces an additional source of error. To address these limitations, we introduce TRE, a Transformer for Relation Extraction, extending the OpenAI Generative Pre-trained Transformer [Radford et al., 2018]. Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification and combines it with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions. TRE allows us to learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task. TRE obtains a new state-of-the-art result on the TACRED and SemEval 2010 Task 8 datasets, achieving a test F1 of 67.4 and 87.1, respectively. Furthermore, we observe a significant increase in sample efficiency. With only 20% of the training examples, TRE matches the performance of our baselines and our model trained from scratch on 100% of the TACRED dataset. We open-source our trained models, experiments, and source code.""","""A strong model on TACRED""","""Current SOTA on TACRED uses precomputed syntactic and semantic features. This paper proposes to replace this pipeline with a pretrained Transformer with self-attention. This pretrained model is further fine-tuned to do the TACRED relation extraction. The reviewers like the paper and I am happy with the overall discussion. I believe the pretrained model could be useful for other relation extraction tasks, so I am accepting this with a slight reservation. As noted by Reviewer 3, this pretrained model requires supervised annotations. It would be useful if the paper could add a discussion on the following questions: 1. Why is the supervised data required for pretraining a more viable option than syntactic and semantic features? The latter are task-agnostic, so I believe they will be readily available for many languages. 2. How hard is it to create pretraining data vs. supervised relation extraction data?""",Accept (Poster) 2,"""MedMentions: A Large Biomedical Corpus Annotated with UMLS Concepts""","['gold-standard corpus', 'biomedical concept recognition', 'named entity recognition and linking']","""This paper presents the formal release of MedMentions, a new manually annotated resource for the recognition of biomedical concepts. What distinguishes MedMentions from other annotated biomedical corpora is its size (over 4,000 abstracts and over 350,000 linked mentions), as well as the size of the concept ontology (over 3 million concepts from UMLS 2017) and its broad coverage of biomedical disciplines. In addition to the full corpus, a sub-corpus of MedMentions is also presented, comprising annotations for a subset of UMLS 2017 targeted towards document retrieval. 
To encourage research in Biomedical Named Entity Recognition and Linking, data splits for training and testing are included in the release, and a baseline model and its metrics for entity linking are also described.""","""Good paper about a valuable new data set""","""The paper provides a valuable new resource to the community, a data set of 350,000 mentions from 4000 abstracts, all linked to UMLS concepts. MedMentions has some advantages over existing datasets that are either smaller in size, narrower in coverage of concepts, or only provide weakly supervised labels of the mentions (i.e., concepts are associated with an abstract, but not explicitly identified as mentions therein). The reviewers all agree that MedMentions would be a valuable resource for the community. The main criticism of the paper is that the motivation and contribution were not initially clear; however, the authors have addressed this criticism in the responses and have already updated the introduction to make the motivation and contribution more explicit.""",Accept (Poster) 3,"""Integrating User Feedback under Identity Uncertainty in Knowledge Base Construction""","['user feedback', 'entity resolution', 'identity uncertainty']","""Users have tremendous potential to aid in the construction and maintenance of knowledge bases (KBs) through the contribution of feedback that identifies incorrect and missing entity attributes and relations. However, as new data is added to the KB, the KB entities, which are constructed by running entity resolution (ER), can change, rendering the intended targets of user feedback unknown, a problem we term identity uncertainty. In this work, we present a framework for integrating user feedback into KBs in the presence of identity uncertainty. Our approach is based on having user feedback participate alongside mentions in ER. We propose a specific representation of user feedback as feedback mentions and introduce a new online algorithm for integrating these mentions into an existing KB. In experiments, we demonstrate that our proposed approach outperforms the baselines in 70% of experimental conditions.""","""Interesting topic to be presented""","""The paper presents an interesting methodology. The results are interesting; however, the paper really misses out on an in-depth discussion and reflection of the pros and cons of this approach as well as on a proper related work comparison to similar approaches. """,Accept (Poster) 4,"""On Constrained Open-World Probabilistic Databases""",['probabilistic databases'],"""Increasing amounts of available data have led to a heightened need for representing large-scale probabilistic knowledge bases. One approach is to use a probabilistic database, a model with strong assumptions that allow for efficiently answering many interesting queries. Recent work on open-world probabilistic databases strengthens the semantics of these probabilistic databases by discarding the assumption that any information not present in the data must be false. While intuitive, these semantics are not sufficiently precise to give reasonable answers to queries. We propose overcoming these issues by using constraints to restrict this open world. We provide an algorithm for one class of queries, and establish a basic hardness result for another. 
Finally, we propose an efficient and tight approximation for a large class of queries.""","""None""","""None""",None 5,"""Scaling Hierarchical Coreference with Homomorphic Compression""","['coreference', 'entity resolution', 'CRF', 'LSH', 'hashing', 'random projections']","""Locality sensitive hashing schemes such as provide compact representations of multisets from which similarity can be estimated. However, in certain applications, we need to estimate the similarity of dynamically changing sets. In this case, we need the representation to be a homomorphism so that the hash of unions and differences of sets can be computed directly from the hashes of operands. We propose two representations that have this property for cosine similarity (an extension of and angle-preserving random projections), and make substantial progress on a third representation for Jaccard similarity (an extension of We employ these hashes to compress the sufficient statistics of a conditional random field (CRF) coreference model and study how this compression affects our ability to compute similarities as entities are split and merged during inference. We study these hashes in a conditional random field (CRF) hierarchical coreference model in order to compute the similarity of entities as they are merged and split during inference. We also provide novel statistical analysis of to help justify it as an estimator inside a CRF, showing that the bias and variance reduce quickly with the number of bits. On a problem of author coreference, we find that our scheme allows scaling the hierarchical coreference algorithm by an order of magnitude without degrading its statistical performance or the model's coreference accuracy, as long as we employ at least 128 or 256 bits. Angle-preserving random projections further improve the coreference quality, potentially allowing even fewer dimensions to be used.""","""Weak accept""","""This paper proposes to use homomorphic compression for hierarchical coreference resolution. The proposed method is novel and experimental results show good results. The paper is well written.However, there are crucial omissions that the authors should have addressed in a revision, such as a detailed comparison w. [Beyer et al.], empirical comparison w. a Word2Vec embedding baseline, etc. We very strongly encourage the authors to include them in the final version. """,Accept (Poster) 6,"""Fine-grained Entity Recognition with Reduced False Negatives and Large Type Coverage""","['Named Entity Recognition', 'Wikipedia', 'Freebase', 'Fine-grained Entity Recognition', 'Fine-grained Entity Typing', 'Automatic Dataset construction']","""Fine-grained Entity Recognition (FgER) is the task of detecting and classifying entity mentions to a large set of types spanning diverse domains such as biomedical, finance and sports. We observe that when the type set spans several domains, detection of entity mention becomes a limitation for supervised learning models. The primary reason being lack of dataset where entity boundaries are properly annotated while covering a large spectrum of entity types. Our work directly addresses this issue. We propose Heuristics Allied with Distant Supervision (HAnDS) framework to automatically construct a quality dataset suitable for the FgER task. HAnDS framework exploits the high interlink among Wikipedia and Freebase in a pipelined manner, reducing annotation errors introduced by naively using distant supervision approach. 
Using the HAnDS framework, we create two datasets, one suitable for building FgER systems recognizing up to 118 entity types based on the FIGER type hierarchy and another for up to 1115 entity types based on the TypeNet hierarchy. Our extensive empirical experimentation warrants the quality of the generated datasets. Along with this, we also provide a manually annotated dataset for benchmarking FgER systems.""","""Good paper; but a heuristic approach""","""This paper designs a framework based on heuristic approaches to automatically construct a dataset for the FgER task. The paper is solid and presents nice experimental comparisons. Overall, it is clear and easy to follow. However, as pointed out by the reviewers, the proposed approach is heuristic and may not be general enough for handling other tasks or data in other domains. """,Accept (Poster) 7,"""Alexandria: Unsupervised High-Precision Knowledge Base Construction using a Probabilistic Program""","['Fact retrieval', 'Entity extraction', 'Schema learning', 'Unsupervised knowledge base construction', 'Probabilistic programming']","""Creating a knowledge base that is accurate, up-to-date and complete remains a significant challenge despite substantial efforts in automated knowledge base construction. In this paper, we present Alexandria -- a system for unsupervised, high-precision knowledge base construction. Alexandria uses a probabilistic program to define a process of converting knowledge base facts into unstructured text. Using probabilistic inference, we can invert this program and so retrieve facts, schemas and entities from web text. The use of a probabilistic program allows uncertainty in the text to be propagated through to the retrieved facts, which increases accuracy and helps merge facts from multiple sources. Because Alexandria does not require labelled training data, knowledge bases can be constructed with the minimum of manual input. We demonstrate this by constructing a high-precision (typically 97%) knowledge base for people from a single seed fact.""","""A novel approach to KG construction that shows initial promising results""","""The authors propose Alexandria, a probabilistic programming approach to AKBC. The core idea of the approach is to use a generative probabilistic program to model natural language expressions of facts using templates, and then reverse this process to learn new templates and properties from text. Evaluation for a small domain showed promising results. The critical consensus was that this paper presents an interesting idea and should be accepted. There were several concerns about the representation of related work and some concerns about the evaluation and results, particularly the restriction to a single domain and the computational costs of the system, and criticism about the balance of high-level and low-level ideas in the writing. The authors have addressed some of these concerns in the rebuttal and in a draft revision.""",Accept (Oral) 8,"""NormCo: Deep Disease Normalization for Biomedical Knowledge Base Construction""","['Entity Normalization', 'Biomedical Knowledge Base Construction']","""Biomedical knowledge bases are crucial in modern data-driven biomedical sciences, but automated biomedical knowledge base construction remains challenging. In this paper, we consider the problem of disease entity normalization, an essential task in constructing a biomedical knowledge base. 
We present NormCo, a deep coherence model which considers the semantics of an entity mention, as well as the topical coherence of the mentions within a single document. NormCo models entity mentions using a simple semantic model which composes phrase representations from word embeddings, and treats coherence as a disease concept co-mention sequence using an RNN rather than modeling the joint probability of all concepts in a document, which requires NP-hard inference. To overcome the issue of data sparsity, we used distantly supervised data and synthetic data generated from priors derived from the BioASQ dataset. Our experimental results show that NormCo outperforms state-of-the-art baseline methods on two disease normalization corpora in terms of (1) prediction quality and (2) efficiency, and is at least as performant in terms of accuracy and F1 score on tagged documents.""","""Consensus accept; reviewer concerns addressed in revisions""","""The reviewers all agree the paper is a clear accept. The paper presents an end-to-end approach to biomedical concept normalization that supplants previous state-of-the-art pipeline systems based on more conventional bio NLP methods. Although the individual components of the solution are not novel, e.g., siamese networks, GRUs, distant supervision, etc., they are combined together in highly appropriate ways to solve a difficult entity linking problem. The authors did a commendable job addressing the reviewers' comments, questions and concerns by running experiments, providing new results, updating related work to more accurately capture the fact that other entity linking approaches also capture coherence, and addressing a few minor clarity issues.""",Accept (Poster) 9,"""SHINRA: Structuring Wikipedia by Collaborative Contribution""","['Resource construction', 'Structured Wikipedia']","""We are reporting on the SHINRA project, a project for structuring Wikipedia with a collaborative construction scheme. The goal of the project is to create a huge and well-structured knowledge base to be used in NLP applications, such as QA, dialogue systems and explainable NLP systems. It is created based on a scheme of Resource by Collaborative Contribution (RbCC). We conducted a shared task of structuring Wikipedia, and at the same time, submitted results are used to construct a knowledge base. There are machine readable knowledge bases such as CYC, DBpedia, YAGO, Freebase, Wikidata and so on, but each of them has problems to be solved. CYC has a coverage problem, and others have a coherence problem due to the fact that these are based on Wikipedia and/or created by many but inherently incoherent crowd workers. In order to solve the latter problem, we started a project for structuring Wikipedia using an automatic knowledge base construction shared-task. The automatic knowledge base construction shared-tasks have been popular and well studied for decades. However, these tasks are designed only to compare the performances of different systems, and to find which system ranks the best on limited test data. The results of the participating systems are not shared and the systems may be abandoned once the task is over. We believe this situation can be improved by the following changes: 1. designing the shared-task to construct a knowledge base rather than evaluating only limited test data; 2. making the outputs of all the systems open to the public so that we can run ensemble learning to create better results than the best systems; 3. 
repeating the task so that we can run the task with larger and better training data from the output of the previous task (bootstrapping and active learning). We conducted SHINRA2018 with the above-mentioned scheme, and in this paper we report the results and the future directions of the project. The task is to extract the values of the pre-defined attributes from Wikipedia pages. We have categorized most of the entities in Japanese Wikipedia (namely 730 thousand entities) into the 200 ENE categories. Based on this data, the shared-task is to extract the values of the attributes from Wikipedia pages. We gave out 600 training examples, and the participants were required to submit the attribute-values for all remaining entities of the same category type. Then 100 of them for each category are used to evaluate the system output in the shared-task. We conducted a preliminary ensemble learning on the outputs and found a 15-point F1 improvement on one category and an average 8-point F1 improvement over a strong baseline on all 5 categories we tested. Based on these promising results, we decided to conduct three tasks in 2019: a multi-lingual categorization task (ML), extraction for the same 5 categories in Japanese with larger training data (JP-5), and extraction for 34 new categories in Japanese (JP-34).""","""Interesting topic but still not mature presentation""","""As is clear from the reviewers' comments, and also the rebuttal responses, there is still a significant number of points to improve in the paper. However, I believe it is going to be an interesting poster presentation. """,Accept (Poster) 10,"""Synonym Expansion for Large Shopping Taxonomies""","['Ontology', 'Taxonomy', 'Synonym', 'Shopping', 'Search']","""We present an approach for expanding taxonomies with synonyms, or aliases. We target large shopping taxonomies, with thousands of nodes. A comprehensive set of entity aliases is an important component of identifying entities in unstructured text such as product reviews or search queries. Our method consists of two stages: we generate synonym candidates from WordNet and shopping search queries, then use a binary classifier to filter candidates. We process taxonomies with thousands of synonyms in order to generate over 90,000 synonyms. We show that using the taxonomy to derive contextual features improves classification performance over using features from the target node alone. We show that our approach has potential for transfer learning between different taxonomy domains, which reduces the need to collect training data for new taxonomies.""","""Interesting approach to taxonomy expansion""","""The work provides an interesting and yet rather straightforward approach to synonym expansion that relies on a combination of user queries and existing background knowledge in terms of WordNet. The evaluation shows good results for an interesting domain in practice. It would be great if the crowdsourced data were released. I also wonder if the paper actually covered enough of the related work, in particular with respect to synonym expansion from information retrieval. Overall, I think the task itself and the resource would be interesting to have at the conference although it's not a radical innovation; hence, I would recommend it for the poster session. 
""",Accept (Poster) 11,"""Discriminative Candidate Generation for Medical Concept Linking""","['concept linking', 'clinical nlp', 'information extraction']","""Linking mentions of medical concepts in a clinical note to a concept in an ontology enables a variety of tasks that rely on understanding the content of a medical record, such as identifying patient populations and decision support. Medical concept linking can be formulated as a two-step task; 1) candidate generator, which selects likely candidates from the ontology for the given mention, and 2) a ranker, which orders the candidates based on a set of features to find the best one.In this paper, we propose a candidate generation system based on the DiscK framework [Chen andVan Durme, 2017]. Our system produces a candidate list with both high coverage and a rankingthat is a useful starting point for the second step of the linking process. we integrate our candidate selection process into a current linking system, DNorm [Leaman et al., 2013]. The resulting system achieves similar accuracy paired with with a gain in efficiency due to a large reduction in the number of potential candidates considered.""","""Addresses an important, but under-investigated subproblem of entity linking""","""Summary: while the reviewer opinion on this paper varied, the paper tackles an important problem while also bringing to light an existing technique that many have overlooked for this problem.The main strength of this paper is that it addresses an oft overlooked sub-problem of entity-linking: the candidate generation stage prior to linking (though note, that it is not uncommon for entity linking papers to also evaluate their candidate generation system separately from their linker). They show that when integrating their candidate generation step into an existing entity linker, they can achieve similar accuracy while gaining efficiency by the more aggressive candidate pruning. The method is based on DiscK which is in turn based on query expansion approaches from IR that supports discriminative training, but with more efficient search algorithms that run in sublinear time (at the cost of a limited set of available feature types). The paper shows that the method works better than classic IR baselines such as BM25. There are enough people in our community who would be unfamiliar with these sublinear IR-based classification strategies that the paper could be useful for anyone building a classic entity linking system.The main criticisms of this paper are that it lacks clarity and novelty; the description of the DiscK algorithm is especially unclear, and ultimately, their approach a straightforward application of DiscK to candidate generation. There were a few criticisms about the experiments but the authors seem to have addressed these in the response.""",Accept (Poster) 12,"""OPIEC: An Open Information Extraction Corpus""","['open information extraction', 'text analytics']","""Open information extraction (OIE) systems extract relations and their arguments from natural language text in an unsupervised manner. The resulting extractions are a valuable resource for downstream tasks such as knowledge base construction, open question answering, or event schema induction. In this paper, we release, describe, and analyze an OIE corpus called OPIEC, which was extracted from the text of English Wikipedia. 
OPIEC complements the available OIE resources: It is the largest OIE corpus publicly available to date (over 340M triples) and contains valuable metadata such as provenance information, confidence scores, linguistic annotations, and semantic annotations including spatial and temporal information. We analyze the OPIEC corpus by comparing its content with knowledge bases such as DBpedia or YAGO, which are also based on Wikipedia. We found that most of the facts between entities present in OPIEC cannot be found in DBpedia and/or YAGO, that OIE facts often differ in the level of specificity compared to knowledge base facts, and that OIE open relations are generally highly polysemous. We believe that the OPIEC corpus is a valuable resource for future research on automated knowledge base construction.""","""A nice dataset paper""","""This paper describes a new Open IE corpus over English Wikipedia. All the reviewers agree this paper is suitable for this venue and the dataset is useful. Overall, the paper is well-written and the experiments are convincing. Despite the novelty of this paper is relatively thin, it is a decent paper. """,Accept (Poster) 13,"""Applying Citizen Science to Gene, Drug, Disease Relationship Extraction from Biomedical Abstracts""","['citizen science', 'relationship extraction', 'biomedical literature', 'abstracts']","""Biomedical literature is growing at a rate that outpaces our ability to harness the knowledge contained therein. In order to mine valuable inferences from the large volume of literature, many researchers have turned to information extraction algorithms to harvest information in biomedical texts. Information extraction is usually accomplished via a combination of manual expert curation and computational methods. Advances in computational methods usually depends on the generation of gold standards by a limited number of expert curators. This process can be time consuming and represents an area of biomedical research that is ripe for exploration with citizen science. Citizen scientists have been previously found to be willing and capable of performing named entity recognition of disease mentions in biomedical abstracts, but it was uncertain whether or not the same could be said of relationship extraction. Relationship extraction requires training on identifying named entities as well as a deeper understanding of how different entity types can relate to one another. Here, we used the web-based application Mark2Cure (pseudo-url) to demonstrate that citizen scientists can perform relationship extraction and confirm the importance of accurate named entity recognition on this task. We also discuss opportunities for future improvement of this system, as well as the potential synergies between citizen science, manual biocuration, and natural language processing. ""","""None""","""None""",None 14,"""Learning Numerical Attributes in Knowledge Bases""","['numerical attribute prediction', 'label propagation', 'value imputation']","""Knowledge bases (KB) are often represented as a collection of facts in the form (HEAD, PREDICATE, TAIL), where HEAD and TAIL are entities while PREDICATE is a binary relationship that links the two. It is a well-known fact that knowledge bases are far from complete, and hence the plethora of research on KB completion methods, specifically on link prediction. However, though frequently ignored, these repositories also contain numerical facts. 
Numerical facts link entities to numerical values via numerical predicates; e.g., (PARIS, LATITUDE, 48.8). Likewise, numerical facts also suffer from the incompleteness problem. To address this issue, we introduce the numerical attribute prediction problem. This problem involves a new type of query where the relationship is a numerical predicate. Consequently, and contrary to link prediction, the answer to this query is a numerical value. We argue that the numerical values associated with entities explain, to some extent, the relational structure of the knowledge base. Therefore, we leverage knowledge base embedding methods to learn representations that are useful predictors for the numerical attributes. An extensive set of experiments on benchmark versions of FREEBASE and YAGO show that our approaches largely outperform sensible baselines. We make the datasets available under a permissive BSD-3 license. ""","""New embedding method using numerical facts sparks interest but has some obvious limitations.""","""The authors consider the problem of predicting or imputing numerical attributes in knowledge bases. In contrast to simple local or global attribute prediction, they design a regression-based model that uses knowledge graph embeddings and extend the embedding computation to also use numerical attributes. Evaluation shows interesting preliminary results.The critical consensus was that this paper is worthy of acceptance, but has several serious limitations that should be carefully explained. A critical issue is that the relationships used for propagating information do not necessarily correlate to similar numerical attribute values (e.g., geographical location is useful for predicting lat/long but not GDP or population), that a linear regression model is not sufficient to capture the full extent of relationships, that normalization of error values across different relationships skews the evaluation, several important and relevant related works were omitted, and the overall error rates are still somewhat high.""",Accept (Poster) 15,"""Combining Long Short Term Memory and Convolutional Neural Network for Cross-Sentence n-ary Relation Extraction""","['n-ary relation extraction', 'information extraction']","""We propose in this paper a combined model of Long Short Term Memory and Convolutional Neural Networks (LSTM_CNN) model that exploits word embeddings and positional embeddings for cross-sentence n-ary relation extraction. The proposed model brings together the properties of both LSTMs and CNNs, to simultaneously exploit long-range sequential information and capture most informative features, essential for cross-sentence n-ary relation extraction. The LSTM_CNN model is evaluated on standard datasets on cross-sentence n-ary relation extraction, where it significantly outperforms baselines such as CNNs, LSTMs and also a combined CNN_LSTM model. The paper also shows that the proposed LSTM_CNN model outperforms the current state-of-the-art methods on cross-sentence n-ary relation extraction.""","""Simple method that works well on interesting problem""","""The presented cross sentence relation extraction method is simple but well motivated. The experiments show that it works well, setting a new SOTA on the two relevant benchmarks. The ablations and extended analyses are also well done and extensive. 
Overall, this is a clear paper with a solid contribution that we would all learn something by reading.""",Accept (Poster) 16,"""Learning Relational Representations by Analogy using Hierarchical Siamese Networks""","['relation extraction', 'textual representation', 'siamese network', 'one-shot learning', 'transfer learning']","""We address relation extraction as an analogy problem by proposing a novel approach to learn representations of relations expressed by their textual mentions. In our assumption, if two pairs of entities belong to the same relation, then those two pairs are analogous. Following this idea, we collect a large set of analogous pairs by matching triples in knowledge bases with web-scale corpora through distant supervision. We leverage this dataset to train a hierarchical siamese network in order to learn entity-entity embeddings which encode relational information through the different linguistic paraphrasing expressing the same relation. We evaluate our model in a one-shot learning task by showing a promising generalization capability in order to classify unseen relation types, which makes this approach suitable to perform automatic knowledge base population with minimal supervision. Moreover, the model can be used to generate pre-trained embeddings which provide a valuable signal when integrated into an existing neural-based model by outperforming the state-of-the-art methods on a downstream relation extraction task.""","""None""","""None""",None 17,"""Answering Visual-Relational Queries in Web-Extracted Knowledge Graphs""",[],"""A visual-relational knowledge graph (KG) is a multi-relational graph whose entities are associated with images. We explore novel machine learning approaches for answering visual-relational queries in web-extracted knowledge graphs. To this end, we have created ImageGraph, a KG with 1,330 relation types, 14,870 entities, and 829,931 images crawled from the web. With visual-relational KGs such as ImageGraph one can introduce novel probabilistic query types in which images are treated as first-class citizens. Both the prediction of relations between unseen images as well as multi-relational image retrieval can be expressed with specific families of visual-relational queries. We introduce novel combinations of convolutional networks and knowledge graph embedding methods to answer such queries. We also explore a zero-shot learning scenario where an image of an entirely new entity is linked with multiple relations to entities of an existing KG. The resulting multi-relational grounding of unseen entity images into a knowledge graph serves as a semantic entity representation. We conduct experiments to demonstrate that the proposed methods can answer these visual-relational queries efficiently and accurately.""","""New useful dataset needs to make less strong claims about novelty""","""This paper introduces a useful new dataset called ImageGraph that allows for the assessment of tasks on the combination of images and knowledge graphs. The paper presents a number of tasks over that datasets and architectures to address those tasks. The reviewers agree that the baselines could be improved upon and there is a question as to whether the architectures are promising or not. I think the paper should be accepted because the dataset is fundamentally useful and the authors establish good baselines for the considered tasks. Additionally, using KGs with images together is really promising. 
However, joint image + kg embeddings have already been investigated elsewhere see [1]. I would recommend that the authors soften their claims of novelty. The work is useful and points to a number of good directions for future work. [1] Towards Holistic Concept Representations: Embedding Relational Knowledge, Visual Attributes, and Distributional Word Semantics. S Thoma, A Rettinger, F Both. International Semantic Web Conference, 694-710""",Accept (Poster) 18,"""Answering Science Exam Questions Using Query Reformulation with Background Knowledge""","['open-domain question answering', 'science question answering', 'multiple-choice question answering', 'passage retrieval', 'query reformulation']","""Open-domain question answering (QA) is an important problem in AI and NLP that is emerging as a bellwether for progress on the generalizability of AI methods and techniques. Much of the progress in open-domain QA systems has been realized through advances in information retrieval methods and corpus construction. In this paper, we focus on the recently introduced ARC Challenge dataset, which contains 2,590 multiple choice questions authored for grade-school science exams. These questions are selected to be the most challenging for current QA systems, and current state of the art performance is only slightly better than random chance. We present a system that reformulates a given question into queries that are used to retrieve supporting text from a large corpus of science-related text. Our rewriter is able to incorporate background knowledge from ConceptNet and -- in tandem with a generic textual entailment system trained on SciTail that identifies support in the retrieved results -- outperforms several strong baselines on the end-to-end QA task despite only being trained to identify essential terms in the original source question. We use a generalizable decision methodology over the retrieved evidence and answer candidates to select the best answer. By combining query reformulation, background knowledge, and textual entailment our system is able to outperform several strong baselines on the ARC dataset. ""","""Moderate contribution of using external knowledge to improve QA in ARC dataset""","""This paper presents a method for finding important tokens in the question, and then use prior knowledge from conceptNet to answer questions in the ARC dataset. Pros. Combining finding essential terms, using domain knowledge, and textual entailment. Cons. None of the proposed methods are novel. The paper combines a few components to answer questions in ARC; the intro and abstract are written a bit more general than what the paper actually does.The paper studies different ways to incorporate concept net; the results between dev and test are not consistent. Some methods achieve better results on the dev set, but the authors use a model (which is not best in dev) to be used on the test set. """,Accept (Poster) 19,"""Learning Relation Representations from Word Representations""","['Relation representations', 'relation embeddings']","""Identifying the relations that connect words is an important step towards understanding human languages and is useful for various NLP tasks such as knowledge base completion and analogical reasoning. Simple unsupervised operators such as vector offset between two-word embeddings have shown to recover some specific relationships between those words, if any. Despite this, how to accurately learn generic relation representations from word representations remains unclear. 
We model relation representation as a supervised learning problem and learn parametrised operators that map pre-trained word embeddings to relation representations. We propose a method for learning relation representations using a feed-forward neural network that performs relation prediction. Our evaluations on two benchmark datasets reveal that the penultimate layer of the trained neural network-based relational predictor acts as a good representation for the relations between words.""","""Interesting method and results, but presentation could be improved""","""This paper presents an approach for learning relation embeddings that are a feed-forward function of the embeddings of a word pair. The method uses a novel training objective and is shown to work well on out-of-context lexical benchmarks. The paper could be framed better and there are a number of points where the reviewers needed clarifications, but the authors have promised revisions to address these concerns. """,Accept (Poster) 20,"""A Survey on Semantic Parsing""","['survey', 'semantic parsing']","""A significant amount of information in today's world is stored in structured and semi-structured knowledge bases. Efficient and simple methods to query them are essential and must not be restricted to only those who have expertise in formal query languages. The field of semantic parsing deals with converting natural language utterances to logical forms that can be easily executed on a knowledge base. In this survey, we examine the various components of a semantic parsing system and discuss prominent work ranging from the initial rule-based methods to the current neural approaches to program synthesis. We also discuss methods that operate using varying levels of supervision and highlight the key challenges involved in the learning of such systems.""","""Meta Review""","""This is a very nice survey of the history and current state of semantic parsing. It does a good job covering a very broad field, hitting the right key points along the way. If there were one thing I would recommend improving, it would be to try to categorize what the open questions are. There is a short section on ""future work"", though it simply provides more references to very recent work. It would be nice to see more detailed thoughts from the authors on what is missing and what is next, after having read through this vast literature. Note to PCs: I don't know what to recommend as far as oral vs. poster. Neither one seems to fit a survey paper very well. I'm saying poster assuming that only the top few percent of papers will be oral presentations, and while I think this is a good survey, I don't think it is in the top few percent of papers.""",Accept (Poster) 21,"""Joint Learning of Hierarchical Word Embeddings from a Corpus and a Taxonomy""","['Hierarchical Embeddings', 'Word Embeddings', 'Taxonomy']","""Identifying the hypernym relations that hold between words is a fundamental task in NLP. Word embedding methods have recently shown some capability to encode hypernymy. However, such methods tend not to explicitly encode the hypernym hierarchy that exists between words. In this paper, we propose a method to learn a hierarchical word embedding in a specific order to capture the hypernymy. To learn the word embeddings, the proposed method considers not only the hypernym relations that exist between words on a taxonomy, but also their contextual information in a large text corpus. 
The experimental results on a supervised hypernymy detection task and a newly-proposed hierarchical path completion task show the ability of the proposed method to encode the hierarchy. Moreover, the proposed method outperforms previously proposed methods for learning word and hypernym-specific word embeddings on multiple benchmarks.""","""Paper with initial unawareness of important related work but convincing revision""","""All reviewers voiced concerns regarding the comparison to recent related work. However, in my view, the authors addressed these concerns well in their revision, comparing directly against Poincaré embeddings and LEAR. While the comparison reveals mixed results with respect to LEAR, I believe this work is well executed and of interest to the AKBC community. """,Accept (Poster) 22,"""Semi-supervised Ensemble Learning with Weak Supervision for Biomedical Relationship Extraction""","['weak supervision', 'meta-learning', 'biomedical relationship extraction', 'semi-supervised learning', 'ensemble learning']","""Natural language understanding research has recently shifted towards complex Machine Learning and Deep Learning algorithms. Such models often outperform their simpler counterparts significantly. However, their performance relies on the availability of large amounts of labeled data, which are rarely available. To tackle this problem, we propose a methodology for extending training datasets to arbitrarily big sizes and training complex, data-hungry models using weak supervision. We apply this methodology on biomedical relation extraction, a task where training datasets are excessively time-consuming and expensive to create, yet has a major impact on downstream applications such as drug discovery. We demonstrate in two small-scale controlled experiments that our method consistently enhances the performance of an LSTM network, with performance improvements comparable to hand-labeled training data. Finally, we discuss the optimal setting for applying weak supervision using this methodology.""","""Interesting model that makes use of semi-supervised data and meta-learning on top of that""","""Given a few base-learners, can we learn a meta-learner that performs better than the base-learners? The paper addresses this question in the context of bio-medical relation extraction where training data is often small. A number of base-learners are learned (SVM, LSTM, Logistic Regression, etc.) on a small training dataset, and these are further used to annotate a large amount of unsupervised data. Using the data-programming techniques of Ratner et al. 2016, a set of n weak labels are created. Then a discriminative model (which is called the meta-learner here) is further trained on these weak labels to predict the final label. The reviewers had many concerns on the initial version. Some of the main ones are below: 1) Reviewer 3 suggests using an additional dataset. 2) Reviewer 1 did a great job in suggesting plenty of literature. 3) Reviewer 1 also suggested making several empirical results clear. The revised version addresses these concerns with several additions. I am satisfied with the revisions, and I believe the paper will be a good addition to the conference. Based on the revised version, I recommend accepting the paper. A suggestion for improvement: The paper relies a lot on the data programming work of Ratner et al. 2016, 2017, which is explained at a high level in Sections 3 and 4. A paper should be self-sufficient when it comes to understanding. Please describe these methods formally and in necessary detail. 
""",Accept (Poster) 23,"""Scalable Rule Learning in Probabilistic Knowledge Bases""","['Database', 'KB', 'Probabilistic Rule Learning']","""Knowledge Bases (KBs) are becoming increasingly large, sparse and probabilistic. These KBs are typically used to perform query inferences and rule mining. But their efficacy is only as high as their completeness. Efficiently utilizing incomplete KBs remains a major challenge as the current KB completion techniques either do not take into account the inherent uncertainty associated with each KB tuple or do not scale to large KBs.Probabilistic rule learning not only considers the probability of every KB tuple but also tackles the problem of KB completion in an explainable way. For any given probabilistic KB, it learns probabilistic first-order rules from its relations to identify interesting patterns. But, the current probabilistic rule learning techniques perform grounding to do probabilistic inference for evaluation of candidate rules. It does not scale well to large KBs as the time complexity of inference using grounding is exponential over the size of the KB. In this paper, we present SafeLearner -- a scalable solution to probabilistic KB completion that performs probabilistic rule learning using lifted probabilistic inference -- as faster approach instead of grounding. We compared SafeLearner to the state-of-the-art probabilistic rule learner ProbFOIL+ and to its deterministic contemporary AMIE+ on standard probabilistic KBs of NELL (Never-Ending Language Learner) and Yago. Our results demonstrate that SafeLearner scales as good as AMIE+ when learning simple rules and is also significantly faster than ProbFOIL+. ""","""Nice paper but concerns about related work need to be addressed""","""The paper presents a method of learning probabilistic rules froma probabilistic dataset of KB tuples. They first use existingdeterministic rule-learning algorithm AMIE+ to get candidaterules and then learn probabilistic rules using lifted inference.The paper is written clearly. Authors have responded thereviewers' concerns well. Overall there are some concerns that thecontributions of the paper are not substantial enough in quantityand depth. Given the vast existing literature on the topic,the authors should try to resolve the questions of comparisonsthat naturally arise.""",Accept (Poster) 24,"""Investigating Robustness and Interpretability of Link Prediction via Adversarial Modifications""","['Adversarial Attack', 'Knowledge Base Completion']","""Representing entities and relations in an embedding space is a well-studied approach for machine learning on relational data. Existing approaches, however, primarily focus on improving ranking metrics and ignore other aspects of knowledge base representations, such as robustness, interpretability, and ability to detect errors. In this paper, we propose adversarial attacks on link prediction models (AALP): identifying the fact to add into or remove from the knowledge graph that changes the prediction of a target fact. Using these attacks, we are able to identify the most influential related fact for a predicted link and investigate the sensitivity of the model to additional made-up facts. We introduce an efficient approach to estimate the effect of making a change by approximating the change in the embeddings upon altering the knowledge graph. In order to avoid the combinatorial search over all possible facts, we introduce an inverter function and gradient-based search to identify the adversary in a continuous space. 
We demonstrate that our models effectively attack the link prediction models by reducing their accuracy between 6-45% for different metrics. Further, we study patterns in the most influential neighboring facts, as identified by the adversarial attacks. Finally, we use the proposed approach to detect incorrect facts in the knowledge base, achieving up to 55% accuracy in identifying errors.""","""None""","""None""",None