paper_id,title,keywords,abstract,meta_title,meta_review,decision
1,"""CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning""","['commonsense reasoning', 'natural language generation', 'dataset', 'generate commonsense reasoning', 'compositional generalization']","""Given a set of common concepts like {apple (noun), pick (verb), tree (noun)}, humans find it easy to write a sentence describing a grammatical and logically coherent scenario that covers these concepts, for example, {a boy picks an apple from a tree''}. The process of generating these sentences requires humans to use commonsense knowledge. We denote this ability as generative commonsense reasoning. Recent work in commonsense reasoning has focused mainly on discriminating the most plausible scenes from distractors via natural language understanding (NLU) settings such as multi-choice question answering. However, generative commonsense reasoning is a relatively unexplored research area, primarily due to the lack of a specialized benchmark dataset.In this paper, we present a constrained natural language generation (NLG) dataset, named CommonGen, to explicitly challenge machines in generative commonsense reasoning. It consists of 30k concept-sets with human-written sentences as references. Crowd-workers were also asked to write the rationales (i.e. the commonsense facts) used for generating the sentences in the development and test sets. We conduct experiments on a variety of generation models with both automatic and human evaluation. Experimental results show that there is still a large gap between the current state-of-the-art pre-trained model, UniLM, and human performance.""","""Paper Decision""","""This paper introduces a constrained text generation challenge dataset called ""CommonGen"" in which the idea is build models that accept concepts (nouns and verbs) and then generates plausible sentences conditioned on these. The idea is that doing this successfully requires some sort of ""common sense"" facts and reasoning. While there are some concerns about just how much ""common sense"" is necessarily required for the task, and also the quality assurance processes put in place during data collection, this corpus nonetheless seems like an interesting new resource for the community.""",Accept
2,"""Exploiting Semantic Relations for Fine-grained Entity Typing""","['Fine-grained Entity Typing', 'Hypernym Extraction', 'Semantic Role Labeling']","""Fine-grained entity typing results can serve as important information for entities while constructing knowledge bases. It is a challenging task due to the use of large tag sets and the requirement of understanding the context.We find that, in some cases, existing neural fine-grained entity typing models may ignore the semantic information in the context that is important for typing. To address this problem, we propose to exploit semantic relations extracted from the sentence to improve the use of context. The used semantic relations are mainly those that are between the mention and the other words or phrases in the sentence. We investigate the use of two types of semantic relations: hypernym relation, and verb-argument relation. Our approach combine the predictions made based on different semantic relations and the predictions of a base neural model to produce the final results. We conduct experiments on two commonly used datasets: FIGER (GOLD) and BBN. Our approach achieves at least 2 absolute strict accuracy improvement on both datasets compared with a strong BERT based model.""","""Paper Decision""","""The paper make use of semantic relations (hypernym and verb-argument) to obtain the state of the art performance in entity typing, especially compared to strong baselines such as BERT. The paper presents an interesting message that linguistic features could be still important among the age of end-to-end methods. It is also clear that entity typing is crucial for constructing knowledge bases, making the paper quite appropriate for the proceedings of AKBC. """,Accept
3,"""Cross-context News Corpus for Protest Events related Knowledge Base Construction""","['protests', 'contentious politics', 'news', 'text classification', 'event extraction', 'social sciences', 'political sciences', 'computational social science']","""We describe a gold standard corpus of protest events that comprise of various local and international sources from various countries in English. The corpus contains document, sentence, and token level annotations. This corpus facilitates creating machine learning models that automatically classify news articles and extract protest event related information, constructing databases which enable comparative social and political science studies. For each news source, the annotation starts on random samples of news articles and continues with samples that are drawn using active learning. Each batch of samples was annotated by two social and political scientists, adjudicated by an annotation supervisor, and was improved by identifying annotation errors semi-automatically. We found that the corpus has the variety and quality to develop and benchmark text classification and event extraction systems in a cross-context setting, which contributes to generalizability and robustness of automated text processing systems. This corpus and the reported results will set the currently lacking common ground in automated protest event collection studies.""","""Paper Decision""","""The paper presents a corpus of 10K news articles about protest events, with document level labels, sentence-level labels, and token-level labels. Coarse-grained labels are Protest/Not, and fine-grained labels are things such as triggers/places/times/people/etc. All reviewers agree that this paper is interesting and the contributed resource will be useful for the community, hence we propose acceptance. There were some concerns that the authors fully addressed in their response, updating their paper. We recommend authors to take the remaining suggestions into account when preparing the final version.""",Accept
4,"""OxKBC: Outcome Explanation for Factorization Based Knowledge Base Completion""",[],"""State-of-the-art models for Knowledge Base Completion (KBC) are based on tensor factorization (TF), e.g, DistMult, ComplEx. While they produce good results, they cannot expose any rationale behind their predictions, potentially reducing the trust of a user in the model. Previous works have explored creating an inherently explainable model, e.g. Neural Theorem Proving (NTP), DeepPath, MINERVA, but explainability comes at the cost of performance. Others have tried to create an auxiliary explainable model having high fidelity with the underlying TF model, but unfortunately, they do not scale on large KBs such as FB15k and YAGO.In this work, we propose OxKBC -- an Outcome eXplanation engine for KBC, which provides a post-hoc explanation for every triple inferred by an (uninterpretable) factorization based model. It first augments the underlying Knowledge Graph by introducing weighted edges between entities based on their similarity given by the underlying model. In the augmented graph, it defines a notion of human-understandable explanation paths along with a language to generate them. Depending on the edges, the paths are aggregated into second-order templates for further selection. The best template with its grounding is then selected by a neural selection module that is trained with minimal supervision by a novel loss function. Experiments over Mechanical Turk demonstrate that users find our explanations more trustworthy compared to rule mining.""","""Paper Decision""","""The paper looks into explaining the predictions of factorization models (i.e. post-hoc interpretation). This is an important problem, and there is very little work into this in the context of factorization models. Judging from the paper and the authors' replies here in the discussion forum, the goal is to produce explanations faithful to the underlying factorization model. However, there is between the goal and the evaluation, and certain confusion in the discussion. If the goal is to understand what a factorization model relies on, it is unclear that it makes sense to train the model on human-annotated data or measure how well it agrees with explanations given by humans. We would like to understand what the model is doing rather than what a human thinks the model should be doing. It may well rely on some obscure artifacts, and we would like to know this. If the goal is generating plausible explanations, then it opens up the paper to criticism by one of the reviewers -- there should be a broader comparison to 'more explainable' models, which are not tied to factorization models.Overall, even given these issues with discussion and maybe certain overselling of its faithfulness (even though the explanation on page 6 is plausible, it is still a heuristic), I like the work. It is hard to evaluate faithfulness and there are interesting insights in this submission. Also, this criticism applies to quite a few published papers in interpretability.We expect the author's put some effort into sharpening the argument and addressing AC and reviewers' concerns. I would have loved to see a discussion of stability, maybe artificial experiments where we know what the model is doing, etc. """,Accept
5,"""Learning Relation Entailment with Structured and Textual Information""","['relation entailment', 'structured information', 'textual information']","""Relations among words and entities are important for semantic understanding of text, but previous work has largely not considered relations between relations, or meta-relations. In this paper, we specifically examine relation entailment, where the existence of one relation can entail the existence of another relation. Relation entailment allows us to construct relation hierarchies, enabling applications in representation learning, question answering, relation extraction, and summarization. To this end, we formally define the new task of predicting relation entailment and construct a dataset by expanding the existing Wikidata relation hierarchy without expensive human intervention. We propose several methods that incorporate both structured and textual information to represent relations for this task. Experiments and analysis demonstrate that this task is challenging, and we provide insights into task characteristics that may form a basis for future work. The dataset and code will be released upon acceptance.""","""Paper Decision""","""The paper introduces a method for entailment prediction between relations in a knowledge graph, using the Wikidata dataset. They used a few tricks to construct the dataset (relation sampling, relation expansion, etc.)Overall, the reviewers agree that this paper deserve publication. However several aspects in the presentation should be improved: notation needs to be made clearer, a figure would help understand the main idea, and statistics on relation entailments would be useful to present. We strongly recommend authors to take these suggestions into account when preparing the final version.""",Accept
6,"""DOLORES: Deep Contextualized Knowledge Graph Embeddings""","['Knowledge Graph', 'Contextualized Embeddings']","""We introduce Dolores, a new knowledge graph embeddings, that effectively capture contextual cues and dependencies among entities and relations. First, we note that short paths on knowledge graphs comprising of chains of entities and relations can encode valuable information regarding their contextual usage. We operationalize this notion by representing knowledge graphs not as a collection of triples but as a collection of entity-relation chains, and learn embeddings using deep neural models that capture such contextual usage. Based on Bi-Directional LSTMs, our model learns deep representations from constructed entity-relation chains. We show that these representations can be easily incorporated into existing models to significantly advance the performance on several knowledge graph tasks like link prediction, triple classification, and multi-hop knowledge base completion (in some cases by 11%).""","""Paper Decision""","""The paper introduces a simple and effective approach to obtaining entity embeddings (relying on RNN-encoded walks and ELMo style losses). The approach works well, is simple, and well-motivated.While the underlying principles have been studied (e.g., RNN embeddings of walks or learning representations relying on walks as in DeepWalk), there is enough novelty in the proposed method. The other two reviewers are positive. We would encourage the authors to address the reviewers' comments (e.g., regarding clarity in R3; I had similar issues with understanding the model structure and the learning procedure / objective). It may be interesting to discuss the relation with graph neural networks (esp. with relational GCNs), which also learn a contextualized representation of entities, using similar types of losses. It may make sense to discuss why linearization can be beneficial (from representation learning or efficiency perspectives).""",Accept
7,"""How Context Affects Language Models' Factual Predictions""",[],"""When pre-trained on large unsupervised textual corpora, language models are able to store and retrieve factual knowledge to some extent, making it possible to use them directly for zero-shot cloze-style question answering. However, storing factual knowledge in a fixed number of weights of a language model clearly has limitations. Previous approaches have successfully provided access to information outside the model weights using supervised architectures that combine an information retrieval system with a machine reading component. In this paper, we go one step further and integrate information from a retrieval system with a pre-trained language model in a purely unsupervised way. We report that augmenting pre-trained language models in this way dramatically improves performance and that it is competitive with a supervised machine reading baseline without requiring any supervised training. Furthermore, processing query and context with different segment tokens allows BERT to utilize its Next Sentence Prediction pre-trained classifier to determine whether the context is relevant or not, substantially improving BERT's zero-shot cloze-style question-answering performance and making its predictions robust to noisy contexts.""","""Paper Decision""","""This paper studies how factual predictions of a Masked Language Model (MLM) are influenced by appending additional context via various context construction methods. The work presents a set of interesting probes for the analysis, with good justification on the probe design. The paper is well written, clear, and provides good insights on understanding and improving MLM.""",Accept
8,"""Knowledge Graph Simple Question Answering for Unseen Domains""","['Question Answering', 'Knowledge Graph', 'Domain Adaptation']","""Knowledge Graph Simple Question Answering (KGSQA), in its standard form, does not take into account that human-curated question answering training data only cover a small subset of the relations that exist in a Knowledge Graph (KG), or even worse, that new domains covering unseen and rather different to existing domains relations are added to the KG. In this work, we study KGQA for first-order questions in a previously unstudied setting where new, unseen, domains are added during test time. In this setting, question-answer pairs of the new domain do not appear during training, thus making the task more challenging. We propose a data-centric domain adaptation framework that consists of a KGQA system that is applicable to new domains, and a sequence to sequence question generation method that automatically generates question-answer pairs for the new domain. Since the effectiveness of question generation for KGQA can be restricted by the limited lexical variety of the generated questions, we use distant supervision to extract a set of keywords that express each relation of the unseen domain and incorporate those in the question generation method. Experimental results demonstrate that our framework significantly improves over zero-shot baselines and is robust across domains.""","""Paper Decision""","""This paper studies the problem of simple question answering over new, unseen domains during test time. A domain adaption framework and a seq2seq question generation method have been proposed to tackle this problem and demonstrates significant improvements over the previous baselines.All the reviewers agreed that this paper is well-written and the results are convincing, but the problem is relatively narrow with a focused contribution. Several reviewers also questioned whether this paper contains enough technical contributions. Some other issues have been already addressed during the discussion phase (long tail relations, presentation issues, and adding more related work).However, we recommend accepting the paper considering the simplicity and effectiveness of the approach. We think it would lead to more discussion/future work in this direction.""",Accept
9,"""Enriching Knowledge Bases with Interesting Negative Statements""","['information retrieval', 'knowledge bases', 'ranking', 'negation']","""Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs only store positive information, but abstain from taking any stance towards statements not contained in them.In this paper, we make the case for explicitly stating interesting statements which are not true. Negative statements would be important to overcome current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches towards automatically compiling negative statements. (i) In peer-based statistical inferences, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In pattern-based query log extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.4M statements for 130K popular Wikidata entities.""","""Paper Decision""","""This paper explores a new direction in knowledge base construction: how to identify *interesting* negative statements for KBs. Towards this general goal, two approaches have been developed: peer-based statistical inference and pattern-based text extraction. Two datasets of negative knowledge bases are provided, along with an extrinsic QA evaluation.There has been quite a bit of discrepancy among the reviews. All the reviewers appreciated that this paper addresses a very important (and previously underestimated) problem but there are lots of discussion around the evaluation: (1) whether the current evaluation is too small-scale/non-rigorous, (2) whether the closed-world assumption is reasonable or not, (3) the correctness of evaluation of extracted KBs. The authors have made substantial revisions during the rebuttal phase and we believe most of these issues have been addressed. Therefore, we recommend accepting this paper.""",Accept
10,"""Semi-Automating Knowledge Base Construction for Cancer Genetics""","['Cancer genetics', 'biomedical nlp', 'information extraction', 'clinical informatics', 'knowledge base construction']","""The vast and rapidly expanding volume of biomedical literature makes it difficult for domain experts to keep up with the evidence. In this work, we specifically consider the exponentially growing subarea of genetics in cancer. The need to synthesize and centralize this evidence for dissemination has motivated a team of physicians (with whom this work is a collaboration) to manually construct and maintain a knowledge base that distills key results reported in the literature. This is a laborious process that entails reading through full-text articles to understand the study design, assess study quality, and extract the reported cancer risk estimates associated with particular hereditary cancer genes (i.e., penetrance ). In this work, we propose models to automatically surface key elements from full-text cancer genetics articles, with the ultimate aim of expediting the manual workflow currently in place.We propose two challenging tasks that are critical for characterizing the findings reported cancer genetics studies: (i) Extracting snippets of text that describe ascertainment mechanisms , which in turn inform whether the population studied may introduce bias owing to deviations from the target population; (ii) Extracting reported risk estimates (e.g., odds or hazard ratios) associated with specific germline mutations. The latter task may be viewed as a joint entity tagging and relation extraction problem. To train models for these tasks, we induce distant supervision over tokens and snippets in full-text articles using the manually constructed knowledge base. We propose and evaluate several model variants, including a transformer-based joint entity and relation extraction model to extract <germline mutation, risk-estimate> pairs. We observe strong empirical performance, highlighting the practical potential for such models to aid KB construction in this space. We ablate components of our model, observing, e.g., that a joint model for <germline mutation, risk-estimate> fares substantially better than a pipelined approach. ""","""Paper Decision""","""The paper addresses the novel task of information extraction from cancer genomics. The reviewers have applauded the important and meaningful application area, and the comprehensive experimental design that beats state of the art. The approaches are straightforward combination of existing methods. There are also some clarity issues, which we expect authors to fix in the final version. """,Accept
11,"""Ranking vs. Classifying: Measuring Knowledge Base Completion Quality""","['knowledge base completion', 'knowledge graph embedding', 'classification', 'ranking']","""Knowledge base completion (KBC) methods aim at inferring missing facts from the information present in a knowledge base (KB). In the prevailing evaluation paradigm, a model does not strictly decide about if a new fact should be accepted, but rather puts it in a relative position to other candidate facts via ranking. We argue that consideration of binary predictions is essential to reflect the actual KBC quality, and propose a novel evaluation paradigm, designed to provide more transparent model selection criteria for a realistic scenario. We construct a data set FB13k-QAQ with an alternative evaluation data structure, where single facts are transformed to entity-relation queries with a corresponding entity set of correct answers. Some randomly chosen correct answers are removed from the data set, resulting in incomplete queries or even queries with no possible answer. The latter specifically contrast the ranking setting. Obtained on the new data set, differences in relative performance of state-of-the-art KB embedding models in the ranking and classification settings confirm that ranking quality does not necessarily translate to completion quality. The results motivate the development of KB embedding models with better prediction separability, and we propose a simple variant of TransE that encourages thresholding and achieves a significant improvement in prediction F-Score relative to the original TransE.""","""Paper Decision""","""This paper proposes a new evaluation for KBC where models need to decide whether to accept a new fact instead of simply ranking the possibilities. The main contribution of this work is the well-motivated evaluation that is better aligned with how these models would in practice be used downstream. There is a secondary contribution of a variant of TransE that is tailored towards the more realistic setting reflected by the evaluation. While there are concerns about the lack of more recent models, the novel method serves to highlight the goal of the new evaluation rather than to claim state-of-the-art performance.""",Accept
12,"""Procedural Reading Comprehension with Attribute-Aware Context Flow""","['Reading comprehension', 'contextual encoding', 'encoder-decoder architecture', 'procedural text', 'entity tracking']","""Procedural texts often describe processes (e.g., photosynthesis, cooking) that happen over entities (e.g., light, food). In this paper, we introduce an algorithm for procedural reading comprehension by translating the text into a general formalism that represents processes as a sequence of transitions over entity attributes (e.g., location, temperature). Leveraging pre-trained language models, our model obtains entity-aware and attribute-aware representations of the text by joint prediction of entity attributes and their transitions. Our model dynamically obtains contextual encodings of the procedural text exploiting information that is encoded about previous and current states to predict the transition of a certain attribute which can be identified as a spans of texts or from a pre-defined set of classes. Moreover, Our model achieves state of the art on two procedural reading comprehension datasets, namely ProPara and npn-Cooking.""","""Paper Decision""","""This paper proposes a model for reading comprehension of procedural texts that jointly predicts entity attributes and transitions in entity state. The model achieves state-of-the-art results on multiple datasets. Reviewers appreciated both the novelty and the simplicity of the proposed model, as well as the strong empirical results. The initial discussion focused around analysis of errors, ablations, and empirical comparison to prior work. The responses and revisions addressed these questions and reviewers appreciated the usefulness of the new error analysis and ablations. """,Accept
13,"""Predicting Institution Hierarchies with Set-based Models""","['Hierarchies', 'Sets', 'Transformers', 'Institutions']","""The hierarchical structure of research organizations plays a pivotal role in science-of-scienceresearch as well as in tools that track the research achievements and output. However, this structureis not consistently documented for all institutions in the world, motivating the need for automatedconstruction methods. In this paper, we present a new task and model for predicting the is-ancestorrelationships of institutions based on their string names. We present a model that predicts is-ancestorrelationships between the institutions by modeling the set operations between the strings. The modeloverall outperforms all non set-based models and baselines on all, but one metric. We also create adataset for training and evaluating models for this task based on the publicly available relationshipsin the Global Research Identifier Database.""","""Paper Decision""","""This paper presents a new task and model for predicting the hierarchical structure of organizations / institutes. The model predicts is-ancestor relationships between the institutions by modeling the set operations between the strings. The authors develop a new dataset, automatically derived from GRID, and compare their set-based model against a few baseline approaches. The reviewers comments were generally positive and praised the usefulness of the task, which makes us recommend acceptance. However, there were some concerns about the experimental setup (choice of baselines and evaluation). We strongly recommend improving these aspects on the final version, as per the reviewers suggestions.""",Accept
14,"""XREF: Entity Linking for Chinese News Comments with Supplementary Article Reference""","['Entity Linking', 'Chinese social media', 'Data Augmentation', 'Multi-Task Learning']","""Automatic identification of mentioned entities in social media posts facilitates quick digestion of trending topics and popular opinions. Nonetheless, this remains a challenging task due to limited context and diverse name variations. In this paper, we study the problem of entity linking for Chinese news comments given mentions' spans. We hypothesize that comments often refer to entities in the corresponding news article, as well as topics involving the entities. We therefore propose a novel model, XREF, that leverages attention mechanisms to (1) pinpoint relevant context within comments, and (2) detect supporting entities from the news article. To improve training, we make two contributions: (a) we propose a supervised attention loss in addition to the standard cross entropy, and (b) we develop a weakly supervised training scheme to utilize the large-scale unlabeled corpus. Two new datasets in entertainment and product domains are collected and annotated for experiments. Our proposed method outperforms previous methods on both datasets. ""","""Paper Decision""","""All reviewers are fairly positive about the paper that deals with entity linking in Chinese online news comments. The key strengths of the paper are: using additional context from associated articles by using data augmentation, novel attention mechanism over news articles, guidance to article attention values. The paper is well written and has good results over state of the art. Reviewers pointed out suggestions for further work like trying another language, doing ablation study, testing the generalizability, etc. While all of these are good ideas to make the work more comprehensive and thorough, still the paper stands on its own merits, and should be a good addition to the conference. """,Accept
15,"""Non-Parametric Reasoning in Knowledge Bases""","['case based reasoning', 'non-parametric', 'knowledge base completion']","""We present a surprisingly simple yet accurate approach to reasoning in knowledge graphs (KGs) that requires no training , and is reminiscent of case-based reasoning in classical artificial intelligence (AI). Consider the task of finding a target entity given a source entity and a binary relation. Our approach finds multiple graph path patterns that connect similar source entities through the given relation, and looks for pattern matches starting from the query source. Using our method, we obtain new state-of-the-art accuracy, outperforming all previous models, on NELL-995 and FB-122. We also demonstrate that our model is robust in low data settings, outperforming recently proposed meta-learning approaches.""","""Paper Decision""","""This paper proposed a case-based reasoning/non-parametric approach for a widely studied knowledge base completion task: given a subject and a relation, predict the object based on a given knowledge graph. The idea is novel and simple: given the subject, it retrieves similar entities in the whole KG and corresponding reasoning paths with respect to the query relation and uses multiple paths of evidence to derive an answer. The approach has been evaluated on multiple benchmarks and demonstrates excellent performance.All the reviewers think this is a strong paper and would lay out a solid framework for future work in this direction. We recommend accepting this paper.As per the suggestions by the reviewers, it is a good idea to consider adding case-based reasoning to the title to reflect the key idea of this approach. It would be also be desired to discuss how this approach compares to other existing approaches (inference time, scalability, etc) in addition to accuracy metrics. """,Accept
16,"""Joint Reasoning for Multi-Faceted Commonsense Knowledge""",['Commonsense knowledgebase construction'],"""Commonsense knowledge (CSK) supports a variety of AI applications, from visual understanding to chatbots. Prior works on acquiring CSK, such as ConceptNet, have compiled statements that associate concepts with properties that hold for most or some of their instances. Each concept and statement is treated in isolation from others, and the only quantitative measure (or ranking) is a confidence score that the statement is valid. This paper aims to overcome these limitations by introducing a multi-faceted model of CSK statements and methods for joint reasoning over sets of inter-related statements. Our model captures four different dimensions of CSK statements: plausibility, typicality, remarkability and salience, with scoring and ranking along each dimension. For example, hyenas drinking water is typical but not salient, whereas hyenas eating carcasses is salient. For reasoning and ranking, we develop a method with soft constraints, to couple the inference over concepts that are related in a taxonomic hierarchy. The reasoning is cast into an integer linear programming (ILP), and we leverage the theory of reduction costs of a relaxed LP to compute informative rankings. Our evaluation shows that we can consolidate existing CSK collections into much cleaner and more expressive knowledge.""","""Paper Decision""","""This paper presents a method for classifying scoring mechanisms in KB resources, applied to commonsense knowledge bases. The reviewers see the application to common-sense knowledge bases as a strong point of the paper and agree that the goal is worthwhile, and that the paper is well-written. However, the reviewers had reservations about the grounding in related work, which should be improved, and maintain that the design decisions should be explained better.""",Accept
17,"""Knowledge Graph Embedding Compression""",[],"""Knowledge graph (KG) representation learning techniques that learn continuous embeddings of entities and relations in the KG have become popular in many AI applications. With a large KG, the embeddings consume a large amount of storage and memory. This is problematic and prohibits the deployment of these techniques in many real world settings. Thus, we propose an approach that compresses the KG embedding layer by representing each entity in the KG as a vector of discrete codes and then composes the embeddings from these codes. The approach can be trained end-to-end with simple modifications to any existing KG embedding technique. We evaluate the approach on various standard KG embedding evaluations and show that it achieves 50-1000x compression of embeddings with a minor loss in performance. The compressed embeddings also retain the ability to perform various reasoning tasks such as KG inference.""","""Paper Decision""","""This paper studies the problem of compressing KG embeddings, and suggests learning to discretize the embeddings and also to undo this discretization. The paper shows this helps the memory requirements without significant loss in quality.All reviewers noted that the paper is well written and presents an interesting solution to the problem of large models. The authors' responses most of the questions raised in the reviews.""",Accept
18,"""IterefinE: Iterative KG Refinement Embeddings using Symbolic Knowledge""","['Knowledge graph refinement', 'embeddings', 'inference']","""Knowledge Graphs (KGs) extracted from text sources are often noisy and lead to poor performance in downstream application tasks such as KG-based question answering. While much of the recent activity is focused on addressing the sparsity of KGs by using embeddings for inferring new facts, the issue of cleaning up of noise in KGs through KG refinement task is not as actively studied. Most successful techniques for KG refinement make use of inference rules and reasoning over ontologies. Barring a few exceptions, embeddings do not make use of ontological information, and their performance in KG refinement task is not well understood. In this paper, we present a KG refinement framework called IterefinE which iteratively combines the two techniques one which uses ontological information and inferences rules, viz.,PSL-KGI, and the KG embeddings such as ComplEx and ConvE which do not. As a result, IterefinE is able to exploit not only the ontological information to improve the quality of predictions, but also the power of KG embeddings which (implicitly) perform longer chains of reasoning. The IterefinE framework, operates in a co-training mode and results in explicit type-supervised embeddings of the refined KG from PSL-KGI which we call as TypeE-X. Our experiments over a range of KG benchmarks show that the embeddings that we produce are able to reject noisy facts from KG and at the same time infer higher quality new facts resulting in upto 9% improvement of overall weighted F1 score.""","""Paper Decision""","""This paper proposes a novel method, IterefineE, for cleaning up noise in KGs. This method combines the advantages of using ontological information and inferences rules and KG embeddings with iterative co-training. IterefineE improves the task of denoising KGs on multiple datasets. While the importance of multiple iterations is mixed, reviewers agree that the combination of two significantly different types of reasoning is a promising direction.""",Accept
19,"""Representing Joint Hierarchies with Box Embeddings""","['embeddings', 'order embeddings', 'knowledge graph embedding', 'relational learning', 'hyperbolic entailment cones', 'knowledge graphs', 'transitive relations']","""Learning representations for hierarchical and multi-relational knowledge has emerged as an active area of research. Box Embeddings [Vilnis et al., 2018, Li et al., 2019] represent concepts with hyperrectangles in pseudo-formula -dimensional space and are shown to be capable of modeling tree-like structures efficiently by training on a large subset of the transitive closure of the WordNet hypernym graph. In this work, we evaluate the capability of box embeddings to learn the transitive closure of a tree-like hierarchical relation graph with far fewer edges from the transitive closure. Box embeddings are not restricted to tree-like structures, however, and we demonstrate this by modeling the WordNet meronym graph, where nodes may have multiple parents. We further propose a method for modeling multiple relations jointly in a single embedding space using box embeddings. In all cases, our proposed method outperforms or is at par with all other embedding methods.""","""Paper Decision""","""Reviewers unanimously appreciated this paper. Please do take into account their feedback to improve the paper.From our perspective, the paper is not written in a scholarly fashion: there is so much work on hierarchical models, learning embeddings of trees, and why not give credit to these people? Please expand your related work discussion and give proper context.""",Accept
20,"""Empirical Evaluation of Pretraining Strategies for Supervised Entity Linking""","['Entity linking', 'Pre-training', 'Wikification']","""In this work, we present an entity linking model which combines a Transformer architecture with large scale pretraining from Wikipedia links. Our model achieves the state-of-the-art on two commonly used entity linking datasets: 96.7% on CoNLL and 94.9% on TAC-KBP. We present detailed analyses to understand what design choices are important for entity linking, including choices of negative entity candidates, Transformer architecture, and input perturbations. Lastly, we present promising results on more challenging settings such as end-to-end entity linking and entity linking without in-domain training data""","""Paper Decision""","""All reviewers agreed that the paper has some strengths with merits outweighing (a few) flaws. This paper investigates the use of a simple architecture for entity disambiguation, while exploring several design decisions along the way. Results show state-of-the-art performance on CoNLL (with a good candidate set) and TAC-KBP, as well as good performance on end-to-end entity linking (detecting and linking mentions).The strengths of this paper are: (1) competitive performance without domain-specific tuning, (2) extremely well done experiments touching on many related issues (negative candidate selection, noise addition, and context selection). One of the reviewers describes it as ""solidly done piece of experimental work"", which ""will be a good benchmark for future efforts"".There are two drawback of the paper. (1) the techniques in this paper by themselves aren't novel. In fact, one cannot attribute a strong technical contribution for this paper. So, if one has to accept the paper it has to be for experiments and analysis and not for the novelty. (2) there is another paper from CONLL'19 which is related. The reviewers liked the experiments in this paper better than the CONLL paper, which are much more thorough in a wider range of experimental settings. """,Accept
21,"""Revisiting Evaluation of Knowledge Base Completion Models""","['Knowledge Graph Completion', 'Link prediction', 'Calibration', 'Triple Classification']","""Representing knowledge graphs (KGs) by learning embeddings for entities and relations has provided accurate models for existing KG completion benchmarks. Although extensive research has been carried out on KG completion, because of the open-world assumption of existing KGs, previous studies rely on ranking metrics and triple classification with negative samples for the evaluation and are unable to directly assess the models on the goals of the task, completion. In this paper, we first study the shortcomings of these evaluation metrics. More specifically, we demonstrate that these metrics 1) are unreliable for estimating calibration, 2) make strong assumptions that are often violated, and 3) do not sufficiently, and consistently, differentiate embedding methods from simple approaches and from each other. To address these issues, we provide a semi-complete KG using a randomly sampled subgraph from the test and validation data of YAGO3-10, allowing us to compute accurate triple classification accuracy on this data. Conducting thorough experiments on existing models, we provide new insights and directions for the KG completion research. ""","""Paper Decision""","""This paper points issues with evaluation of knowledge base completion, proposes triple classification as an evaluation method, and introduces a new dataset to help with this. In this sense, the paper is essentially an analysis paper that focuses on the evaluation aspects of the problem.All the reviewers appreciate the analysis of evaluation metrics for KG completion and also the new dataset. The authors have updated the paper to address many of the concerns raised by in the reviews, in many cases providing additional information to make their case.""",Accept
22,"""TransINT: Embedding Implication Rules in Knowledge Graphs with Isomorphic Intersections of Linear Subspaces""","['Knowledge Graph Embedding', 'Isomorphism', 'Rules', 'Common Sense', 'Implication Rules', 'Knowledge Graph', 'Isomorphic Embedding', 'Semantics Mining', 'Rule Mining']","""Knowledge Graphs (KG), composed of entities and relations, provide a structured representation of knowledge. For easy access to statistical approaches on relational data, multiple methods to embed a KG into f(KG) R^d have been introduced. We propose TransINT, a novel and interpretable KG embedding method that isomorphically preserves the implication ordering among relations in the embedding space. Given implication rules, TransINT maps set of entities (tied by a relation) to continuous sets of vectors that are inclusion-ordered isomorphically to relation implications. With a novel parameter sharing scheme, TransINT enables automatic training on missing but implied facts without rule grounding. On a benchmark dataset, we outperform the best existing state-of-the-art rule integration embedding methods with significant margins in link Prediction and triple Classification. The angles between the continuous sets embedded by TransINT provide an interpretable way to mine semantic relatedness and implication rules among relations.""","""Paper Decision""","""This work proposes a new knowledge graph embedding method in the Trans- family, that ensures the implication ordering of relations in the embedding space. The proposed idea on viewing relations as sets of pairs of entities is interesting and provides new perspective as compared to previous KG embedding approaches. The technical content is well explained and justified. There are concerns from the reviewers on experiments and writing. The author has revised the draft to incorporate the review comments. """,Accept
23,"""Enriching Large-Scale Eventuality Knowledge Graph with Entailment Relations""","['eventuality knowledge graph', 'entailment graph', 'commonsense reasoning']","""The knowledge about entailment relations between eventualities (activities, states, and events) can be helpful for many natural language understanding tasks. Conventional acquisition methods of such knowledge cannot be adapted to large-scale entailment graphs. In this paper, we construct an eventuality entailment graph (EEG) by establishing entailment relations in a large-scale discourse-relation-based eventuality knowledge graph and build the graph for million of eventuality nodes by using a three-step approach to improve the efficiency of the construction process. Experiments demonstrate the high quality of the proposed approach.""","""Paper Decision""","""This paper proposes a novel framework for acquiring eventuality entailment knowledge to construct a knowledge graph. The multi-step construction process is well explained and has clear justification. However, the paper could be stronger if it expands more on convincing audience that such knowledge graph is a useful representation, has promising downstream applications. The work can also benefit from adding more empirical evaluation.""",Accept
24,"""Using BibTeX to Automatically Generate Labeled Data for Citation Field Extraction""","['sequence labeling', 'information extraction', 'auto-generated dataset']","""Accurate parsing of citation reference strings is crucial to automatically construct scholarly databases such as Google Scholar or Semantic Scholar. Citation field extraction (CFE) is precisely this task---given a reference label which tokens refer to the authors, venue, title, editor, journal, pages, etc. Most methods for CFE are supervised and rely on training from labeled datasets that are quite small compared to the great variety of reference formats. BibTeX, the widely used reference management tool, provides a natural method to automatically generate and label training data for CFE. In this paper, we describe a technique for using BibTeX to generate, automatically, a large-scale 41M labeled strings), labeled dataset, that is four orders of magnitude larger than the current largest CFE dataset, namely the UMass Citation Field Extraction dataset [Anzaroot and McCallum, 2013]. We experimentally demonstrate how our dataset can be used to improve the performance of the UMass CFE using a RoBERTa-based [Liu et al., 2019] model. In comparison to previous SoTA, we achieve a 24.48% relative error reduction, achieving span level F1-scores of 96.3%.""","""Paper Decision""","""The authors consider the problem of citation field extraction, which is necessary for automatically constructing scholarly databases. While there were some concerns regarding the novelty of the work, the importance of the task and dataset to be released with the work outweights these. """,Accept
25,"""Syntactic Question Abstraction and Retrieval for Data-Scarce Semantic Parsing""","['Semantic Parsing', 'NLIDB', 'WikiSQL', 'Question Answering', 'SQL', 'Information Retrieval']","""Deep learning approaches to semantic parsing require a large amount of labeled data, but annotating complex logical forms is costly. Here, we propose SYNTACTIC QUESTION ABSTRACTION & RETRIEVAL (SQAR), a method to build a neural semantic parser that translates a natural language (NL) query to a SQL logical form (LF) with less than 1,000 annotated examples. SQAR first retrieves a logical pattern from the train data by computing the similarity between NL queries and then grounds a lexical information on the retrieved pattern in order to generate the final LF. We validate SQAR by training models using various small subsets of WikiSQL train data achieving up to 4.9% higher LF accuracy compared to the previous state-of-the-art models on WikiSQL test set. We also show that by using query-similarity to retrieve logical pattern, SQAR can leverage a paraphrasing dataset achieving up to 5.9% higher LF accuracy compared to the case where SQAR is trained by using only WikiSQL data. In contrast to a simple pattern classification approach, SQAR can generate unseen logical patterns upon the addition of new examples without re-training the model. We also discuss an ideal way to create cost efficient and robust train datasets when the data distribution can be approximated under a data-hungry setting.""","""Paper Decision""","""This paper proposed a simple and effective retrieval-based approach for text-to-SQL semantic parsing for the data-scarce setting. The approach has been evaluated on the WikiSQL dataset and demonstrates gains over the previous best model SQLOVA when a small number of training examples were used. It also demonstrates a zero-shot ability to handle unseen logical patterns.All the reviewers agreed that this paper is well-written and the approach is effective and well-justified in the experiments. Therefore, we recommend the acceptance of this paper.A major concern raised among the reviewers is whether this approach can be extended to other truly small semantic parsing datasets and more compositional logic forms. This is worth exploring and can leave it to future work. """,Accept
26,"""Learning Credal Sum Product Networks""","['credal networks', 'imprecise probabilities', 'tractable learning']","""Probabilistic representations, such as Bayesian and Markov networks, are fundamental to much of statistical machine learning. Thus, learning probabilistic representations directly from data is a deep challenge, the main computational bottleneck being inference that is intractable. Tractable learning is a powerful new paradigm that attempts to learn distributions that support efficient probabilistic querying. By leveraging local structure, representations such as sum-product networks (SPNs) can capture high tree-width models with many hidden layers, essentially a deep architecture, while still admitting a range of probabilistic queries to be computable in time polynomial in the network size. While the progress is impressive, numerous data sources are incomplete, and in the presence of missing data, structure learning methods nonetheless appeal to single distributions without characterizing the loss in confidence. In recent work, credal sum-product networks, an imprecise extension of sum-product networks, were proposed to capture this robustness angle. In this work, we are interested in how such representations can be learnt and thus study how the computational machinery underlying tractable learning and inference can be generalized for imprecise probabilities. ""","""Paper Decision""","""This paper develops the first structure learning algorithm for Credal SPNs. The paper is somewhat difficult to evaluate, since the credal paradigm is so different from the usual maximum likelihood paradigm, which makes a direct empirical comparison challenging. By providing more detailed information about the uncertainty, the credal approach certainly has some merit, and while upgrading some SPN structure learning heuristics to the credal setting may not be technically challenging, they are done for the first time in this paper. On the other hand, the reviewers did find many ways in which the paper can be improved. Overall, we recommend acceptance. The authors are encouraged to improve the paper as suggested by the reviewers.""",Accept
27,"""Graph Hawkes Neural Network for Future Prediction on Temporal Knowledge Graphs""","['Hawkes process', 'dynamic graphs', 'temporal knowledge graphs', 'point processes.']","""The Hawkes process has become a standard method for modeling self-exciting event sequences with different event types. A recent work generalizing the Hawkes process to a neurally self-modulating multivariate point process enables the capturing of more complex and realistic influences of past events on the future. However, this approach is limited by the number of event types, making it impossible to model the dynamics of evolving graph sequences, where each possible link between two nodes can be considered as an event type. The problem becomes even more dramatic when links are directional and labeled, since, in this case, the number of event types scales with the number of nodes and link types. To address this issue, we propose the Graph Hawkes Neural Network that can capture the dynamics of evolving graph sequences and predict the occurrence of a fact in a future time. Extensive experiments on large-scale temporal relational databases, such as temporal knowledge graphs, demonstrate the effectiveness of our approach.""","""Paper Decision""","""This paper propose Graph Hawkes Neural Networks (GHNN) which are suited for performing inference in temporal knowledge graphs. By combining the continuous modeling provided by cLSTM with strong assumptions about how events can impact one another, GHNN shows promising results compared strong baselines on the GDELT and ICEWS14 dataset.""",Accept
28,"""Sampo: Unsupervised Knowledge Base Construction for Opinions and Implications""","['knowledge base construction', 'unsupervised', 'matrix factorization']","""Knowledge bases (KBs) have long been the backbone of many real-world applications and services. There are many KB construction (KBC) methods that can extract factual information, where relationships between entities are explicitly stated in text. However, they cannot model implications between opinions which are abundant in user-generated text such as reviews and often have to be mined. Our goal is to develop a technique to build KBs that can capture both opinions and their implications. Since it can be expensive to obtain training data to learn to extract implications for each new domain of reviews, we propose an unsupervised KBC system, Sampo, Specifically, Sampo is tailored to build KBs for domains where many reviews on the same domain are available. We generate KBs for 20 different domains using Sampo and manually evaluate KBs for 6 domains. Our experiments show that KBs generated using Sampo capture information otherwise missed by other KBC methods. Specifically, we show that our KBs can provide additional training data to fine-tune language models that are used for downstream tasks such as review comprehension.""","""Paper Decision""","""This paper addresses the task of unsupervised knowledge base construction. The reviewers like that the authors present a novel unsupervised approach, and are happy with the thorough experiments. However, they also point out that the approach could be motivated better, and that it makes many assumptions that are not explained properly. We recommend acceptance but nudge the authors to consider the reviewer suggestions.""",Accept
29,"""Contrastive Entity Linkage: Mining Variational Attributes from Large Catalogs for Entity Linkage""",[],"""Presence of near identical, but distinct, entities called entity variations makes the task of data integration challenging. For example, in the domain of grocery products, variations share the same value for attributes such as brand, manufacturer and product line, but differ in other attributes, called variational attributes, such as package size and color. Identifying variations across data sources is an important task in itself and is crucial for identifying duplicates. However, this task is challenging as the variational attributes are often present as a part of unstructured text and are domain dependent. In this work, we propose our approach, Contrastive entity linkage, to identify both entity pairs that are the same and pairs that are variations of each other. We propose a novel unsupervised approach, VarSpot, to mine domain-dependent variational attributes present in unstructured text. The proposed approach reasons about both similarities and differences between entities and can easily scale to large sources containing millions of entities. We show the generality of our approach by performing experimental evaluation on three different domains. Our approach significantly outperforms state-of-the-art learning-based and rule-based entity linkage systems by up to 4% F1 score when identifying duplicates, and up to 41% when identifying entity variations.""","""Paper Decision""","""This paper addresses the problem of unsupervised duplicate resolution of attributes for e-commerce and propose a new approach for this, which they call ""contrastive entity linking"". Overall, the reviewers agree that the paper deals with an important problem, and that it is well-written and motivated.""",Accept