paper_id,title,review,rating,confidence
1,"""Interesting New Task/Dataset with minor (addressable) issues""","""Using the notion of a concept (word or words and usage), the authors create a dataset of concept sets (collections of concepts), rationales (free text descriptions of commonsense linkings between concepts), and scenes (sentence(s) using these concepts together).They select concept sets by sampling from existing image/video captions (attempting to control for certain distributional characteristics), generating human written sentences, and finally generating rationales or justifications of the concepts in context (both via AMT).They propose a final task of constrained text generation from the concept set, of both the scene and the associated rationales, and several metrics.The authors experiment with several neural models, ranging from RNN-based approaches, to transformer-based methods (pretrained and otherwise) for language generation.While I have some small concerns about the quality of the dataset construction and the presentation in the paper, I think this will be an asset to the community.Quality:The paper itself is relatively well written, misses some related work, and has some issues mentioned in the clarity section.My biggest concern is a lack of detail about explicit quality control about the annotation process. Any training, instructions, or other measures made to ensure dataset quality should be documented, if not in the main paper, then at least the appendix.In terms of missing related work, recently the T5 model (Raffel et al., 2019) and CTRL model (Keskar et al., 2019) have been used for controlled text generation. The paper speculates but never shows that certain concepts would be over-represented if it sampled from candidate data.Clarity:Overall the paper is relatively clear. I think it would be clearer if the paper did not interchangeably use the words scene and sentence.Table 1 with dataset statistics is also missing rationale counts, which may be of interest to the community at large.The notation used in Section 3.1 about the sampling weights should be revisited - it appears that the second and third terms were reversed. The discussion about the word-weighted, inverse-concept-connectedness-weighted sampling scheme would probably be clearer if phrased explicitly in the context of hypergraphs, where each hyper-edge is a concept set.Significance:I believe that this dataset is likely to be a large benefit to the community. I have some reservations about the quality of AMT annotations, particularly when no quality control measures are described.Pros:Formalize an interesting and challenging taskCollect what appears to be a large dataset for this taskAttempt to account for distributional characteristics of the data.Use a variety of methods, including well-performing model methodsThere are unseen elements in the training/dev/test sets.The paper measures some distributional characteristics of the dataset, such as how close the concepts in it areSeveral reasonable metrics are proposed. As this is a new task, a variety can be important until the community settles on the best available measures.Cons:No documentation of annotator training or attempts at Quality Assurance/Control are made. Some sort of material to this effect would strengthen the contribution, as the major contribution is the dataset.Clarity issues when presenting methodsI am not convinced that the distributional issues the authors believe exists when sampling from real data would manifest.No measurement of dataset distribution vs. 
real world distribution.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
1,"""Official Blind Review #1""","""The paper proposes a new task of commonsense generation: given a set of concepts, e.g., (cat, eat, outdoors, apple), write a sentence with commonsense phenomena. While the evaluation set is collected through crowdsourcing, the training set comes from existing captioning dataset. The paper proposes sequence to sequence models based on transformer as baselines. The paper is clearly written and experiments are reasonable.I'm a bit uncomfortable calling image captions as sentences describing common sense. Isn't sentences from news articles or Wikipedia also contain common sense? I wished the task is motivated better -- what can we say about models that do this task well? Can we use this dataset to show improvements on tasks that are more realistic, such as MT, semantic parsing, classification, or QA? I don't see much value in adding this new ""PivotBert"" score. Does this correlate with human scores better? Is it more interpretable or intuitive? It seems to rank systems similarly to other measures such as BLEU from Table 2. Questions/Comments:In Section 3.1, why do you run a part-of-speech tags? Do you limit the concepts to be only nouns and verbs? In Section 4, be more specific about UniLM would be helpful. On a related note, how does it compare to T5?In 5.3, how many pairs each annotator annotated? The set up should be more clearly described. In the introduction, it will better to clearly mention how many human references are collected through crowdsourcing and how many are coming from the existing captioning datasets. Does the performance on unseen concept set worse than concept sets that also exist in training set?I didn't find Figure 2 particularly useful.""","""6: Marginally above acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
1,"""Interesting new task, thorough baseline experiments and evaluations""","""# SummaryThis paper introduces a new generative commonsense benchmark, CommonGen, in which a system is given a set of a noun or verb words and is expected to generate a simple sentence that describes a commonsense in our daily life. One unique challenge in this task is that it requires relational reasoning with commonsense. Spatial knowledge, object property, human behavior or social conventions, temporal knowledge and general commonsense are dominant relationship types in CommonGen. The dataset is created by carefully collecting concepts, captions, and human annotations via Amazon Mechanical Turk. They experiment several baselines including state-of-the-art UniLM and evaluate the model's performance using a variety of evaluation metrics as well as human evaluation. The experimental results show that even state-of-the-art methods are largely behind human performance. # Pros- Introduce a new large-scale generative common sense benchmark.- Thorough baseline experiments using state-of-the-art generation models and evaluation using a variety of automatic evaluations and human evaluations. # ConsI don't have major concerns. Adding more qualitative analysis or showing a pair of input concepts and output (human-annotated) sentences in Section 3.3 or somewhere would help readers to get a better sense of the task. Also, Table 5 should be moved to main page, rather than keeping it in the Appendix, as you discuss the results using a whole subsection on the main page. ""","""7: Good paper, accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
2,"""straight-forward linguistic feature integration experiments""","""The paper describes an approach that models linguistic features extracted from the entity context and applies them to the fine-grained entity type (FET) prediction task. Experiments show that incorporating models for hypernym relation detection and semantic role labelling improve the performance.* I would like to see more motivation for the FET task in the introduction. It is not clear why explicit type modelling is required for the down-stream tasks.* There are many papers that report increase in performance on the NLP tasks, such as question answering, from incorporating these and other linguistic features that should be mentioned in the related work, e.g.[1] Fabian Hommel, Philipp Cimiano, Matthias Orlikowski, Matthias Hartung: Extending Neural Question Answering with Linguistic Input Features. SemDeep@IJCAI 2019: 31-39[2] Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Dan Roth: Question Answering as Global Reasoning Over Semantic Abstractions. AAAI 2018: 1905-1914* Semantic role labelling should be illustrated with examples and clearly motivated for the FET task.* It is interesting to see dataset statistics with respect to the extracted features, e.g. how many hypernym mentions where detected, how many arguments for each of the roles in each of the datasets were extracted?* Error analysis is missing. How many errors are propagated from the previous stages?* ""the hypernyms extracted by our approach are of high precision"" What is the precision of hypernym extraction?* Gating network architecture is not clearly specified. Is it the ""MLP with two fully connected layers""? Formula 3 suggests a linear combination of vectors but the description above does not correspond to this formula.* Abstract should contain more details on the datasets and results: ""We conduct experiments on two commonly used datasets. The results show that our approach successfully improves fine-grained typing performance. """"","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
2,"""semantic relations are helpful to fine-grained entity typing""","""The paper shows that semantic relations associated with mentions can be used to improve fine-grained entity typing. The whole model contains three parts: 1) Base FET Model 2) Hypernym Relation Model 3) Verb-argument Relation Model. Experimental results show that the integrated semantic relation information improves the final performance. The comparisons are extensive. The submission is well suited to the akbc conference.""","""8: Top 50% of accepted papers, clear accept""","""5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"""
2,"""Well motivated idea with good improvement, although the method is a bit ad hoc""","""This work addresses fine-grained entity typing by leveraging semantic relations of the mention with other words in the same sentence. Specifically, the authors showed how to use hypernym relations and verb-argument relations. For the first, they used Wikidata to train a string match based candidate extraction and BERT-based verification model. For the second, they used an existing SRL system to extract relations between the verb and the mention. Then the two system each produce a prediction that is then combined with the base model through a gating network. The proposed method significantly improves over baselines. And they performed ablations studies to show the help from hypernym and verb-argument.Strength:1. The proposed approach is well motivated, and described clearly. 2. The advantage of the proposed modules (HR and VR) is validated through ablation studies. Weakness:1. The proposed method for combining leveraging different semantic relations is an ad hoc ensemble of separate systems. And each system has some other dependencies (extra data, for example, Wikidata, or external trained model, for example, AllenNLP SRL), which introduces more complexity in training. 2. It would help to show some examples to demonstrate the advantages of HR and VR. For example, what kind of sentences do they help and what kind of sentences do they hurt. Questions:Since the model combines three systems, I was wondering if the accuracy would drop, comparing to no HR or no VR, on sentences where there is no hypernym or no verb-argument structure detected. In other words, would adding HR or VR hurt performance on sentences where they only output zero vector? ""","""7: Good paper, accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
3,"""Interesting corpus, with some clarifications needed""","""The paper describes a corpus of news articles annotated for protest events. Overall, this is an interesting corpus with a lot of potential for re-use, however, the paper needs some clarifications. A key contribution of the paper is that the initial candidate document retrieval is not based purely on keyword matching, but rather uses a random sampling and active learning based approach to find relevant documents. This is motivated by the incompleteness of dictionaries for protest events. While this might be true, it would have been good to see an evaluation of this assumption with the current data. It is a bit unclear in the paper, but were the K and AL methods run over the same dataset? What are the datasets for which the document relevance precision & recall are reported on page 8?I would also like to see a more detailed comparison with more general-purpose event extraction methods. Is there a reason why methodologies such as [1] and [2] cannot be re-applied for protest event extraction?A small formatting issue: the sub-sections on page 8 need newline breaks in between.[1] Pustejovsky, James, et al. ""Temporal and event information in natural language text."" Language resources and evaluation 39.2-3 (2005): 123-164.[2] Inel, Oana, and Lora Aroyo. ""Validation methodology for expert-annotated datasets: Event annotation case study."" 2nd Conference on Language, Data and Knowledge (LDK 2019). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2019.EDIT:Thank you for addressing the issues I raised. I have changed the review to ""Accept"".""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
3,"""Very carefully designed corpus""","""After looking over authors responses, I've decided to increase my rating of the paper. The main concern I original had was sufficiently motivating the need for this specific dataset (compared to existing alternatives like ACE). The authors (in the comments below) have articulated qualitatively how ACE is insufficient, and demonstrated with experiments that generalization from ACE pretraining to this new dataset is poor. ==== EDIT ====The authors present a corpus of news articles about protest events. 10K Articles are annotated with document level labels, sentence-level labels, and token-level labels. Coarse-grained labels are Protest/Not, and fine-grained labels are things such as triggers/places/times/people/etc. 800 articles are Protest articles.This is very detailed work & I think the resource will be useful. The biggest question here is: If my focus is to work on protest event extraction, what am I gaining by using this corpus vs existing event-annotated corpora (e.g. ACE) that arent necessarily specific to protest events? Id like to see experiments of models run on ACE evaluated against this corpus & an analysis to see where the mistakes are coming from, and whether these mistakes are made by those models when trained on this new corpus.--- Below this are specific questions/concerns ---Annotation:Just a minor clarification question. For the token-level annotations, how did you represent multi-token spans annotated with the same label? For example, in stone-pelting, did you indicate stone, -, and pelting tokens with their own labels or did you somehow additionally indicate that stone-pelting is one cohesive unit?Section 4:Mild nitpick; Can you split the 3 annotation instruction sections into subsections w/ headings for easier navigation?Section 6It says your classifier restricts to the first 256 tokens in the document. But your classifier is modified to a maximum of 128 tokens. Can explain this?Why is the token extraction evaluation only for the trigger?Regarding the statement around These numbers illustrate that the assumption of a news article contain a single event is mistaken. It was mentioned earlier that this assumption is being made. Can be more clear which datasets make this assumption? Can also explain how your limit to 128 (or 256?) tokens does/doesnt make sense given multiple events occur per article?""","""8: Top 50% of accepted papers, clear accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
3,"""This paper describing a new formalism for annotating political related corpus. They provide a dataset annotated with their guideline and introduced a BERT baseline over their corpus""","""This paper provides a detailed guideline for annotating socio-political corpus. The detailed annotation of documents can be time consuming and expensive. The author in the paper proposed a pipelining framework to start annotations from higher levels and get to more detailed annotation if they exist. Along with their framework, they have provided the dataset of annotated documents, sentences and tokens showing if the protest-related language exists or not. The author also outlines the baseline line of the transformer architecture regarding the document level and sentence level classifications. The paper describes the details very clearly. The language is easy to follow. So to list the pros will be as follows:-introduction of a new framework for annotating political documents, -annotating a large scale corpus -They baseline resultsAlthough they have provided the baseline results on the document and sentence level classifications, they have not provided the results of them over the token level task. It would have been interesting to see if those results are also promising.The author has mentioned that they have three levels of annotations (document, sentence, and token ) to save time and not spent time on detailed annotations of negative labels. Can they examine how many samples are labeled negative and how much time (in percent) and money it reduced for annotations?Some minor comments:-In Page 2: I think result should change to resulted in sentence below:Moreover, the assumptions made in delivering a result dataset are not examined in diverse settings. -On page 3 : who want to use this resources. > who want to use these resources. -In page 4: We design our data collection and annotation and tool development > We design our data collection, annotation. and tool development -Page 6 : As it was mentioned above > As it is mentioned above -You are 1 page over limit, but there are some repetition in annotation manual, especially when talking about arguments of an event, you can just say as mentioned above, -The author has mentioned that they have three level of annotations (document, sentence and token ) to save time and not spent time on detailed annotations of negative labels. Can they examine how many samples are labeled negative and how much time (in percent) and money it reduced for annotations?""","""6: Marginally above acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
4,"""Good paper but needs more experimentation""","""Clarity: The paper is easy to read and well writtenOriginality: To the best of my knowledge, this is a novel work. Most of the related relevant works are cited by the authors.Significance: In this paper authors propose a mechanism to generate explanations for the inferred KBC facts. This perfectly aligns with the growing efforts on explainable AI systems. I believe that this work will be relevant to the conference audience.Quality: The overall quality is represented by my ratingDetails:In this paper, authors propose a method to generate explanations for the inferred facts for KBC task. They compare their technique with an existing rule-miner technique and show that their system produces more intuitive explanations.I have following comments/questions:1. Authors mention systems that use RL for the same task - DeepPath, MINERVA. However they don't report any quantitative comparison.2. The proposed method explains outcome of the system (i.e. it is outcome explanation). As per authors this method tend to be a bit slower compared to model explanation techniques. Some quantitative comparison would be useful.3. In Table 1 for Unsupervised setting experiment with YAGO, authors report a standard deviation of 0.384 - is this correct? If it is then it's very high.4. In the same table for FB15K the difference in performance of Semi-supervised vs Supervised setting is very high. What's the reason?5. It would be good to also see change in the performance as the amount of supervision changes in the semi-supervised setting. At the moment authors have reported only for one setting.""","""5: Marginally below acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
4,"""Generating explanations of factorization models for link prediction with template scoring""","""The authors propose a novel method OxKBC for generating post-hoc explanations of a trained factorization model for link prediction. The authors identify five different templates which explain a predicted triple (s,r,o): Similarity between tail entities, similarity between relations, both occurring simultaneously, similarity between r and the hadamard product of a two-hop relation in the KB, and o being a frequent tail entity for r. Since the predefined scoring functions do not obviously generalize between different templates, OxKBC performs a two-step computation for each predicted triple where first the best template is selected, and second that template is grounded in the knowledge graph. The authors evaluate OxKBC using a paired comparisons test with Amazon Mechanical Turk, comparing against a rule-mining baseline. Results show OxKBC to handily outperform the baseline.The method presented in the paper is interesting and, to the best of my knowledge, novel. The results are strong, the subject matter is interesting and timely, and I would like to see a version of this paper at AKBC. With that said, there is a somewhat significant design flaw in the model which should, at least, be discussed in the final version: The paper aims to explain the predictions of an existing factorization through the selection of a ""best"" instantiation of a template. However, due to the nature of factorization models, there is no guarantee that any prediction is explaining by a single instantiation, and not a disjunction or conjuction of instantiations, and OxKBC has no mechanism for identifying or addressing such cases. In other words, there may be two instantiations which are *equally good*, or two instantiations which only functions as an explanation *when both are present*, and in these cases all OxKBC can do is score each instantiation.Other comments & questions:- Is there a reason why ""entity & two-length relation similarity"" does not appear as a sixth template? It seems like an obvious continuation of the pattern of the existing templates.- The notation of T_i for a score of a template and Ti for the template being scored in Equation 2 is a bit confusing. I would suggest using another symbol to differentiate the two.""","""7: Good paper, accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
4,"""Effective approach to explaining tensor factorization models's prediction for KB completion task""","""The paper proposes a novel approach to providing an explanation for the prediction made by off-the-shelf tensor factorization models by using its scores to create an augmented weighted knowledge graph and scoring higher order paths (aka templates) as explanations. The presentation is mostly clear and the results are convincing of the proposed idea. The paper however suffers from lack of sufficient novelty and adequate experimental comparison with other state-of-the-art approaches that offer explanations.Cons:[Novelty] The paper considers path-based approach to providing explanations to predictions of a KBC model. This idea of using a path is not new. In fact, the similarity function in equation 1 is very similar to KL-REL (Shiralkar et al.). PRA (Lao et al.) and PredPath (Xi et al.) is another paper that has considered meta-paths (same as templates in current paper) for ranking explanations.[Weak baseline and lack of adequate experimental results] The baseline of rule mining seems to be old and a weak one. Although the proposed approach is meant to be faithful to its TF model, since it is a path based approach to providing explanations, it might be worth comparing it to approaches that work with the observable graph. The rule mining approach considered in the paper is appropriate, but an old one and approaches such as KL-REL (single path by Shiralkar et al.) have been proposed that are promising. It might be worth comparing to such recent explainable models. Secondly, some discussion around why/how an explanation provided by the path derived by proposed approach might correlate with statistical information summarized by the TF model will be useful. [Formulation] The underlying TF model draws upon the global, long-range statistical knowledge to derive the prediction for an example. how can this prediction be explained by a single explanation path? In practice, facts can often be explained by alternative paths and/or multiple paths that collectively provide evidence and may fail to justify the prediction individually. E.g. (X, isSpouseOf, Y) can be explained by the fact that they have a child together, or X is son-in-law of Z and Z is mother of Y. Some discussion regarding this bias to use single path is missing.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
5,"""Approximating KG entailment via a form of textual entailment""","""This paper addresses the issue of relation entailment, viz., whether a relation r in a knowledge graph entails a relation r', which the authors define as a form of relational containment (r entails r' if r is contained in set theoretical terms by r'). They then propose a data driven methods to sample a gold standard of such containments, that they use to evaluate and/or train unsupervised and supervised relation entailment models. The authors derive their gold standard from Wikidata, and a number of sampling techniques that are relatively well explained. They rely for their methods on distributed representations of both the relations, and their textual groundings (mapping the triples to bags of words and/or syntactic dependencies derived from Wikipedia snippets via a form of ""reverse entity linking"" of sorts). They then experiment with on the one hand, similarity functions and on the other hand, CNN and biLSTM encoders. Perhaps unsurprisingly, supervised models perform way better than unsupervised models (from 0.57 to 0.71 accuracy). The models are well described.This reviewer finds the experiments well described, but still incomplete. Indeed, the authors fail to assess the impact of the different input information modalities (in the input embedding layers of their neural networks) -- triple and word embeddings. Unless the reader is meant to understand that their ""base model"" in Table 3b) relies only on triple embeddings: this is not clear! Also, this reviewer would like to see results for ""text only"" models. Is this better than reasoning with the triples or with *both* signals? In similar NLP tasks (think textual entailment), one usually proceeds that way. It would also be interesting, for the sake of completeness, to consider such three cases in Table 3a) (similarity-based approaches). Last, but not least, the scores reported are sometimes quite close. Would it be possible to add the standard deviation of your scores somewhere, in particular for Table 3b) as is common in deep learning literature? This reviewer can't see yet if there was a real improvement or only an statistical fluctuation.The discussion of the results is quite informative. ""","""7: Good paper, accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
5,"""New dataset introduced along with the task of predicting entailment between relations""","""This paper introduces the task of predicting entailment between canonicalized relations in a knowledge graph. The downstream significance of this work lies in teaching models to understand abstract concepts through predicting entailment between relations, thereby understanding a hierarchy of concepts. The relations are represented using information from knowledge graphs as well as information extracted from text. A variety of methods are explored for building this representation - KGE methods such as TransE, embedding the context between textual mentions of the relation's entities and distribution based methods. The prediction task is then formulated as a ranking problem where the correct parent relation should be ranked higher than all others. The paper is well written and clear except for a few points below. Comments/Questions::I feel the nomenclature of unsupervised/supervised scoring functions is a bit misleading. It would be better suited to call the two approaches as non-parametric vs parametric methods. 1. How do the cosine and euclidean similarity metric serve as a scoring function given that they are symmetric ? 2. Is prediction for parent relations done within all the relations only in that tier or all relations?3. With regards to the relation instance propagation - if the child relations are propagated to the parent, the representation of parent would explicitly include information of the child. I might be missing something but would this not make the task of predicting parent relation trivial since they would be the most similar? ""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
5,"""Supervised and Unsupervised Relation Entailment""","""The paper introduces a method for understanding if a relation is direct entailment of another one. The authors have used the relation already exists in a well-known dataset( Wikidata). I liked the tricks they have used to construct the dataset. (e.g relation sampling, relation expansion, etc.)The paper in a way that it is a little unclear and hard to follow. For instance, there are continuous mentions of single letter references. Some of the references are explained later ( for example P^r in KL-divergence calculation.)--It might also help to have a figure showing the general idea of the model and then mentioning that you are running experiments with different settings.--maybe using the larger equations in separate lines.'--It might have been interesting to see statistics on relation entailments, such as what percent of the relations have more than children. This might also help with understanding the propagation better.In might also be interesting to see some qualitative analysis comparing the ""TransE"", DistMult and ComplEx. Are there domain-dependent. Are there scenarios that the others can outperform TransE?To sum up, The pros of this paper are as follows:-Introducing interesting aspects of analysis in the knowledge graph problems. -analysis of supervised and unsupervised methods to find the entailments in relations.And weaknesses are:-The paper is a little hard to follow. Maybe it is better to add a section to define all the repetitive terms. Also adding model figure can help. -Although I agree that the author has done plenty of experiments, probably some statistic reports on the relations can give more insight into the scope of the problem,Minor comment: Please high light the highest numbers in the tables.""","""6: Marginally above acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
6,"""The paper shows good improvement over the existing work. The paper is interesting and the idea is novel.""","""This work proposes Dolores that captures contextual dependency between entity and relation pairs. The proposed method improves existing work. The idea of the paper is to incorporate Random walks and a language model. The idea is interesting and novel. However, the explanation of the training of the method is missing.Comments- The method has a shortcoming that it does not include the last entity. Let's assume that we have a sequence e_1,r_1,e_2,r_2,...,e_n,r_n,e_n+1. For the forward LSTM, e_n+1 is not included while e1 is not included for the backward LSTM.- The loss function of the method is not defined. - How to train the method is not clear. Is the method pretrained before each task?- For the vector of Dolores, how are the multiple paths incorporated? In the experimental setting, the author generates 20 chains for each node. but how to incorporate the multiple chains is not clear- For the link prediction task, it would be better to include ConvE + Dolores.The paper shows good improvement over the existing work. The paper is interesting and the idea is novel.""","""8: Top 50% of accepted papers, clear accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
6,"""Bi-LSTM is proposed to learn knowledge graph embeddings. The key idea is not new and the evaluation has some flaws.""","""This paper presents a knowledge graph embedding approach that generates chains of entities and relations from a knowledge graph and learns the embeddings using Bi-LSTM. Results are shown to demonstrate that the proposed model can be incorporated into existing predictive models for different knowledge graph related tasks.The key idea of using recurrent nets to learn embeddings from knowledge graph paths is not new. The authors try arguing the novelty, e.g., with respect to Das et al. [2017], in terms of 1) the different goal of learning generic embeddings rather than reasoning, and 2) the different way that paths are generated. However, the model by Das et al. also has a representation learning part; path generation of the proposed method cannot be seen as a contribution either as it is from the Node2Vec work.In terms of the experimental evaluation, my main question is whether the comparison is fair. The authors compare original versions of existing methods with such methods incorporated with Dolores. This appears to be a comparison between a model without pretraining vs. the model with pretraining using Dolores. We all know that pretraining helps improve model performance and so, it is not surprising that a model incorporated with Dolores (e.g., ConvKB+Dolores) outperforms its original version and other comparison models without pretraining (e.g., Dolores, RNN-Path-entity). A more fair comparison would be comparing the effect of Dolores as a pretraining method with other pretraining methods.Some technical details in the method and experiment sections need to be clarified:- Section 3.4, ""By accepting triples or paths from certain tasks as input (not the paths generated by path generator)"" <= how exactly are paths obtained frm given tasks?- How are the weights of embeddings at different layers learned in Eq. 4?- In Section 4.1, it is mentioned that 20 chains are generated for each node. Is it always possible to extract 20 chains for any node? And, why is the parameter set to 20? How do different settings of this parameter affect the result?""","""4: Ok but not good enough - rejection""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
6,"""Representation learning yielding nice gains on a number of tasks. Some definitions could be clarified.""","""This paper presents a method of representing knowledge graph nodes and relationsby sampling paths and applying a sequence model. The approach is motivated byrecent advances in building contextualized word representations (in particularElMO) and the learned representations are applied to a number of downstreamtasks, with positive effects. This approach differs from other applications ofRNNs to path modeling in its focus on learning reusable representations bymodeling random walks, rather than attempting to learn to model paths of somespecific type.The results are compelling. Dolores seems to yield representations that can beapplied effectively to a range of downstream tasks and models. I would like tosee more discussion of the models enhanced (ConvKB is introduced in the captionof Table 3. only), and I would also like to see how much Dolores could improvethe non SOTA approaches. However, the current set of evaluations show thatDolores provides significant gains over existing work in a number of settings.Points for improvement:I found the model description to be confusing. We are told repeatedly that theapproach is building representations of [entity, relation] pairs. It is notclear from the description whether we are supposed to assume that therepresentation of this pair is decomposed into separate, concatenated, entityand relation components. From the description of the model, it seems that theoutput layer applies a softmax over all possible (entity, relation)pairs. Conversely, Figure 2 seems to illustrate a decomposition of the outputlayer into concatenated entity and relation representations and Table 5illustrates nearest neighbors of a single entity node (in context). Section 3should be adapted to very explicitly state the nature of the predictive outputlayer, and the loss that is used to train. Since Dolores' training procedure is so different from that of the downstreamtasks, I would like to see some discussion of how the authors avoid overlapbetween pre-training and test graphs for e.g. FB15k.""","""7: Good paper, accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
7,"""Official Blind Review #3""","""This work analyses how factual predictions of a Masked Language Model (MLM) such as BERT and RoBERTa are influenced by adding extra context to a query. The paper examines a variety of ways of constructing this context, spanning over settings such as adversarially constructed, generated by a language model, retrieved by supervisedly trained systems, a TF-IDF retrieved baseline and an oracle. The paper finds that enriching a query with a good context can substantially improve performance in the LAMA probe, that analyses factual predictions. Additionally, the results demonstrate that there is considerable headroom for improvement in the retrieval side, evidenced by the results using an oracle retriever. Moreover, the paper shows the importance of BERT's Next Sentence Prediciton task, showing that it makes the model robust to adversarial appended contexts.Overall, the paper is well written and the results are relevant to the community. As argued, completely relying on model's parameters for storing factual knowledge has a series of disadvantages compared to models able to retrieve relevant factual information from a corpus. This is especially relevant when this is done in an unsupervised manner, as it allows proper scaling. The experiments show clear evidence to support the claim that augmenting a query with a proper context greatly enhances performance on a factual knowledge probe. One strong point of this paper is the comparison with multiple strategies for generating contexts.The paper claims to differ from previous work by considering a fully unsupervised setting. While it is true that no extra supervision is needed for the B-RET experiments, the exact same point holds for other work such as REALM (Guu et al, 2020), which the paper mentions. REALM is unsupervisedly pre-trained (including the retrieval portion). It would also be nice to see quantitative comparisons with the contexts retrieved by this model, though it's understandable that the authors don't report this, given how recent this work is and that it is not open-source at the time of writing. Typos & other minor comments:Section 2, Language Models and Probes: It's a bit of a stretch to call modells like T5 a ""variant"" of BERT.Section 2, Open-Domain QA: ""areas"" - > area""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
7,"""Review""","""This paper shows unsupervised performance on LAMA when using various methods to obtain contexts. It is very related to the recent REALM work (which was posted a few days before this submission); both show that transformers perform quite well when given related, retrieved context. This paper does it in a fully unsupervised way, however, and includes some really interesting analysis. I really liked all of the ways the models were probed, including using a generative model to provide context. This at first seemed odd to me, but the authors provide a good justification for why this is an interesting probe in section 4.2.The authors themselves noted the limitations of the work in the paper (e.g., single tokens vs. longer answers, mentioned on page 10), so there is little for me to mention as problematic. My one minor quibble is with the ""unsupervised question answering"" section on page 10. In the first sentence of section 6, the authors are careful to state that they are talking about ""factual unsupervised cloze QA"", but there is no such hedging in the unsupervised QA section just above. There really is only evidence here for simple, factual, predicate-argument structure style questions, and using blanket, unqualified terms like ""question answering"" feels like over-claiming.This review seems very short to me; mostly I write notes about things that aren't clear, or that could be improved, or aren't true. I didn't really have anything to write about this paper. The review is short because the paper is excellent, and I learned a lot from it.""","""9: Top 15% of accepted papers, strong accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
7,"""Great insights!""","""The paper explores how the performance of BERT and DrQA changes as a result of being applied to different text snippets. The paper compares retrieved snippets, generated snippets (NLG), adversarial snippets (answers to different questions), as well as an oracle (using the correct snippet of the extraction from Wikipedia).This is a great paper that provides a lot of insights into how the quality of the underlying content affects the prediction quality. I have very little to complain. I would have appreciated but some significance analysis on the results. I want to point out that TF-IDF is a very weak retrieval model, but I understand that this is not the focus of this paper.""","""9: Top 15% of accepted papers, strong accept""","""5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"""
8,"""Well written and motivated paper. Simple and clear approach. Convincing results. Presentation, especially of experiments, should be improved. ""","""The paper studies the problem of single-relation QA. It proposes a data generation technique for domain adaptation. The model is built using a known decomposition of the problem, with emphasis on making relation prediction easy to re-train using generated data. Additional question data is generated to re-train the model for previously unseen domains. Overall, the paper is well written and motivated. The results are convincing. The paper is narrow and focused in its contribution, but the problem is significant enough to merit such focused contribution. There are some issues it would be good to get the authors input on:(a) The paper provides few examples and no figures. Both would have done much to illustrate the approach and make the paper more accessible. In fact, except the intro, there is not a single question example. This is also critical in the experimental results, which provide little insight into the numerical results (e.g., through qualitative analysis).(b) There's no evaluation (or even examples) of the generated questions. Ideally, this should be done with human evaluation. This can really help understand the gap between the system performance and oracle questions. (c) During experiments, do you train once and then tune and test for each unseen domain? Or do you include the other 5 unseen domains in the training set? (d) Some of the results are only reported in text. They should be in tables. Some numbers are just missing. When you report the performance on seen relations, it's really important to provide the numbers for other recent approaches, and not just provide the much less informative ranking. If the paper is short on space, all the tables can be merged into a single table with some separators. (e) The related work section adds little coming like this at the end, when most of the papers mentioned (if not all) were already discussed at similar level (or deeper) before. Some more minor issues that the authors should address:(a) The methods seems to require enumerating and distinguishing domains. It's not specified how this is done in KB used in the paper. This should be made clear. (b) What is ""terms"" in Section 2 referring to? Is this a non-standard way to refer to tokens. (c) For mention detection, why not use B-I-O tagging? This is the most common standard and seems like a perfect fit here. The current choice seems sub-optimal. (d) In RP, the embeddings are initialized with word2vec, but what about the entity placeholder token? Also, do you use any initialization or embedding sharing between the natural language and the relation tokens? (e) For AS, the paper mentions using a heuristic based on popularity. Does this really address the problem or maybe works just because of some artifacts in the data? It's OK to use this heuristic, but a 1-2 sentence discussion would help. (f) The first paragraph in Section 4.1 is confusing with how it sets expectations for what is described below it. For example, the mention of Wikipedia sentences is confusing. It's clarified later, but still. Again, figure and examples would help a lot. The mention of randomly initialized embeddings (next paragraph) is confusing without mentioning training. (g) Some typos: ""... we create a extract of ..."", ""... users with the same intend many ...""(h) Why take the median for evaluation? Is it strictly better than mean and stddev? 
(i) The use RQx for research questions is not working. The reader just can't remember what each is referring to. ""","""8: Top 50% of accepted papers, clear accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
8,"""Simple yet reasonable method for KBQA domain adaptation""","""This paper presents a simple approach for domain adaptation in Knowledge Graph Question Answering. The paper consider the setting where the knowledge graph used to back the QA system contains the necessary facts for a test-time domain, but the training domain didn't cover an examples that required inference over that subdomain. To bridge the gap, the paper proposed a simple procedure for constructing synthetic questions over the relations in the test domain.Pros:- The task definition considered is appealing, and identifies another area in which SimpleQuestions is not ""solved"".- The approach yields modest but consistent empirical gains across the different domains considered.- It is relatively simple with few assumptions, making it more likely to generalize.Cons:- The novelty of the paper is fairly limited. Synthetic question generation has been well studied in other areas of QA, including training models from synthetic data only. Domain adaption is also well studied; Wu et. al. (cited here) also study adaptation to unseen relations for KBQA (which is inherently closely related to unseen domain adaptation).- Though not a flaw per se, the generation method is fairly simplistic --- which might work well for something like SimpleQuestions (which hardly require natural language understanding), but not for datasets with richer language structure.- The empirical gains are small; most of the benefit seems to be coming from the improved RP network, which uses standard components.- None of the submodules are pre-trained, it would be interesting to see if using a pre-trained encoder such as a large language model (BERT, etc) would help in covering the gap in linguistic variation & understanding across domains.""","""6: Marginally above acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
8,"""A data-centric domain adaptation method for simple KBQA that is marginally novel ""","""This paper studies the problem of answering ""first-order"" questions (more on the terminology later) that correspond to a single fact in a knowledge graph (KG) and focuses on the cross-domain setting where no curated training examples are provided for the unseen test domain. The proposed base KGQA model is modified from the state-of-the-art model on SimpleQuestions from (Petrochuk and Zettlemoyer, 2018) but with the relation prediction component changed from a classification model to a ranking model to better handle unseen relations (more on this later). The key contribution is a way of generating synthetic questions for the relations in the unseen domain for data augmentation. The generation model is from (ElSahar et al., 2018) but is augmented with relation-specific keywords mined from Wikipedia via distant supervision. Evaluation on reshuffled SimpleQuestions shows that the proposed method can achieve a reasonable performance on 6 selected test domains of large to moderate scale, and the question generation strategy is better than several baselines.Strengths- Overall the paper is well-written and easy to follow- Cross-domain semantic parsing/question answering is a very important problem because of the broad applicability of the technique.- The evaluation appears to be well designed and shows some interesting and solid results.Weaknesses- Overall the technical contribution appears to be marginal: it's largely a recombination of known techniques for a simpler version of a widely-studied problem - cross-domain semantic parsing.- The paper rightfully points out the importance of the cross-domain setting. It is, however, a bit surprising to see that the discussion of related work is entirely confined to the works on SimpleQuestions. For a number of clear reasons, building semantic parsing models/training methods that can generalize across domains is a well-recognized demand and has received much attention. It is, for example, a built-in requirement for a number of recent text-to-SQL datasets like Spider. Even just focusing on knowledge graphs/bases, there has been many studies in recent few years. See several early ones listed for references in the end. I'd note that the setting of this paper is sufficiently different from most of the existing studies because it only focuses on questions that correspond to a single fact, but it'd benefit the readers to better position this work in the broader literature.- The necessity of the proposed modifications to the base KGQA model doesn't seem totally necessary to me. Why not just use the state-of-the-art model from (Petrochuk and Zettlemoyer, 2018) and augment it with the generated questions, or at least use it as a baseline? There might need a few minor adjustments to the base model but it doesn't seem to me it would be substantial. The motivation for a ranking-based relation prediction model is given as ""this way we can in principle represent any relation r R during inference time."" However, this doesn't seem to be a very convincing argument. When a new domain is added, in order to apply the proposed data augmentation we would need to re-train the KGQA model. At that point we would have known the relations in the new domain (for the purpose of data augmentation), so why couldn't we train the multi-class classifier again on the augmented data with the new relations added? 
- There are a number of places in the proposed method that builds ""popularity"" as an inductive bias into the method. For example, answer selection always selects the one with the most popular subject entity; in order for the Wikipedia-based distant supervision to work the entity pairs of a relation are required to exist on (and linked to) Wikipedia, which only a fraction of popular entities in Freebase do. Related to that, evaluation is also only conducted on domains that are well-populated in Freebase. This is not desired for cross-domain semantic parsing because: (1) it's more of an artifact of current datasets, and (2) cross-domain semantic parsing is more valuable if it could work for less popular domains (the ""long tail""); for popular domains, it's more likely one may be willing to pay the cost of data collection because the incentive is higher.Minor:- Personally I don't think ""first-order"" is the best term for this type of question answering because it's easily confused with first-order logic (the description logics behind semantic web/knowledge graphs is a subset of first-order logic). ""Simple"" or ""single-relational"", though still not perfectly precise, may be slightly better if we have to give it a name.[1] Cross-domain semantic parsing via paraphrasing - EMNLP'17[2] Decoupling structure and lexicon for zero-shot semantic parsing. EMNLP'18.""","""5: Marginally below acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
9,"""Great work""","""I am posting here my review from before the revision of the paper by the authors. All my concerns have been addressed in that revision.This paper studies how interesting negative statements can be identified for knowledge bases (KBs). The main contributions are several ideas of how to generate negative statements, and several heuristics to rank them. This paper sets foot into a much-needed domain of research. Negative statements are a very important issue for today's KBs, and the paper does not just formalize the problem and propose means to generate and rank such statements, but also provides user studies. The video of the demo is particularly impressive!I have three main issues with this submission: First, the methods seem to be geared exclusively to famous entities, and more specifically to famous humans. The peer-ranking works great for ""Which actors won the academy award"", but might work much less well on ""Which villages do not have a mayor"". The Google auto-completion, likewise, works great for ""Which football players won the Ballon d'Or"", but it is less clear how it works for ""Which classical musical pieces are not written in B flat major"" (assuming that there is such a Wikidata relation). Thus, the paper should more accurately be called ""enriching KBs with interesting negative statements about famous humans"".The second issue is more fundamental: If I understand correctly, the peer-ranking method makes the closed world assumption. It computes the attributes that peers of the target entity have, and that the target entity itself does not have. These are proposed as negative statements. However, these statements are not necessarily false -- they may just be missing from the KB. The evaluation of the method ignores that problem: It asks users to rate the negative statements based on interestingness -- but does not give the user the option to say ""This statement is actually not false, it is true"". In this way, the proposed method ignores the main problem: that of distinguishing missing information from wrong information. That is surprising, because the paper explicitly mentions that problem on page 9, complaining that the Wikidata SPARQL approach has no formal foundation due to the open world assumption. It is just the same with the first proposed approach. This is what the paper itself states on Page 10: Textual evidence is generally a stronger signal that the negative statement is truly negative -- implying that the first proposed method does not always produce correct negative statements. However, the main part of the paper does not acknowledge this or evaluate whether the produced negative information is actually negative. Only in the appendix (which is optional material), we find that the peer-based method has a ""71% out-of-sample precision"", which, if it were the required value, should be discussed in the main paper. The same goes for the value of 60% for the query-log-based method. The third issue is the evaluation: The proposed methods should be explicitly evaluated wrt. their correctness (i.e., whether they correctly identify wrong statements) in the main part of the paper. Then, they should be compared to the baseline, which is the partial completeness assumption. This is currently not done.The next question is how to rank the negative statements. 
The baseline here should be the variant of the partial completeness assumption that is used in RUDIK [Ortona et al, 2018]: It limits the partial completeness assumption to those pairs of entities that are connected in the KB. It says: ""If r(x,y) and r'(x,z) and not r(x,z), then r(x,z) is an interesting negative statement"". The proposed method should be compared to this competitor.Thus, while the paper opens a very important domain of research, I have the impression that it oversells its contribution: by ignoring the question of missing vs. wrong statements, by not comparing to competitors, and by focusing its methods exclusively on famous humans. Related work: - The ""universally negated statements"" are the opposite of the ""obligatory attributes"" investigated in [2], which thus appears relevant. - The work of [1] solves a similar problem to the submission: by predicting that a subject s has no more objects for a relation r than those of the KB, it predicts that all other s-r-o triples must be false.- It appears that the peer-ranked method is a cousin of the partial completeness assumption of AMIE, and of the popularity heuristics used in [1]. It says: ""If the KB creators took the care to annotate all your peers with this attribute, and if you had that attribute, they would for sure have annotated you as well. Since they did not, you do not have this attribute."" This is a valid and very interesting method to generate negative statements, but it would have to be stated explicitly and evaluated for correctness.Minor: - It would be great to know the weights of the scores in Definition 2 also in the main paper.- Talking of ""text extraction"" in Section 6 is a bit misleading, because it sounds as if the data was extracted from full natural language text, whereas it is actually extracted from query logs. - It would be good to clarify how the Booking.com-examples were generated (manually or with the proposed method).[1] Predicting Completeness in Knowledge Bases, WSDM 2017[2] Are All People Married? Determining Obligatory Attributes in Knowledge Bases, WWW 2018""","""9: Top 15% of accepted papers, strong accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
9,"""Very important problem, but I am not sure the paper addresses it""","""This paper studies how interesting negative statements can be identified for knowledge bases (KBs). The main contributions are several ideas of how to generate negative statements, and several heuristics to rank them. This paper sets foot into a much-needed domain of research. Negative statements are a very important issue for today's KBs, and the paper does not just formalize the problem and propose means to generate and rank such statements, but also provides user studies. The video of the demo is particularly impressive!I have three main issues with this submission: First, the methods seem to be geared exclusively to famous entities, and more specifically to famous humans. The peer-ranking works great for ""Which actors won the academy award"", but might work much less well on ""Which villages do not have a mayor"". The Google auto-completion, likewise, works great for ""Which football players won the Ballon d'Or"", but it is less clear how it works for ""Which classical musical pieces are not written in B flat major"" (assuming that there is such a Wikidata relation). Thus, the paper should more accurately be called ""enriching KBs with interesting negative statements about famous humans"".The second issue is more fundamental: If I understand correctly, the peer-ranking method makes the closed world assumption. It computes the attributes that peers of the target entity have, and that the target entity itself does not have. These are proposed as negative statements. However, these statements are not necessarily false -- they may just be missing from the KB. The evaluation of the method ignores that problem: It asks users to rate the negative statements based on interestingness -- but does not give the user the option to say ""This statement is actually not false, it is true"". In this way, the proposed method ignores the main problem: that of distinguishing missing information from wrong information. That is surprising, because the paper explicitly mentions that problem on page 9, complaining that the Wikidata SPARQL approach has no formal foundation due to the open world assumption. It is just the same with the first proposed approach. This is what the paper itself states on Page 10: Textual evidence is generally a stronger signal that the negative statement is truly negative -- implying that the first proposed method does not always produce correct negative statements. However, the main part of the paper does not acknowledge this or evaluate whether the produced negative information is actually negative. Only in the appendix (which is optional material), we find that the peer-based method has a ""71% out-of-sample precision"", which, if it were the required value, should be discussed in the main paper. The same goes for the value of 60% for the query-log-based method. The third issue is the evaluation: The proposed methods should be explicitly evaluated wrt. their correctness (i.e., whether they correctly identify wrong statements) in the main part of the paper. Then, they should be compared to the baseline, which is the partial completeness assumption. This is currently not done.The next question is how to rank the negative statements. The baseline here should be the variant of the partial completeness assumption that is used in RUDIK [Ortona et al, 2018]: It limits the partial completeness assumption to those pairs of entities that are connected in the KB. 
It says: ""If r(x,y) and r'(x,z) and not r(x,z), then r(x,z) is an interesting negative statement"". The proposed method should be compared to this competitor.Thus, while the paper opens a very important domain of research, I have the impression that it oversells its contribution: by ignoring the question of missing vs. wrong statements, by not comparing to competitors, and by focusing its methods exclusively on famous humans. Related work: - The ""universally negated statements"" are the opposite of the ""obligatory attributes"" investigated in [2], which thus appears relevant. - The work of [1] solves a similar problem to the submission: by predicting that a subject s has no more objects for a relation r than those of the KB, it predicts that all other s-r-o triples must be false.- It appears that the peer-ranked method is a cousin of the partial completeness assumption of AMIE, and of the popularity heuristics used in [1]. It says: ""If the KB creators took the care to annotate all your peers with this attribute, and if you had that attribute, they would for sure have annotated you as well. Since they did not, you do not have this attribute."" This is a valid and very interesting method to generate negative statements, but it would have to be stated explicitly and evaluated for correctness.Minor: - It would be great to know the weights of the scores in Definition 2 also in the main paper.- Talking of ""text extraction"" in Section 6 is a bit misleading, because it sounds as if the data was extracted from full natural language text, whereas it is actually extracted from query logs. - It would be good to clarify how the Booking.com-examples were generated (manually or with the proposed method).[1] Predicting Completeness in Knowledge Bases, WSDM 2017[2] Are All People Married? Determining Obligatory Attributes in Knowledge Bases, WWW 2018""","""5: Marginally below acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
9,"""Official Blind Review #1""","""The paper addresses the problem of negative statements in knowledgebase. They formalize the types of negative statements: (a) grounded statement: [s,p,o] does not exist in KB, (b) not exist [s, p, o] (there's no object that satisfy s,p]. To find negative statements, they proposes two methods (a) peer-based candidate retrieval (i.e., heuristic of finding relation that is frequently populated in nearby entities but missing in the target entity) and (b) Using search logs with meta patterns (i.e., search query logs for pattern such as ""Why XXX not"", and find retrieved queries such as ""Why XXX never won the Oscar). I agree with the motivation behind this work studying negative statement is a problem worth pursuing, especially to build a high precision QA system that does not hallucinate. Having said that, I have multiple concerns with the current version of the paper.(1) Evaluation is not rigorous. Both extrinsic evaluation (entity summarization, question answering) is very small scale. Both evaluation only on five examples. I would rather preferred the paper to focus on one evaluation, but do the study much more carefully and in larger scale, reporting statistical significant and so forth. (2) The notion of ""Interesting"" is very subjective. The paper does not even try to define what counts as an ""interesting"" negative statement. Is it highly likely fact that is not true? Does it mean that it is surprising and unknown? (3) In section 5, What is nDCG? I don't think it is defined, and I don't know what Table 5 is talking about. (4) The paper releases the negative statement datasets. I think this could be very valuable to the community if it is released with manual annotations, even for a small subset (2-3K examples). As is, this is model prediction that we don't have a good sense of accuracy, so not very useful. Minor point:- In section 5.2, it talks about randomly sampling 100 popular humans. What's the definition of ""popular""? In the sentence afterwards, it talks about ""expressible"" in Wikidata. Does it mean it involves predicate can be mapped to Wikidata by string matching? by manual matching?- In Section 4, what's the difference between ""popularity"" and ""frequency""?- It would be interesting to see the actual values for hyperparameters for Definition 2. What do you mean by ""withheld training data""?""","""3: Clear rejection""","""3: The reviewer is fairly confident that the evaluation is correct"""
9,"""Well-motivated work in a direction worth exploring""","""This paper studies constructing interesting negative statements about entities to enrich knowledge bases. The authors propose two main approaches and evaluate using both crowdsourcing and extrinsic evaluations on entity summarization and question answering.Pros:- I really like the idea of adding negative statements and the paper provides good motivations for why these are necessary for different domains and downstream tasks. I think this work lays a good starting point for a line of follow-up studies.- The authors explore two approaches from the two main regimes to generate interesting negative statements about entities utilizing some heuristics and do experiments to show their respective weaknesses and strengths.- Dataset collected is of large scale and can be potentially be used for learning tasks on interesting negative statements.Cons:- The extrinsic evaluations seem a bit synthetic and small-scale. It would be interesting to see how actually enriching a KB using these negative statements could help, for example, solving a large open-domain QA dataset.- More complicated baselines could be included such as recent transformer-based language models on the open-domain QA evaluation.In summary, I think this paper is well-written, well-motivated, and lays a good starting point for an important omitted direction in KB-related research.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
10,"""An application of existing techniques to biomedical IE task""","""This paper applies modern deep NLP methods (especially transformers) to information extraction from biomedical texts - specifically those dealing with cancer genomics. The task is to extract two pieces of information - sentences that describe ascertainment and entity/relation extraction for risk ratios. To perform this, the authors use distant supervision from manually curated KB. For ascertainment supervision, authors go into some detail, although I have no idea how supervision for relation extraction is derived. The manually curated KB seem to have standardised names for gene mutations which may not occur exactly in the document. How are they matched to entity mentions in the document itself ? Data release : Do the authors intend to release the data which may arguably the most important part of this paper ?Clarity : The section describing joint ER model needs to be re-written. The authors make token level decision for entity classification. How is this used to extract actual entities which may be multi-token ? Is it possible that a sentence has more then 2 entities (biomedical text are infamous for long sentences). If each token is classified, what is role of enumerating spans ? Why not use CRF for example ?For the relation extraction part, why is only context between two entities considered (and not the words on either side of them) in equation 3 ?In the disjoint model, what do we mean by discarding the sentence ? And what exactly are we concatenating ? Why not just use [CLS] token embedding for classification ?Evaluation of DS : Can you provide any evaluation of the efficacy of the distant supervision ? In general, how many false positives occur during matching ? Also how was distant supervision generated for Entity/relation extraction part.Cross sentence : Can you comment on how much information might be missed if we only do entity/relation extraction within a sentence ? Are there relations that may be extracted by only considering information across sentences ? Loss of Info : How much information is lost by not considering tables ? Are there risk ratios never reported in text ? How prevalent are they ?In general, I believe it is an interesting application paper that show distant supervision can be employed reasonably well in biomedical domain. But the writing leaves one a bit confused about the exact methodology.""","""6: Marginally above acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
10,"""Meaningful application, good method design, and comprehensive experiments, but would like to see a test on unlabeled data.""","""This paper applies the state-of-art NLP techniques to (semi-) automate information extraction for cancer genetics. Quality: GoodClarity: Pretty clear and easy to readOriginality: The authors claim that this the first effort to (semi-) automate the extraction of key evidence from full-text can generics papers.Significance of this work: Its significance lies in the application of NLP techniques for cancer genetics KBC. Pros:1. The potential application is meaningful, which will help physicians to construct and maintain a cancer genetics KB. 2. The designed methods and evaluations are reasonable. For extracting snippets of ascertainment text, they generate noisy labels for each sentence from human extracted snippets, and thus, convert this task to a classification task; for extracting risk estimation of germline mutation, they propose a joint model to extract gene and OR entities from each sentence and connect them by predicting each two of them has positive or negative relation. 3. The experiments are comprehensive and the results are good. They show that their methods surpass some baselines and achieve the best performance. Some examples and an ablation study are included in the Appendix. 4. Comprehensive literature review5. Good writing and clear deliveryCons:1. The potential application has not been really tested yet. Even though they show that their methods perform very well on their human-labeled dataset, I would like to see the real usage of these methods on the unlabeled data. I would suggest authors to apply their methods to construct a KB from other cancer genetics papers and do a human evaluation to see the real-world usage of these methods. Or, at least include some extracted information from unlabeled data in the appendix.2. Some details are missing:a. to derive the labels for ascertainment classification, the author defines three types of sentence representations, are they combined to compute the cos similarity?b. The author mentioned the false positives for this labeling method, I would like to see some solutions to this problem. c. What is the matthews correlation coefficient (MCC)? Why do you want to use it besides F1, P, R? ""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
10,"""Successful application of BERT-based models to new tasks, but with limited novelty in the method""","""This work address KB construction in biomedical domain. Specifically, it proposes two tasks in cancer genetics domain: (1) extracting text snippets about ascertainment; (2) extract reported risk estimates for different germline mutations. They first created distant supervision based on existing manually extracted KB. For (1), they showed that classifier using BERT (especially SciBERT) based sentence representation significantly outperforms baseline models; for (2), they used a simple combination of BERT token emebeddings and a dense layer to jointly learn to classify spans into entities and their relation type, which performs better than SVM baselines, and disjoint learning baselines. Strength:1. This paper proposes two tasks with real world applications, and prepared reasonable size datasets. 2. The paper proposed models based on BERT variants and significantly outperforms simple baselines. Weakness:1. The novelty in the method is limited because the techniques used is straightforward combination of existing approaches, for example, sciBERT, using sum of token representation as candidate entity representations, etc.2. There isn't many new insights about the methods. The advantage of joint learning and advantage of sciBERT vs BERT seem not surprising. The paper could benefit from more error analysis of the BERT-based models, or comparing more variants of how to use the BERT token representation (for example, how to combine them into entity representations), which can help the readers understand better the weakness of the current methods and potential directions for improvement. Questions:Since the ground truth is created using the distant supervision, which is imperfect, for example, the paper pointed out that there's many false positives. How do you ensure the evaluation is not influenced by the errors in the distant supervision? typos:abstract:""We propose two challenging tasks that are critical for characterizing the findings reported cancer genetics studies"" ==> ""...reported in cancer genetics...""page 5 bottom:""There can multiple metrics reported throughout the ..."" ==> ""There can be multiple...""""","""6: Marginally above acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
11,"""KBC evaluation paper, more experiments needed""","""Summary: The paper proposes a new evaluation paradigm for KB completion methods that focuses on classificiation, rather than ranking. The authors construct an alternative dataset FB13kQAQ which contains false as well as true facts. They analyse the performance of existing KBC methods (DistMult, ComplEx, etc.) on the new dataset and propose a new KBC model, which can be seen as a variant of TransE with thresholding.Whilst I agree with the general premise of developing a better way for evaluating KBC methods, I believe the paper in its current state is not ready for publication. More details below:Section 3:1. I'm not sure how useful type violation queries are, as those shouldn't be particularly hard to predict. Wouldn't it be better to look at ranked predictions of existing state-of-the-art models and extract incorrect queries that are ranked highly to create a hard dataset for future KBC work? 2. Writing quality should be improved. In particular, the dataset creation process should be described in a clearer and much more succint way.Section 4:1. No ned to describe Algorithm 1 in such detail on almost half a page. This space could be used for additional experiments (see below).Section 5:1. Since BCE is used as a loss function and a logistic sigmoid is applied to every triple, this gives a natural classification threshold of 0. a) It would be interesting to see how this simple baseline threshold compares to the tuned one. b) Additionally, one could add a relation-specific bias to every scoring function that is then learned, rather than using a separate tuning strategy for the relation-specific thresholds.2. It would be nice to see models compared with the same number of parameters, as opposed to the same embedding dimensionality (e.g. ComplEx has 2x as many parameters as DistMult for the same embedding dimensionality).3. The authors don't provide an analsys of model performances on S, M, N and F queries separately.4. Some qualitive analysis on what kind of queries particular models struggle with would be nice to see.Section 6:1. The authors don't provide a clear motivation for the newly proposed model and why it should be better than the existing models. Why does A_r have to be a diagonal positive semi-definite matrix? In general, the new model does not seem like a natural add-on to a paper focused on KBC evaluation and should perhaps be separated into a different paper and replaced by more extensive experiments (see above).2. The proposed Region model outperforms ConvE on the F1 score and performs comparably on the MRR. Since ConvE is a relatively old model at this point, the Region model should however be compared to more recent state-of-the-art models, e.g. RotatE (Sun et al., ICLR 2019.) and TuckER (Balazevic et al., EMNLP 2019). 3. How does the proposed model compare on the existing WN18RR and FB15k-237 datasets?Overall, the abstract should be made much more succint and the writing quality of the whole paper could be improved.""","""3: Clear rejection""","""5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"""
11,"""The authors convincingly argue that ranking-based metrics (like MRR) for entity-relation completion are misleading for many downstream tasks. ""","""The main contributions of the paper are a new train/test dataset that provides negative cases and queries to fulfill. Based on this dataset the authors show how to adapt a traditional IR methodology with F1, which is compared to current measurement techniques, especially MMR. Finally, the authors provide a variant of TransE that yields significant improvement in F-Score.The paper is overall interesting, well-written and easy to follow, modulo a few specific points (see below).The idea of introducing new metrics for entity-relation completion tasks that are closer to real-life downstream tasks is compelling and clearly conveyed. Good MRR rankings were useful for entity-relation predictions initially, but as quality rises, better metrics are necessary.The result section shows how to apply the new metric compared to standard metrics, which is convincing.Detailed comments: The exact process used to build the new dataset is unclear. To clarify it, it would be beneficial to report the numbers after every step. Also, it would be great to add an example for step 6 to better understand what ""answerable queries"" refers to exactly, as well as one example per query category (multiple answers, single answer, no answer).Also, it would be interesting to measure how the performance evolves with the overall size of the graph (in terms of the various measurement methods). Finally, it would be good to have a more detailed discussion regarding how the different measurement metrics differ. The text mentions that there is ""almost no correlation""; would it be possible to give substance to this claim with some numbers and to explain this further?If space is an issue to add those various points, the authors could remove or shorten the simple variant of TransE.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
11,"""Interesting new evaluation, could use more analysis""","""Summary:The authors propose a new classification-based evaluation approach for knowledge base completion models. They create new dev/test sets by adding negative examples created by a combination of filtering out entities from true facts and creating type-inconsistent entity-relation pairs. They then evaluate a handful of state-of-the-art embedding models using this new metric. Lastly, they propose a new embedding scoring function for TransE that improves performance on the classification evaluation over the original TransE.Clarity:The explanations of the dataset and model are clear, but could be shorter. The motivation/how it differs from prior work (like the classification evaluation used in Socher 2013) could use more explanation, or perhaps concrete examples.Originality and Significance:The evaluation metric and modification to TransE are both novel. This approach to classification seems like it would give a better indication of model quality than classifying (possibly) perturbed triples as has been done in the past. Pros:- Useful evaluation metric for KBC- Methodology is clearCons:- Limited analysis of the evaluation. Some more discussion/inspection of why certain models perform the way the do could help show what aspects the classification captures that MRR doesn't.Other comments:- I would be curious to see some breakdown of how models perform on the different subsets of the evaluation data. For example, I would expect the embedding models would generally do better on the type-inconsistent queries compared to the missing entities.""","""6: Marginally above acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
12,"""Novel but lacks convincing experimentation""","""The paper introduces an end-to-end model for understanding procedural texts. The novelty of the model lies in jointly identifying entity attributes and transitions of entities as described in the text. The model is evaluated on two standard datasets, ProPara and NPM-Cooking and shows that jointly modeling attributes and transitions helps.The model itself is quite straightforward; it computes an entity are representation of the text, then predicts a distribution over entity-attributes which leads to an attribute aware representation of text. Using the attribute representation of the current time step and the previous one, the model predicts the entity-transition. Since the attribute is being inherently predicted, the attribute representation is computed by marginalizing over different attribute values. I think the main weakness of the paper is in the lack of evaluation and analysis:1. In Cat-1 and Cat-2 categories in sentence-level experiments in Table 1, the proposed model lacks being previous work [Gupta & Durrett 2019b] which is fine, but does not perform analysis on why this happens. On the contrary, [Gupta & Durrett 2019b] do show that when looked at class-based accuracy, their model struggles in the ``""movement"" class. It would be important to know how the current model fares. They also note that the challenge in this dataset is when new sub-entities are formed or entities are referred to with different names. It would be important to see analysis of such kind. 2. On the NPM-Cooking dataset the paper invents a new location-prediction experiment and evaluates their model only on that. This experiment seems quite thin given that the original paper [Bosselut et al., 2018] proposes two tasks, Entity-selection and State-change, both of which the current model should be capable of performing. Computational complexity -- Since the model computes entity representation at every time-step and for each entity separately, the paper should explicitly point out the computation complexity of computing entity representations as E * T. Writing / Formatting1. Explicit reference to NPN-cooking is never given. 2. The paper contains quite a few spelling mistakes. E.g. t_loc(e)=""enginge"" in Fig 1.3. The notation is confusing sometimes. E.g. After Eq. 3. it says ""where X0 . . . Xk is the .."" which I think should be ""where S0 . . . Sk is the """"","""7: Good paper, accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
12,"""The model is reasonable but a bit complex; a simpler baseline seems to outperform it. ""","""**Summary**The paper proposes a model that reads a procedural text, and then tracks the attributes and translations of the participating entities. The tracked information is then used for answering questions about the text.The key differences from previous works are:- Both attributes (e.g., at_location(fuel) = engine) and transitions (e.g., Move(destination=outlet)) are tracked. The distribution over attribute values is first predicted based on the contextualized encoding of the entity, and then the transition type is predicted by applying an LSTM over the entity and attribute encodings.- The attribute values can be either from a closed class or a text span. This is useful for open-class attributes such as ""location"". However, note that previous work such as Gupta and Durett, 2019 (pseudo-url) also considers text spans as possible values.The approach is evaluated on ProPara and npn-Cooking datasets. The method outperforms the baselines in most categories.**Pros**1. The method looks reasonable and empirically performs well (with some caveats: see Cons 1 and 2).**Cons**While joint modeling (of attributes and transitions in this case) is intuitively attractive, a strong baseline that does not do joint modeling should be considered more seriously.1. The proposed joint model is pretty complex compared to the ET_BERT baseline, which simply embeds the formatted text and makes predictions directly. Yet ET_BERT outperforms the proposed model in the two sentence-level tasks. It is unclear if the proposed model would outperform ET_BERT if it is extended to other tasks (which can be done by changing the set of possible prediction targets from {created,moved,destroyed} to, for example, the set of all text spans).2. According to the ablation study (Table 3), the model does not seem to gain a lot from jointly modeling attributes and transitions (only a 1-2% drop for ""no attribute aware representation"" and ""no transition prediction"", whereas the differences in document-level F1 scores between models in Table 1 is much larger).**Questions**1. In Section 5.4, the reason why the proposed method lags behind ET_BERT is described as ""highly confident decisions that lead to high precision, but lower prediction rate"". Would it be possible to give more details? What would be the score if the model is forced to predict on all examples (100% prediction rate)? What are the precision/recall/F1?2. Table 3 lists a ""no class prediction"" but not a ""no span prediction"". What is the F1 when only class predictions is allowed?3. Contribution (b) in the introduction states that the model ""consistently"" predicts entity attributes and transitions. What does ""consistency"" refer to in this context? While the attributes and transitions are modeled jointly, there is no explicitly consistency check between the two.4. How often do the prediction errors belong to type 1 in Table 4 (technically correct; span boundary mismatch)?""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
12,"""Simple, yet powerful idea of recasting procedural text understanding task into a query-context reading comprehension task. Model is simple, results are quite compelling.""","""Summary of the paper=================This paper addresses the task of understanding procedural text, using a reading comprehension based approach. This work proposes jointly modeling entity attributes and transitions in order to build robust entity and attribute representations. The paper shows that using a modern pretrained language model, this system is able to beat previous approaches on ProPara and a cooking recipe based dataset (NPN-Cooking).Overall Review============The idea of formulating the procedural text understanding task as a reading comprehension task, where the question/query is encoded alongside the context is simple, yet powerful. This technique side-steps the issue of properly encoding which entities and attributes one wants the model to focus on, as well as how to extract representations for these aspects. I also found the derivation of transition representation, stemming for subsequent attribute states to be relatively simple and intuitive. Overall, this paper is generally well written, with mostly clear exposition of ideas. The results are quite strong and compelling.Pros====1. Relatively simple and powerful (re)formulation of the procedural understanding task into a reading comprehension / question-answering formulation.2. Model design is relatively simple and seems likely to be reproducible.3. Results are quite strong and compelling,4. Experimental design seems to be well done, along with interesting ablations and analyses of resultsMain points to address=================1. My main point to improve is that the paper could have a more clear separation between task (re)formulation and model implementation. Although Section 3 describes the basics of procedural text, it is only while reading the model implementation (Section 4) that the reader is exposed to the specific task re-formulation proposed in this paper.As far as I know from recent related work, none of the systems formulate the task as:[Entity query?, S0, Sk], that is, encoding the actual query in textual form as part of the input to the task. Typically the inputs to the model are simply [S0, , Sk] or (recently with Gupta and Durrett, 2019) the task input is [Entity, S0, Sk].2. In Section 4.1, the entity representation is described as: Rk(e) = BERT(Xk(e)). This is a bit confusing since its unclear what the BERT() operator is exactly. Is this the output representation of the CLS token from the Transformer? Or something else? Please add a clear description of this representation.3. My understanding is that computing A{k-1} and A{k} at each timestep is done to compute representations for T{k}. However, I think it would have been interesting to see how A{k} for one step compares to A{k-1} of the next step. Presumably they are predicting over the same set of entity attributes at the same time. But the later step has more (future) context to look at. Does this improve or degrade the quality of the attribute representation?4. Typo in Figure 1; at_loc(e)=enginge should be at_loc(e)=engine.5. Regarding the NPN-Cooking dataset, Section 5.1 does not reference where the dataset comes from. It took me a while to find the dataset, and realize NPN comes from Neural Process Networks. Since the dataset had no name in the original paper, please be more clear about what the dataset is.6. In Figure 2, there are a couple of confusing labels. 
On the top attribute representation (A{k-1}), the output is marked with [P(span{k}), P(class{k})]. Is this correct? Perhaps switched with the bottom attribute representation that refers to A{k}? 7. In Section 4.5, Inference and Training, there is a reference to a term loss_{state} which is not defined. Perhaps this was meant to be loss_{class}?""","""9: Top 15% of accepted papers, strong accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
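For readers unfamiliar with the query-as-text formulation praised in the review above ([Entity query?, S0, ..., Sk]), here is a rough sketch of how such an input could be encoded with a standard BERT tokenizer. The query template, the step texts, and the tracked entity are assumptions for illustration, not the paper's actual wording.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Procedural text (steps S0..Sk) and an entity we want to track. Toy example.
steps = ["Fuel is pumped into the engine.",
         "The fuel mixes with air.",
         "The mixture is ignited."]
entity = "fuel"

# Hypothetical query template; the paper's exact wording may differ.
query = f"Where is {entity}?"
context = " ".join(steps)

# Standard BERT sentence-pair input: [CLS] query [SEP] S0 ... Sk [SEP]
encoding = tokenizer(query, context, return_tensors="pt")
print(tokenizer.decode(encoding["input_ids"][0]))
```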
13,"""New task, dataset and model""","""The paper addresses the prediction of the hierarchical structure of organizations / institutes. The authors develop a new dataset, automatically derived from GRID (global research identifier database), and compare a set-based model against a few baseline approaches. While the task is well-defined and the dataset could potentially be interesting for the community, I have a few doubts regarding the experimental setup (to be more specific: on the choice of baseline models, on the evaluation on the test set and on the final results).Pro:The task sounds interesting and challenging. It could encourage researchers to build and enhance models that combine knowledge from different sources.Con:The task is presented as a knowledge base completion task. Under this premise, I would have expected (1) a comparison with well-known baseline models from the knowledge base completion literature, such as TransE, and (2) a manual evaluation of the results on the test set. However, the authors mention embedding-based approaches only briefly as future work and evaluate the results only based on the manually extracted test set (which is very likely incomplete).Furthermore, given the very low numbers (model results) and the examples in Table 3, it is questionable whether the proposed token-based models are a promising choice for the task.Further comments / questions for authors:- Equation on page 4: I read your motivation for the subtraction. However, I wonder whether your assumptions always hold (for example, would <f(t_s_1), intersection> - <f(t_s_2), intersection> mean that s_1 needs to contain more tokens than s_2?). Have you considered just feeding the different dot products into a feed-forward layer to allow the model to determine the best function automatically?- Why do you use character-representations for countries but token-representations for cities and states?- Page 6: in your itemized list of models, S-Trans is your proposed model and not a baseline model, right? This does not become very clear at that point.- When you use another scoring function for some of your baseline models, the comparison among models is not valid anymore. Can you do an ablation study/analysis on your model, using different scoring functions as well?- Table 3: These examples show that string-based models are not sufficient for the task. It would be very valuable to add an embedding-based baseline model to the paper.- After looking at the results in Tables 4 and 5, I'm wondering how useable the approach would actually be for knowledge base completion and whether it would make more sense to use TokSim for that downstream application instead of S-Trans (because of the higher hit@1 score)- Page 8: typo: in the last sentence before Section 4.1, there is a word missing (maybe ""outperforms""?)- Page 10: typo: BASG => BASF- The related work on set-based models should be extended.""","""6: Marginally above acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
13,"""Nice application but low originality and weak empirical evaluation ""","""The paper shows how to infer the organisational structure of an institution. That is, it presents a model for predicting the is-ancestor relationships of institutions based on their string names. To this end, it makes use of Set-Transformers to model the token overlap between the institution names. This use is nice but also not highly original. The experimental evaluation is on a single dataset only. While the authors do present some examples, and overall hierarchy or something that provides some more insights into the learned model should be provided in order to show potential issues with transitivity and connected components. The evaluation only considers known pairs. But an organisational structure should also be consistent. That is, the interesting motivation provided in the intro is not met in the experimental evaluation. Furthermore, the experimental protocol is unclear. It seems to be a single training/test split, although one should consider several random splits or even some form of cross-validation, i.e., over the connected components given.""","""5: Marginally below acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
13,"""Good work with some ambiguity and weakness in contributions and presentation""","""Paper is on modeling the prediction of ancestor relation between names of science institutions. This is on the GRID dataset which already has some hierarchical information. The proposed approach is set-based models (with neural encodings) where the overlap between two names is measured by set overlap at the unigram level. In extended experiments additional metadata like address and type of institution are also incorporated into the model (which contribute a lot to the improvements). A set of simple to intermediate baseline along with different thresholds of token overlap has been tested and the proposed model shows strong improvement in the MAP metric.Paper has a decent writing and structure. Problem and the approach has been explained and motivated well with descriptive examples. The proposed approach has fair amount novelty in using the set-based encoders with a transformer architecture which shows promising improvements against convincing baselines. However, some major aspects of the contribution is not explained well. For example, from abstract and intro, it seems that the dataset is a major contribution of this work, while it is not clear what's the actual dataset? Moreover, it is not clear if the test set is gold-standard here (specially the negative examples that seem to be constructed heuristically). If one premise of the work is to correct noise in GRID, then the same (potentially noisy) data can not be used as test set.Some questions and suggestions:1. What's the nature of the data that you've constructed? Is it pair of names with binary classes? Has a human reviewed this data (specially the heuristically constructed negative instances)? How many names? Are you releasing all of it or subset of it?2. One premise of the work is the usage of set-based models. There, don't you think unigram-level overlap measurement is just too simple? distinguishing many ancestor orgs in ""washington university"" vs. ""university of washington"" can be quite ambiguous on a unigram-based set models (without using meta data like address). Did you think of expanding to at least bigrams in some non-expensive ways? 3. Some Tables are not descriptive neither by the caption nor the description in the body. For example in Tables 1-3 (which have almost same long caption) what are those names under the baseline columns? Probably those are (incorrect) predictions of the baseline model? And probably the proposed model (e.g. set transformers) does not predict those. This needs to be clarified and not guessed. Or in Table-4, where different kinds of GRID features are getting incorporated, the description of the experiment is 2 pages earlier and caption is quite non-descriptive. And caption of Table-5 is plain non-informative.4. there is little information on the the set encoder was actually trained. is it exact replica of Lee et al. 2018?5. in table 4, why there's such major drop in the O-Trans column (is 0.01 a typo)? And why addition of the ""type"" meta-data doesn't help? is it noisy?6. Given the fairly low level of precision (even for the proposed approach), how can one actually use this?7. Minor thing: In constructing your data, how do you deal with institutions that have multiple parents (e.g. Lawrence Berkeley Lab which is part of UC-Berkeley and US department of Energy. 
Are there multiple instances of the child institution?""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
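A tiny sketch of the unigram-level set overlap questioned in point 2 of the review above, using a Jaccard-style score (the paper's exact scoring function may differ); it shows why names like ""Washington University"" and ""University of Washington"" are hard to tell apart without metadata.

```python
def unigram_overlap(a, b):
    """Set overlap of lowercased unigrams between two institution names (Jaccard)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

# The first name's unigrams are fully contained in the second, so a purely
# unigram-based set model sees the pair as highly similar in both directions,
# which is exactly the ambiguity the review points out.
print(unigram_overlap("Washington University", "University of Washington"))  # 0.666...
```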
14,"""decent approach, new datasets""","""This paper focuses on the problem of entity linking in Chinese social media. Compared to entity linking in documents, entity linking in social media poses additional problems as social media post have limited context and the language is informal. The paper proposes XREF, which overcomes these problems by utilizing additional context from comments and associated articles, and by using data augmentation. The paper is overall well written.XREF uses an attention mechanisms to pinpoint relevant context within comments, and detect supporting entities from the news article. A weakly supervised training scheme is utilized to employ unlabelled corpus. The authors also propose two new low-resource datasets. Experimental results demonstrate effectiveness of XREF over other baselines.The paper would have been stronger if results on at least one more language were reported. Discussion/comparison with the following relevant prior work will be useful.1. Entity Linking on Chinese Microblogs via Deep Neural Network, Weixin Zeng, Jiuyang Tang, Xiang Zhao2. Chinese Social Media Entity Linking Based on Effective Context with Topic Semantics, Chengfang Ma, Ying Sha, Jianlong Tan, Li Guo, Huailiang Peng""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
14,"""Entity linking across news and comments in Chinese with promising results""","""The paper describes a method to perform entity linking across news and news comments, in Chinese, using attention mechanisms to pinpoint relevant context within comments and detect supporting entities in the article body. The authors use a weakly supervised training scheme to work with a large scale corpus.The method is well described, the model has promising results compared to the state of the art, and the Chinese-language entity linking corpus is a welcome addition. Because of these reasons, the paper is a good candidate for the conference.The only suggestion I have for the camera-ready version is a discussion about the generalizability of this methodology. Is this method dependent on the article-comment structure? Would it work with other datasets, e.g. a Wikipedia page and editor discussions?Finally, I have a question about the usage of attention. Would it make sense to use other comments in addition to the article body itself for the detection of supporting entities? It seems like this could help in the case when conversations happen between commenters.""","""7: Good paper, accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
14,"""An effective model for entity-linking that leverages a related news article for context""","""Summary: This work presents a novel neural model, XREF, for entity linking in Chinese online news comments. Two new datasets of news articles and comments in entertainment and product domains are collected and annotated for evaluation on this task. The unique problem setup-up facilitates XREF to use the corresponding news article in the following ways: (a) construct a candidate entity set with high coverage since comments mostly discuss entities in the article; (b) use a novel attention mechanism over the news article;(c) Guide these article attention values using a supervised loss; Furthermore, XREF leverages unannotated articles and comments using match-based weak supervision. The model achieves improvements over existing SOTA entity linking models and strong baselines for the proposed tasks, especially for plural pronominal mentions. Pros: Authors identify a novel way to tackle the lack of context for entity-linking in social media posts: use the corresponding news article connected to the posts. The work presents an effective model to use a related/linked article when it's available. There is potential to combine XREF with other sources of context like user history for broader applications.Cons:- Comments without entities are left out in the constructed datasets. This could make the task of mention detection harder as negative samples are missing.- The paper lacks ablations for weak supervison and the different attention mechanisms proposed (comment, article)""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
15,"""Interesting Simple Approach, Strong Results""","""This paper provides a very simple approach to the problem of knowledge base completion. The idea is this - given a query (subject, relation), you find other entities similar to subject, see which other paths they can take to their corresponding object if they express the relation, and check if the subject express those paths. Object reached this way are candidate answers. The one with most paths reached is marked correct.One question I have for author is that - when the new relation discovered is entered into memory, does it come with any form of weighting that tells us how confident the model is in its prediction. For example, if we take the (Melinda Gates, works in, ?), the expressed path (ceo, based in) may not be correct for this new subject. Perhaps, a discussion on this problem will make the paper stronger.Why do we need caching ? Can't the paths be discovered in real time ? Can there be better heuristic designed that can be used to filter paths at test time (depending perhaps on the subject of the query itself) ?The authors present results on multiple datasets where they are either SOTA or competitive (I cannot comment on this with complete confidence if the author has missed any other relevant comparisons). They also perform qualitative testing to see why their model has good performance. An error analysis on this model would also make this paper stronger.In general, I like this approach for its simplicity (and generality as the authors note) and in hindsight, it seems surprising why this has never been tried before.""","""8: Top 50% of accepted papers, clear accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
15,"""A simple but well-motivated method with strong performance ""","""This paper proposes a non-parametric reasoning method for reasoning on incomplete knowledge bases. Specifically, for the task of finding a target entity given a source entity and a relation, since this specific relation might be missing for the source entity, multi-hop reasoning is required to get the answer. To get the reasoning paths, this paper proposes to first retrieve similar entities from the knowledge base that have the same outgoing relation, and then gather all possible reasoning paths from these retrieved entities. Finally, these reasoning paths extracted from other entities can be applied to the source entity in the query and get the answer. The methods proposed in this paper is simple yet very effective. They outperformed previous strong models on NELL-995 and FB-122. Moreover, because of the non-parametric property, this method is also robust in low data settings. I also like the general thinking that instead of encoding all the reasoning rules into model parameters, the case-based reasoning system might worth more attention. I think this paper gave a good initial attempt and established a good framework for future work. For example, as the author mentioned in the paper, neural relation extraction systems can be incorporated to replace the exact-string matching. Therefore, I think this paper should be accepted. ""","""8: Top 50% of accepted papers, clear accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
15,"""Good results, but lacking a discussion about the limitations ""","""This paper proposes a non-parametric approach for reasoning on knowledge graphs. The described approach CBR (Case-Based Reasoning) includes several steps including matching similar entities and extracting rules that are used during inference to select the right entity.The approach is shown to be effective in knowledge base completion (FB122, WN18R) and query-answering (NELL-992), and yield better overall results than several competitive parametric approaches including TransE, DistMul, Complex, MINERVA, ASR, KALE, GNTPs.Pros:* Evaluation in terms of performance is sufficient - comparing to several approaches, on two tasks (KBC, Query answering) and three different knowledge-graphs. * State of the art results over several recent, well-performing approaches.* Analysis of why the approach is better than one of the compared systems - MINERVA is insightful* The approach is simple, easy to understand, and its modules can easily be extended with recent ML/DL modules.* The reasoning seems interpretable to some extent since there are actual rules that are retrieved and used.Cons:* The limitations of the approach are not discussed in detail: ** What is the inference time compared to other parametric approaches? Could you include these in the paper?** With the current approach what are the limitations of the size of the graph in terms of the number of triples, entities, relations? * The title of the paper seems too general. This is not the first paper to propose non-parametric approaches for reasoning over knowledge graphs, nor it is an overview paper. If the paper is accepted the title must be changed to a more specific one. ""","""7: Good paper, accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
16,"""Interesting paper where motivation of setup and evaluation could be stronger""","""The paper describes a method for predicting different facets ('plausibility', 'typicality', 'remarkability', 'saliency') of validity for common-sense facts.The proposed method is a two-step process (1) prior weights on the facets are regressed from features (e.g. basic statistics of facts) (2a) inference rules are given that connect different facets/facts (considering textual similarity) (2b) a weighted and relaxed ILP/LP is solved to obtain facet scores for all facts considering prior weights and inference rules The paper proposes an interesting approach to an interesting type of problem, it reports a substantial amount of work, and it is very well written.However, I am not convinced regarding the following two central points:a) Motivation, Choice and Definition of facetsA motivation why exactly those 4 facets are chosen is lacking, and the motivation given for single factes (e.g. for generation of funny comments) should be stronger.Moreover, I wonder whether the references given in 3.1 really motivate/clarify the differences between the facets , e.g., is it really the case that the property captured Tandon 2014, Mishra 2017 is plausibility (SOME instances) whereas in contrast Speer and Havasi measure Typicality (MOST instances)? (I would think all three aim at typicality).It would also help to give some insight how exactly the different facets were described to the annotators, to get a feeling what is really measured in the end.b) EvaluationPairwise ranking for 800 pairs of instantiated facts with facets are annotated.70% are used for estimating weights, and 30% are used for evaluation.Now the interesting question, that should be critically investigated, is whether the main part of the pipeline (step 2, the rule-based inference system) actually improves on the simple regression model (step 1).As is reported in the paper, the regression alone model gives preference accuracy of 58%, whereas the full model gives 66%.Putting everything together, this translates into a difference of 19 cases (0.3 * (0.66-0.58) * 800), and efforts to investigate the statistical significance of this difference (type of test, p-value etc) should be reported (but are not).Other points: - there is a lot of important/central material in the appendix. Given that theoretically it should not be necessary that readers/reviewers read it, I wonder whether a 10 page conference paper is the right format for this work.Small questions/comments: - ""crisp"" noun: do I understand it correctly that only single word concepts can be subjects? why? - appendix A.3: ""partitions overlap"" - then partition is not the right word. subset?""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
16,"""Interesting idea, but strange results and not well aware of prior work""","""Summary: The authors propose a multi-facet notion of scoring common sense knowledge, arguing that it is not sufficient to have just a single confidence score, but instead propose four distinct values: plausibility, typicality, remarkability, and salience. These values are codified into a series of constraints that conform to their natural language definitions, leading to an approach for extending existing common sense KBs based on this new multi-faceted perspective. Unfortunately after a detailed inspection at the resulting dataset, the relationship between the proposed scores seems surprising: e.g. in the majority of cases, plausibility score is lower than saliency and typicality, and the scores are highly correlated among each other. Moreover, there is little detail on how the extended versions of CSK datasets were constructed and filtered. Also, some of the design choices for calculation and aggregation of basic scores (Sections 5.1 and 5.2) are not convincing and should be explained. Finally, the authors seem unaware of a significant amount of prior work which would need to be addressed to properly contextualize this contribution.Review: The paper investigates how to associate useful scores to common-sense statements. CSK databases typically provide a single confidence score, which the authors claim is too limiting. The authors primarily motivate their investigation from the downstream goal of dialogue: in such a setting, it may be that a given fact holds nearly all the time, e.g. that people breathe, but it isnt very interesting in most circumstances and not worth discussing. Other things may be interesting, but so unlikely that it would be strange to discuss it. This general observation has been made previously, such as in: Jonathan Gordon and Benjamin Van Durme. 2013. Reporting Bias and Knowledge Extraction. In Automated Knowledge Base Construction (AKBC) 2013: The 3rd Workshop on Knowledge Extraction, at CIKM The authors go beyond this, proposing 4 distinct facets of common knowledge that each deserve a score. I found this part of the paper interesting, the core idea worth pursuing. However, this part of the paper would be significantly improved with more work in relating the proposal to significant prior work. For example: L.K. Schubert and M.H. Tong, ""Extracting and evaluating general world knowledge from the Brown corpus"", Proc. of the HLT/NAACL 2003 Workshop on Text Meaning, May 31, Edmonton, Alberta, Canada. This article proposed the use of a 6-way categorical label on common sense, for purposes of human scoring. It would be nice for the authors here to discuss this, and then argue for why their facets are better. Related as well is the ordinal scale proposed in JOCI: Sheng Zhang, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2017. Ordinal Common-sense Inference. Transactions of the Association for Computational Linguistics, 5:379395. Where plausibility, for example, was proposed as an ordinal value on a continuum. This article by Schubert is the first, to my knowledge, that gave a coherent pitch on the idea of extracting common sense from a large corpus: L.K. Schubert, ""Can we derive general world knowledge from texts?"", M. Marcus (ed.), Proc. of the 2nd Int. Conf. on Human Language Technology Research (HLT 2002), March 24-27, San Diego, CA, pp. 94-97. 
Especially salient here is the work on LORE: pseudo-url. Clark and Harrison released the DART collection: Large-Scale Extraction and Use of Knowledge From Text. P. Clark, P. Harrison. Proc. Fifth Int Conf on Knowledge Capture (KCap) 2009. Maria Liakata and Stephen Pulman were another early example of a framework akin to Schubert's KNEXT: Maria Liakata and Stephen Pulman. 2002. From Trees to Predicate Argument Structures. In Proceedings of COLING. Moving to the experiments in this paper: after looking at the score values on the ConceptNet part of the dataset, I find several things surprising. For instance, even though plausibility is a superset of typicality and saliency, in 56% and 77% of cases the plausibility score is lower than typicality and saliency, respectively, on the extended part of ConceptNet. This looks unexpected given the definition of these scores and inference rules 1--2. Additionally, even though the four proposed scores are designed to decompose the original confidence measure, there seems to be a lot of interdependence between them (e.g. the correlation between plausibility and saliency is over 88% on the extended part of ConceptNet). I understand that this has a positive side effect, which is the ability to come up with a lot of inference rules between the four scores, but it should be stated more clearly whether that was the original design goal. Given that the extended CSK datasets are a major contribution of this paper, it is also surprising to see so little detail about the actual extension process, both in the paper and in the appendix. For instance, at the end of the paper the authors mention that inferred statements ... expand the original CSK data by about 50%. It is not further explained how the entirety of the inferred statements is constructed. Given the abundance of inference rules (1--14), there can be a lot of inferred statements, so we assume some filtering was used to prune noisy statements for the final dataset, but we couldn't find any details on filtering in the paper or appendix. Additionally, it is not clear what evaluation measure is used in Table 6. The description of the proposed method is also not very convincing for the following reasons: a. One of the important relationships between statements is similarity, and your model relies on it in two places -- similar properties expansion and basic scores. The similarity between two strings is measured by the cosine similarity of weighted word embeddings. But word embeddings, especially for word2vec, are not reliable in this situation. E.g., when I briefly looked into the database you released, the model gives the two words inhabit and live very different plausibility scores. b. The features for the basic score are all heuristics. Why do you set an arbitrary threshold for quasi-probability, instead of using a soft weighted average over all statements? What's the semantics behind P[s] and P[p]? To validate the effectiveness of every feature, an ablation study should be done. c. For the textual entailment score, more details are needed. For the example you provide in the paper, how would the first sentence ""Simba is a lion"" be constructed given a CSK statement? Do you use a universal template to generate sentences for all relationships, or have specific templates for every relationship? I understand the second statement Simba lives in a pride could be constructed by simple concatenation, but it's not always grammatically correct, so how do you solve this problem?
I conclude the paper approaches an interesting problem and makes algorithmic and resource suggestions, but more should be done to ground the contributions in prior work, the current results are underwhelming, and there are several un-/under-explained design decisions which need to be addressed before publication. The following are just some of the surprising examples found in basic digging through the resource, the sort that I would anticipate as part of writing the article with associated discussion: (Original ConceptNet with Dice scores) Person attempt suicide has (plausibility 0.94, typicality 0.96, remarkability 0.007); Person iron pant has (plausibility 0.95, typicality 0.96, remarkability 0.0006). Looking at the most remarkable facts in Extended ConceptNet (sorting on that value) shows a lot of odd knowledge. Looking at the most salient info about Italy, or China, also shows strange facts. Etc.""","""4: Ok but not good enough - rejection""","""5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"""
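A minimal sketch of the kind of consistency check the review describes, run against the released ConceptNet extension; the filename and column names are assumptions about the released schema, not details from the paper.

```python
# Sketch of the consistency checks described in the review, run against the
# released ConceptNet extension. The filename and column names are assumed.
import pandas as pd

df = pd.read_csv("extended_conceptnet.csv")  # hypothetical path
facets = ["plausibility", "typicality", "remarkability", "saliency"]

# Pairwise correlation among the four facet scores
print(df[facets].corr())

# Fraction of statements violating the expected ordering, where plausibility
# should upper-bound typicality and saliency
print((df["plausibility"] < df["typicality"]).mean())
print((df["plausibility"] < df["saliency"]).mean())
```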
16,"""Models for a more nuanced view of commonsense knowledge, but with some open questions""","""Synopsis:This paper attempts to classify different scoring mechanisms introduced in widely used KB resources into four facets: plausibility, typicality, remarkability, and salience. To model these scores, the authors introduce logical dependencies between these scores and use an ILP-based reasoning model (enhanced by vector-space similarity models) to learn these scores. The additional nuance of these measures is an advance in commonsense knowledge and the underlying model provides a compact and convincing set of constraints behind these scores, but some of the methodological details are missing or could be improved to allow a stronger impact.- Clarity: generally easy to read and understand, with some room for improved prose- Quality: thoughtful approach to the problem, but methodological issues raise doubts- Originality: integrates ideas from prior work in a new and novel way- Significance: meaningful contribution to CSK literatureMost knowledge base projects report a score for each fact. However, the semantics of these scores can be difficult to ascertain. In this paper, the authors propose four differing semantics for assessing these facts and a model for estimating these scores. Although these semantics have been, to some extent, been proposed in prior work, this appears to be the first work that integrates all four concepts into a single model. Unfortunately, the last facet score, salience, seems more subjective. Someone living in the African bush might have a very different view of what is salient about hyenas than an office worker, and as such separating the cultural background of the viewer from the score seems short-sighted.The choice to combine properties and objects could be supported more strongly. In particular, whether such a method is appropriate for more sophisticated knowledge representation schemes where qualifiers are present on a property, or how this method would work in more open-world settings where properties or objects can consist of multiword phrases. The use of embeddings addresses that concern, but could still be emphasized.The codification of scoring semantics is done through a set of logical rules relating the four score types to each other both for a single entity and between child and sibling entities in the commonsense knowledge base. These logical rules are used as input to an integer linear programming formulation, and although the problem is intractable, several approximations including partitioning the problem are used to allow tractable inference. One surprising choice is introducing more general statements into the candidate set, by assuming parents have the same properties as their children. Is this necessary or a hack to make the method work? The same question could be asked about word-level embeddings used to score similar properties - what happens when these aren't used, won't IDF weighting always downweight the properties since most commonsense knowledge resources use a few properties, and why not use a more sophisticated language model like RoBERTa? The choice of using ILPs rather than more flexible probabilistic frameworks that have scalable inference engines (ProbLog, PSL, ProPPR, etc.) is confusing, particularly it seems that substantial effort was required to get ILPs to work in this setting. 
The reliance on hyperparameters, specifically pseudo-formula, is not particularly clear, and it would be interesting to know if uniform rule weights diverge substantially from learned rule weights, especially since the reported ablation results suggest the priors dominate the constraints. Further details of the MTurk experiment would be helpful, particularly since some ratings may be subjective. Were there limitations on the geographical location, educational background, or other demographic characteristics to participate? Were the workers compensated fairly? How were instances sampled from large commonsense corpora to ensure balanced coverage? Some specific suggestions: - Abbreviations (ca., incl.) should be avoided in scientific prose. - Illustrating the space of plausible, typical, remarkable, and salient facts would be helpful. A Venn diagram of four concentric circles would illustrate this clearly. - Compare to more tractable modeling frameworks (e.g., probabilistic soft logic has been used for knowledge graph analysis and supports confidence values)? - Specify where child and sibling relationships are coming from in the main draft. - Provide more details about the MTurk sampling since it seems as if there is a bias towards the DICE facts. Overall, I believe this paper provides a thoughtful look at scoring commonsense knowledge and would be welcomed by the AKBC community as a step forward in improving the approach to building and evaluating commonsense. Clearer explanation of some of the methodological choices, the inclusion of additional probabilistic models, and discussion of other knowledge representation paradigms would strengthen this paper.""","""8: Top 50% of accepted papers, clear accept""","""5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"""
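To illustrate the probabilistic-soft-logic alternative the review suggests, a minimal sketch of how one facet rule could be relaxed as a weighted hinge-loss constraint in the Lukasiewicz style; the rule, scores, and weight are illustrative, not taken from the paper.

```python
# Sketch of how one facet rule (e.g. "typical(s) implies plausible(s)") would
# look as a weighted hinge-loss relaxation in the Lukasiewicz style used by
# probabilistic soft logic. Scores lie in [0, 1].
def rule_penalty(typical: float, plausible: float, weight: float = 1.0) -> float:
    # Distance to satisfaction of the implication: positive only when the
    # body truth value exceeds the head truth value.
    return weight * max(0.0, typical - plausible)

# A statement scored 0.9 typical but only 0.6 plausible incurs a penalty.
print(rule_penalty(0.9, 0.6))  # 0.3
```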
17,"""Well-written paper with some oversights""","""This paper proposes to replace a typical knowledge graph embedding approach that produces a (typically large) continuous representation with a (small) discrete representation (a KD code) and a variety of encoder/reconstruction functions. It explores several combinations of encoding and reconstruction and shows benefit on compression metrics while maintaining performance on other tasks. Overall, I would say that this is a well-written paper with some oversights. Addressing them would strengthen its case for acceptance. The work itself ignores the transformer architecture, which seems an obvious candidate for the non-linear reconstruction element. Quality:The paper itself seems well-written and addresses most obvious concerns with the work. It misses some related work, and crucially, ignores the transformer as a choice for reconstruction.Missing related work: Key-Value Memory Networks for Directly Reading Documents (Miller et al., 2016)PyTorch has a paper to cite from NeurIPS 2019: PyTorch: An Imperative Style, High-Performance Deep Learning LibraryClairity:The paper itself is generally very clear.Further elaboration about the particular choice of pseudo gradient for the tempering softmax is needed. Why not use the equivalent of the Straight-through Gumbel-softmax Estimator from (Jang et al., 2016) instead of this pseudo gradient trick?Originality:While both partitioning embedding spaces and the particular learning methods are not novel, the authors do combine them in an interesting way.Significance:It is hard to project the impact of any particular work. This particular paper has potential for helping mobile device (and other resource-constrained) users.Pros:Relatively thorough related work reviewLarge gains in compression ratiosCons:Significance gains on the primary tasks are marginalUses an LSTM but not a transformer for encoding reconstruction. Substitution of a transformer for an LSTM in PyTorch should be straightforward.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
17,"""Appropriate use of discrete representation learning for KG embedding compression ""","""Summary: This work combines knowledge graph (KG) embedding learning with discrete representation learning to compress KG embeddings. A discretization function converts continuous embeddings into a discrete KD code and a reverse-discretization function reconstructs the continuous embeddings. Two discretization training approaches (Vector Quantization [VQ] and Tempering softmax[TS] ) and reverse-discretization functions (Codebook Lookup [CL] and Nonlinear reconstruction [NL]) are proposed. The four resulting combinations are empirically evaluated for link prediction and logical inference tasks, with TS-NL performing best. TS-NL outperforms continuous counterpart on the logical inference task. Furthermore, authors run ablations on the size of the KD code and propose training guidance from continuous embeddings for faster convergence.Pros:- Discrete KD code representation confers desirable properties of interpretability - semantically similar entities are assigned nearby codes- The discretization learning method proposed here can be combined with different KG embedding learning techniques- Results suggest minimal performance decline across multiple KG applications and up to 1000x compression.Questions:- Is the LSTM used in the NL technique a bidirectional LSTM? If no, have you experimented with BiLSTMS since there seems to be nothing inherently unidirectional about the discretization function? If yes, is that the reason for two sets of parameter matrices for I/O/F gates in your LSTM model?- Is the continuous embedding dimension and the dimension of the embeddings obtained after reverse-discretization comparable?- How much more additional inference cost does your method need over the continuous embedding approach? ""","""8: Top 50% of accepted papers, clear accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
17,"""Effective techniques for compression knowledge graph embeddings without much loss in performance ""","""This paper proposes a compression method for knowledge graph embeddings. It learns discretization and reverse-discretization functions to map continuous vectors to discrete vectors and achieves huge storage reduction without much loss in performance. The learned representations are also shown to improve logical inference tasks for knowledge graph. In general, the paper is well written. The description of the method is clear and the experiments are pretty thorough. The results are encouraging, as they show that with a very high compression rate, the model performs on par with the uncompressed model, sometimes even better. One concern is the inference time and additional complexity with the introduction of LSTM-based reconstruction model, although the authors have shown that it only contributes a small runtime empirically. The LSTM module also introduces more hyper parameters. I wonder how they impact the compression performance.Some findings in the experiments are interesting. The discrete representations sometimes significantly outperform the continuous representations in the logical inference tasks. It would be nice to see some concrete examples and more analysis. Also, adding the regularization term helps a lot in terms of faster convergence. Is that true for all the KG embedding methods? How much performance gain it provides in general?""","""8: Top 50% of accepted papers, clear accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
18,"""More comparisons and insights about the results are desired""","""This paper tackles the task of knowledge-base refinement. The described method uses an approach based on co-training using a combination of two models. The first model PSL-KGI and the second model is a knowledge graph embeddings model (ComplEx or ConvE). The experiments are conducted on four datasets based on NELL, YAGO, FB15K, and WN18. The idea of combining 2 conceptually different approaches like PSL-KGI and graph-embeddings work well, however, it is not surprising that it works better than a single model. It is observed in many works (this does not need a citation) that if you combine the prediction of multiple models into an ensemble model, it would work better than a single model. This is especially true if the models have a different nature. Additionally, a co-training setup, similar to the one presented here, would expectedly boost the performance further. In that case, comparing the combined system to a single PSL-KGI or single KGE model is not enough. In order to claim some supreme performance of the method, it should be compared to similar methods that combine multiple models and ensembles of the same model types. Some additional experiments are performed which could be of interest to the community with some further analysis and insights.The analysis of the number of feedback iterations is interesting in order to know the limits of this type of refinement. It is very hard to see the difference from Figure 6 but for some of the datasets, ex. NELL it seems 6 steps do not seem to be enough to see the peak. Also, it is not clear if the difference in the performance is significant for most datasets. More insights into why more steps work or do not work are needed. The ablation of types in Table 5 might be interesting but it needs further discussions what does it mean for a KG and each ontology that for example, RNG is the most important type. How does that help us to know more about them? And how is that related to the number of instances in each type (displayed in Table 2)?Some minor comments: - It is worth noting that embeddings, with a few recent exceptions [Fatemi et al., 2019], do not make use of any form of taxonomic/ontological rules. - there is work that uses taxonomy/rules with some examples being KALE (Guo et al 2016), ASR-X (Minervini 2017), NTP (Minervini et al 2020).- In Table 5, please add the difference of the ablated results, to the overall (All) for better reading. ""","""6: Marginally above acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
18,"""Using PSL to improve graph KG embeddings""","""The authors present a method to improve the performance of graph embeddings by using PSL to reason using ontology axioms to predict the types of entities. The Iterefin method is able to the predicted types as supervision to finetune the embeddings (ComplEx and ConvE). The authors propose an iterative method where the predictions from embeddings are fed back to the PSL model to infer new types.The experiments are performed on corrupted versions of NELL, Yago3-10, FB15K-237 and wordnet using the methodology introduced in Pujara's KGI work. The experiments show substantial improvement on datasets with rich ontologies (not wordnet). The effects of iteration are minimal, so it is not clear that they are useful as some iterations result in slight improvements while others result in loss of performance.The ablation studies show that range and subclass are the most important axioms, and other have minimal or no effect. Additional details would be useful about the number of each type of constraint in the PSL model as it is not clear whether the contribution is due to the number of character of the constraint. The importance of the range constraint seems correlated to the method for introducing noise in the evaluation datasets.Pros:- interesting approach for combining two different approaches to reasoning. - good experiments to show the benefits of the method.Cons:- the claim that the iterative methods is helpful (which is part of the name of the system) is not supported by the experiments.- no data on execution times and scalability (all experiments are on small or medium size datasets)- insufficient analysis of the contribution of different axioms (table 5 is not enough).The paper is well written and easy to follow. Substantial room is spent on the analysis of the iterative method, which in my opinion is not producing the desired results. The space could be used to describe the method in more detail and include additional experiment results.""","""6: Marginally above acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
18,"""Clear goal, good results, but needs more detail""","""The paper addresses the KG refinement task, aiming to improve KGs that were built via imperfect automated processes, either by removing incorrect relationships or by adding missing links. The key insight is to augment KG embeddings not only with implicit type information, but also with explicit types produced by an ontology-based system. The resulting algorithm, TypeE-X, leverages the benefits of structured (often human-crafted) information and the versatility of continuous embeddings.The model proceeds in two stages. First, the PSL-KGI component (which takes in KG triples, ontology information and inference rules) produces type information for entities and relations. In the second stage, these are passed to the TypeE-X module, which appends this explicit type information to a) implicit type embeddings, and b) general-purpose KG embeddings. These two steps can optionally be repeated for multiple iterations, in a loop.While the high-level picture is clear, there are a few details about the information flow and implementation that are harder to figure out:- At the end of section 3, the authors write ""It is also important to note that PSL-KGI also generates a numberof candidate facts that are not originally in the KG by soft-inference over the ontology and inference rules"". It is not obvious in Figure 1 when this happens.- How is this model parameterized? What exactly is trainable? In the conclusions sections, the authors write ""we will look in ways to combine such methods at the training level"", which raises the previous question again.- How are the types produced by PSL-KGI converted to continuous representations? Is it a simple dictionary lookup?The authors validate their models on four datasets, with ontologies of various sizes. They compare against multiple baselines, including PSL-KGI alone, generic KG embeddings alone, and generic KG embeddings + implicit type embeddings, showing their work outscores previous work.One observation is that the datasets are ""prepared"" somewhat artificially (noise is programmatically inserted in the KGs, and the model is expected to detect these alterations), and it's not entirely clear how well this added noise correlates with the noise encountered in real-world KGs. It would be interesting to provide results on a downstream task (e.g. KG-based question answering) with and without KG refinement, to get an understanding of how much this step helps. However, in authors' defense, they are following the same procedure as previous work, and do make an effort to ensure the de-noising task is reasonably hard (e.g. ""half of the corrupted facts have entities that are type compatible to the relation of the fact"")The ablation studies are insightful -- they look into how the number of loop iterations affect performance on various datasets, the impact of threshold hyper-parameters, and the impact of various ontological rules.Overall, I think the paper is well written. It has a clear goal and convincing evidence to achieve it. However, I would have liked to see a clearer explanation of the algorithm and more implementation details.""","""6: Marginally above acceptance threshold""","""1: The reviewer's evaluation is an educated guess"""
18,"""Interesting augmentation of KG embeddings""","""Summary:The authors propose a new method for identifying noisy triples in knowledge graphs that combines ontological information, using Probabilistic Soft Logic, with an embedding method, such as ConvE or ComplEx. They show that combining these two approaches improves performance on four different datasets with varying amounts of ontological information provided. Clarity:The paper is generally clear and well-written. I think it would be helpful to give a little more detail on how psl-kgi works. For example, it's not entirely clear to me how it outputs the type information.Originality & Significance:As mentioned in the paper, implicit type embeddings have been incorporated into embedding methods, but more extensive ontological information has not been used in this way. They also show that doing so results in improved performance over competitive baselines for this task. Pros: - Novel use of ontological features with more recent embedding approaches for KG refinement - Performance improvement over competitive baselines Cons: - The analysis feels a bit lacking. See comments below for more thoughts here.Comments: - It doesn't seem like the iterative aspect of the model actually helps? From figure 2, it only appears to hurt performance on some datasets, and from figure 3, the change in accuracy appears to be minimal, and not obviously more than just noise. - I would be curious to see how much using the full ontology improves things vs just using explicit type information (DOM, RAN, and type labels for entities). Also, if most of the benefit comes from the type information, it might be interesting to see how much psl-kgi is actually adding, or whether you can just apply the type labels directly (for the datasets that you have them anyways). - While perhaps not in the scope of this paper, it would also be interesting to see how incorporating the ontological information affected other KG tasks, like link prediction. ""","""6: Marginally above acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
19,"""Improving box embeddings but issues with evaluation and little analysis""","""Summary: the paper is elaborating on prior work on Box Embeddings. Box Embeddings represent an object as a rectangle in an n-dimensional space, which makes it possible to represent asymmetric relations (compared with symmetric cosine similarity in regular embeddings). The contributions of the papers are: - Shows that box embeddings can predict transitive closure from its transitive reduction- Shows how to use box embeddings (or many other embeddings like order embedding and Poincare embedding) to learn multiple edge types jointly in one graph. Clarity: the paper is easy to read, but it doesn't read like one coherent story. In particular, predicting transitive closure from its transitive reduction (table 2) doesn't fit the main story that focuses on jointly learning multiple relation types (table 3). The reason could be that the results in table 2 are very good, while the results in table 3 are not as good. Originality: the ideas are not original (mostly Vilnis et al., 2018), but the set of experiments are informative and useful. Pros: - Discussing various changes to Box Embeddings training procedure that make then much better than the original work in Vilnis et al., 2018. Most of these changes are prior work though. - Showing that box embeddings can outperform other baselines by a huge margin when predicting transitive closure from its transitive reduction. - A simple and general method to convert graphs with multiple edge types into graphs that can be learned using box embeddings and other embedding methods. Cons: - Isn't predicting transitive closure from its transitive reduction a procedural operation that doesn't require a trained model? The evaluation setup of Nickel and Kiela, 2017, Ganea et al., 2018 drops random edges, which means it can run into situations where a trained model is needed to fill in missing evidence. - No analysis to explain why box embeddings does so well in predicting transitive closure compared to other embedding methods, and why it doesn't do as good in predicting multiple relation types (comparable to Order Embeddings) Notes and questions: - How did you choose the embedding size? do you think the results are agnostic to it?- #nodes = #entities^#relation_types, which means it will explode quickly as the relation types increase. Is there a way around that?- I didn't get why you need to double the number of nodes to represent both relation types. Would it be easier to use ternary (instead of binary) class classification (not related, hypernymy, meronymy)?""","""6: Marginally above acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
19,"""Interesting idea, reasonable results, but might have some flaws""","""This paper studies the properties of box embedding from three aspects: 1) how many edges from the transitive closure are necessary for training the box embeddings; 2) can box embeddings be applied to graphs that are not tree-like; 3) how to model different relations in the same space. The author set up experiments using the IsA relation (Hypernymy) and HasPART relation (Meronymy) in WordNet. The results show the effectiveness of box embedding, outperforming other embedding method including order embedding, Poincare embedding, hyperbolic entailment cones, TransE and ComplEx.I am inclined to accept this paper because:* The study explores interesting topic, and show superior properties of box embedding.* They propose a novel approach to jointly modeling different relations in the same box embedding space. But I also have several concerns, which I want the author to address:* The embedding dimension is 10 for baselines, and 5 for your model. They seem too small compared to the embedding methods I am familiar with, and make the box embedding more like a toy model. Is it standard in previous work of box embeddings?* I am wondering what is the difference between your model and previous box embedding methods when you just need to model the hypernym relations. Also, can you also compare previous box embedding methods as baselines in Table 2 and Table 3?* The title seems to focus on modeling the joint hierarchies, but this is only one point explored in the paper. Also, the proposed method actually didn't outperform order embedding method in the this joint hierarchy task. It would be better if you have some discussion in the paper on why order embedding becomes much better here. * For the visualization in Figure 5, why do you only use the hypernym hierarchy? My suggestion is to include all the three hierarchies here.""","""6: Marginally above acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
19,"""Review: Representing Joint Hierarchies with Box Embeddings""","""The authors have explored the capability of box embeddings to model hierarchical data, and have demonstrated that it provides superior performance compared to alternative methods, requiring less of the transitive closure. In terms of F1 score for measuring quality of hypernym and meronym predictions, the authors find that their box embedding method outperforms all baselines by a large margin in the single hierarchy settings.Strong points of the paper:--Reasonably well written and rigorously presented--Notwithstanding the sole use of WordNet, the baselines and experimentation left a good impression. The authors were reasonably thorough. --The writing was good, but I would have liked to see an explicit formulation of the binary cross-entropy or the regularized loss that the authors were minimizing for the sake of completeness. From what I can see, the expression has to be derived based on what the authors have written in the text.Weak points:Perhaps the most significant weakness is the exclusive use of WordNet for demonstrating effectiveness. Either a supporting dataset (e.g., in the context of a task like commonsense question answering) or another knowledge base would have lent stronger credence to the claims.Introduction was a mixture of a true introduction and related work. I think the authors should have kept their introduction at a higher levelThe last plot in the paper could have been log-log to show the trends more clearly. ""","""8: Top 50% of accepted papers, clear accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
20,"""thorough analysis of pre-training strategies for transformer-based entity linking""","""The paper describes an evaluation of several pre-training strategies for the task of entity linking, using the AIDA and TAC-KBP baselines. In particular, the authors look at the impact of entity candidate selection strategies, adding noise during pre-training, and context selection methods. The model employed for entity disambiguation is a 4-layer transformer for the language representation, with an MLP final layer to perform disambiguation. The analysis of the pre-training strategies is detailed, and could be interesting for others using the transformer architecture to perform entity linking. Minor issue, but the paper is missing a conclusion section - this could be used to discuss how these results can generalize to other methods for entity linking.""","""7: Good paper, accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
20,"""Solidly done piece of empirical work""","""This paper investigates the use of a simple architecture for entity disambiguation: encode the mention and its context with BERT, use an MLP over the mention's fenceposts to compute an embedding, then compare that embedding with embeddings of entity candidates and take the one with the highest dot product. Notably, it uses a transformer pre-trained on Wikipedia to do entity resolution, but does *not* use the BERT model or its pre-trained parameters directly. The paper deals with several design decisions along the way: how to pre-train this model on Wikipedia, how to generate candidates at train and test time, whether or not to mask the input as in BERT, and other hyperparameters. Results show state-of-the-art performance on CoNLL (with a good candidate set) and TAC-KBP, as well as good performance on end-to-end entity linking (detecting and linking mentions).This paper isn't exceptionally creative. However, it's a solidly done piece of empirical work that in my opinion should exist in the literature. While a lot of work has moved onto zero-shot settings (Ling et al./Wu et al./Logeswaran et al. that the authors cite, plus Onoe and Durrett ""Fine-Grained Entity Typing for Domain Independent Entity Linking"") or other embedding-based formulations (Mingda Chen et al. ""EntEval: A Holistic Evaluation Benchmark for Entity Representations""), a strong, conventional, up-to-date supervised baseline should exist in the literature and currently doesn't.The one idea here that seems unconventional is foregoing BERT-based pre-training and only pre-training on the entity linking task itself. This is an interesting choice but I'm not too surprised it works well: Wikipedia is already pretty big, and this approach lets you learn good entity embeddings in the same space as the transformer encoder.The experiments in this paper are quite well-done and touch on a lot of issues surrounding how the system is trained. I'm glad to see the authors use the TAC-KBP data and start to make this more standard -- it would've been nice to see other datasets like WikilinksNED or some of the older/smaller datasets from Ratinov et al. (2011) ""Local and Global Algorithms for Disambiguation to Wikipedia""). The CoNLL data is weird and limited in scope. Nevertheless, achieving state-of-the-art on this well-worn dataset is impressive.Table 4 was probably the most surprising part of the paper to me. It's a little strange that the OURS candidate selection method works poorly on TAC-KBP. It basically seems like a union of phrase table and page, right? I understand the paper's high-level point that random is somehow closer to the true TAC-KBP task, but this argument seems handwavy and doesn't seem like it should make such a large difference.BERT-style noise is also surprisingly effective during pre-training. The paper's interpretation of this makes sense.Overall, I feel like this paper deserves to be published: the results will be a good benchmark for future efforts and I can imagine other researchers using this as a starting point.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
20,"""useful analysis of pretraining strategies for supervised entity linking""","""This paper presents an empirical study of pretraining strategies for supervised entity linking. Previous works either focus on constructing general-purpose entity representations or zero-shot entity linking and do not fully explore pretraining. The paper is well written and is easy to follow. I think the findings in the paper should be of interest to the AKBC community.The proposed model achieves competitive performance even without domain-specific tuning. A detailed empirical analysis of negative candidate selection, noise addition, and context selection is presented. The proposed model is able to perform end-to-end entity linking with simple modeling and low inference cost.Missing comparison with related work on end-to-end entity linking with BERT:Investigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking, Samuel Broscheit, CoNLL'19A figure / running example to illustrate where the demonstrated benefits of pretraining come from over prior art will strengthen the paper.A discussion on potential limitations of pretraining will also be informative.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
21,"""A good analysis on popular KB-completion datasets plus a carefully labeled triple classification dataset ""","""This paper first analyzes several popular KB-completion datasets and their evaluation methods. Several issues have been highlighted and discussed, such as the assumptions made in the ranking metrics, skewed distributions on semi-inverse relations (in WN18RR & YAGO3-10), confidence scores by popular methods are not calibrated. In addition, the authors also suggest that some simple baselines are actually quite robust. Based on their finding, the author creates a binary triple classification dataset. Effectively, every triples in their dataset are examined by multiple Turkers to ensure the label quality and also to avoid the potential error due to the ""close-world"" assumption behind some existing datasets.General comments:I'm happy to see that the authors revisit the problem of how KB-completion is evaluated. Although the various potential issues of existing datasets and/or evaluation metrics are not necessarily secrete to KB-completion researchers, it is still good to identify and discuss them. While I agree most of the analysis and findings, I have to argue that the reason behind those issues is often that the use case was not carefully discussed and defined first. As a result, it is very easy to find special biases or skewed distributions of some triples, which may be exploited by different models.The proposed YAGO3-TC dataset, in my opinion, is one step towards to right direction. Setting it up as a simple binary classification problem of whether a triple is correct, avoids the implicit and incorrect ""close-world"" assumption, and thus ensures the label correctness. The task is more like fact-checking or simple question answering. However, the potential issue of this dataset is the distribution of the triples. Because it is somewhat selected by two existing methods, it could be sub-optimal compared to, say, triples generated by some human users with a specific scenario in mind.Detailed comments: 1. It is a common and well-known issue that the probabilities or confidence scores of ML models are not calibrated. It is not surprising to see that this problem also exists in KB-completion models. However, given that dev sets are available, why didn't the author apply existing calibration methods (e.g., those mentioned in Guo et al., ICML-17) to the output of the existing models? 2. Similarly, the type information can be used in conjunction with the existing models, even as a post-processing step (e.g., see [1]). The performance of existing models may be improved substantially. 3. For imbalanced class distribution, the ""accuracy"" metric is not very meaningful. Precision/Recall/F1 are better. Another alternative is the ROC analysis (False-positive rate vs. True-positive rate) if the task can be cast as an anomaly detection problem. Again, the choice of evaluation metrics depends on the underlying use-case scenario. 4. Probably too many details and discussions are put in Appendix.[1] Chang et al., Typed Tensor Decomposition of Knowledge Bases for Relation Extraction. EMNLP-2014.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
21,"""A timely and thorough analysis of evaluation of knowledge base completion and a new evaluation benchmark""","""This paper conducts a systematic and thorough analysis of a long-criticized problem of current knowledge base completion (KBC) research -- its evaluation setup. It discovers several major issues of the popular ranking metric with negative sampling and several popular benchmarks, such as unrealistic assumptions, existence of semi-inverse relationships in the KBs, suspicious model calibration behavior, and all together leading to inconclusive evaluation of true model performance. Based on the analysis, the paper further proposes that triple classification is a better metric for KBC because it's less prone to the problems from the open-world nature of KBs and their incompleteness. However, triple classification doesn't work well with randomly sampled negative examples (as shown in the paper). Therefore, this paper also collects a new benchmark from crowdsourcing based on YAGO3-10 where the positive and negative triples are examined and judged by multiple workers, hence forming a more solid ground for evaluation than random sampling. Several simple heuristic baselines are also proposed and shown to perform comparatively with state-of-the-art embedding models on the new benchmark, which shows that there's still a large room for model development.Strengthes:- This is a timely and thorough analytical study on a lone-criticized issue of a popular and important research problem- The investigations are well-designed and show many interesting insights- The new benchmark is also a very good contribution to the field and likely will be used by many studies in the future- Overall the paper is very well written and easy to followWeaknesses- If we think about the ways KBC methods may be used in practical applications, ranking metrics do have their use cases and triple classification may not be the only (or strictly better) metric. For some applications we may have specific hypotheses to validate, for which triple classification may be better. But for some other, more exploratory applications, e.g., discovering new chemical compounds for certain purpose, it may be actually preferred to produce a ranked list of hypotheses and validate the hypotheses using other experiments. Personally I think negative sampling is the true problem of ranking metric due to the incompleteness of KBs, not the ranking nature itself. - It seems that only minimal quality control of crowdsourcing was implemented. Crowd workers could make mistakes, and different workers could produce work of very different quality. The described setup seems to be prone to worker annotation errors. It is mentioned that ""to check the quality of our labels, we randomly consider 100 triples from test data in our study. As a result, 96% of these triples considered to be positive by users in our study, demonstrating the high quality of our labels"" . How was this check done?- The validation set is quite small. Not sure whether it's sufficient to ensure robust modeling decisions.- The paper could benefit from more fine-grained analysis on the new benchmark and possibly pointing out promising venues for future improvement.""","""9: Top 15% of accepted papers, strong accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
21,"""KGC evaluation analysis and classification dataset; experiments need improving""","""Summary: This paper studies the drawbacks of existing KGC evaluation methods, i.e. those based on ranking metrics and triple classification accuracy with negative samples. The authors propose a new KGC evaluation dataset (subset of YAGO3-10) and conduct extensive experiments on existing KGC models (DistMult, RotatE and TuckER).The authors provide a much needed analysis into drawbacks of ranking metrics for KGC and stress the importance of classification-based evaluation. I appreciate the authors' claim that they will realease the code and datasets upon acceptance. However, I believe the paper in its current state is not ready for publication. Questions and more detailed comments below:Section 3.11. Why do you think the existence of ""semi-inverse"" relations is a problem? This only occurs for symmetric relations (e.g. verb_group, also_see) and without these triples, I'm not sure how any model would learn that a given relation is symmetric.2. How do you get the ""0.95 MRR"" figure? Does that mean that the MRR on all other (asymmetric) relations is very low? This is possibly due to WN18RR being very unbalanced, i.e. 2 relations dominate the dataset (derivationally_related_form and hypernym, see Table 3 in [1]) and the low MRR on asymmetric relations may be due to the low performance on the hypernym relation. It would be interesting to see per relation analysis of MRR for both datasets, similar to the one in [1]. Section 3.21. Are the plots in Figure 1 obtained with the same parameter numbers for each model as in Table 1? If so, I'm not sure these numbers are comparable, as RotatE for e.g. FB15k-237 has 5x more parameters than DistMult and 3x more than TuckER.2. You don't define what the axes labels ""Ratio of positives"" and ""Mean score"" are in Figure 1.3. It is not clear to me what the main takeaway from this section is, given that the results differ so much for different negative sampling procedures. 4. Do you have any explanation for the behaviour of TuckER in Figure 1c? Section 3.31. I agree that the performance of rule-based and local approaches is higher than expected, but still, it is far from the performance of state-of-the-art models. To make the comparison fair, these approaches should be compared to the state-of-the-art models with a comparable number of parameters.Section 3.41. Why is the relative ranking of models so different for Random-N vs Careful-N in Table 2? TuckER seems to be the worst performing model on easier Random-N, but perform the best on harder Careful-N. 2. Are different types of negative sampling applied only at test time, or also at training time? Section 51. Why would you train on validation data? And how come this changes the relative ordering of performances across models? Is it because RotatE has a lot of parameters and it's esentially overfitting until you increase the amount of data? Paper should be checked for word repetitions and typos. Writing quality could be improved.[1] Balazevic et al. On Understanding Knowledge Graph Representation, pseudo-url.""","""4: Ok but not good enough - rejection""","""5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"""
22,"""Good motivation but evaluation on more datasets would be more convincing""","""This work proposes a new knowledge graph embedding method in the Trans- family, that ensures the implication ordering of relations in the embedding space. The evaluation of the proposed model is done on a single dataset - FB122, in which it outperforms previous Trans models.Pros:- The proposed method is well motivated and described in detail. - In the current evaluation, the proposed method outperforms the previous model in the same family with a large margin. - The resulted embeddings seem to encode some semantic relatedness which can be considered interpretable. Here the claim of verifying the hypothesis could be backed with more examples than the few in table 3 - by placing them in the appendix rather than our code repository.- It seems that the code will be published, although this statement is not explicitly made. Cons:- The evaluation is done only on one dataset, while related work evaluates their methods on other datasets such as WN18 and NELL. Why arent other standard datasets considered - such as WN18? Are there limitations of the model that are not discussed or what? - More details on the implementation/evaluation would be nice:-- In terms of optimization, only the standard loss is discussed. In section Automatic Grounding of Positive Triples - how exactly the implication constraints applied during training? -- What does mean we create 100 mini-batches of the training set.What is the size of FB122 and why exactly 100 mini-batches?-- What framework is used for the implementation?-- What strategy and how many configurations are used for finding the optimal hyper-parameters?Minor: The distance between Tables 2 and 3 does not seem natural and is making the reading hard. The column in Table 3 is called imb, but there is enough space to write the whole name. ""","""6: Marginally above acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
22,"""Interesting angle but not very clear and not enough evidence""","""The paper proposes the idea of a new knowledge graph embedding technique that builds on top of TransH and incorporate implication ordering. Results show that it outperforms the previous state-of-the-art method on link prediction and triple classification tasks on FB122. The idea and the new angle of looking at TransH are interesting, but the paper needs lots of revision in terms of clarity and formatting. More experiments could also be included to better show strength.Pros:- The use of the idea that relations can be viewed as sets of pairs of entities is intriguing and different from most previous KG embedding approaches- The new angle provided for TransH embedding is also worth learningCons:- Readability of this paper is low, due to intentionally adding lots of ""-vspace"" (or equivalent) to fit in the page limit. While I understand the limit is mandatory, lots of the sentences could be rephrased and figures could be rearranged instead of removing white space between lines/formulas/figures/tables, which will make it very hard for readers to follow. - The paper is not very clearly written. Some sentences, such as the last sentence in Intro paragraph 1 and the second sentence from Intro paragraph 2, are not clear to me even after finish reading the paper. Also, some figures are not very clearly illustrated and the same goes for the captions, especially figures 2 and 3. Typos and unclosed parenthesis also exist.- Experiments, although shows promising results, may not be comprehensive enough to show the strength of the proposed model. It only tests on a subset of the KG embedding tasks and compares TransINT with a few models.In summary, this paper needs to be improved in terms of clarity, readability, and the strength of experiments and is currently not ready for publication at AKBC. ""","""4: Ok but not good enough - rejection""","""3: The reviewer is fairly confident that the evaluation is correct"""
22,"""Good focused contribution, writing can be significantly improved""","""This work describes a new approach to learn KG embeddings by preserving the ""implication ordering"" among relations in the embedding space. The paper provides a cute new interpretation of TransH and then uses it to extend TransH to learn entity and relation representations. The crucial novelty of the approach is to map relations into linear subspaces whose parameters are tied using the implication ordering of relations.The paper has clear value and the experiments are sound (although only on one dataset). I find it hard to evaluate its novelty at the moment because there is a clear lack in discussion of prior work in this paper, but I like it overall.Question for authors: Is this the first method that uses the implication ordering of relations for KG representation learning?Why is the method only compared to only trans based rule integration methods? Why are the other methods not discussed and compared against?I think the related work section of this paper needs to be significantly expanded and the method should be contrasted with more works both in the experiments as well in the related work discussion.Also, why are all the evaluations only one the small FB122 dataset. Why are larger datasets not considered?Where does the ordering of relations come from? I could not find this in the paper.The paper is very compressed - the spacing between many lines is very low. I would recommend cutting down on sections 1, 2.1 and 2.2 and expanding on the rest.What are the number of parameters in this model. How does it compare to the original TransH method? How does the performance change with less/more data.page 6: what is c_i? why is this needed?The paper describes a hard parameter sharing scheme. Can we compare this to softer constraints or other variations of parameter sharing as an ablation study?I find the claim of the ""angles between the continuous sets as interpretable"" suspicious. How is this more interpretable than other methods which use vector operations or distances for the same? Particularly, how would we quantify this claim?Typos:page 5: r_i instead of r_1scalars instead of scalarpage 7:ignores instead of ignorepage 9:the tables should have TransINT^G and TransINT^NGpage 10:similarity""","""6: Marginally above acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
23,"""Method for conustrcting high-precision entailment graph""","""This paper proposes a method for automatically constructing large entailment graphs. The method seems reasonable but there are various issues regarding clarity, novelty and empirical evaluation. I think this paper is below the bar of ACL/EMNLP but focuses on a topic that is central to a conference like AKBC and thus is a good fit.The method proposed has 3 parts:a) Defining the nodes of the graph which correspond to ""events"". Unlike prior work, these nodes do not have any variables in them and can be both unary binary and ternary. Authors claim that this is an advantage since the semantic meaning is more precise but there is a big price also - using the rules is harder when they are more specific. Unfortunatley there is no real evaluation that tests whether this is indeed an issueb) Defining a local similarity score between pairs of nodes. This is based on combining a similarity score between arguments and a similarity score between predicates. The scores are relatively straightforwardOne thing I found weird was that the score for a set of arguments was defined using *or* rather than *and*, that is, it seems enough if you have two sets of aligned terms, that only one of them has high similarity to give a high score to the entire set. This seems counterintuitive and is not explained.c) Defining a procedure for going through entailment paths and adding edges between events. This is done in a fairly precision-oriented manner, as taking the transitive closure can be too noisy. Overall, during the experimental evaluation it is unclear how many edges one gains by adding the global step since in many cases the numbers given are coarse and it seems like very few new rules were addedOverall, the paper proposes a method for constructing graphs and reports some accuracy of generated rules and it seems to be doing ok. I am unsure if this is enough for AKBC, for a first tier conference there are other things that must be done:1. Some discussion on the usefulness of representation + empirical experiment. The authors change the representation such that it is more specific but this makes it applicability low. It is hard to say whether 100M rules is a lot or not, it could very well be that even though 100M sounds like a lot, trying to use these rules in an application will fail because the rules are too specific. As it is, it is hard to judge the recall of the system, and it is likely that there are many rules it does not capture and it is hard to say how many rules will actually be used in a scenario where one would want to use them.2. Clarity - various places in the paper were unclear:3.2.1: the first paragraph is not clear3.3: Why do the rules create forest? Are there no cyces?3. Experimental evaluationEvaluation is post-hoc only, that is you sample rules from the models and estimate precision. But what about recall? What do the rules cover? The authors say this is a large rule-base, but it is hard to judge. Similarly it seems a better eval. would be to show the rules are useful for some downstream application. Nowadays people are buildling less knowledge-bases of rules since it is an arduous task and moving to encoding knowledge through learning and retrieval on-the-fly. I am not convinced that buildling these KBs would be useful in a NLU task but maybe for testing probes of various sorts.Also - it seems like in 4/10 cases running the global inference did not add rules or added very little. 
It is hard to know when the number of rules is reported in millinos only. Why is that?There is no empirical comparison to prior work as far as I can see.To conclude:A method for building entailment graphs is presented resulting in tens of millions of rules that are of high precision. However, the paper does little to convince this is a useful representation and rule-base and empirical evaluation is weak.""","""6: Marginally above acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
23,"""This paper improves the prior work.""","""This paper proposes a three-step framework for acquiring eventuality entailment knowledge. Compared with existing approaches, the proposed framework can handle much larger scale eventualities in an unsupervised manner.However, it would be better to include a case study and compare it with previous work such as ASER.Comments- The domain of eventuality is commonsense. Can this paper adopt ConceptNet instead of Probase of WordNet?- It is better to include a case study and compare it with previous work such as ASER.- Do hyperparameters (e.g., threshold) affect the result?This paper improves the prior work, but analysis of the proposed method and comparisons are missing""","""6: Marginally above acceptance threshold""","""2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"""
23,"""unsupervised method for entailment in eventuality KG""","""This paper focuses on the problem of adding entailment relations in an eventuality (activities, states, events) KG. The main contribution is a pipelined unsupervised method for this problem. The method is divided into three stages: (1) decomposing eventualities - predicates (mapped to WordNet) and arguments (mapped to Probase), (2) Local inference step - aggregate entailment scores on predicates and arguments, and (3) Global inference step - use transitivity of entailments. Human evaluation demonstrates quality of the inferred entailment relations. Overall the paper is well written.The choice of aggregating scores over aligned arguments (logical OR, Equation 1) is not well motivated. If a single argument matches, the score for the set becomes 1 irrespective of other arguments. Is this expected? Isnt logical AND more suitable? If not, how does a logical OR produce high quality entailment relations? A discussion on such choices would be really useful.In Results Analysis (5.2), the lower performance for s-v compared to s-v-o-p-o is attributed to unary relations between predicates and arguments being ambiguous. However, the difference in performance is there only for global inference step and not for local inference step. It would be interesting to know why there is more drop in case of s-v.It would be interesting to know how the method compares on smaller graphs. If the method cant handle smaller graphs, it would be informative to highlight why and what sizes does it expect. In the first paragraph of Introduction, it will be useful to define eventuality when it is introduced with the help of an example (E.g., the one in Fig 1).Fig 2 caption: generated => generate""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
24,"""Acceptable results, but too much ambiguity in what they actually did""","""Update: Authors have addressed my primary concerns around the value of the contributed dataset and the experiments. I've revised my rating to Accept the paper--------The authors scraped about 6k BibTEX files from Math, Physics, and CS papers. They use it to construct a larger, automatically-generated dataset alternative to the UMASS dataset for sequence tagging of bibliographic entries (e.g. title, authors, venue, year, etc.) They perform some experiments and show some improvement on both UMASS and their new Bibtex-generated dataset.My main critique of this paper is that there are critical details missing that are important to understanding what they did. Ive organized this below in 3 sections:questions about the dataset thats generatedquestions about how they performed their experimentsquestions about how they interpreted their results----Wrong/missing Citations:In the discussion of data augmentation techniques, missing reference to Siegel et al Extracting Scientific Figures with Distantly Supervised Neural Networks. Very related in that they use LaTeX files to generate large amounts of training data for neural models meant to process scientific documents. Section 4 -- I think the reference to Peters et al 2018 for LSTM + CRF is probably wrong. If you read Thai et al, the BiLSTM CRF baseline theyre using is from Lample et al 2016. ----Questions about how they generated the dataset:Figure 1 - how are you tagging periods, hyphens, or words like In that get added in front of the booktitle field?Then, we construct a set of queries from these seeds and perform a web search to get BIBTEX files. -- Can you be more specific how issuing web searches with resulted in BibTeX files? What repositories/websites did you crawl? How did you know which files were proper to grab? The resulting dataset is 6k BibTeX files from Math, Physics and CS. Are these not just mostly from arXiv LaTeX sources? Could one simply have just downloaded the arXiv dumps and gotten way more BibTeX files? Figure 3 -- Can you explain how the generation was done for the additional markup? Its quite an important distinction whether the LaTeX compiler is producing this output or youre writing something custom to do this.To minimize noises causing by the PDF extraction process, we produce a single PDF for each citation -- Nice ideaIn Table 2 -- Might want to report the year field, since its 2nd most common field.Because this is an automatically generated dataset, how do you assess the quality of this generated dataset? One perspective is -- this is treated as data augmentation for improving training, in which case data quality doesnt really matter as long as it helps the model perform on the UMASS test set. But it also looks like youre reporting performance on the Bibtex test set. How does this dataset hold up as a test set in terms of quality vs human-curated UMASS?------Questions about how they performed their experiments:In Training datasets section, did you mean validation instead of cross validation sets? Seems unnecessary to perform cross validation on this size of dataset.The UMASS dataset has 38 entity types:6 coarse grain labels (reference_marker, authors, title, venue, date, reference_id). Venue can be split into 24 fine grain labels (e.g. volume, pages, publisher, editor, etc)Author and editor names can be also split into 4 finer-grain labels (e.g. 
first, middle, last, affix)Dates can be segmented into year & month.In your new BibTex dataset, there are 59 segment labels that dont necessarily map to the labels in UMASS. Youre missing discussion about how you consolidated these differences. Did you map select labels in one dataset to another? Or did you treat the two datasets as separate label spaces? If so, how did you actually perform those Roberta experiments where you trained on both datasets?Why no experiment without the LM pretraining? Because you only performed UMASS vs (UMASS + LM Pretraining + Bibtex), its unclear any gains in the latter setting are due to continued LM pretraining to adapt Roberta to the bibliographic-entry-parsing domain, or due to the labeled data youve curated. For example, if domain adaptation was key here (and the labeled data didnt matter), could one have also performed LM pretraining on UMASS, or simply performed LM pretraining on a large number of bibliographic entries without going through all of the complicated Bibtex parsing to get labels? The paper also lacks clarity about how the (UMASS + LM Pretraining + Bibtex) was performed. Was there a particular order in which it happened? Was it all done at the same time in a multitask manner? How did you handle the vastly different sizes of the UMASS and Bibtex dataset? How did you handle the differing label spaces?------Questions about how they interpreted their results:It looks like LSTM+CRF with ELmo features is substantially better on UMASS than Roberta. This makes me question whether the improvement from training on Bibtex couldve also happened with an LSTM+CRF (elmo) model rather than Roberta?Suppose the results showed that training on Bibtex helps for both Roberta & LSTM+CRF (elmo) models. Then that would be a great result in favor of the Bibtex data being useful. But if this result only held for Roberta, there might be something else happening here.When reporting UMASS results, why does it say 24 classes? UMASS has 38 classes. What happened to the other classes? Also, when referencing Table 4 results, why does it contain label types that arent in UMASS dataset (e.g. booktitle)? What model is SOTA referring to in Table 4? Its not actually clear from Table 3.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
24,"""Nice contribution for AKBC - use distant supervision to label a large amount of data for CFE and improve performance""","""This paper is quite straightforward and makes a lot of sense for AKBC.The authors note that the task of parsing reference strings is important nowadays for scientists and that the main dataset for training and evaluation is quite small.The authors collect a large number of reference strings from multiple bib styles and label them using bibtex files. They they train a model on this large dataset and improve performance on the standard UCFE benchmark.Overall this is very reasonable I have just two questions:1. What is the accuracy of the automatic labeling procedure? Can some analysis be done?2. The authors present various models that do not use their training data and one (roberta+bibtex+lm) that both uses their new bibtex data and also pre-trains with a MLM objective on their data prior to the downstream training. What is the contribution of the extra pre-training phase? If one just augments the data, it does not work well? This seems like an interesting and simple experiment to add.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
24,"""A useful dataset and model for parsing bibliography entries. ""","""The paper is releasing a dataset and a model for ""citation field extraction"", the task of parsing the a bibliography entry of a paper into its corresponding fields (authors, title, venue, year ... etc). The dataset is automatically generated by collecting 6k bibtex files and 26 bibtex styles, then compile pdfs for the bibtex entries using multiple different styles. The resulting dataset is 41M labeled examples. The model starts with the RoBERTa checkpoint, continue MLM pretraining on this data, then finetune as a sequence labeler, which results into new sota performance. Error analysis shows that there's a big difference between the model performance on common fields (authors, title, .. ) and less common ones (edition, chapter). Pros: - The task is important which makes the dataset a useful resource. - Results are good compared to prior workCons: - little noveltyNotes and questions: - The dataset is constructed by randomly sampling a bibtex entry and a bibtex style, which means the dataset size can be easily increased or decreased. This brings a few thoughts: 1) how did you decide the dataset size of 41M examples?2) Would a larger dataset make things better or a smaller dataset makes things worse? my guess is that the dataset is unnecessarily large, and you can get the same performance with a smaller one3) Instead of random sampling of entries and styles, I will be curious to see if upsampling rare fields can improve their performance without loss in performance on other fields. - Table5: the F1 numbers are not consistent with P and R.- I will be curious to see how well grobid pseudo-url does in this task, especially that it is, AFAIK, the leading PDF parsing tool. - pretraining: how are the citations packaged into sequences of size 512? or do you train on individual examples? ""","""6: Marginally above acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
25,"""Retrieval-Based Data-Scarce Semantic Parsing""","""The paper presents a retrieval-based semantic parsing method for the data-scarce setting. The model retrieves a logical pattern from the train data by computing the similarity between NL queries. Then lexicons are added to the retrieved pattern in order to generate the final LF. The motivation and the proposed method make sense to me. The experimental results also show that the approach improves the performance over several baseline methods. The only concern is that the experiments are only conducted on wikisql, which is not very data-hungry. The paper reduces the number of training set to simulate the setup. The paper could be improved by conducting experiments on more small-scale datasets. ""","""8: Top 50% of accepted papers, clear accept""","""5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"""
25,"""Solid Paper on low data semantic parsing with deep analysis""","""Summary: This paper introduces a modeling technique for Text-to-SQL semantic parsing designed to work well in data sparse regimes. The model works by first retrieving the most similar questions from the training set. The SQL logical form of retrieved questions are then ""ungrounded"" and the most common retrieved logical form pattern is then fed into a grounding network which insert entities from the question into the pattern to form the final parsed logical form.Strengths:- The authors demonstrate improved performance over a SQLova baseline in the small data regime and comparable performance when more data is available- The authors perform thoughtful ""generalization tests"" to investigate potential weaknesses or behaviors of their model (the dependence on logical pattern distribution, dependence on the dataset size and generalizing to unseen forms)- The separation of syntactic and lexical parts of the parsing process is interesting and sensible.- Their method is able to leverage additional question similarity resources which allow their model's performance to improve without requiring extra expensive parsing annotation.Weaknesses:- The parsing method is not compositional, harming its generalizability. The model can never generalize to patterns not seen in its database of patterns, but there may well be training signals in the dataset that would allow for this kind of behavior. - Performance gains, whilst certainly present, are relatively modest over SQLova in most settings.- The authors demonstrate that the model can generalize to patterns not seen at training time by adding extra data to the model's database at test time. Whilst this boosts performance, it seems the performance is worse than simply training the model again with the extra data. ""","""6: Marginally above acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
25,"""Good focused contribution showing the efficacy of retrieval based models for low resource semantic parsing""","""This paper describes a retrieval-based model which uses query-to-query similarity for the WikiSQL semantic parsing task. The method especially does well when labeled data is scarce.The approach is simple yet effective. The paper is very well written, the experiments are incisive and clearly demonstrate the usefulness of the approach.Can we also compare this model with other supervised approaches such as Berant and Liang, and especially Finegan-Dollak et al.. This comparison will help the readers understand the value of the query similarity based non-parametric approach.It would also be interesting to see how well this model does/fails when the queries become more and more complex and compositional. Current approach seems to work only for specific kinds of queries.The syntactic information of the queries pseudo-formula being used in the retriever module and the lexical representation pseudo-formula being used in the grounder are obtained by slicing the encoding of the CLS token in the table aware BERT encoder. This is strange. What guarantees that the syntactic and semantic representation can be disentangled in this way. Please see this paper for a better way to do this: pseudo-urlA small description of SQLOVA would help.typo: 141: environments""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
26,"""Interesting direction following rather standard arguments with unclear experimental evaluation""","""The paper revisits Credal SPNs and proposed a learning approach for Credal SPNs in the presence of missing information That is, now the weights on sum nodes vary in closed and convex set and in turn, one gets a imprecise probability model. Overall, the paper is well written and structured. The main technical contribution are (1) a group-wise independence test and (2) clustering method, both for the credal setting assuming missing data. Specifically, the independence test is a directly application of complete case analysis plus interpreting missing values as contribution to the base population. For the clustering, thee authors should argue why not existing methods for clustering with incomplete data could be use. In any case, the likelihood approach presented also follow the same logic as the independence test. In both cases, the arguments are a little bit hand waving and fluffy. For instance, it is not clear to me what 2is that value that is poorest fit2 (page 6). Still, the clustering is interesting, although as said, a discussion of related work on clustering incomplete data is missing. The empirical evaluation is interesting and follows the standard protocol for SPN. What i am missing is a repeated argument of why CSPNs are important. Furthermore, the running time should be reported. Also, the authors should provide some insights into the structures learned, also in comparison to the complete data case and the even to the standard SPN setting. Furthermore, it might be interesting to use Random Credal SPNs based on Random SPNs (Peharz et al. UAI 2019) as a baseline to illustrate the benefit of structure learning. Currently the results just show likelihood. But shouldn't we also consider here the number of parameters? At least getting some numbers here would be appreciated. Also, sincce you consider the CLL, one should also show a discriminatively learned SPN. General, the experimental protocol should be described at sufficient details. What were the hyperparameters? Was this crossvalidated?To summarize, nice direction with follows the standard approach for learning SPN for learning CSPN with using ideas from data imputation. The empirical evaluation is not well presented in the main part. Some missing related work on the clustering with incomplete data. ""","""5: Marginally below acceptance threshold""","""5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"""
26,"""review for learning credal sum product networks ""","""In this paper the authors investigate probabilistic representations for learning from incomplete data and specifically investigate credal sum product networks. CSPN are better able to consider data incompleteness which is an important aspect of knowledge bases. The authors perform experiments on a large number of datasets with varying amounts of artificially missing data observing that the optimized log liklihood computed on a learned CSPN generally performed the best.The paper is generally well written and does a good job of explaining the underlying models and algorithms. The paper is not particularly novel but contains a large number of experiments that could be useful to those interested in probabilistic models in regimes with missing data.other comments:- table 4 is a bit busy, there could be a clearer way of presenting and highlighting the relevant results. - section 4.2 has an occurrence of a CPSN type-o""","""6: Marginally above acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
26,"""A learning algorithm for a type of graphical model, evaluation is a little limited""","""Summary: the paper is presenting a learning algorithm for Credal Sum Product Network (CSPN), a type of graphical model that is tractable (easy to compute partition function), and can encode uncertainty in the network parameters (instead of fixed weights, the network parameters have range of values (or more generally, are defined using a set of convex constraints between them)). Prior work [Maua et al., 2017] introduced CSPNs and provided an inference algorithm, and this paper is the first to propose a learning algorithm for CSPNs. Pros: first paper to introduce a weight learning algorithm for CSPNs. Evaluation shows better results than Sum Product Network (SPNs)Cons: - evaluation is limited in two aspects, baselines and tasks. 1) baselines: the only baseline considered is SPNs, which is a reasonable but old baseline. It would be good to see how well CSPN learning works compared to more recent models, especially that even CSPN's inference evaluation [Maua et al., 2017] was similarly limited. 2) tasks: evaluation avoided large datasts. It excluded the largest of the subtasks (footnote page 21), and evaluating on large scale textual data is left for future work. Even though the motivation for SPN was that its inference is tractable and fast, the proposed learning algorithm for CSPNs seems to be 10x slower than that of SPN and didn't scale to large datasets. Notes: - The paper mentioned that CSPN avoids the closed-world assumption, and can work with incomplete examples. I agree with the second but not the first. The proposed learning algorithm takes into account that some instances have unknown values, but it is still assuming that the world only contains the provided list of instances (closed-world assumption). - The paper use of the term ""lifting"" seems different from how it is used in Broeck et al., 2011 (doing inference at the first-order level without grounding to predicate logic). This needs to be clarified. ""","""6: Marginally above acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
27,"""The paper needs more improvement.""","""This work proposes a graph Hawkes Neural Network for event and time prediction on temporal knowledge graphs. Overall, the paper is nicely written and easy to follow.However, this paper needs more improvement. Also, the proposed method needs to compare other baseline methods and is somewhat limited. Importantly, the analysis of the proposed method is missing.Major comments- About equation (3), the narrative looks wrong and I don't understand why this is equivalent to the conditional probability P(e_o|e_si, e_pi, t_i, g,...). There should be explanations on P(e_si, e_pi| g,...).- The proposed method only considers limited information from historical event sequences. For example, in equation (5) do not include multi-relational entities other than events under the same predicate. Furthermore, it does not capture multi-hop neighbors explicitly.- What is the major advantage of using cLSTM? It would be better to compare with LSTM, Time-LSTM [1], and Time-aware LSTM [2]- What is the definition of lambda_sub in equation (14)?- The definition of t_L is missing. Also I don't understand how to predict time using equation (16).- Furthermore, if the paper assumes that there is only one single event type, then given all the events, the method should predict the same next event time?- About the experiments, it would be better to include static approaches such as ConvE [3], RotatE [4].- Also if the method uses LSTM instead of cLSTM, then what will be the result? This is necessary to show the effectiveness of cLSTM.Minor comments- The example in the introduction is not appropriate. The paper talks about the price of the oil (attribute of a node), but the proposed method does not predict attribute value.[1] What to Do Next: Modeling User Behaviors by Time-LSTM [2] Patient Subtyping via Time-Aware LSTM Networks [3] Convolutional 2d knowledge graph embeddings [4] Rotate: Knowledge graph embedding by relational rotation in complex space ""","""5: Marginally below acceptance threshold""","""5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"""
27,"""Review""","""This paper introduces a new model based on the Hawkes process to model the temporal knowledge base. The proposed method is based on previous work Mei & Eisen, 2017. The previous work target event prediction and time prediction, but in this paper, the task is subject/object prediction and time prediction. Furthermore, the author showed result improvement over strong baselines on several datasets. Pros:The paper is well written and the model is well motivated. The experiments show improvement with the proposed model compared to baselines.Cons:The results from different models are very close to each other. Is the proposed model significantly better than other baselines? Can you run the significance test on the results?""","""7: Good paper, accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
27,"""Well formulated solution framework, excellent results, a few details missing""","""The paper addresses the problem of predicting links and time-stamps in a temporal knowledge graph, and proposes a novel neural Hawkes model that uses the continuous neural Hawkes formulation as its basis. Key assumption the authors make is that the object entity interacts (forms a temporal link) with a predicate p with a subject entity s only based on past links involving the same s and p. Thus, the history can be implicitly modeled by aggregating across all the objects. After this, the rest of the model quite closely follows that of Mei & Eisner model. The experiments conducted over GDELT and ICEWS14 datasets show that the proposed GHNN offers significantly better results than Know-Evolve, and is nearly as good as the closest RE-Net. Overall, the paper is well written and the model seems sound.A few details are missing though:1. They claim ""NH approach is limited by the number of event types"" but they haven't compared with it in any experiment2. Knowevolve implementation as in Appendix I gives ~90% predictions when there is no actual change in time -- for both GDELT and ICEWS datasets. Yet the performance of KnowEvolve in time prediction on these two datasets is significantly different. Unclear what is going on here.3. It is not known how much time-series information is available for each fact (not all subject/predicate pairs will be interacting in the previous slices).""","""8: Top 50% of accepted papers, clear accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
28,"""Clearly written and motivated paper, but need more analysis and explanation for experiments""","""This paper addresses the problem of building KBs from datasets like product reviews, that identifies implications between opinions from reviews. The authors claim that opinions and their implications often do not co-occur within the same review and the annotation for such implication relation is expensive, for which matrix factorization techniques turn to be a promising approach. Strength:1. This paper is well written and most contents are easy to follow. The motivation is clear and the method description is well presented with reasoning behind each component choice.2. The authors released the results of generated KB from their model on 20 domains, which could be useful for the research community.3. Their experiment results seem to be in favor of their proposed method, although it's a very simple method.Weakness and Questions:1. Intuitively, the ""implication"" relationship proposed in this paper should be directed. A implies B doesn't mean B implies A. However, the method introduced in the paper decides such implication relationship based on cosine similarity, which is symmetrical.2. Please provide more details about how you obtain representations from universal schema, as this seems to be the major season why you have the huge performance gap between your model and universal schema. 3. For using pre-trained LMs, which BERT model did you use? 4. When applying pre-trained models, why do you mask out the aspect token when predicting the mod token?5. Do you restrict your modifier and aspect tokens to be unigrams? If yes, you should clarify this in the paper, as this is inconsistent with motivation examples you provided. If not, this is not comparable between pre-trained models and other models. And this is very important as the difference between your model and pre-trained LM doesn't seem to be big.6. In section 4.3, if you are feeding LM supervision from generated KB, this is technically not ""supervised"" as there are quite a lot of noise from the generated KB. So it's hard to tell whether the reason why such ""supervised"" model cannot achieve good results is due to the limitation of LM or the noise from training. Thus, your conclusion in 4.3 needs more convincing evidence. 7. Again in section 4.3, you say ""It takes as input a pair of opinions, pseudo-formula and pseudo-formula and predicts a label 1 if pseudo-formula implies pseudo-formula or 0 otherwise."" It seems like you don't feed the LM any context of such extracted opinions. This is not a typical setting for LM, although I understand it's hard to incorporate such ""context"" in your setting. 8. There is little analysis of comparison across domains.9. More prediction examples and error analysis are neededSome minor points in writing:1. I found it hard to follow in the second paragraph of second page, that starts with ""There are a number of challenges in ..."". I didn't understand until I finished reading some later content.2. The authors may want to provide more details about MINI-MINER. For example, whether you use any NLP tools to extract certain linguistic features? And did you use this for baselines as well?3. Although the authors give a brief description about ""factorization techniques over binary matrices"" and why they think this will cause worse results, I was expecting some ablation analysis on this, while they didn't provide any.4. 
In section 3.3, I think the last pseudo-formula in the first paragraph should be pseudo-formula instead.5. How do you calculate the probability in PMI? Do you use frequency based counts? If yes, clarify.6. I think the authors need better experiment figures to be put in the paper. The current ones seem very rough and some are with bad resolutions. ""","""6: Marginally above acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
28,"""Interesting kind of KB but missing discussion of assumptions""","""The authors propose a method to automatically build a Knowledge Base of opinions and implications between them. The KB is realized as a directed graph where nodes correspond to opinions in a canonical form (modifier, aspect), and edges indicate implications. It is built by factorizing a matrix of item-opinion frequencies, and finding the top k neighbors of an opinion.The idea of creating a KB of opinions is relevant for the field of KB construction, and it opens the door to further research where these graphs can be used.Strengths- The proposed method allows to obtain a KB of opinions from raw text as input. For cases where the performance of an opinion extractor can be harmed due to a change in the domain, the authors propose a set of rules.- The authors propose a method for improving robustness, based on optimization under noisy data.- The experiments are thorough, showing results with multiple datasets from different domains, and relevant baselines.- Overall, the paper is clear and well structured.WeaknessesThe paper starts with the promise of modeling implications between opinions, but later I realize that this modeling is limited by assumptions and design, that are not addressed. The first is: implications between opinions are, as across all NLP, very sensitive to relatively small lexical variations in text. Take for example the opinions O1 = ""The movie has complex characters"" and O2 = ""There's good writing going on"". The proposed pipeline would then give as candidate implication O1 -> O2. What if O1 changes to ""The movie has unnecessarily complex characters""? How does the opinion extractor behave in these cases? How does this affect performance? The second, and probably more crucial, is: the directions of the implications are not really modeled by the proposed method. In fact, following the example above, the proposed method might propose both O1 -> O2 and O2 -> O1, because they are close in the space of opinions, and the graph built via k nearest neighbors is undirected.I wonder how unsupervised the method really is, in contrast with what the authors claim. It relies heavily on an opinion extractor, which rather means that when going from raw text to opinions, there is not really anything to learn, but any potential errors from this part of the pipeline are propagated to the factorization step.A core component is the minimization of the reconstruction error of the item/opinion matrix, but the way this minimization is performed in practice is not specified.These weaknesses are more related to how the work is conveyed, and I think the paper would benefit by discussing them. The actual impact of the proposed method and its relevance for the conference are still valuable.""","""6: Marginally above acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
28,"""Happy to see the unsupervised method, would like some clearer motivation""","""Summary of contributions:This paper presents an entirely unsupervised method for taking as input a corpus of reviews about a set of items and building as output a directed graph where each node represents an opinion phrase (extracted from text) and edges indicate implication relationships.After extracting opinion phrases from the review corpus (relying largely on prior work) the method constructs on a matrix where rows represent items and columns indicate opinions, with the cells containing counts of how often that opinion was expressed for the item. Matrix factorization is applied to produce embeddings for each opinion, with edges then created between k-nearest neighbors in the embedding space.The contributions can be enumerated as:- proposal of an entirely unsupervised system for construction of this opinion graph- use of matrix factorization to determine similarity between opinions- application of the method to data in several domains and analysis of results- release of data and codeStrengths:It is a fully unsupervised method that takes a corpus of text and produces a potentially useful piece of knowledge.The matrix factorization approach is a simple but elegant way of discovering similarity between extracted phrases expressed in text. Evaluations are conducted in several very different domains, from movies to electronics to restaurants, showing that the method is domain independent.The comparisons with BERT and discussion of potential complement between the proposed approach and a language model approach is nice.The paper is generally well written and easy to follow.Weaknesses:My primary complaint is that I would like to see more motivation of the utility of the constructed knowledge base. (I also somewhat disagree with the use of ""knowledge base"" to describe the outputed graph, as it contains a single relation and entity type, and that relation has ambiguous semantics.) Some specific issues that should be addressed:- What is the use of such a graph? How would one use it in a downstream task? - The abstract asserts that you show that your model can benefit downstream tasks such as review comprehension, but you do not directly show this. If you make this claim, you should provide an experiment showing improvement on this task by incorporating your graph.- Why do we assume that proximity between the opinion embeddings indicates implication rather than just similarity? I assume that there will be some cases where two opinions in your graph imply each other, while in other cases there may be only a single edge (if one happens to have more nearby neighbors). Why should we consider these cases to be different? - What exactly does implication mean in terms of opinions? I'd like to see a clearer definition here.With regard to evaluation, did you measure the inter-annotator agreement between crowdworkers? The task of assessing ""implication"" between opinions seems quite muddy to me. In fact, I personally might disagree with your flagship example of ""complex characters"" implying ""good writing"". I don't think that is always true, and if I was given that example as a crowdworker I may have marked it as incorrect. Maybe my opinion would be in the minority, but this is why I would like to see a better and clearer motivation for the task.With regard to your baselines, I was a little unclear on the application of universal schema. 
Universal schema uses a matrix where rows are entity pairs and columns are relations. You mention that you map all item-opinion pairs to a single ""has-a"" relation. Does that mean the matrix has only a single column?The font size of Figures 2 and 3 is too small.Minor point: There is a typo in the first full sentence on page 2, where ""second review"" should read ""first review"".Overall Conclusion:This paper presents a novel unsupervised method for extracting opinion phrases and implications between them from a corpus of reviews. The paper would benefit from better motivation for its problem and solution. That said, its unsupervised approach for knowledge extraction offers a useful method for capturing semantic relationships between textual phrases, and the experiments show strong performance against baselines.""","""6: Marginally above acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
29,"""nice paper but maybe limited to product databases""","""Update after response:Thanks to the authors for responding to my comments in particular adding the parameter analysis. While I still wonder about the limited scope of the solution, the research is nicely done and the problem formulation is now clearer.-------------Pros:- novel problem definition- experiments on multiple datasets- nice usage of feature extractor in multiple Negatives- problem formulation does not include some assumptions- potential lack of generalizability This paper describes the problem of entity resolution in an environment where there are a wide variety of entity variations. I thought this was a rather novel problem formulation. They introduce the notion of contrastive entity linking to solve this problem. In particular, they define a blocking mechanism for identifying entity variations and a feature extraction algorithm for identifying entity attributes that are core to the entity or that are part of the entity's variation. These can then be used to drive a classifier.My main criticism of the paper is the potential generalizability. While it's applied in three different domains, the datasets essentially of the same kind, namely, product databases which already contain unique entities. From my reading, the assumption is not stated in the problem definition. The problem definition could be more precisely worded. In section 3.3, two assumptions are stated about catalogs, namely, that they ensure that records refer to distinct entities and that entity variation (i.e. record variations) are more similar to each other than base entities. These are important assumptions that make the problem much easier than what was outlined in the problem definition. In terms of evaluation, the paper didn't seem to report a number of critical parameters, namely , the bucket size threshold and similarity threshold during the experiments. I appreciated the experimental settings of using the feature extractor in a number of downstream entity.There's a couple pieces of related work. First, for entity resolution I think this approach bears similarity to [1]. There's been quite a bit of work in the NLP community on identity (see e.g. [2]) that would be useful to discuss. Overall, I thought the paper was a nice contribution. Minor comments:- The paper was easy to read. - it would be good to check the usage of the word record, entity and product, they get confused in places.- It would be nice if the annotated data is also made available in the paper. [1] Zhu, Linhong, Majid Ghasemi-Gol, Pedro Szekely, Aram Galstyan, and Craig A. Knoblock. ""Unsupervised entity resolution on multi-type graphs."" In International semantic web conference, pp. 649-667. Springer, Cham, 2016. [2] Recasens, Marta, Eduard Hovy, and M. Antnia Mart. ""Identity, non-identity, and near-identity: Addressing the complexity of coreference."" Lingua 121.6 (2011): 1138-1152.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
29,"""the paper an interesting work in progress which requires improvements in (a) novelty (b) analysis.""","""Paper is about finding variational attributes for catalogue named entities. Examples of such attributes is ""capacity"" for a memory card (e.g. Sandisk flash drive ""64GB"") and identification of such attributes helps in duplicate detection in E-commerce cataloging and search. The proposed approach is unsupervised where they first detect some candidate entity variations (pairs of entities with high similarity scores) and then in each pair detect the ""significant"" phrase that ""contrasts"" one entity from the other one in the pair as the contrastive feature. The significance phrases (ngrams) is estimated exhaustively from a corpora with a PMI-like metric. Authors experiment with three entity linking systems with diverse architecture ranging from rule-based to logistic regression and neural-based models on three domains (music, grocessary and software catalogs). Results are promising and show that most systems benefit from these features.Paper is written mostly well: problem has been defined and motivated well and the approach is presented in a smooth structure and flow. However, presentation of results and analysis is unclear at parts. Novelty of the approach is modest and is mostly around the detection of detection of contrast features and significant phrases. These approaches which show promising improvements against the baseline, can be expanded to more recent efforts in using deep semantic representations in NER and extraction of multi-word-expression. Experiments are fairly extensive and support the proposed approach well. Analysis is not extensive and should be improved. All together, I found the paper an interesting work in progress which requires improvements in (a) novelty (b) analysis.Questions and suggestion:1. Evaluation of candidate pairs is based on a data that is annotated in a post-extraction fashion (annotator labels the output of the system). So if you don't have a gold-standard set (all possible pairs), how do you compute ""recall"" there? (to compute the F score in table 4).2. Did you experiment with richer models of semantic similarity using embedding, etc?3. Despite the preceding explanation of the notation elements, the formal definition of the 3-way entity linkage is not easy to understand and doesn't connect with the rest of the section. 4. In the core extraction of significant phrases and contrast features, ngram frequency is the major factor (along with some thresholding). It is not clear why authors are comparing their interpretability against ""frequent phrases"" (which are a fairly similar approach). Please elaborate more on Table 3 comparison. 5. Please provide more details and analysis on results of the three way classification, specially around the confusion metrics. Are the improvements similar for different classes. What kind of duplicates does the CF model extract that the No-CFs don't, etc.6. How would analyze last column of table 3 (higher rate of incorrect class for contrast features)?Post Rebuttal comment:After reading authors responses to my and other reviwers' comments and also checking the new draft, I am going to lift my rating of the paper. Thanks for your willingness to improve your work.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
29,"""Mining significant phrases to identify variants of entities to improve entity resolution""","""The authors propose a new algorithm, contrastive entity linkage (CEL), to identify duplicates and variations of entities in catalogues. The authors introduce the concept of base entities and entity variations. A base entity is defined using a set of attributes, and all variations must have the same values for base attributes, but differ in the non-base attributes. The key idea is to mine significant phrases from the unstructured attributes of a record such as the title or a product. The significant phrases are added as a new attribute, and a classifier is trained using the new ""variational"" attribute. Experiments show that inclusion of the variational attribute improves entity resolution results.pros:- the work is in an important area as entity resolution of near duplicates remains a challenging task.- method is unsupervised so can be easily applied in new domains- method improves the performance of any ER system (as it defines a new feature)cons:- distinction of base and variational attributes is unclear in practice (see below)- no discussion of hyper-parameter tuning- hard to replicate, no reference to open code, several details not fully specifiedThe key contribution of the paper is the VarSpot algorithm to identify variational attributes (contrast features). The main idea is to mine word ngrams whose frequency is larger than expected based on the frequency of the individual parts of the ngram. This idea is similar to the significant terms query in ElasticSearch. The evaluation focuses on three datasets, Amazon/Google software products, groceries, and musicbrainz/lastfm. The evaluations show that the contrast features improve the entity resolution performance on all three datasets for identification of duplicates and variants. The evaluation compares results with and without the contrast features, showing that the three ER systems considered in the evaluation (SILK, Magellan and DeepMatcher) benefit from the contrast features. In all experiments random forest consistently outperforms logistic regression so nit doesn't seem useful to include both.The algorithm has two hyper-parameters, the threshold alpha to prune the significance of an ngram, and the length of the ngrams. The paper does not discuss how these hyper parameters were optimized, or sensitivity to them.The distinction between base and variational attributes is unclear. In many cases, unstructured fields such as titles or descriptions may include both base and variational attributes (how are they distinguished?). Also, variational attributes may appear in structured fields too (eg memory size can be a structured attribute). In these cases it is unclear how the ngrams are identified. This part should be made clearer in the paper.In summary, the paper presents an interest variant of an old problem, and presents a simple method to extract a useful feature from the unstructured attributes in records. The evaluation shows promising results, but is not thorough as it should evaluate the hyper-parameters that are used in constructing the feature. The paper is clearly written and accessible to a wide audience. The related work is incomplete as there isn't a related work section, or a discussion of relevant work on mining of significant phrases.""","""5: Marginally below acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""