paper_id,title,review,rating,confidence
1,"""Incremental but solid contribution""","""This paper presents a transformer-based relation extraction model that leverages pre-training on unlabeled text with a language modeling objective. The proposed approach is essentially an application of the OpenAI GPT to relation extraction. Although this work is rather incremental, the experiments and analysis are thorough, making it a solid contribution. Given that the authors have already set up the entire TRE framework, it should be rather easy to adapt the same approach to BERT, and potentially raise the state of the art even further. In terms of writing, I think the authors should reframe the paper as a direct adaptation of OpenAI GPT. In its current form, the paper implies much more novelty than it actually has, especially in the abstract and intro; I think the whole story about latent embeddings replacing manually-engineered features is quite obvious in 2019. I think the adaptation story will make the paper shorter and significantly clearer.""","""7: Good paper, accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
1,"""Application of existing method for relation extraction""","""This article describes a novel application of Transformer networks for relation extraction. CONS:- Method is heavily supervised. It requires plain text sentences as input, but with clearly marked relation arguments. This information might not always be available, and might be too costly to produce manually. Does this mean that special care has to be taken for sentences in the passive and active voice, as the position of the arguments will be interchanged?- The method assumes the existence of a labelled dataset. However, this may not always be available. - There are several other methods, which produce state of the art results on relation extraction, which are minimally-supervised. These methods, in my opinion, alleviate the need for huge volumes of annotated data. The added-value of the proposed method vs. minimally-supervised methods is not clear. PROS:- Extensive evaluation- Article well-written- Contributions clearly articulated""","""5: Marginally below acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
1,"""Review of Improving Relation Extraction by Pre-trained Language Representations""","""The paper presents TRE, a Transformer based architecture for relation extraction, evaluating on two datasets - TACRED, and a commonly used Semeval dataset. Overall the paper seems to have made reasonable choices and figured out some important details on how to get this to work in practice. This is a fairly straightforward idea, however, and the paper doesn't make a huge number of innovations on the methodological side (it is mostly just adapting existing methods to the task of relation extraction). One point that I think is really important to address: the paper really needs to add numbers from the Position-Aware Attention model of Zhang et al. (e.g. the model used in the original TACRED paper). It appears that the performance of the proposed model is not significantly better than that model. 
I think that is probably fine: since this is a new-ish approach for relation extraction, getting results that are on par with the state of the art may be sufficient as a first step, but the paper really needs to be clearer about where it stands with respect to the SOTA.""","""6: Marginally above acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
2,"""MedMentions: A Large Biomedical Corpus Annotated with UMLS Concepts""","""The paper MedMentions: A Large Biomedical Corpus Annotated with UMLS Concepts details the construction of a manually annotated dataset covering biomedical concepts. The novelty of this resource is its size in terms of abstracts and linked mentions as well as the size of the ontology applied (UMLS). The manuscript is clearly written and easy to follow. Although other resources of this type already exist, the authors create a larger dataset covered by a larger ontology, thus allowing for the recognition of multiple medical entities at a greater scale than previously created datasets (e.g. CRAFT). Despite the clarity, this manuscript could be improved on the following points: Section 2.3 How many annotators were used? Section 2.4, point 2 - The process used to determine biomedical relevance is not detailed. Section 4.1 - No reason is given for the choice of TaggerOne. In addition, other datasets could have been tested with TaggerOne for comparison with the MedMentions ST21pv results. Misspelling and errors in section 2.3: Rreviewers, IN MEDMENTIONS. Overall, this paper fits the conference topics and provides a good contribution in the form of a large annotated biomedical resource.""","""7: Good paper, accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
2,"""Solid biomedical entity extraction/linking dataset""","""In this paper the authors introduce MedMentions, a new dataset of biomedical abstracts (PubMed) labeled with biomedical concepts/entities. The concepts come from the broad-coverage UMLS ontology, which contains ~3 million concepts. They also annotate a subset of the data with a filtered version of UMLS more suitable for document retrieval. The authors present data splits and results using an out-of-the-box baseline model (semi-Markov model TaggerOne (Leaman and Lu, 2016)) for end-to-end biomedical entity/concept recognition and linking using MedMentions. The paper describes the data and its curation in great detail. The explicit comparison to related corpora is great. This dataset is substantially larger (hundreds of thousands of annotated mentions vs. ones of thousands) and covers a broader range of concepts (previous works are each limited to a subset of biomedical concepts) than previous manually annotated data resources. MedMentions seems like a high-quality dataset that will accelerate important research in biomedical document retrieval and information extraction. Since one of the contributions is annotation that is supposed to help retrieval, it would be nice to include a baseline model that uses the data to do retrieval. Also, it looks like the baseline evaluation is only on the retrieval subset of the data. Why only evaluate on the subset and not the full dataset, if not doing retrieval? This dataset appears to have already been used in previous work (Murty et al., ACL 2018), but that work is not cited in this paper. 
That's fine -- I think the dataset deserves its own description paper, and the fact that the data have already been used in an ACL publication is a testament to the potential impact. But it seems like there should be some mention of that previous publication to resolve any confusion about whether it is indeed the same data. Style/writing comments:- Would be helpful to include more details in the introduction, in particular about your proposed model/metrics. I'd like to know by the end of the introduction, at a high level, what type of model and metrics you're proposing.- replace ""~"" with a word (approximately, about, ...) in text- Section 2.3: capitalization typo ""IN MEDMENTIONS""- Section 2.4, 3: ""Table"" should be capitalized in ""Table 6""- Use ""and"" rather than ""/"" in text- Section 4: maybe just say ""training"" and ""development"" rather than ""Training"" and ""Dev""- 4.1: Markov should be capitalized: semi-Markov- 4.1: reconsider use of scare quotes -- are they necessary? 'lexicons', 'Training', 'dev', 'holdout'- 4.1: replace ""aka"" with ""i.e."" or something else more formal. In general this section could use cleanup.- 4.1: last paragraph (describing metrics, mention-level vs. document-level) is very confusing, please clarify, especially since you claim that a contribution of the paper is to propose these metrics. Is it the case that mention-level F1 is essentially entity recognition and document-level is entity linking? An example could possibly help here.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
2,"""Useful resource, but claims could be better supported, and the uniqueness of the resource better argued""","""The paper describes a new resource ""Med Mentions"" for entity linking of Pubmed abstracts, where entities are concepts in the UMLS type hierarchy -- for example ""Medical Device"". The annotations were manually verified. (I assume the intention is to use this as a benchmark, but the paper does not say.) The paper is very rigorous in describing which concepts were considered, and which were pruned. Authors suggest combining it with ""TaggerOne"" to obtain an end-to-end entity recognition and linking system. It is a little bit unclear what the main contribution of this paper is. Is it a benchmark for method development and evaluation (the paper mentions the train/dev/test split twice)? Or do the authors propose a new system based on this benchmark? Or was the intent to test a range of baselines on this corpus (and what is the purpose?) -- I believe this lack of clarity could be easily addressed with a change in structure of headings. (Headings are currently not helping the reader, a more traditional paper outline would be helpful.) I appreciate that the paper lists a range of related benchmarks. However, I am missing a discussion of where the advantage of MedMentions lies in contrast to these benchmarks. What does MedMentions offer that none of the other benchmarks do? It is indisputable that a new resource provides value to the community, and therefore should be disseminated. However, the paper quality is more reminiscent of a technical report. A lot of space is dedicated to supplemental information (e.g. 
page 6) which would be better spent on a clear argumentation and motivation of the steps taken.""","""6: Marginally above acceptance threshold""","""5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"""
3,"""A nice algorithmic contribution for integrating user feedback for KBC""","""This paper introduces a method to integrate user feedback into KBs in the presence of identity uncertainty, a problem that arises in the integration of new data in knowledge base construction. The proposed method represents user feedback as feedback mentions and uses an online algorithm for integrating these mentions into the KB. The paper targets an important problem in knowledge base construction, i.e., integrating user feedback in the online setting. The proposed hierarchical model looks reasonable and effective. And overall, the work is well presented. The paper makes an algorithmic contribution. The contribution is, however, limited from the perspective of human computation. The experiment uses simulated user feedback for evaluating the method. In real-world settings, user feedback can be skewed to certain types (e.g., negative feedback) or be noisy (so the feedback is not reliable). How would these affect the result?""","""7: Good paper, accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
3,"""lack of details, little novelty""","""This paper presents a hierarchical framework for integrating user feedback for KB construction under identity uncertainty. 1. It is unclear how the algorithm is implemented, such as what the implementation of a feedback mention is. 2. There is a definition of the attribute map, but I can't find where the model uses it. The same holds for the precision of a node pair. 3. How the function g(.) is calculated is also unclear. 4. In section 5.2, the COMPLETE definition seems incorrect; it is still the definition of PURE. 5. The example for constructing positive/negative feedback is too vague. 6. The experiment section needs more analysis, including qualitative results.""","""3: Clear rejection""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
3,"""Relevant contribution for humans-in-the-loop in KB construction""","""The paper presents a novel solution to an interesting problem - when KBs are automatically expanded, user feedback is crucial to identify incorrect and missing entity attributes and relations. More specifically, in the case of entity identity uncertainty, enabling the user feedback as part of the entity resolution mentions appears novel and important. The paper is well written and organized. Points for improvement:- It would be interesting to see more in-depth analysis on examples where the proposed approach fails, and based on this to also outline open issues and future work. - The human computation aspects of the paper lack sufficient explanation in terms of implementation in real settings, as well as positioning with respect to related work in human computation research- it would be interesting to know what experimental design the authors would consider for an evaluation with actual user feedback vs. the simulated one""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
5,"""Interesting paper, incomplete experiments""","""This paper proposes to use homomorphic compression for hierarchical coreference resolution. Contributions of the paper are threefold. 
First, it proposes a homomorphic, cosine-preserving version of simhash. Secondly, it presents another homomorphic cosine-preserving representation based on random projections. Thirdly, the paper proposes a homomorphic version of minhash for Jaccard similarity. The paper applies these representations to the hierarchical coreference resolution problem using CRF. The authors also provide statistical analysis. Experimental results on real-world datasets are also presented. The paper is generally well written. My main concern with the paper is that it falls short of comparing with other valid baselines. One of the main points of the paper is that sparse and high-dimensional representations of mentions create problems during probabilistic inference. In order to overcome this problem, the authors propose various hashing-based methods. However, in light of recent advances in word embeddings, one is not limited to using such sparse and high-dimensional representations. One can potentially use off-the-shelf dense and relatively lower-dimensional embeddings such as word2vec, GloVe, etc., or use context-dependent embeddings such as ELMo. Unfortunately, the paper completely ignores this line of research, and neither compares against nor cites them. This is clearly not satisfactory, and the paper is not complete with such vital pieces missing. Overall, I think the paper proposes some interesting ideas, but it is incomplete in light of the issues above. Moreover, applying the proposed hashed representation on at least one other task will make the paper stronger.""","""5: Marginally below acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
5,"""review ""","""-summary: This paper addresses the problem of model scaling and efficiency in coreference resolution. Commonly used models represent mentions as sparse feature vectors which are iteratively agglomerated to form clusters of mentions. The feature vector of a cluster is a function of all the mentions in the cluster, which causes them to become more dense over time and computationally expensive to operate over. To address this, the authors investigate hashing schemes to expedite similarity calculation. Importantly, they propose a homomorphic hash function that is able to update the representation of a cluster online as new mentions are added or removed. -pros: a homomorphic simhash that can speed up similarity calculations while maintaining performance; experiments show that the hashing scheme is able to perform within 1 F1 point of the exact similarity model; good coverage of related work and explanation of methods. -cons: only tested on a single dataset; the lines in Fig 2 are not the clearest presentation of the speed/accuracy tradeoff due to the large differences in scales and the fact that the maximum accuracies are never reached for some lines. Maybe a table or another presentation of these results could help.""","""7: Good paper, accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
5,"""Interesting study of simhash for a coreference resolution algorithm""","""This paper proposes to use a variant of simhash to estimate cosine similarities in a particular coreference resolution algorithm called hierarchical coreference. The original algorithm maintains many different feature sets (for mentions and groups of mentions) subject to union and difference operations, and frequently needs to estimate the cosine similarity between various such sets. 
To avoid the associated costs, the paper proposes to use sketching techniques instead. The proposed techniques are simple but, as the study shows, can be very effective. I like the paper overall. My main nitpick is that it's unclear whether the proposed techniques are useful for other, less specific tasks as well. Pros:- Simple techniques- Analysis given- Convincing experimental results in the considered application- Very clear presentation. Cons:- Quite specific, potentially little impact- Somewhat straightforward- Relationship to other coreference resolution methods unclear- NEW AFTER REVISION: unclear relationship to AKMV sketch (see D3) Detailed comments: D1. What's a ""bag type"" in Sec 4? D2. On the one hand, I like the tutorial style that the paper is partly written in. On the other hand, large parts of the (initial) discussion are not directly related to the contribution of the paper; this part could be shortened. D3. The solution to homomorphic simhash is quite obvious. The solution to homomorphic minhash is reminiscent of the AKMV sketch of Beyer et al., ""On Synopses for Distinct-Value Estimation Under Multiset Operations"", SIGMOD 2007. (What's different?) D4. Has the estimator pseudo-formula been used before? If not, this might be further highlighted. D5. It may help to give a name to the estimator in 3.2 as well as to spell out its definition. D6. I found the agree/disagree notation in Fig 1 somewhat misleading. What does it mean for the two models to agree? The decision whether to accept/reject is probabilistic. D7. What is the total size of all sketches maintained by the algorithm in the various experiments? (It appears 1kb per node [my rough guess] is quite substantial, although it may be less than in the exact method, at least for some nodes.) D8. It would also be interesting to see the performance w.r.t. number of steps taken. D9. Is it possible to speed up the exact method to obtain similar performance improvements? Has the method been tuned (e.g., to the fact that most proposals appear to be rejected)? D10. It remains unclear how the proposed hierarchical coreference model relates to state-of-the-art models, both in terms of cost and in terms of performance. This is a weak point: one may have the impression that this paper speeds up a method that is not state-of-the-art anymore. Typos: ""We estimate parameters with hyper-parameter search""""","""6: Marginally above acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
6,"""Good paper with two caveats""","""This paper presents a methodology for automatically annotating texts from Wikipedia with entity spans and types. The intended use is the generation of training sets for NER systems with a type hierarchy going beyond the common four types per, loc, org, misc. I think this paper is a good fit for AKBC, given that entity recognition and linking systems are an essential part of knowledge-base population. This should be good for interesting discussions. A big caveat of this paper is that, while the approach discussed by the authors is generally sound, it is very tailored towards Wikipedia texts, and it is unclear how it can be generalized beyond them. Since this approach is meant for downstream training of NER systems, these systems will be limited to Wikipedia-style texts. A statement from the authors regarding this limitation would have been nice. Or maybe it's not a limitation at all? 
If the authors could show that NER systems trained on these Wikipedia-style texts do perform well on, say, scientific texts, that would already be good. A second issue I see is that Stage III of the algorithm filters out sentences that might have missing entity labels. This makes sense, provided that the discarded sentences are not fundamentally different from the sentences retained in the NER tagger training set. For example, they could be longer on average, or could have more subordinate clauses, or just more entities on average, etc. This is something the authors should look into and should report in their paper. If they find that there are differences, an experiment would be nice that applies an NER tagger trained on the produced data to these sentences and verifies its performance there. A minor point: The authors claim their approach scales to thousands of entity types, which I find a bit of an overstatement, given that the dataset produced by the authors has <1.5k different types (which is already quite a lot).""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
6,"""New distant supervision method focusing on mention coverage, but incomplete evaluation ""","""The paper proposes a new way to get distant supervision for fine grained NER using Wikipedia, Freebase and a type system. The paper explores using links in Wikipedia as entities, matching them to Freebase, then expanding these using some string matching, and finally pruning sentences using heuristics. Methods are compared on FIGER. The paper also introduces a dataset, but it is not fully described. One interesting aspect of this paper is that it is, as far as I can tell, one of the few works actually doing mention detection, and exploring the role of mention detection. That being said, it's unclear what is new about the proposed source of supervision. The first two stages seem similar to standard methods and the last method (generally speaking, pruning noisy supervision) has also been explored (e.g. in the context of distant supervision for IE, pseudo-url, see Section 2.2). It's also not clear to me what specifically targets good mention detection in this method. The experiments do somewhat argue that mention detection is improving, but not really on FIGER, but instead on the new dataset (this inconsistency causes me some pause). All that being said, I don't think it would matter much if the supervision were incorporated into an existing system (e.g. pseudo-url or pseudo-url) and demonstrated competitive results (I understand that these systems use gt mentions, but any stand-in would be sufficient). Table 5 on the other hand has some results that show baselines that are significantly worse than the original FIGER paper (without gt mentions) and the proposed supervision beating it (on the positive side, beating the original FIGER results too, but these are not included in the table; see Section 3 of pseudo-url). So in the end, I'm not convinced on the experimental side. This paper could be significantly improved in terms of evaluation. Incomplete reference to previous work on FIGER and insufficient description of their new datasets are areas of concern. 
Cursory reference to, instead of evaluation on, new fine grained datasets (like open-type) also seems like a missing piece of a broader story introducing a new form of supervision.""","""4: Ok but not good enough - rejection""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
6,"""Super-exciting stuff, great analysis and overall a well written and executed paper""","""This paper studies the problem of fine grained entity recognition and typing. They initially present an excellent analysis of how fine grained entity type sets are not a direct extension of the usual coarse grained types (person, organization, etc.), and hence training systems on coarse grained typed datasets would lead to low recall systems. Secondly, they show that automatically created fine grained typed datasets are also not sufficient since they lead to low recall because of noisy distant supervision. This analysis is both an interesting read and also sets the stage for the main contribution of this work. The main contribution of this work is to create a large, high recall training corpus to facilitate entity recognition and typing. The authors propose a staged pipeline which takes as input (a) a text corpus linked to a knowledge base (using noisy distant supervision), (b) a knowledge base (and also a list of aliases for entities) and (c) a type hierarchy. All of these are readily available. The output is a high-recall corpus with entities linked to the KB and the type hierarchy. To show the usability of the training corpus, the paper performs excellent intrinsic and extrinsic analysis. In the intrinsic analysis, they show that the new corpus indeed has a lot more mentions annotated. In the extrinsic analysis, they show that state-of-the-art models trained on the new corpus have very impressive gains in recall when compared to the original existing datasets and also the wiki dataset with distant supervision. There is a loss in precision though, but it is fairly small when compared to the massive gains in recall. These experiments warrant the usability of the generated training corpus and also the framework they stipulate, which I think everyone should follow. Great work! Questions: 1. In Table 3, can you also report the difference in entities (and not entity mentions). I would be interested to see if you were able to find new entities. 2. Are you planning to release the two training datasets (and, if possible, the code)? Suggestion: The heuristics definitely work great but I think we can still do better if parts of stages 2 and 3 were replaced by a simple machine learned system. For example, in stage 2, just relying on heuristics to match to the prefix trees would result in always choosing the shortest alias and would be problematic if aliases of different entities share the same prefix. Restricting to entities in the document would definitely help here but still there might be unrecoverable errors. A simple model conditioned on the context would definitely perform better. Similarly, in stage 3, the POS based heuristics would just be more robust if a classifier is learned. ""","""9: Top 15% of accepted papers, strong accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
7,"""Very interesting overview of an existing knowledge base, will inspire interesting discussions""","""The paper is about a knowledge base that is constructed with a probabilistic model (Markov logic network, as implemented in Infer.NET). 
The system is expansive and covers many important aspects, such as data types and values, templates for extraction and generation, noise models. The model addresses three kinds of large scale inferences: (1) template learning, (2) schema learning, and (3) fact retrieval. The knowledge base construction approach is evaluated against other knowledge base approaches, YAGO2, NELL, Knowledge Vault, and DeepDive. The paper is a very interesting overview of the knowledge base Alexandria, and would inspire interesting discussions at the AKBC conference. My only pet peeve is a set of initial claims, intended to distinguish this approach from others, which are simply not true:- ""Alexandria's key differences are its unsupervised approach"" -- clearly Alexandria requires prior knowledge on types, and furthermore weak supervision for template learning etc. Unsupervised means ""not requiring any clue about what is to be predicted"". - ""KnowledgeVault cannot discover new entities"" -- I'd be surprised.- ""DeepDive uses hand-constructed feature extractors"" -- It is a matter of preference whether one seeds the extraction with patterns or data. This statement does not convince me. While the discussion of these differences is important, I suggest the authors use more professional language.""","""9: Top 15% of accepted papers, strong accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
7,"""Learning to extract facts with little prior except a structural prior from a probabilistic program""","""This paper uses a probabilistic program describing the process by which facts describing entities can be realised in text, and a large number of web pages, to learn to perform fact extraction about people using a single seed fact. Despite the prodigious computational cost (close to a half million hours of computation to acquire a KB only about people), I found the scale at which this paper applied probabilistic programming exciting. It suggests that providing a structural prior in this form, and then learning the parameters of a model with that structure, is a practical technique that could be applied widely. Questions that arose whilst reading: the most direct comparison was with a system using Markov Logic Networks, in which the structural prior takes the form of a FOL theory. A more direct comparison would have been useful - in particular, an estimate of the difficulty of expressing an equivalently powerful model, and the computational cost of training that model, in MLN. Quite a lot of tuning was required to make training tractable (for outlying values of tractable) - this limits the likely applicability of the technique. The paper suggests in future work an extension to open domain fact extraction, but it is not clear how complex or tractable the required probabilistic program would be. The one in the paper is in some respects (types mainly) specific to the facts-about-people setting. It is unclear why the TAC-KBP Slot Filling track was mentioned, given that performance on this track does not seem to have been evaluated. An informal evaluation suggesting beyond SoA performance is mentioned, but no useful details are given. This significantly weakens what otherwise could be a stand-out paper.""","""8: Top 50% of accepted papers, clear accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
7,"""Although the paper has a lot of promise, it is not suitable for publication in an academic conference in its present form, in my view. 
I advise the authors to review all the comments, and prepare a significantly revised version for future publication.""","""The authors present Alexandria, a system for unsupervised, high-precision knowledge base construction. Alexandria uses a probabilistic program to define a process of converting knowledge base facts into unstructured text. The authors evaluate Alexandria by constructing a high precision (typically 97%+) knowledge base for people from a single seed fact. Strengths:--although it is unusual to combine an intro and related work into a single section, I enjoyed the authors' succinct statement (which I agree with) about the 'holy grail of KB construction'. Overall, I think the writing of the paper started off on a strong note.--the paper has a lot of promise in considering a unified view of KB construction. Although I am not recommending an accept, I hope the authors will continue this work and submit a revised version for future consideration (whether in the next iteration of this conference, or another) Weaknesses:--It is not necessary to include 'actual programs' (Figure 1). It does not serve much of a purpose and is inappropriate; the authors should either use pseudocode, or just describe it in the text. --In Web-scale fact extraction, the domain-specific insight graphs (DIG) system should also be mentioned and cited, since it is an end-to-end system that has been used for KGC in unusual domains like human trafficking. --Starting from Section 2, the whole paper starts reading a bit like a technical manual/specification document. This kind of paper is not appropriate for an academic audience; the paper should have been written in a much more conceptual way, with actual implementations/programs/code details relegated to a github repo/wiki, and with a link to the same in the paper.""","""6: Marginally above acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
8,"""Very interesting paper that describes a positive contribution to the state of the art in BioNLP""","""This paper proposes a deep-learning-based method to solve the known BioNLP task of disease normalization on the NCBI disease benchmark (where disease named entities are normalized/disambiguated against the MeSH and OMIM disease controlled vocabularies and taxonomies). The best known methods (DNorm, TaggerOne) are based on a pipeline combination of sequence models (conditional random fields) for disease recognition, and (re)ranking models for linking/normalization. The current paper proposes instead an end-to-end entity recognition and normalization system relying on word embeddings, a siamese architecture and recursive neural networks to improve significantly (4%, 84 vs. 80% F1-score, T. 3). A key feature is the use of a GRU autoencoder to encode or represent the ""context"" (related entities of a given disease within the span of a sentence), as a way of approximating or simulating collective normalization (in graph-based entity linking methods), which they term ""coherence model"". This model is combined (weighted linear combination) with a model of the entity itself. Finally, the complete model is trained to maximize similarity between MeSH/OMIM and this combined representation. The model is enriched further with additional techniques (e.g., distant supervision). The paper is well written, generally speaking. The evaluation is exhaustive. In addition to the NCBI corpus, the BioCreative5 CDR (chemical-disease relationship) corpus is used. 
Ablation tests are carried out to test for the contribution of each module to global performance. Examples are discussed. There are a few minor issues that it would help to clarify: (1) Why GRU cells instead of LSTM cells? (2) Could you please explain/recall why (as implied) traditional models are NP-hard? I didn't get it. Do you refer to the theoretical complexity of Markov random fields/probabilistic graphical models? Maybe you should speak of combinatorial explosion instead and give some combinatorial figure (and link this to traditional methods). My guess is that this is important, as the gain in runtime performance (e.g., training time - F. 4) might be linked to this. (3) A link should be added to the GitHub repository archiving the model/code, to ensure reproducibility of results. (4) Could you please check for *statistical significance* for T. 3, 5, 6 and 7? At least for the full system (before ablations). You could use cross-validation. ""","""9: Top 15% of accepted papers, strong accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
8,"""Simple, fast method with decent results on disease normalization (linking)""","""Summary: The authors address the problem of disease normalization (i.e. linking). They propose a neural model with submodules for mention similarity and for entity coherence. They also propose methods for generating additional training data. Overall the paper is nicely written with nice results from simple, efficient methods. Pros:- Paper is nicely written with good coverage of related work- LCA analysis is a useful metric for severity of errors- strong results on the NCBI corpus- methods are significantly faster and require far fewer parameters than TaggerOne while yielding comparable results. Cons:- Results on BC5 are mixed. Why?- Data augmentation not applied to baselines- Methods are not very novel. Questions:- Were the AwA results applied only at test time or were the models (including baselines) re-trained using un-resolved abbreviation training data?""","""7: Good paper, accept""","""5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"""
8,"""very competent work on an important problem""","""The paper presents a method for named entity disambiguation tailored to the important case of medical entities, specifically diseases with MeSH and OMIM as the canonicalized entity repository. The method, coined NormCo, is based on a cleverly designed neural network with distant supervision from MeSH tags of PubMed abstracts and an additional heuristic for estimating co-occurrence frequencies for long-tail entities. This is very competent work on an important and challenging problem. The method is presented clearly, so it would be easy to reproduce its findings and adopt the method for further research in this area. Overall a very good paper. Some minor comments: 1) The paper's statement that coherence models have been introduced only recently is exaggerated. For general-purpose named entity disambiguation, coherence has been prominently used already by the works of Ratinov et al. (ACL 2011), Hoffart et al. (EMNLP 2011) and Ferragina et al. (CIKM 2010); and the classical works of Cucerzan (EMNLP 2007) and Milne/Witten (CIKM 2008) implicitly included considerations on coherence as well. This does not reduce the merits of the current paper, but should be properly stated when discussing prior works. 2) The experiments report only micro-F1. 
Why is macro-F1 (averaged over all documents) not considered at all? Wouldn't this better reflect particularly difficult cases with texts that mention only long-tail entities, or with unusual combinations of entities?""","""8: Top 50% of accepted papers, clear accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
9,"""A practical work but lacks methodological contribution""","""This paper introduces a project for structuring Wikipedia by aggregating the outputs from different systems through ensemble learning. It presents a case study of entity and attribute extraction from Japanese Wikipedia. My major concern is the lack of methodological contribution. - Ensemble learning, which seems to be the main methodological contribution, is applied in a straightforward way. The finding that ensemble learning gives better results than individual learners is trivial.- Authors state that a key feature of the project is using bootstrapping or active learning. This, however, is not explained in the paper nor supported by experimental results. Clarification or details are needed for the steps introduced in Sections 3-6:- In ""Extended Named Entity"", why would the top-down ontology ENE be better than the inferred or crowd-created ones? I think each of them has pros and cons.- In ""Categorization of Wikipedia Entities"", is the training data created by multiple annotators? What is the agreement between the multiple annotators for the test (and the training) data? How much error of the machine learning model is caused by incorrect human annotations?- In ""Share-Task Definition"", ""We give out 600 training data for each category."" Does it mean 600 entities?- In ""Building the Data"", what is the performance of experts and crowds in the different stages? Writing should be improved. Some examples:- what is meant by ""15 F1 score improvement on a category""?- a lot of text in the abstract is repeated in the introduction.- ""For example, Shinjuku Station is a kind of railway station is a type of ..."": not a sentence.- ""4 show the most frequent categories"": should be Table 1.- page 8, ""n t"": corrupted symbol. As the last comment, I wonder how (much) this ensemble learning method can be better than crowd based KBC methods, as motivated by the abstract and introduction. I would assume that machine learning has similar reliability issues as crowdsourcing even when ensemble learning is used. ""","""4: Ok but not good enough - rejection""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
9,"""The paper reports a summary of the SHINRA project for structuring Wikipedia via a collaborative construction scheme.""","""The paper describes the SHINRA2018 task, which constructs a knowledge base rather than evaluating only limited test data. The paper repeats the task with larger and better training data from the output of the previous task. The paper is well written in general, though there are some redundancies between the abstract and introduction with exactly the same content. The SHINRA shared task provides a good resource and platform for evaluating the knowledge graph construction task on Japanese Wikipedia. One of the concerns is that the paper does not really address the first statement in the abstract, as it still evaluates on limited test data with 100 samples. 
The main contribution of the paper seems to be ensemble learning, which has been proven effective in many previous works.""","""6: Marginally above acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
9,"""The paper describes an information extraction task, but too many questions are unanswered""","""The paper tackles an important problem, extraction of structured data from unstructured text, but lacks comparison with existing approaches. Section 1: Wikipedia is not only a knowledge base of the names in the world. Maybe the authors wanted to say the ""entities of the world""? The motivation of the paper is limited: what is the goal of the structured knowledge base? If the goal is better consistency, how do we improve consistency? There is nothing in the paper that indicates that the consistency is better than, say, manually created data with a voting mechanism. Why is this approach to KB structuring inherently coherent? Section 2: If the only problem of CYC is the coverage, why did the authors not try to improve the coverage of CYC instead of inventing a new method? Section 3: Why is a top-down design of the ontology needed? If the authors have learned it, what is the supporting evidence for it? Section 4: No annotation reliability is reported (e.g. the inter-annotator agreement score). Section 5: Why was ""chemical compound"" selected and not ""movie"" or ""building"" as more common sub-categories? 600 data points is quite small compared to standard datasets. What was the cost of the annotations? Section 6: Why were the Workers (Lancers) not used? What was their accuracy/cost? Maybe the cost could compensate for the low accuracy. Section 7: Why is it scientifically interesting to know that the authors are happy? Section 8: Why did the authors participate in the shared task? What are the references for stacking? As far as I know, stacking performs poorly compared to proper inference techniques, such as CRF. Why is it different in this case? Overall: the English writing is very approximate. I'm not a native speaker myself, but I would suggest the authors send the paper to a native English speaker for correction.""","""4: Ok but not good enough - rejection""","""3: The reviewer is fairly confident that the evaluation is correct"""
10,"""Simple approach for a practical use case""","""The authors describe and evaluate two approaches to collecting alternative aliases (synonyms) for entities in a taxonomy: expansion from WordNet synsets and from search queries followed by a binary classification to refine the generated candidate sets. Mitigating vocabulary mismatch in search applications provides a good motivating use case for ontology/taxonomy construction and is an important research direction. Questions: * How were the negative samples for training the classifier selected in 4.3? * What is the overlap between the synonym sets generated using WordNet and the search queries? * Can the WordNet-generated candidates improve performance for aligning synonyms collected from search queries? i.e. the output of the first method as input to the second synonym selection method. * Are there other evaluation results that can show improvement from implementing the proposed approaches on the target tasks, e.g. search or information extraction? Remarks: * Semantic Network seems to be a synonym for a Knowledge Graph, which is a more frequently used term. 
The relation has to be made explicit. * The structure of the paper is confusing: only one of the candidate selection methods is described in Section 3, but experimental results for two approaches are reported in Section 4.""","""6: Marginally above acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
10,"""Interesting approach to taxonomy extension""","""The paper presents an interesting approach to taxonomy extension that is based on identifying synonyms for component words of multi-word terms in the taxonomy. The approach seems to rely very much on WordNet, which may be a weakness. Coverage of WordNet is rather limited and the approach may therefore be limited in application. Also, word sense disambiguation (selecting the appropriate sense for a component word) is a challenge that has not been addressed in full detail, although this will be dealt with by the classification step in filtering, if I understand correctly. Overall, the paper is well-written and clear in ambitions and achieved results. The experiments use an extensive crowdsourced gold standard, which is a valuable research outcome on its own if it is released publicly.""","""7: Good paper, accept""","""3: The reviewer is fairly confident that the evaluation is correct"""
10,"""Authors propose a two stage synonym expansion framework for large shopping taxonomies. Paper is well written and empirical study is well conducted.""","""Authors present a method for expanding taxonomies with synonyms or aliases. The proposed method has two stages: 1) generate synonym candidates from WordNet and then 2) use a binary classifier to filter the candidates. The method is simple and effective. Paper is well written with ample empirical study and analysis. A couple of minor comments: 1) Rather than using WordNet for step 1, is it possible to use a similarity-based clustering method to mine candidates for each concept from a corpus? 2) For the word embeddings used in step 2, did the authors use off-the-shelf pre-computed embeddings or compute the embeddings from a domain specific (in this case shopping) corpus? Will the performance improve if a domain specific embedding is applied?""","""7: Good paper, accept""","""5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"""
11,"""Compelling initial result but would benefit from additional experiments, especially direct comparisons with prior work.""","""This work applies an existing system, DiscK (Chen and Van Durme, 2017), to medical concept linking. They show that DiscK improves the mean reciprocal rank of candidate generation over several simple baselines, and leads to slightly worse but significantly more efficient linking performance in one existing downstream system, DNorm (Leaman et al., 2013). Pros:- Clear description of the task and associated challenges. - Thorough explanation of the proposed system, experiments, and results. In particular, the discussion of the performance trade-off between MRR and coverage, and why it is useful that DiscK maintains robust MRR even though it under-performs several baselines w.r.t. coverage, was clear and useful. - The downstream concept linking result is compelling: for example, the DNorm+DiscK system with k=5000 and d=1000 only considers 4% of candidates but its coverage declines by just 2% compared to considering all 100% of candidates (as in DNorm alone). 
Cons/Suggestions:- Section 3/DiscK Overview: Without having read the DiscK paper in full (admittedly), I found Section 3 very hard to follow. How are the feature weights chosen? How are the feature parameters trained? What's the difference between a feature type and a feature value? Since the DiscK work is integral to the rest of the paper, the authors should spend more time giving a high-level overview of the approach (targeted at naive readers) before delving into the details.- Section 4: The authors note at the end of Section 4 that ""While we found that many mentions could be matched to concepts by lexical features, a significant portion required non-lexical features."" It would be helpful if the authors provided a concrete example of such a case. Also, they note that ""While features from additional mention properties, such as the surrounding sentence, were tested, none provided an improvement over features build from the mention span text alone."" This merits more detailed explication: What features did they try? Since examples requiring non-lexical features were a consistent source of error, why do the authors think that non-span-based features failed to influence model performance on these examples? - Data: It is unclear to me whether expanding the ontology to include the additional concepts (e.g. Finding) would prevent comparison with other systems for this dataset, and what is gained from this decision. In addition, it would be helpful if the authors explained what a preferred entry is in the context of the SNOMED-CT ontology. - Evaluation & Results: The authors note that ""The concept linking systems that were evaluated in the shared task may have included a candidate generation stage, but evaluations of these stages are not provided."" However, it is unclear to me why the systems that do include a candidate generation phase could not either be re-run or re-implemented to get such results, especially since Share/CLEF is a shared task with many entries and corresponding published system descriptions. Since direct comparisons with prior work were omitted, it is hard to gauge the strength of the baselines and proposed system. In addition, it might be useful to test the value of higher MRR in the downstream concept linking task by comparing the DiscK-based candidate generator against one of the baseline approaches with higher coverage but lower MRR. Can DNorm capitalize on a more informative ranking, or are the results similar as long as a basic level of coverage is achieved for a given k?- Related Work: The authors only briefly mention entity linking work outside of the biomedical domain. Did the authors evaluate any of these approaches, especially in the context of computationally-efficient linking architectures? Also, a more detailed description of DNorm would be useful. ""","""5: Marginally below acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
11,"""Some efficiency gains, but more rigorous evaluation and motivation required""","""This paper proposes using an existing method, DiscK, to pre-filter candidate concepts for a medical concept linking system. When combined with an existing state-of-the-art linker (DNorm), the joint system is able to consider far fewer candidate concepts, albeit with some drop in accuracy. 
These results are potentially significant, but assessing the actual impact requires additional measurements and better explanation of the motivation. Pros:- DiscK + DNorm achieves concept linking accuracy close to the previous state-of-the-art, while requiring DNorm to process significantly fewer candidates. For example, the system gets up to .759 MRR when considering 2000 candidates, compared to .781 MRR for the full set of ~125K candidates (Table 5). Cons:- Evaluation of candidate generation without downstream linking (Section 6) was not compelling. Without information about how the linker might use these candidate sets, it is hard to tell whether these numbers are good or not.- The claims of superior computational efficiency in Section 8 are not supported quantitatively. There should be a comparison between DNorm alone and DiscK + DNorm of how many mentions or documents can be processed per second. Otherwise, it is not clear how much overhead DiscK adds compared to the cost of running DNorm on more candidates. Hopefully this issue can be easily addressed.- Writing was at times unclear and misleading. Most importantly, the authors write, ""DiscK allows us to efficiently retrieve candidates over a large ontology in linear time."" In fact the point of DiscK is that it enables sublinear queries in the size of the candidate space/ontology. But perhaps the authors are referring to being linear in something else? After all, pairwise scoring models are also linear in the size of the candidate space.- Straightforward application of existing approaches, with no new technical contribution. Additional comments:- As an outsider to this medical linking task, I wanted to understand more why speed is important. How expensive is it to run DNorm on a large corpus? Are there online settings in which low latency is desired, or is this only for offline processing?- I would like some discussion of two other seemingly more obvious ways to improve runtime: (1) Learn a fast pairwise comparison model and filter using that. The number of candidates is large but not unmanageably large (125K), so it seems possible that you could still get significant wall clock speedups by doing this (especially as this is very parallelizable). Such an approach would also not have to abide by the restrictions imposed by the DiscK framework. (2) Speed up DNorm directly. Based on the original paper (Leaman et al., 2013), it seems that DNorm is slow primarily because of the use of high-dimensional tf-idf vectors. Is this correct? If so, might a simple dimensionality reduction technique, or use of lower-dimensional word vectors (e.g. from word2vec) already make DNorm much more efficient, and thus obviate the need for incorporating DiscK? Relatedly, are other published linkers as slow as DNorm, or much faster?- Based on Table 5, in practice you would probably choose a relatively high value of K (say ~2000), to maintain near state-of-the-art accuracy. We also know that at high values of K, character 4-gram is competitive with or even better than DiscK. So, what is the runtime profile of the character 4-gram approach? Footnote 8 mentions that it is asymptotically slower, but computing these character overlaps should be very fast, so perhaps speed is not actually a big issue (especially compared to the time it takes DNorm to process the remaining K candidates). This is related to point (1) above.- The authors mention that DNorm is state-of-the-art, but they don't provide context for how well other approaches do on this linking task. 
It would be good to know whether combining DiscK + DNorm is competitive with, say, the 2nd or 3rd-best approaches.- The organization of this paper was strange. In particular, Section 8 had the most important results, but was put after Discussion (Section 7) and was not integrated into the actual ""Evaluation and Results"" section (Section 6).- Measuring MRR in Section 6 is unmotivated if you claim you're only evaluating candidate generation, and plan to re-rank all the candidate sets with another system anyway. It is still good to report these MRR numbers, so that they can be compared with the numbers in Section 8, so perhaps all of the MRRs should be reported together.""","""5: Marginally below acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct"""
11,"""thorough study and a good reminder to think about this important step in the entity linking pipeline""","""In the area of entity linking, this paper focuses on candidate generation. The authors make a valid point about the importance of this stage: While much work focuses on disambiguation of a candidate set of entities, this only matters if the correct choice is present in the candidate set. This might not be true for simple candidate generation methods. The paper suggests a weighted combination of different candidate generating features, such as exact string match, BM25, character 4-grams or abbreviation matches. These are contrasted with two variations of the DiscK approach. The evaluation is based on the Share/Clef eHealth task. It is very exhaustive, evaluating different properties of the candidate set. From an efficiency standpoint, I would be interested in candidate methods that contain the true candidate in a small pool set; i.e., 1000 might not be affordable in a real system. I only have one doubt: The lines do not cross for MRR (Figure 2), so how is it possible that they cross for coverage (Figure 1)? Maybe standard error bars would be helpful to understand which differences are due to quantization artifacts. It would have been useful to study the effects of the candidate set on the final result when combined with a candidate-based entity linker. I am not sure I follow the terminology of (DiscK), weighted, combined, binary -- it would help to have an exhaustive list of methods with consistent names. If the authors find it useful, here is a related paper that also discusses the effects of the candidate method during entity linking: Dalton, Jeffrey, and Laura Dietz. ""A neighborhood relevance model for entity linking."" Proceedings of the 10th Conference on Open Research Areas in Information Retrieval. 2013.""","""9: Top 15% of accepted papers, strong accept""","""5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"""
12,"""OPIEC: An Open Information Extraction Corpus""","""In this paper, the authors build a new corpus for information extraction which is larger compared to prior public corpora and contains information not present in current corpora. The dataset can be useful in other applications. The paper is well written and easy to follow. It also provides details about the corpus. However, there are some questions for the authors: 1) It uses the NLP pipeline and the MinIE-SpaTe system. When you get the results, do you evaluate to what extent the results are correct? 2) In Section 3.4, the authors mention that the correctness is around 65%; what do you do with those incorrect tuples? 
3) Have you tried any task-based evaluation on your dataset?""","""7: Good paper, accept""","""3: The reviewer is fairly confident that the evaluation is correct""" 12,"""Good paper on producing a triple store from Wikipedia articles.""","""This paper presents a dataset of open-IE triples that were collected from Wikipedia with the help of a recent extraction system. This venue seems like an ideal fit for this paper and I think it would make a good addition to the conference program. While there is little technical originality, the overall execution of the experimental part is quite good and I like that the authors focused in their report on describing how they filtered the output of the employed IE system and that they present interesting examples from the conducted filtering steps.I particularly liked the section on comparing the new resource to the existing knowledge bases from the same source (YAGO, DBpedia), I think it makes a lot of sense to pick resources that leverage other parts of Wikipedia (category system, ...) and not the main article text, and to look into how coverage of the these different approaches relates.It would have been nice to also compare against other datasets/triple stores/... that used open-IE to extract from Wikipedia. A couple of references discussing such are listed in the paper, e.g., DefIE or KB-Unify seem like good candidates.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 12,"""Interesting analysis, but may not be enough content""","""The paper describes the creation of OPIEC -- an Open IE corpus over English Wikipedia.The corpus is created in a completely automatic manner, by running an off-the-shelf OIE system (MinIE), which yields 341M SVO tuples. Following, this resource is automatically filtered to identify triples over named entities (using an automatic NER system, yielding a corpus of 104M tuples), and only entities which match entries in Wikipedia (5.8M tuples).On the positive side, I think that resources for Open IE are useful, and can help spur more research and analyses.On the other hand, however, I worry that OPIEC may be too skewed towards the predictions of a specific OIE system, and that the work presented here consists mainly of running off-the-shelf can be extended to contain more novel substance, such as a new Open IE system and its evaluation against this corpus, or more dedicated manual analysis. For example, I believe that most similar resources (e.g., ReVerb tuples) were created as a part of a larger research effort.The crux of the matter here I think, is the accuracy of the dataset, reported tersely in Section 5.3, in which a manual analysis (who annotated? what were their guidelines? what was their agreement?) finds that the dataset is estimated to have 60% correct tuples. Can this be improved? Somehow automatically verified?Detailed comments:- I think that the paper should make it clear in the title or at least in the abstract that the corpus is created automatically by running an OIE system on a large scale. From current title and abstract I was wrongfully expecting a gold human-annotated dataset.- Following on previous points, I think the paper misses a discussion on gold vs. predicted datasets for OIE, and their different uses. 
Some missing gold OIE references: Wu and Weld (2010), Akbik and Löser (2012), Stanovsky and Dagan (2016).- Following this line, I don't think I agree with the claim in Section 4.3 that ""it is the largest corpus with golden annotations to date"". As far as I understand, the presented corpus is created in a completely automated manner and bound to contain prediction errors.- I think that some of the implementation decisions seem sometimes a little arbitrary. For instance, for the post-processing example which modifies (Peter Brooke; was a member of; Parliament) to (Peter Brooke; was ; a member of Parliament), I think I would've preferred the original relation: imagining a scenario where you look for all members of parliament (X; was a member of; Parliament), or for all of the things Peter Brooke was a member of (Peter Brooke; was a member of; Y), the original form seems more convenient to me.Minor comments & typos:- I assume that in Table 1, unique relations and arguments are also in millions? I think this could be clearer, if that's indeed the case.- I think it'd be nice to add dataset sizes to each of the OPIEC variants in Fig 1.- End of Section 3.1 ""To avoid loosing this relationship"" -> ""losing this relationship""- Top of P. 6: ""what follows, we [describe] briefly discuss these""- Section 4.5 (bottom of p. 9) ""NET type"" -> ""NER type""?""","""6: Marginally above acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 14,"""Solid work, convincing experiments and results""","""The paper presents innovative work towards learning numerical attributes in a KB, which the authors claim to be the first of its kind. The approach leverages KB embeddings to learn feature representations for predicting missing numerical attribute values. The assumption is that data points that are close to each other in a vector (embeddings) space have the same numerical attribute value. Evaluation of the approach is on a set of the 10 highest-ranked numerical attributes in a QA setting with questions that require a numerical answer.The paper is well written and the approach is explained in full detail. My only concern is the application of the method across numerical attributes of (very) different granularity and context. The authors briefly mention the granularity aspect in section 3 and point to a normalization step. However, this discussion is very brief and leaves perhaps some further questions open on this.""","""8: Top 50% of accepted papers, clear accept""","""2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper""" 14,"""Nice paper on prediction of numeric attributes in KBs.""","""The paper reports on the prediction of numerical attributes in knowledge bases, a problem that has indeed received too little attention. It lays out the problem rather clearly and defines datasets, a number of baselines as well as a range of embedding-based models that I like because they include both simple pipeline models (learn embeddings first, predict numerical attributes later) and models that include the numerical attributes into embedding learning. I also appreciate that the paper makes an attempt to draw together ideas from different research directions.Overall, I like the paper. It's very solidly done and can serve as an excellent base for further studies.
Of course, I also have comments/criticisms.First, the authors state that one of the contributions of the paper is to ""introduce the problem of predicting the value of entities numerical attributes in KB"". This is unfortunately not true. There is relatively old work by Davidov and Rappoport (ACL 2010) on learning numeric attributes from the web (albeit without a specific reference to KBs), and a more recent study specifically aimed at attributes from KBs (Gupta et al. EMNLP 2015) which proposed and modelled exactly the same task, including defining freely available FreeBase-derived datasets.More generally speaking, the authors seem to be very up to date regarding approaches that learn embeddings directly from the KB, but not regarding approaches that use text-based embeddings. This is unfortunate, since the model that we defined is closely related to the LR model defined in the current paper, but no direct comparison is possible due to the differences in embeddings and dataset.Second, I feel that not enough motivation is given for some of the models and their design decisions. For example, the choice of linear regression seems rather questionable to me, since the assumptions of linear regression (normal distribution/homoscedasticity) are clearly violated by many KB attributes. If you predict, say, country populations, the error for China and India is orders of magnitude higher than the error for other countries, and the predictions are dominated by the fit to these outliers. This not only concerns the models but also the evaluation, because MAE/RMSE also only make sense when you assume that the attributes scale linearly -- Gupta et al. 2015 use this as motivation for using logistic regression and a rank-based evaluation. I realize that you comment on non-linear regression on p.12 and give a normalized evaluation on p.13: I appreciate that, even though I think that it only addresses the problem to an extent.Similarly, I like the label propagation idea (p. 7) but I lack the intuition why LP should work on (all) numeric attributes.If, say, two countries border each other, I would expect their lat/long to be similar, but why should their (absolute) GDP be similar? What is lacking here is a somewhat more substantial discussion of the assumptions that this and the other models make about the structure of the knowledge graph and the semantics of the attributes.Smaller comment: * Top of p.5, formalization: This looks like f is a general function, but I assume that one f is supposed to be learned for each attribute? Either it should be f_a, or f should have the signature E x A -> R. (p4: Why is N used as a separate type for numeric attributes if the function f is then supposed to map into reals anyway?)""","""8: Top 50% of accepted papers, clear accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 14,"""interesting paper with decent contribution, but doubts about practical viability""","""The paper presents a method to predict the values of numerical properties for entities where these properties are missing in the KB (e.g., population of cities or height of athletes).To this end, the paper develops a suite of methods, using regression and learned embeddings.
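To make the setup discussed in the two reviews above concrete, here is a minimal illustrative sketch (an editor's addition, not the authors' code) of regressing a numerical KB attribute on pre-trained entity embeddings. The entity names, the embedding dimensionality, and the attribute values are hypothetical placeholders, and the log transform is one simple way to blunt the outlier problem raised above.

import numpy as np
from sklearn.linear_model import Ridge

# entity -> embedding vector; in practice these would come from a trained KB-embedding model
rng = np.random.default_rng(0)
embeddings = {e: rng.normal(size=50) for e in ["city_a", "city_b", "city_c", "city_d"]}

# entity -> known attribute value (hypothetical training signal); city_d has no value yet
population = {"city_a": 8.9e6, "city_b": 2.1e6, "city_c": 19.0e6}

train = list(population)
X = np.stack([embeddings[e] for e in train])
# log1p compresses the heavy right tail so a few mega-cities do not dominate the squared-error fit
y = np.log1p([population[e] for e in train])

model = Ridge(alpha=1.0).fit(X, y)
pred = np.expm1(model.predict(embeddings["city_d"].reshape(1, -1)))[0]
print(f"predicted population for city_d: {pred:,.0f}")

Evaluating in log space (or with a rank-based measure, as suggested above) would likewise sidestep the concern that a raw RMSE is dominated by a few very large values.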
Some of the techniques resemble those used in state-of-the-art knowledge graph completion, but there is novelty in adapting these techniques to the case of numerical properties.The paper includes a comprehensive experimental evaluation of a variety of methods.This is interesting work on an underexplored problem, and it is carried out very neatly.However, I am skeptical that it is practically viable.The prediction errors are such that the predicted values can hardly be entered into a high-quality knowledge base.For example, for city population, the RMSE is in the order of millions, and for person height the RMSE is above 0.05 (i.e., 5 cm). So even on average, the predicted values are way off. Moreover, average errors are not really the decisive point for KB completion and curation.Even if the RMSE were small, say 10K for cities, for some cities the predictions could still be embarrassingly off.So a knowledge engineer could not trust them and would hardly consider them for completing the missing values.Specific comment:The embeddings of two entities may be close for different reasons, potentially losing too much information.For example, two cities may have close embeddings because they are in the same geo-region (but could have very different populations) or because they have similar characteristics (e.g., both being huge metropolitan areas). Likewise, two athletes could be close because of similar origin and similar success in Olympic Games, or because they played in the same teams. It is not clear how the proposed methods can cope with these confusing signals through embeddings or other techniques.""","""6: Marginally above acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 15,"""Nice experimental study""","""The paper addresses cross-sentence n-ary relation extraction. The authors propose a model consisting of an LSTM layer followed by a CNN layer and show that it outperforms other model choices. The experiments are sound and complete and the presented results look convincing. The paper is well written and easy to follow. In total, it presents a nice experimental study.Some unclear issues / questions for the authors:- ""The use of multiple filters facilitates selection of the most important feature for each feature map"": What do you mean by this sentence? Don't you get another feature map for each filter? Isn't the use of multiple filters rather to capture different semantics within the sentence?- ""The task of predicting n-ary relations is modeled both as a binary and multi-class classification problem"": How do you do that? Are there different softmax layers? And if yes, how do you decide which one to use?- Table 1/2: How can you draw conclusions about the performance on binary and ternary relations from these tables? I can only see the distinction of single sentence and cross sentence there.- Table 3: The numbers for short distance spans (mostly 20.0) look suspicious to me. What is the frequency of short/medium/long distance spans in the datasets? Are they big enough to be able to draw any conclusions from them?- You say that CNN_LSTM does not work because after applying the CNN all sequential information is lost. But how can you apply an LSTM afterwards then? Is there any recurrence at all? (The sequential information would not be lost after the CNN if you didn't apply pooling. Have you tried that?)- Your observation that more than two positional embeddings decrease the performance is interesting (and unexpected).
Do you have any insights on this? Does the model pay attention at all to the second of three entities? What would happen if you simply deleted this entity or even some context around this entity (i.e., perform an adversarial attack on your model)?Other things that should be improved:- sometimes words are in the margin- there are some typos, e.g., ""mpdel"", ""the dimensions ... was set... and were initialised"", ""it interesting""""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 15,"""Good paper with small modeling improvements but thorough evaluations""","""The paper presents a method for n-ary cross sentence relation extraction.Given a list of entities, and a list of sentences the task is to identify which relation (from a predefined list) is described between the entities in the given sentences.The proposed model stacks CNN on LSTM to get long range dependencies in the text, and shows to be effective, either beating or equalling the state-of-the-art on two datasets for the task.Overall, I enjoyed reading the paper, and would like to see it appear in the conference.While the proposed model is not very novel, and was shown effective on other tasks such as text classification or sentiment analysis, this is the first time it was applied for this specific task.In addition, I appreciate the additional evaluations, which ablate the different parts of the model, analyze its performance by length between entities, compare it with many variations as baselines and against state-of-the-art for the task.My main comments are mostly in terms of presentation - see below.Detailed comments:In general, I think that wording can be tighter, and some repetitive information can be omitted. For example, Section 4 could be condensed to highlight the main findings, instead of splitting them across subsections.I think that Section 3.1.2 (Position Features) would benefit from an example showing an input encoding.Table 5 shows up in the references. Minor comments and typos:Text on P. 9 overflows the page margins.I think that Table 3 would be a little easier to read if the best performance in each column were highlighted in some manner.Section 2, p. 3: mpdel -> model.Perhaps using Figure 1 instead of Listing 1 is more consistent with *ACL-like papers?""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 15,"""Review of Combining Long Short Term Memory and Convolutional Neural Network for Cross-Sentence n-ary Relation Extraction""","""The paper presents an approach to cross-sentence relation extraction that combines LSTMs and convolutional neural network layers with word and position features. Overall the choices made seem reasonable, and the paper includes some interesting analysis / variations (e.g., showing that an LSTM layer followed by a CNN is a better choice than the other way around).Evaluation is performed on two datasets, Quirk and Poon (2016) and a chemical induced disease dataset. The paper compares a number of model variations, but there don't appear to be any comparisons to state-of-the-art results on these datasets. 
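To make the position-feature input encoding and the LSTM-followed-by-CNN stacking discussed in the reviews above concrete, here is a minimal hypothetical sketch (an editor's illustration in PyTorch, not the authors' model); the layer sizes, the two-entity binary-relation setting, and the distance bucketing are all assumptions.

import torch
import torch.nn as nn

class LstmCnnRelationClassifier(nn.Module):
    # Illustrative only: word embeddings plus one relative-position embedding per entity,
    # encoded by a BiLSTM, then a 1-D CNN with max-pooling over time, then relation logits.
    def __init__(self, vocab_size, n_positions, n_relations,
                 word_dim=100, pos_dim=20, hidden=64, n_filters=100, kernel=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.pos_emb1 = nn.Embedding(n_positions, pos_dim)   # bucketed distance to entity 1
        self.pos_emb2 = nn.Embedding(n_positions, pos_dim)   # bucketed distance to entity 2
        self.lstm = nn.LSTM(word_dim + 2 * pos_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hidden, n_filters, kernel, padding=1)
        self.out = nn.Linear(n_filters, n_relations)

    def forward(self, tokens, dist_e1, dist_e2):
        # tokens, dist_e1, dist_e2: (batch, seq_len) integer tensors;
        # dist_e1[i, j] is the bucketed offset of token j from entity 1 in example i
        x = torch.cat([self.word_emb(tokens),
                       self.pos_emb1(dist_e1),
                       self.pos_emb2(dist_e2)], dim=-1)
        h, _ = self.lstm(x)                              # (batch, seq_len, 2 * hidden)
        c = torch.relu(self.conv(h.transpose(1, 2)))     # (batch, n_filters, seq_len)
        return self.out(c.max(dim=2).values)             # max over time, then classify

model = LstmCnnRelationClassifier(vocab_size=5000, n_positions=200, n_relations=5)
logits = model(torch.randint(0, 5000, (2, 30)),
               torch.randint(0, 200, (2, 30)),
               torch.randint(0, 200, (2, 30)))
print(logits.shape)  # torch.Size([2, 5])

Reversing the stacking (CNN with pooling first, then LSTM) would discard the token order that the recurrent layer needs, which is the point raised in the first of the three reviews above.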
The paper could benefit from comparisons to SOTA on these or other datasets.""","""6: Marginally above acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 17,"""Interesting approach to a new task + new data set""","""This work aims to address the problem of answering visual-relational queries in knowledge graphs where the entities are associated with web-extracted images.The paper introduces a newly constructed large scale visual-relational knowledge graph built by scraping the web. Going beyond previous data sets like VisualGenome having annotations within the image, the ImageGraph data set that this work proposes allows for queries over relations between multiple images and will be useful to the community for future work. Some additional details about the dataset would have been useful such as the criteria used to decide ""low quality images"" that were omitted from the web crawl as well as the reason for omitting 15 relations and 81 entities from FB15k. While existing relational-learning models on knowledge graphs employ an embedding matrix to learn a representation for the entity, this paper proposes to use deep neural networks to extract a representation for the images. By employing deep representations of images associated with previously unseen entities, their method is also able to answer questions by generalizing to novel visual concepts, providing the ability to zero-shot answer questions about these unseen entities.The baselines reported by the paper are weak especially the VGG+DistMult baseline with very low classifier score leading to its uncompetitive result. It would be worth at this point to try and build a better classifier that allows for more reasonable comparison with the proposed method. (Accuracy 0.082 is really below par) As for the probabilistic baseline, it only serves to provide insights into the prior biases of the data and is also not a strong enough baseline to make the results convincing.Well written paper covering relevant background work, but would be much stronger with better baselines.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 17,"""Compelling new task and dataset""","""The paper introduces several novel tasks for visual reasoning resembling knowledge base completion tasks but involving images linked to entities: finding relations between entities represented by images and finding images given an image and a relation. The task is accompanied with a new dataset, which links images crawled from the web to FreeBase entities. The authors propose and evaluate the first approach on this dataset.The paper is well written and clearly positions the novelty of the contributions with respect to the related work.Questions:* What are the types of errors of the proposed approach? The error analysis is missing. A brief summary or a table based on the sample from the test set can provide insights of the limitations and future directions.* Is this task feasible? In some cases information contained in the image can be insufficient to answer the query. Error analysis and human baseline would help to determine the expected upper-bound for this task.* Which queries involving images and KG are not addressed in this work? The list of questions in 4.1. can be better structured, e.g. 
in a table/matrix: Target (relation/entity/image), Data (relation/entity/image)""","""9: Top 15% of accepted papers, strong accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 17,"""REVIEW""","""The paper proposes to extend knowledge base completion benchmarks with visual data to explore novel query types that allow searching and completing knowledge bases by images. Experiments were conducted on standard KB completion tasks using images as entity representations instead of one-hot vectors, as well as a zero-shot task using unseen images of unknown entities.Overall I think that enriching KBs with visual data is appealing and important. Using images to query knowledge bases can be a practical tool for several applications. However, the overall experimental setup suffers from several problems. The results are overall very low. In the non-zero-shot experiments I would like to see a comparison to using entity embeddings, and maybe even using a combination of both, as this is the more interesting setup. For instance, I would like to see whether using images as additional information can help build better entity representations. The explored link prediction models are all known, so apart from using images instead of entities there is very limited novelty. The authors find that concatenation followed by dot-product with the relation vector works best. This is very unfortunate because it means that there is no interaction between h and t at all, i.e.: s(h, r, t) = [h; t] * r = h * r_1 + t * r_2. This means that finding t given h, r only depends on r and not on h at all. Finally, this shows that the proposed image embeddings derived from the pretrained VGG16 model are not very useful for establishing relations.Given the mentioned problems I can unfortunately not recommend this paper for acceptance.Other comments: - I wouldn't consider a combination of pretrained image embeddings based on CNNs with KB embeddings a ""novel machine learning approach"", but rather a standard technique - redefine operators when describing LP models: dot is typically used for element-wise multiplication, for concatenation use [h; t] for instance - (head, relation, tail) is quite unusual --> better: (subject, predicate, object) - baselines are super weak. Concatenation should be the baseline as it connects h and t with r independent of each other. What is the probabilistic baseline?""","""4: Ok but not good enough - rejection""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 18,"""Good paper, some clarifications needed""","""Summary: This paper addresses the ARC dataset by reformulating the question using embeddings from ConceptNet. Their model selects a few terms from the question using the embeddings from ConceptNet, rewrites the query based on the selected terms, retrieves the documents and solves the query. The empirical result shows that embeddings from ConceptNet are beneficial, and the overall result is comparable to recent performance on the ARC dataset.Quality - pros: 1) This paper contains a thorough study of recent QA models and datasets.2) This paper describes the model architecture, conducts ablation studies of different Essential Terms classifiers, and includes thorough comparisons with recent models on ARC challenges.cons: - Although the paper includes recent works on QA models/datasets, it doesn't contain much discussion of query reformulation.
For example, ""Ask the Right Questions: Active Question Reformulation with Reinforcement Learning (Buck et al., ICLR 2018) is one of the related works that the paper didnt cite.- The paper does not have any example of reformulated queries or error analysis.Claritypros1) The paper describes the framework and model architecture carefully.cons1) It is hard to understand how exactly they reformulate the query based on selected terms. (I think examples would help) For example, in Fig 2, after activities, used, conserve and water were selected, how does rewriter write the query? The examples will help.2) Similar to the above, it would be helpful to see the examples of decision rules in Section 5.2.3) It is hard to understand how exactly each component of the model was trained. First of all, is rewrite module only trained on Essential Terms dataset (as mentioned in Section 3.1.3) and never fine-tuned on ARC dataset? Same question for entailment modules: is it only trained on SciTail, not fine-tuned on ARC dataset? How did decision rules trained? Are all the modules trained separately, and havent been trained jointly? What modules were trained on ARC dataset? All of these are a bit confusing since therere many components and many datasets were used.Originality & significancepros* Query reformulation methods have been used on several QA tasks (like Buck et al 2018 above), and incorporating background knowledge has been used before too (as described in the paper), but I think its fairly original to do both in the same time.cons* It is a bit disappointing that the only part using background knowledge is selecting essential terms using ConceptNet embedding. I think the term using background knowledge is too general term for this specific idea.In general, I think the paper has enough contribution to be accepted, if some descriptions are better clarified.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 18,"""good qa paper""","""This paper focuses on the recently introduced ARC Challenge dataset, which contains 2,590 multiple choice questions authored for grade-school science exams. The paper presents a system that reformulates a given question into queries that are used to retrieve supporting text from a large corpus of science-related text. The rewriter is able to incorporate background knowledge from ConceptNet. A textual entailment system trained on SciTail that identifies support in the retrieved results. Experiments show that the proposed system is able to outperform several baselines on ARC.* (Sec 2.2) ""[acl-2013] Paraphrase-driven learning for open question answering"" and ""[emnlp-2017] Learning to Paraphrase for Question Answering"" can be added in the related work section.* (Sec 3.1) Seq2seq predicts 0 and 1 to indicate whether the word is salient. A more straightforward method is using a pointer network for the decoder, which directly selects words from the input. This method should be more effective than seq2seq used in Sec 3.1.1.* (Sec 3.1) How about the performance of removing the top crf layer? The LSTM layer and the classifier should play the most important role.* How to better utilize external resources is an interesting topic and is potentially helpful to improve the results of answering science exam questions. For example, the entailment module described in Sec 5.1 can be trained on other larger data, which in turn helps the problem with smaller data. 
I would like to see more details about this.* Are the improvements significant compared to the baseline methods? Significance test is necessary because the dataset is quite small.* Experiments on large-scale datasets are encouraged.""","""6: Marginally above acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 18,"""The system is a combination of many existing techniques, and is outperformed by several works.""","""This paper introduces an end-to-end system to answer science exam questions for the ARC challenge. The system is a combination of several existing techniques, including (i) query rewriting based on seq2seq or NCRF++, (ii) answer retriever, (iii) entailment model based on match-LSTM, and (iv) knowledge graph embeddings. The description of the system is clear, and there is abundant ablation study. However, I have following concerns about this paper:1. There seems to be no new techniques proposed in the system. Hence, the novelty of this work is questioned.2. I do not understand why the authors use TransH, which is a KG embedding model that differentiates one entity into different relation-specific representations.3. The system is significantly outperformed by Sun et al. 2018 and Ni et al. 2018.""","""6: Marginally above acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct""" 19,"""Elegant Method, Neat Result, Manuscript Needs Major Reorganization ""","""This paper presents a simple model for representing lexical relations as vectors given pre-trained word embeddings.Although the paper only evaluates on out-of-context lexical benchmarks, as do several other papers in lexical-semantics, the empirical results are very encouraging. The proposed method achieves a substantial gain over existing supervised and unsupervised methods of the same family, i.e. that rely only on pre-trained word embeddings as inputs.In my view, the main innovation behind this method is in the novel loss function. Rather than using just a single word pair and training it to predict the relation label, the authors propose using *pairs of word pairs* as the instance, and predicting whether both pairs are of the same relation or not. This creates a quadratic amount of examples, and also decouples the model from any schema of pre-defined relations; the model is basically forced to learn a general notion of similarity between relation vectors. I think it is a shame that the loss function is described as a bit of an afterthought in Section 4.4. I urge the authors to lead with a clear and well-motivated description of the loss function in Section 3, and highlight it as the main modeling contribution.I think it would greatly strengthen the paper to go beyond the lexical benchmarks and show whether the learned relation vectors can help in downstream tasks, such as QA/NLI, as done in pair2vec (pseudo-url). This comment is true for every paper in lexical semantics, not only this one in particular.It would also be nice to have an empirical comparison to pattern-based methods, e.g. Vered Shwartz's line of work, the recent papers by Washio and Kato from NAACL 2018 and EMNLP 2018, or the recently-proposed pair2vec by Joshi et al (although this last one was probably published at the same time that this paper was submitted). The proposed method doesn't need to be necessarily better than pattern-based methods, as long as the fundamental differences between the methods are clearly explained. 
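To make the pairs-of-word-pairs objective described in the first review of this paper concrete, here is a toy sketch (an editor's illustration with made-up word pairs, not the paper's code) of how such binary same-relation instances can be constructed:

# Illustrative sketch of the "pairs of word pairs" training-instance construction
# described in the review above (toy data, not the paper's code).
from itertools import combinations

labelled_pairs = [                       # (word pair, relation)
    (("cat", "animal"), "hypernym"),
    (("oak", "tree"), "hypernym"),
    (("wheel", "car"), "meronym"),
    (("keyboard", "laptop"), "meronym"),
]

# Every unordered pair of word pairs becomes one binary instance:
# label 1 if the two pairs share a relation, 0 otherwise.
instances = [((p1, p2), int(r1 == r2))
             for (p1, r1), (p2, r2) in combinations(labelled_pairs, 2)]

for (p1, p2), same in instances:
    print(p1, p2, "same-relation" if same else "different-relation")
# With n labelled pairs this yields n*(n-1)/2 instances -- the quadratic blow-up
# noted in the review -- and never references the relation inventory directly.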
I think it would still be a really exciting result to show that you can get close to pattern-based performance without pattern information.My main concern with the current form of the paper is that it is written in an extremely convoluted and verbose manner, whereas the underlying idea is actually really simple and elegant. For example, in Section 3, there's really no reason to use so many words to describe something as standard as an MLP. I think that if the authors try to rewrite the paper with the equivalent space as an ACL short (around 6-7 pages in AKBC format), it would make the paper much more readable and to-the-point. As mentioned earlier, I strongly advise placing more emphasis on the new loss function and presenting it as the core contribution.Minor comment: Section 2.2 describes in great detail the limitation of unsupervised approaches for analogies. While this explanation is good, it does not properly credit ""Linguistic Regularities in Sparse and Explicit Word Representations"" (Levy and Goldberg, 2014) for identifying the connection between vector differences and similarity differences. For example, the term 3CosAdd was actually coined in that paper, and not in the original (Mikolov, Yih, and Zweig; 2013) paper, in order to explain the connection between adding/subtracting vectors and adding/subtracting cosine similarities. The interpretation of PairDiff as a function of word similarities (as presented in 2.2) is very natural given Levy and Goldberg's observation.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 19,"""Novel solution and comprehensive experimentation""","""This paper proposes a novel solution to the relation composition problem when you already have pre trained word/entity embeddings and are interested only in learning to compose the relation representations . The proposed supervised relational composition method learns a neural network to classify the relation between any given pair of words and as a by-product, the penultimate layer of this neural network, as observed by the authors, can be directly used for relation representation.The experiments have been performed on the BATS dataset and the DiffVec Dataset.The inferences that were made to advocate for the usefulness of proposed MnnPL are as follows-- Out_of_domain relation prediction experiment to test for generalisability showed that MnnPL outperformed other baselines at this task- The interesting analysis in table 3 highlights the difficulty in representing lexicographic relations as compared to encyclopedic. MnnPL outperforms others here too. 
The authors provide a reasonable explanation (Figure 2) as to why the PairDiff operator that was proposed to work well on the Google analogy dataset works worse in this scenario.- The experiment to measure the degree of relational similarity using Pearson correlation coefficient, showcases that the relational embeddings from MnnPL are better correlated with human notion of relational similarity between word pairs- They also showed that MnnPl is less biased to attributional similarity between wordsThe authors show that the proposed MnnPL had outperformed other baselines on several experiments.Some of the positive aspects about the paper- elaborately highlighted all the implementations details in a crisp manner- Extensive experimentation done and a very due-diligent evaluation protocol.- In the experiments the authors have compared their proposed supervised operator extensively with other unsupervised operators like PairDiff, CONCAT, e.t.c and some supervised operators like SUPER_CONCAT, SUPER_DIFF e.t.c. They also compared against the bilinear operator proposed by Hakami et al., 2018, which was published very recently.Some of the limitations of the work- Though extensive experiments have been done and elaborate evaluation protocols have been followed to advocate for MnnPL, I believe that it lacked slightly on novelty side.- Reference to Table 2 on page 12 should actually be a reference to figure 2Questions for rebuttal:-- Some reasoning on why does the CONCAT baseline show a better Pearson correlation coefficient than PairDIff?- Interesting to see that CBOW performed better than others, especially GLOVE, on all experiments. Some analysis on this.- Break down of the performance for the two semantic relation types, could be shown on a few other datasets to strengthen claim.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 19,"""OK work, but the framing makes it sound trivial""","""== Summary ==The paper proposes a way to embed the relation between a given pair of words (e.g., (aquarium, fish)).- Assume a dataset of |R| relations (e.g., hypernym, meronym, cause-effect) and many word pairs for each relation.- In each of the 10 test scenarios, 5 relations are randomly chosen. The test procedure is: for each pair (a, b) with embedding r(a, b), rank (c, d) based on cosine(r(a, b), r(c, d)). Evaluate how well we can retrieve pairs from the same relation as (a, b) (top-1 accuracy and mean average precision).- The proposed method is to train a multiclass classifier on the |R| - 5 relations. The model is a feed-forward network, and the output of the last hidden layer is used as r(a, b).- The method is compared with unsupervised baselines (e.g., PairDiff: subtracting pre-trained embeddings of a and b) as well as similar supervised methods trained on a different objective (margin rank loss).== Pros ==- The evaluation is done on relations that are not in the training data. It is not trivial that a particular relation embedding would generalize to unseen and possibly unrelated relations. The proposed method generalizes relatively well. Compare this to the proposed margin rank loss objective, which performs well on the classification task (Table 7) but is worse than PairDiff on test data.== Cons ==- The paper is framed in such a way that the method is trivial. My initial thought from the abstract was: ""Of course, supervised training is better than unsupervised ones"", and the introduction does not help either. 
The fact that the method generalizes to unseen test relations, while a different supervised method does to a lesser extent, should be emphasized earlier.- The reason why the embedding method generalizes well might have something to do with the loss function used rather than how the word vectors are combined (difference, concatenation, bilinear, etc.). This could be investigated more. Maybe one loss is better at controlling the sizes of the vectors, which is an issue discussed in Section 2.2.- The result on the DiffVec dataset is pretty low, considering that a random baseline would get an accuracy of ~20% (1 in 5 classes). The results also seem to be only a bit better than the unsupervised baseline PairDiff on the BATS dataset.- The writing is a bit confusing at times. For instance, on page 12, for the ""lexical-overlap"" category, it should not be possible for any two test pairs to have exactly 1-word overlap, or maybe I am missing something.""","""7: Good paper, accept""","""3: The reviewer is fairly confident that the evaluation is correct""" 20,"""easy to follow and includes sufficient references""","""The survey paper examines the various components of semantic parsing and discusses previous work. The semantic parsing models are categorized into different types according to the supervision forms and the modeling techniques. Overall, the survey is easy to follow and includes sufficient references. The following points can be improved:* Training a semantic parser involves NL, MR, context, data, model, and learning algorithms. A summary and examples of popular datasets would be helpful.* Sec 3.1 Rule-based systems can be expanded. The current section is too brief.* Sec 2.1 Language for Meaning Representation and Sec 2.2 Grammar should be merged. * P9: ""Machine translation techniques"" is not a method but rather belongs under ""5. Alternate forms of supervision"".* Sec 8 is too brief now. More discussion of future work is welcome.minor:* Combinatory categorial Grammar -> Combinatory Categorial Grammar""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 20,"""Nice review of the development and recent advances in semantic parsing ""","""This work provided a comprehensive review of important works in semantic parsing. It starts with the rule-based systems in the early days. Then it described the introduction of statistical methods to learn from natural language and logical form pairs. Finally, it summarized the recent advances in weakly supervised semantic parsing or learning semantic parsing from denotations and the rise of seq2seq models. It also briefly compared different learning strategies (MML, RL, Max-Margin). The paper is well written and easy to follow. The survey covers most of the important works in the field. It provides a good summary of the development of the field as well as the most recent advances. I support the acceptance of this paper. Some minor comments: ""the ATIS domain is: What states border Texas? λx.state(x) ∧ borders(x, texas)."" Is this example from ATIS? It seems more like a GeoQuery example. Regarding Reinforcement Learning (section 7.2), there is a recent work (Liang et al., 2018) that is quite relevant. It introduced a principled RL method for semantic parsing, and compared it with other objectives like MML. It also introduced a systematic exploration strategy to address the exploration problem mentioned in this section. Might be worth discussing here.
Memory Augmented Policy Optimization for Program Synthesis and Semantic Parsing, Liang, Chen and Norouzi, Mohammad and Berant, Jonathan and Le, Quoc V and Lao, Ni, Advances in Neural Information Processing Systems, 2018""","""8: Top 50% of accepted papers, clear accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 20,"""Good survey paper, but no originality""","""Summary:This paper conducts a thorough survey on semantic parsing, as the title suggests. This paper introduces the formal definition of semantic parsing, categorizes the approaches, and describes the development of such systems from the 1970s to the very recent, in (fairly) chronological order.Quality & clarity:The survey is very thorough and self-contained, and the descriptions are all very clear and well-written.Originality & significance:Since it is a survey paper, it's hard to say it has originality. General comment:Although the survey is very thorough, the paper does not have an original contribution, which a conference paper should have.* Update: I had a misunderstanding about the policy regarding survey papers. I agree with other reviewers that the paper is a well-written survey. Therefore I vote for acceptance.""","""7: Good paper, accept""","""3: The reviewer is fairly confident that the evaluation is correct""" 21,"""Needs better evaluation ""","""This paper presents a method to jointly learn word embeddings using co-occurrence statistics as well as by incorporating hierarchical information from semantic networks like WordNet. In terms of novelty, this work only provides a simple extension to earlier papers [1,2] by changing the objective function to instead make the word embeddings of a hypernym pair similar but with a scaling factor that depends on the distance of the words in the hierarchy.While the method seems to learn some amount of semantic properties, most of the baselines reported seem either outdated or ill-fitted to the task and do not serve well to evaluate the value of the proposed method for the given task. For example, the JointRep baseline is based on a semantic similarity task which primarily learns word embeddings based on synonym relations and seems not to be an appropriate baseline to compare the current approach to.Further, there are two primary methods of incorporating semantic knowledge into word embeddings - by incorporating them during the training procedure or by post-processing the vectors to include this knowledge. While I understand that this method falls into the first category, it is still important and essential to compare to both types of strategies of word vector specialization. In this regard [3] has been shown to beat HyperVec and other methods on hypernym detection and directionality benchmarks and should be included in the results. It would also be interesting to see how the current approach fares on graded hypernym benchmarks such as HyperLex. Minor comments: in Section 4.2 there is a word extending out of the column boundaries. [1] Alsuhaibani, Mohammed, et al. ""Jointly learning word embeddings using a corpus and a knowledge base."" PLoS ONE (2018)[2] Bollegala, Danushka, et al. ""Joint Word Representation Learning Using a Corpus and a Semantic Lexicon."" AAAI. 2016.[3] Vulić, Ivan, and Nikola Mrkšić.
""Specialising Word Vectors for Lexical Entailment."" NAACL-HLT 2018.""","""5: Marginally below acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 21,"""Well motivated approach but some concerns""","""This paper proposed a joint learning method of hypernym from both raw text and supervised taxonomy data. The innovation is that the model is not only modeling hypernym pairs but also the whole taxonomy. The experiments demonstrate better or similar performance on hypernym pair detection task and much better performance on a new task called ""hierarchical path completion"". The method is good motivated and intuitive. Lots of analysis on the results are done which I liked a lot. But I have some questions for the authors.1) One major question I have is for the taxonomy evaluation part, I think there are works trying to do taxonomy evaluation by using node-level and edge-level evaluation. 'A Short Survey on Taxonomy Learning from Text Corpora:Issues, Resources, and Recent Advances' from NAACL 2017 did a nice summarization for this. Is there any reason why this evaluation is not applicable here?2) At the end of section 4.2, the author mentioned Retrofit, JointReps and HyperVec are using the original author prepared wordnet data. Then the supervised training data is different for different methods? Is there a more controlled experiment where all experiments are using the same training data?3) In section 4.4, there are three prediction methods are introduced including ADD, SUM, and DH. The score is calculated using cosine similarity. But the loss function used in the model is by minimizing the L2 distance between word embeddings? Is there any reason why not use L2 but cosine similarity in this setting? Also, I'm assuming SUM and DH are using cosine similarity as well? It might be useful to add that bit of information.4) The motivation for this paper is to using taxonomy instead of just hypernym pairs? Another line of research trying to encode the taxonomy structure into the geometry space such that the taxonomy will be automatically captured due to the self-organized geometry space. Some papers including but not restricted 'Order-Embeddings of Images and Language', 'Probabilistic Embedding of Knowledge Graphs with Box Lattice Measures'. Probably this line of work is not directly comparable, but it might be useful to add to the related work session.A few minor points: 1) In equation four of section 3, t_max appears for the first time. This equation maybe part of the GLOVE objective, but a one-sentence explanation of t_max might be needed here.2) at the end of section 3, the calculation of gradients for different parameters are given, but the optimization is actually performed by AdaGrad. Maybe it would be good to move these equations to the appendix.3) In section 4.1 experiment set up, the wordnet training data is generated by performing transitive closure I assume? How does the wordnet synsets get mapped to its surface form in order to do further training and evaluation?""","""5: Marginally below acceptance threshold""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 21,"""Recent relevant work not adequately discussed or compared. ""","""Paper summary: This paper presents a method of learning word embeddings for the purpose of representing hypernym relations. 
The learning objective is the sum of (a) a measure of the distributional inclusion difference vector magnitude and (b) the GloVe objective. Experiments on four benchmark datasets are mostly (but not entirely) positive versus some other methods.The introduction emphasizes the need for a representation that is ""able to encode not only the direct hypernymy relations between the hypernym and hyponym words, but also the indirect and the full hierarchical hypernym path"". There has been significant interest in recent work on representations aiming for exactly this goal, including Poincaré Embeddings [Nickel and Kiela], Order Embeddings [Vendrov et al], Probabilistic Order Embeddings [Lai and Hockenmaier], Box embeddings [Vilnis et al]. It seems that there should be empirical comparisons to these methods.I found the order of presentation awkward, and sometimes hard to follow. For example, I would have liked to see a clear explanation of test-time inference before the learning objective was presented, and I'm still left wondering why there is not a closer correspondence between the multiple inference methods described (in Table 3) and the learning objective.I would also have liked to see a clear motivation for why the GloVe embedding is compatible with and beneficial for the hypernym task. Relatedness is different from hypernymy.""","""4: Ok but not good enough - rejection""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 22,"""The paper proposes a potentially interesting direction, though needs to improve in many aspects.""","""The authors propose a semi-supervised method for relation classification, which trains multiple base learners using a small labeled dataset, and applies an ensemble of them to annotate unlabeled examples for semi-supervised learning. They conducted experiments on a BioCreative shared task for detecting chemical-protein interactions from biomedical abstracts. The experimental results suggest that ensembling could help denoise the self-supervised labels. Overall, this is an interesting direction to explore, but the paper can also improve significantly in several aspects.First, the discussion about related work is a bit lacking and at places slightly misguided. The authors seem to be unaware of a growing body of recent work in biomedical machine reading using indirect supervision, e.g., Deep Probabilistic Logic (EMNLP-18). These methods have demonstrated success in extracting biomedical relations across sentences and beyond abstracts, using zero labeled examples. Likewise, in discussing semi-supervised approaches and cross-view learning, the authors should discuss their obvious connections to recent progress such as Semi-Supervised Sequential Modeling with Cross-View Learning (EMNLP-18) and EZLearn (IJCAI-18).Additionally, the authors seem to pitch the proposed method against standard weak supervision approaches. E.g., the paper stated ""For the creation of the weak labels, we use classifiers pretrained in a small labeled dataset, instead of large Knowledge Bases which might be unavailable."" Effectively, the authors implied that distant supervision is not applicable here because it requires ""large knowledge bases"". In fact, the precise attraction of distant supervision is that knowledge bases are generally available for the relations of value, even though their coverage is sparse and not up-to-date.
And past work has shown success with small KBs using distant supervision, such as recent work in precision oncology: - Distant Supervision for Relation Extraction beyond the Sentence Boundary (EACL-17) - Cross-Sentence N-ary Relation Extraction with Graph LSTMs (TACL-17)A labeled dataset contains strictly more information than the annotated relations, and arguably is harder to obtain than the corresponding knowledge base. Some prior work even simulated distant supervision scenarios from labeled datasets, e.g., DISTANT SUPERVISION FOR CANCER PATHWAY EXTRACTION FROM TEXT (PSB-15). These won't detract significance in exploring ensemble learning, but the proposed direction should be considered as complementary rather than competing with standard weak supervision approaches.The paper should also discuss related datasets and resources other than BioCreative. E.g., BioNLP Shared Task on event extraction (Kim et al. 2009) is influential in biomedical relation extraction, followed by a number of shared tasks in the same space.Another major area that can be improved is the technical and experimental details. E.g.:The ensemble of base learners is key to the proposed method. But it's unclear from the paper how many base learners have been considered, what are their types, etc. At the high level, the paper should specify the total number of candidate base learners, distribution over major types (SVM, NN, ...), the types of chosen centroids, etc. For each type, the paper should include details about the variation, perhaps in supplement. E.g., SVM kernels, NN architectures. Table 3 shows the mean F1 score of base learners, which raises a number of questions. E.g., what're the top and bottom performers and their scores? If the mean is 53, that means some base learner learning from partial training set performs similar to LSTM learning from the whole training set, so it seems that out right some learner (supposedly not LSTM) is more suited for this domain? It's unclear whether LSTM is included in the base learner. If it was, then its low performance would mean that some top performers are even higher, so the ensemble gain could simply stem from their superiority. In any case, the lack of details makes it very hard to assess what's really going on in the experiment.A key point in the paper is that high-capacity methods such as LSTM suffers from small dataset. While in general this could be true, the LSTM in use seems to be severely handicapped. For one, the paper suggested that they undersampled in training LSTM, which means that LSTM was actually trained using less data compared to others. They did this for imbalance, but it is probably unnecessary. LSTMs are generally less sensitive to label imbalance. And if one really has to correct for that, reweighting instances is a better option given the concern about small training set to begin with.The paper also didn't mention how the word embedding was initialized in LSTM. If they're randomly initialized, that's another obvious handicap the LSTM suffers from. There are publicly available PubMed word2vec embedding, not to more recent approaches like Elmo/Bert. Minor comments:In Table 3, what does it mean by ""F1 of weak labels""? Does it mean that at test time, one compute the ensemble of predictions from base learners?The paper uses D_G in one place and D_B in another. Might be useful to be consistent.""Text trimming"" sounds quite odd. What the paragraph describes is just standard feature engineering (i.e., taking n-grams from whole text vs. 
in between two entities).Fig 3: it's unclear which line is which from the graph/caption. One has to guess from the analysis, Fig 5: what was used in tSNE? I.e., how is each instance represented? It's also hard to make any conclusion from the graph, as the authors' tried to make the point about distributions.""","""5: Marginally below acceptance threshold""","""5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature""" 22,"""review ""","""-summaryThis paper addresses the problem of generating training data for biological relation extraction, looking specifically at the biocreative chemprot dataset. The authors weakly label data by using a set of weak classifiers, then use those predictions as additional training data for a meta learning algorithm.-pros- nicely written paper with good exposition on related works- good analysis experiments-cons- experiments all on a single dataset- methodology isnt very novel-questions- In table 3, majority vote of weak classifiers outperforms the meta learning. Are these two numbers comparable and if so, does this mean an ensemble of weak classifiers is actually the best performing method in your experiments.""","""7: Good paper, accept""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct""" 22,"""An interesting paper in methodology, but I'm not entirely convinced by the experiments""","""This paper proposes an interesting combination of semi-supervised learning and ensemble learning for information extraction, with experiments conducted on a biomedical relation extraction task. The proposed method is novel (to certain degree) and intuitively plausible, but I'm not entirely convinced of its effectiveness. Pros:- Novel combination of semi-supervised learning and ensemble learning for information extraction- Good discussion of literature- Paper is well written and easy to followCons:- Experimental design and results are not supportive enough of the effectiveness of the proposed method.Specifically, I think the following problems undermine the claims:- The experiments are conducted on a single dataset. I understand it may be hard to get more biomedical RE datasets, but the proposed method is in principle not limited to biomedical RE (even though the authors limited their claim to biomedical RE). In general I think it needs multiple datasets to test a generic learning methodology.- Only one split of training data is used, and the unlabeled data to labeled data are close in scale. It would be really informative if there are experiments and performance curves with different labeled subsets of different sizes. Also, (after under-sampling) 4500 unlabeled vs. 2000 labeled seems less impressive and may be not very supportive of the usefulness of the proposed method. What will happen if there are only 200 labeled examples? 400?- No comparison with other semi-supervised learning methods for information extraction, so it's not clear how competent the proposed method is compared with other alternatives. 
- The fact that the mean base-learner performance is often on par with the meta-learner, and that a simple majority vote of the weak labels can largely outperform the meta-learner, may suggest that a better meta-learner than the LSTM should have been used.""","""5: Marginally below acceptance threshold""","""3: The reviewer is fairly confident that the evaluation is correct""" 23,"""review""","""As someone who doesn't have much background in rule learning, I found this paper interesting and rather clearly written. Section 5.1.1.1: ""(ii) tuples contained in the answer of Q' where Q' is the same as Q but without the rule with empty body, but not in the training set"" is unclear. In the algorithm, line 22, aren't rules removed until H leads to a ""safe"" UCQ? Section 6.1: ""In line 7 we formulate a UCQ Q from all the candidate rules in H (explained in 5.2 with an example)"" - I was unable to find the example in Section 5.2. It would be interesting to have an idea of the maximum scale that ProbFOIL+ can handle, since it seems to be the only competitor to the suggested method. In Section 7.2, does the ""learning time"" include the call to AMIE+? If not, it would be interesting to break down the time into its deterministic and learning components, since the former is only necessary to retrieve the correct probabilities. Being new to this subject, I found the paper to be somewhat clear. However, I found it sometimes hard to understand what was part of the proposed system and what was done in AMIE+ or SlimShot. For example: ""But, before calling SlimShot for lifted inference on the whole query, we first break down the query to independent subqueries such that no variable is common in more than one sub-query. Then, we perform inference separately over it and later unify the sub-queries to get the desired result."" This is described as important to the speedup over ProbFOIL+ in the conclusion, yet doesn't appear in Algorithm 1. Similarly, ""it caches the structure of queries before doing inference"" is mentioned in the conclusion but I couldn't map it to anything in Algorithm 1 or in the paper. I lean toward an accept because the work seems solid, but I feel like I don't have the background required to judge the contributions of this paper, which seems to me like a good use of AMIE+/SlimShot with a reasonable addition of SGD to learn rule weights. Some of the components which are sold as important for the speed-up in the conclusion aren't clear enough in the main text. Some numbers to experimentally back up how important these additions are to the algorithm would be welcome.""","""6: Marginally above acceptance threshold""","""1: The reviewer's evaluation is an educated guess""" 23,"""Good but not surprising""","""In general, the paper presents a routine practice, that is, applying lifted probabilistic inference to rule learning over probabilistic KBs, such that the scalability of the system is enhanced, but it is applicable to a limited scope of rules only. I would not vote for reject if other reviewers agree to acceptance. Specifically, the proposed algorithm SafeLearner extends ProbFOIL+ by using lifted probabilistic inference (instead of grounding); it first applies AMIE+ to find candidate deterministic rules, and then jointly learns the probabilities of the rules using lifted inference. The paper is structured well, and most parts of the paper are easy to follow. I have two major concerns with the motivation.
The paper states that there are two challenges associated with rule learning from probabilistic KBs, i.e., their sparse and probabilistic nature. 1) While two challenges are identified by the authors, the paper seems to deal with the latter issue only. How does sparsity affect the algorithm design? 2) The paper could be better motivated, although there is one piece of existing work on learning probabilistic rules from KBs (De Raedt et al. [2015]). Somehow, I am not convinced by the potential application of the methods; that is, after generating the probabilistic rules, how can I apply them? It would be appreciated if the authors could present some examples of the use of probabilistic rules. Moreover, if the main purpose is to complete probabilistic KBs, how does this probabilistic-logic-based approach compare against embedding-based approaches?""","""6: Marginally above acceptance threshold""","""1: The reviewer's evaluation is an educated guess""" 23,"""Sound approach for rule learning but heavy dependence on black-box algorithm to propose candidate rules""","""The paper proposes a model for probabilistic rule learning to automate the completion of probabilistic databases. The proposed method uses lifted inference, which helps with computational efficiency, given that non-lifted inference over rules containing ungrounded variables can be extremely computationally expensive. The databases used contain binary relations, and the probabilistic rules that are learned are also for discovering new binary relations. The use of lifted inference restricts the proposed model to only discover rules that are unions of conjunctive queries. The proposed approach uses AMIE+, a method that generates deterministic rules, to produce a set of candidate rules for which probabilistic weights are then learned. The model initializes the rule probabilities as the confidence scores estimated from the conditional probability of the head being true given that the body is true, and then uses maximum likelihood estimation on the training data to learn the rule probabilities. The paper presents an empirical comparison to deterministic rules and ProbFOIL+ on the NELL knowledge base. The proposed approach performs marginally better than deterministic rule learning. The approach proposed is straightforward and depends heavily on the candidate rules produced by the AMIE+ algorithm. The paper does not provide insights into the drawbacks of using AMIE+, the kinds of rules that will be hard for AMIE+ to propose, or how the proposed method could be improved to learn rules beyond the candidate set. End-to-End Differentiable Proving from NeurIPS (NIPS) 2017 also tackles the same problem, and it would be nice to see a comparison to that work.""","""4: Ok but not good enough - rejection""","""4: The reviewer is confident but not absolutely certain that the evaluation is correct"""
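To make the setup summarized in the last review concrete, the following is a minimal, hypothetical sketch (not the paper's SafeLearner implementation, which relies on AMIE+ candidate generation and lifted inference via SlimShot) of the general idea it describes: initialize rule probabilities from confidence scores, then refine them by maximizing the likelihood of a probabilistic KB with gradient descent. The noisy-or combination of rules, the toy data, and all names are illustrative assumptions.

```python
# Sketch only: maximum-likelihood refinement of rule probabilities,
# starting from confidence-score initializations (all data is toy).
import torch

# derivations[f][r] = 1.0 if candidate rule r derives fact f, else 0.0
derivations = torch.tensor([[1., 0., 1.],
                            [0., 1., 0.],
                            [1., 1., 0.]])
# target probabilities of the facts in the probabilistic KB
targets = torch.tensor([0.9, 0.6, 0.8])

# initialize rule weights with confidence-style scores (illustrative values),
# parameterized through logits so the probabilities stay in (0, 1)
init_conf = torch.tensor([0.7, 0.5, 0.4])
logits = torch.nn.Parameter(torch.log(init_conf / (1 - init_conf)))

opt = torch.optim.Adam([logits], lr=0.05)
for step in range(500):
    opt.zero_grad()
    rule_probs = torch.sigmoid(logits)
    # noisy-or: a fact holds if at least one rule deriving it "fires"
    p_fact = 1 - torch.prod(1 - derivations * rule_probs, dim=1)
    p_fact = p_fact.clamp(1e-6, 1 - 1e-6)
    # negative log-likelihood of the observed fact probabilities
    nll = -(targets * torch.log(p_fact)
            + (1 - targets) * torch.log(1 - p_fact)).sum()
    nll.backward()
    opt.step()

print(torch.sigmoid(logits).detach())  # learned rule probabilities
```

This is only meant to illustrate the "initialize from confidence, then fit by maximum likelihood" step the reviewer paraphrases; the actual system computes fact probabilities by lifted inference over safe UCQs rather than a noisy-or over per-rule derivations.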