{"forum": "BylEpe9ppX", "submission_url": "https://openreview.net/forum?id=BylEpe9ppX", "submission_content": {"title": "Answering Visual-Relational Queries in Web-Extracted Knowledge Graphs", "authors": ["Daniel O\u00f1oro-Rubio", "Mathias Niepert", "Alberto Garc\u00eda-Dur\u00e1n", "Roberto Gonz\u00e1lez-S\u00e1nchez", "Roberto J. L\u00f3pez-Sastre"], "authorids": ["daniel.onoro@neclab.eu", "mathias.niepert@neclab.eu", "alberto.duran@neclab.eu", "roberto.gonzalez@neclab.eu", "robertoj.lopez@uah.es"], "keywords": [], "abstract": "A visual-relational knowledge graph (KG) is a multi-relational graph whose entities are associated with images. We explore novel machine learning approaches for answering visual-relational queries in web-extracted knowledge graphs. To this end, we have created ImageGraph, a KG with 1,330 relation types, 14,870 entities, and 829,931 images crawled from the web. With visual-relational KGs such as ImageGraph one can introduce novel probabilistic query types in which images are treated as first-class citizens. Both the prediction of relations between unseen images as well as multi-relational image retrieval can be expressed with specific families of visual-relational queries. We introduce novel combinations of convolutional networks and knowledge graph embedding methods to answer such queries. \nWe also explore a zero-shot learning scenario where an image of an entirely new entity is linked with multiple relations to entities of an existing KG. The resulting multi-relational grounding of unseen entity images into a knowledge graph serves as a semantic entity representation. We conduct experiments to demonstrate that the proposed methods can answer these visual-relational queries efficiently and accurately.", "archival status": "Archival", "subject areas": ["Machine Learning", "Question Answering", "Knowledge Representation", "Relational AI"], "pdf": "/pdf/4e7ff4643f03767a05bf551c53affc1a7242020f.pdf", "paperhash": "o\u00f1ororubio|answering_visualrelational_queries_in_webextracted_knowledge_graphs", "_bibtex": "@inproceedings{\no{\\~n}oro-rubio2019answering,\ntitle={Answering Visual-Relational Queries in Web-Extracted Knowledge Graphs},\nauthor={Daniel O{\\~n}oro-Rubio and Mathias Niepert and Alberto Garc{\\'\\i}a-Dur{\\'a}n and Roberto Gonz{\\'a}lez-S{\\'a}nchez and Roberto J. 
L{\\'o}pez-Sastre},\nbooktitle={Automated Knowledge Base Construction (AKBC)},\nyear={2019},\nurl={https://openreview.net/forum?id=BylEpe9ppX}\n}"}, "submission_cdate": 1542459579892, "submission_tcdate": 1542459579892, "submission_tmdate": 1580939654631, "submission_ddate": null, "review_id": ["rJxQ2N6mfE", "H1x8rhmCWV", "Byenu7AhbN"], "review_url": ["https://openreview.net/forum?id=BylEpe9ppX&noteId=rJxQ2N6mfE", "https://openreview.net/forum?id=BylEpe9ppX&noteId=H1x8rhmCWV", "https://openreview.net/forum?id=BylEpe9ppX&noteId=Byenu7AhbN"], "review_cdate": [1547060394904, 1546693694029, 1546605427967], "review_tcdate": [1547060394904, 1546693694029, 1546605427967], "review_tmdate": [1550269635863, 1550269635650, 1550269625962], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["BylEpe9ppX", "BylEpe9ppX", "BylEpe9ppX"], "review_content": [{"title": "Interesting approach to a new task + new data set", "review": "This work aims to address the problem of answering visual-relational queries in knowledge graphs where the entities are associated with web-extracted images.\n\nThe paper introduces a newly constructed large-scale visual-relational knowledge graph built by scraping the web. Going beyond previous data sets such as VisualGenome, which have annotations within a single image, the ImageGraph data set that this work proposes allows for queries over relations between multiple images and will be useful to the community for future work. Some additional details about the dataset would have been useful, such as the criteria used to decide which \"low quality images\" were omitted from the web crawl, as well as the reason for omitting 15 relations and 81 entities from FB15k.\n\nWhile existing relational-learning models on knowledge graphs employ an embedding matrix to learn a representation for each entity, this paper proposes to use deep neural networks to extract a representation from the images. By employing deep representations of images associated with previously unseen entities, their method is also able to answer questions by generalizing to novel visual concepts, providing the ability to answer questions about these unseen entities in a zero-shot manner.\n\nThe baselines reported by the paper are weak, especially the VGG+DistMult baseline, whose very low classifier score leads to its uncompetitive result. It would be worth trying to build a better classifier that allows for a more reasonable comparison with the proposed method (an accuracy of 0.082 is really below par). As for the probabilistic baseline, it only serves to provide insights into the prior biases of the data and is also not a strong enough baseline to make the results convincing.\n\nWell-written paper covering relevant background work, but it would be much stronger with better baselines.", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Compelling new task and dataset", "review": "The paper introduces several novel tasks for visual reasoning that resemble knowledge base completion tasks but involve images linked to entities: finding relations between entities represented by images, and finding images given an image and a relation. The task is accompanied by a new dataset, which links images crawled from the web to FreeBase entities. 
The authors propose and evaluate the first approach on this dataset.\n\nThe paper is well written and clearly positions the novelty of the contributions with respect to the related work.\n\nQuestions:\n* What are the types of errors of the proposed approach? The error analysis is missing. A brief summary or a table based on a sample from the test set could provide insights into the limitations and future directions.\n* Is this task feasible? In some cases, the information contained in the image can be insufficient to answer the query. An error analysis and a human baseline would help determine the expected upper bound for this task.\n* Which queries involving images and the KG are not addressed in this work? The list of questions in 4.1 could be better structured, e.g. in a table/matrix: Target (relation/entity/image), Data (relation/entity/image)", "rating": "9: Top 15% of accepted papers, strong accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "REVIEW", "review": "The paper proposes to extend knowledge base completion benchmarks with visual data to explore novel query types that allow searching and completing knowledge bases by images. Experiments were conducted on standard KB completion tasks using images as entity representations instead of one-hot vectors, as well as on zero-shot tasks using unseen images of unknown entities.\n\nOverall I think that enriching KBs with visual data is appealing and important. Using images to query knowledge bases can be a practical tool for several applications. However, the overall experimental setup suffers from several problems. The results are overall very low. In the non-zero-shot experiments I would like to see a comparison to using entity embeddings, and maybe even using a combination of both, as this is the more interesting setup. For instance, I would like to see whether using images as additional information can help build better entity representations. The explored link prediction models are all known, so apart from using images instead of entities there is very limited novelty. The authors find that concatenation followed by a dot product with the relation vector works best. This is very unfortunate because it means that there is no interaction between h and t at all, i.e.: s(h, r, t) = [h;t] * r = h * r_1 + t * r_2. This means that finding t given h and r only depends on r and not on h at all. Finally, this shows that the proposed image embeddings derived from the pretrained VGG16 model are not very useful for establishing relations.\n\nGiven the mentioned problems I unfortunately cannot recommend this paper for acceptance.\n\nOther comments: \n- I wouldn't consider a combination of pretrained image embeddings based on CNNs with KB embeddings a \"novel machine learning approach\", but rather a standard technique \n- redefine operators when describing LP models: \\odot is typically used for element-wise multiplication; for concatenation use [h; t], for instance \n- (head, relation, tail) is quite unusual --> better: (subject, predicate, object)\n- baselines are super weak. Concatenation should be the baseline as it connects h and t with r independently of each other. 
What is the probabilistic baseline?", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["SJgrNSEf4N", "BkelH5gGNN", "SyeZ6nyp7N", "BJlqZej4m4", "S1lSE5O4Q4", "SygcEDu474"], "comment_cdate": [1549055277044, 1549040183689, 1548709049161, 1548165122109, 1548155437114, 1548154674303], "comment_tcdate": [1549055277044, 1549040183689, 1548709049161, 1548165122109, 1548155437114, 1548154674303], "comment_tmdate": [1549055277044, 1549040183689, 1548709049161, 1548165122109, 1548155437114, 1548154674303], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2019/Conference/Paper2/AnonReviewer2", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper2/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper2/AnonReviewer2", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper2/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper2/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper2/Authors", "AKBC.ws/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "RE", "comment": "I am sorry, I misunderstood that the model was fine-tuned, but from the paper it was not obvious to me. Anyway, my main issue remains: the concatenation model doesn't compute any entity interaction, which leaves me with the conclusion that VGG is not a good feature extractor. So if the concatenation model works the way I understand it, it means that the results cannot be promising, because you are computing a score in which the entity representations do not interact at all, although such interaction should be essential for any reasonable link prediction model."}, {"title": "Response to comments", "comment": "We respectfully disagree with the statement \u201cthe experiments tell us only that the pretrained VGG16 model is a bad feature extractor,\u201d and we are not sure how you came to that conclusion. The results for the image retrieval tasks are mixed not because the VGG16 network is a bad feature extractor but because the entities and corresponding images are heterogeneous. For instance, there are entities such as \u201cAlbert Einstein\u201d for which the feature extraction works very well, and others such as \u201cUSA\u201d for which it is challenging because there is no canonical image representing said entity. In the future, we envision models that pay attention to particular, more canonical images in such cases. We do not anticipate significant improvement from simply using a different feature extractor. Replacing the VGG16 network with a deeper or more advanced CNN, for example, should lead to minor improvements only.\nMoreover, we would like to clarify that the model is pre-trained on ImageNet, but the entire model is fine-tuned end-to-end with each of the scoring functions.\n\nThe results for relation prediction and zero-shot learning are actually promising. Since the input to the system is an image, there is no information about the entity that it represents. Hence, the problem and the performance cannot be compared with previous methods, as we present the problem (and possible solutions) for the first time.\n"}, {"title": "RE", "comment": "Well, the biggest issues still remain though. 
Considering everything, this work added images from a web search to an existing KG and showed that traditional approaches do not work well when using **a pretrained VGG16 model**; in fact, they do not work at all. Don't get me wrong, the idea of exploring visual information for AKBC and using it for zero-shot learning is very interesting, but the experiments are just too limited. End-to-end training should be considered instead of taking a pretrained feature extractor that doesn't work for this task, or at the very least other models for visual feature extraction should have been tried. The problem is that, in their current form, the experiments tell us only that the pretrained VGG16 model is a bad feature extractor, but nothing conclusive about existing models for link prediction. Because of that, unfortunately, the zero-shot experiments also add little value."}, {"title": "Response to comments", "comment": "We thank the reviewer for the encouraging review.\n\nWe have observed that the proposed models learn to relate the two entity types that are involved in a certain relationship/predicate. This is illustrated in Figure 6. However, they sometimes struggle to find an image of the entity that completes the query. This is related to your concern regarding the feasibility of the task, which highly depends on the entities involved in the query. Queries involving entities with canonical images (\u201cStatue of Liberty\u201d) are easier to answer than those involving entities that can be represented with heterogeneous images (\u201cUnited States of America\u201d). This is a nice suggestion, so we will include this discussion in the paper.\n\nOur queries solely include visual information. This means that we do not address queries where the underlying entity for a given image is known.\n"}, {"title": "Response to comments", "comment": "Thank you for the helpful review.\n\nThe novelty of our submission is in the visual KB that we have created, the novel query types we propose, and the combination of standard CNNs with KB embedding scoring functions. We do not claim that the scoring functions are novel. What is novel is the combination of CNNs on image data with these scoring functions for the novel query types we propose. Our work shows that some existing scoring functions are not better than a concatenation, at least for the image retrieval task. We have explored the zero-shot learning setting in the link prediction problem for the first time in the literature. We think that this also makes the work novel.\n\nExperiments show that the difficulty and performance are different from those of the traditional link prediction problem. We have (adapted and) benchmarked some of the most standard scoring functions from the link prediction literature on this problem and show that: i) they work, to some extent, in the relation prediction task, and ii) they perform poorly in the image retrieval task, where the (naive) concatenation approach performs best. We emphasize again the complexity of the latter problem (see the rebuttal for reviewer 1). We think that this set of experiments constitutes a solid answer to the question regarding the ability of traditional scoring functions to answer these new types of visual queries.\n\nPlease also note the setup descriptions in Section 4.1. In the image retrieval and link prediction scenarios (experiments (1) and (2)), we have images for which we do not know the underlying KG entities. 
This is the reason why we have not considered approaches that leverage entity embeddings, as this information is not available in our setting.\n\nThank you for your suggestion regarding the notation used to define certain operators. We completely agree. However, we think the notation (head, relation, tail) is not that uncommon, as it has been used in previous works (e.g., Bordes et al., 2013; Garcia-Duran & Niepert, 2017).\n\nFinally, the probabilistic baseline is explained in Section 5.2 at the end of the second paragraph: \u201cThe second baseline (probabilistic baseline) computes the probability of each relation type using the set of training and validation triples. The baseline ranks relation types based on these prior probabilities.\u201d\n"}, {"title": "Response to comments ", "comment": "Thank you for your helpful review.\n\nFB15k contains entities like \u201cISO_3166-1:SO\u201d or \u201cC16:1(n-7)\u201d, for which the crawler returned few images, or even no image at all. After removing those entities from the graph, some relations were no longer present in the data set.\n\nWe could have replaced our feature extractor, VGG-16, with more sophisticated/up-to-date CNNs such as ResNet or DenseNet. These models obtain moderate gains in accuracy on ImageNet, but they come at the cost of considerably slower running times. In our opinion, the low performance relates to the difficulty of the problem. Our data set has 15 times more categories/entities than ImageNet, hence the low performance in the image retrieval task. However, for the relation prediction task, where the output space is comparable to that of ImageNet, performance is much more competitive.\n\nWe agree that the experiments provide insights into the bias of the data set. Most importantly, the experiments show that the introduced problem is much harder than the traditional link prediction problem. We wanted to evaluate the scoring functions of state-of-the-art link prediction methods on these problems. At least for the visual entity prediction problem, these scoring functions did not work better than a concatenation. We hope this serves as a first step toward this new problem."}], "comment_replyto": ["BkelH5gGNN", "SyeZ6nyp7N", "S1lSE5O4Q4", "H1x8rhmCWV", "Byenu7AhbN", "rJxQ2N6mfE"], "comment_url": ["https://openreview.net/forum?id=BylEpe9ppX&noteId=SJgrNSEf4N", "https://openreview.net/forum?id=BylEpe9ppX&noteId=BkelH5gGNN", "https://openreview.net/forum?id=BylEpe9ppX&noteId=SyeZ6nyp7N", "https://openreview.net/forum?id=BylEpe9ppX&noteId=BJlqZej4m4", "https://openreview.net/forum?id=BylEpe9ppX&noteId=S1lSE5O4Q4", "https://openreview.net/forum?id=BylEpe9ppX&noteId=SygcEDu474"], "meta_review_cdate": 1549889782041, "meta_review_tcdate": 1549889782041, "meta_review_tmdate": 1551128229672, "meta_review_ddate ": null, "meta_review_title": "New useful dataset needs to make less strong claims about novelty", "meta_review_metareview": "This paper introduces a useful new dataset called ImageGraph that allows for the assessment of tasks on the combination of images and knowledge graphs. The paper presents a number of tasks over that dataset and architectures to address those tasks. The reviewers agree that the baselines could be improved upon, and there is a question as to whether the architectures are promising or not. \n\nI think the paper should be accepted because the dataset is fundamentally useful and the authors establish good baselines for the considered tasks. Additionally, using KGs with images together is really promising. 
However, joint image + KG embeddings have already been investigated elsewhere; see [1]. I would recommend that the authors soften their claims of novelty. The work is useful and points to a number of good directions for future work. \n\n[1] Towards Holistic Concept Representations: Embedding Relational Knowledge, Visual Attributes, and Distributional Word Semantics. S. Thoma, A. Rettinger, F. Both. International Semantic Web Conference, 694-710.", "meta_review_readers": ["everyone"], "meta_review_writers": [], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=BylEpe9ppX&noteId=Byx0ebeJSV"], "decision": "Accept (Poster)"}