{"forum": "7NZkNhLCjp", "submission_url": "https://openreview.net/forum?id=7NZkNhLCjp", "submission_content": {"keywords": ["protests", "contentious politics", "news", "text classification", "event extraction", "social sciences", "political sciences", "computational social science"], "TL;DR": "We describe a gold standard corpus of protest events that comprise of various local and international sources from various countries in English.", "authorids": ["AKBC.ws/2020/Conference/Paper90/Authors"], "title": "Cross-context News Corpus for Protest Events related Knowledge Base Construction", "authors": ["Anonymous"], "pdf": "/pdf/251850e331dc53570620d5d3d9341a2f37e684df.pdf", "subject_areas": ["Databases", "Information Extraction", "Machine Learning", "Applications"], "abstract": "We describe a gold standard corpus of protest events that comprise of various local and international sources from various countries in English. The corpus contains document, sentence, and token level annotations. This corpus facilitates creating machine learning models that automatically classify news articles and extract protest event related information, constructing databases which enable comparative social and political science studies. For each news source, the annotation starts on random samples of news articles and continues with samples that are drawn using active learning. Each batch of samples was annotated by two social and political scientists, adjudicated by an annotation supervisor, and was improved by identifying annotation errors semi-automatically. We found that the corpus has the variety and quality to develop and benchmark text classification and event extraction systems in a cross-context setting, which contributes to generalizability and robustness of automated text processing systems. This corpus and the reported results will set the currently lacking common ground in automated protest event collection studies.", "paperhash": "anonymous|crosscontext_news_corpus_for_protest_events_related_knowledge_base_construction"}, "submission_cdate": 1581705821189, "submission_tcdate": 1581705821189, "submission_tmdate": 1586358872346, "submission_ddate": null, "review_id": ["2aiAqA9nok_", "lLeGs0eaos6", "uR2OvU3Q0RR"], "review_url": ["https://openreview.net/forum?id=7NZkNhLCjp¬eId=2aiAqA9nok_", "https://openreview.net/forum?id=7NZkNhLCjp¬eId=lLeGs0eaos6", "https://openreview.net/forum?id=7NZkNhLCjp¬eId=uR2OvU3Q0RR"], "review_cdate": [1585318287602, 1585432992582, 1585375954134], "review_tcdate": [1585318287602, 1585432992582, 1585375954134], "review_tmdate": [1587564876606, 1586893487415, 1585695488566], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2020/Conference/Paper90/AnonReviewer1"], ["AKBC.ws/2020/Conference/Paper90/AnonReviewer2"], ["AKBC.ws/2020/Conference/Paper90/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["7NZkNhLCjp", "7NZkNhLCjp", "7NZkNhLCjp"], "review_content": [{"title": "Interesting corpus, with some clarifications needed", "review": "The paper describes a corpus of news articles annotated for protest events. Overall, this is an interesting corpus with a lot of potential for re-use, however, the paper needs some clarifications. A key contribution of the paper is that the initial candidate document retrieval is not based purely on keyword matching, but rather uses a random sampling and active learning based approach to find relevant documents. 
This is motivated by the incompleteness of dictionaries for protest events. While this might be true, it would have been good to see an evaluation of this assumption with the current data. It is a bit unclear in the paper, but were the K and AL methods run over the same dataset? What are the datasets for which the document relevance precision & recall are reported on page 8?\n\nI would also like to see a more detailed comparison with more general-purpose event extraction methods. Is there a reason why methodologies such as [1] and [2] cannot be re-applied for protest event extraction?\n\nA small formatting issue: the sub-sections on page 8 need newline breaks in between.\n\n[1] Pustejovsky, James, et al. \"Temporal and event information in natural language text.\" Language Resources and Evaluation 39.2-3 (2005): 123-164.\n[2] Inel, Oana, and Lora Aroyo. \"Validation methodology for expert-annotated datasets: Event annotation case study.\" 2nd Conference on Language, Data and Knowledge (LDK 2019). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2019.\n\nEDIT:\nThank you for addressing the issues I raised. I have changed the review to \"Accept\".", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Very carefully designed corpus", "review": "After looking over the authors' responses, I've decided to increase my rating of the paper. The main concern I originally had was sufficiently motivating the need for this specific dataset (compared to existing alternatives like ACE). The authors (in the comments below) have articulated qualitatively how ACE is insufficient, and demonstrated with experiments that generalization from ACE pretraining to this new dataset is poor. \n\n==== EDIT ====\n\nThe authors present a corpus of news articles about protest events. 10K articles are annotated with document-level labels, sentence-level labels, and token-level labels. Coarse-grained labels are Protest/Not, and fine-grained labels are things such as triggers/places/times/people/etc. 800 articles are Protest articles.\n\nThis is very detailed work & I think the resource will be useful. The biggest question here is: If my focus is to work on protest event extraction, what am I gaining by using this corpus vs existing event-annotated corpora (e.g. ACE) that aren\u2019t necessarily specific to protest events? I\u2019d like to see experiments of models run on ACE evaluated against this corpus & an analysis to see where the mistakes are coming from, and whether these mistakes are made by those models when trained on this new corpus.\n\n--- Below this are specific questions/concerns ---\n\nAnnotation:\nJust a minor clarification question. For the token-level annotations, how did you represent multi-token spans annotated with the same label? For example, in \u201cstone-pelting\u201d, did you indicate \u201cstone\u201d, \u201c-\u201d, and \u201cpelting\u201d tokens with their own labels or did you somehow additionally indicate that \u201cstone-pelting\u201d is one cohesive unit?\n\nSection 4:\nMild nitpick; Can you split the 3 annotation instruction sections into subsections w/ headings for easier navigation?\n\nSection 6:\nIt says your classifier restricts to the first 256 tokens in the document. But your classifier is modified to a maximum of 128 tokens. 
Can you explain this?\nWhy is the token extraction evaluation only for the trigger?\n\nRegarding the statement around \u201cThese numbers illustrate that the assumption of a news article contain a single event is mistaken\u201d. \nIt was mentioned earlier that this assumption is being made. Can you be more clear about which datasets make this assumption? \nCan you also explain how your limit to 128 (or 256?) tokens does/doesn\u2019t make sense given multiple events occur per article?\n", "rating": "8: Top 50% of accepted papers, clear accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "This paper describes a new formalism for annotating a politics-related corpus. They provide a dataset annotated with their guideline and introduce a BERT baseline over their corpus", "review": "This paper provides a detailed guideline for annotating a socio-political corpus. \nDetailed annotation of documents can be time-consuming and expensive. The authors propose a pipelining framework that starts annotation at higher levels and proceeds to more detailed annotation only where relevant content exists. \nAlong with their framework, they have provided the dataset of annotated documents, sentences, and tokens indicating whether protest-related language exists or not. \n\nThe authors also report baselines based on the transformer architecture for the document-level and sentence-level classifications. \n\nThe paper describes the details very clearly. The language is easy to follow. \nThe pros are as follows:\n-introduction of a new framework for annotating political documents \n-annotation of a large-scale corpus \n-their baseline results\n\nAlthough they have provided the baseline results on the document- and sentence-level classifications, they have not provided results for the token-level task. It would have been interesting to see if those results are also promising.\n\nThe authors mention that they use three levels of annotation (document, sentence, and token) to save time and avoid spending time on detailed annotation of negative labels. Can they examine how many samples are labeled negative and how much annotation time (in percent) and money this reduced?\nSome minor comments:\n-On page 2: I think \u201cresult\u201d should change to \u201cresulted\u201d in the sentence below:\nMoreover, the assumptions made in delivering a result dataset are not examined in diverse settings. \n\n-On page 3: who want to use this resources. \u2014> who want to use these resources. \n\n-On page 4: We design our data collection and annotation and tool development \u2014> We design our data collection, annotation, and tool development \n\n-Page 6: As it was mentioned above \u2014> As it is mentioned above \n\n-You are 1 page over the limit, and there is some repetition in the annotation manual, especially when talking about the arguments of an event; you can just say \u201cas mentioned above\u201d. \n\n-The authors mention that they use three levels of annotation (document, sentence, and token) to save time and avoid spending time on detailed annotation of negative labels. 
Can they examine how many samples are labeled negative and how much annotation time (in percent) and money this reduced?", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["Rt-F8lJuNX8", "w5LfCh6z3Fs", "a6evoVvULlL", "aAyCvNEQb26", "h-x-A_eYtwD", "80PRpplG4DU"], "comment_cdate": [1586893554674, 1586495754831, 1586481203612, 1586381261311, 1586379965627, 1586379384886], "comment_tcdate": [1586893554674, 1586495754831, 1586481203612, 1586381261311, 1586379965627, 1586379384886], "comment_tmdate": [1586893554674, 1586499990709, 1586481203612, 1586381261311, 1586379965627, 1586379384886], "comment_readers": [["everyone"], ["everyone", "AKBC.ws/2020/Conference/Paper90/Authors", "AKBC.ws/2020/Conference/Paper90/Reviewers/Submitted", "AKBC.ws/2020/Conference/Paper90/Area_Chairs", "AKBC.ws/2020/Conference/Program_Chairs"], ["everyone"], ["everyone", "AKBC.ws/2020/Conference/Paper90/Authors", "AKBC.ws/2020/Conference/Paper90/Reviewers/Submitted", "AKBC.ws/2020/Conference/Paper90/Area_Chairs", "AKBC.ws/2020/Conference/Program_Chairs"], ["everyone", "AKBC.ws/2020/Conference/Paper90/Authors", "AKBC.ws/2020/Conference/Paper90/Reviewers/Submitted", "AKBC.ws/2020/Conference/Paper90/Area_Chairs", "AKBC.ws/2020/Conference/Program_Chairs"], ["everyone", "AKBC.ws/2020/Conference/Paper90/Authors", "AKBC.ws/2020/Conference/Paper90/Reviewers/Submitted", "AKBC.ws/2020/Conference/Paper90/Area_Chairs", "AKBC.ws/2020/Conference/Program_Chairs"]], "comment_writers": [["AKBC.ws/2020/Conference/Paper90/AnonReviewer2", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper90/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper90/AnonReviewer2", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper90/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper90/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper90/Authors", "AKBC.ws/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Updated review", "comment": "Thanks for the detailed response! I've revised my review above."}, {"title": "Details to Q1 and Q5", "comment": "Thanks for your evaluation. Please find extended versions of Answers 1 and 5 below.\n\nAnswer 1: \n -- The difference in the event definitions can be stated as follows: \"Moreover, the event definition of ACE and TAC-KBP does not capture contentious politics (CP) events. For instance, the ACE definition of the event type DEMONSTRATE, in itself, is too restrictive to be applicable in terms of a broad understanding of CP for two reasons. First, as it seems to limit the scope of this event type to spontaneous (that is, unorganized) gatherings of people, it excludes certain actions of political and/or grassroots organizations such as political parties and NGOs. Protest actions of such organizations sometimes do not involve mass participation despite aiming at challenging authorities, raising their political agendas or issuing certain demands. Putting up posters, distributing brochures, and holding press declarations in public spaces are examples of such protest events. 
Secondly, the requirement of mass participation in a public area leaves out many protest actions such as on-line mass petitions and boycotts, which are not necessarily tied to specific locations where people actually gather, as well as actions of individuals or small groups such as hunger strikes and self-immolation.\"\n-- The results are as follows, quoting from the new version of the paper: \"Finally, we run an event extraction model, again a BERT-base model trained on ACE event extraction data, on the same test data. We measured the trigger detection performance of this model based on its CONFLICT category predictions. The F1 scores of the CONFLICT type are .543 and .479 on its own data and on our new data, respectively.\"\n\nAnswer 5: \nThe performance of a token extractor based on BERT-base for the information types is 0.722 for Event Trigger, 0.683 for Time, 0.683 for Place, 0.436 for Facility, 0.604 for Participant, 0.593 for Organizer, and 0.491 for Target in terms of F1. More details about this result are now available in the newly added Table 4. Additionally, we fine-tuned the Flair NER model, which is trained on CoNLL 2003 NER data, on our data by mapping our place, participant, and organizer tags to LOC, PER, and ORG in the CoNLL data, respectively. This model yielded significantly better results than the BERT-base model, which are .780, .697, and .652 for the place, participant, and organizer types, respectively.\n\n"}, {"title": "Seeking more specifics about responses to Q1 and Q5", "comment": "Thanks for the response! If possible, could you expand on the responses you've provided above? It's one matter to say you've updated the paper (thanks for doing that), but it'd be really valuable to actually know the content of that update.\n\nFor example, regarding Q1, \"We have added information about why we do not use ACE in the relevant work section. Moreover, we added information about an experiment in which we tested a model trained on ACE data on our corpus. The mismatch between the event definitions causes the ACE-based model to yield significantly lower results on our data.\"\n\n--> I'd like to know what this reason is. Can you include it here and/or post a shortened version?\n--> What are these results? Can you give more specifics?\n\nFor Q5, what are these evaluation results? Can you be more specific?\n\nResponses to Q2-4, 6-7 are great, thanks."}, {"title": "Update to the paper and comments on the review", "comment": "We appreciate the time you spent reading our paper and your comments that helped us to improve the paper. We have updated the paper in light of your comments. \n\nQuestion 1: A key contribution of the paper is that the initial candidate document retrieval is not based purely on keyword matching, but rather uses a random sampling and active learning-based approach to find relevant documents. This is motivated by the incompleteness of dictionaries for protest events. While this might be true, it would have been good to see an evaluation of this assumption with the current data. \n\nAnswer 1: We added the information you requested. The relevant fragment in the new version of the paper is: \"Moreover, lexical variance across contexts cannot always be captured using key terms. For instance, the terms \u201cbandh\u201d and \u201cidol immersion\u201d are event types that are specific to India and not covered by any general-purpose protest key terms list. Our evaluation of four key term lists, which are reported by Huang et al. [2016], Wang et al. 
[2016], Weidmann and Rod [2019], and Makarov et al. [2016], yielded at best .68 precision and .80 recall on our randomly sampled batches.\"\n\nQuestion 2: It is a bit unclear in the paper, but were the K and AL methods run over the same dataset? What are the datasets for which the document relevance precision & recall are reported on page 8?\n\nAnswer 2: The K method was evaluated on our random samples. We performed a specific experiment to evaluate the AL method. We exploited our data to date as training data to create various ML-based classifiers, applied the filtering on a random sample using these models, and annotated all included and 200 excluded documents to calculate these scores.\n\nQuestion 3: I would also like to see a more detailed comparison with more general-purpose event extraction methods. Is there a reason why methodologies such as [1] and [2] cannot be re-applied for protest event extraction?\n\nAnswer 3: We have added details about why we do not use ACE, because ACE has comparable event types that are more relevant to our work than TimeML. Moreover, we started our work before [2] was published. We will look at this and check whether the methodology proposed in [2] could improve our corpus.\n\nQuestion 4: A small formatting issue: the sub-sections on page 8 need newline breaks in between.\n\nAnswer 4: If you mean the error correction methods, we will correct these in the camera-ready version. We are very sorry; we have only just understood that this may be what you intended with your comment. Updating the paper again may harm the consistency of our answers to the other reviewers' questions. Please let us know if you think we misunderstood you."}, {"title": "Update to the paper and comments on the review ", "comment": "We appreciate the time you spent reading our paper and your comments that helped us to improve the paper. We have updated the paper in light of your comments. We corrected the language errors, removed repetitions, and fit the paper within the page limit. More specifically, in response to your comments:\n\nQuestion 1: Although they have provided the baseline results on the document- and sentence-level classifications, they have not provided results for the token-level task. It would have been interesting to see if those results are also promising.\n\nAnswer 1: We added the results of the BERT-base model on the token-level task. They are in Table 4 now.\n\nQuestion 2: The authors mention that they use three levels of annotation (document, sentence, and token) to save time and avoid spending time on detailed annotation of negative labels. Can they examine how many samples are labeled negative and how much annotation time (in percent) and money this reduced?\n\nAnswer 2: We described the gain we obtained using these three levels as follows in the new version of the paper: \"The aim here is to maximize time and resource efficiency and performance by utilizing the feedback of each level of annotation for the whole process. The lack of clear boundaries between these levels at the beginning of the annotation project had caused a relatively lower IAA and more time to be spent on the quality check and correction of the dataset. For these same reasons, we add a new step, namely the sentence level, to the aforementioned main steps of protest event pipelines.\""}, {"title": "Update to the paper and comments on the review", "comment": "We appreciate the time you spent reading our paper and your comments that helped us to improve the paper. 
We have updated the paper in light of your comments. More specifically, in response to your comments:\n\nQuestion 1: If my focus is to work on protest event extraction, what am I gaining by using this corpus vs existing event-annotated corpora (e.g. ACE) that aren\u2019t necessarily specific to protest events? I\u2019d like to see experiments of models run on ACE evaluated against this corpus & an analysis to see where the mistakes are coming from, and whether these mistakes are made by those models when trained on this new corpus.\n\nAnswer 1: We have added information about why we do not use ACE in the relevant work section. Moreover, we added information about an experiment in which we tested a model trained on ACE data on our corpus. The mismatch between the event definitions causes the ACE-based model to yield significantly lower results on our data.\n\nQuestion 2: For the token-level annotations, how did you represent multi-token spans annotated with the same label? For example, in \u201cstone-pelting\u201d, did you indicate \u201cstone\u201d, \u201c-\u201d, and \u201cpelting\u201d tokens with their own labels or did you somehow additionally indicate that \u201cstone-pelting\u201d is one cohesive unit?\n\nAnswer 2: We have added footnote 6, which provides details of our setting. As a short answer: \"stone-pelting\" is treated as a single unit (token).\n\nQuestion 3: Can you split the 3 annotation instruction sections into subsections w/ headings for easier navigation?\n\nAnswer 3: We have formatted the names of the annotation instructions in boldface at the beginning of each related paragraph. We believe this change made the section more readable than before.\n\nQuestion 4: It says your classifier restricts to the first 256 tokens in the document. But your classifier is modified to a maximum of 128 tokens. Can you explain this?\n\nAnswer 4: The document classifier and token extractor models use a token length of 512, which is the default value for the BERT-base model. The 256 remained there from a previous experiment. We removed this part from the paper. The sentence classification exploits only 128 tokens, since each sentence in a document is predicted separately. The length 128 was sufficient for the sentence level.\n\nQuestion 5: Why is the token extraction evaluation only for the trigger?\nAnswer 5: We added the evaluation results for the other information types.\n\nQuestion 6: Regarding the statement around \u201cThese numbers illustrate that the assumption of a news article contain a single event is mistaken\u201d. It was mentioned earlier that this assumption is being made. Can you be more clear about which datasets make this assumption? \n\nAnswer 6: We added a reference, which is Tanev et al. (2008), about the Europe Media Monitor. This project uses only the first sentence of a news article and does not search for any other event in the rest of the article. We have personal contact with this team, and they advised us not to search for more than one event in a document. Although we have shared our results with them, they think using many (thousands of) sources would compensate for the information they lose in a document. However, if we want to have control over sources, we think we should use fewer sources and benefit from each source as much as possible.\n\nQuestion 7: Can you also explain how your limit to 128 (or 256?) tokens does/doesn\u2019t make sense given multiple events occur per article?\nAnswer 7: The limit is only for the sentence classification. We classify each sentence separately. 
Therefore, the length limit does not restrict us. Each sentence is assumed to contain at least one event at this level. We are currently working on event coreference resolution to link the sentences that are predicted as containing events. Our annotations contain this information, and we are working on reflecting this in our pipeline."}], "comment_replyto": ["w5LfCh6z3Fs", "a6evoVvULlL", "80PRpplG4DU", "2aiAqA9nok_", "uR2OvU3Q0RR", "lLeGs0eaos6"], "comment_url": ["https://openreview.net/forum?id=7NZkNhLCjp&noteId=Rt-F8lJuNX8", "https://openreview.net/forum?id=7NZkNhLCjp&noteId=w5LfCh6z3Fs", "https://openreview.net/forum?id=7NZkNhLCjp&noteId=a6evoVvULlL", "https://openreview.net/forum?id=7NZkNhLCjp&noteId=aAyCvNEQb26", "https://openreview.net/forum?id=7NZkNhLCjp&noteId=h-x-A_eYtwD", "https://openreview.net/forum?id=7NZkNhLCjp&noteId=80PRpplG4DU"], "meta_review_cdate": 1588298634150, "meta_review_tcdate": 1588298634150, "meta_review_tmdate": 1588341535413, "meta_review_ddate": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "The paper presents a corpus of 10K news articles about protest events, with document-level labels, sentence-level labels, and token-level labels. Coarse-grained labels are Protest/Not, and fine-grained labels are things such as triggers/places/times/people/etc. \n\nAll reviewers agree that this paper is interesting and the contributed resource will be useful for the community; hence we propose acceptance. There were some concerns, which the authors fully addressed in their response by updating their paper. We recommend that the authors take the remaining suggestions into account when preparing the final version.", "meta_review_readers": ["everyone"], "meta_review_writers": ["AKBC.ws/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=7NZkNhLCjp&noteId=czAMDA7JgOD"], "decision": "Accept"}