AMSR / conferences_raw / iclr20 / ICLR.cc_2020_Conference_B1gUn24tPr.json
{"forum": "B1gUn24tPr", "submission_url": "https://openreview.net/forum?id=B1gUn24tPr", "submission_content": {"title": "Classification Attention for Chinese NER", "authors": ["Yuchen Ge", "FanYang", "PeiYang"], "authorids": ["geyc2@lenovo.com", "yangfan24@lenovo.com", "yangpei4@lenovo.com"], "keywords": ["Chinese NER", "NER", "tagging", "deeplearning", "nlp"], "TL;DR": "Classification Attention for Chinese NER", "abstract": "The character-based model, such as BERT, has achieved remarkable success in Chinese named entity recognition (NER). However, such model would likely miss the overall information of the entity words. In this paper, we propose to combine priori entity information with BERT. Instead of relying on additional lexicons or pre-trained word embeddings, our model has generated entity classification embeddings directly on the pre-trained BERT, having the merit of increasing model practicability and avoiding OOV problem. Experiments show that our model has achieved state-of-the-art results on 3 Chinese NER datasets.", "pdf": "/pdf/c4df8bbf5593032c160b6e31ed084b4a73f3daba.pdf", "paperhash": "ge|classification_attention_for_chinese_ner", "original_pdf": "/attachment/c4df8bbf5593032c160b6e31ed084b4a73f3daba.pdf", "_bibtex": "@misc{\nge2020classification,\ntitle={Classification Attention for Chinese {\\{}NER{\\}}},\nauthor={Yuchen Ge and FanYang and PeiYang},\nyear={2020},\nurl={https://openreview.net/forum?id=B1gUn24tPr}\n}"}, "submission_cdate": 1569438894416, "submission_tcdate": 1569438894416, "submission_tmdate": 1577168228779, "submission_ddate": null, "review_id": ["ryejz8JhFr", "ByeUmO-R5H", "rylcHyzA9r"], "review_url": ["https://openreview.net/forum?id=B1gUn24tPr&noteId=ryejz8JhFr", "https://openreview.net/forum?id=B1gUn24tPr&noteId=ByeUmO-R5H", "https://openreview.net/forum?id=B1gUn24tPr&noteId=rylcHyzA9r"], "review_cdate": [1571710483133, 1572898846390, 1572900674426], "review_tcdate": [1571710483133, 1572898846390, 1572900674426], "review_tmdate": [1572972627207, 1572972627165, 1572972627123], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2020/Conference/Paper191/AnonReviewer2"], ["ICLR.cc/2020/Conference/Paper191/AnonReviewer3"], ["ICLR.cc/2020/Conference/Paper191/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1gUn24tPr", "B1gUn24tPr", "B1gUn24tPr"], "review_content": [{"experience_assessment": "I have read many papers in this area.", "rating": "3: Weak Reject", "review_assessment:_thoroughness_in_paper_reading": "I read the paper thoroughly.", "review_assessment:_checking_correctness_of_experiments": "I carefully checked the experiments.", "title": "Official Blind Review #2", "review_assessment:_checking_correctness_of_derivations_and_theory": "I carefully checked the derivations and theory.", "review": "Summary:\n This paper discussed an approach to do named entity resolution (NER, the paper focuses only on Chinese NER but I think it could generalize to other languages as well). The idea is based on smart integration and extension of multiple existing building blocks: 1) BERT pre-trained model 2) a previous work to get document embedding by doing weighted average of word embedding (https://openreview.net/pdf?id=SyK00v5xx) and 3) Scaled dot-product attention mechanism applied directly to multi-label classification. The \"Introduction\", \"Related work\", and \"Experiment Settings\" sections are well written and covers many details and decent references. 
Especially, the \"experiments\" section is described in a great amount of details, which should be very helpful for reproducibility. \n \nContributions:\n * The author found an interesting application of the original algorithm (https://openreview.net/pdf?id=SyK00v5xx) to represent the entity class embedding based on averaging \"BERT\" embeddings of all the component words. This could be implemented as a pre-processing step against any training dataset to derive \"pre-learned\" entity class embedding.\n * Instead of the common approach of connecting the BERT sequence outputs directly to CRF layer, the author added an intermediate layer to calculate the classification attention between a sentence (sequence of token embedding) and any entity class (based on the above pre-learned entity embedding). This result plus the original sentence embedding are concatenated. The concatenation is further fed into a few additional layers to produce the final inputs into CRF layer.\n\nWeakness:\n * The paper lacks novelty. As pointed above, I did not see that the contribution from the paper is sufficiently original. It is a good application of various existing methods though.\n\nI also have a few suggestions/questions below:\n\n* The ERNIE paper (https://arxiv.org/abs/1907.12412v1) is mentioned in the related work. Since ERNIE can potentially learn a good vocab for Chinese, did you ever compare your approach vs ERNIE+CRF? \n* There is one paper that I know which is pretty relevant to what you are doing here, which is probably worth a reference. https://arxiv.org/abs/1805.04174. In that paper, the idea is to co-learn a class embedding and perform text classification. Their class attention is performed through dot-production attention though.\n* The Table index seems wrong in your paper. (I think Table 2 is not mentioned in your paper, but all tables (3-6) is offset by 1). \n* There are some minor typos or places that need some clarifications.\n - in the abstract: \"character-based\" model. This is a little confusing. Because BERT is a word-piece based model. word-piece could across multiple characters for English. IIUC, You probably want to say \"Chinese-character\" instead of character.\n - in \"Introduction\", \"providing greater weight to characters identical to each entity class\", you might want to revise this sentence to clarify its meaning further.\n - In section 3.2, you might want to give some explanation to some notations (the first time you refer to it). For example, what is $L$, what is $m$ and $n$. What is $S$? Also why the denominator of Emb(Word) is not $n-m+1$? \n\n - The last paragraph in section 3.3 needs more clarification as well. How do you merge the three tensors after attention stage? (a concatenation ?) . The last sentence mentioned \"residential\", I guess instead you want to say \"residual\". You might also want to clarify where the \"3 layers\" of residual appear in your network.\n\n - In your experiment, (if I did not miss), did you freeze the BERT parameters and entity embeddings when finetuning your NER model?\n\n - in Table 2 and Table 3, why the 13-layer BERT + CRF performs significantly worse on Recall (Table 2) and significantly better on Recall (Table 3)? 
\n\n\n "}, {"rating": "3: Weak Reject", "experience_assessment": "I do not know much about this area.", "review_assessment:_checking_correctness_of_derivations_and_theory": "I did not assess the derivations or theory.", "review_assessment:_checking_correctness_of_experiments": "I assessed the sensibility of the experiments.", "title": "Official Blind Review #3", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "review": "This paper tries to improve the performance of Chinese NER by developing a novel attention mechanism that leverages BERT pre-trained model which considers bi-directional context. Experiments on a number of tasks show that the proposed approach is effective.\n\nComments:\n[1] A bunch of experiments are conducted\n[2] Chinese NER is a hard problem, but it would be great to see the proposed approach generalizable to other tasks. So, the contribution of this paper is limited\n[3] The proposed algorithm is simple and effective, but the novelty is a bit low\n"}, {"rating": "3: Weak Reject", "experience_assessment": "I have read many papers in this area.", "review_assessment:_checking_correctness_of_derivations_and_theory": "N/A", "review_assessment:_checking_correctness_of_experiments": "I assessed the sensibility of the experiments.", "title": "Official Blind Review #1", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "review": "Comments by sections : \n\nSummary : \n\nThe use of entity is not clear. Are you referring to named-entities ?\n\n1 INTRODUCTION\n\n\"Due to the differences in language structure\" : this paragraph is not clear. It should say explicitly that in Chinese word can be composed of one or a couple of characters and that word boundaries can not detected graphically. \n\n \"A common mitigation is to add external lexicon as a reference\" references are needed on how to includes words in embedding (except just training word embeddings)\n \n \" an entity classification-assisted model\" : not very clear : the classification is assisted ?\n \n \" The first step is to form word embeddings of entities appearing in the trainning sentences through character embeddings and the second step is to aggregate entity embeddings by category and generate classification embeddings\" : is it a multi-task training ?\n \n \"After that, we designed a novel Attention mechanism to integrate entity \" : is it the same model or two different propositions of the paper ? \"After that\" is not very clear as a transition.\n \n Section 2 :\n \n \"more works Yang et al. (2016) Ruder12 et al. (2017) \" : strange formulation\n \n \"What\u2019s more, the attention mechanism...\" : odd expression.\n \n \"In this paper, we revise the Scaled Dot-Product Attention to Classification Attention which would give a weighted representation of the input sentences through a series of entity classes.\" : maybe it should be moved to the introduction as a novelty proposed by the paper.\n \n \n 3.2 EMBEDDING EXTRACTION FOR ENTITY CLASS\n \n \" the smooth inverse frequency is abandoned\" : why ? please explain this choice.\n \"the weighted projection of the word embeddings on their first singular vector is removed.\" : explain. 
If it is a common practice, give a citation; otherwise, justify this choice.\n \n 3.3 CLASSIFICATION ATTENTION \n \n Here again, the proposed attention system is described but not justified: why would a class-specific attention system be better? What are the expected advantages?\n \n 4.1.3 EXPERIMENTAL RESULTS\n Experiments are conducted on 4 datasets and the proposed model is compared to a \"standard\" BERT-based model and several results from the literature. The proposed model sometimes outperforms the other models, often by a small margin, as is usually the case in NER experiments. \n But more insight into the strengths of the model should be given by conducting an ablation study. \n \n \n In conclusion, this paper presents an incremental improvement over BERT-based NER for Chinese. The proposed approach is not sufficiently justified, and the experiments, even if showing improvements over state-of-the-art models or published results, do not sufficiently explore the benefits of the proposed model (with an ablation study, for example). "}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": 1576798689817, "meta_review_tcdate": 1576798689817, "meta_review_tmdate": 1576800945337, "meta_review_ddate ": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "The paper is interested in Chinese Named Entity Recognition, building on a BERT pre-trained model. All reviewers agree that the contribution has limited novelty. Motivation leading to the chosen architecture is also missing. In addition, the writing of the paper should be improved.\n", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1gUn24tPr&noteId=OVEpOfB95M"], "decision": "Reject"}
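Illustrative sketch of the mechanism described in the reviews above, not the authors' implementation: (1) an entity-class embedding formed by plain averaging of BERT character embeddings of the training entities in each class (the reviews note the SIF weighting and singular-vector removal of the cited SyK00v5xx method are dropped), and (2) a scaled dot-product "classification attention" between sentence tokens and those class embeddings, whose output is concatenated with the token embeddings before a CRF tagger. All shapes, class counts, and variable names below are assumptions, and random tensors stand in for real BERT outputs.

import torch
import torch.nn.functional as F

HIDDEN = 768        # assumed BERT hidden size
SEQ_LEN = 32        # assumed number of tokens in one sentence

# (1) One embedding per entity class: plain average of the contextual character
# embeddings of the training entities in that class (random stand-ins here).
class_char_embs = [torch.randn(n, HIDDEN) for n in (50, 80, 30, 60)]  # 4 classes
class_emb = torch.stack([e.mean(dim=0) for e in class_char_embs])     # (C, H)

# (2) Classification attention for one sentence: scaled dot products between
# token embeddings (queries) and class embeddings (keys/values).
tokens = torch.randn(SEQ_LEN, HIDDEN)             # stand-in BERT token outputs (T, H)
scores = tokens @ class_emb.T / HIDDEN ** 0.5     # (T, C) scaled dot products
attn = F.softmax(scores, dim=-1)                  # per-token weights over classes
class_context = attn @ class_emb                  # (T, H) weighted class information

# (3) Concatenate with the original token embeddings and project back down,
# standing in for the extra layers the paper feeds into its CRF layer.
proj = torch.nn.Linear(2 * HIDDEN, HIDDEN)
fused = proj(torch.cat([tokens, class_context], dim=-1))   # (T, H) CRF inputs
print(fused.shape)                                          # torch.Size([32, 768])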