Columns: paper, paper_id, table_caption, table_column_names, table_content_values, text
Reciprocal Attention Fusion for Visual Question Answering
1805.04247
Table 2: Comparison of the state-of-the-art methods with our single model performance on VQAv2.0 test-dev and test-standard server.
['Methods', 'Test-dev Y/N', 'Test-dev No.', 'Test-dev Other', 'Test-dev All', 'Test-standard Y/N', 'Test-standard No.', 'Test-standard Other', 'Test-standard All']
[['RAF (Ours)', '[BOLD] 84.1', '[BOLD] 44.9', '[BOLD] 57.8', '[BOLD] 67.2', '[BOLD] 84.2', '[BOLD] 44.4', '[BOLD] 58.0', '[BOLD] 67.4'], ['BU, adaptive K Teney et\xa0al. ( 2017 )', '81.8', '44.2', '56.1', '65.3', '82.2', '43.9', '56.3', '65.7'], ['MFB Yu et\xa0al. ( 2018 )', '-', '-', '-', '64.9', '-', '-', '-', '-'], ['ResonNetIlievski and Feng ( 2017 )', '-', '-', '-', '-', '78.9', '42.0', '57.4', '64.6'], ['MUTANBen-Younes et\xa0al. ( 2017 )', '80.7', '39.4', '53.7', '63.2', '80.9', '38.6', '54.0', '63.5'], ['MCB Fukui et\xa0al. ( 2016 ); Goyal et\xa0al. ( 2016 )', '-', '-', '-', '-', '77.4', '36.7', '51.2', '59.1'], ['HieCoAtt Lu et\xa0al. ( 2016 ); Goyal et\xa0al. ( 2016 )', '-', '-', '-', '-', '71.8', '36.5', '46.3', '54.6'], ['Language onlyGoyal et\xa0al. ( 2016 )', '-', '-', '-', '-', '67.1', '31.6', '27.4', '44.3'], ['Common answerGoyal et\xa0al. ( 2016 )', '-', '-', '-', '-', '61.2', '0.4', '1.8', '26.0']]
in all question categories and overall by a significant margin of 1.7%. The bottom-up, adaptive-k model of Teney et al. currently reports the best performance among prior methods on the VQAv2 test-standard dataset. This indicates our model's superior capability to interpret and incorporate multi-modal relationships for visual reasoning.
Sequential Attention-based Network for Noetic End-to-End Response Selection
1901.02609
Table 4: Ablation analysis on the development set for the DSTC7 Ubuntu dataset.
['[BOLD] Sub', '[BOLD] Models', '[BOLD] R@1', '[BOLD] R@10', '[BOLD] R@50', '[BOLD] MRR']
[['1', 'ESIM', '0.534', '0.854', '0.985', '0.6401'], ['1', '-CtxDec', '0.508', '0.845', '0.982', '0.6210'], ['1', '-CtxDec & -Rev', '0.504', '0.840', '0.982', '0.6174'], ['1', 'Ensemble', '0.573', '0.887', '0.989', '0.6790'], ['2', 'Sent-based', '0.021', '0.082', '0.159', '0.0416'], ['2', 'Ensemble1', '0.023', '0.091', '0.168', '0.0475'], ['2', 'ESIM', '0.043', '0.125', '0.191', '0.0713'], ['2', '-CtxDec', '0.034', '0.117', '0.191', '0.0620'], ['2', 'Ensemble2', '0.048', '0.134', '0.194', '0.0770'], ['4', 'ESIM', '0.515', '0.887', '0.988', '0.6434'], ['4', '-CtxDec', '0.492', '0.877', '0.987', '0.6277'], ['4', '-CtxDec & -Rev', '0.490', '0.875', '0.986', '0.6212'], ['4', 'Ensemble', '0.551', '0.909', '0.992', '0.6771'], ['5', 'ESIM', '0.534', '0.854', '0.985', '0.6401'], ['5', '+W2V', '0.530', '0.858', '0.986', '0.6394'], ['5', 'Ensemble', '0.575', '0.890', '0.989', '0.6817']]
For Ubuntu subtask 1, ESIM achieved 0.854 R@10 and 0.6401 MRR. If we removed the context's local matching and matching composition to accelerate training (“-CtxDec”), R@10 and MRR dropped to 0.845 and 0.6210. Further discarding the last words instead of the preceding words of the context (“-CtxDec & -Rev”) degraded R@10 and MRR to 0.840 and 0.6174. Ensembling the above three models (“Ensemble”) achieved 0.887 R@10 and 0.6790 MRR. Ensembling was performed by averaging the outputs of models trained with different parameter initializations and different structures.
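The ensembling step described above is a plain average of per-candidate scores from independently trained models. Below is a minimal Python sketch of that idea; the `DummyModel` class, its `predict_scores` method, and the data layout are illustrative placeholders, not the authors' code.

```python
import numpy as np

class DummyModel:
    """Stand-in for a trained response-selection model (e.g. an ESIM variant)."""
    def __init__(self, seed: int):
        self.rng = np.random.default_rng(seed)
    def predict_scores(self, context, candidates):
        # A real model would score each candidate response against the context.
        return self.rng.random(len(candidates))

def ensemble_scores(models, context, candidates):
    # Average the per-candidate scores of several independently trained models.
    all_scores = np.stack([m.predict_scores(context, candidates) for m in models])
    return all_scores.mean(axis=0)

models = [DummyModel(seed) for seed in (0, 1, 2)]   # e.g. ESIM, -CtxDec, -CtxDec & -Rev
candidates = [f"response_{i}" for i in range(100)]
scores = ensemble_scores(models, "dialogue context ...", candidates)
ranking = np.argsort(-scores)                       # used downstream to compute R@k and MRR
print(candidates[ranking[0]])
```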
Sequential Attention-based Network for Noetic End-to-End Response Selection
1901.02609
Table 5: Ablation analysis on the development set for the DSTC7 Advising dataset.
['[BOLD] Sub', '[BOLD] Models', '[BOLD] R@1', '[BOLD] R@10', '[BOLD] R@50', '[BOLD] MRR']
[['1', '-CtxDec', '0.222', '0.656', '0.954', '0.3572'], ['1', '-CtxDec & -Rev', '0.214', '0.658', '0.942', '0.3518'], ['1', 'Ensemble', '0.252', '0.720', '0.960', '0.4010'], ['3', '-CtxDec', '0.320', '0.792', '0.978', '0.4704'], ['3', '-CtxDec & -Rev', '0.310', '0.788', '0.978', '0.4550'], ['3', 'Ensemble', '0.332', '0.818', '0.984', '0.4848'], ['4', '-CtxDec', '0.248', '0.706', '0.970', '0.3955'], ['4', '-CtxDec & -Rev', '0.226', '0.714', '0.946', '0.3872'], ['4', 'Ensemble', '0.246', '0.760', '0.970', '0.4110']]
For Ubuntu subtask 1, ESIM achieved 0.854 R@10 and 0.6401 MRR. If we removed the context's local matching and matching composition to accelerate training (“-CtxDec”), R@10 and MRR dropped to 0.845 and 0.6210. Further discarding the last words instead of the preceding words of the context (“-CtxDec & -Rev”) degraded R@10 and MRR to 0.840 and 0.6174. Ensembling the above three models (“Ensemble”) achieved 0.887 R@10 and 0.6790 MRR. Ensembling was performed by averaging the outputs of models trained with different parameter initializations and different structures. For the Advising dataset we used ESIM without the context's local matching and matching composition for computational efficiency, and we observed similar trends to those on the Ubuntu dataset: “-CtxDec & -Rev” degraded R@10 and MRR relative to “-CtxDec”, yet the ensemble of the two models always produced significant gains over the individual models.
Recursive Graphical Neural Networks for Text Classification
1909.08166
Table 5: Ablation study on R52 and Reuters21578. It is clear that LSTM plays an important role by alleviating over-smoothing problem, especially in multi-label classification, which is more prone to over-smoothing.
['Model', 'R52', 'Reuters21578']
[['w/o LSTM', '84.74', '43.82'], ['w/o Attention', '94.39', '81.31'], ['w/o Global node', '93.85', '76.81'], ['Proposal', '95.29', '82.01']]
From the results we can see that removing any of the three parts of our proposed model leads to a decline in accuracy. Among the three parts, the accuracy of the model without LSTM decreases most significantly. We assume that this is because the over-smoothing problem becomes very severe when the number of layers is relatively large. Furthermore, compared with multi-label classification, smoothing is more acceptable for single-label classification: if the representations of all the nodes grow to be related to the correct label, performance is not hurt. However, over-smoothing is more harmful for the multi-label problem, because multi-label classification requires the representations of different parts, related to different labels, to remain distinguishable.
Semantic Sentence Matching with Densely-connectedRecurrent and Co-attentive Information
1805.11360
(b) SelQA
['[BOLD] Models', '[BOLD] MAP', '[BOLD] MRR']
[['CNN-DAN ', '0.866', '0.873'], ['CNN-hinge ', '0.876', '0.881'], ['ACNN ', '0.874', '0.880'], ['AdaQA ', '0.891', '0.898'], ['[BOLD] DRCN', '[BOLD] 0.925', '[BOLD] 0.930']]
However, the proposed DRCN, which uses collective attention over multiple layers, achieves new state-of-the-art performance, significantly exceeding the previous best results on both datasets.
Semantic Sentence Matching with Densely-connectedRecurrent and Co-attentive Information
1805.11360
Table 3: Classification accuracy for natural language inference on MultiNLI test set. * denotes ensemble methods.
['[BOLD] Models', '[BOLD] Accuracy (%) [BOLD] matched', '[BOLD] Accuracy (%) [BOLD] mismatched']
[['ESIM ', '72.3', '72.1'], ['DIIN ', '78.8', '77.8'], ['CAFE ', '78.7', '77.9'], ['LM-Transformer ', '[BOLD] 82.1', '[BOLD] 81.4'], ['[BOLD] DRCN', '79.1', '78.4'], ['DIIN* ', '80.0', '78.7'], ['CAFE* ', '80.2', '79.0'], ['[BOLD] DRCN*', '[BOLD] 80.6', '[BOLD] 79.5'], ['[BOLD] DRCN+ELMo*', '[BOLD] 82.3', '[BOLD] 81.4']]
Our plain DRCN shows competitive performance without any contextualized knowledge. Moreover, by combining DRCN with ELMo, one of the contextualized embeddings from language models, our model outperforms the LM-Transformer, which has 85M parameters, while using fewer parameters (61M). From this point of view, combining our model with contextualized knowledge is a good option for enhancing performance.
Semantic Sentence Matching with Densely-connectedRecurrent and Co-attentive Information
1805.11360
(a) TrecQA: raw and clean
['[BOLD] Models', '[BOLD] MAP', '[BOLD] MRR']
[['[ITALIC] [BOLD] Raw version', '[ITALIC] [BOLD] Raw version', '[ITALIC] [BOLD] Raw version'], ['aNMM ', '0.750', '0.811'], ['PWIM ', '0.758', '0.822'], ['MP CNN ', '0.762', '0.830'], ['HyperQA ', '0.770', '0.825'], ['PR+CNN ', '0.780', '0.834'], ['[BOLD] DRCN', '[BOLD] 0.804', '[BOLD] 0.862'], ['[ITALIC] [BOLD] clean version', '[ITALIC] [BOLD] clean version', '[ITALIC] [BOLD] clean version'], ['HyperQA ', '0.801', '0.877'], ['PR+CNN ', '0.801', '0.877'], ['BiMPM ', '0.802', '0.875'], ['Comp.-Aggr. ', '0.821', '0.899'], ['IWAN ', '0.822', '0.889'], ['[BOLD] DRCN', '[BOLD] 0.830', '[BOLD] 0.908']]
However, the proposed DRCN, which uses collective attention over multiple layers, achieves new state-of-the-art performance, significantly exceeding the previous best results on both datasets.
Semantic Sentence Matching with Densely-connectedRecurrent and Co-attentive Information
1805.11360
Table 6: Accuracy (%) of Linguistic correctness on MultiNLI dev sets.
['[BOLD] Category', '[BOLD] ESIM', '[BOLD] DIIN', '[BOLD] CAFE', '[BOLD] DRCN']
[['[BOLD] Matched', '[BOLD] Matched', '[BOLD] Matched', '[BOLD] Matched', '[BOLD] Matched'], ['Conditional', '[BOLD] 100', '57', '70', '65'], ['Word overlap', '50', '79', '82', '[BOLD] 89'], ['Negation', '76', '78', '76', '[BOLD] 80'], ['Antonym', '67', '[BOLD] 82', '[BOLD] 82', '[BOLD] 82'], ['Long Sentence', '75', '81', '79', '[BOLD] 83'], ['Tense Difference', '73', '[BOLD] 84', '82', '82'], ['Active/Passive', '88', '93', '[BOLD] 100', '87'], ['Paraphrase', '89', '88', '88', '[BOLD] 92'], ['Quantity/Time', '33', '53', '53', '[BOLD] 73'], ['Coreference', '[BOLD] 83', '77', '80', '80'], ['Quantifier', '69', '74', '75', '[BOLD] 78'], ['Modal', '78', '[BOLD] 84', '81', '81'], ['Belief', '65', '[BOLD] 77', '[BOLD] 77', '76'], ['Mean', '72.8', '77.46', '78.9', '[BOLD] 80.6'], ['Stddev', '16.6', '10.75', '10.2', '[BOLD] 6.7'], ['[BOLD] Mismatched', '[BOLD] Mismatched', '[BOLD] Mismatched', '[BOLD] Mismatched', '[BOLD] Mismatched'], ['Conditional', '60', '69', '85', '[BOLD] 89'], ['Word overlap', '62', '[BOLD] 92', '87', '89'], ['Negation', '71', '77', '[BOLD] 80', '78'], ['Antonym', '58', '[BOLD] 80', '[BOLD] 80', '[BOLD] 80'], ['Long Sentence', '69', '73', '77', '[BOLD] 84'], ['Tense Difference', '79', '78', '[BOLD] 89', '83'], ['Active/Passive', '91', '70', '90', '[BOLD] 100'], ['Paraphrase', '84', '[BOLD] 100', '95', '90'], ['Quantity/Time', '54', '69', '62', '[BOLD] 80'], ['Coreference', '75', '79', '83', '[BOLD] 87'], ['Quantifier', '72', '78', '80', '[BOLD] 82'], ['Modal', '76', '75', '81', '[BOLD] 87'], ['Belief', '67', '81', '83', '[BOLD] 85'], ['Mean', '70.6', '78.53', '82.5', '[BOLD] 85.7'], ['Stddev', '10.2', '8.55', '7.6', '[BOLD] 5.5']]
We used the annotated subset provided with the MultiNLI dataset, in which each sample belongs to one of 13 linguistic categories. In particular, our model performs much better on the Quantity/Time category, which is one of the most difficult categories. Furthermore, our DRCN shows the highest mean and the lowest standard deviation for both the matched and mismatched problems, which indicates that it not only achieves competitive performance but also performs consistently.
Tale of tails using rule augmented sequence labeling for event extraction
1908.07018
Table 1: InDEE-2019 dataset for five languages, namely, Marathi, Hindi, English, Tamil and Bengali. Number of tags or labels for each dataset and their respective train, validation and test split used in the experiments.
['Languages', 'Marathi(Mr) Doc', 'Marathi(Mr) Sen', 'Hindi(Hi) Doc', 'Hindi(Hi) Sen', 'English(En) Doc', 'English(En) Sen', 'Tamil(Ta) Doc', 'Tamil(Ta) Sen', 'Bengali(Bn) Doc', 'Bengali(Bn) Sen']
[['Train', '815', '15920', '678', '13184', '456', '5378', '1085', '15302', '699', '18533'], ['Val', '117', '2125', '150', '2775', '56', '642', '155', '2199', '100', '2621'], ['Test', '233', '4411', '194', '3790', '131', '1649', '311', '4326', '199', '4661'], ['#Labels', '43', '43', '44', '44', '48', '48', '47', '47', '46', '46']]
We focus on the EE task and model it as a sequence labeling problem. We use the following terminology throughout the paper: Event Trigger: the main word that identifies the occurrence of the event mentioned in the document. Event Arguments: the words that define an event, such as place, time, reason, after-effects, participants and casualties. We are interested in extracting event triggers and event arguments from the document. We release a new dataset named InDEE-2019, which consists of tagged event extraction data in the disaster domain covering five languages: Marathi, English, Hindi, Bengali and Tamil. We train our Marathi model on 15.9K sentences and predict 43 labels. We train our model on varying training set sizes, namely 20%, 40%, 60%, 80% and 100%, to assess the impact of rules as the amount of data decreases. We observe that with fewer training instances, our rule-based approach is able to outperform the baseline on 3K (20%) and 6K (40%) training instances. Both macro and micro F1 scores show improvement over the baseline model. As the number of training instances increases, deep learning models are able to learn the large number of parameters that were difficult to learn from fewer training instances.
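To make the sequence-labeling formulation concrete, here is a toy BIO-style tagging of a disaster-domain sentence; the tag names (B-TRIGGER, B-PLACE, B-TIME, B-CASUALTIES) are illustrative stand-ins and not the paper's actual 43-48 label inventory.

```python
# Toy example of event extraction framed as sequence labeling (BIO scheme).
# Tag names below are illustrative; the InDEE-2019 label sets are larger.
tokens = ["Floods", "hit", "Kerala", "on", "Monday", ",", "killing", "12", "people"]
labels = ["B-TRIGGER", "O", "B-PLACE", "O", "B-TIME", "O", "O", "B-CASUALTIES", "I-CASUALTIES"]

for tok, lab in zip(tokens, labels):
    print(f"{tok:10s} {lab}")
```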
Does Multimodality Help Human and Machine for Translation and Image Captioning?
1605.09186
Table 6: BLEU and METEOR scores for human description generation experiments.
['Method', 'BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4', 'METEOR']
[['Image + sentences', '54.30', '35.95', '23.28', '15.06', '39.16'], ['Image only', '51.26', '34.74', '22.63', '15.01', '38.06'], ['Sentence only', '39.37', '23.27', '13.73', '8.40', '32.98'], ['Our system', '60.61', '44.35', '31.65', '21.95', '33.59']]
To evaluate the importance of the different modalities for the image description generation task, we performed an experiment in which we replace the computer algorithm with human participants. The two modalities are the five English description sentences and the image. The output is a single description sentence in German. The experiment asks the participants to perform the following tasks: Given both the image and the English descriptions: 'Describe the image in one sentence in German. You can get help from the English sentences provided.' Given only the image: 'Describe the image in one sentence in German.' Given only one English sentence: 'Translate the English sentence into German.' The experiment was performed by 16 native German speakers proficient in English, aged 23 to 54 (from Austria, Germany and Switzerland; 10 female and 6 male). The experiment is performed on the first 80 sentences of the validation set. Participants performed 10 repetitions for each task, without repeating the same image across tasks. For humans, the English description sentences help to obtain better performance. Removing the image altogether and providing only a single English description sentence results in a significant drop. We were surprised to observe such a drop, as we expected good translations to achieve competitive results. In addition, we provide the results of our submission on the same subset of images; humans clearly obtain better performance on the METEOR metric, but our approach clearly outperforms them on the BLEU metrics. The participants were not trained on the training set before performing the tasks, which could be one of the reasons for the difference. Furthermore, given the lower performance of translating only one of the English description sentences on both metrics, the difference could also be caused by existing biases in the dataset.
Does Multimodality Help Human and Machine for Translation and Image Captioning?
1605.09186
Table 1: Training Data for Task 1.
['Side', 'Vocabulary', 'Words']
[['English', '10211', '377K'], ['German', '15820', '369K']]
This dataset consists of 29K parallel sentences (direct translations of image descriptions from English to German) for training, 1014 for validation and 1000 for the test set. We preprocessed the dataset using the punctuation normalization, tokenization and lowercasing scripts from Moses. This reduces the target vocabulary from 18670 to 15820 unique tokens. During translation generation, the split compounds are stitched back together.
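A minimal sketch of this preprocessing in Python, using the sacremoses package as a stand-in for the original Moses Perl scripts (the authors used the Moses scripts directly; the package choice here is an assumption):

```python
# pip install sacremoses
from sacremoses import MosesPunctNormalizer, MosesTokenizer

normalizer = MosesPunctNormalizer(lang="de")
tokenizer = MosesTokenizer(lang="de")

def preprocess(line: str) -> str:
    # Punctuation normalization -> tokenization -> lowercasing,
    # mirroring the Moses pipeline described above.
    line = normalizer.normalize(line)
    line = tokenizer.tokenize(line, return_str=True)
    return line.lower()

print(preprocess("Ein Mann fährt ein rotes Auto."))
```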
Does Multimodality Help Human and Machine for Translation and Image Captioning?
1605.09186
Table 2: BLEU and METEOR scores on detokenized outputs of baseline and submitted Task 1 systems. The METEOR scores in parenthesis are computed with -norm parameter.
['System Description', 'Validation Set METEOR (norm)', 'Validation Set BLEU', 'Test Set METEOR (norm)', 'Test Set BLEU']
[['Phrase-based Baseline (BL)', '53.71 (58.43)', '35.61', '52.83 (57.37)', '33.45'], ['BL+3Features', '54.29 (58.99)', '36.52', '53.19 (57.76)', '34.31'], ['BL+4Features', '54.40 (59.08)', '36.63', '53.18 (57.76)', '34.28'], ['Monomodal NMT', '51.07 (54.87)', '35.93', '49.20 (53.10)', '32.50'], ['Multimodal NMT', '44.55 (47.97)', '28.06', '45.04 (48.52)', '27.82']]
Overall, we were able to improve test set scores by around 0.4 and 0.8 on METEOR and BLEU respectively over a strong phrase-based baseline using auxiliary features.
Does Multimodality Help Human and Machine for Translation and Image Captioning?
1605.09186
Table 3: BLEU scores for various deep features on the image description generation task using the system of Xu et al. [Xu et al.2015].
['Network', 'BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4']
[['VGG-19', '58.2', '31.4', '18.5', '11.3'], ['ResNet-50', '68.4', '45.2', '30.9', '21.1'], ['ResNet-152', '68.3', '44.9', '30.7', '21.1']]
The results increase over the first blocks but stabilize from Block-4 on. Based on these results, and considering that a higher spatial resolution is preferable, we selected the layer 'res4f_relu' (end of Block-4, after ReLU) for the multimodal MT experiments. We also compared features from different networks on the task of image description generation with the system of Xu et al.; the results for generating English descriptions show that ResNet-50 and ResNet-152 perform comparably and clearly outperform VGG-19. Therefore, given the increase in computational cost, we decided to use ResNet-50 features for our submission.
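For illustration, here is a rough PyTorch/torchvision sketch of extracting convolutional features at the end of the conv4 stage of a ResNet-50 (torchvision's `layer3`, which corresponds to the Caffe-style `res4f` block named above); this is an approximation, not the authors' pipeline.

```python
import torch
import torchvision.models as models

# In torchvision's ResNet-50, layer3 is the conv4_x stage, whose last block
# corresponds to the Caffe-style "res4f" (output taken after the ReLU).
resnet = models.resnet50()  # in practice, ImageNet-pretrained weights would be loaded
block4_extractor = torch.nn.Sequential(
    resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
    resnet.layer1, resnet.layer2, resnet.layer3,
)
block4_extractor.eval()

with torch.no_grad():
    feats = block4_extractor(torch.randn(1, 3, 224, 224))
print(feats.shape)  # torch.Size([1, 1024, 14, 14]): a 14x14 grid of 1024-d features
```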
Does Multimodality Help Human and Machine for Translation and Image Captioning?
1605.09186
Table 5: BLEU and METEOR scores of our NMT based submissions for Task 2.
['System', 'Validation METEOR', 'Validation BLEU', 'Test METEOR', 'Test BLEU']
[['Monomodal', '36.3', '24.0', '35.1', '23.8'], ['Multimodal', '34.4', '19.3', '32.3', '19.2']]
Several explanations could clarify this behavior. First, the architecture may not be well suited for integrating image and text representations; this is possible, as we did not explore all the possibilities for benefiting from both modalities. Another explanation is that the image context contains too much irrelevant information, which the attention mechanism alone cannot discriminate. Answering this would require a deeper analysis of the attention weights.
Exploiting stance hierarchies for cost-sensitivestance detection of Web documents
2007.15121
Table 3. Class distribution for discuss/stance classification.
['[EMPTY]', '[BOLD] Instances', '[BOLD] Neutral', '[BOLD] Stance']
[['[BOLD] Train', '13,427', '8,909', '4,518'], ['[BOLD] Test', '7,064', '4,464', '2,600']]
For the first stage of our pipeline (relevance classification), we merge the classes discuss, agree and disagree into one related class, facilitating a binary unrelated/related classification task. For the second stage (neutral/stance classification), we consider only related documents, merge the classes agree and disagree into one stance class, and consider the rest as neutral. We notice that the neutral documents are about twice as many as the stance documents. For the final stage (agree/disagree classification), we only consider instances of the agree and disagree classes. With respect to stage 1 of our pipeline (relevance classification), precision and recall of the related class (the important class in this stage) are 0.91 and 0.93, respectively. Regarding stage 2 (neutral/stance classification), precision and recall of the important stance class are 0.67 and 0.71, respectively, while those of the neutral class are 0.82 and 0.80, respectively. In general, this task is harder than relevance classification (stage 1), and there is room for improvement for the important stance class. Stage 3 of our pipeline deals with the harder problem of agree/disagree classification. We notice that our classifier performs well on the agree class (P = 0.79, R = 0.75) but poorly on the disagree class (P = 0.40, R = 0.44). We observe that even a dedicated classifier which only considers stance documents struggles to detect many instances of the disagree class. Nevertheless, our method outperforms the existing methods by more than 28% in terms of F1 score. There are two main reasons affecting the performance of the disagree class: i) the classifiers of stages 2 and 3 may filter out many instances belonging to this class, ii)
Exploiting stance hierarchies for cost-sensitivestance detection of Web documents
2007.15121
Table 3. Class distribution for discuss/stance classification.
['[EMPTY]', '[BOLD] Instances', '[BOLD] Unrelated', '[BOLD] Related']
[['[BOLD] Train', '49,972', '36,545', '13,427'], ['[BOLD] Test', '25,413', '18,349', '7,064']]
For the first stage of our pipeline (relevance classification), we merge the classes discuss, agree and disagree into one related class, facilitating a binary unrelated/related classification task. For the second stage (neutral/stance classification), we consider only related documents, merge the classes agree and disagree into one stance class, and consider the rest as neutral. We notice that the neutral documents are about twice as many as the stance documents. For the final stage (agree/disagree classification), we only consider instances of the agree and disagree classes. With respect to stage 1 of our pipeline (relevance classification), precision and recall of the related class (the important class in this stage) are 0.91 and 0.93, respectively. Regarding stage 2 (neutral/stance classification), precision and recall of the important stance class are 0.67 and 0.71, respectively, while those of the neutral class are 0.82 and 0.80, respectively. In general, this task is harder than relevance classification (stage 1), and there is room for improvement for the important stance class. Stage 3 of our pipeline deals with the harder problem of agree/disagree classification. We notice that our classifier performs well on the agree class (P = 0.79, R = 0.75) but poorly on the disagree class (P = 0.40, R = 0.44). We observe that even a dedicated classifier which only considers stance documents struggles to detect many instances of the disagree class. Nevertheless, our method outperforms the existing methods by more than 28% in terms of F1 score. There are two main reasons affecting the performance of the disagree class: i) the classifiers of stages 2 and 3 may filter out many instances belonging to this class, ii)
Exploiting stance hierarchies for cost-sensitivestance detection of Web documents
2007.15121
Table 3. Class distribution for discuss/stance classification.
['[EMPTY]', '[BOLD] Instances', '[BOLD] Agree', '[BOLD] Disagree']
[['[BOLD] Train', '4,518', '3,678', '840'], ['[BOLD] Test', '2,600', '1,903', '697']]
For the first stage of our pipeline (relevance classification), we merge the classes discuss, agree and disagree into one related class, facilitating a binary unrelated/related classification task. For the second stage (neutral/stance classification), we consider only related documents, merge the classes agree and disagree into one stance class, and consider the rest as neutral. We notice that the neutral documents are about twice as many as the stance documents. For the final stage (agree/disagree classification), we only consider instances of the agree and disagree classes. With respect to stage 1 of our pipeline (relevance classification), precision and recall of the related class (the important class in this stage) are 0.91 and 0.93, respectively. Regarding stage 2 (neutral/stance classification), precision and recall of the important stance class are 0.67 and 0.71, respectively, while those of the neutral class are 0.82 and 0.80, respectively. In general, this task is harder than relevance classification (stage 1), and there is room for improvement for the important stance class. Stage 3 of our pipeline deals with the harder problem of agree/disagree classification. We notice that our classifier performs well on the agree class (P = 0.79, R = 0.75) but poorly on the disagree class (P = 0.40, R = 0.44). We observe that even a dedicated classifier which only considers stance documents struggles to detect many instances of the disagree class. Nevertheless, our method outperforms the existing methods by more than 28% in terms of F1 score. There are two main reasons affecting the performance of the disagree class: i) the classifiers of stages 2 and 3 may filter out many instances belonging to this class, ii)
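As a concrete illustration of how the original four-way stance labels map onto the three binary sub-tasks described above, here is a small Python sketch (the label strings and function names are chosen for illustration, not taken from the paper's code):

```python
# Map the original four-way labels onto the three binary stages of the pipeline.
def stage1_label(label: str) -> str:
    # Relevance classification: unrelated vs. related.
    return "unrelated" if label == "unrelated" else "related"

def stage2_label(label: str) -> str:
    # Neutral/stance classification, applied to related documents only.
    assert label in {"discuss", "agree", "disagree"}
    return "neutral" if label == "discuss" else "stance"

def stage3_label(label: str) -> str:
    # Agree/disagree classification, applied to stance documents only.
    assert label in {"agree", "disagree"}
    return label

print(stage1_label("discuss"), stage2_label("discuss"))    # related neutral
print(stage1_label("disagree"), stage2_label("disagree"), stage3_label("disagree"))
```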
Exploiting stance hierarchies for cost-sensitivestance detection of Web documents
2007.15121
Table 5. Document stance classification performance.
['System', 'FNC', 'F1 [ITALIC] m', 'F1Unrel.', 'F1Neutral', 'F1Agree', 'F1Disagr.', 'F1 [ITALIC] mAgr/Dis']
[['Majority vote', '0.39', '0.21', '0.84', '0.00', '0.00', '0.00', '0.00'], ['FNC baseline', '0.75', '0.45', '0.96', '0.69', '0.15', '0.02', '0.09'], ['SOLAT (Baird et al., 2017 )', '0.82', '0.58', '[BOLD] 0.99', '0.76', '[BOLD] 0.54', '0.03', '0.29'], ['Athene (Hanselowski et al., 2017 )', '0.82', '0.60', '[BOLD] 0.99', '[BOLD] 0.78', '0.49', '0.15', '0.32'], ['UCLMR (Riedel et al., 2017 )', '0.82', '0.58', '[BOLD] 0.99', '0.75', '0.48', '0.11', '0.30'], ['CombNSE (Bhatt et al., 2018 )', '[BOLD] 0.83', '0.59', '0.98', '0.77', '0.49', '0.11', '0.30'], ['StackLSTM (Hanselowski et al., 2018 )', '0.82', '0.61', '[BOLD] 0.99', '0.76', '0.50', '0.18', '0.34'], ['LearnedMMD (Zhang et al., 2019 )', '0.79', '0.57', '0.97', '0.73', '0.50', '0.09', '0.29'], ['3-Stage Trad (Masood and Aker, 2018 )', '0.82', '0.59', '0.98', '0.76', '0.52', '0.10', '0.31'], ['L3S', '0.81', '[BOLD] 0.62', '0.97', '0.75', '0.53', '[BOLD] 0.23', '[BOLD] 0.38']]
First, we notice that, considering the problematic FNC-I evaluation measure, CombNSE achieves the highest performance (0.83), followed by SOLAT, Athene, UCLMR, StackLSTM, 3-Stage Trad. (0.82) and our pipeline approach (0.81). However, we see that the ranking of the top performing systems is very different if we consider the more robust macro-averaged F1 measure (F1m). Specifically, L3S now achieves the highest score (0.62), outperforming the best baseline system (stackLSTM) by one percentage point, while CombNSE is now in the fourth position (F1m = 0.59).
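To make the contrast between the two metrics concrete, here is a small sketch computing both a class-weighted FNC-style score and macro-averaged F1 from gold and predicted labels. The FNC weighting used here (0.25 for getting the related/unrelated split right, plus 0.75 for the exact stance of a related document, normalized by the best achievable score) follows the usual description of the challenge metric; consult the official scorer for the authoritative definition.

```python
from sklearn.metrics import f1_score

RELATED = {"agree", "disagree", "discuss"}

def fnc_score(gold, pred):
    # 0.25 for a correct related/unrelated decision, plus 0.75 when the exact
    # stance of a related document is matched; normalized by the best possible score.
    score = 0.0
    for g, p in zip(gold, pred):
        if (g in RELATED) == (p in RELATED):
            score += 0.25
        if g in RELATED and g == p:
            score += 0.75
    best = sum(1.0 if g in RELATED else 0.25 for g in gold)
    return score / best

gold = ["unrelated", "discuss", "agree", "disagree"]
pred = ["unrelated", "discuss", "agree", "agree"]
print(round(fnc_score(gold, pred), 3))                  # ~0.769
print(round(f1_score(gold, pred, average="macro"), 3))  # ~0.667: the disagree miss hurts macro-F1 far more
```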
Exploiting stance hierarchies for cost-sensitivestance detection of Web documents
2007.15121
Table 7. Confusion matrix of our pipeline system.
['[EMPTY]', 'Agree', 'Disagree', 'Neutral', 'Unrelated']
[['Agree', '1,006', '278', '495', '124'], ['Disagree', '237', '160', '171', '129'], ['Neutral', '555', '252', '3,381', '276'], ['Unrelated', '127', '31', '523', '17,668']]
Overall, we note that the neutral and agree classes are frequently confused, which seems intuitive given the very similar nature of these classes: a document which discusses a claim without explicitly taking a stance is likely to agree with it. The results for the disagree class illustrate that stage 1 misclassifies (as unrelated) 18.5% of all disagree instances (129 instances in total), while this percentage is less than 7% for the agree and neutral classes. In stage 2, we see that 171 disagree instances (25.5%) are misclassified as neutral, while this percentage is similar for the agree class (26.2%). Finally, in the last stage, 34% of the disagree instances are misclassified as agree, which demonstrates the difficulty of this task. As explained above, the highly unbalanced data distribution and the limited amount of training data are likely to contribute significantly to this picture.
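A short sketch of the kind of per-class error analysis quoted above, row-normalizing the confusion matrix of Table 7 (the counts are copied from the table; the code itself is illustrative):

```python
import numpy as np

classes = ["agree", "disagree", "neutral", "unrelated"]
# Rows = gold class, columns = predicted class (Table 7).
confusion = np.array([
    [1006, 278, 495, 124],
    [237, 160, 171, 129],
    [555, 252, 3381, 276],
    [127, 31, 523, 17668],
])

row_totals = confusion.sum(axis=1, keepdims=True)
rates = confusion / row_totals  # fraction of each gold class sent to each prediction

for i, gold in enumerate(classes):
    order = np.argsort(-rates[i])  # most frequent predictions first
    print(gold, {classes[j]: round(float(rates[i, j]), 3) for j in order})
# e.g. 129 / 697 = 0.185 of gold "disagree" is predicted "unrelated", as quoted above.
```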
Exploiting stance hierarchies for cost-sensitivestance detection of Web documents
2007.15121
Table 8. Confusion matrix of different stages.
['Stage', 'Gold class', 'Predicted class 1', 'Predicted class 2']
[['Stage 1', 'Unrelated', 'Unrelated: 17,668', 'Related: 681'], ['Stage 1', 'Related', 'Unrelated: 529', 'Related: 6,535'], ['Stage 2', 'Neutral', 'Neutral: 3,575', 'Stance: 889'], ['Stage 2', 'Stance', 'Neutral: 760', 'Stance: 1,840'], ['Stage 3', 'Agree', 'Agree: 1,436', 'Disagree: 467'], ['Stage 3', 'Disagree', 'Agree: 387', 'Disagree: 310']]
We note the increasing difficulty of each stage. Stage 1 misclassifies a small number of related instances as unrelated (less than 8%). Stage 2 misclassifies 29% of the stance instances as neutral. Finally, stage 3 misclassifies the majority of disagree instances (55%) as agree, and around 25% of the agree instances as disagree. It is evident from these results that there is much room for improvement for the last stage of our pipeline.
Data Augmentation with Atomic Templates for Spoken Language Understanding
1908.10770
Table 3: SLU performances on the DSTC3 evaluation set when removing different modules of our method.
['[BOLD] Method', '[BOLD] F1-score']
[['HD + AT(seed abr. + comb.)', '88.6'], ['- dstc2', '86.2'], ['- dstc3_seed', '84.5'], ['- dstc2, - dstc3_seed', '84.3'], ['- sentence generator', '74.0'], ['- atomic templates', '70.5']]
Ablation Study. By removing SLU model pretraining on DSTC2 (“- dstc2”) and fine-tuning on the seed data (“- dstc3_seed”), we see a significant decrease in SLU performance. When we subsequently cast aside the sentence generator (“- sentence generator”, i.e. using the atomic exemplars directly as inputs to the SLU model), the SLU performance decreases by 10.3%. This shows that the sentence generator produces more natural utterances. If we replace the atomic exemplars with the corresponding act-slot-value triples (“- atomic templates”), the SLU performance drops sharply. The reason may be that the atomic templates provide a better description of the corresponding semantic meanings than the surface forms of the triples.
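As an illustration of the difference between feeding raw act-slot-value triples and their atomic exemplars, here is a toy Python sketch; the triples, template strings and dialogue acts are invented examples in the DSTC3 style, not taken from the paper.

```python
# Hypothetical atomic templates mapping an (act, slot) pair to a natural-language
# exemplar with a placeholder for the value.
ATOMIC_TEMPLATES = {
    ("inform", "food"): "I am looking for a place serving {value} food",
    ("inform", "area"): "somewhere in the {value} of town",
    ("request", "phone"): "what is the phone number",
}

def expand(triples):
    """Turn act-slot-value triples into atomic exemplar phrases."""
    phrases = []
    for act, slot, value in triples:
        template = ATOMIC_TEMPLATES.get((act, slot), f"{act} {slot} {value}")
        phrases.append(template.format(value=value))
    return " , ".join(phrases)

# The expanded phrases would be fed to the sentence generator, which rewrites
# them into a single fluent utterance used to augment the SLU training data.
print(expand([("inform", "food", "chinese"), ("inform", "area", "north"), ("request", "phone", "")]))
```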
Data Augmentation with Atomic Templates for Spoken Language Understanding
1908.10770
Table 2: SLU performances of different systems on the DSTC3 evaluation set.
['[BOLD] SLU', '[BOLD] Augmentation', '[BOLD] F1-score']
[['ZS', 'w/o', '68.3'], ['HD', 'w/o', '78.5'], ['HD', 'Naive', '82.9'], ['HD', 'AT (seed abridgement)', '85.5'], ['HD', 'AT (combination)', '87.9'], ['HD', 'AT (seed abr. + comb.)', '[BOLD] 88.6'], ['HD', 'Human Zhu et al. ( 2014 )', '90.4'], ['HD', 'Oracle', '96.9']]
We can see that the hierarchical decoding (HD) model obtains better performance than the zero-shot learning (ZS) method for SLU. The seed data dstc3_seed limits the power of the SLU model, and even the naive augmentation can enhance it. Our data augmentation method with atomic templates (AT) improves the SLU performance dramatically; one reason may be that the generated data has a greater variety of semantic meanings than the naive augmentation. Combination can make up more dialogue acts and shows better results than Seed abridgement, while Seed abridgement provides more realistic dialogue acts; thus, their union gives the best result. The best performance of our method is close to that of the human-designed sentence-level templates of Zhu et al.
Trivial Transfer Learning for Low-Resource Neural Machine Translation
1809.00357
Table 4: Results of child following a parent with swapped direction. “Baseline” is child-only training. “Aligned” is the more natural setup with English appearing on the “correct” side of the parent, the numbers in this column thus correspond to those in Table 2.
['Parent - Child', 'Transfer', 'Baseline', 'Aligned']
[['enFI - ETen', '22.75‡', '21.74', '24.18'], ['FIen - enET', '18.19‡', '17.03', '19.74'], ['enRU - ETen', '23.12‡', '21.74', '23.54'], ['enCS - ETen', '22.80‡', '21.74', 'not run'], ['RUen - enET', '18.16‡', '17.03', '20.09'], ['enET - ETen', '22.04‡', '21.74', '21.74'], ['ETen - enET', '17.46', '17.03', '17.03']]
This interesting result should be studied in more detail. \newcitefirat-cho-bengio:etal:2016 hinted at possible gains even when both languages are distinct from the low-resource languages, but in a multilingual setting. Not surprisingly, the improvements are larger when the common language is aligned. We see gains in both directions, although not always statistically significant ones. Future work should investigate whether this performance boost is possible even for high-resource languages. Similar behavior has been shown in \newciteniu-denkowski-carpuat:2018:WNMT2018, where, in contrast to our work, the data were mixed together and an artificial token indicating the target language was added.
Trivial Transfer Learning for Low-Resource Neural Machine Translation
1809.00357
Table 2: Transfer learning with English reused either in source (encoder) or target (decoder). The column “Transfer” is our method, baselines correspond to training on one of the corpora only. Scores (BLEU) are always for the child language pair and they are comparable only within lines or when the child language pair is the same. “Unrelated” language pairs in bold. Upper part: parent larger, lower part: child larger. (“EN” lowercased just to stand out.)
['Parent - Child', 'Transfer', 'Baselines: Only Child', 'Baselines: Only Parent']
[['enFI - enET', '19.74‡', '17.03', '2.32'], ['FIen - ETen', '24.18‡', '21.74', '2.44'], ['[BOLD] enCS - enET', '20.41‡', '17.03', '1.42'], ['[BOLD] enRU - enET', '20.09‡', '17.03', '0.57'], ['[BOLD] RUen - ETen', '23.54‡', '21.74', '0.80'], ['enCS - enSK', '17.75‡', '16.13', '6.51'], ['CSen - SKen', '22.42‡', '19.19', '11.62'], ['enET - enFI', '20.07‡', '19.50', '1.81'], ['ETen - FIen', '23.95', '24.40', '1.78'], ['enSK - enCS', '22.99', '23.48‡', '6.10'], ['SKen - CSen', '28.20', '29.61‡', '4.16']]
Furthermore, the improvement is not restricted to related languages such as Estonian and Finnish, as shown in previous works. We reach an improvement of 3.38 BLEU for ENET when the parent model was ENCS, compared to an improvement of 2.71 from the ENFI parent. This statistically significant improvement contradicts \newcitedabre2017empirical, who concluded that the more related the languages are, the better transfer learning works. We see it as an indication that the size of the parent training set is more important than the relatedness of the languages. On the other hand, this transfer learning works well only when the parent has more training data than the child. This is another indication that what matters most is the size of the parent corpus relative to the child corpus.
Trivial Transfer Learning for Low-Resource Neural Machine Translation
1809.00357
Table 3: Maximal score reached by ENET child for decreasing sizes of child training data, trained off an ENFI parent (all ENFI data are used and models are trained for 800k steps). The baselines use only the reduced ENET data.
['Child Training Sents', 'Transfer BLEU', 'Baseline BLEU']
[['800k', '19.74', '17.03'], ['400k', '19.04', '14.94'], ['200k', '17.95', '11.96'], ['100k', '17.61', '9.39'], ['50k', '15.95', '5.74'], ['10k', '12.46', '1.95']]
It is common knowledge that gains from transfer learning are more pronounced for smaller child datasets. Our transfer learning (“start with a model for whatever parent pair”) may thus resolve the issue of the applicability of NMT to low-resource languages pointed out by \newcitekoehn-knowles:2017:NMT.
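The recipe itself is simple: train a parent model to convergence, then continue training the same parameters on the child data. A minimal, runnable PyTorch sketch of that idea follows; the tiny network and random tensors are placeholders for the paper's Transformer NMT model and parallel corpora.

```python
import torch

# Stand-in "NMT model": a tiny network; in the paper this is a Transformer
# with a subword vocabulary shared between the parent and child language pairs.
def make_model():
    return torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 16))

def train(model, data, steps=100):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        x, y = data
        loss = torch.nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

parent_data = (torch.randn(64, 16), torch.randn(64, 16))  # stands in for the large enFI corpus
child_data = (torch.randn(8, 16), torch.randn(8, 16))     # stands in for the small enET corpus

# 1) Train the parent to convergence and keep its parameters.
parent = train(make_model(), parent_data)

# 2) "Trivial" transfer: continue training the very same parameters on the child
#    data only, exactly as if training had never stopped.
child = train(parent, child_data, steps=20)
```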
Trivial Transfer Learning for Low-Resource Neural Machine Translation
1809.00357
Table 7: Summary of vocabulary overlaps for the various language sets. All figures in % of the shared vocabulary.
['Languages', 'Unique in a Lang.', 'In All', 'From Parent']
[['ET-EN-FI', '24.4-18.2-26.2', '19.5', '49.4'], ['ET-EN-RU', '29.9-20.7-29.0', '8.9', '41.0'], ['ET-EN-CS', '29.6-17.5-21.2', '20.3', '49.2'], ['AR-RU-ET-EN', '28.6-27.7-21.2-9.1', '4.6', '6.2'], ['ES-FR-ET-EN', '15.7-13.0-24.8-8.8', '18.4', '34.1'], ['ES-RU-ET-EN', '14.7-31.1-21.3-9.3', '6.0', '21.4'], ['FR-RU-ET-EN', '12.3-32.0-22.3-8.1', '6.3', '23.1']]
what portion is shared by all the languages, and what portion of subwords benefits from the parent training. We see a similar picture across the board; only AR-RU-ET-EN stands out, with a very low share of subwords (6.2%) already available in the parent. The AR-RU parent thus offered very little word knowledge to the child and yet led to a gain in BLEU.
Trivial Transfer Learning for Low-Resource Neural Machine Translation
1809.00357
Table 9: Candidate total length, BLEU n-gram precisions and brevity penalty (BP). The reference length in the matching tokenization was 36062.
['[EMPTY]', 'Length', 'BLEU Components', 'BP']
[['Base ENET', '35326', '48.1/21.3/11.3/6.4', '0.979'], ['ENRU+ENET', '35979', '51.0/24.2/13.5/8.0', '0.998'], ['ENCS+ENET', '35921', '51.7/24.6/13.7/8.1', '0.996']]
In the table, we also show the individual n-gram precisions and the brevity penalty (BP) of BLEU. The longer output clearly helps to reduce the incurred BP, but the improvements are also apparent in the n-gram precisions. In other words, the observed gain cannot be attributed solely to producing longer outputs.
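For reference, BLEU's brevity penalty is 1 when the candidate is at least as long as the reference and exp(1 - r/c) otherwise; the BP values in Table 9 can be reproduced from the listed lengths, as in this small sketch:

```python
import math

def brevity_penalty(candidate_len: int, reference_len: int) -> float:
    # Standard BLEU brevity penalty: no penalty when the candidate is at least
    # as long as the reference, exponential penalty otherwise.
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1.0 - reference_len / candidate_len)

reference_len = 36062  # from Table 9
for name, cand_len in [("Base ENET", 35326), ("ENRU+ENET", 35979), ("ENCS+ENET", 35921)]:
    print(name, round(brevity_penalty(cand_len, reference_len), 3))
# Prints 0.979, 0.998, 0.996, matching the BP column of the table.
```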
MAttNet: Modular Attention Network for Referring Expression Comprehension
1801.08186
Table 1: Comparison with state-of-the-art approaches on ground-truth MS COCO regions.
['[EMPTY]', '[EMPTY]', 'feature', 'RefCOCO val', 'RefCOCO testA', 'RefCOCO testB', 'RefCOCO+ val', 'RefCOCO+ testA', 'RefCOCO+ testB', 'RefCOCOg val*', 'RefCOCOg val', 'RefCOCOg test']
[['1', 'Mao\xa0', 'vgg16', '-', '63.15', '64.21', '-', '48.73', '42.13', '62.14', '-', '-'], ['2', 'Varun\xa0', 'vgg16', '76.90', '75.60', '78.00', '-', '-', '-', '-', '-', '68.40'], ['3', 'Luo\xa0', 'vgg16', '-', '74.04', '73.43', '-', '60.26', '55.03', '65.36', '-', '-'], ['4', 'CMN\xa0', 'vgg16-frcn', '-', '-', '-', '-', '-', '-', '69.30', '-', '-'], ['5', 'Speaker/visdif\xa0', 'vgg16', '76.18', '74.39', '77.30', '58.94', '61.29', '56.24', '59.40', '-', '-'], ['6', 'Listener\xa0', 'vgg16', '77.48', '76.58', '78.94', '60.50', '61.39', '58.11', '71.12', '69.93', '69.03'], ['7', '[BOLD] Speaker+Listener+Reinforcer\xa0', 'vgg16', '79.56', '78.95', '80.22', '62.26', '64.60', '59.62', '72.63', '71.65', '71.92'], ['8', 'Speaker+ [BOLD] Listener+Reinforcer\xa0', 'vgg16', '78.36', '77.97', '79.86', '61.33', '63.10', '58.19', '72.02', '71.32', '71.72'], ['9', 'MAttN:subj(+attr)+loc(+dif)+rel', 'vgg16', '80.94', '79.99', '82.30', '63.07', '65.04', '61.77', '73.08', '73.04', '72.79'], ['10', 'MAttN:subj(+attr)+loc(+dif)+rel', 'res101-frcn', '83.54', '82.66', '84.17', '68.34', '69.93', '65.90', '-', '76.63', '75.92'], ['11', 'MAttN:subj(+attr+attn)+loc(+dif)+rel', 'res101-frcn', '[BOLD] 85.65', '[BOLD] 85.26', '[BOLD] 84.57', '[BOLD] 71.01', '[BOLD] 75.13', '[BOLD] 66.17', '-', '[BOLD] 78.10', '[BOLD] 78.12']]
First, we compare our model with previous methods using COCO's ground-truth object bounding boxes as proposals. Results are shown in the table. As all of the previous methods (Lines 1-8) used a 16-layer VGGNet (vgg16) as the feature extractor, we run our experiments using the same features for a fair comparison. Note that the flat fc7 is a single 4096-dimensional feature, which prevents us from using the phrase-guided attentional pooling shown in the figure. Despite this, our results (Line 9) still outperform all previous state-of-the-art methods. After switching to the res101-based Faster R-CNN (res101-frcn) representation, the comprehension accuracy improves by another ∼3% (Line 10). Note that our Faster R-CNN is pre-trained on COCO's training images, excluding those in the RefCOCO, RefCOCO+ and RefCOCOg validation and test sets. Our full model (Line 11) with phrase-guided attentional pooling achieves the highest accuracy, surpassing all others by a large margin.
MAttNet: Modular Attention Network for Referring Expression Comprehension
1801.08186
Table 4: Comparison of segmentation performance on RefCOCO, RefCOCO+, and our results on RefCOCOg.
['RefCOCOg Model', 'RefCOCOg Backbone Net', 'RefCOCOg Split', 'RefCOCOg Pr@0.5', 'RefCOCOg Pr@0.6', 'RefCOCOg Pr@0.7', 'RefCOCOg Pr@0.8', 'RefCOCOg Pr@0.9', 'RefCOCOg IoU']
[['MAttNet', 'res101-mrcn', 'val', '64.48', '61.52', '56.50', '43.97', '14.67', '47.64'], ['MAttNet', 'res101-mrcn', 'test', '65.60', '62.92', '57.31', '44.44', '12.55', '48.61']]
(res101-mrcn). We apply the same procedure as described earlier, then feed the predicted bounding box to the mask branch to obtain a pixel-wise segmentation. We use Precision@X (X∈{0.5,0.6,0.7,0.8,0.9}) and IoU as the evaluation metrics. Results are shown in the table, and some referential segmentation examples are shown in the figure. Note that this is not a strictly fair comparison, as our model was trained with fewer images. Overall, the AP of our implementation is ∼2 points lower; the main reason may be the shorter 600-pixel edge setting and the smaller training batch size. Even so, our pixel-wise comprehension results already outperform the state-of-the-art ones by a huge margin (see the table), and we believe there is space for further improvement.
MAttNet: Modular Attention Network for Referring Expression Comprehension
1801.08186
Table 7: Object detection results.
['net', '[ITALIC] APbb', '[ITALIC] APbb50', '[ITALIC] APbb75']
[['res101-frcn', '34.1', '53.7', '36.8'], ['res101-mrcn', '35.8', '55.3', '38.6']]
We first show the comparison between Faster R-CNN and Mask R-CNN on object detection in the table. Both models are based on ResNet-101 and were trained using the same settings. In the main paper, we denote them as res101-frcn and res101-mrcn, respectively. The results show that Mask R-CNN achieves higher AP than Faster R-CNN due to its multi-task training (with additional mask supervision).
MAttNet: Modular Attention Network for Referring Expression Comprehension
1801.08186
Table 8: Instance segmentation results.
['net', '[ITALIC] AP', '[ITALIC] AP50', '[ITALIC] AP75']
[['res101-mrcn (ours)', '30.7', '52.3', '32.4'], ['res101-mrcn\xa0', '32.7', '54.2', '34.0']]
Note that this is not a strictly fair comparison, as our model was trained with fewer images. Overall, the AP of our implementation is ∼2 points lower; the main reason may be the shorter 600-pixel edge setting and the smaller training batch size. Even so, our pixel-wise comprehension results already outperform the state-of-the-art ones by a huge margin (see the table), and we believe there is space for further improvement.
Code-Mixed to Monolingual Translation Framework
1911.03772
Table 1: Evaluation results.
['[BOLD] Model', '[BOLD] BLEU', '[BOLD] TER', '[BOLD] Adq.', '[BOLD] Flu.']
[['GNMT', '2.44', '75.09', '0.90', '1.12'], ['CMT1', '15.09', '58.04', '3.18', '3.57'], ['CMT2', '16.47', '55.45', '3.19', '3.97']]
Two variants of our system were tested: one without token reordering (CMT1) and one with it (CMT2). Manual scoring (in the range 1-5, low to high quality) of Adequacy and Fluency (banchs:2015adequacy) was done by a bilingual linguist fluent in both En and Bn, with Bn as the mother tongue. We can clearly see that our pipeline outperforms GNMT by a fair margin (about 13.34 BLEU, 18.34 TER) and that token reordering further improves our system, especially in terms of fluency. For a deeper analysis, we also performed two experiments using CMT2.
Code-Mixed to Monolingual Translation Framework
1911.03772
Table 2: Error contribution.
['[BOLD] Module', '[BOLD] Contribution']
[['Language Tagger', '36'], ['Back Transliteration', '12'], ['Machine Translation', '25']]
Exp 1. We randomly took 100 instances where the achieved BLEU score was less than 15. We fed these back into our pipeline and collected the output of each module. We manually associated each error with the module causing it, assuming the input to that module was correct. The language tagger, being the first module in our pipeline, requires the most improvement for better results, followed by the machine translation system and the back-transliteration module. All of these are supervised models and can be improved with more training data. Exp 2. A linguist proficient in both English and Bengali manually divided our test data into two sets: one where the matrix language was Bengali (MBn) and the other where the matrix language was English (MEn). The size of MBn was 1205 and that of MEn was 395. When feeding the sets separately to CMT2, the BLEU and TER scores achieved on MBn were 16.98 and 55.02, while on MEn they were 9.3 and 65.11, respectively. This problem can be easily solved if the matrix and embedded languages are identified first and the input is then passed to different systems accordingly, i.e. one for the MBn type and one for the MEn type.
Adapting End-to-End Speech Recognition for Readable Subtitles
2005.12143
Table 8: Word error rate and summarization quality by the models with length count-down. Adaptation results in large gains. The model with length encoding outperforms the baseline and the one with length embedding.
['[BOLD] Model', '[BOLD] Ratio (output to desired length)', '[BOLD] WER', '[BOLD] R-1', '[BOLD] R-2', '[BOLD] R-L']
[['(1): Baseline (satisfy length)', '0.97', '55.1', '62.5', '41.5', '60.4'], ['(2): + adapt', '0.96', '39.9', '74.6', '[BOLD] 57.0', '72.6'], ['(3): Length embedding', '1.00', '57.8', '61.9', '40.5', '59.8'], ['(4): + adapt', '0.96', '39.3', '74.3', '55.2', '72.5'], ['(5): Length encoding', '1.00', '57.4', '62.6', '40.7', '60.1'], ['(6): + adapt', '0.96', '[BOLD] 38.6', '[BOLD] 75.1', '56.4', '[BOLD] 73.2']]
While the baseline ASR model achieves some degree of compression after adaptation, it cannot fully comply with length constraints. Therefore, the following experiments examine the effects of training with explicit length count-downs.
Collecting Entailment Data for Pretraining:New Protocols and Negative Results
2004.11997
Table 6: Model performance on the SuperGLUE validation and diagnostic sets. The Avg. column shows the overall SuperGLUE score—an average across the eight tasks, weighting each task equally—as a mean and standard deviation across three restarts.
['[BOLD] Intermediate- [BOLD] Training Data', '[BOLD] Avg. [ITALIC] μ ( [ITALIC] σ)', '[BOLD] BoolQ [BOLD] Acc.', '[BOLD] CB [BOLD] F1/Acc.', '[BOLD] CB [BOLD] F1/Acc.', '[BOLD] COPA [BOLD] Acc.', '[BOLD] MultiRC [BOLD] F1 [ITALIC] a/EM', '[BOLD] MultiRC [BOLD] F1 [ITALIC] a/EM', '[BOLD] ReCoRD [BOLD] F1/EM', '[BOLD] ReCoRD [BOLD] F1/EM', '[BOLD] RTE [BOLD] Acc.', '[BOLD] WiC [BOLD] Acc.', '[BOLD] WSC [BOLD] Acc.']
[['[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)'], ['None', '67.3 (1.2)', '84.3', '83.1 /', '89.3', '90.0', '70.0 /', '27.3', '86.5 /', '85.9', '85.2', '[BOLD] 71.9', '64.4'], ['Base', '[BOLD] 72.2 (0.1)', '84.4', '[BOLD] 97.4 /', '[BOLD] 96.4', '[BOLD] 94.0', '71.9 /', '33.3', '86.1 /', '85.5', '88.4', '70.8', '[BOLD] 76.9'], ['Paragraph', '70.3 (0.1)', '84.7', '[BOLD] 97.4 /', '[BOLD] 96.4', '90.0', '70.4 /', '29.9', '[BOLD] 86.7 /', '[BOLD] 86.0', '86.3', '70.2', '67.3'], ['EditPremise', '69.6 (0.6)', '83.0', '92.3 /', '92.9', '89.0', '71.2 /', '31.2', '86.4 /', '85.7', '85.6', '71.0', '65.4'], ['EditOther', '70.3 (0.1)', '84.2', '91.8 /', '94.6', '91.0', '70.7 /', '31.3', '86.2 /', '85.6', '87.4', '71.5', '68.3'], ['Contrast', '69.2 (0.0)', '84.1', '93.1 /', '94.6', '87.0', '71.4 /', '29.5', '84.8 /', '84.1', '84.5', '71.5', '67.3'], ['MNLI8.5k', '71.0 (0.6)', '84.7', '96.1 /', '94.6', '92.0', '71.7 /', '32.3', '86.4 /', '85.7', '87.4', '74.0', '68.3'], ['MNLIGov8.5k', '70.9 (0.5)', '[BOLD] 84.8', '[BOLD] 97.4 /', '[BOLD] 96.4', '92.0', '71.4 /', '32.0', '86.2 /', '85.6', '86.3', '71.6', '70.2'], ['ANLI8.5k', '70.5 (0.3)', '84.7', '96.1 /', '94.6', '89.0', '71.6 /', '31.8', '85.7 /', '85.0', '85.9', '[BOLD] 71.9', '70.2'], ['MNLI', '70.0 (0.0)', '85.3', '89.0 /', '92.9', '88.0', '[BOLD] 72.2 /', '35.4', '84.7 /', '84.1', '89.2', '71.8', '66.3'], ['ANLI', '70.4 (0.9)', '85.4', '92.4 /', '92.9', '90.0', '72.0 /', '33.5', '85.5 /', '84.8', '[BOLD] 91.0', '71.8', '66.3'], ['[BOLD] XLNet (large cased)', '[BOLD] XLNet (large cased)', '[BOLD] XLNet (large cased)', '[BOLD] XLNet (large cased)', '[BOLD] XLNet (large cased)', '[BOLD] XLNet (large cased)', '[BOLD] XLNet (large cased)', '[BOLD] XLNet (large cased)', '[BOLD] XLNet (large cased)', '[BOLD] XLNet (large cased)', '[BOLD] XLNet (large cased)', '[BOLD] XLNet (large cased)', '[BOLD] XLNet (large cased)'], ['None', '62.7 (1.3)', '82.0', '83.1 /', '89.3', '76.0', '69.9 /', '26.8', '80.9 /', '80.1', '69.0', '65.2', '63.5'], ['Base', '67.7 (0.0)', '83.1', '90.5 /', '92.9', '[BOLD] 89.0', '70.5 /', '28.2', '78.2 /', '77.4', '85.9', '68.7', '64.4'], ['Paragraph', '67.3 (0.0)', '82.5', '[BOLD] 90.8 /', '[BOLD] 94.6', '85.0', '69.8 /', '28.1', '79.4 /', '78.6', '83.8', '69.7', '64.4'], ['EditPremise', '67.0 (0.4)', '82.8', '82.8 /', '91.1', '83.0', '69.8 /', '28.6', '79.3 /', '78.5', '85.2', '[BOLD] 70.2', '[BOLD] 65.4'], ['EditOther', '67.2 (0.1)', '82.9', '84.4 /', '91.1', '87.0', '70.2 /', '29.1', '79.4 /', '78.6', '85.6', '69.7', '63.5'], ['Contrast', '66.3 (0.6)', '83.0', '82.5 /', '89.3', '83.0', '69.8 /', '28.3', '80.2 /', '79.5', '85.9', '68.2', '58.7'], ['MNLI8.5k', '67.6 (0.1)', '83.5', '89.5 /', '92.9', '88.0', '69.4 /', '28.3', '79.5 /', '78.6', '86.3', '69.3', '62.5'], ['MNLIGov8.5k', '67.5 (0.3)', '82.5', '89.5 /', '[BOLD] 94.6', '85.0', '70.0 /', '28.1', '79.8 /', '79.0', '87.4', '68.7', '62.5'], ['ANLI8.5k', '67.2 (0.3)', '83.4', '86.3 /', '91.1', '83.0', '69.3 /', '28.9', '[BOLD] 81.2 /', '[BOLD] 80.4', '85.9', '70.1', '63.5'], ['MNLI', '67.7 (0.1)', '[BOLD] 84.0', '85.5 /', '91.1', '[BOLD] 89.0', '[BOLD] 71.5 /', '[BOLD] 31.0', '79.1 /', '78.3', '87.7', '68.5', '63.5'], ['ANLI', '[BOLD] 68.1 (0.4)', '83.7', '82.8 /', '91.1', '86.0', '71.3 /', 
'30.0', '80.1 /', '79.3', '[BOLD] 89.5', '69.6', '66.3']]
Our first observation is that our overall data collection pipeline worked well for our purposes: Our Base data yields models that transfer substantially better than the plain RoBERTa or XLNet baseline, and at least slightly better than 8.5k-example samples of MNLI, MNLI Government or ANLI.
Collecting Entailment Data for Pretraining:New Protocols and Negative Results
2004.11997
Table 4: NLI modeling experiments with RoBERTa, reporting results on the validation sets for MNLI and for the task used for training each model (Self), and the GLUE diagnostic set. We compare the two-class Contrast with a two-class version of MNLI.
['[BOLD] Training Data', '[BOLD] Self', '[BOLD] MNLI', '[BOLD] GLUE Diag.']
[['Base', '84.8', '81.5', '40.5'], ['Paragraph', '78.3', '78.2', '31.7'], ['EditPremise', '82.9', '79.8', '35.5'], ['EditOther', '82.5', '82.6', '33.9'], ['MNLI8.5k', '87.5', '87.5', '44.6'], ['MNLIGov8.5k', '87.7', '85.4', '40.7'], ['ANLI8.5k', '35.7', '85.6', '39.8'], ['MNLI', '90.4', '[BOLD] 90.4', '49.2'], ['ANLI', '61.5', '90.1', '49.7'], ['MNLI\xa0(two-class)', '94.0', '[BOLD] 94.0', '–'], ['MNLI8.5k (two-class)', '92.4', '92.4', '–'], ['Contrast', '91.6', '80.6', '–']]
As NLI classifiers trained on Contrast cannot produce the neutral labels used in MNLI, we evaluate them separately and compare them with two-class variants of the MNLI models. Our first three interventions, especially EditPremise, show much lower hypothesis-only performance than Base. This indicates that these results cannot be explained away as a consequence of the lower quality of the evaluation sets for the three new datasets. It adds further evidence, alongside our PMI results, that these interventions reduce the presence of such artifacts. While we do not have a direct baseline for the two-class Contrast in this experiment, comparisons with MNLI8.5k are consistent with the encouraging results seen above.
Named Entities in Medical Case Reports: Corpus and Experiments
2003.13032
Table 4: Annotated relations between entities. Relations appear within a sentence (intra-sentential) or across sentences (inter-sentential)
['Type of Relation', 'Intra-sentential (count)', 'Intra-sentential (%)', 'Inter-sentential (count)', 'Inter-sentential (%)', 'Total (count)', 'Total (%)']
[['case has condition', '28', '18.1%', '127', '81.9%', '155', '4.0%'], ['case has finding', '169', '7.2%', '2180', '92.8%', '2349', '61.0%'], ['case has factor', '153', '52.9%', '136', '47.1%', '289', '7.5%'], ['modifier modifies finding', '994', '98.5%', '15', '1.5%', '1009', '26.2%'], ['condition causes finding', '44', '3.6%', '3', '6.4%', '47', '1.2%']]
Entities can appear in a discontinuous way. We model this as a relation between two spans, which we call "discontinuous". Findings in particular often appear as discontinuous entities; we found 543 discontinuous finding relations. The numbers for conditions and factors are lower, with seven and two, respectively. Entities can also be nested within one another. This happens either when the span of one annotation is completely embedded in the span of another annotation (fully nested) or when there is a partial overlap between the spans of two different entities (partially nested). There is a high number of inter-sentential relations in the corpus. This can be explained by the fact that the case entity occurs early in each document; furthermore, it is related to finding and factor annotations that are distributed across different sentences. The most frequently annotated relation in our corpus is the has-relation between a case entity and the findings related to that case. This correlates with the high number of finding entities.
Named Entities in Medical Case Reports: Corpus and Experiments
2003.13032
Table 5: Span-level precision (P), recall (R) and F1-scores (F1) on four distinct baseline NER systems. All scores are computed as average over five-fold cross validation.
['[EMPTY]', 'CRF P', 'CRF R', 'CRF F1', 'BiLSTM CRF P', 'BiLSTM CRF R', 'BiLSTM CRF F1', 'MTL P', 'MTL R', 'MTL F1', 'BioBERT P', 'BioBERT R', 'BioBERT F1']
[['case', '0.59', '0.76', '[BOLD] 0.66', '0.40', '0.22', '0.28', '0.55', '0.38', '0.44', '0.43', '0.64', '0.51'], ['condition', '0.45', '0.18', '0.26', '0.00', '0.00', '0.00', '0.62', '0.62', '[BOLD] 0.62', '0.33', '0.37', '0.34'], ['factor', '0.40', '0.05', '0.09', '0.23', '0.04', '0.06', '0.6', '0.53', '[BOLD] 0.56', '0.17', '0.10', '0.12'], ['finding', '0.50', '0.33', '0.40', '0.39', '0.26', '0.31', '0.62', '0.61', '[BOLD] 0.61', '0.41', '0.53', '0.46'], ['modifier', '0.74', '0.32', '0.45', '0.60', '0.42', '0.47', '0.66', '0.63', '[BOLD] 0.65', '0.51', '0.52', '0.50'], ['micro avg.', '0.52', '0.31', '0.39', '0.41', '0.23', '0.30', '0.52', '0.44', '[BOLD] 0.47', '0.39', '0.49', '0.43'], ['macro avg.', '0.51', '0.31', '0.38', '0.37', '0.23', '0.28', '0.61', '0.58', '[BOLD] 0.59', '0.40', '0.49', '0.44']]
To evaluate the performance of the four systems, we calculate the span-level precision (P), recall (R) and F1 scores, along with corresponding micro and macro scores.
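As a reminder of how the micro and macro averages in Table 5 relate, here is a small sketch computing both from per-entity-type true-positive/false-positive/false-negative counts (the counts in the example are invented, not taken from the paper):

```python
# Per-entity-type counts of span-level (true positives, false positives, false negatives).
# The numbers are invented for illustration.
counts = {
    "case":      (30, 20, 10),
    "condition": (50, 60, 170),
    "finding":   (400, 400, 800),
}

def prf(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Macro average: compute F1 per entity type, then average the F1 scores.
macro_f1 = sum(prf(*c)[2] for c in counts.values()) / len(counts)

# Micro average: pool the counts over all types, then compute a single F1.
tp = sum(c[0] for c in counts.values())
fp = sum(c[1] for c in counts.values())
fn = sum(c[2] for c in counts.values())
micro_f1 = prf(tp, fp, fn)[2]

print(round(macro_f1, 3), round(micro_f1, 3))
```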
Transferable Representation Learning in Vision-and-Language Navigation
1908.03409
Table 2: Results for Validation Seen and Validation Unseen, when trained with a small fraction of Fried-Augmented ordered by scores given by model trained on cma. SPL and SR are reported as percentages and NE and PL in meters.
['Dataset size', 'Strategy', '[BOLD] Validation Seen PL', '[BOLD] Validation Seen NE ↓', '[BOLD] Validation Seen SR ↑', '[BOLD] Validation Seen SPL ↑', '[BOLD] Validation Unseen PL', '[BOLD] Validation Unseen NE ↓', '[BOLD] Validation Unseen SR ↑', '[BOLD] Validation Unseen SPL ↑']
[['1%', 'Top', '11.1', '8.5', '21.2', '17.6', '11.2', '8.5', '20.4', '16.6'], ['1%', 'Bottom', '10.7', '9.0', '16.3', '13.1', '10.8', '8.9', '15.4', '14.1'], ['2%', 'Top', '11.7', '7.9', '25.5', '21.0', '11.3', '8.2', '22.3', '18.5'], ['2%', 'Bottom', '14.5', '9.1', '17.7', '12.7', '11.4', '8.4', '17.5', '14.1']]
Scoring generated instructions. We use this trained model to rank all the paths in Fried-Augmented and train the RCM agent on different portions of the data. Agents trained on the high-quality examples (as judged by the model) outperform those trained on the low-quality examples. Note that the performance is low in both cases because none of the original human-created instructions were used; what is important is the relative performance between examples judged higher or lower. This clearly indicates that the model scores instruction-path pairs effectively.
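A minimal sketch of this data-selection step, given model scores for each augmented instruction-path pair (the example data and the function name are placeholders):

```python
import numpy as np

def select_fraction(examples, scores, fraction=0.01, top=True):
    """Keep the top (or bottom) `fraction` of examples ranked by model score."""
    order = np.argsort(scores)          # ascending
    k = max(1, int(len(examples) * fraction))
    chosen = order[-k:] if top else order[:k]
    return [examples[i] for i in chosen]

# Toy usage: 'examples' would be instruction-path pairs from Fried-Augmented,
# 'scores' the alignment scores assigned by the model trained on cma.
examples = [f"pair_{i}" for i in range(1000)]
scores = np.random.rand(1000)
top_1_percent = select_fraction(examples, scores, fraction=0.01, top=True)
bottom_1_percent = select_fraction(examples, scores, fraction=0.01, top=False)
print(len(top_1_percent), len(bottom_1_percent))
```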
Transferable Representation Learning in Vision-and-Language Navigation
1908.03409
Table 3: AUC performance when the model is trained on different combinations of the two tasks and evaluated on the dataset containing only PR and RW negatives.
['Training', 'Val. Seen', 'Val. Unseen']
[['cma', '82.6', '72.0'], ['nvs', '63.0', '62.1'], ['cma + nvs', '[BOLD] 84.0', '[BOLD] 79.2']]
Improvements from Adding Coherence Loss. Finally, we show that training a model on cma and nvs simultaneously improves the model’s performance when evaluated on cma alone. The model is trained using the combined loss αL_alignment + (1−α)L_coherence with α=0.5 and is evaluated on its ability to differentiate incorrect instruction-path pairs from correct ones. As noted earlier, PS negatives are easier to discriminate; therefore, to keep the task challenging, the validation sets were limited to the validation splits from the PR and RW negative sampling strategies only. The area under the ROC curve (AUC) is used as the evaluation metric.
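To illustrate the combined objective and the evaluation metric, a small sketch is given below (the loss values and pair scores are placeholders; the paper's actual loss implementations are not reproduced here):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

ALPHA = 0.5  # weight between the two objectives, as in the paper

def combined_loss(loss_alignment, loss_coherence, alpha=ALPHA):
    """L = alpha * L_alignment + (1 - alpha) * L_coherence."""
    return alpha * loss_alignment + (1 - alpha) * loss_coherence

# AUC over instruction-path pairs: label 1 for correct pairs, 0 for
# PR/RW negatives; `scores` are the model's alignment scores.
labels = np.array([1, 0, 1, 0, 1, 0])
scores = np.array([0.9, 0.4, 0.7, 0.6, 0.8, 0.2])
print(round(roc_auc_score(labels, scores), 3))
```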
Transferable Representation Learning in Vision-and-Language Navigation
1908.03409
Table 4: Comparison on R2R Leaderboard Test Set. Our navigation model benefits from transfer learned representations and outperforms the known SOTA on SPL. SPL and SR are reported as percentages and NE and PL in meters.
['Model', 'PL', 'NE ↓', 'SR ↑', 'SPL ↑']
[['Random\xa0', '9.89', '9.79', '13.2', '12.0'], ['Seq-to-Seq\xa0', '8.13', '7.85', '20.4', '18.0'], ['Look Before You Leap\xa0', '9.15', '7.53', '25.3', '23.0'], ['Speaker-Follower\xa0', '14.8', '6.62', '35.0', '28.0'], ['Self-Monitoring\xa0', '18.0', '5.67', '[BOLD] 48.0', '35.0'], ['Reinforced Cross-Modal\xa0', '12.0', '6.12', '43.1', '38.0'], ['The Regretful Agent\xa0', '13.7', '5.69', '[BOLD] 48.0', '40.0'], ['ALTR (Ours)', '10.3', '[BOLD] 5.49', '[BOLD] 48.0', '[BOLD] 45.0']]
Our ALTR agent significantly outperforms the SOTA at the time on SPL, the primary metric for R2R, improving it by 5% in absolute terms, and it achieves the lowest navigation error (NE). It furthermore ties the other two best models on SR. Compared to RCM, our ALTR agent learns a more efficient policy that reaches the goal state with shorter trajectories, as indicated by its lower path length.
Transferable Representation Learning in Vision-and-Language Navigation
1908.03409
Table 5: Ablations on R2R Validation Seen and Validation Unseen sets, showing results in VLN for different combinations of pre-training tasks. SPL and SR are reported as percentages and NE and PL in meters.
['Method', 'cma', 'nvs', '[BOLD] Validation Seen PL', '[BOLD] Validation Seen NE ↓', '[BOLD] Validation Seen SR ↑', '[BOLD] Validation Seen SPL ↑', '[BOLD] Validation Unseen PL', '[BOLD] Validation Unseen NE ↓', '[BOLD] Validation Unseen SR ↑', '[BOLD] Validation Unseen SPL ↑']
[['Speaker-Follower ', '-', '-', '-', '3.36', '66.4', '-', '-', '6.62', '35.5', '-'], ['RCM', '-', '-', '12.1', '3.25', '67.6', '-', '15.0', '6.01', '40.6', '-'], ['Speaker-Follower (Ours)', '✗', '✗', '15.9', '4.90', '51.9', '43.0', '15.6', '6.40', '36.0', '29.0'], ['Speaker-Follower (Ours)', '✓', '✗', '14.9', '5.04', '50.2', '39.2', '16.8', '5.85', '39.1', '26.8'], ['Speaker-Follower (Ours)', '✗', '✓', '16.5', '5.12', '48.7', '34.9', '18.0', '6.30', '34.9', '20.9'], ['Speaker-Follower (Ours)', '✓', '✓', '11.3', '4.06', '60.8', '55.9', '14.6', '6.06', '40.0', '31.2'], ['RCM (Ours)', '✗', '✗', '13.7', '4.48', '55.3', '47.9', '14.8', '6.00', '41.1', '32.7'], ['RCM (Ours)', '✓', '✗', '10.2', '5.10', '51.8', '49.0', '9.5', '5.81', '44.8', '42.0'], ['RCM (Ours)', '✗', '✓', '19.5', '6.53', '34.6', '20.8', '18.8', '6.79', '33.7', '20.6'], ['RCM (Ours)', '✓', '✓', '13.2', '4.68', '55.8', '52.7', '9.8', '5.61', '46.1', '43.0']]
The first ablation study analyzes the effectiveness of each task individually in learning representations that can benefit the navigation agent. When pre-training CMA and NVS jointly, we see a consistent 11-12% improvement in SR for both the SF and RCM agents, as well as an improvement in the agent’s path length, thereby also improving SPL. When pre-training on CMA only, we see a consistent 8-9% improvement in SR for both the SF and RCM agents. When pre-training on NVS only, we see a drop in performance. Since there are no cross-modal components to train the language encoder in NVS, training on NVS alone fails to provide a good initialization point for the downstream navigation task, which requires cross-modal associations. However, pre-training with NVS and CMA jointly affords the model additional opportunities to improve visual-only pre-training (due to NVS) without compromising cross-modal alignment (due to CMA).
Transferable Representation Learning in Vision-and-Language Navigation
1908.03409
Table 6: Ablations showing the effect of adapting (or not) the learned representations in each branch of our RCM agent on Validation Seen and Validation Unseen. SPL and SR are reported as percentages and NE and PL in meters.
['Image encoder', 'Language encoder', '[BOLD] Validation Seen PL', '[BOLD] Validation Seen NE ↓', '[BOLD] Validation Seen SR ↑', '[BOLD] Validation Seen SPL ↑', '[BOLD] Validation Unseen PL', '[BOLD] Validation Unseen NE ↓', '[BOLD] Validation Unseen SR ↑', '[BOLD] Validation Unseen SPL ↑']
[['✗', '✗', '13.7', '4.48', '55.3', '47.9', '14.8', '6.00', '41.1', '32.7'], ['✓', '✗', '15.9', '5.05', '50.6', '38.2', '14.9', '5.94', '42.5', '33.1'], ['✗', '✓', '13.8', '4.68', '56.3', '46.6', '13.5', '5.66', '43.9', '35.8'], ['✓', '✓', '13.2', '4.68', '55.8', '52.7', '9.8', '5.61', '46.1', '43.0']]
The second ablation analyzes the effect of transferring representations to either the language encoder or the visual encoder. The learned representations help the agent generalize to previously unseen environments: when either of the encoders is warm-started, the agent outperforms the baseline success rates and SPL on the validation unseen dataset. In the absence of learned representations, the agent overfits to seen environments, and as a result the performance improves on the validation seen dataset. Among the agents that have at least one of the encoders warm-started, the agent with both encoders warm-started has significantly higher SPL (7%+) on the validation unseen dataset.
Empower Entity Set Expansion via Language Model Probing
2004.13897
Table 2: Mean Average Precision on Wiki and APR. “∇” means the number is directly from the original paper.
['[BOLD] Methods', '[BOLD] Wiki MAP@10', '[BOLD] Wiki MAP@20', '[BOLD] Wiki MAP@50', '[BOLD] APR MAP@10', '[BOLD] APR MAP@20', '[BOLD] APR MAP@50']
[['Egoset\xa0Rong et al. ( 2016 )', '0.904', '0.877', '0.745', '0.758', '0.710', '0.570'], ['SetExpan\xa0Shen et al. ( 2017 )', '0.944', '0.921', '0.720', '0.789', '0.763', '0.639'], ['SetExpander\xa0Mamou et al. ( 2018 )', '0.499', '0.439', '0.321', '0.287', '0.208', '0.120'], ['CaSE\xa0Yu et al. ( 2019b )', '0.897', '0.806', '0.588', '0.619', '0.494', '0.330'], ['MCTS\xa0Yan et al. ( 2019 )', '0.980∇', '0.930∇', '0.790∇', '0.960∇', '0.900∇', '0.810∇'], ['CGExpan-NoCN', '0.968', '0.945', '0.859', '0.909', '0.902', '0.787'], ['CGExpan-NoFilter', '0.990', '0.975', '0.890', '0.979', '0.962', '0.892'], ['CGExpan-Comb', '0.991', '0.974', '0.895', '0.983', '0.984', '0.937'], ['CGExpan-MRR', '[BOLD] 0.995', '[BOLD] 0.978', '[BOLD] 0.902', '[BOLD] 0.992', '[BOLD] 0.990', '[BOLD] 0.955']]
Overall Performance. We can see that CGExpan, along with its ablations, in general outperforms all the baselines by a large margin. Compared with SetExpan, the full model CGExpan achieves a 24% improvement in MAP@50 on the Wiki dataset and a 49% improvement in MAP@50 on the APR dataset, which verifies that our class-guided model can refine the expansion process and reduce the effect of erroneous entities on later iterations. In addition, CGExpan-NoCN outperforms most baseline models, meaning that the pre-trained LM itself is powerful enough to capture entity similarities. However, it still cannot beat the CGExpan-NoFilter model, which shows that we can properly guide the set expansion process by incorporating generated class names. Moreover, by comparing our full model with CGExpan-NoFilter, we can see that negative class names indeed help the expansion process by estimating a clear boundary for the target class and filtering out erroneous entities. Such an improvement is particularly obvious on the APR dataset. The two versions of our full model have comparable performance overall, but CGExpan-MRR consistently outperforms CGExpan-Comb. To explain this difference, we observe empirically that high-quality entities tend to rank high in most of the ranked lists. Therefore, we use the MRR version for the rest of our experiments, denoted as CGExpan.
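As a reference for the metrics and the MRR-style aggregation mentioned above, a generic sketch is shown below (the exact evaluation protocol and the CGExpan-MRR scoring details follow the paper; the functions here are illustrative assumptions):

```python
def average_precision_at_k(ranked, gold, k):
    """AP@k for one query: `ranked` is the expanded entity list,
    `gold` the set of correct entities for the query's semantic class."""
    hits, score = 0, 0.0
    for i, entity in enumerate(ranked[:k], start=1):
        if entity in gold:
            hits += 1
            score += hits / i
    return score / min(len(gold), k) if gold else 0.0

def mean_average_precision(queries, k):
    """queries: list of (ranked_list, gold_set) pairs."""
    return sum(average_precision_at_k(r, g, k) for r, g in queries) / len(queries)

def mrr_aggregate(ranked_lists):
    """Score each candidate by its mean reciprocal rank across several
    ranked lists (candidates missing from a list contribute 0), then
    return the candidates sorted by that score."""
    candidates = {c for lst in ranked_lists for c in lst}
    scores = {}
    for cand in candidates:
        rr = []
        for lst in ranked_lists:
            rank = lst.index(cand) + 1 if cand in lst else None
            rr.append(1.0 / rank if rank else 0.0)
        scores[cand] = sum(rr) / len(ranked_lists)
    return sorted(scores, key=scores.get, reverse=True)
```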
AliCoCo: Alibaba E-commerce Cognitive Concept Net
2003.13230
Table 6. Experimental results in semantic matching between e-commerce concepts and items.
['Model', 'AUC', 'F1', 'P@10']
[['BM25', '-', '-', '0.7681'], ['DSSM (huang2013learning)', '0.7885', '0.6937', '0.7971'], ['MatchPyramid (pang2016text)', '0.8127', '0.7352', '0.7813'], ['RE2 (yang2019simple)', '0.8664', '0.7052', '0.8977'], ['Ours', '0.8610', '0.7532', '0.9015'], ['Ours + Knowledge', '[BOLD] 0.8713', '[BOLD] 0.7769', '[BOLD] 0.9048']]
Our knowledge-aware deep semantic matching model outperforms all the baselines in terms of AUC, F1 and Precision at 10, showing the benefits brought by external knowledge. To further investigate how knowledge helps, we dig into cases. Using our base model without knowledge injected, the matching score of the concept “中秋节礼物 (Gifts for Mid-Autumn Festival)” and the item “老式大月饼共800g云南特产荞三香大荞饼荞酥散装多口味 (Old big moon cakes 800g Yunnan…)” is not confident enough to associate the two, since the texts on the two sides are not similar. After we introduce external knowledge for “中秋节 (Mid-Autumn Festival)” such as “中秋节自古便有赏月、吃月饼、赏桂花、饮桂花酒等习俗。(It is a tradition for people to eat moon cakes in Mid-Autumn…)”, the attention score between “中秋节 (Mid-Autumn Festival)” and “月饼 (moon cakes)” increases and bridges the gap for this concept-item pair.
AliCoCo: Alibaba E-commerce Cognitive Concept Net
2003.13230
Table 3. Experimental results of different sampling strategy in hypernym discovery.
['Strategy', 'Labeled Size', 'MRR', 'MAP', 'P@1', 'Reduce']
[['Random', '500k', '58.97', '45.30', '45.50', '-'], ['US', '375k', '59.66', '45.73', '46.00', '150k'], ['CS', '400k', '58.96', '45.22', '45.30', '100k'], ['UCS', '325k', '59.87', '46.32', '46.00', '175k']]
When the four active learning strategies reach a similar MAP score, we find that all of the sampling strategies reduce the amount of labeled data and thus save considerable manual effort. UCS is the most economical sampling strategy, needing only 325k samples and reducing the labeled data by 35% compared to the random strategy. This indicates that highly confident negative samples are also important in the task of hypernym discovery.
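The exact definitions of US, CS and UCS are given in the paper; as a rough illustration of the underlying idea only (an assumed formulation, not the authors' code), uncertainty sampling picks candidates whose predicted probability is closest to 0.5, confidence sampling picks highly confident candidates (including confident negatives), and a UCS-style mix combines the two:

```python
import numpy as np

def uncertainty_sampling(probs, k):
    """Pick the k candidates whose positive-class probability is closest to 0.5."""
    idx = np.argsort(np.abs(np.asarray(probs) - 0.5))
    return idx[:k]

def confidence_sampling(probs, k):
    """Pick the k candidates the model is most confident about
    (useful for harvesting confident negatives as well)."""
    idx = np.argsort(-np.abs(np.asarray(probs) - 0.5))
    return idx[:k]

def ucs_sampling(probs, k):
    """Assumed UCS-style variant: half uncertain, half confident candidates."""
    half = k // 2
    chosen = list(uncertainty_sampling(probs, half))
    for i in confidence_sampling(probs, k):
        if i not in chosen:
            chosen.append(i)
        if len(chosen) == k:
            break
    return np.array(chosen)
```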
AliCoCo: Alibaba E-commerce Cognitive Concept Net
2003.13230
Table 4. Experimental results in shopping concept generation.
['Model', 'Precision']
[['Baseline (LSTM + Self Attention)', '0.870'], ['+Wide', '0.900'], ['+Wide & BERT', '0.915'], ['+Wide & BERT & Knowledge', '[BOLD] 0.935']]
Compared to the baseline, which is a basic BiLSTM with a self-attention architecture, adding wide features such as different syntactic features of the concept improves the precision by 3% in absolute value. When we replace the input embedding with the BERT output, the performance improves by another 1.5%, which shows the advantage of the rich semantic information encoded by BERT. After introducing external knowledge into our model, the final performance reaches 0.935, a relative gain of 7.5% over the baseline model, indicating that leveraging external knowledge benefits commonsense reasoning on short concepts.
AliCoCo: Alibaba E-commerce Cognitive Concept Net
2003.13230
Table 5. Experimental results in shopping concept tagging.
['Model', 'Precision', 'Recall', 'F1']
[['Baseline', '0.8573', '0.8474', '0.8523'], ['+Fuzzy CRF', '0.8731', '0.8665', '0.8703'], ['+Fuzzy CRF & Knowledge', '[BOLD] 0.8796', '[BOLD] 0.8748', '[BOLD] 0.8772']]
Compared to the baseline, which is a basic sequence labeling model with a Bi-LSTM and CRF, adding the fuzzy CRF improves F1 by 1.8%, which indicates that such multi-path optimization in the CRF layer indeed contributes to disambiguation. Equipped with external knowledge embeddings to further enhance the textual information, our model improves further to 0.8772 F1. This demonstrates that introducing external knowledge can benefit tasks dealing with short texts that have limited contextual information.
End-to-end Named Entity Recognition from English Speech* *submitted to Interspeech-2020
2005.11184
Table 4: E2E NER from speech: Micro-Average scores, with and without LM.
['[BOLD] E2E NER', '[BOLD] Precision', '[BOLD] Recall', '[BOLD] F1']
[['without LM', '0.38', '00.21', '0.27'], ['with LM', '[BOLD] 0.96', '[BOLD] 0.85', '[BOLD] 0.90']]
We also studied the effect of the language model on the F1 scores. Based on this quantitative analysis, we conclude that the NER results depend closely on the language model, and that if an LM were trained on a bigger corpus, the recall could be increased further.
Latent Suicide Risk Detection on Microblog via Suicide-Oriented Word Embeddings and Layered Attention
1910.12038
Table 5: Performance comparison for different word embedding and different detection model, where “So-W2v”, “So-Glove” and “So-FastText” represent suicide-oriented word embeddings based on Word2vec, GloVe and FastText respectively. Acc and F1 represent accuracy and F1-score.
['LSTM', 'Acc(%)', 'Word2vec 79.21', 'GloVe 80.17', 'FastText 82.59', 'Bert 85.15', 'So-W2v 86.00', 'So-GloVe 86.45', 'So-FastText [BOLD] 88.00']
[['LSTM', 'F1(%)', '78.58', '79.98', '82.18', '85.69', '86.17', '86.69', '[BOLD] 88.14'], ['SDM', 'Acc(%)', '86.54', '86.55', '87.08', '88.89', '90.83', '91.00', '[BOLD] 91.33'], ['SDM', 'F1(%)', '86.63', '85.13', '86.91', '87.44', '90.55', '90.56', '[BOLD] 90.92']]
5.4.1 Effectiveness of Suicide-oriented Word Embeddings. We find that, without the suicide-related dictionary, BERT outperforms the other three word embeddings, with 2% higher accuracy and a 1.5% higher F1-score on both models. After leveraging the suicide-related dictionary, the suicide-oriented word embeddings based on FastText achieve the best performance, with accuracies of 88.00% and 91.33% and F1-scores of 88.14% and 90.92% on the two models. Clearly, there is a gap between suicide-oriented word embeddings and normal word embeddings, which verifies the effectiveness of the former.
Latent Suicide Risk Detection on Microblog via Suicide-Oriented Word Embeddings and Layered Attention
1910.12038
Table 6: Performance comparison between different suicide risk detection model, where Acc and F1 represent accuracy and F1-score respectively.
['[EMPTY]', 'Full testset Acc', 'Full testset F1', 'Harder sub-testset Acc', 'Harder sub-testset F1']
[['SVM', '70.34', '69.01', '61.17', '64.11'], ['NB', '69.59', '70.12', '65.14', '62.20'], ['LSTM', '88.00', '88.14', '76.89', '75.32'], ['SDM', '[BOLD] 91.33', '[BOLD] 90.92', '[BOLD] 85.51', '[BOLD] 84.77']]
In this comparison, LSTM and SDM employ So-FastText word embeddings as their input. SDM improves the accuracy by over 3.33% and obtains a 2.78% higher F1-score on the full test set.
Latent Suicide Risk Detection on Microblog via Suicide-Oriented Word Embeddings and Layered Attention
1910.12038
Table 7: Ablation test for SDM with different inputs.
['Inputs', 'Accuracy', 'F1-score']
[['Text', '88.56', '87.99'], ['Text+Image', '89.22', '89.22'], ['Text+User’s feature', '90.66', '90.17'], ['Text+Image +User’s feature', '[BOLD] 91.33', '[BOLD] 90.92']]
To show the contribution of each input to the final classification performance, we design an ablation test for SDM in which different inputs are removed. All SDM variants are based on the So-FastText embedding. Since not every post contains an image and the user features contain missing values, we do not use images alone or user features alone as input to SDM. The results show that user features play a more important role than visual information, and that the more modalities we use, the better the performance we get.
Learning to Stop in Structured Prediction for Neural Machine Translation
1904.01032
Table 5: BLEU and length ratio of models on Zh→En validation set. †indicates our own implementation.
['Model', 'Train', 'Decode', 'BLEU', 'Len.']
[['Model', 'Beam', 'Beam', 'BLEU', 'Len.'], ['Seq2Seq†', '-', '7', '37.74', '0.96'], ['w/ Len. reward†', '-', '7', '38.28', '0.99'], ['BSO†', '4', '3', '36.91', '1.03'], ['BSO†', '8', '7', '35.57', '1.07'], ['This work', '4', '3', '38.41', '1.00'], ['This work', '8', '7', '39.51', '1.00']]
We compare our model with seq2seq, BSO, and seq2seq with a length reward (Huang et al.); note that our proposed method does not require tuning a hyper-parameter. Our proposed model achieves a better length ratio for almost all source sentence lengths on the dev set.
Language Models with Transformers
1904.09408
Table 2: Ablation study. Compare CAS with not adding LSTM layers (CAS-Subset) and not updating Transformer block parameters (CAS-LSTM).
['[BOLD] Model', '[BOLD] Datasets [BOLD] PTB', '[BOLD] Datasets [BOLD] PTB', '[BOLD] Datasets [BOLD] WT-2', '[BOLD] Datasets [BOLD] WT-2', '[BOLD] Datasets [BOLD] WT-103', '[BOLD] Datasets [BOLD] WT-103']
[['[BOLD] Model', '[BOLD] Val', '[BOLD] Test', '[BOLD] Val', '[BOLD] Test', '[BOLD] Val', '[BOLD] Test'], ['BERT-CAS-Subset', '42.53', '36.57', '[BOLD] 51.15', '[BOLD] 44.96', '[BOLD] 44.34', '[BOLD] 43.33'], ['BERT-CAS-LSTM', '[BOLD] 40.22', '[BOLD] 35.32', '53.82', '47.00', '53.66', '51.60'], ['GPT-CAS-Subset', '47.58', '41.85', '54.58', '50.08', '[BOLD] 35.49', '[BOLD] 35.48'], ['GPT-CAS-LSTM', '[BOLD] 47.24', '[BOLD] 41.61', '[BOLD] 50.55', '[BOLD] 46.62', '36.68', '36.61']]
CAS-LSTM adds LSTM layers but fixes all Transformer blocks during fine-tuning. As can be seen, both CAS-Subset and CAS-LSTM improve significantly upon a naive use of BERT and GPT. This is to be expected, since fine-tuning improves performance. On the smaller dataset, i.e. PTB, adding LSTMs is more effective; this might be due to the overfitting incurred when updating the Transformers. On the other hand, on the larger dataset, i.e. WT-103, adding an LSTM is less effective, which means that adapting the Transformer parameters for a better sentence-level representation is more important there. Combining both leads to further improvement. CAS outperforms AWD-LSTM-MoS on all three datasets.
Language Models with Transformers
1904.09408
Table 1: Performance of Coordinate Architecture Search (CAS). ‘Val’ and ‘Test’ denote validation and test perplexity respectively.
['[BOLD] Model', '[BOLD] Datasets [BOLD] PTB', '[BOLD] Datasets [BOLD] PTB', '[BOLD] Datasets [BOLD] WT-2', '[BOLD] Datasets [BOLD] WT-2', '[BOLD] Datasets [BOLD] WT-103', '[BOLD] Datasets [BOLD] WT-103']
[['[BOLD] Model', '[BOLD] Val', '[BOLD] Test', '[BOLD] Val', '[BOLD] Test', '[BOLD] Val', '[BOLD] Test'], ['AWD-LSTM-MoS-BERTVocab', '43.47', '38.04', '48.48', '42.25', '54.94', '52.91'], ['BERT', '72.99', '62.40', '79.76', '69.32', '109.54', '107.30'], ['BERT-CAS (Our)', '39.97', '34.47', '38.43', '34.64', '40.70', '39.85'], ['BERT-Large-CAS (Our)', '[BOLD] 36.14', '[BOLD] 31.34', '[BOLD] 37.79', '[BOLD] 34.11', '[BOLD] 19.67', '[BOLD] 20.42'], ['AWD-LSTM-MoS-GPTVocab', '50.20', '44.92', '55.03', '49.77', '52.90', '51.88'], ['GPT', '79.44', '68.79', '89.96', '80.60', '63.07', '63.47'], ['GPT-CAS (Our)', '[BOLD] 46.24', '[BOLD] 40.87', '[BOLD] 50.41', '[BOLD] 46.62', '[BOLD] 35.75', '[BOLD] 34.24']]
First note that GPT and BERT are significantly worse than AWD-LSTM-MoS. This confirms our hypothesis that neither BERT nor GPT is an effective tool for language modeling: applying them naively leads to significantly worse results than AWD-LSTM-MoS on all three datasets. It demonstrates that language modeling requires strong capabilities in modeling the word-order dependency within sentences. However, due to the combination of self-attention and positional encoding, both GPT and BERT primarily capture coarse-grained language representations with only limited word-level context.
Language Models with Transformers
1904.09408
Table 5: Efficiency of different search methods on PTB and WT-2.
['[BOLD] Search Method', '[BOLD] Search Cost (GPU days) [BOLD] PTB', '[BOLD] Search Cost (GPU days) [BOLD] WT-2', '[BOLD] Method Class']
[['NAS Zoph and Le ( 2016 )', '1,000 CPU days', 'n.a.', 'reinforcement'], ['ENAS Pham et al. ( 2018 )', '0.5', '0.5', 'reinforcement'], ['DARTS (first order) Liu et al. ( 2018 )', '0.5', '1', 'gradient descent'], ['DARTS (second order) Liu et al. ( 2018 )', '1', '1', 'gradient descent'], ['BERT-CAS (Our)', '[BOLD] 0.15', '[BOLD] 0.38', 'greedy search'], ['GPT-CAS (Our)', '0.23', '0.53', 'greedy search']]
As can be seen, BERT-CAS is cheaper than all the others. The results indicate that by leveraging prior knowledge about the design of neural networks for specific tasks, we only need to optimize the architecture within a small, confined sub-space, which speeds up the search process. For example, BERT-CAS is directly based on BERT; applying search on top of such an effective network facilitates adaptation to similar tasks.
Language Models with Transformers
1904.09408
Table 6: Compare model parameter size and results with GPT-2. The GPT-2 model size and results are from Radford et al. (2019).
['[BOLD] Model', '[BOLD] Parameters', '[BOLD] Datasets PTB', '[BOLD] Datasets WT-2', '[BOLD] Datasets WT-103']
[['GPT-2', '345M', '47.33', '22.76', '26.37'], ['GPT-2', '762M', '40.31', '19.93', '22.05'], ['GPT-2', '1542M', '35.76', '[BOLD] 18.34', '[BOLD] 17.48'], ['BERT-Large-CAS', '395M', '[BOLD] 31.34', '34.11', '20.42']]
We specifically compare the proposed model with the recent state-of-the-art language model GPT-2 (Radford et al., 2019). More surprisingly, on PTB the proposed method performs better than GPT-2 (1542M), which has around 4 times more parameters. On WT-103, BERT-Large-CAS is better than GPT-2 (762M), which has around 2 times more parameters. Note that on WT-2 our method performs worse than GPT-2; we suspect the reason is that WebText still contains text similar to Wikipedia, and WT-2 is quite small in terms of scale. In contrast, we regard the results on WT-103 (50 times larger than WT-2) as a more reasonable comparison with GPT-2.
Fast and Scalable Expansion of Natural Language Understanding Functionality for Intelligent Agents
1805.01542
Table 2: Results for around 200 custom developer domains. For F1, higher values are better, while for SER lower values are better. * denotes statistically significant SER difference compared to both baselines.
['Approach', '[ITALIC] F1 [ITALIC] Intent Mean', '[ITALIC] F1 [ITALIC] Intent Median', '[ITALIC] F1 [ITALIC] Slot Mean', '[ITALIC] F1 [ITALIC] Slot Median', '[ITALIC] SER Mean', '[ITALIC] SER Median']
[['Baseline CRF/MaxEnt', '94.6', '96.6', '80.0', '91.5', '14.5', '9.2'], ['Baseline DNN', '91.9', '95.9', '85.1', '92.9', '14.7', '9.2'], ['Proposed Pretrained DNN *', '[BOLD] 95.2', '[BOLD] 97.2', '[BOLD] 88.6', '[BOLD] 93.0', '[BOLD] 13.1', '[BOLD] 7.9']]
For the custom domain experiments, we focus on a low-resource experimental setup, where we assume that our only target training data is the data provided by the external developer. We report results for around 200 custom domains, which is a subset of all domains we support. For training the baselines, we use the available data provided by the developer for each domain, e.g., example phrases and gazetteers. This training data size was selected empirically based on baseline model accuracy. The generated utterances may contain repetitions for domains where the external developer provided a small number of example phrases and few slot values per gazetteer. For the proposed method, we pre-train a DNN model on 4 million utterances and fine-tune it per domain using the 50K grammar utterances of that domain and any available gazetteer information (for extracting gazetteer features). This suggests that the baseline DNN models (without pretraining) cannot be trained robustly without large amounts of available training data. The proposed pre-trained DNN significantly outperforms both baselines across all metrics (paired t-test).
Fast and Scalable Expansion of Natural Language Understanding Functionality for Intelligent Agents
1805.01542
Table 3: Results on domains A, B and C for the proposed pretrained DNN method and the baseline CRF/MaxEnt method during experimental early stages of domain development. * denotes statistically significant SER difference between proposed and baseline
['Train Set', 'Size', 'Method', '[ITALIC] F1 [ITALIC] intent', '[ITALIC] F1 [ITALIC] slot', '[ITALIC] SER']
[['Domain A (5 intents, 36 slots)', 'Domain A (5 intents, 36 slots)', 'Domain A (5 intents, 36 slots)', 'Domain A (5 intents, 36 slots)', 'Domain A (5 intents, 36 slots)', 'Domain A (5 intents, 36 slots)'], ['Core*', '500', 'Baseline', '85.0', '63.9', '51.9'], ['data', '500', 'Proposed', '86.6', '66.6', '48.2'], ['Bootstrap', '18K', 'Baseline', '86.1', '72.8', '49.6'], ['data*', '18K', 'Proposed', '86.9', '73.8', '47.0'], ['Core +', '3.5K', 'Baseline', '90.4', '74.3', '40.5'], ['user data*', '3.5K', 'Proposed', '90.1', '75.8', '37.9'], ['Core +', '43K', 'Baseline', '92.1', '80.6', '33.4'], ['bootstrap +', '43K', 'Proposed', '91.9', '80.8', '32.8'], ['user data', '43K', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Domain B (2 intents, 17 slots)', 'Domain B (2 intents, 17 slots)', 'Domain B (2 intents, 17 slots)', 'Domain B (2 intents, 17 slots)', 'Domain B (2 intents, 17 slots)', 'Domain B (2 intents, 17 slots)'], ['Bootstrap', '2K', 'Baseline', '97.0', '94.7', '10.1'], ['data*', '2K', 'Proposed', '97.8', '95.3', '6.3'], ['User data', '2.5K', 'Baseline', '97.0', '94.7', '8.2'], ['[EMPTY]', '2.5K', 'Proposed', '97.1', '96.4', '7.1'], ['Bootstrap +', '52K', 'Baseline', '96.7', '95.2', '8.2'], ['user data*', '52K', 'Proposed', '97.0', '96.6', '6.4'], ['Domain C (22 intents, 43 slots)', 'Domain C (22 intents, 43 slots)', 'Domain C (22 intents, 43 slots)', 'Domain C (22 intents, 43 slots)', 'Domain C (22 intents, 43 slots)', 'Domain C (22 intents, 43 slots)'], ['Core*', '300', 'Baseline', '77.9', '47.8', '64.2'], ['data', '300', 'Proposed', '85.6', '46.6', '51.8'], ['Bootstrap', '26K', 'Baseline', '46.1', '65.8', '64.0'], ['data*', '26K', 'Proposed', '49.1', '68.9', '62.8'], ['Core +', '126K', 'Baseline', '92.3', '78.3', '28.1'], ['bootstrap. +', '126K', 'Proposed', '92.7', '72.7', '31.9'], ['user data*', '126K', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]']]
We evaluate our methods on three new built-in domains referred to here as domain A (5 intents, 36 slot types), domain B (2 intents, 17 slot types) and domain C (22 intents, 43 slot types). Core data refers to core example utterances, bootstrap data refers to domain data collection and the generation of synthetic (grammar) utterances, and user data refers to user interactions with our agent. Here we evaluate whether we can accelerate the development process by achieving accuracy gains in the early, low-resource stages and bootstrap a model faster. Results for the non-pre-trained DNN baseline are similar and omitted for lack of space. Our proposed DNN models are pre-trained on 4 million utterances from mature domains and then fine-tuned on the available target data. The baseline CRF/MaxEnt models are trained on the available target data. The types of target data differ slightly across domains according to domain development characteristics. For example, for domain B a very small amount of core data was available, and it was combined with the bootstrap data for the experiments.
Non-autoregressive Machine Translation with Disentangled Context Transformer
2001.05136
Table 1: The performance of non-autoregressive machine translation methods on the WMT14 EN-DE and WMT16 EN-RO test data. The Step columns indicate the average number of sequential transformer passes. Shaded results use a small transformer (d_{model}=d_{hidden}=512). Our EN-DE results show the scores after conventional compound splitting (luong-etal-2015-effective; Vaswani2017AttentionIA).
['[BOLD] Model n: # rescored candidates', '[BOLD] en\\rightarrowde Step', '[BOLD] en\\rightarrowde BLEU', '[BOLD] de\\rightarrowen Step', '[BOLD] de\\rightarrowen BLEU', '[BOLD] en\\rightarrowro Step', '[BOLD] en\\rightarrowro BLEU', '[BOLD] ro\\rightarrowen Step', '[BOLD] ro\\rightarrowen BLEU']
[['Gu2017NonAutoregressiveNM (n=100)', '1', '19.17', '1', '23.20', '1', '29.79', '1', '31.44'], ['Wang2019NonAutoregressiveMT (n=9)', '1', '24.61', '1', '28.90', '–', '–', '–', '–'], ['Li2019HintBasedTF (n=9)', '1', '25.20', '1', '28.80', '–', '–', '–', '–'], ['Ma2019FlowSeqNC (n=30)', '1', '25.31', '1', '30.68', '1', '32.35', '1', '32.91'], ['Sun2019Fast (n=19)', '1', '26.80', '1', '30.04', '–', '–', '–', '–'], ['ReorderNAT', '1', '26.51', '1', '31.13', '1', '31.70', '1', '31.99'], ['Shu2019LatentVariableNN (n=50)', '1', '25.1', '–', '–', '–', '–', '–', '–'], ['[BOLD] Iterative NAT Models', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Lee2018DeterministicNN', '10', '21.61', '10', '25.48', '10', '29.32', '10', '30.19'], ['Ghazvininejad2019MaskPredictPD (CMLM)', '4', '25.94', '4', '29.90', '4', '32.53', '4', '33.23'], ['[EMPTY]', '10', '27.03', '10', '30.53', '10', '33.08', '10', '33.31'], ['Gu2019LevenshteinT (LevT)', '7+', '27.27', '–', '–', '–', '–', '7+', '33.26'], ['[BOLD] Our Implementations', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['CMLM + Mask-Predict', '4', '26.73', '4', '30.75', '4', '33.02', '4', '33.27'], ['CMLM + Mask-Predict', '10', '[BOLD] 27.39', '10', '31.24', '10', '[BOLD] 33.33', '10', '[BOLD] 33.67'], ['DisCo + Mask-Predict', '4', '25.83', '4', '30.15', '4', '32.22', '4', '32.92'], ['DisCo + Mask-Predict', '10', '27.06', '10', '30.89', '10', '32.92', '10', '33.12'], ['DisCo + Easy-First', '4.82', '27.34', '4.23', '[BOLD] 31.31', '3.29', '33.22', '3.10', '33.25'], ['[BOLD] AT Models', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Vaswani2017AttentionIA (base)', 'N', '27.3', '–', '–', '–', '–', '–', '–'], ['Vaswani2017AttentionIA (large)', 'N', '28.4', '–', '–', '–', '–', '–', '–'], ['[BOLD] Our Implementations', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['AT Transformer Base (EN-RO teacher)', 'N', '27.38', 'N', '31.78', 'N', '34.16', 'N', '34.46'], ['AT Transformer Base + Distillation', 'N', '28.24', 'N', '31.54', '–', '–', '–', '–'], ['AT Transformer Large (EN-DE teacher)', 'N', '28.60', 'N', '31.71', '–', '–', '–', '–']]
First, our re-implementations of CMLM + Mask-Predict outperform Ghazvininejad2019MaskPredictPD (e.g., 31.24 vs. 30.53 on de→en with 10 steps). This is probably due to our tuning of the dropout rate and to averaging the weights of the 5 best epochs based on validation BLEU performance.
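Averaging the weights of the best few checkpoints by validation BLEU is a standard trick; a minimal PyTorch-style sketch is given below (checkpoint paths and state-dict layout are assumptions, not the authors' training code):

```python
import torch

def average_checkpoints(paths):
    """Average the parameter tensors of several saved state_dicts
    (e.g. the 5 checkpoints with the best validation BLEU)."""
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    return {k: v / len(paths) for k, v in avg.items()}

# model.load_state_dict(average_checkpoints(best5_paths))
```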
Non-autoregressive Machine Translation with Disentangled Context Transformer
2001.05136
Table 4: Effects of distillation across different models and inference. All results are BLEU scores from the dev data. T and b denote the max number of iterations and beam size respectively.
['Model', 'T', '[BOLD] en\\rightarrowde raw', '[BOLD] en\\rightarrowde dist.', '[BOLD] en\\rightarrowde \\Delta', '[BOLD] ro\\rightarrowen raw', '[BOLD] ro\\rightarrowen dist.', '[BOLD] ro\\rightarrowen \\Delta']
[['CMLM + MaskP', '4', '22.7', '25.5', '2.8', '33.2', '34.8', '1.6'], ['CMLM + MaskP', '10', '24.5', '25.9', '1.4', '34.5', '34.9', '0.4'], ['DisCo + MaskP', '4', '21.4', '24.6', '3.2', '32.3', '34.1', '1.8'], ['DisCo + MaskP', '10', '23.6', '25.3', '1.7', '33.4', '34.3', '0.9'], ['DisCo + EasyF', '10', '23.9', '25.6', '1.7', '34.0', '35.0', '1.0'], ['AT Base (b=1)', 'N', '25.5', '26.4', '0.9', '–', '–', '–'], ['AT Base (b=5)', 'N', '26.1', '26.8', '0.7', '–', '–', '–']]
Consistent with previous work (Gu2017NonAutoregressiveNM; UnderstandingKD), we find that distillation facilitates all of the non-autoregressive models. Moreover, the DisCo transformer benefits more from distillation than the CMLM does under the same mask-predict inference. This is in line with UnderstandingKD, who showed that there is a correlation between model capacity and distillation data complexity. The DisCo transformer uses contextless keys and values, resulting in reduced capacity. Autoregressive translation also improves with distillation from a large transformer, but the difference is relatively small. Finally, we observe that the gain from distillation decreases as we incorporate more global information in inference (more iterations in the NAT case and a larger beam size in the AT case).
Non-autoregressive Machine Translation with Disentangled Context Transformer
2001.05136
Table 6: Dev results from bringing training closer to inference.
['Training Variant', '[BOLD] en\\rightarrowde Step', '[BOLD] en\\rightarrowde BLEU', '[BOLD] ro\\rightarrowen Step', '[BOLD] ro\\rightarrowen BLEU']
[['Random Sampling', '4.29', '[BOLD] 25.60', '3.17', '[BOLD] 34.97'], ['Easy-First Training', '4.03', '24.76', '2.94', '34.96']]
Easy-First Training. So far we have trained our models to predict every word given a random subset of the other words. But this training scheme yields a gap between training and inference, which might harm the model. We attempt to make training closer to inference by training the DisCo transformer in the easy-first order. Similarly to inference, we first predict the easy-first order by estimating P(Y_n|X) for all n, and then use that order to determine Y_obs^n. The overall loss is the sum of the negative log-likelihoods of these two steps. In both directions, this easy-first training does not improve performance, suggesting that randomness helps the model. Notice also that the average number of iterations in inference decreases (4.03 vs. 4.29, 2.94 vs. 3.17): the model gets trapped in a sub-optimal solution with fewer iterations due to a lack of exploration.
Non-autoregressive Machine Translation with Disentangled Context Transformer
2001.05136
Table 7: Dev results with different decoding strategies.
['[BOLD] Inference Strategy', '[BOLD] en\\rightarrowde Step', '[BOLD] en\\rightarrowde BLEU', '[BOLD] ro\\rightarrowen Step', '[BOLD] ro\\rightarrowen BLEU']
[['Left-to-Right Order', '6.80', '21.25', '4.86', '33.87'], ['Right-to-Left Order', '6.79', '20.75', '4.67', '34.38'], ['All-But-Itself', '6.90', '20.72', '4.80', '33.35'], ['Parallel Easy-First', '4.29', '[BOLD] 25.60', '3.17', '[BOLD] 34.97'], ['Mask-Predict', '10', '25.34', '10', '34.54']]
Alternative Inference Algorithms. Parallel easy-first computes each position’s probability conditioned on the easier positions from the previous iteration. We evaluate two alternative orderings, left-to-right and right-to-left, and see that both yield much degraded performance. We also attempt to use even broader context than parallel easy-first by computing the probability at each position based on all other positions (all-but-itself, Y_obs^{n,t} = Y_{≠n}^{t−1}). We again see degraded performance, suggesting that cyclic dependency (e.g., Y_m^{t−1} ∈ Y_obs^{n,t} and Y_n^{t−1} ∈ Y_obs^{m,t}) breaks consistency. For example, a model can have two output candidates: “Hong Kong” and “New York” (Zhang2020POINTERCT). In this case, we might end up producing “Hong York” due to this cyclic dependency. These results suggest that the easy-first ordering we introduced is a simple yet effective approach.
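A rough sketch of how the compared conditioning sets can be instantiated (assumed inputs; not the authors' implementation): given per-position confidences from the previous iteration, each position conditions on the positions that are "easier" than itself under parallel easy-first, on positional order under left-to-right/right-to-left, and on every other position under all-but-itself.

```python
import numpy as np

def observed_positions(confidences, strategy):
    """Return, for each target position n, the set of positions whose
    previous-iteration predictions it may condition on."""
    n_pos = len(confidences)
    easiness = np.argsort(-np.asarray(confidences))  # most confident first
    rank = {int(p): r for r, p in enumerate(easiness)}
    obs = []
    for n in range(n_pos):
        if strategy == "easy-first":
            obs.append({m for m in range(n_pos) if rank[m] < rank[n]})
        elif strategy == "left-to-right":
            obs.append(set(range(0, n)))
        elif strategy == "right-to-left":
            obs.append(set(range(n + 1, n_pos)))
        elif strategy == "all-but-itself":
            obs.append(set(range(n_pos)) - {n})
    return obs
```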
Model Compression with Two-stage Multi-teacher Knowledge Distillation for Web Question Answering System
1910.08381
Table 8. Extremely large Q&A dataset results.
['[BOLD] Performance (ACC) [BOLD] BERTlarge', '[BOLD] Performance (ACC) [BOLD] KD', '[BOLD] Performance (ACC) [BOLD] MKD', '[BOLD] Performance (ACC) [BOLD] TMKD']
[['77.00', '73.22', '77.32', '[BOLD] 79.22']]
To further evaluate the potential of TMKD, we conduct extensive experiments on the extremely large-scale CommQA-Unlabeled corpus (0.1 billion unlabeled Q&A pairs) and CommQA-Labeled (12M labeled Q&A pairs). Four separate teacher models (T1-T4) are trained with a batch size of 128 and learning rates of {2,3,4,5}e−5. The max sequence length is set to 200 and the number of epochs to 4. Interestingly, on this extremely large Q&A dataset, TMKD even exceeds the performance of its teacher model (ACC: 79.22 vs 77.00), which further verifies the effectiveness of our approach.
Model Compression with Two-stage Multi-teacher Knowledge Distillation for Web Question Answering System
1910.08381
Table 4. Model comparison between our methods and baseline methods. ACC denotes accuracy (all ACC metrics in the table are percentage numbers with % omitted). Specially for MNLI, we average the results of matched and mismatched validation set.
['[BOLD] Model', '[BOLD] Model', '[BOLD] Performance (ACC) [BOLD] DeepQA', '[BOLD] Performance (ACC) [BOLD] MNLI', '[BOLD] Performance (ACC) [BOLD] SNLI', '[BOLD] Performance (ACC) [BOLD] QNLI', '[BOLD] Performance (ACC) [BOLD] RTE', '[BOLD] Inference Speed(QPS)', '[BOLD] Parameters (M)']
[['[BOLD] Original Model', '[BOLD] BERT-3', '75.78', '70.77', '77.75', '78.51', '57.42', '207', '50.44'], ['[BOLD] Original Model', '[BOLD] BERTlarge', '81.47', '79.10', '80.90', '90.30', '68.23', '16', '333.58'], ['[BOLD] Original Model', '[BOLD] BERTlarge ensemble', '81.66', '79.57', '81.39', '90.91', '70.75', '16/3', '333.58*3'], ['[BOLD] Traditional Distillation Model', '[BOLD] Bi-LSTM (1-o-1)', '71.69', '59.39', '69.59', '69.12', '56.31', '207', '50.44'], ['[BOLD] Traditional Distillation Model', '[BOLD] Bi-LSTM (1avg-o-1)', '71.93', '59.60', '70.04', '69.53', '57.35', '207', '50.44'], ['[BOLD] Traditional Distillation Model', '[BOLD] Bi-LSTM (m-o-m)', '72.04', '61.71', '72.89', '69.89', '58.12', '207/3', '50.44*3'], ['[BOLD] Traditional Distillation Model', '[BOLD] BERT-3 (1-o-1)', '77.35', '71.07', '78.62', '77.65', '55.23', '217', '45.69'], ['[BOLD] Traditional Distillation Model', '[BOLD] BERT-3 (1avg-o-1)', '77.63', '70.63', '78.64', '78.20', '58.12', '217', '45.69'], ['[BOLD] Traditional Distillation Model', '[BOLD] BERT-3 (m-o-m)', '77.44', '71.28', '78.71', '77.90', '57.40', '217/3', '45.69*3'], ['[BOLD] Our Distillation Model', '[BOLD] Bi-LSTM (TMKDbase)', '74.73', '61.68', '71.71', '69.99', '62.74', '207', '50.45'], ['[BOLD] Our Distillation Model', '∗ [BOLD] TMKDbase', '79.93', '71.29', '78.35', '83.53', '66.64', '217', '45.70'], ['[BOLD] Our Distillation Model', '∗ [BOLD] TMKDlarge', '[BOLD] 80.43', '[BOLD] 73.93', '[BOLD] 79.48', '[BOLD] 86.44', '[BOLD] 67.50', '[BOLD] 217', '[BOLD] 45.70']]
In this section, we conduct experiments to compare TMKD with the baselines along three dimensions, i.e. inference speed, parameter size and performance on task-specific test sets. 1-o-1 and 1avg-o-1 (BERT-3 and Bi-LSTM) obtain fairly good results in terms of inference speed and memory capacity, but there are still gaps compared to the original BERT model on the ACC metric. m-o-m performs better than 1-o-1; however, its inference speed and memory consumption increase in proportion to the number of student models used in the ensemble. Compared with 1-o-1, 1avg-o-1 and m-o-m, TMKD achieves the best trade-off across all three dimensions. In terms of memory, TMKD only needs a small amount of additional memory compared with 1-o-1, since the majority of parameters are shared across the different distillation tasks. In addition, TMKD performs significantly better than BERT-3, which further proves the effectiveness of our model.
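As background for the 1-o-1 / 1avg-o-1 / m-o-m notation, a minimal sketch of a soft-label distillation loss with several teachers is shown below (the temperature, loss form and label weighting are assumptions; the paper's TMKD objective adds a multi-task, two-stage setup not shown here):

```python
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                          temperature=2.0, alpha=0.5):
    """Cross-entropy on gold labels plus KL to the averaged teacher
    distribution ('1avg-o-1' style: several teachers, one student)."""
    # Average the teachers' softened probability distributions.
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    ce = F.cross_entropy(student_logits, labels)
    return alpha * ce + (1 - alpha) * (temperature ** 2) * kd
```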
Model Compression with Two-stage Multi-teacher Knowledge Distillation for Web Question Answering System
1910.08381
Table 5. Comparison between KD and TKD
['[BOLD] Model', '[BOLD] Performance (ACC) [BOLD] DeepQA', '[BOLD] Performance (ACC) [BOLD] MNLI', '[BOLD] Performance (ACC) [BOLD] SNLI', '[BOLD] Performance (ACC) [BOLD] QNLI', '[BOLD] Performance (ACC) [BOLD] RTE']
[['[BOLD] KD (1-o-1)', '77.35', '71.07', '78.62', '77.65', '55.23'], ['[BOLD] TKD', '[BOLD] 80.12', '[BOLD] 72.34', '[BOLD] 78.23', '[BOLD] 85.89', '[BOLD] 67.35']]
(1) On the DeepQA dataset, TKD shows significant gains by leveraging large-scale unsupervised Q&A pairs for distillation pre-training. (2) Although the Q&A task is different from the GLUE tasks, the student models for the GLUE tasks still benefit considerably from the distillation pre-training stage that leverages the Q&A task. This proves the effectiveness of the distillation pre-training stage on a large Q&A corpus.
Model Compression with Two-stage Multi-teacher Knowledge Distillation for Web Question Answering System
1910.08381
Table 9. Compare different number of transformer layer.
['[BOLD] Dataset', '[BOLD] Metrics', '[BOLD] Layer Number [BOLD] 1', '[BOLD] Layer Number [BOLD] 3', '[BOLD] Layer Number [BOLD] 5']
[['[BOLD] DeepQA', 'ACC', '74.59', '78.46', '79.54'], ['[BOLD] MNLI', 'ACC', '61.23', '71.13', '72.76'], ['[BOLD] SNLI', 'ACC', '70.21', '77.67', '78.20'], ['[BOLD] QNLI', 'ACC', '70.60', '82.04', '84.94'], ['[BOLD] RTE', 'ACC', '54.51', '65.70', '66.07'], ['[EMPTY]', 'QPS', '511', '217', '141']]
(2) As the number of transformer layers n increases, the performance gain between two consecutive settings decreases. That is, when n increases from 1 to 3, the ACC gains on the 5 datasets are (3.87, 9.90, 7.46, 11.44, 11.19), a very big jump; when n increases from 3 to 5, the gains shrink to (1.08, 1.63, 0.53, 2.89, 0.37), which offers little added value given the significantly decreased QPS.
Guiding Corpus-based Set Expansion by Auxiliary Sets Generation and Co-Expansion
2001.10106
Table 1. A frequency submatrix ϕ of president entities and their co-occured skip-grams.
['Entities', 'President __ and', 'President __ ,', 'President __ said', 'President __ ’s', 'President __ *']
[['Bill Clinton', '33', '17', '2', '23', '75'], ['Hu Jintao', '9', '8', '3', '0', '20'], ['Gorbachev', '2', '3', '0', '2', '7']]
The infrequent “Gorbachev” does not have enough co-occurrence with type-indicative skip-grams and might even be filtered out from the candidate pool. What’s more, the skip-gram “President __ said” might be neglected in the context feature selection process due to its lack of interaction with president entities. These four different skip-grams are actually similar, since they can all be filled in with a president entity. Therefore, if we can extract the meaningful part from the original window, such as “President __” in this example, we will be able to merge these different skip-grams into a common and more flexible pattern, “President __ ∗”, which we name a flexgram. The last term is an asterisk, which serves as a wildcard matching any word. Then we can update the co-occurrence matrix by Φ_{e,flex(c)} = Σ_{flex(c′)=flex(c)} Φ_{e,c′}, where flex(c) denotes the flexible transformation of the local context c. In this way we merge those infrequent but type-indicative skip-grams together. One might say that a trivial solution to this data sparsity issue is to exhaust a variety of window sizes (e.g., [-2,+2], [-1,+1], [-1,0], [0,+1]), as has been done by previous studies (Shen2017SetExpanCS; Yu2019CaSE; Rong2016EgoSetEW). However, this approach very likely ends up generating many skip-grams that are too general. For example, the other half of “President __ and” is “__ and”, which only contains stop words and is not informative enough to imply the semantic class of the “skipped” term.
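A small sketch of the flexgram merge follows (the flex() transformation here is simplified to keeping the left half of the window and appending a wildcard; the paper's actual extraction of the meaningful part is more general):

```python
from collections import defaultdict

def flex(skip_gram):
    """Simplified flexible transformation: keep the left half of the
    context window and replace the rest with a wildcard '*'."""
    left, _right = skip_gram.split("__")
    return left.strip() + " __ *"

def merge_counts(phi):
    """phi: {(entity, skip_gram): count}. Merge counts of skip-grams that
    share the same flexgram, i.e.
    Phi[e, flex(c)] = sum over c' with flex(c') == flex(c) of Phi[e, c']."""
    merged = defaultdict(int)
    for (entity, sg), count in phi.items():
        merged[(entity, flex(sg))] += count
    return dict(merged)

phi = {("Gorbachev", "President __ and"): 2,
       ("Gorbachev", "President __ ,"): 3,
       ("Gorbachev", "President __ 's"): 2}
print(merge_counts(phi))  # {('Gorbachev', 'President __ *'): 7}
```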
Guiding Corpus-based Set Expansion by Auxiliary Sets Generation and Co-Expansion
2001.10106
Table 3. Mean Average Precision across all queries on Wiki and APR.
['Methods', '[ITALIC] Wiki MAP@10', '[ITALIC] Wiki MAP@20', '[ITALIC] Wiki MAP@50', '[ITALIC] APR MAP@10', '[ITALIC] APR MAP@20', '[ITALIC] APR MAP@50']
[['CaSE', '0.897', '0.806', '0.588', '0.619', '0.494', '0.330'], ['SetExpander', '0.499', '0.439', '0.321', '0.287', '0.208', '0.120'], ['SetExpan', '0.944', '0.921', '0.720', '0.789', '0.763', '0.639'], ['BERT', '0.970', '0.945', '0.853', '0.890', '0.896', '0.777'], ['Set-CoExpan (no aux.)', '0.964', '0.950', '0.861', '0.900', '0.893', '0.793'], ['Set-CoExpan (no flex.)', '0.973', '0.961', '0.886', '0.927', '0.908', '0.823'], ['Set-CoExpan', '[BOLD] 0.976', '[BOLD] 0.964', '[BOLD] 0.905', '[BOLD] 0.933', '[BOLD] 0.915', '[BOLD] 0.830']]
The results clearly show that our model has the best performance among all the methods on both datasets. Among the baseline methods, BERT is the strongest one, since it uses a very large pre-trained language model to represent word semantics in context, which also explains why we obtain a very large margin over previous iterative methods like SetExpan when we incorporate BERT as embedding-based features. However, BERT itself cannot outperform our model, which implies that modeling distributional similarity alone is not enough for generating accurate expansions. We also observe that Set-CoExpan outperforms the ablations without auxiliary sets or flexgrams, which confirms that auxiliary sets indeed improve the set expansion process by providing clearer discriminative context features for the multiple sets, preventing them from expanding into each other’s semantic territory. The advantage of Set-CoExpan over the ablation without flexgrams also shows that it is necessary to cope with the data sparsity problem by extracting independent units from a fixed local context window; in this way, we relieve the hard-matching constraint of skip-gram features by making them more flexible. What’s more, Set-CoExpan has a larger advantage over all the baselines when the ranking list is longer, which indicates that when the seed set gradually grows and more noise appears, our method is able to steer the direction of expansion and keep out-of-category words from coming in.
Component Analysis for Visual Question Answering Architectures
2002.05104
TABLE II: Experiment using different fusion strategies.
['[ITALIC] Embedding', 'RNN', 'Fusion', 'Training', 'Validation']
[['BERT', 'GRU', 'Mult', '78.28', '[BOLD] 58.75'], ['BERT', 'GRU', 'Concat', '67.85', '55.07'], ['BERT', 'GRU', 'Sum', '68.21', '54.93']]
The best result is obtained using element-wise multiplication. Such an approach functions as a filtering strategy that scales down the importance of irrelevant dimensions of the visual-question feature vectors: vector dimensions with high cross-modal affinity have their magnitudes increased, while uncorrelated ones have their values reduced. Summation provides the worst results overall, closely followed by the concatenation operator. Moreover, among all the fusion strategies used in this study, multiplication seems to ease the training process, as it also presents a much higher training set accuracy (≈11% improvement).
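For concreteness, the three fusion operators compared here amount to the following generic sketch over question and image feature vectors of equal dimensionality (the projection layers and classifier of the actual model are omitted):

```python
import numpy as np

def fuse(q_feat, v_feat, strategy="mult"):
    """Combine question and visual feature vectors of the same dimension."""
    q, v = np.asarray(q_feat), np.asarray(v_feat)
    if strategy == "mult":      # element-wise product: acts as a soft filter,
        return q * v            # damping dimensions with low cross-modal affinity
    if strategy == "sum":
        return q + v
    if strategy == "concat":    # doubles the dimensionality
        return np.concatenate([q, v])
    raise ValueError(strategy)
```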
Component Analysis for Visual Question Answering Architectures
2002.05104
TABLE III: Experiment using different attention mechanisms.
['[ITALIC] Embedding', 'RNN', 'Attention', 'Training', 'Validation']
[['BERT', 'GRU', '-', '78.20', '58.75'], ['BERT', 'GRU', 'Co-Attention', '71.10', '58.54'], ['BERT', 'GRU', 'Co-Attention (L2 norm)', '86.03', '64.03'], ['BERT', 'GRU', 'Top Down', '82.64', '62.37'], ['BERT', 'GRU', 'Top Down ( [ITALIC] σ=ReLU)', '87.02', '[BOLD] 64.12']]
For these experiments we used only element-wise multiplication as the fusion strategy, given that it presented the best performance in our previous experiments. We observe that attention is a crucial mechanism for VQA, leading to an ≈6% accuracy improvement.
Investigating the Effectiveness of Representations Based on Word-Embeddings in Active Learning for Labelling Text DatasetsSupported by organization x.
1910.03505
Table 3: P values and win/draw/lose of pairwise comparison of QBC-based methods.
['[EMPTY]', 'BERT', 'FT', 'FT_T', 'LDA', 'TF-IDF', 'TF']
[['BERT', '[EMPTY]', '6/0/2', '5/0/3', '8/0/0', '8/0/0', '8/0/0'], ['FT', '0.0687', '[EMPTY]', '3/1/4', '8/0/0', '8/0/0', '8/0/0'], ['FT_T', '0.0929', '0.4982', '[EMPTY]', '8/0/0', '7/0/1', '7/0/1'], ['LDA', '[BOLD] 0.0117', '[BOLD] 0.0117', '[BOLD] 0.0117', '[EMPTY]', '0/0/8', '1/0/7'], ['TF-IDF', '[BOLD] 0.0117', '[BOLD] 0.0117', '[BOLD] 0.0173', '[BOLD] 0.0117', '[EMPTY]', '7/0/1'], ['TF', '[BOLD] 0.0117', '[BOLD] 0.0116', '[BOLD] 0.0173', '[BOLD] 0.0251', '[BOLD] 0.0117', '[EMPTY]']]
The table shows the win/draw/lose counts for each pair of representations and the p-value from the corresponding significance test. It demonstrates that all embedding-based methods are significantly different from the methods based on TF, TF-IDF and LDA, with p<0.05. However, the embedding-based methods do not differ significantly from one another. Remarkably, BERT achieves the most wins compared to any other representation.
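A sketch of how such a pairwise comparison table can be produced is shown below; it assumes per-dataset scores for each representation and uses a Wilcoxon signed-rank test, which is an assumption on our part since the paper's exact test is not restated here.

```python
from itertools import combinations
from scipy.stats import wilcoxon

def pairwise_comparison(scores):
    """scores: {representation: [score on dataset 1, dataset 2, ...]}.
    Returns win/draw/lose counts and a p-value for every pair."""
    results = {}
    for a, b in combinations(scores, 2):
        wins = sum(x > y for x, y in zip(scores[a], scores[b]))
        draws = sum(x == y for x, y in zip(scores[a], scores[b]))
        loses = sum(x < y for x, y in zip(scores[a], scores[b]))
        # Paired test over per-dataset scores (assumed Wilcoxon signed-rank;
        # it raises an error if all paired differences are zero).
        p_value = wilcoxon(scores[a], scores[b]).pvalue
        results[(a, b)] = (wins, draws, loses, p_value)
    return results
```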
Abstract
1604.08120
Table .5: CauseRelPro-beta’s micro-averaged scores using different degrees of polynomial kernel.
['[EMPTY]', '[BOLD] P', '[BOLD] R', '[BOLD] F1']
[['2 [ITALIC] nd degree', '0.5031', '0.2516', '0.3354'], ['3 [ITALIC] rd degree', '0.5985', '0.2484', '0.3511'], ['4 [ITALIC] th degree', '0.6337', '0.2013', '0.3055']]
We also conduct experiments using different degrees of the polynomial kernel. Note that even though the best F1-score is achieved with the 3rd-degree polynomial kernel, the best precision of 0.6337 is achieved with degree 4.
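The degree sweep reported here corresponds to fitting an SVM with a polynomial kernel at several degrees; a generic scikit-learn sketch on toy data is given below (the CauseRelPro-beta features and data are not reproduced):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
for degree in (2, 3, 4):
    clf = SVC(kernel="poly", degree=degree)
    f1 = cross_val_score(clf, X, y, cv=5, scoring="f1_micro").mean()
    print(f"degree={degree}  micro-F1={f1:.3f}")
```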
Abstract
1604.08120
Table .6: TempRelPro performances evaluated on the TimeBank-Dense test set and compared with CAEVO.
['[BOLD] System', '[BOLD] T-T [BOLD] P/R/F1', '[BOLD] E-D [BOLD] P/R/F1', '[BOLD] E-T [BOLD] P/R/F1', '[BOLD] E-E [BOLD] P/R/F1', '[BOLD] Overall [BOLD] P', '[BOLD] Overall [BOLD] R', '[BOLD] Overall [BOLD] F1']
[['[BOLD] TempRelPro', '[BOLD] 0.780', '0.518', '[BOLD] 0.556', '0.487', '[BOLD] 0.512', '[BOLD] 0.510', '[BOLD] 0.511'], ['CAEVO', '0.712', '[BOLD] 0.553', '0.494', '[BOLD] 0.494', '0.508', '0.506', '0.507']]
We report the performance of TempRelPro compared with CAEVO. We achieve a small improvement in the overall F1-score, i.e., 51.1% vs 50.7%. For each temporal entity pair type, since we label all possible links, precision and recall are the same. TempRelPro is significantly better than CAEVO in labelling T-T and E-T pairs.
Abstract
1604.08120
Table .9: TempRelPro performances in terms of coverage (Cov), precision (P), recall (R) and F1-score (F1) for all domains, compared with systems in QA TempEval augmented with TREFL.
['[BOLD] System', '[BOLD] Cov', '[BOLD] P', '[BOLD] R', '[BOLD] F1']
[['TempRelPro', '0.53', '0.65', '0.34', '0.45'], ['[BOLD] TempRelPro + coref', '0.53', '0.66', '[BOLD] 0.35', '[BOLD] 0.46'], ['HLT-FBK + trefl', '0.48', '0.61', '0.29', '0.39'], ['HLT-FBK + coref + trefl', '[BOLD] 0.67', '0.51', '0.34', '0.40'], ['HITSZ-ICRC + trefl', '0.15', '0.58', '0.09', '0.15'], ['CAEVO + trefl', '0.36', '0.60', '0.21', '0.32'], ['TIPSemB + trefl', '0.37', '0.64', '0.24', '0.35'], ['TIPSem + trefl', '0.40', '[BOLD] 0.68', '0.27', '0.38']]
The QA TempEval organizers also provide an extra evaluation, augmenting the participating systems with a time expression reasoner (TREFL) as a post-processing step (Llorens et al., 2015). The TREFL component adds TLINKs between timexes based on their resolved values. TempRelPro + coref achieves the best performance, with 35% recall and a 46% F1-score.
Abstract
1604.08120
Table .1: Classifier performances (F1-scores) in different experimental settings S1 and S2, compared with using only traditional features. TP: true positives and FP: false positives.
['[EMPTY]', '[BOLD] Feature vector Traditional features', '[BOLD] Feature vector → [ITALIC] f', '[BOLD] Total 5271', '[BOLD] TP 2717', '[BOLD] FP 2554', '[BOLD] F1 [BOLD] 0.5155']
[['[BOLD] S1', 'GloVe', '(→ [ITALIC] w1⊕→ [ITALIC] w2)', '5271', '2388', '2883', '0.4530'], ['[EMPTY]', '[EMPTY]', '(→ [ITALIC] w1+→ [ITALIC] w2)', '5271', '2131', '3140', '0.4043'], ['[EMPTY]', '[EMPTY]', '(→ [ITALIC] w1−→ [ITALIC] w2)', '5271', '2070', '3201', '0.3927'], ['[EMPTY]', 'Word2Vec', '(→ [ITALIC] w1⊕→ [ITALIC] w2)', '5271', '2609', '2662', '[BOLD] 0.4950'], ['[EMPTY]', '[EMPTY]', '(→ [ITALIC] w1+→ [ITALIC] w2)', '5271', '2266', '3005', '0.4299'], ['[EMPTY]', '[EMPTY]', '(→ [ITALIC] w1−→ [ITALIC] w2)', '5271', '2258', '3013', '0.4284'], ['[BOLD] S2', 'Word2Vec', '((→ [ITALIC] w1⊕→ [ITALIC] w2)⊕→ [ITALIC] f)', '5271', '3036', '2235', '[BOLD] 0.5760'], ['[EMPTY]', '[EMPTY]', '((→ [ITALIC] w1+→ [ITALIC] w2)⊕→ [ITALIC] f)', '5271', '2901', '2370', '0.5504'], ['[EMPTY]', '[EMPTY]', '((→ [ITALIC] w1−→ [ITALIC] w2)⊕→ [ITALIC] f)', '5271', '2887', '2384', '0.5477']]
We report the performance of the classifier in the different experimental settings S1 and S2, compared with the classifier using only traditional features. Since we classify all possible event pairs in the dataset, precision and recall are the same.
Abstract
1604.08120
Table .2: F1-scores per TLINK type with different feature vectors. Pairs of word vectors (→w1, →w2) are retrieved from Word2Vec pre-trained vectors.
['[BOLD] TLINK type', '(→ [ITALIC] w1⊕→ [ITALIC] w2)', '(→ [ITALIC] w1+→ [ITALIC] w2)', '(→ [ITALIC] w1−→ [ITALIC] w2)', '→ [ITALIC] f', '((→ [ITALIC] w1⊕→ [ITALIC] w2)⊕→ [ITALIC] f)', '((→ [ITALIC] w1+→ [ITALIC] w2)⊕→ [ITALIC] f)', '((→ [ITALIC] w1−→ [ITALIC] w2)⊕→ [ITALIC] f)']
[['BEFORE', '[BOLD] 0.6120', '0.5755', '0.5406', '0.6156', '[BOLD] 0.6718', '0.6440', '0.6491'], ['AFTER', '[BOLD] 0.4674', '0.3258', '0.4450', '0.5294', '[BOLD] 0.5800', '0.5486', '0.5680'], ['IDENTITY', '0.5142', '0.4528', '[BOLD] 0.5201', '0.6262', '[BOLD] 0.6650', '0.6456', '0.6479'], ['SIMULTANEOUS', '[BOLD] 0.2571', '0.2375', '0.1809', '0.1589', '0.3056', '[BOLD] 0.3114', '0.1932'], ['INCLUDES', '[BOLD] 0.3526', '0.2348', '0.3278', '0.3022', '[BOLD] 0.4131', '0.3627', '0.3598'], ['IS_INCLUDED', '[BOLD] 0.2436', '0.0769', '0.2268', '0.2273', '[BOLD] 0.3455', '0.3077', '0.2527'], ['BEGINS', '0', '0', '[BOLD] 0.0494', '0.0741', '[BOLD] 0.1071', '0.1000', '0.1053'], ['BEGUN_BY', '0.0513', '0', '[BOLD] 0.1481', '0', '[BOLD] 0.1395', '0.0930', '0.0976'], ['ENDS', '0', '0', '[BOLD] 0.0303', '0', '[BOLD] 0.2000', '0.1500', '0.1500'], ['ENDED_BY', '[BOLD] 0.3051', '0.2540', '0.1982', '0.0727', '[BOLD] 0.2807', '0.2712', '0.0784'], ['[BOLD] Overall', '[BOLD] 0.4950', '[BOLD] 0.4299', '[BOLD] 0.4284', '[BOLD] 0.5155', '[BOLD] 0.5760', '[BOLD] 0.5504', '[BOLD] 0.5477']]
Pairs of word vectors (→w1, →w2) are retrieved from Word2Vec pre-trained vectors. (→w1−→w2) is shown to be the best at identifying the IDENTITY, BEGINS, BEGUN_BY and ENDS relation types, while the rest are best identified by (→w1⊕→w2). Combining (→w1⊕→w2) and →f improves the identification of all TLINK types in general, particularly the BEGINS/BEGUN_BY and ENDS/ENDED_BY types, which were barely identified when (→w1⊕→w2) or →f was used individually as features.
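To make the feature-vector notation concrete, a small sketch of building the (→w1⊕→w2), (→w1+→w2) and (→w1−→w2) variants and their combination with the traditional feature vector →f is given below (the embedding lookup itself, e.g. from pre-trained Word2Vec vectors, is assumed and not shown):

```python
import numpy as np

def pair_features(w1_vec, w2_vec, f_vec=None, mode="concat"):
    """Build the feature vector for an event pair from its two word
    embeddings and, optionally, the traditional feature vector f."""
    w1, w2 = np.asarray(w1_vec), np.asarray(w2_vec)
    if mode == "concat":      # (w1 ⊕ w2)
        feats = np.concatenate([w1, w2])
    elif mode == "sum":       # (w1 + w2)
        feats = w1 + w2
    elif mode == "diff":      # (w1 − w2)
        feats = w1 - w2
    else:
        raise ValueError(mode)
    if f_vec is not None:     # ((w1 ∘ w2) ⊕ f)
        feats = np.concatenate([feats, np.asarray(f_vec)])
    return feats
```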
Abstract
1604.08120
Table .3: Classifier performance per TLINK type with different feature vectors, evaluated on TempEval-3-platinum. Pairs of word vectors (→w1, →w2) are retrieved from Word2Vec pre-trained vectors.
['[BOLD] TLINK type', '(→ [ITALIC] w1⊕→ [ITALIC] w2) [BOLD] P', '(→ [ITALIC] w1⊕→ [ITALIC] w2) [BOLD] R', '(→ [ITALIC] w1⊕→ [ITALIC] w2) [BOLD] F1', '→ [ITALIC] f [BOLD] P', '→ [ITALIC] f [BOLD] R', '→ [ITALIC] f [BOLD] F1', '((→ [ITALIC] w1⊕→ [ITALIC] w2)⊕→ [ITALIC] f) [BOLD] P', '((→ [ITALIC] w1⊕→ [ITALIC] w2)⊕→ [ITALIC] f) [BOLD] R', '((→ [ITALIC] w1⊕→ [ITALIC] w2)⊕→ [ITALIC] f) [BOLD] F1']
[['[BOLD] BEFORE', '0.4548', '0.7123', '0.5551', '0.5420', '0.7381', '[BOLD] 0.6250', '0.5278', '0.7170', '0.6080'], ['[BOLD] AFTER', '0.5548', '0.4649', '0.5059', '0.5907', '0.6196', '[BOLD] 0.6048', '0.6099', '0.6000', '[BOLD] 0.6049'], ['[BOLD] IDENTITY', '0.0175', '0.0667', '0.0278', '0.2245', '0.7333', '0.3438', '0.2444', '0.7333', '[BOLD] 0.3667'], ['[BOLD] SIMULTANEOUS', '0.3529', '0.0732', '0.1212', '0.1667', '0.0370', '0.0606', '0.2308', '0.0732', '[BOLD] 0.1111'], ['[BOLD] INCLUDES', '0.1765', '0.0750', '0.1053', '0.3077', '0.2000', '[BOLD] 0.2424', '0.1852', '0.1250', '0.1493'], ['[BOLD] IS_INCLUDED', '0.4000', '0.0426', '0.0769', '0.3333', '0.0638', '0.1071', '0.3846', '0.1064', '[BOLD] 0.1667'], ['[BOLD] BEGINS', '0', '0', '0', '0', '0', '0', '0', '0', '0'], ['[BOLD] BEGUN_BY', '0', '0', '0', '0', '0', '0', '0', '0', '0'], ['[BOLD] ENDS', '0', '0', '0', '0', '0', '0', '0', '0', '0'], ['[BOLD] ENDED_BY', '0', '0', '0', '0', '0', '0', '0', '0', '0'], ['[BOLD] Overall', '[BOLD] 0.4271', '[BOLD] 0.4271', '[BOLD] 0.4271', '[BOLD] 0.5043', '[BOLD] 0.5043', '[BOLD] 0.5043', '[BOLD] 0.4974', '[BOLD] 0.4974', '[BOLD] 0.4974']]
In general, using only (→w1⊕→w2) as features does not give any benefit, since the performance is significantly worse than using only the traditional features →f, i.e., an F1-score of 0.4271 vs. 0.5043. Combining the word embedding and traditional features ((→w1⊕→w2)⊕→f) also does not improve the classifier performance in general. However, if we look into each TLINK type, the classifier performance in identifying IDENTITY, SIMULTANEOUS and IS_INCLUDED is improved, and quite significantly for SIMULTANEOUS.
Abstract
1604.08120
Table .4: CauseRelPro-beta’s micro-averaged scores.
['[BOLD] System', '[BOLD] P', '[BOLD] R', '[BOLD] F1']
[['mirza-tonelli:2014:Coling', '0.6729', '0.2264', '0.3388'], ['[BOLD] CauseRelPro-beta', '[BOLD] 0.5985', '[BOLD] 0.2484', '[BOLD] 0.3511']]
The main difference between CauseRelPro-beta and the system reported in Mirza and Tonelli (2014) is the elimination of the middle step in which causal signals are identified. This, together with the use of supersenses, contributes to increasing recall. However, using token-based features and having a specific step to label CSIGNALs yield better precision.
Abstract
1604.08120
Table .6: Impact of increased training data, using EMM-clusters and propagated (prop.) CLINKs, on system performances (micro-averaged scores). TP: true positives, FP: false positives and FN: false negatives.
['[EMPTY]', '[BOLD] TP', '[BOLD] FP', '[BOLD] FN', '[BOLD] P', '[BOLD] R', '[BOLD] F1']
[['Causal-TimeBank', '79', '53', '239', '0.5985', '0.2484', '0.3511'], ['Causal-TimeBank + EMM-clusters', '95', '54', '223', '0.6376', '0.2987', '0.4069'], ['[BOLD] Causal-TimeBank + EMM-clusters + prop. CLINKs', '108', '74', '210', '[BOLD] 0.5934', '[BOLD] 0.3396', '[BOLD] 0.4320']]
Adding EMM-clusters to the training data (Causal-TimeBank + EMM-clusters) significantly improves the system's micro-averaged F1-score, by 5.57% (p < 0.01). We then evaluate the system trained on the data further enriched with propagated CLINKs (Causal-TimeBank + EMM-clusters + prop. CLINKs), in the same five-fold cross-validation setting as the previous experiment. The improvement is statistically significant (p < 0.001). As expected, the overall gain comes from the increased recall, even though the system's precision drops.
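As a sanity check, the micro-averaged scores in the table can be reproduced directly from the pooled TP/FP/FN counts with the standard formulas; the sketch below plugs in the numbers from the table.

```python
def micro_prf(tp, fp, fn):
    """Micro-averaged precision, recall and F1 from pooled counts."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)
    return p, r, f1

# Counts taken from the table above.
print(micro_prf(79, 53, 239))   # Causal-TimeBank: ≈ (0.5985, 0.2484, 0.3511)
print(micro_prf(108, 74, 210))  # + EMM-clusters + prop. CLINKs: ≈ (0.5934, 0.3396, 0.4320)
```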
Leveraging Discourse Information Effectively for Authorship AttributionThe first two authors make equal contribution.
1709.02271
Table 7: Macro-averaged F1 score for multi-class author classification on the large datasets, using either no discourse (None), grammatical relations (GR), or RST relations (RST).
['Disc. Type', 'Model', 'novel-50', 'IMDB62']
[['None', 'SVM2', '92.9', '90.4'], ['None', 'CNN2', '95.3', '91.5'], ['GR', 'SVM2-PV', '93.3', '90.4'], ['GR', 'CNN2-PV', '95.1', '90.5'], ['GR', 'CNN2-DE (local)', '96.9', '90.8'], ['GR', 'CNN2-DE (global)', '97.5', '90.9'], ['RST', 'SVM2-PV', '93.8', '90.9'], ['RST', 'CNN2-PV', '95.5', '90.7'], ['RST', 'CNN2-DE (local)', '97.7', '91.4'], ['RST', 'CNN2-DE (global)', '[BOLD] 98.8', '[BOLD] 92.0']]
Generalization-dataset experiments. On novel-50, most discourse-enhanced models improve the performance of the baseline non-discourse CNN2 to varying degrees. The clear pattern again emerges that RST features work better, with the best F1 score evidenced in the CNN2-DE (global) model (3.5 improvement in F1). On IMDB62, as expected with short text inputs (mean=349 words/review), the discourse features in general do not add further contribution. Even the best model CNN2-DE brings only marginal improvement, confirming our findings from varying the chunk size on novel-9, where discourse features did not help at this input size. Equipped with discourse features, SVM2-PV performs slightly better than SVM2 on novel-50 (by 0.4 with GR, 0.9 with RST features). On IMDB62, the same pattern persists for the SVMs: discourse features do not make noticeable improvements (by 0.0 and 0.5 with GR and RST respectively). GR vs. RST. The RST parser produces a tree of discourse relations for the input text, thus introducing a “global view.” The GR features, on the other hand, are more restricted to a “local view” on entities between consecutive sentences. While a deeper empirical investigation is needed, one can intuitively imagine that identifying authorship by focusing on the local transitions between grammatical relations (as in GR) is more difficult than observing how the entire text is organized (as in RST).
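The macro-averaged F1 reported in the table weights every author equally, so rare authors count as much as prolific ones. A minimal scikit-learn sketch of such a score is shown below; the author labels are made up purely for illustration.

```python
from sklearn.metrics import f1_score

# Hypothetical gold and predicted author labels for a 3-author task.
y_true = ["austen", "dickens", "dickens", "tolstoy", "austen", "tolstoy"]
y_pred = ["austen", "dickens", "austen",  "tolstoy", "austen", "dickens"]

# Macro-averaging computes F1 per author and then averages with equal weight.
macro_f1 = f1_score(y_true, y_pred, average="macro")
print(f"{100 * macro_f1:.1f}")
```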
Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces
1805.10190
Table 3: Decoding accuracy of neural networks of different sizes (Word Error Rate, %)
['[BOLD] Model', '[BOLD] dev-clean', '[BOLD] dev-other', '[BOLD] test-clean', '[BOLD] test-other']
[['nnet-256', '7.3', '19.2', '7.6', '19.6'], ['nnet-512', '6.4', '17.1', '6.6', '17.6'], ['nnet-768', '6.4', '16.8', '6.6', '17.5'], ['KALDI', '4.3', '11.2', '4.8', '11.5']]
In the next experiment, neural networks with the same architecture but different layer sizes are trained on the 460x6+500 hours dataset. As expected, larger models are capable of fitting the data and generalizing better. This allows us to choose the best tradeoff between accuracy and computational cost, depending on the target hardware and the needs of each assistant.
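Word Error Rate, the metric in the table, is the word-level edit distance between hypothesis and reference, normalized by the reference length. The following is a minimal sketch of that computation, not the scoring pipeline used in the paper.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / #ref words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sit on mat"))  # 2 errors / 6 words ≈ 0.33
```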
Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces
1805.10190
Table 4: Comparison of speed and memory performance of nnet-256 and nnet-768. RTF refers to real time ratio.
['[BOLD] Model', '[BOLD] Num. Params (M)', '[BOLD] Size (MB)', '[BOLD] RTF (Raspberry Pi 3)']
[['nnet-256', '2.6', '10', '<1'], ['nnet-768', '15.4', '59', '>1']]
The nnet-256 is roughly six times smaller than the nnet-768 on disk, and the gain is similar in RAM. In terms of speed, the nnet-256 is 6 to 10 times faster than the nnet-768. These tradeoffs and the comparison with other trained models led us to select the nnet-256: it has a reasonable speed and memory footprint, and the loss in accuracy is compensated by the adapted LM and robust NLU.
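A quick back-of-the-envelope check, assuming 4-byte float32 parameters, roughly reproduces the reported model sizes. The RTF definition used below (processing time divided by audio duration) is the standard one, and the example timings are made up.

```python
def model_size_mb(num_params_millions: float, bytes_per_param: int = 4) -> float:
    """Approximate in-memory size assuming float32 parameters."""
    return num_params_millions * 1e6 * bytes_per_param / 1e6

def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    """RTF < 1 means the decoder keeps up with the incoming audio."""
    return processing_seconds / audio_seconds

print(model_size_mb(2.6))    # ≈ 10.4 MB, close to the 10 MB reported for nnet-256
print(model_size_mb(15.4))   # ≈ 61.6 MB, close to the 59 MB reported for nnet-768
print(real_time_factor(8.0, 10.0))  # 0.8 -> faster than real time
```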
Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces
1805.10190
Table 6: Precision, recall and F1-score on Braun et al. corpora. *Benchmark run in August 2017 by the authors of bench17 . **Benchmark run in January 2018 by the authors of this paper.
['corpus', 'NLU provider', 'precision', 'recall', 'F1-score']
[['chatbot', 'Luis*', '0.970', '0.918', '0.943'], ['chatbot', 'IBM Watson*', '0.686', '0.8', '0.739'], ['chatbot', 'API.ai*', '0.936', '0.532', '0.678'], ['chatbot', 'Rasa*', '0.970', '0.918', '0.943'], ['chatbot', 'Rasa**', '0.933', '0.921', '0.927'], ['chatbot', 'Snips**', '0.963', '0.899', '0.930'], ['web apps', 'Luis*', '0.828', '0.653', '0.73'], ['web apps', 'IBM Watson*', '0.828', '0.585', '0.686'], ['web apps', 'API.ai*', '0.810', '0.382', '0.519'], ['web apps', 'Rasa*', '0.466', '0.724', '0.567'], ['web apps', 'Rasa**', '0.593', '0.613', '0.603'], ['web apps', 'Snips**', '0.655', '0.655', '0.655'], ['ask ubuntu', 'Luis*', '0.885', '0.842', '0.863'], ['ask ubuntu', 'IBM Watson*', '0.807', '0.825', '0.816'], ['ask ubuntu', 'API.ai*', '0.815', '0.754', '0.783'], ['ask ubuntu', 'Rasa*', '0.791', '0.823', '0.807'], ['ask ubuntu', 'Rasa**', '0.796', '0.768', '0.782'], ['ask ubuntu', 'Snips**', '0.812', '0.828', '0.820'], ['overall', 'Luis*', '0.945', '0.889', '0.916'], ['overall', 'IBM Watson*', '0.738', '0.767', '0.752'], ['overall', 'API.ai*', '0.871', '0.567', '0.687'], ['overall', 'Rasa*', '0.789', '0.855', '0.821'], ['overall', 'Rasa**', '0.866', '0.856', '0.861'], ['overall', 'Snips**', '0.896', '0.858', '0.877']]
For the raw results and methodology, see https://github.com/snipsco/nlu-benchmark. The main metric used in this benchmark is the average F1-score of intent classification and slot filling. The data consists of three corpora: two extracted from StackExchange and one from a Telegram chatbot. The exact same splits as in the original paper were used for the Ubuntu and Web Applications corpora. At the time we ran the evaluation, the train and test splits were not explicit for the Chatbot dataset (although they were added later on), so we ran a 5-fold cross-validation in that case. For Rasa, we considered all three possible backends (Spacy, SKLearn + MITIE, MITIE); see the abovementioned GitHub repository for more details. However, only Spacy was run on all 3 datasets, for training-time reasons. For fairness, the latest version of Rasa NLU is also displayed. Results show that Snips NLU ranks second highest overall.
Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces
1805.10190
Table 7: Precision, recall and F1-score averaged on all slots and on all intents of an in-house dataset, run in June 2017.
['NLU provider', 'train size', 'precision', 'recall', 'F1-score']
[['Luis', '70', '0.909', '0.537', '0.691'], ['Luis', '2000', '0.954', '0.917', '[BOLD] 0.932'], ['Wit', '70', '0.838', '0.561', '0.725'], ['Wit', '2000', '0.877', '0.807', '0.826'], ['API.ai', '70', '0.770', '0.654', '0.704'], ['API.ai', '2000', '0.905', '0.881', '0.884'], ['Alexa', '70', '0.680', '0.495', '0.564'], ['Alexa', '2000', '0.720', '0.592', '0.641'], ['Snips', '70', '0.795', '0.769', '[BOLD] 0.790'], ['Snips', '2000', '0.946', '0.921', '0.930']]
In this experiment, the comparison is done separately on each intent to focus on slot filling (rather than intent classification). The main metric used in this benchmark is the average F1-score of slot filling on all slots. Three training sets of each size (70 and 2000 queries) have been drawn from the total pool of queries to gain statistical relevance. Validation sets consist of 100 queries per intent. Four cloud-based providers are compared to Snips NLU: Microsoft's Luis, API.AI (now Google's Dialogflow), Facebook's Wit.ai, and Amazon Alexa. For more details about the specific methodology for each provider and access to the full dataset, see https://github.com/snipsco/nlu-benchmark. Each solution is trained and evaluated on the exact same datasets. Snips NLU is as accurate as or better than competing cloud-based solutions in slot filling, regardless of the training set size.
Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces
1805.10190
Table 16: Precision, recall and F1-score averaged on all slots in an in-house dataset, run in June 2017.
['intent', 'NLU provider', 'train size', 'precision', 'recall', 'F1-score']
[['SearchCreativeWork', 'Luis', '70', '0.993', '0.746', '0.849'], ['SearchCreativeWork', 'Luis', '2000', '1.000', '0.995', '0.997'], ['SearchCreativeWork', 'Wit', '70', '0.959', '0.569', '0.956'], ['SearchCreativeWork', 'Wit', '2000', '0.974', '0.955', '0.964'], ['SearchCreativeWork', 'API.ai', '70', '0.915', '0.711', '0.797'], ['SearchCreativeWork', 'API.ai', '2000', '1.000', '0.968', '0.983'], ['SearchCreativeWork', 'Alexa', '70', '0.492', '0.323', '0.383'], ['SearchCreativeWork', 'Alexa', '2000', '0.464', '0.375', '0.413'], ['SearchCreativeWork', 'Snips', '70', '0.864', '0.908', '0.885'], ['SearchCreativeWork', 'Snips', '2000', '0.983', '0.976', '0.980'], ['GetWeather', 'Luis', '70', '0.781', '0.271', '0.405'], ['GetWeather', 'Luis', '2000', '0.985', '0.902', '0.940'], ['GetWeather', 'Wit', '70', '0.790', '0.411', '0.540'], ['GetWeather', 'Wit', '2000', '0.847', '0.874', '0.825'], ['GetWeather', 'API.ai', '70', '0.666', '0.513', '0.530'], ['GetWeather', 'API.ai', '2000', '0.826', '0.751', '0.761'], ['GetWeather', 'Alexa', '70', '0.764', '0.470', '0.572'], ['GetWeather', 'Alexa', '2000', '0.818', '0.701', '0.746'], ['GetWeather', 'Snips', '70', '0.791', '0.703', '0.742'], ['GetWeather', 'Snips', '2000', '0.964', '0.926', '0.943'], ['PlayMusic', 'Luis', '70', '0.983', '0.265', '0.624'], ['PlayMusic', 'Luis', '2000', '0.816', '0.737', '0.761'], ['PlayMusic', 'Wit', '70', '0.677', '0.336', '0.580'], ['PlayMusic', 'Wit', '2000', '0.773', '0.518', '0.655'], ['PlayMusic', 'API.ai', '70', '0.549', '0.486', '0.593'], ['PlayMusic', 'API.ai', '2000', '0.744', '0.701', '0.716'], ['PlayMusic', 'Alexa', '70', '0.603', '0.384', '0.464'], ['PlayMusic', 'Alexa', '2000', '0.690', '0.518', '0.546'], ['PlayMusic', 'Snips', '70', '0.546', '0.482', '0.577'], ['PlayMusic', 'Snips', '2000', '0.876', '0.792', '0.823']]
In this experiment, the comparison is done separately on each intent to focus on slot filling (rather than intent classification). The main metric used in this benchmark is the average F1-score of slot filling on all slots. Three training sets of each size (70 and 2000 queries) have been drawn from the total pool of queries to gain statistical relevance. Validation sets consist of 100 queries per intent. Four cloud-based providers are compared to Snips NLU: Microsoft's Luis, API.AI (now Google's Dialogflow), Facebook's Wit.ai, and Amazon Alexa. For more details about the specific methodology for each provider and access to the full dataset, see https://github.com/snipsco/nlu-benchmark. Each solution is trained and evaluated on the exact same datasets. Snips NLU is as accurate as or better than competing cloud-based solutions in slot filling, regardless of the training set size.
Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces
1805.10190
Table 17: Precision, recall and F1-score averaged on all slots in an in-house dataset, run in June 2017.
['intent', 'NLU provider', 'train size', 'precision', 'recall', 'F1-score']
[['AddToPlaylist', 'Luis', '70', '0.759', '0.575', '0.771'], ['AddToPlaylist', 'Luis', '2000', '0.971', '0.938', '0.953'], ['AddToPlaylist', 'Wit', '70', '0.647', '0.478', '0.662'], ['AddToPlaylist', 'Wit', '2000', '0.862', '0.761', '0.799'], ['AddToPlaylist', 'API.ai', '70', '0.830', '0.740', '0.766'], ['AddToPlaylist', 'API.ai', '2000', '0.943', '0.951', '0.947'], ['AddToPlaylist', 'Alexa', '70', '0.718', '0.664', '0.667'], ['AddToPlaylist', 'Alexa', '2000', '0.746', '0.704', '0.724'], ['AddToPlaylist', 'Snips', '70', '0.787', '0.788', '0.785'], ['AddToPlaylist', 'Snips', '2000', '0.914', '0.891', '0.900'], ['RateBook', 'Luis', '70', '0.993', '0.843', '0.887'], ['RateBook', 'Luis', '2000', '1.000', '0.997', '0.999'], ['RateBook', 'Wit', '70', '0.987', '0.922', '0.933'], ['RateBook', 'Wit', '2000', '0.990', '0.950', '0.965'], ['RateBook', 'API.ai', '70', '0.868', '0.830', '0.840'], ['RateBook', 'API.ai', '2000', '0.976', '0.983', '0.979'], ['RateBook', 'Alexa', '70', '0.873', '0.743', '0.798'], ['RateBook', 'Alexa', '2000', '0.867', '0.733', '0.784'], ['RateBook', 'Snips', '70', '0.966', '0.962', '0.964'], ['RateBook', 'Snips', '2000', '0.997', '0.997', '0.997'], ['SearchScreeningEvent', 'Luis', '70', '0.995', '0.721', '0.826'], ['SearchScreeningEvent', 'Luis', '2000', '1.000', '0.961', '0.979'], ['SearchScreeningEvent', 'Wit', '70', '0.903', '0.773', '0.809'], ['SearchScreeningEvent', 'Wit', '2000', '0.849', '0.849', '0.840'], ['SearchScreeningEvent', 'API.ai', '70', '0.859', '0.754', '0.800'], ['SearchScreeningEvent', 'API.ai', '2000', '0.974', '0.959', '0.966'], ['SearchScreeningEvent', 'Alexa', '70', '0.710', '0.515', '0.560'], ['SearchScreeningEvent', 'Alexa', '2000', '0.695', '0.541', '0.585'], ['SearchScreeningEvent', 'Snips', '70', '0.881', '0.840', '0.858'], ['SearchScreeningEvent', 'Snips', '2000', '0.965', '0.971', '0.967'], ['BookRestaurant', 'Luis', '70', '0.859', '0.336', '0.473'], ['BookRestaurant', 'Luis', '2000', '0.906', '0.891', '0.892'], ['BookRestaurant', 'Wit', '70', '0.901', '0.436', '0.597'], ['BookRestaurant', 'Wit', '2000', '0.841', '0.739', '0.736'], ['BookRestaurant', 'API.ai', '70', '0.705', '0.548', '0.606'], ['BookRestaurant', 'API.ai', '2000', '0.874', '0.853', '0.834'], ['BookRestaurant', 'Alexa', '70', '0.598', '0.364', '0.504'], ['BookRestaurant', 'Alexa', '2000', '0.760', '0.575', '0.689'], ['BookRestaurant', 'Snips', '70', '0.727', '0.700', '0.719'], ['BookRestaurant', 'Snips', '2000', '0.919', '0.891', '0.903']]
In this experiment, the comparison is done separately on each intent to focus on slot filling (rather than intent classification). The main metric used in this benchmark is the average F1-score of slot filling on all slots. Three training sets of each size (70 and 2000 queries) have been drawn from the total pool of queries to gain statistical relevance. Validation sets consist of 100 queries per intent. Four cloud-based providers are compared to Snips NLU: Microsoft's Luis, API.AI (now Google's Dialogflow), Facebook's Wit.ai, and Amazon Alexa. For more details about the specific methodology for each provider and access to the full dataset, see https://github.com/snipsco/nlu-benchmark. Each solution is trained and evaluated on the exact same datasets. Snips NLU is as accurate as or better than competing cloud-based solutions in slot filling, regardless of the training set size.
Generating Sequences WithRecurrent Neural Networks
1308.0850
Table 1: Penn Treebank Test Set Results. ‘BPC’ is bits-per-character. ‘Error’ is next-step classification error rate, for either characters or words.
['Input', 'Regularisation', 'Dynamic', 'BPC', 'Perplexity', 'Error (%)', 'Epochs']
[['Char', 'none', 'no', '1.32', '167', '28.5', '9'], ['char', 'none', 'yes', '1.29', '148', '28.0', '9'], ['char', 'weight noise', 'no', '1.27', '140', '27.4', '25'], ['char', 'weight noise', 'yes', '1.24', '124', '26.9', '25'], ['char', 'adapt. wt. noise', 'no', '1.26', '133', '27.4', '26'], ['char', 'adapt. wt. noise', 'yes', '1.24', '122', '26.9', '26'], ['word', 'none', 'no', '1.27', '138', '77.8', '11'], ['word', 'none', 'yes', '1.25', '126', '76.9', '11'], ['word', 'weight noise', 'no', '1.25', '126', '76.9', '14'], ['word', 'weight noise', 'yes', '1.23', '117', '76.2', '14']]
For example, Mikolov records a perplexity of 141 for a 5-gram with Kneser-Ney smoothing, 141.8 for a word-level feedforward neural network, 131.1 for the state-of-the-art compression algorithm PAQ8 and 123.2 for a dynamically evaluated word-level RNN. However, by combining multiple RNNs, a 5-gram and a cache model in an ensemble, he was able to achieve a perplexity of 89.4. Interestingly, the benefit of dynamic evaluation was far more pronounced here than in Mikolov's thesis (he records a perplexity improvement from 124.7 to 123.2 with word-level RNNs). This suggests that LSTM is better at rapidly adapting to new data than ordinary RNNs.
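The character-level and word-level numbers in the table can be related by assuming an average cost per word of BPC × characters-per-word bits. The 5.6 characters per word (including the separating space) used below is a rough figure for Penn Treebank text, assumed here only for illustration.

```python
def bpc_to_word_perplexity(bpc: float, chars_per_word: float = 5.6) -> float:
    """Convert bits-per-character to an approximate word-level perplexity.

    Assumes each word costs bpc * chars_per_word bits on average, where
    chars_per_word includes the separating space.
    """
    return 2 ** (bpc * chars_per_word)

for bpc in (1.32, 1.24):
    print(f"{bpc} BPC -> perplexity ≈ {bpc_to_word_perplexity(bpc):.0f}")
# 1.32 BPC -> ≈ 168 (table reports 167); 1.24 BPC -> ≈ 123 (table reports 122 to 124)
```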
Generating Sequences WithRecurrent Neural Networks
1308.0850
Table 2: Wikipedia Results (bits-per-character)
['Train', 'Validation (static)', 'Validation (dynamic)']
[['1.42', '1.67', '1.33']]
As with the Penn data, we tested the network on the validation data with and without dynamic evaluation (where the weights are updated as the data is predicted). Dynamic evaluation gave a considerably better result (1.33 vs. 1.67 BPC on the validation set). This is probably because of the long-range coherence of Wikipedia data; for example, certain words are much more frequent in some articles than others, and being able to adapt to this during evaluation is advantageous. It may seem surprising that the dynamic results on the validation set were substantially better than on the training set. However, this is easily explained by two factors: firstly, the network underfit the training data, and secondly some portions of the data are much more difficult than others (for example, plain text is harder to predict than XML tags).
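Dynamic evaluation means the network keeps training on the evaluation stream as it predicts it. The sketch below is a minimal PyTorch illustration under several assumptions (a tiny character LSTM, plain SGD, fixed-length chunks); it is not Graves's actual configuration.

```python
import math
import torch
import torch.nn as nn

vocab_size, hidden = 64, 128

class CharLM(nn.Module):
    """Tiny character LM, only to make the sketch runnable."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.emb(x), state)
        return self.out(h), state

def dynamic_eval_bpc(model, stream, chunk=100, lr=1e-3):
    """Measure bits-per-character while adapting the weights on the fly."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss(reduction="sum")
    total_nats, total_chars, state = 0.0, 0, None
    for start in range(0, len(stream) - chunk, chunk):
        x = stream[start:start + chunk].unsqueeze(0)          # current chunk
        y = stream[start + 1:start + chunk + 1].unsqueeze(0)  # next-char targets
        logits, state = model(x, state)
        loss = loss_fn(logits.reshape(-1, vocab_size), y.reshape(-1))
        total_nats += loss.item()                             # score first ...
        total_chars += y.numel()
        opt.zero_grad()
        loss.backward()                                       # ... then adapt
        opt.step()
        state = tuple(s.detach() for s in state)              # truncate BPTT
    return total_nats / total_chars / math.log(2)

model = CharLM()
stream = torch.randint(0, vocab_size, (1000,))                # stand-in text
print(dynamic_eval_bpc(model, stream))
```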
Generating Sequences WithRecurrent Neural Networks
1308.0850
Table 4: Handwriting Synthesis Results. All results recorded on the validation set. ‘Log-Loss’ is the mean value of L(x) in nats. ‘SSE’ is the mean sum-squared-error per data point.
['Regularisation', 'Log-Loss', 'SSE']
[['none', '-1096.9', '0.23'], ['adaptive weight noise', '-1128.2', '0.23']]
The regularised network appears to generate slightly more realistic sequences, although the difference is hard to discern by eye. Both networks performed considerably better than the best prediction network. In particular the sum-squared-error was reduced by 44%. This is likely due in large part to the improved predictions at the ends of strokes, where the error is largest.
Simplified End-to-End MMI training and voting for ASR
1703.10356
Table 1: WER of various models on the WSJ corpus. In [13] the CTC baseline is better than ours (5.48%/9.12% with the same architecture and ext. LM), and the eval92 set is used as a validation set.
['Model', 'LM', 'WER% eval92', 'WER% dev93']
[['CTC, EESEN ', 'Std.', '7.87', '11.39'], ['CTC, ours', 'Std.', '7.66', '11.61'], ['EEMMI bi-gram', 'Std.', '7.37', '[BOLD] 10.85'], ['EEMMI trigram', 'Std.', '[BOLD] 7.05', '11.08'], ['Attention seq2seq ', 'Ext.', '6.7', '9.7'], ['CTC, ours', 'Ext.', '5.87', '9.38'], ['EEMMI bi-gram', 'Ext.', '5.83', '[BOLD] 9.02'], ['EEMMI trigram', 'Ext.', '[BOLD] 5.48', '9.05'], ['CTC, ROVER, 3 models ', 'Ext.', '4.29', '7.65'], ['EEMMI, 3 bi-grams', 'Ext.', '4.61', '[BOLD] 7.34'], ['EEMMI, 2 bi-grams, 1 trigram', 'Ext.', '[BOLD] 4.22', '7.55']]
We consider two decoding LMs: the standard WSJ pruned trigram model (std.) and the extended-vocabulary pruned trigram model (ext.). We compare our end-to-end MMI (EEMMI) model to CTC under the same conditions. We see a consistent improvement in WER for EEMMI trained with a bi-gram LM compared to CTC. It can also be observed that attention models are inferior to phoneme-based CTC and to our method. These well-established alignments enable the use of our simple voting scheme: we use models with the same NN architecture described above, trained separately end-to-end. We note that simple posterior averaging does not work well with CTC, even using models with identical architectures. We perform decoding only once, on the averaged outputs of the models.
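The voting scheme amounts to averaging the frame-level output distributions of several identically structured models and then decoding once. A minimal numpy sketch follows; greedy argmax decoding stands in for the actual decoding used in the paper.

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def vote_and_decode(per_model_logits):
    """Average frame-level posteriors across models, then decode once.

    per_model_logits: list of arrays of shape (num_frames, num_labels),
    one per separately trained model with the same architecture.
    Greedy argmax stands in for the real decoding step.
    """
    posteriors = np.mean([softmax(l) for l in per_model_logits], axis=0)
    return posteriors.argmax(axis=-1)

# Three models' outputs over 5 frames and 4 labels (random for illustration).
rng = np.random.default_rng(0)
outputs = [rng.normal(size=(5, 4)) for _ in range(3)]
print(vote_and_decode(outputs))
```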
Robust Neural Machine Translation with Doubly Adversarial Inputs
1906.02443
Table 4: Results on WMT’14 English-German translation.
['Method', 'Model', 'BLEU']
[['Vaswani et\xa0al.', 'Trans.-Base', '27.30'], ['Vaswani et\xa0al.', 'Trans.-Big', '28.40'], ['Chen et\xa0al.', 'RNMT+', '28.49'], ['Ours', 'Trans.-Base', '28.34'], ['Ours', 'Trans.-Big', '[BOLD] 30.01']]
We compare our approach with the Transformer for different numbers of hidden units (i.e. 1024 and 512) and with a related RNN-based NMT model, RNMT+ (Chen et al.). Recall that our approach is built on top of the Transformer model. The notable gain in BLEU verifies the effectiveness of our approach on English-German translation.
Robust Neural Machine Translation with Doubly Adversarial Inputs
1906.02443
Table 2: Comparison with baseline methods trained on different backbone models (second column). * indicates the method trained using an extra corpus.
['Method', 'Model', 'MT06', 'MT02', 'MT03', 'MT04', 'MT05', 'MT08']
[['Vaswani:17', 'Trans.-Base', '44.59', '44.82', '43.68', '45.60', '44.57', '35.07'], ['Miyato:17', 'Trans.-Base', '45.11', '45.95', '44.68', '45.99', '45.32', '35.84'], ['Sennrich:16c', 'Trans.-Base', '44.96', '46.03', '44.81', '46.01', '45.69', '35.32'], ['Wang:18', 'Trans.-Base', '45.47', '46.31', '45.30', '46.45', '45.62', '35.66'], ['Cheng:18', 'RNMT [ITALIC] lex.', '43.57', '44.82', '42.95', '45.05', '43.45', '34.85'], ['Cheng:18', 'RNMT [ITALIC] feat.', '44.44', '46.10', '44.07', '45.61', '44.06', '34.94'], ['Cheng:18', 'Trans.-Base [ITALIC] feat.', '45.37', '46.16', '44.41', '46.32', '45.30', '35.85'], ['Cheng:18', 'Trans.-Base [ITALIC] lex.', '45.78', '45.96', '45.51', '46.49', '45.73', '36.08'], ['Sennrich:16b*', 'Trans.-Base', '46.39', '47.31', '47.10', '47.81', '45.69', '36.43'], ['Ours', 'Trans.-Base', '46.95', '47.06', '46.48', '47.39', '46.58', '37.38'], ['Ours + BackTranslation*', 'Trans.-Base', '[BOLD] 47.74', '[BOLD] 48.13', '[BOLD] 47.83', '[BOLD] 49.13', '[BOLD] 49.04', '[BOLD] 38.61']]
Among all methods trained without extra corpora, our approach achieves the best result across datasets. After incorporating the back-translated corpus, our method yields an additional gain of 1-3 points over Sennrich et al. Since all methods are built on top of the same backbone, the result substantiates the efficacy of our method on the standard benchmarks that contain natural noise. Compared to Miyato et al.
Robust Neural Machine Translation with Doubly Adversarial Inputs
1906.02443
Table 3: Results on NIST Chinese-English translation.
['Method', 'Model', 'MT06', 'MT02', 'MT03', 'MT04', 'MT05', 'MT08']
[['Vaswani:17', 'Trans.-Base', '44.59', '44.82', '43.68', '45.60', '44.57', '35.07'], ['Ours', 'Trans.-Base', '[BOLD] 46.95', '[BOLD] 47.06', '[BOLD] 46.48', '[BOLD] 47.39', '[BOLD] 46.58', '[BOLD] 37.38']]
We first compare our approach with the Transformer model Vaswani et al. As we see, the introduction of our method to the standard backbone model (Trans.-Base) leads to substantial improvements across the validation and test sets. Specifically, our approach achieves an average gain of 2.25 BLEU points and up to 2.8 BLEU points on NIST03.
Robust Neural Machine Translation with Doubly Adversarial Inputs
1906.02443
Table 9: Effect of the ratio value γsrc and γtrg on Chinese-English Translation.
['[ITALIC] γsrc (rows) \ [ITALIC] γtrg (columns)', '0.00', '0.25', '0.50', '0.75']
[['0.00', '44.59', '46.19', '46.26', '46.14'], ['0.25', '45.23', '46.72', '[BOLD] 46.95', '46.52'], ['0.50', '44.25', '45.34', '45.39', '45.94'], ['0.75', '44.18', '44.98', '45.35', '45.37']]
The hyper-parameters γsrc and γtrg control the ratio of word replacement in the source and target inputs. As we see, the performance is relatively insensitive to the values of these hyper-parameters, and the best configuration on the Chinese-English validation set is obtained at γsrc=0.25 and γtrg=0.50. We found that a non-zero γtrg always yields improvements when compared to the result of γtrg=0. While γsrc=0.25 increases BLEU scores for all the values of γtrg, a larger γsrc seems to be damaging.
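The ratios γsrc and γtrg only determine how many positions are replaced; in the paper the substitute words themselves are chosen adversarially using gradient information, which the sketch below does not reproduce. It merely illustrates the ratio mechanics with random replacements, and the names in it are hypothetical.

```python
import random

def replace_fraction(tokens, gamma, candidates, rng=random):
    """Replace a fraction `gamma` of positions in `tokens`.

    In the paper the substitute at each chosen position is picked
    adversarially; here a random choice from `candidates` stands in,
    since only the role of the replacement ratio is illustrated.
    """
    tokens = list(tokens)
    n_replace = int(round(gamma * len(tokens)))
    for i in rng.sample(range(len(tokens)), n_replace):
        tokens[i] = rng.choice(candidates)
    return tokens

src = "the quick brown fox jumps over the lazy dog".split()
print(replace_fraction(src, gamma=0.25, candidates=["cat", "slow", "red"]))
```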
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis
1911.12569
TABLE IV: Comparison with the state-of-the-art systems proposed by [16] on emotion dataset. The metrics P, R and F stand for Precision, Recall and F1-Score.
['[BOLD] Models', 'Metric', '[BOLD] Emotion Anger', '[BOLD] Emotion Anticipation', '[BOLD] Emotion Disgust', '[BOLD] Emotion Fear', '[BOLD] Emotion Joy', '[BOLD] Emotion Sadness', '[BOLD] Emotion Surprise', '[BOLD] Emotion Trust', '[BOLD] Emotion Micro-Avg']
[['MaxEnt', 'P', '76', '72', '62', '57', '55', '65', '62', '62', '66'], ['MaxEnt', 'R', '72', '61', '47', '31', '50', '65', '15', '38', '52'], ['MaxEnt', 'F', '74', '66', '54', '40', '52', '65', '24', '47', '58'], ['SVM', 'P', '76', '70', '59', '55', '52', '64', '46', '57', '63'], ['SVM', 'R', '69', '60', '53', '40', '52', '60', '22', '45', '53'], ['SVM', 'F', '72', '64', '56', '46', '52', '62', '30', '50', '58'], ['LSTM', 'P', '76', '68', '64', '51', '56', '60', '40', '57', '62'], ['LSTM', 'R', '77', '68', '68', '48', '41', '77', '17', '49', '60'], ['LSTM', 'F', '76', '67', '65', '49', '46', '67', '21', '51', '61'], ['BiLSTM', 'P', '77', '70', '61', '58', '54', '62', '42', '59', '64'], ['BiLSTM', 'R', '77', '66', '64', '43', '59', '72', '20', '44', '60'], ['BiLSTM', 'F', '77', '[BOLD] 68', '63', '49', '56', '67', '27', '50', '62'], ['CNN', 'P', '77', '68', '62', '53', '54', '63', '36', '53', '62'], ['CNN', 'R', '77', '60', '61', '46', '56', '72', '24', '49', '59'], ['CNN', 'F', '77', '64', '62', '49', '55', '67', '[BOLD] 28', '50', '60'], ['E2 (proposed)', 'P', '81', '74', '70', '66', '64', '67', '68', '68', '71'], ['E2 (proposed)', 'R', '83', '62', '74', '42', '59', '81', '13', '49', '63'], ['E2 (proposed)', 'F', '[BOLD] 82', '[BOLD] 68', '[BOLD] 72', '[BOLD] 51', '[BOLD] 62', '[BOLD] 73', '22', '[BOLD] 57', '[BOLD] 67']]
Overall, our proposed system achieves an improvement of 5 F-Score points over the existing state-of-the-art system for emotion analysis. Individually, the proposed system improves the existing F-scores for all the emotions except surprise. This could be attributed to the data scarcity and a very low agreement between the annotators for the emotion surprise.
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis
1911.12569
TABLE III: Comparison with the state-of-the-art systems of SemEval 2016 task 6 on sentiment dataset.
['[BOLD] Models', '[BOLD] Sentiment (F-score)']
[['UWB ', '42.02'], ['INF-UFRGS-OPINION-MINING ', '42.32'], ['LitisMind', '44.66'], ['pkudblab ', '56.28'], ['SVM + n-grams + sentiment ', '78.90'], ['M2 (proposed)', '[BOLD] 82.10']]
Our system comfortably surpasses the existing best systems at SemEval, improving on the best reported result of SemEval 2016 Task 6 by 3.2 F-score points for sentiment analysis.
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis
1911.12569
TABLE V: Confusion matrix for sentiment analysis
['Actual', 'Predicted negative', 'Predicted positive']
[['negative', '1184', '88'], ['positive', '236', '325']]
We perform quantitative error analysis for both sentiment and emotion for the M2 model. This may be because this particular class is the most underrepresented in the training set. These three emotions have the smallest share of training instances, making the system less confident about them.
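Per-class precision, recall and F1 follow directly from the confusion matrix above; the sketch below plugs in the table's counts for the positive sentiment class.

```python
def prf_from_confusion(tp, fp, fn):
    """Precision, recall and F1 for one class of a confusion matrix."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)

# Positive sentiment, from the matrix above:
# TP = 325 (positive predicted positive), FP = 88 (negative predicted positive),
# FN = 236 (positive predicted negative).
print(prf_from_confusion(tp=325, fp=88, fn=236))  # ≈ (0.787, 0.579, 0.667)
```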
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis
1911.12569
TABLE V: Confusion matrix for sentiment analysis
['Actual', 'Predicted NO', 'Predicted YES']
[['NO', '388', '242'], ['YES', '201', '1002']]
We perform quantitative error analysis for both sentiment and emotion for the M2 model. This may be because this particular class is the most underrepresented in the training set. These three emotions have the smallest share of training instances, making the system less confident about them.
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis
1911.12569
TABLE V: Confusion matrix for sentiment analysis
['Actual', 'Predicted NO', 'Predicted YES']
[['NO', '445', '249'], ['YES', '433', '706']]
We perform quantitative error analysis for both sentiment and emotion for the M2 model. This may be because this particular class is the most underrepresented in the training set. These three emotions have the smallest share of training instances, making the system less confident about them.