Dataset schema (one record per table): paper (string, 0-839 chars), paper_id (string, 1-12 chars), table_caption (string, 3-2.35k chars), table_column_names (large string, 13-1.76k chars), table_content_values (large string, 2-11.9k chars), text (large string, 69-2.82k chars).
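The six fields above constitute one record per table. As a rough illustration only, records in this shape could be loaded and inspected as below, assuming the data is stored as JSON lines in a hypothetical file tables.jsonl and that the list-valued fields are stringified Python lists (both are assumptions, not a documented loading recipe):

```python
import json
import ast

# Hypothetical path; the actual storage format of this dataset may differ.
with open("tables.jsonl", "r", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

for rec in records[:3]:
    # Column names and cell values are assumed to be stringified Python lists.
    columns = ast.literal_eval(rec["table_column_names"])
    rows = ast.literal_eval(rec["table_content_values"])
    print(rec["paper_id"], rec["table_caption"][:60])
    print(len(columns), "columns,", len(rows), "rows")
    print(rec["text"][:80], "...")
```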
A FOFE-based Local Detection Approach for Named Entity Recognition and Mention Detection
1611.00801
Table 5: Official entity discovery performance of our methods on KBP2016 trilingual EDL track.
['LANG', 'NAME P', 'NAME R', 'NAME F1', 'NOMINAL P', 'NOMINAL R', 'NOMINAL F1', 'OVERALL P', 'OVERALL R', 'OVERALL F1']
[['[EMPTY]', 'RUN1 (our official ED result in KBP2016 EDL2)', 'RUN1 (our official ED result in KBP2016 EDL2)', 'RUN1 (our official ED result in KBP2016 EDL2)', 'RUN1 (our official ED result in KBP2016 EDL2)', 'RUN1 (our official ED result in KBP2016 EDL2)', 'RUN1 (our official ED result in KBP2016 EDL2)', 'RUN1 (our official ED result in KBP2016 EDL2)', 'RUN1 (our official ED result in KBP2016 EDL2)', 'RUN1 (our official ED result in KBP2016 EDL2)'], ['ENG', '0.898', '0.789', '0.840', '0.554', '0.336', '0.418', '0.836', '0.680', '0.750'], ['CMN', '0.848', '0.702', '0.768', '0.414', '0.258', '0.318', '0.789', '0.625', '0.698'], ['SPA', '0.835', '0.778', '0.806', '0.000', '0.000', '0.000', '0.835', '0.602', '0.700'], ['ALL', '0.893', '0.759', '0.821', '0.541', '0.315', '0.398', '0.819', '0.639', '[BOLD] 0.718'], ['[EMPTY]', 'RUN3 (system fusion of RUN1 with the best system in )', 'RUN3 (system fusion of RUN1 with the best system in )', 'RUN3 (system fusion of RUN1 with the best system in )', 'RUN3 (system fusion of RUN1 with the best system in )', 'RUN3 (system fusion of RUN1 with the best system in )', 'RUN3 (system fusion of RUN1 with the best system in )', 'RUN3 (system fusion of RUN1 with the best system in )', 'RUN3 (system fusion of RUN1 with the best system in )', 'RUN3 (system fusion of RUN1 with the best system in )'], ['ENG', '0.857', '0.876', '0.866', '0.551', '0.373', '0.444', '0.804', '0.755', '0.779'], ['CMN', '0.790', '0.839', '0.814', '0.425', '0.380', '0.401', '0.735', '0.760', '0.747'], ['SPA', '0.790', '0.877', '0.831', '0.000', '0.000', '0.000', '0.790', '0.678', '0.730'], ['ALL', '0.893', '0.759', '0.821', '0.541', '0.315', '0.398', '0.774', '[BOLD] 0.735', '[BOLD] 0.754']]
After fixing some system bugs, we used both the KBP2015 data and the iFLYTEK data to re-train our models for the three languages and submitted three systems to the final KBP2016 EDL2 evaluation. In our systems, we treat all nominal mentions as special types of named entities, and both named and nominal entities are recognized with a single model. Here we break down the system performance by language and by entity category (named or nominal). In RUN1, we submitted our best NER system, which achieves about 0.718 F1 in the KBP2016 trilingual EDL track, a very strong performance among all KBP2016 participating teams. With system fusion (RUN3), the overall trilingual F1 score improves to 0.754. It is worth noting that we obtain a fairly high recall of about 0.735 after the system combination because the NER methods used by the two systems are quite complementary.
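The overall F1 values in Table 5 are consistent with the standard harmonic mean of precision and recall; a quick sanity check, assuming F1 = 2PR/(P+R):

```python
def f1(p, r):
    return 2 * p * r / (p + r)

# OVERALL precision/recall of the "ALL" rows for RUN1 and RUN3 from Table 5.
print(round(f1(0.819, 0.639), 3))  # ~0.718, matches the RUN1 overall F1
print(round(f1(0.774, 0.735), 3))  # ~0.754, matches the RUN3 overall F1
```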
A FOFE-based Local Detection Approach for Named Entity Recognition and Mention Detection
1611.00801
Table 3: Entity Discovery Performance of our method on the KBP2015 EDL evaluation data, with comparison to the best system in KBP2015 official evaluation.
['[EMPTY]', '2015 track best [ITALIC] P', '2015 track best [ITALIC] R', '2015 track best [ITALIC] F1', 'ours [ITALIC] P', 'ours [ITALIC] R', 'ours [ITALIC] F1']
[['Trilingual', '75.9', '69.3', '72.4', '78.3', '69.9', '[BOLD] 73.9'], ['English', '79.2', '66.7', '[BOLD] 72.4', '77.1', '67.8', '72.2'], ['Chinese', '79.2', '74.8', '[BOLD] 76.9', '79.3', '71.7', '75.3'], ['Spanish', '78.4', '72.2', '75.2', '79.9', '71.8', '[BOLD] 75.6']]
The overall trilingual entity discovery performance is slightly better than that of the best system that participated in the official KBP2015 evaluation, with F1 scores of 73.9 vs. 72.4.
A FOFE-based Local Detection Approach for Named Entity Recognition and Mention Detection
1611.00801
Table 4: Entity discovery performance (English only) in KBP2016 EDL1 evaluation window is shown as a comparison of three models trained by different combinations of training data sets.
['training data', 'P', 'R', '[ITALIC] F1']
[['KBP2015', '0.818', '0.600', '0.693'], ['KBP2015 + WIKI', '0.859', '0.601', '0.707'], ['KBP2015 + iFLYTEK', '0.830', '0.652', '[BOLD] 0.731']]
In our first set of experiments, we investigate the effect of different training data sets on the final entity discovery performance, conducting separate training runs on different combinations of the aforementioned data sources. The first system, trained only on the KBP2015 data, achieves an F1 score of 0.693 on the official KBP2016 English evaluation data. After adding the weakly labelled data (WIKI), the entity discovery performance improves to 0.707 F1. Finally, training on the KBP2015 data together with the iFLYTEK in-house data yields the best performance, an F1 score of 0.731.
Learning from Explanations with Neural Execution Tree
1911.01352
Table 2: Experiment results on Relation Extraction and Sentiment Analysis. Average and standard deviation of F1 scores (%) over multiple runs are reported (5 runs for RE and 10 runs for SA). LF(E) denotes directly applying logical forms onto explanations. Bracket behind each method illustrates corresponding data used in the method. S denotes training data without labels, E denotes explanations, R denotes surface pattern rules transformed from explanations; Sa denotes labeled data annotated with explanations, Su denotes the remaining unlabeled data. Sl denotes labeled data annotated using same time as creating explanations E, Slu denotes remaining unlabeled data corresponding to Sl.
['[EMPTY]', 'TACRED', 'SemEval']
[['LF (E)', '23.33', '33.86'], ['CBOW-GloVe (R+S)', '34.6±0.4', '48.8±1.1'], ['PCNN (S [ITALIC] a)', '34.8±0.9', '41.8±1.2'], ['PA-LSTM (S [ITALIC] a)', '41.3±0.8', '57.3±1.5'], ['BiLSTM+ATT (S [ITALIC] a)', '41.4±1.0', '58.0±1.6'], ['BiLSTM+ATT (S [ITALIC] l)', '30.4±1.4', '54.1±1.0'], ['Self Training (S [ITALIC] a+S [ITALIC] u)', '41.7±1.5', '55.2±0.8'], ['Pseudo Labeling (S [ITALIC] a+S [ITALIC] u)', '41.5±1.2', '53.5±1.2'], ['Mean Teacher (S [ITALIC] a+S [ITALIC] u)', '40.8±0.9', '56.0±1.1'], ['Mean Teacher (S [ITALIC] l+S [ITALIC] lu)', '25.9±2.2', '52.2±0.7'], ['DualRE (S [ITALIC] a+S [ITALIC] u)', '32.6±0.7', '61.7±0.9'], ['Data Programming (E+S)', '30.8±2.4', '43.9±2.4'], ['NExT (E+S)', '[BOLD] 45.6±0.4', '[BOLD] 63.5±1.0']]
We observe that our proposed NExT consistently outperforms all baseline models in the low-resource setting. We also find that: (1) directly applying logical forms to unlabeled data results in poor performance. This method achieves high precision but low recall. Based on our observation of the collected dataset, this is because people tend to use detailed and specific constraints in an NL explanation to ensure they cover all aspects of the instance. As a result, instances that satisfy the constraints are correctly labeled in most cases, so the precision is high; meanwhile, generalization ability is compromised, so the recall is low. (2) Compared to its downstream classifier baseline (BiLSTM+ATT with Sa), NExT achieves a 4.2% absolute F1 improvement on TACRED and 5.5% on SemEval. This validates that the expansion of rule coverage by NExT is effective and provides useful information for classifier training. (3) The performance gap widens further when we take annotation effort into account. The annotation time for E and Sl is equivalent, but the performance of BiLSTM+ATT degrades significantly with the fewer instances in Sl. (4) Results of semi-supervised methods are unsatisfactory. This may be explained by the difference between the underlying data distributions of Sa and Su.
Learning from Explanations with Neural Execution Tree
1911.01352
Table 2: Experiment results on Relation Extraction and Sentiment Analysis. Average and standard deviation of F1 scores (%) over multiple runs are reported (5 runs for RE and 10 runs for SA). LF(E) denotes directly applying logical forms onto explanations. Bracket behind each method illustrates corresponding data used in the method. S denotes training data without labels, E denotes explanations, R denotes surface pattern rules transformed from explanations; Sa denotes labeled data annotated with explanations, Su denotes the remaining unlabeled data. Sl denotes labeled data annotated using same time as creating explanations E, Slu denotes remaining unlabeled data corresponding to Sl.
['[EMPTY]', 'Restaurant', 'Laptop']
[['LF (E)', '7.7', '13.1'], ['CBOW-GloVe (R+S)', '68.5±2.9', '61.5±1.3'], ['PCNN (S [ITALIC] a)', '72.6±1.2', '60.9±1.1'], ['ATAE-LSTM (S [ITALIC] a)', '71.1±0.4', '56.2±3.6'], ['ATAE-LSTM (S [ITALIC] l)', '71.4±0.5', '52.0±1.4'], ['Self Training (S [ITALIC] a+S [ITALIC] u)', '71.2±0.5', '57.6±2.1'], ['Pseudo Labeling (S [ITALIC] a+ [ITALIC] Su)', '70.9±0.4', '58.0±1.9'], ['Mean Teacher (S [ITALIC] a+S [ITALIC] u)', '72.0±1.5', '62.1±2.3'], ['Mean Teacher (S [ITALIC] l+S [ITALIC] lu)', '74.1±0.4', '61.7±3.7'], ['Data Programming (E+S)', '71.2±0.0', '61.5±0.1'], ['NExT (E+S)', '[BOLD] 75.8±0.8', '[BOLD] 62.8±1.9']]
We observe that our proposed NExT consistently outperforms all baseline models in the low-resource setting. We also find that: (1) directly applying logical forms to unlabeled data results in poor performance. This method achieves high precision but low recall. Based on our observation of the collected dataset, this is because people tend to use detailed and specific constraints in an NL explanation to ensure they cover all aspects of the instance. As a result, instances that satisfy the constraints are correctly labeled in most cases, so the precision is high; meanwhile, generalization ability is compromised, so the recall is low. (2) Compared to its downstream classifier baseline (BiLSTM+ATT with Sa), NExT achieves a 4.2% absolute F1 improvement on TACRED and 5.5% on SemEval. This validates that the expansion of rule coverage by NExT is effective and provides useful information for classifier training. (3) The performance gap widens further when we take annotation effort into account. The annotation time for E and Sl is equivalent, but the performance of BiLSTM+ATT degrades significantly with the fewer instances in Sl. (4) Results of semi-supervised methods are unsatisfactory. This may be explained by the difference between the underlying data distributions of Sa and Su.
Learning from Explanations with Neural Execution Tree
1911.01352
Table 4: Performance of NLProlog when extracted facts are used as input. Average accuracy over 3 runs is reported. NLProlog empowered by 21 natural language explanations and 5 hand-written rules achieves 1% gain in accuracy.
['[EMPTY]', '|S [ITALIC] a|', '|S [ITALIC] u|', 'Accuracy']
[['NLProlog (published code)', '0', '0', '74.57'], ['+ S [ITALIC] a', '103', '0', '74.40'], ['+ S [ITALIC] u (confidence >0.3)', '103', '340', '74.74'], ['+ S [ITALIC] u (confidence >0.2)', '103', '577', '75.26'], ['+ S [ITALIC] u (confidence >0.1)', '103', '832', '[BOLD] 75.60']]
From the results we observe that simply adding the 103 strictly-matched facts does not yield a notable improvement. However, with the help of NExT, a larger number of structured facts are recognized from support sentences, so that external knowledge from only 21 explanations and 5 rules improves the accuracy by 1 point. This observation validates NExT's capability in the low-resource setting and highlights its potential when applied to downstream tasks.
Learning from Explanations with Neural Execution Tree
1911.01352
Table 8: TACRED results on 130 explanations and 100 explanations
['Metric', 'TACRED 130 Precision', 'TACRED 130 Recall', 'TACRED 130 F1', 'TACRED 100 Precision', 'TACRED 100 Recall', 'TACRED 100 F1']
[['LF (E)', '[BOLD] 83.5', '12.8', '22.2', '[BOLD] 85.2', '11.8', '20.7'], ['CBOW-GloVe (R+S)', '26.0±2.3', '[BOLD] 39.9±5.0', '31.2±0.5', '24.4±1.3', '[BOLD] 41.7±3.7', '30.7±0.1'], ['PCNN (S [ITALIC] a)', '41.8±2.7', '28.8±1.8', '34.1±1.1', '28.2±3.4', '22.2±1.3', '24.8±1.9'], ['PA-LSTM (S [ITALIC] a)', '44.9±1.7', '33.5±2.9', '38.3±1.3', '39.9±2.1', '38.2±1.1', '39.0±1.3'], ['BiLSTM+ATT (S [ITALIC] a)', '40.1±2.6', '36.2±3.4', '37.9±1.1', '36.1±0.4', '37.6±3.0', '36.8±1.4'], ['BiLSTM+ATT (S [ITALIC] l)', '35.0±9.0', '25.4±1.6', '28.9±2.7', '43.3±2.2', '23.1±3.3', '30.0±3.1'], ['Self Training (S [ITALIC] a+S [ITALIC] u)', '43.6±3.3', '35.1±2.1', '38.7±0.0', '41.9±5.9', '32.0±7.4', '35.5±2.5'], ['Pseudo Labeling (S [ITALIC] a+S [ITALIC] u)', '44.2±1.9', '34.2±1.9', '38.5±0.6', '39.7±2.0', '34.9±3.3', '37.1±1.5'], ['Mean Teacher (S [ITALIC] a+S [ITALIC] u)', '38.8±0.9', '35.6±1.3', '37.1±0.5', '37.4±4.0', '37.4±0.2', '37.3±2.0'], ['Mean Teacher (S [ITALIC] l+S [ITALIC] lu)', '21.1±3.3', '28.7±1.8', '24.2±1.8', '17.5±4.7', '18.4±.59', '17.9±5.0'], ['DualRE (S [ITALIC] a+S [ITALIC] u)', '34.9±3.6', '30.5±2.3', '32.3±1.0', '40.6±4.3', '19.1±1.5', '25.9±0.6'], ['Data Programming (E+S)', '34.3±16.1', '18.7±1.4', '23.5±4.9', '43.5±2.3', '15.0±2.3', '22.2±2.4'], ['NEXT (E+S)', '45.3±2.4', '39.2±0.3', '[BOLD] 42.0±1.1', '43.9±3.7', '36.2±1.9', '[BOLD] 39.6±0.5']]
As a supplement to the corresponding figure, the results show that our model achieves the best performance compared with the baseline methods.
Learning from Explanations with Neural Execution Tree
1911.01352
Table 9: SemEval results on 150 explanations and 100 explanations
['Metric', 'SemEval 150 Precision', 'SemEval 150 Recall', 'SemEval 150 F1', 'SemEval 100 Precision', 'SemEval 100 Recall', 'SemEval 100 F1']
[['LF (E)', '[BOLD] 85.1', '17.2', '28.6', '[BOLD] 90.7', '9.0', '16.4'], ['CBOW-GloVe (R+S)', '44.8±1.9', '48.6±1.5', '46.6±1.1', '36.0±1.4', '40.2±2.0', '37.9±0.1'], ['PCNN (S [ITALIC] a)', '49.1±3.9', '36.1±2.4', '41.5±1.4', '43.3±1.4', '27.9±1.0', '33.9±0.3'], ['PA-LSTM (S [ITALIC] a)', '58.0±1.2', '52.5±0.4', '55.1±0.5', '55.2±1.7', '37.7±0.8', '44.8±0.8'], ['BiLSTM+ATT (S [ITALIC] a)', '59.2±0.4', '53.7±1.8', '56.3±0.8', '54.9±5.0', '40.5±0.9', '46.5±1.3'], ['BiLSTM+ATT (S [ITALIC] l)', '47.6±2.6', '42.0±2.3', '44.6±2.5', '43.7±2.6', '37.6±5.0', '40.3±3.7'], ['Self Training (S [ITALIC] a+S [ITALIC] u)', '53.4±4.3', '47.5±2.9', '50.1±1.1', '53.2±2.3', '34.2±2.2', '41.6±1.4'], ['Pseudo Labeling (S [ITALIC] a+S [ITALIC] u)', '55.3±4.5', '51.0±2.3', '53.0±1.5', '47.4±4.6', '39.9±3.9', '43.1±0.6'], ['Mean Teacher (S [ITALIC] a+S [ITALIC] u)', '61.8±4.0', '49.1±2.6', '54.6±0.2', '58.5±1.9', '41.8±2.6', '48.7±1.4'], ['Mean Teacher (S [ITALIC] l+S [ITALIC] lu)', '40.6±2.0', '31.2±4.5', '35.2±3.6', '32.7±3.0', '25.6±3.1', '28.6±2.2'], ['DualRE (S [ITALIC] a+S [ITALIC] u)', '61.7±3.0', '56.1±3.0', '58.8±3.0', '61.6±1.7', '39.7±1.9', '48.3±1.5'], ['Data Programming (E+S)', '50.9±10.8', '27.0±0.8', '35.0±3.2', '28.0±4.1', '17.4±5.5', '21.0±3.4'], ['NEXT (E+S)', '68.5±1.6', '[BOLD] 60.0±1.7', '[BOLD] 63.7±0.8', '60.2±1.8', '[BOLD] 53.5±0.7', '[BOLD] 56.7±1.1']]
As a supplement to the corresponding figure, the results show that our model achieves the best performance compared with the baseline methods.
Learning from Explanations with Neural Execution Tree
1911.01352
Table 10: Laptop results on 55 explanations and 70 explanations
['Metric', 'Laptop 55 Precision', 'Laptop 55 Recall', 'Laptop 55 F1', 'Laptop 70 Precision', 'Laptop 70 Recall', 'Laptop 70 F1']
[['LF (E)', '[BOLD] 90.8', '9.2', '16.8', '[BOLD] 89.4', '9.2', '16.8'], ['CBOW-GloVe (R+S)', '53.7±0.2', '72.9±0.2', '61.8±0.2', '53.6±0.3', '72.4±0.2', '61.6±0.2'], ['PCNN (S [ITALIC] a)', '53.5±3.3', '71.0±3.6', '61.0±3.2', '55.6±1.9', '74.1±1.9', '63.5±1.5'], ['ATAE-LSTM (S [ITALIC] a)', '53.5±0.4', '71.9±2.2', '61.3±1.0', '53.7±1.2', '72.9±1.8', '61.9±1.5'], ['ATAE-LSTM (S [ITALIC] l)', '48.3±1.0', '59.5±5.0', '53.2±2.2', '54.1±1.4', '61.1±3.0', '57.4±2.1'], ['Self Training (S [ITALIC] a+S [ITALIC] u)', '51.3±2.6', '68.6±2.7', '58.7±2.6', '51.2±1.4', '68.6±2.2', '58.7±1.6'], ['Pseudo Labeling (S [ITALIC] a+S [ITALIC] u)', '51.8±1.7', '70.3±2.3', '59.7±1.9', '52.4±0.8', '70.9±1.5', '60.3±1.0'], ['Mean Teacher (S [ITALIC] a+S [ITALIC] u)', '55.1±0.9', '74.1±1.6', '63.2±1.1', '55.9±3.3', '73.0±2.6', '63.2±1.7'], ['Mean Teacher (S [ITALIC] l+S [ITALIC] lu)', '55.5±2.5', '69.3±2.8', '61.6±2.2', '58.0±0.7', '73.2±1.5', '64.7±1.0'], ['Data Programming (E+S)', '53.4±0.0', '72.6±0.0', '61.5±0.0', '53.5±0.1', '72.5±0.1', '61.6±0.1'], ['NEXT (E+S)', '56.3±1.3', '[BOLD] 75.9±2.5', '[BOLD] 64.6±1.7', '56.9±0.2', '[BOLD] 77.1±0.6', '[BOLD] 65.5±0.3']]
As a supplement to the corresponding figure, the results show that our model achieves the best performance compared with the baseline methods.
Learning from Explanations with Neural Execution Tree
1911.01352
Table 11: Restaurant results on 60 explanations and 75 explanations
['Metric', 'Restaurant 60 Precision', 'Restaurant 60 Recall', 'Restaurant 60 F1', 'Restaurant 75 Precision', 'Restaurant 75 Recall', 'Restaurant 75 F1']
[['LF (E)', '[BOLD] 86.0', '3.8', '7.4', '[BOLD] 85.4', '6.8', '12.6'], ['CBOW-GloVe (R+S)', '63.7±2.3', '75.6±1.3', '69.1±1.9', '64.1±1.3', '76.6±0.1', '69.8±0.7'], ['PCNN (S [ITALIC] a)', '67.0±0.9', '81.0±1.0', '73.3±0.9', '68.4±0.1', '[BOLD] 82.8±0.3', '74.9±0.2'], ['ATAE-LSTM (S [ITALIC] a)', '65.2±0.6', '78.5±0.2', '71.2±0.3', '64.7±0.4', '78.3±0.4', '70.8±0.4'], ['ATAE-LSTM (S [ITALIC] l)', '67.0±1.5', '79.5±1.2', '72.7±1.0', '66.6±2.0', '78.5±1.4', '72.1±0.6'], ['Self Training (S [ITALIC] a+S [ITALIC] u)', '65.2±0.2', '78.7±0.5', '71.3±0.2', '65.7±1.1', '77.2±1.1', '71.0±0.1'], ['Pseudo Labeling (S [ITALIC] a+S [ITALIC] u)', '64.9±0.6', '77.8±1.0', '70.8±0.3', '64.9±0.9', '77.8±1.2', '70.7±1.0'], ['Mean Teacher (S [ITALIC] a+S [ITALIC] u)', '68.8±2.3', '76.0±2.2', '72.2±1.3', '73.3±3.5', '79.2±3.8', '76.0±1.2'], ['Mean Teacher (S [ITALIC] l+S [ITALIC] lu)', '69.0±0.8', '82.0±1.1', '74.9±0.7', '69.2±0.7', '82.6±0.6', '75.3±0.6'], ['Data Programming (E+S)', '65.0±0.0', '78.8±0.1', '71.2±0.0', '65.0±0.0', '78.8±0.0', '71.2±0.0'], ['NEXT (E+S)', '71.0±1.4', '[BOLD] 82.8±1.1', '[BOLD] 76.4±0.4', '71.9±1.5', '[BOLD] 82.8±1.9', '[BOLD] 76.9±0.7']]
As a supplement to the corresponding figure, the results show that our model achieves the best performance compared with the baseline methods.
Learning from Explanations with Neural Execution Tree
1911.01352
Table 12: BERT experiments on Restaurant dataset using 45 and 75 explanations
['[EMPTY]', '45', '75']
[['ATAE-LSTM (S [ITALIC] a)', '79.9', '80.6'], ['Self Training (S [ITALIC] a+S [ITALIC] u)', '80.9', '81.1'], ['Pseudo Labeling (S [ITALIC] a+S [ITALIC] u)', '78.7', '81.0'], ['Mean Teacher (S [ITALIC] a+S [ITALIC] u)', '79.3', '79.8'], ['NExT (E+S)', '[BOLD] 81.4', '[BOLD] 82.0']]
Our framework is model-agnostic, as it can be integrated with any downstream classifier. Results show that our model still outperforms the baseline methods when BERT is incorporated. We observe that the performance of NExT approaches the upper bound of 85% (obtained by feeding all data to BERT) with only 75 explanations, which again demonstrates the annotation efficiency of NExT.
Better Early than Late: Fusing Topics with Word Embeddingsfor Neural Question Paraphrase Identification
2007.11314
Table 3: Ablation study for our TAPA model reporting F1 scores on test sets.
['[EMPTY]', 'PAWS', 'Quora', 'Sem-Eval']
[['full TAPA (early fusion)', '42.2', '84.1', '46.4'], ['-topics', '40.6', '83.9', '45.1'], ['-ELMO', '26.9', '84.5', '45.0'], ['TAPA with late fusion', '39.8', '83.9', '40.1']]
Removing topics consistently reduces F1 scores on all datasets, while the effect of ELMo representations is dataset dependent. Deleting ELMo improves performance on Quora, but leads to a massive performance drop on PAWS. The large impact on PAWS can be explained by the fact that this dataset was automatically constructed to have high textual overlap between questions and differences between paraphrases are chiefly due to variations in syntax. Our full TAPA model uses early fusion as this was the best setting during hyperparameter tuning. We conclude that topics contribute consistently to the performance of our proposed model, but that early topic-embedding fusion is crucial.
Sample Efficient Text Summarization Using a Single Pre-Trained Transformer
1905.08836
Table 1: Summarization results when using the full training set. Our scores are averaged over three models trained with different random seeds. *Other abstractive summarization model scores are provided to contextualize performance on this task but are not directly comparable to our models.
['[BOLD] Model', '[BOLD] R1', '[BOLD] R2', '[BOLD] RL']
[['Other Abs. Sum. models*', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Celikyilmaz et\xa0al. ( 2018 )', '41.69', '19.47', '37.92'], ['CopyTransformer (4-layer)', '39.25', '17.54', '36.45'], ['Gehrmann et\xa0al. ( 2018 )', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['GPT-2 (48-layer, zero-shot)', '29.34', '08.27', '26.58'], ['Radford et\xa0al. ( 2019 )', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['No Pre-training', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['BidirEncoder-Decoder (4-layer)', '37.74', '16.27', '34.76'], ['Encoder-Decoder (12-layer)', '36.72', '15.22', '33.84'], ['Transformer LM (12-layer)', '37.72', '16.14', '34.62'], ['With Pre-training (all 12-layer)', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Pre-train Encoder only', '36.05', '15.48', '33.48'], ['Pre-train Decoder only', '27.48', '06.87', '25.40'], ['Encoder-Decoder', '39.18', '17.00', '36.33'], ['Transformer LM', '39.65', '17.74', '36.85']]
We find that pre-training improves performance by about 2 ROUGE points, on average. Surprisingly, when only the decoder is pre-trained, ROUGE gets substantially worse. We speculate this is because the model starting out with a well-trained decoder and poor encoder learns to overly rely on its language modeling abilities and not adequately incorporate information from the encoder. The Transformer LM outperforms corresponding models both with and without pre-training, despite having almost half as many parameters. Our best model performs competitively with existing models on the CNN/Daily Mail abstractive summarization task, despite the absence of model augmentations such as a copy mechanism and reinforcement learning objectives.
1 Introduction
2002.01535
Table 5: Document classification results, reporting number of parameters and accuracy.
['Representation', 'Parameters', 'Accuracy']
[['Recurrent', '509.6 K', '[BOLD] 87.4'], ['Convolutional', '2.5 M', '85.7'], ['+ non-linearity', '1.3 M', '87.3'], ['+ separability', '234.4 K', '86.7'], ['+ bottlenecks', '[BOLD] 92.5 K', '86.4']]
Our optimized representation is significantly smaller than the baselines, with 5x and 27x fewer parameters than the recurrent and convolutional baselines, respectively. Even with far less capacity, the optimized model's accuracy is comparable to the baselines; specifically, it drops by only 1% relative to the recurrent baseline. In addition, each successively compressed convolutional model achieves a better accuracy than the convolutional baseline. This suggests that the original convolutional model is over-parameterized and that the reduction in capacity helps during optimization. Our optimized model shows gains on the file size and latency metrics; most notably, its latency is an order of magnitude lower than the recurrent baseline's. However, the CNNs all use slightly more memory than the recurrent model, presenting a clear trade-off between latency and memory requirements.
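The parameter ratios quoted above can be checked directly against Table 5, reading "5x and 27x fewer parameters" as ratios of parameter counts:

```python
recurrent = 509.6e3      # recurrent baseline parameters
convolutional = 2.5e6    # convolutional baseline parameters
optimized = 92.5e3       # "+ bottlenecks" model parameters

print(round(recurrent / optimized, 1))      # ~5.5x fewer than the recurrent baseline
print(round(convolutional / optimized, 1))  # ~27.0x fewer than the convolutional baseline
```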
DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset
1710.03957
Table 4: Experiments Results of generation-based approaches.
['[EMPTY]', 'Epoch', 'Test Loss', 'PPL', 'BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4']
[['Seq2Seq', '30', '4.024', '55.94', '0.352', '0.146', '0.017', '0.006'], ['Attn-Seq2Seq', '60', '4.036', '56.59', '0.335', '0.134', '0.013', '0.006'], ['HRED', '44', '4.082', '59.24', '0.396', '0.174', '0.019', '0.009'], ['L+Seq2Seq', '21', '3.911', '49.96', '0.379', '0.156', '0.018', '0.006'], ['L+Attn-Seq2Seq', '37', '3.913', '50.03', '0.464', '0.220', '0.016', '0.009'], ['L+HRED', '27', '3.990', '54.05', '0.431', '0.193', '0.016', '0.009'], ['Pre+Seq2Seq', '18', '3.556', '35.01', '0.312', '0.120', '0.0136', '0.005'], ['Pre+Attn-Seq2Seq', '15', '3.567', '35.42', '0.354', '0.136', '0.013', '0.004'], ['Pre+HRED', '10', '3.628', '37.65', '0.153', '0.026', '0.001', '0.000']]
Intention and Emotion-enhanced: To utilize the intention and emotion labels, we follow Zhou et al. and characterize the intention and emotion labels as one-hot vectors. From the results (last four columns), we can see that attention-based approaches are generally better than the vanilla Seq2Seq model. Among the three compared approaches, HRED achieves the highest BLEU scores because it takes history information into consideration. Furthermore, label information is effective even though we utilize it in the simplest way. These findings are consistent with previous work (Sordoni et al.; Serban et al.). Models pre-trained on OpenSubtitles converge faster and achieve lower perplexity (PPL) but poorer BLEU scores. We conjecture this is a result of domain difference: the OpenSubtitles dataset is constructed from movie lines, whereas our dataset consists of daily dialogues. Moreover, OpenSubtitles has approximately 1000+ speaker turns in one "conversation", while our dataset has 8 turns on average. Pre-training a model on a corpus from a different domain harms its performance on the target domain. Hence, it is less than optimal to simply pre-train models with large-scale datasets such as OpenSubtitles, whose domain differs from the evaluation datasets. We further examine this issue by comparing the answers generated by models trained solely on DailyDialog with and without pre-training.
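The PPL column in Table 4 is consistent, up to rounding, with perplexity being the exponential of the reported test cross-entropy loss; a quick check:

```python
import math

# (test loss, reported PPL) pairs taken from Table 4.
for loss, ppl in [(4.024, 55.94), (3.911, 49.96), (3.556, 35.01)]:
    print(round(math.exp(loss), 2), "vs reported", ppl)
```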
DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset
1710.03957
Table 5: BLEU scores of retrieval-based approaches.
['[EMPTY]', 'BLEU-2', 'BLEU-3', 'BLEU-4']
[['Embedding', '0.207', '0.162', '0.150'], ['[BOLD] Feature', '[BOLD] 0.258', '[BOLD] 0.204', '[BOLD] 0.194'], ['+ I-Rerank', '0.204', '0.189', '0.181'], ['+ I-E-Rerank', '0.190', '0.174', '0.164']]
Because the ground-truth responses in the test set are not seen in the training set, we cannot evaluate performance using ranking-style metrics such as Recall-k.
DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset
1710.03957
Table 6: “Equivalence” percentage (%) of retrieval-based approaches.
['[EMPTY]', 'Feature', '+I-Rerank', '+I-E-Rerank']
[['Intention', '46.3', '[BOLD] 47.3', '46.7'], ['Emotion', '73.7', '72.3', '[BOLD] 74.3']]
We also evaluate them by calculating the "Equivalence" percentage between the labels (i.e., intention, emotion) of the retrieved responses and those of the ground-truth responses. Although subtle improvements can be seen when using labels, we do not consider this a very strong evaluation metric: it is unsafe to conclude that the higher the "Equivalence" percentage, the better (more coherent, more suitable) the retrieved response will be.
Mono vs Multilingual Transformer-based Models: a Comparison across Several Language Tasks
2007.09757
Table 9: Comparison of the 3 models across all tasks
['Model', 'BertPT >', 'BertPT =', 'BertPT ', 'AlbertPT >', 'AlbertPT =', 'AlbertPT ', 'Multilingual >', 'Multilingual =', 'Multilingual ']
[['Baseline', '3', '12', '8', '1', '8', '14', '1', '13', '9'], ['BertPT', '[EMPTY]', '[EMPTY]', '[EMPTY]', '4', '28', '6', '2', '24', '12'], ['AlbertPT', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '1', '30', '7']]
The different tasks and settings under which BertPT and AlbertPT were compared to Multilingual BERT and published baselines yield 38 possible pairwise comparisons. We analyzed how many times the performance of each model was better than, worse than, or equivalent to the performance of the others; if the proportional difference in scores was below 5%, the two models were considered equivalent. We can see that no single method is always better than the others. The baselines have fewer wins, which means that BERT-based models tend to yield better results. BertPT and AlbertPT achieved equivalent performance most of the time (28 out of 38). AlbertPT has an advantage (6 wins versus 4 losses), and since its training is less expensive, it would be the model of choice.
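A minimal sketch of the pairwise comparison rule described above, assuming "proportional difference below 5%" means the relative gap between the two scores is under 0.05 (the authors' exact definition may differ):

```python
def compare(score_a, score_b, tol=0.05):
    """Return 'equivalent', 'a_wins', or 'b_wins' under a relative-difference threshold."""
    if abs(score_a - score_b) / max(score_a, score_b) < tol:
        return "equivalent"
    return "a_wins" if score_a > score_b else "b_wins"

print(compare(0.88, 0.84))  # 'equivalent' (gap under 5%)
print(compare(0.79, 0.70))  # 'a_wins'
```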
Mono vs Multilingual Transformer-based Models: a Comparison across Several Language Tasks
2007.09757
Table 2: RTE and STS evaluated on Test Sets.
['Data', 'Model', 'RTE Acc', 'RTE F1-M', 'STS Pearson', 'STS MSE']
[['EP', 'FialhoMarquesMartinsCoheurQuaresma2016', '0.84', '0.73', '0.74', '0.60'], ['EP', 'BertPT', '0.84', '0.69', '0.79', '0.54'], ['EP', 'AlbertPT', '0.88', '0.78', '0.80', '0.47'], ['EP', 'Multilingual', '[BOLD] 0.89', '[BOLD] 0.81', '[BOLD] 0.84', '[BOLD] 0.43'], ['BP', 'FialhoMarquesMartinsCoheurQuaresma2016', '0.86', '0.64', '0.73', '0.36'], ['BP', 'BertPT', '0.86', '0.53', '0.76', '0.32'], ['BP', 'AlbertPT', '0.87', '[BOLD] 0.65', '0.79', '0.30'], ['BP', 'Multilingual', '[BOLD] 0.88', '0.55', '[BOLD] 0.81', '[BOLD] 0.28'], ['EP+BP', 'BertPT', '0.87', '0.72', '0.79', '0.39'], ['EP+BP', 'AlbertPT', '0.89', '0.76', '0.78', '0.39'], ['EP+BP', 'Multilingual', '[BOLD] 0.90', '[BOLD] 0.82', '[BOLD] 0.83', '[BOLD] 0.33'], ['EP+BP**', 'FialhoMarquesMartinsCoheurQuaresma2016', '0.83', '0.72', '-', '-'], ['EP+BP**', 'BertPT', '0.86', '0.66', '-', '-'], ['EP+BP**', 'AlbertPT', '0.88', '0.79', '-', '-'], ['EP+BP**', 'Multilingual', '[BOLD] 0.91', '[BOLD] 0.83', '-', '-']]
Next, we applied our fine-tuned model to the entire training set and ran the evaluation over the test set. The first observation is that the BP setting is difficult for all models. EP+BP** in the last rows indicates that the model was trained on both the EP and BP training sets but evaluated only on the EP test set. The results of this evaluation can be compared to the first rows, in which the models were trained and evaluated only on EP. Although it makes sense to think that more training data would help the model generalize, we reached the same conclusion as FialhoMarquesMartinsCoheurQuaresma2016 that the improvement, if any, on this dataset is negligible. We could not find a published baseline for the case in which the model is trained on EP+BP and evaluated on both test sets. Again, as expected, BERT models were superior to previous baselines for all combinations. Analysing both the 10-fold evaluation and the test-set-only evaluation, we conclude that BERT-based models were superior to previous baselines. The goal of Semantic Textual Similarity is to quantify the degree of similarity between two sentences. In the ASSIN dataset, similarity is a number ranging from 1 (no similarity) to 5 (high similarity). This is a linear regression problem, as the output is a similarity score. The evaluation measures how far the predicted score is from the ground truth using two metrics: (i) the Pearson correlation, which measures the correlation with the ground truth (the higher, the better), and (ii) the mean squared error, which measures the squared difference between the prediction and the ground truth (the lower, the better). Following the same procedure as for Textual Entailment, we used 3,000 pairs for training. Since STS is related to RTE, again all models obtained better results than the baselines. Also, the multilingual model was again superior to our pre-trained Portuguese models.
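The two STS metrics described above can be computed in a few lines; a minimal sketch on toy scores (the arrays are illustrative, not ASSIN data):

```python
import numpy as np

gold = np.array([1.0, 2.5, 4.0, 5.0, 3.0])  # ground-truth similarity scores (1-5)
pred = np.array([1.2, 2.0, 4.3, 4.8, 3.4])  # model predictions

pearson = np.corrcoef(gold, pred)[0, 1]     # correlation with the ground truth (higher is better)
mse = np.mean((gold - pred) ** 2)           # mean squared error (lower is better)
print(round(pearson, 3), round(mse, 3))
```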
Mono vs Multilingual Transformer-based Models: a Comparison across Several Language Tasks
2007.09757
Table 5: Sentiment Polarity Classification
['Model', 'Acc', 'F1-W']
[['araujo2016@sac', '-', '0.71'], ['BertPT', '0.77', '0.76'], ['AlbertPT', '[BOLD] 0.79', '[BOLD] 0.78'], ['Multilingual', '0.71', '0.70']]
In order to keep the results comparable, we use only the positive and negative samples. The fact that this corpus contains many emoticons and out-of-vocabulary expressions makes it hard for models that were not trained on a similar vocabulary.
Mono vs Multilingual Transformer-based Models: a Comparison across Several Language Tasks
2007.09757
Table 8: Emotion Classification
['Model', '6 classes Acc', '6 classes F1-W', 'Binary Acc', 'Binary F1-W']
[['Becker:2017:MEC:3063600.3063706', '-', '-', '-', '[BOLD] 0.84'], ['BertPT', '[BOLD] 0.51', '[BOLD] 0.47', '[BOLD] 0.84', '0.83'], ['AlbertPT', '0.41', '0.28', '[BOLD] 0.84', '0.81'], ['Multilingual', '0.49', '0.46', '[BOLD] 0.84', '0.80']]
The baseline for the binary version reaches an F-measure of 0.84. In this experiment, BertPT obtains a result similar to the baseline but is not able to surpass it.
Yin and Yang: Balancing and Answering Binary Visual Questions
1511.05099
Table 2: Evaluation on balanced test set. All accuracies are calculated using the VQA [2] evaluation metric.
['[EMPTY]', 'Training set Unbalanced', 'Training set Balanced']
[['Prior (“no”)', '63.85', '63.85'], ['Blind-Q+Tuple', '65.98', '63.33'], ['[ITALIC] SOTA Q+Tuple+H-IMG', '65.89', '71.03'], ['[ITALIC] Ours Q+Tuple+A-IMG', '[BOLD] 68.08', '[BOLD] 74.65']]
We also evaluate all models trained on the train splits of both the unbalanced and balanced datasets by testing on the balanced test set. Training on balanced data is better: both language+vision models trained on balanced data perform better than the models trained on unbalanced data. This may be because the models trained on balanced data have to learn to extract visual information to answer the question correctly, since they can no longer exploit language biases in the training set, whereas models trained on the unbalanced set are misled into learning strong language priors, which are then not available at test time.
Yin and Yang: Balancing and Answering Binary Visual Questions
1511.05099
Table 3: Classifying a pair of complementary scenes. All accuracies are percentage of test pairs that have been predicted correctly.
['[EMPTY]', 'Training set Unbalanced', 'Training set Balanced']
[['Blind-Q+Tuple', '0', '0'], ['Q+Tuple+H-IMG', '03.20', '23.13'], ['Q+Tuple+A-IMG', '[BOLD] 09.84', '[BOLD] 34.73']]
We observe that our model (Q+Tuple+A-IMG) trained on the balanced dataset performs the best, outperforming the model (Q+Tuple+H-IMG) that does not model attention.
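Table 3's metric counts a pair of complementary scenes as correct only if both scenes are answered correctly, which is why the blind model scores 0: it must give the same answer to both scenes, while the ground-truth answers differ. A minimal sketch under that reading of the caption:

```python
def pair_accuracy(predictions, labels):
    """predictions/labels: lists of (answer_scene1, answer_scene2) tuples, one per pair."""
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)

# A blind model gives the same answer to both scenes of a pair, but complementary
# scenes have opposite ground-truth answers, so no pair can be fully correct.
labels = [("yes", "no"), ("no", "yes")]
blind = [("no", "no"), ("no", "no")]
print(pair_accuracy(blind, labels))  # 0.0
```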
Structural-Aware Sentence Similarity with Recursive Optimal Transport
2002.00745
Table 3: Ablation Study for 4 Aspects of ROTS. Bold values indicates the best in the row
['Comments', 'Dataset', 'd=1', 'd=2', 'd=3', 'd=4', 'd=5', 'ROTS', 'uSIF']
[['ROTS + ParaNMT Vectors + Dependency Tree', 'STSB dev', '84.5', '84.6', '84.6', '84.5', '84.4', '[BOLD] 84.6', '84.2'], ['ROTS + ParaNMT Vectors + Dependency Tree', 'STSB test', '80.5', '80.6', '80.6', '80.5', '80.4', '[BOLD] 80.6', '79.5'], ['ROTS + ParaMNT Vectors + Binary Tree', 'STSB dev', '84.2', '84.3', '84.3', '84.3', '84.3', '[BOLD] 84.4', '84.2'], ['ROTS + ParaMNT Vectors + Binary Tree', 'STSB test', '79.7', '79.9', '80.0', '80.0', '80.0', '[BOLD] 80.0', '79.5'], ['ROTS + GloVe Vectors + Dependency Tree', 'STSB dev', '78.5', '78.9', '79.2', '79.5', '79.7', '[BOLD] 79.3', '78.4'], ['ROTS + GloVe Vectors + Dependency Tree', 'STSB test', '72.0', '72.5', '73.0', '73.3', '73.6', '[BOLD] 73.0', '70.3'], ['ROTS + PSL Vectors + Dependency Tree', 'STSB dev', '80.9', '80.9', '80.8', '80.7', '80.6', '80.9', '[BOLD] 81.0'], ['ROTS + PSL Vectors + Dependency Tree', 'STSB test', '75.4', '75.5', '75.6', '75.6', '75.5', '[BOLD] 75.6', '74.4'], ['Independent OT + ParaMNT Vectors + Dependency Tree', 'STSB dev', '76.6', '74.8', '77.1', '72.1', '62.5', '75.6', '[BOLD] 84.2'], ['Independent OT + ParaMNT Vectors + Dependency Tree', 'STSB test', '69.4', '68.1', '68.2', '59.0', '50.4', '65.8', '[BOLD] 79.5'], ['ROT without modifications on coefficients [ITALIC] C and [ITALIC] a, [ITALIC] b weights + ParaMNT Vectors + Dependency Tree', 'STSB dev', '69.8', '69.0', '69.0', '67.9', '66.1', '74.8', '[BOLD] 84.2'], ['ROT without modifications on coefficients [ITALIC] C and [ITALIC] a, [ITALIC] b weights + ParaMNT Vectors + Dependency Tree', 'STSB test', '64.8', '64.0', '64.7', '63.8', '61.9', '71.0', '[BOLD] 79.5']]
The dependency tree's score is higher than that of the binary tree, although it decreases slightly after the third level. The first four cases show the results of varying the word vectors and tree structures. We conclude that in almost every case our approach consistently outperforms uSIF, and that ParaNMT vectors with the dependency tree is the best combination. On the STS-Benchmark test set, ROTS outperforms uSIF by about 1.4 points on average over these four cases. Without any of these components, the results are much worse than uSIF.
Structural-Aware Sentence Similarity with Recursive Optimal Transport
2002.00745
Table 1: Weakly Supervised Model Results on STS-Benchmark Dataset.
['Weakly Supervised Model', 'Dev', 'Test']
[['InferSent (bi-LSTM trained on SNLI)\xa0', '80.1', '75.8'], ['Sent2vec\xa0', '78.7', '75.5'], ['Conversation response prediction + SNLI\xa0', '81.4', '78.2'], ['SIF on Glove vectors\xa0', '80.1', '72.0'], ['GRAN (uses SimpWiki)\xa0', '81.8', '76.4'], ['Unsupervised SIF + ParaNMT vectors\xa0', '84.2', '79.5'], ['GEM\xa0', '83.5', '78.4'], ['ROTS+ binary tree (ours)', '84.4', '80.0'], ['ROTS+ dependency tree (ours)', '[BOLD] 84.6', '[BOLD] 80.6']]
The benchmark comparisons focus on weakly supervised models with no parameters to train, only hyper-parameters to select. The results we compare against are gathered either from the leaderboard on the STS-Benchmark website or directly from the best reported models in the corresponding papers. The ROTS model obtains a much larger improvement than other models proposed in 2019. ROTS with the dependency parsing tree improves by about one point over the previous state-of-the-art uSIF results. This means that the hierarchical prior has a significant effect.
Structural-Aware Sentence Similarity with Recursive Optimal Transport
2002.00745
Table 2: Detailed Comparisons with Similar Unsupervised Approaches on 20 STS Datasets
['Model Type', 'Senrence Similarity', 'STS12', 'STS13', 'STS14', 'STS15', 'AVE']
[['OT based', 'WMDDBLP:conf/icml/KusnerSKW15', '60.6', '54.5', '65.5', '61.8', '60.6'], ['OT based', 'WMEDBLP:conf/emnlp/WuYXXBCRW18', '62.8', '65.3', '68', '64.2', '65.1'], ['OT based', 'CoMBDBLP:conf/iclr/SinghHDJ19', '57.9', '64.2', '70.3', '73.1', '66.4'], ['Weighted average', 'SIFDBLP:conf/iclr/AroraLM17', '59.5', '61.8', '73.5', '76.3', '67.8'], ['Weighted average', 'uSIFDBLP:conf/rep4nlp/EthayarajhH18', '65.8', '66.1', '78.4', '79.0', '72.3'], ['Weighted average', 'DynaMaxDBLP:conf/iclr/ZhelezniakSSMFH19', '66.0', '65.7', '75.9', '[BOLD] 80.1', '72.0'], ['Ours', 'ROTS + binary tree', '[BOLD] 68.3', '66.0', '[BOLD] 78.7', '79.5', '73.1'], ['Ours', 'ROTS + dependency tree', '67.5', '[BOLD] 66.4', '[BOLD] 78.7', '80.0', '[BOLD] 73.2']]
Our ROTS approaches almost always outperform uSIF and obtain the best average scores. Notably, the best DynaMax model also employs the same uSIF weights as well as ParaNMT word vectors, so the improvement of ROTS over uSIF and DynaMax is fair and clear.
Automated Chess Commentator Powered by Neural Chess Engine
1909.10413
Table 2: Human evaluation results. Models marked with * are evaluated only for the Description, Quality, and Comparison categories. The underlined results are significantly worse than those of SCC-mult(*) in a two-tail T-test (p<0.01).
['[BOLD] Models', '[BOLD] Fluency', '[BOLD] Accuracy', '[BOLD] Insights', '[BOLD] Overall']
[['[BOLD] Ground Truth', '[BOLD] 4.02', '[BOLD] 3.88', '[BOLD] 3.58', '[BOLD] 3.84'], ['[BOLD] Temp', '[BOLD] 4.05', '[BOLD] 4.03', '3.02', '3.56'], ['[BOLD] Re', '3.71', '3.00', '2.80', '2.85'], ['[BOLD] KWG', '3.51', '3.24', '2.93', '3.00'], ['[BOLD] SCC-weak', '3.63', '3.62', '3.32', '3.30'], ['[BOLD] SCC-strong', '3.81', '3.74', '3.49', '3.49'], ['[BOLD] SCC-mult', '3.82', '3.91', '[BOLD] 3.51', '[BOLD] 3.61'], ['[BOLD] GAC*', '3.68', '3.32', '2.99', '3.14'], ['[BOLD] SCC-mult*', '3.83', '3.99', '3.46', '3.52']]
The human annotators are required to be skilled chess players; that is to say, they are the true audience of commentator research and applications. By introducing human evaluations, we further reveal the performance from the perspective of the audience. We further demonstrate the efficacy of our models with significantly better overall performance than the retrieval-based model and previous state-of-the-art ones. It is worth noting that the evaluations of Accuracy and Insights show that our models produce more precise and thorough analysis owing to the internal chess engine. SCC-mult and SCC-strong perform better than SCC-weak in Accuracy and Overall scores. This also supports the point that our commentary model can be improved with a better internal engine.
Automated Chess Commentator Powered by Neural Chess Engine
1909.10413
Table 1: Automatic evaluation results.
['[BOLD] BLEU-4 (%)', '[BOLD] Temp', '[BOLD] Re', '[BOLD] KWG', '[BOLD] GAC', '[BOLD] SCC-weak', '[BOLD] SCC-strong', '[BOLD] SCC-mult']
[['[BOLD] Description', '0.82', '1.24', '1.22', '[BOLD] 1.42', '1.23', '1.31', '1.34'], ['[BOLD] Quality', '13.71', '4.91', '13.62', '16.90', '16.83', '18.87', '[BOLD] 20.06'], ['[BOLD] Comparison', '0.11', '1.03', '1.07', '1.37', '2.33', '[BOLD] 3.05', '2.53'], ['[BOLD] Planning', '0.05', '0.57', '0.84', '[EMPTY]', '[BOLD] 1.07', '0.99', '0.90'], ['[BOLD] Contexts', '1.94', '2.70', '4.39', '[EMPTY]', '4.04', '[BOLD] 6.21', '4.09'], ['[BOLD] BLEU-2 (%)', '[BOLD] Temp', '[BOLD] Re', '[BOLD] KWG', '[BOLD] GAC', '[BOLD] SCC-weak', '[BOLD] SCC-strong', '[BOLD] SCC-mult'], ['[BOLD] Description', '24.42', '22.11', '18.69', '19.46', '23.29', '[BOLD] 25.98', '25.87'], ['[BOLD] Quality', '46.29', '39.14', '55.13', '47.80', '58.53', '61.13', '[BOLD] 61.62'], ['[BOLD] Comparison', '7.33', '22.58', '20.06', '24.89', '24.85', '[BOLD] 27.48', '23.47'], ['[BOLD] Planning', '3.38', '20.34', '22.02', '[EMPTY]', '22.28', '[BOLD] 25.82', '24.32'], ['[BOLD] Contexts', '26.03', '30.12', '31.58', '[EMPTY]', '37.32', '[BOLD] 41.59', '38.59'], ['[BOLD] METEOR (%)', '[BOLD] Temp', '[BOLD] Re', '[BOLD] KWG', '[BOLD] GAC', '[BOLD] SCC-weak', '[BOLD] SCC-strong', '[BOLD] SCC-mult'], ['[BOLD] Description', '6.26', '5.27', '6.07', '6.19', '6.03', '6.83', '[BOLD] 7.10'], ['[BOLD] Quality', '22.95', '17.01', '22.86', '24.20', '24.89', '[BOLD] 25.57', '25.37'], ['[BOLD] Comparison', '4.27', '8.00', '7.70', '8.54', '8.25', '[BOLD] 9.44', '9.13'], ['[BOLD] Planning', '3.05', '6.00', '6.76', '[EMPTY]', '6.18', '7.14', '[BOLD] 7.30'], ['[BOLD] Contexts', '9.46', '8.90', '10.31', '[EMPTY]', '11.07', '[BOLD] 11.76', '11.09']]
Our SCC models outperform all of the baselines and previous state-of-the-art models. Temp is limited by the variety of its templates: it is competitive with the neural models on Description and Quality, where expressions are limited, but performs very poorly on Comparison, Planning, and Contexts. Re keeps flexibility by copying sentences from the training set, but it does not perform well either. KWG and GAC provide competitive results. With the help of external information from powerful chess engines, GAC shows good performance on Quality and Comparison. Although our internal chess engine is no match for the external engines that GAC uses at playing chess, it turns out that our models, which use internal information directly, can better bridge the semantic spaces of the chess game and the commentary language. As for comparisons within our models, SCC-strong turns out to be better than SCC-weak, which supports our assumption that better skills enable more precise predictions and thus better comments. Training with multi-task learning seems to hurt the overall performance a little, but SCC-mult still achieves state-of-the-art performance and, more importantly, it can handle all sub-tasks as a whole.
SentiCite \subtitleAn Approach for Publication Sentiment Analysis
1910.03498
Table 3: Distribution of positive and negative references in different sections of publications.
['[BOLD] Section', '[BOLD] Positive', '[BOLD] Negative']
[['Introduction / Motivation', '0.22', '0.28'], ['Information / Background', '0.2', '0.06'], ['Related Work', '0.11', '0.06'], ['Approach / Method', '[BOLD] 0.3', '0.1'], ['Evaluation / Experiments', '0.17', '[BOLD] 0.5']]
This analysis reflects general behavior in this research area based on our dataset. It shows that negative references often appear in the evaluation section of a paper because the cited methods are, in general, outperformed by the proposed systems. In contrast, positive references occur frequently in the proposed method section because authors adopt approaches that work well.
SentiCite \subtitleAn Approach for Publication Sentiment Analysis
1910.03498
Table 2: SentiCiteDB.
['[EMPTY]', '[BOLD] Total', '[BOLD] Train set', '[BOLD] Test set']
[['Positive', '210', '50', '160'], ['Neutral', '1805', '50', '1755'], ['Negative', '85', '50', '35'], ['Overall', '2100', '150', '1983']]
SentiCiteDB is a dataset for publication sentiment analysis created from scientific publications of the International Conference on Document Analysis and Recognition (ICDAR) 2013. Sentences containing citations are manually extracted from the publications and manually annotated to create the ground truth. In total, SentiCiteDB contains about 2,100 citations with 210 positive, 1,805 neutral, and 85 negative sentiment labels. Of these, 50 citations from each of the three sentiment classes are used for training SentiCite. The dataset includes references from different sections of the documents to ensure that the proposed method does not learn the sentiment classification for one specific section. The training set contains the same number of references for all classes so that the classifier learns each class sufficiently well; using a training set with the real-world class distribution resulted in poor results for the negative and positive labels.
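A minimal sketch of the balanced split described above (50 citations per sentiment class for training, the rest for testing), assuming each record carries a hypothetical "label" field; the names are illustrative only:

```python
import random
from collections import defaultdict

def balanced_split(records, per_class=50, seed=0):
    """Sample `per_class` examples per label for training; the rest form the test set."""
    random.seed(seed)
    by_label = defaultdict(list)
    for rec in records:
        by_label[rec["label"]].append(rec)
    train, test = [], []
    for label, items in by_label.items():
        random.shuffle(items)
        train += items[:per_class]
        test += items[per_class:]
    return train, test

# Example: records = [{"text": "...", "label": "positive"}, ...]
# train, test = balanced_split(records, per_class=50)
```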
SentiCite \subtitleAn Approach for Publication Sentiment Analysis
1910.03498
Table 5: Evaluation of different features for SentiCite (SC).
['[BOLD] Label', '[BOLD] SC-SVM', '[BOLD] SC-Paum']
[['Only POS', '0.7241', '0.7336'], ['Combination', '[BOLD] 0.7260', '[BOLD] 0.8154']]
The purpose of the feature extraction step is to obtain a meaningful set of features to train the classifiers. Besides the string of the tokens, three additional feature extraction modules were used. The POS tagger assigns different features, e.g., the type of the token, its length, or capitalization. The type of the token helps to indicate its importance; e.g., an adjective is more relevant than an article or a noun. The last module is a stemming tool that provides a more generalized representation of the different tokens. This procedure was applied for both the sentiment and the nature of the citation. Furthermore, features such as n-grams, hyponyms, and hypernyms were also tested, but their performance was not sufficient. It is important to note that a classifier can perform well on the neutral class and poorly on the positive and negative classes yet still obtain good overall accuracy, because of the disproportionate ratio of neutral to positive or negative references in a scientific document. For more meaningful results, further experiments were performed.
SentiCite \subtitleAn Approach for Publication Sentiment Analysis
1910.03498
Table 6: Evaluation of different test corpus size for SentiCite (SC).
['[BOLD] Approach', '[BOLD] 5 docs', '[BOLD] 10 docs', '[BOLD] 20 docs', '[BOLD] 30 docs']
[['SC-SVM', '0.7111', '0.7091', '0.7141', '0.7203'], ['SC-Paum', '0.5727', '0.7795', '0.8218', '0.8221']]
In the next experiment, the impact of the corpus size was tested. The results show that for the SVM the F-score is almost the same in each run, i.e., the corpus size has no impact. For the perceptron, performance increased with corpus size at first and then leveled off.
SentiCite \subtitleAn Approach for Publication Sentiment Analysis
1910.03498
Table 9: Ten run k-fold cross-fold for SentiCite (SC).
['[BOLD] Algorithm', '[BOLD] F1', '[BOLD] F2', '[BOLD] F3', '[BOLD] F4', '[BOLD] F5', '[BOLD] F6', '[BOLD] F7', '[BOLD] F8', '[BOLD] F9', '[BOLD] F10', '[BOLD] Overall']
[['SC-SVM', '0.47', '0.6', '0.53', '0.67', '0.6', '0.47', '0.47', '0.6', '0.4', '0.53', '0.53'], ['SC-Paum', '0.4', '0.53', '0.47', '0.6', '0.6', '0.67', '0.53', '0.53', '0.4', '0.6', '0.53']]
Finally, to validate the results of the approach, cross-validation was performed. The cross-validation used only the training corpus, which contains a balanced number of examples for all classes; this setup was chosen because the test corpus has only a few examples for some labels, owing to the distribution of the documents. The classifier was trained on a subset of the training corpus and evaluated on the remaining documents, and this procedure was repeated 10 times with different splits. The smaller amount of training data in each fold also explains why the performance decreased.
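A minimal sketch of such a 10-fold procedure on a balanced corpus, using scikit-learn; the toy texts, features, and classifier are stand-ins and not the paper's actual SC-SVM/SC-Paum setup:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

# Toy stand-in for the balanced training corpus (equal examples per class).
texts = ["this method works well"] * 10 + ["as described in prior work"] * 10 + ["this approach performs poorly"] * 10
labels = ["positive"] * 10 + ["neutral"] * 10 + ["negative"] * 10
X, y = np.array(texts), np.array(labels)

fold_scores = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    clf = make_pipeline(TfidfVectorizer(), LinearSVC())
    clf.fit(X[train_idx], y[train_idx])
    fold_scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average="macro"))

print([round(s, 2) for s in fold_scores], round(float(np.mean(fold_scores)), 2))
```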
SentiCite \subtitleAn Approach for Publication Sentiment Analysis
1910.03498
Table 10: Different nature of citation classes.
['[BOLD] Label', '[BOLD] SC-SVM', '[BOLD] SC-Paum', '[BOLD] #refs']
[['Usage', '0.5294', '[BOLD] 0.8325', '17'], ['Reference', '[BOLD] 1.0', '0.9455', '110'], ['Reading', '0.7143', '[BOLD] 0.8571', '7'], ['Rest', '0.4533', '[BOLD] 0.4602', '112'], ['Dataset', '[BOLD] 0.667', '[BOLD] 0.667', '15'], ['Overall', '0.7075', '[BOLD] 0.7099', '261']]
As the numbers indicate, classifying the different natures of references is not equally hard for the classifiers. The important labels show good performance, with scores of around 0.667 for the Dataset class and up to 1.0 for the Reference class.
Detecting Mismatch between Text Script and Voice-over Using Utterance Verification Based on Phoneme Recognition Ranking
2003.09180
Table 5: Performance improvement of the two-stage APR-based UV.
['Test Set', 'APR', 'APR2-stage', 'Δ']
[['BNS-1', '0.9675', '0.9677', '+0.0002'], ['BNS-2', '0.9592', '0.9598', '+0.0006']]
We finally investigate the effect of the proposed two-stage APR-based UV. Although the improvement is minimal, the two-stage APR-based UV compensates for a few errors of the pure APR-based UV.
Detecting Mismatch between Text Script and Voice-over Using Utterance Verification Based on Phoneme Recognition Ranking
2003.09180
Table 1: Excerpted test sets from a speech database and an MMORPG
['Test Set', 'Description', 'Correct', 'Incorrect']
[['DICT01', 'Read-style', '1,600', '1,600'], ['BNS-1', 'Exaggerated-style', '1,600', '1,600'], ['BNS-2', '+ various tones & effects', '483', '483']]
The second type of test set is used to detect a mismatch between text script and voice-over. We create three test sets (DICT01, BNS-1, and BNS-2). DICT01 consists of 1,600 read-style utterances excerpted from a Korean speech DB (DICT01), and BNS-2 comprises 483 voice-overs with various tones and sound effects. To create the mismatched pairs, we assign an arbitrary text script to a voice-over, similar to the real-world scenario.
Detecting Mismatch between Text Script and Voice-over Using Utterance Verification Based on Phoneme Recognition Ranking
2003.09180
Table 3: Comparison between the proposed APR-based UV and the conventional LRT-based UV with the optimized thresholds.
['Test Set', 'LRT ACC', 'LRT [ITALIC] τ', 'APR ACC', 'APR [ITALIC] θ', 'APR Δ']
[['DICT01', '0.992', '1.5', '0.998', '4.0', '+0.006 (0.6%)'], ['BNS-1', '0.930', '1.2', '0.968', '5.0', '+0.038 (4.1%)'], ['BNS-2', '0.901', '1.1', '0.959', '6.0', '+0.058 (6.4%)']]
On the test sets of text scripts and voice-overs, we perform an experiment comparing the performance of the proposed APR-based UV with that of the conventional LRT-based UV. The APR serves as an alternative to the log-likelihood ratio (LLR) of the LRT-based UV. For the evaluation, we apply to each method the optimized thresholds (i.e., τ and θ) that yield the best performance on each test set.
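A minimal sketch of how an accept/reject threshold such as θ could be tuned on labeled pairs, assuming higher scores indicate a match; the scoring function and the grid of candidate thresholds are illustrative, not the paper's procedure:

```python
def best_threshold(scores, is_match, candidates):
    """Pick the threshold that maximizes verification accuracy on (score, label) pairs."""
    def accuracy(theta):
        correct = sum((s >= theta) == m for s, m in zip(scores, is_match))
        return correct / len(scores)
    return max(candidates, key=accuracy)

# Toy scores for matched (True) and mismatched (False) utterance/script pairs.
scores = [7.2, 6.5, 5.9, 3.1, 2.4, 4.8]
is_match = [True, True, True, False, False, False]
print(best_threshold(scores, is_match, candidates=[x * 0.5 for x in range(2, 16)]))  # ~5.0
```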
Detecting Mismatch between Text Script and Voice-over Using Utterance Verification Based on Phoneme Recognition Ranking
2003.09180
Table 4: Performance degradation in the exaggerated voice-overs, when applying the optimized thresholds of the read-speech utterances.
['Test Set', 'LRT ACC', 'LRT Δ', 'APR ACC', 'APR Δ']
[['BNS-1', '0.813', '-0.117', '0.952', '-0.016'], ['BNS-1', '0.813', '(-14.4%)', '0.952', '(-1.7%)'], ['BNS-2', '0.674', '-0.228', '0.900', '-0.059'], ['BNS-2', '0.674', '(-33.8%)', '0.900', '(-6.6%)']]
When the thresholds optimized on the read-speech utterances are applied, the accuracy of the LRT-based UV on BNS-1 and BNS-2 degrades by -0.117 (-14.4%) and -0.228 (-33.8%), respectively. However, the degradations of the APR-based UV are only -0.016 (-1.7%) and -0.059 (-6.6%), respectively, which are remarkably smaller than those of the LRT-based UV.
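The relative percentages in Table 4 appear to be the absolute accuracy drop divided by the degraded accuracy; checking against the optimized accuracies in Table 3 reproduces them up to rounding of the reported values:

```python
# (optimized ACC from Table 3, degraded ACC from Table 4) for LRT and APR on BNS-1/BNS-2.
cases = {
    "LRT BNS-1": (0.930, 0.813),
    "LRT BNS-2": (0.901, 0.674),
    "APR BNS-1": (0.968, 0.952),
    "APR BNS-2": (0.959, 0.900),
}
for name, (optimized, degraded) in cases.items():
    delta = degraded - optimized
    print(name, round(delta, 3), f"{delta / degraded:+.1%}")
```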
From News to Medical: Cross-domain Discourse Segmentation
1904.06682
Table 5: Average inter-annotator agreement per section, ordered from highest to lowest, the corresponding average F1 of the neural segmenter, and number of tokens (there are 2 documents per section, except 1 for Summary).
['Section', 'Kappa', 'F1', '#tokens']
[['Summary', '1.00', '100', '35'], ['Introduction', '0.96', '86.58', '258'], ['Results', '0.93', '91.74', '354'], ['Abstract', '0.89', '95.08', '266'], ['Methods', '0.86', '92.99', '417'], ['Discussion', '0.84', '89.03', '365']]
Here we compare the level of annotator agreement with the performance of the neural segmenter. However, high agreement does not always translate to good performance. The Introduction section is straightforward for the annotators to segment, but this is also where most citations occur, causing the segmenter to perform more poorly. Earlier, we had noted the Discussion section was the hardest for annotators to label because of the more complex discourse. These more ambiguous syntactic constructions also pose a challenge for the segmenter, with lower performance than most other sections.
From News to Medical: Cross-domain Discourse Segmentation
1904.06682
Table 3: F1, precision (P) and recall (R) of RST discourse segmenters on two domains (best numbers for News are underlined, for Medical are bolded).
['RST Seg', 'Domain', 'F1', 'P', 'R']
[['DPLP', '[ITALIC] News', '82.56', '81.75', '83.37'], ['DPLP', '[ITALIC] Medical', '75.29', '78.69', '72.18'], ['two-pass', '[ITALIC] News', '95.72', '97.19', '94.29'], ['two-pass', '[ITALIC] Medical', '84.69', '86.23', '83.21'], ['Neural', '[ITALIC] News', '97.32', '95.68', '99.01'], ['Neural', '[ITALIC] Medical', '[BOLD] 91.68', '[BOLD] 94.86', '[BOLD] 88.70']]
As expected, the News domain outperforms the Medical domain, regardless of which segmenter is used. In the case of the DPLP segmenter, the gap between the two domains is about 7.4 F1 points. Note that the performance of DPLP on News lags considerably behind the state of the art (-14.76 F1 points). When switching to the two-pass segmenter, the performance on News increases dramatically (+13 F1 points). However, the performance on Medical increases by only 3.75 F1 points. Thus, large gains in News translate into only a small gain in Medical. The neural segmenter achieves the best performance on News and is also able to more successfully close the gap on Medical, with only a 5.64 F1 difference, largely attributable to lower recall.
Domain-Specific Sentiment Word Extraction by Seed Expansion and Pattern Generation
1309.6722
(b) Negative PageRank
['postag', 'P@50', 'P@100', 'P@500', 'P@1000']
[['[BOLD] i', '[BOLD] 0.980', '[BOLD] 0.960', '[BOLD] 0.808', '[BOLD] 0.649'], ['a', '0.260', '0.200', '0.240', '0.231'], ['v', '0.020', '0.040', '0.032', '0.048']]
In the negative PageRank results, idioms obtain the best result. After checking the final ranking, we find that idioms have more synonym links with other idioms and have a higher probability of acting as sentiment words. In addition, the performance of the positive PageRank is poor.
Domain-Specific Sentiment Word Extraction by Seed Expansion and Pattern Generation
1309.6722
(a) Positive PageRank
['postag', 'P@50', 'P@100', 'P@500', 'P@1000']
[['i', '0.000', '0.000', '0.016', '0.018'], ['[BOLD] a', '[BOLD] 0.240', '[BOLD] 0.280', '[BOLD] 0.370', '[BOLD] 0.385'], ['v', '0.020', '0.010', '0.028', '0.044']]
In the negative PageRank results, idioms obtain the best result. After checking the final ranking, we find that idioms have more synonym links with other idioms and have a higher probability of acting as sentiment words. In addition, the performance of the positive PageRank is poor.
Domain-Specific Sentiment Word Extraction by Seed Expansion and Pattern Generation
1309.6722
Table 6: Experimental results on DSSW extraction
['[EMPTY]', '[EMPTY]', 'Hu04', 'Qiu11', 'Our']
[['finance', 'P', '0.5423', '0.5404', '0.6347'], ['finance', 'R', '0.2956', '0.3118', '0.3411'], ['finance', 'F1', '0.3826', '0.3955', '[BOLD] 0.4437'], ['entertainment', 'P', '0.5626', '0.5878', '0.6449'], ['entertainment', 'R', '0.2769', '0.3022', '0.3256'], ['entertainment', 'F1', '0.3711', '0.3992', '[BOLD] 0.4328'], ['digital', 'P', '0.5534', '0.5649', '0.5923'], ['digital', 'R', '0.3043', '0.3253', '0.3457'], ['digital', 'F1', '0.3927', '0.4129', '[BOLD] 0.4366']]
Our precision (P) improves significantly, especially in the finance domain with a 9.4% improvement. Our recall (R) improves only slightly because some sentiment words still do not co-occur with target words. The problem of hidden target words will be studied in future work.
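As a quick sanity check, the F1 values in Table 6 are the usual harmonic mean of the listed precision and recall; the finance-domain scores of our method, for example:

def f1(p, r):
    return 2 * p * r / (p + r)

print(f1(0.6347, 0.3411))   # ~0.4438, matching the reported 0.4437 up to rounding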
The Microsoft 2017 Conversational Speech Recognition System
1708.06073
Table 2: Acoustic model performance by senone set, model architecture, and for various frame-level combinations, using an N-gram LM. The “puhpum” senone sets use an alternate dictionary with special phones for filled pauses.
['Senone set', 'Architecture', 'devset WER', 'test WER']
[['[BOLD] 9k', 'BLSTM', '11.5', '8.3'], ['[EMPTY]', 'ResNet', '10.0', '8.2'], ['[EMPTY]', 'LACE', '11.2', '8.1'], ['[EMPTY]', 'CNN-BLSTM', '11.3', '8.4'], ['[EMPTY]', 'BLSTM+ResNet+LACE', '9.8', '7.2'], ['[EMPTY]', 'BLSTM+ResNet+LACE+CNN-BLSTM', '9.6', '7.2'], ['[BOLD] 9k puhpum', 'BLSTM', '11.3', '8.1'], ['[EMPTY]', 'ResNet', '11.2', '8.4'], ['[EMPTY]', 'LACE', '11.1', '8.3'], ['[EMPTY]', 'CNN-BLSTM', '11.6', '8.4'], ['[EMPTY]', 'BLSTM+ResNet+LACE', '9.7', '7.4'], ['[EMPTY]', 'BLSTM+ResNet+LACE+CNN-BLSTM', '9.7', '7.3'], ['[BOLD] 27k', 'BLSTM', '11.4', '8.0'], ['[EMPTY]', 'ResNet', '11.5', '8.8'], ['[EMPTY]', 'LACE', '11.3', '8.8'], ['[EMPTY]', 'BLSTM+ResNet+LACE', '10.0', '7.5'], ['[BOLD] 27k puhpum', 'BLSTM', '11.3', '8.0'], ['[EMPTY]', 'ResNet', '11.2', '8.0'], ['[EMPTY]', 'LACE', '11.0', '8.4'], ['[EMPTY]', 'BLSTM+ResNet+LACE', '9.8', '7.3']]
The results are based on N-gram language models, and all combinations are equal-weighted. In the past we had used a relatively small vocabulary of 30,500 words drawn only from in-domain (Switchboard and Fisher corpus) training data. While this yields an out-of-vocabulary (OOV) rate well below 1%, our error rates have reached levels where even small absolute reductions in OOVs could potentially have a significant impact on overall accuracy. We supplemented the in-domain vocabulary with the most frequent words in the out-of-domain sources also used for language model training: the LDC Broadcast News corpus and the UW Conversational Web corpus. Boosting the vocabulary size to 165k reduced the OOV rate (excluding word fragments) on the eval2002 devset from 0.29% to 0.06%.
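A minimal sketch of the OOV-rate computation described above: count the fraction of devset tokens, excluding word fragments, that fall outside the recognition vocabulary. The fragment convention (a trailing "-") and all names here are assumptions for illustration.

def oov_rate(tokens, vocab):
    full_words = [t for t in tokens if not t.endswith("-")]   # drop word fragments
    oov = sum(1 for t in full_words if t not in vocab)
    return oov / len(full_words)

vocab_small = {"the", "a", "uh", "huh"}                 # toy vocabulary
tokens = ["the", "quokka", "uh", "jump-", "a"]          # toy devset tokens
print(oov_rate(tokens, vocab_small))                    # 1 OOV out of 4 full words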
The Microsoft 2017 Conversational Speech Recognition System
1708.06073
Table 4: Perplexities and word errors with session-based LSTM-LMs (forward direction only). The last line reflects the use of 1-best recognition output for words in preceding utterances.
['Model inputs', 'PPL devset', 'PPL test', 'WER devset', 'WER test']
[['Utterance words, letter-3grams', '50.76', '44.55', '9.5', '6.8'], ['+ session history words', '39.69', '36.95', '[EMPTY]', '[EMPTY]'], ['+ speaker change', '38.20', '35.48', '[EMPTY]', '[EMPTY]'], ['+ speaker overlap', '37.86', '35.02', '[EMPTY]', '[EMPTY]'], ['(with 1-best history)', '40.60', '37.90', '9.3', '6.7']]
There is a large perplexity reduction of 21% by conditioning on the previous word context, with smaller incremental reductions from adding speaker change and overlap information. The table also compares the word error rate with the full session-based model to the baseline, within-utterance LSTM-LM. As shown in the last row of the table, some of the perplexity gain over the baseline is negated by the use of 1-best recognition output for the conversation history. However, the perplexity degrades by only 7-8% relative due to the noisy history.
The Microsoft 2017 Conversational Speech Recognition System
1708.06073
Table 5: Results for LSTM-LM rescoring on systems selected for combination, the combined system, and confusion network rescoring
['Senone set', 'Model/combination step', 'WER devset', 'WER test', 'WER devset', 'WER test']
[['[EMPTY]', '[EMPTY]', 'ngram-LM', 'ngram-LM', 'LSTM-LMs', 'LSTM-LMs'], ['9k', 'BLSTM', '11.5', '8.3', '9.2', '6.3'], ['27k', 'BLSTM', '11.4', '8.0', '9.3', '6.3'], ['27k-puhpum', 'BLSTM', '11.3', '8.0', '9.2', '6.3'], ['9k', 'BLSTM+ResNet+LACE+CNN-BLSTM', '9.6', '7.2', '7.7', '5.4'], ['9k-puhpum', 'BLSTM+ResNet+LACE', '9.7', '7.4', '7.8', '5.4'], ['9k-puhpum', 'BLSTM+ResNet+LACE+CNN-BLSTM', '9.7', '7.3', '7.8', '5.5'], ['27k', 'BLSTM+ResNet+LACE', '10.0', '7.5', '8.0', '5.8'], ['-', 'Confusion network combination', '[EMPTY]', '[EMPTY]', '7.4', '5.2'], ['-', '+ LSTM rescoring', '[EMPTY]', '[EMPTY]', '7.3', '5.2'], ['-', '+ ngram rescoring', '[EMPTY]', '[EMPTY]', '7.2', '5.2'], ['-', '+ backchannel penalty', '[EMPTY]', '[EMPTY]', '7.2', '5.1']]
The collection of LSTM-LMs (which includes the session-based LMs) gives a very consistent 22 to 25% relative error reduction on individual systems, compared to the N-gram LM. The system combination reduces error by 4% relative over the best individual systems, and the CN rescoring improves another 2-3% relative.
A Nested Attention Neural Hybrid Model for Grammatical Error Correction
1707.02026
Table 5: F0.5 results on the CoNLL-13 set of main model architectures, on different segments of the set according to whether the input contains OOVs.
['[BOLD] Model', '[BOLD] NonOOV', '[BOLD] OOV', '[BOLD] Overall']
[['Word NMT + UNK replacement', '27.61', '21.57', '26.17'], ['Hybrid model', '[BOLD] 29.36', '25.92', '28.49'], ['Nested Attention Hybrid Model', '29.00', '[BOLD] 27.39', '[BOLD] 28.61']]
We present a comparative performance analysis of models on the CoNLL-13 development set. First, we divide the set into two segments: OOV and NonOOV, based on whether there is at least one OOV word in the given source input. The additional nested character-level attention of our hybrid model brings a sizable improvement over the basic hybrid model in the OOV segment and a small degradation in the non-OOV segment. We should note that in future work character-level attention can be added for non-OOV source words in the nested attention model, which could improve performance on this segment as well.
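A minimal sketch of the segment split used above: a source sentence goes into the OOV segment if it contains at least one word outside the model's vocabulary. The vocabulary and sentences are toy examples.

def split_by_oov(sentences, vocab):
    oov_seg, non_oov_seg = [], []
    for sent in sentences:
        if any(tok not in vocab for tok in sent.split()):
            oov_seg.append(sent)
        else:
            non_oov_seg.append(sent)
    return oov_seg, non_oov_seg

vocab = {"he", "go", "to", "school", "yesterday"}
oov, non_oov = split_by_oov(["he go to school yesterday",
                             "he commuted to school"], vocab)
print(len(oov), len(non_oov))   # 1 sentence in each segment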
A Nested Attention Neural Hybrid Model for Grammatical Error Correction
1707.02026
Table 1: Overview of the datasets used.
['[EMPTY]', 'Training', 'Validation', 'Development', 'Test']
[['#Sent pairs', '2,608,679', '4,771', '1,381', '1,312']]
We evaluate the performance of the models on the standard sets from the CoNLL-14 shared task (Ng et al.). We report final performance on the CoNLL-14 test set without alternatives, and analyze model performance on the CoNLL-13 development set (Dahlmeier et al.). We use the development and validation sets for model selection. We report performance in F0.5-measure, as calculated by the m2scorer, the official implementation of the scoring metric in the shared task. Given system outputs and gold-standard edits, m2scorer computes the F0.5 measure of a set of system edits against a set of gold-standard edits.
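A simplified sketch of an m2scorer-style F0.5 over edit sets; the real m2scorer matches system edits against multiple gold annotations per sentence, whereas this version assumes a single gold set, and the edit tuples are made up.

def f_beta(system_edits, gold_edits, beta=0.5):
    tp = len(system_edits & gold_edits)
    p = tp / len(system_edits) if system_edits else 0.0
    r = tp / len(gold_edits) if gold_edits else 0.0
    if p == 0.0 and r == 0.0:
        return 0.0
    # beta < 1 weights precision more heavily than recall
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

sys_edits = {(3, 4, "went"), (7, 8, "the")}
gold_edits = {(3, 4, "went"), (10, 11, "a")}
print(f_beta(sys_edits, gold_edits))   # P = 0.5, R = 0.5 -> F0.5 = 0.5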
A Nested Attention Neural Hybrid Model for Grammatical Error Correction
1707.02026
Table 3: F0.5 results on the CoNLL-13 and CoNLL-14 test sets of main model architectures.
['[BOLD] Model', '[BOLD] Performance Dev', '[BOLD] Performance Test']
[['Word NMT + UNK replacement', '26.17', '38.77'], ['Hybrid model', '28.49', '40.44'], ['Nested Attention Hybrid Model', '[BOLD] 28.61', '[BOLD] 41.53']]
In addition to the word-level baseline, we include the performance of a hybrid model with a single level of attention, which follows the work of Luong and Manning (2016) for machine translation and is the first application of a hybrid word/character-level model to grammatical error correction. Based on hyper-parameter selection, the character-level component weight of the loss is α=1 for the basic hybrid model. Our word-level baseline is slightly below previously reported results (38.77 versus 39.9). We attribute the difference to differences in the training set and the word-alignment methods used. Our reimplementation serves to provide a controlled experimental evaluation of the impact of hybrid models and nested attention on the GEC task. As seen, our nested attention hybrid model substantially improves upon the baseline, achieving a gain of close to 3 points on the test set. The hybrid word/character model with a single level of attention brings a large improvement as well, showing the importance of character-level information for this task. We delve deeper into the impact of nested attention for the hybrid model in Section 5.
A Nested Attention Neural Hybrid Model for Grammatical Error Correction
1707.02026
Table 7: Precision, Recall and F0.5 results on CoNLL-13,on the ”small changes” and “large changes” portions of the OOV segment.
['[BOLD] Model', '[BOLD] Performance P', '[BOLD] Performance R', '[BOLD] Performance [ITALIC] F0.5']
[['[BOLD] Small Changes Portion', '[BOLD] Small Changes Portion', '[BOLD] Small Changes Portion', '[BOLD] Small Changes Portion'], ['Hybrid model', '43.86', '16.29', '32.77'], ['Nested Attention Hybrid Model', '48.25', '17.92', '36.04'], ['[BOLD] Large Changes Portion', '[BOLD] Large Changes Portion', '[BOLD] Large Changes Portion', '[BOLD] Large Changes Portion'], ['Hybrid model', '32.52', '8.32', '20.56'], ['Nested Attention Hybrid Model', '33.05', '8.11', '20.46']]
Our hypothesis is that the additional character-level attention layer is particularly useful to model edits among orthographically similar words. We can see that the gains in the “small changes” portion are indeed quite large, indicating that the fine-grained character-level attention empowers the model to more accurately correct confusions among phrases with high character-level similarity. The impact in the “large changes” portion is slightly positive in precision and slightly negative in recall. Thus most of the benefit of the additional character-level attention stems from improvements in the “small changes” portion.
Lattice Rescoring Strategies for Long Short Term Memory Language Models in Speech Recognition
1711.05448
Table 3: WER of LSTMLM lattice rescoring algorithms
['Algorithm', 'WER (%)', '[EMPTY]']
[['[EMPTY]', 'LM used in', 'LM used in'], ['[EMPTY]', 'Lattice generation', 'Lattice generation'], ['[EMPTY]', '2-gram', '5-gram'], ['[ITALIC] Push-forward (Algorithm\xa0 1 )', '[ITALIC] Push-forward (Algorithm\xa0 1 )', '[EMPTY]'], ['[ITALIC] k=1', '12.8', '12.4'], ['[ITALIC] k=10', '12.7', '12.4'], ['[ITALIC] k=50', '12.6', '12.4'], ['Expand lattice to N-gram order, [ITALIC] Push-forward, [ITALIC] k=1', 'Expand lattice to N-gram order, [ITALIC] Push-forward, [ITALIC] k=1', '[EMPTY]'], ['[ITALIC] N≤2', '12.8', '12.4'], ['[ITALIC] N=3', '12.6', '12.4'], ['[ITALIC] N≥4', '12.4', '12.3'], ['[ITALIC] LSTM State Pooling (Algorithm\xa0 2 )', '[ITALIC] LSTM State Pooling (Algorithm\xa0 2 )', '[EMPTY]'], ['Uniform (Equation\xa0 12 )', '13.0', '12.6'], ['Max Prob (Equation\xa0 10 )', '12.8', '12.5'], ['Sum Prob (Equation\xa0 11 )', '12.8', '12.5'], ['[ITALIC] Arc Beam (Algorithm\xa0 3 )', '[ITALIC] Arc Beam (Algorithm\xa0 3 )', '[EMPTY]'], ['Default setting', '12.8', '12.5']]
While the WER reduced by 2% relative for the 2-gram lattices, it showed no variation for the 5-gram lattices. The push-forward algorithm with k=1 results in a more severe approximation for the 2-gram lattices. Hence, increasing the LSTM hypotheses per lattice node (k) results in a better WER. We next expanded the lattice to different N-gram orders, up to a maximum of 6, prior to applying the push-forward algorithm with k=1. While 2-gram lattices benefitted considerably from lattice expansion (3% relative), there was less gain for 5-gram lattices which already contained unique 5-gram histories for most lattice nodes prior to expansion. LSTM state pooling
Contextual Phonetic Pretraining for End-to-endUtterance-level Language and Speaker Recognition
1907.00457
Table 2: Fisher speaker recognition task results
['[BOLD] System', '[BOLD] EER', '[BOLD] minDCF08', '[BOLD] minDCF10']
[['i-vector ', '2.10', '0.093', '0.3347'], ['x-vector + stat.\xa0pooling ', '1.73', '0.086', '0.3627'], ['phn.\xa0vec.\xa0+ finetune ', '1.60', '0.076', '0.3413'], ['+ multi-tasking ', '1.39', '0.073', '0.3087'], ['x-vector + SAP', '1.50', '0.074', '0.2973'], ['pretrain + CNN + SAP', '[BOLD] 1.07', '[BOLD] 0.052', '[BOLD] 0.2247']]
The results of i-vector and x-vector from the original work are first presented. To isolate the benefit of the SAP layer, we train a similar x-vector architecture with self-attentive pooling instead of regular statistics pooling, obtaining a 13.2% relative EER reduction. Training on contextual frame representations induced from the ASR model leads to better performance than the x-vector approach and improves performance even over the multitasking approach (where the phonetic extractor is learned jointly between ASR and SR).
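For readers unfamiliar with the two pooling layers being compared, here is an illustrative numpy sketch of regular statistics pooling (mean and standard deviation over frames) versus one common formulation of self-attentive pooling (SAP), where a learned vector scores each frame before a weighted average. The dimensions and parameters below are made up.

import numpy as np

def statistics_pooling(H):                 # H: (T, D) frame representations
    return np.concatenate([H.mean(axis=0), H.std(axis=0)])   # (2D,)

def self_attentive_pooling(H, W, v):       # W: (D, D), v: (D,)
    scores = np.tanh(H @ W) @ v            # one score per frame, shape (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax over frames
    return weights @ H                     # weighted average, shape (D,)

T, D = 200, 64
H = np.random.randn(T, D)
W = np.random.randn(D, D) * 0.1
v = np.random.randn(D)
print(statistics_pooling(H).shape, self_attentive_pooling(H, W, v).shape)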
A Comparison of Modeling Units in Sequence-to-Sequence Speech Recognition with the Transformer on Mandarin Chinese
1805.06239
Table 6: CER (%) on HKUST datasets compared to previous works.
['model', 'CER']
[['LSTMP-9×800P512-F444 ', '30.79'], ['CTC-attention+joint dec. (speed perturb., one-pass) +VGG net +RNN-LM (separate) ', '28.9 [BOLD] 28.0'], ['CI-phonemes-D1024-H16 ', '30.65'], ['Syllables-D1024-H16 (speed perturb) ', '28.77'], ['Words-D1024-H16 (speed perturb)', '27.42'], ['Sub-words-D1024-H16 (speed perturb)', '27.26'], ['Characters-D1024-H16 (speed perturb)', '[BOLD] 26.64']]
We can observe that the best CER 26.64% of character based model with the Transformer on HKUST datasets achieves a 13.4% relative reduction compared to the best CER of 30.79% by the deep multidimensional residual learning with 9 LSTM layers. It shows the superiority of the sequence-to-sequence attention-based model compared to the hybrid LSTM-HMM system.
Global Thread-Level Inference for Comment Classificationin Community Question Answering
1911.08755
Table 2: Same-vs-Different classification. P, R, and F1 are calculated with respect to Same.
['Classifier', 'P', 'R', 'F1', 'Acc']
[['baseline: [ITALIC] Same', '[EMPTY]', '[EMPTY]', '[EMPTY]', '69.26'], ['MaxEnt-2C', '73.95', '90.99', '81.59', '71.56'], ['MaxEnt-3C', '77.15', '80.42', '78.75', '69.94']]
We can see that the two-class MaxEnt-2C classifier works better than the three-class MaxEnt-3C. MaxEnt-3C has more balanced P and R, but loses in both F1 and accuracy. MaxEnt-2C is very skewed towards the majority class, but performs better due to the class imbalance. Overall, it seems very difficult to learn with the current features, and both methods only outperform the majority-class baseline by a small margin. Yet, while the overall accuracy is low, note that the graph-cut/ILP inference allows us to recover from some errors, because if nearby utterances are clustered correctly, the wrong decisions should be outvoted by correct ones.
Global Thread-Level Inference for Comment Classificationin Community Question Answering
1911.08755
Table 3: Good-vs-Bad classification. ‡ and † mark statistically significant differences in accuracy compared to the baseline MaxEnt classifier with confidence levels of 99% and 95%, respectively (randomized test).
['System', 'P', 'R', 'F1', 'Acc']
[['[BOLD] Top-3 at SemEval-2015 Task 3', '[BOLD] Top-3 at SemEval-2015 Task 3', '[BOLD] Top-3 at SemEval-2015 Task 3', '[EMPTY]', '[EMPTY]'], ['JAIST', '80.23', '77.73', '78.96', '79.10'], ['HITSZ-ICRC', '75.91', '77.13', '76.52', '76.11'], ['QCRI', '74.33', '83.05', '78.45', '76.97'], ['[BOLD] Instance Classifiers', '[BOLD] Instance Classifiers', '[BOLD] Instance Classifiers', '[EMPTY]', '[EMPTY]'], ['MaxEnt', '75.67', '84.33', '79.77', '78.43'], ['[BOLD] Linear Chain Classifiers', '[BOLD] Linear Chain Classifiers', '[BOLD] Linear Chain Classifiers', '[EMPTY]', '[EMPTY]'], ['CRF', '74.89', '83.45', '78.94', '77.53'], ['[BOLD] Global Inference Classifiers', '[BOLD] Global Inference Classifiers', '[BOLD] Global Inference Classifiers', '[BOLD] Global Inference Classifiers', '[EMPTY]'], ['ILP', '77.04', '83.53', '80.15', '79.14‡'], ['Graph-cut', '78.30', '82.93', '[BOLD] 80.55', '[BOLD] 79.80‡'], ['ILP-3C', '78.07', '80.42', '79.23', '78.73'], ['Graph-cut-3C', '78.26', '81.32', '79.76', '79.19†']]
On the top are the best systems at SemEval-2015 Task 3. We can see that our MaxEnt classifier is competitive: it shows higher accuracy than two of them, and the highest F1 overall.
Analyzing the Language of Food on Social Media
1409.2195
TABLE VIII: Effects of varying the fraction of tweets used for training and testing on classification accuracy in the state-prediction task, using All Words and LDA topics.
['training fraction', 'testing fraction 0.2', 'testing fraction 0.4', 'testing fraction 0.6', 'testing fraction 0.8', 'testing fraction 1.0']
[['0.2', '11.76', '11.76', '5.88', '9.80', '15.68'], ['0.4', '19.60', '17.64', '17.64', '17.64', '25.49'], ['0.6', '25.49', '29.41', '35.29', '41.17', '47.05'], ['0.8', '39.21', '41.17', '43.13', '50.98', '52.94'], ['1.0', '43.13', '58.82', '54.90', '62.74', '64.70']]
Performance varies from 11.76% when using 20% of tweets in the training set and 20% in the testing set, to 64.7% when using all available tweets. Increasing the number of tweets in the training set has a larger positive effect on accuracy than increasing the number of tweets in the testing set. As in the city prediction task, performance continues to increase as we add more data, suggesting that the performance ceiling has not been reached.
Analyzing the Language of Food on Social Media
1409.2195
TABLE V: Effects of varying the fraction of tweets used for training and testing on classification accuracy in the city-prediction task, using All Words and LDA topics.
['training fraction', 'testing fraction 0.2', 'testing fraction 0.4', 'testing fraction 0.6', 'testing fraction 0.8', 'testing fraction 1.0']
[['0.2', '6.66', '6.66', '6.66', '6.66', '6.66'], ['0.4', '13.33', '13.33', '13.33', '13.33', '20.00'], ['0.6', '20.00', '26.66', '26.66', '26.66', '40.00'], ['0.8', '33.33', '46.66', '33.33', '53.33', '53.33'], ['1.0', '46.66', '53.33', '60.00', '66.66', '80.00']]
Indeed, when only 20% of the training set is used, the models achieve the same score as the baseline classifier (6.67%). Performance continues to increase as we add more data, suggesting that we have not reached a performance ceiling yet.
Construction of the Literature Graph in Semantic Scholar
1805.02262
Table 2: Document-level evaluation of three approaches in two scientific areas: computer science (CS) and biomedical (Bio).
['Approach', 'CS prec.', 'CS yield', 'Bio prec.', 'Bio yield']
[['Statistical', '98.4', '712', '94.4', '928'], ['Hybrid', '91.5', '1990', '92.1', '3126'], ['Off-the-shelf', '97.4', '873', '77.5', '1206']]
In both domains, the statistical approach gives the highest precision and the lowest yield. The hybrid approach consistently gives the highest yield, but sacrifices precision. The TagMe off-the-shelf library used for the CS domain gives surprisingly good results, with precision within 1 point from the statistical models. However, the MetaMap Lite off-the-shelf library we used for the biomedical domain suffered a huge loss in precision. Our error analysis showed that each of the approaches is able to predict entities not predicted by the other approaches so we decided to pool their outputs in our deployed system, which gives significantly higher yield than any individual approach while maintaining reasonably high precision.
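The pooling step mentioned above amounts to taking the union of the mention spans predicted by the individual extractors. A minimal sketch with hypothetical (start, end, type) spans:

def pool_entity_mentions(*prediction_sets):
    pooled = set()
    for preds in prediction_sets:
        pooled |= preds          # each prediction is e.g. (start, end, type)
    return pooled

statistical = {(0, 2, "Method"), (10, 12, "Task")}
hybrid = {(0, 2, "Method"), (20, 23, "Material")}
off_the_shelf = {(30, 31, "Task")}
print(len(pool_entity_mentions(statistical, hybrid, off_the_shelf)))   # 4 unique mentions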
Construction of the Literature Graph in Semantic Scholar
1805.02262
Table 3: Results of the entity extraction model on the development set of SemEval-2017 task 10.
['Description', 'F1']
[['Without LM', '49.9'], ['With LM', '54.1'], ['Avg. of 15 models with LM', '55.2']]
We use the standard data splits of SemEval-2017 Task 10 on entity (and relation) extraction from scientific papers (Augenstein et al.). The first line omits the LM embeddings lm_k, while the second line is the full model (including LM embeddings), showing a large improvement of 4.2 F1 points. The third line shows that creating an ensemble of 15 models further improves the results by 1.1 F1 points.
Construction of the Literature Graph in Semantic Scholar
1805.02262
Table 4: The Bag of Concepts F1 score of the baseline and neural model on the two curated datasets.
['[EMPTY]', 'CS', 'Bio']
[['Baseline', '84.2', '54.2'], ['Neural', '84.6', '85.8']]
Candidate selection. In a preprocessing step, we build an index which maps any token used in a labeled mention or an entity name in the KB to the associated entity IDs, along with the frequency with which this token is associated with each entity. This is similar to the index used in previous entity linking systems. At train and test time, we use this index to find candidate entities for a given mention by looking up the tokens in the mention.
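A minimal sketch of this candidate-selection index: each token from a labeled mention or KB entity name maps to the entity IDs it co-occurs with, together with a frequency, and candidates for a new mention are gathered by looking up its tokens. All names and entries are illustrative.

from collections import defaultdict, Counter

def build_index(name_to_entity_pairs):
    index = defaultdict(Counter)                  # token -> {entity_id: frequency}
    for name, entity_id in name_to_entity_pairs:
        for token in name.lower().split():
            index[token][entity_id] += 1
    return index

def candidate_entities(mention, index, top_k=10):
    scores = Counter()
    for token in mention.lower().split():
        scores.update(index.get(token, Counter()))
    return [eid for eid, _ in scores.most_common(top_k)]

index = build_index([("acute myeloid leukemia", "E1"),
                     ("chronic myeloid leukemia", "E2"),
                     ("leukemia", "E1")])
print(candidate_entities("myeloid leukemia", index))   # ['E1', 'E2']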
A Discriminative Neural Model for Cross-Lingual Word Alignment
1909.00444
Table 2: Precision, Recall, and F1 on Chinese GALE test data. BPE indicates “with BPE”, and NER denotes restriction to NER spans.
['Method', 'P', 'R', 'F1']
[['Avg. attention.', '36.30', '46.17', '40.65'], ['Avg. attention (BPE)', '37.89', '49.82', '43.05'], ['Avg. attention (NER)', '16.57', '35.85', '22.66'], ['FastAlign', '80.46', '50.46', '62.02'], ['FastAlign\xa0(BPE)', '70.41', '55.43', '62.03'], ['FastAlign (NER)', '83.70', '49.54', '62.24'], ['DiscAlign', '72.92', '73.91', '[BOLD] 73.41'], ['DiscAlign (BPE)', '69.36', '67.11', '[BOLD] 68.22'], ['DiscAlign (NER)', '74.52', '77.05', '[BOLD] 75.78'], ['DiscAlign (NER) +prec.', '84.69', '58.41', '69.14']]
Our model outperforms both baselines for all languages and experimental conditions. By increasing the threshold α above which p(a_i,j|s,t) is considered an alignment, we can obtain high-precision alignments, exceeding FastAlign's precision and recall. The best threshold values on the development set were α=0.13 and α=0.14 for average attention on data with and without BPE (respectively) and α=0.15 for the discriminative settings (α=0.5 was used for the high-precision “+prec” setting).
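A minimal numpy sketch of this thresholding step: every source/target position pair whose posterior exceeds alpha is emitted as an alignment link, so raising alpha trades recall for precision. The posterior matrix here is made up.

import numpy as np

def extract_alignments(posterior, alpha):
    # posterior: (len_src, len_tgt) matrix of alignment probabilities
    return [(int(i), int(j)) for i, j in zip(*np.where(posterior > alpha))]

posterior = np.array([[0.90, 0.05, 0.20],
                      [0.10, 0.60, 0.55],
                      [0.02, 0.03, 0.95]])
print(extract_alignments(posterior, alpha=0.15))   # recall-oriented setting, 5 links
print(extract_alignments(posterior, alpha=0.50))   # higher-precision setting, 4 links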
A Discriminative Neural Model for Cross-Lingual Word Alignment
1909.00444
Table 3: Precision, Recall, and F1 on Arabic GALE test data.
['Method', 'P', 'R', 'F1']
[['Avg. attention.', '8.46', '32.50', '13.42'], ['Avg. attention (BPE)', '10.11', '17.27', '12.75'], ['FastAlign', '62.26', '51.06', '56.11'], ['FastAlign\xa0(BPE)', '62.74', '51.25', '56.42'], ['DiscAlign', '91.30', '75.66', '[BOLD] 82.74'], ['DiscAlign (BPE)', '87.05', '76.98', '[BOLD] 81.71']]
Furthermore, we see that average attention performs abysmally in Arabic. The best thresholds for Arabic differed substantially from those for Chinese: for average attention (with and without BPE) the values were α=0.05 and α=0.1, while for the discriminative aligner they were α=0.94 and α=0.99 (compared to 0.15 for Chinese). This is reflected in the high precision of the discriminative aligner.
A Discriminative Neural Model for Cross-Lingual Word Alignment
1909.00444
Table 4: F1 results on OntoNotes test for systems trained on data projected via FastAlign and DiscAlign.
['Method', '# train', 'P', 'R', 'F1']
[['Zh Gold', '36K', '75.46', '80.55', '77.81'], ['FastAlign', '36K', '38.99', '36.61', '37.55'], ['FastAlign', '53K', '39.46', '36.65', '37.77'], ['DiscAlign', '36K', '51.94', '52.37', '51.76'], ['DiscAlign', '53K', '51.92', '51.93', '51.57']]
Based on our findings that (i) discriminative alignments outperform common unsupervised baselines in two typologically divergent languages, (ii) this performance boost leads to major downstream improvements on NER, (iii) only a small amount of labelled data is needed to realize these improvements, and (iv) these labelled examples can be obtained from L2 speakers with minimal training, we conclude with a call for additional annotation efforts in a wider variety of languages. While multilingual datasets directly annotated for a given task typically lead to the highest-performing systems, these datasets are task-specific and not robust to ontology changes. In contrast, the framework of projection via discriminatively-trained alignments which we present is task-agnostic (any token-level annotations can be projected) and requires only one source-side dataset to be annotated.
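A minimal sketch of the task-agnostic projection this framework enables: copy a token-level label (here an NER tag) from each source token to the target tokens aligned to it. The alignment links, labels, and tagging scheme are illustrative, not from the paper.

def project_labels(src_labels, alignment, tgt_len, default="O"):
    tgt_labels = [default] * tgt_len
    for src_i, tgt_j in alignment:            # links produced by the aligner
        if src_labels[src_i] != default:
            tgt_labels[tgt_j] = src_labels[src_i]
    return tgt_labels

src_labels = ["B-PER", "I-PER", "O", "O"]     # hypothetical source tagging
alignment = [(0, 1), (1, 1), (2, 0), (3, 2)]  # hypothetical source->target links
print(project_labels(src_labels, alignment, tgt_len=3))   # ['O', 'I-PER', 'O']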
A Discriminative Neural Model for Cross-Lingual Word Alignment
1909.00444
Table 7: Sentences per minute and average scores against gold-labelled data for sentences annotated for alignment by human annotators (Hu.), compared to DiscAlign (DA) on the same sentences.
['Model', 'sents/m', 'P', 'R', 'F1']
[['Human', '4.4', '90.09', '62.85', '73.92'], ['DiscAlign', '-', '74.54', '72.15', '73.31'], ['Hu. (NER)', '-', '87.73', '71.02', '78.24'], ['DA (NER)', '-', '77.37', '67.69', '71.94']]
The annotators achieve high overall precision for alignment, but fall short on recall. When only alignments of NER spans are considered, their F1 score improves considerably. Additionally, human annotators outperform DiscAlign when evaluated on the same sentences.
Structure-Infused Copy Mechanisms for Abstractive Summarization
1806.05658
Table 11: Effects of applying the coverage regularizer and the reference beam search to structural models, evaluated on test-1951. Combining both yields the highest scores.
['| [ITALIC] V|', '[BOLD] R-2', '[BOLD] Train Speed', '[BOLD] InVcb', '[BOLD] InVcb+Src']
[['1K', '13.99', '2.5h/epoch', '60.57', '76.04'], ['2K', '15.35', '2.7h/epoch', '69.71', '80.72'], ['5K', '17.25', '3.2h/epoch', '79.98', '86.51'], ['10K', '17.62', '3.8h/epoch', '88.26', '92.18']]
Coverage and reference beam. The coverage regularizer is applied in a second training stage, where the system is trained for an extra 5 epochs with coverage and the model yielding the lowest validation loss is selected. Both coverage and ref_beam can improve the system performance. Our observation suggests that ref_beam is an effective addition to shorten the gap between different systems. Output vocabulary size. All our models by default use an output vocabulary of 5K words in order to make the results comparable to state-of-the-art-systems. However, we observe that there is a potential to further boost the system performance (17.25→17.62 R-2 F1-score, w/o coverage or ref_beam) if we had chosen to use a larger vocabulary (10K) and can endure a slightly longer training time (1.2x). The gap between the two conditions shortens as the size of the output vocabulary is increased.
Structure-Infused Copy Mechanisms for Abstractive Summarization
1806.05658
Table 9: Informativeness, fluency, and faithfulness scores of summaries. They are rated by Amazon turkers on a Likert scale of 1 (worst) to 5 (best). We choose to evaluate Struct+2Way+Relation (as opposed to 2Way+Word) because it focuses on preserving source relations in the summaries.
['[BOLD] System', '[BOLD] Info.', '[BOLD] Fluency', '[BOLD] Faithful.']
[['Struct+Input', '2.9', '3.3', '3.0'], ['Struct+2Way+Relation', '3.0', '3.4', '3.1'], ['Ground-truth Summ.', '3.2', '3.5', '3.1']]
Linguistic quality. The three criteria are fluency (is the summary grammatical and well-formed?), informativeness (to what extent is the meaning of the original sentence preserved in the summary?), and faithfulness (is the summary accurate and faithful to the original?). We found that “Struct+2Way+Relation” outperforms “Struct+Input” on all three criteria. It also compares favorably to ground-truth summaries on “fluency” and “faithfulness.” On the other hand, the ground-truth summaries, corresponding to article titles, are judged as less satisfying by human raters.
Structure-Infused Copy Mechanisms for Abstractive Summarization
1806.05658
Table 10: Percentages of source dependency relations (of various types) preserved in the system summaries.
['[BOLD] System', '[BOLD] nsubj', '[BOLD] dobj', '[BOLD] amod', '[BOLD] nmod', '[BOLD] nmod:poss', '[BOLD] mark', '[BOLD] case', '[BOLD] conj', '[BOLD] cc', '[BOLD] det']
[['Baseline', '7.23', '12.07', '20.45', '8.73', '12.46', '15.83', '14.84', '9.72', '5.03', '2.22'], ['Struct+Input', '7.03', '11.72', '19.72', '[BOLD] 9.17↑', '12.46', '15.35', '14.69', '9.55', '4.67', '1.97'], ['Struct+Hidden', '[BOLD] 7.78↑', '[BOLD] 12.34↑', '[BOLD] 21.11↑', '[BOLD] 9.18↑', '[BOLD] 14.86↑', '14.93', '[BOLD] 15.84↑', '9.47', '3.93', '[BOLD] 2.65↑'], ['Struct+2Way+Word', '[BOLD] 7.46↑', '[BOLD] 12.69↑', '[BOLD] 20.59↑', '[BOLD] 9.03↑', '[BOLD] 13.00↑', '15.83', '14.43', '8.86', '3.48', '1.91'], ['Struct+2Way+Relation', '[BOLD] 7.35↑', '[BOLD] 12.07↑', '[BOLD] 20.59↑', '8.68', '[BOLD] 13.47↑', '15.41', '14.39', '9.12', '4.30', '1.89']]
Dependency relations. A source relation is considered preserved if both its words appear in the summary. We observe that the models implementing structure-infused copy mechanisms (e.g., “Struct+2Way+Word”) are more likely to preserve important dependency relations in the summaries, including nsubj, dobj, amod, nmod, and nmod:poss. Dependency relations that are less important (mark, case, conj, cc, det) are less likely to be preserved. These results show that our structure-infused copy mechanisms can learn to recognize the importance of dependency relations and selectively preserve them in the summaries.
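A minimal sketch of the preservation metric defined above (a source dependency relation counts as preserved when both of its words appear in the summary); all names and data are illustrative.

def preserved_ratio(relations, summary_tokens):
    summary = set(t.lower() for t in summary_tokens)
    kept = [(h, d, rel) for h, d, rel in relations
            if h.lower() in summary and d.lower() in summary]
    return len(kept) / len(relations) if relations else 0.0

relations = [("police", "arrested", "nsubj"),
             ("arrested", "suspect", "dobj"),
             ("suspect", "young", "amod")]
summary = "police arrested suspect in raid".split()
print(preserved_ratio(relations, summary))   # 2 of 3 relations preserved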
When to Talk: Chatbot Controls the Timing of Talking during Multi-turn Open-domain Dialogue Generation
1912.09879
Table 4: Automatic evaluation results of all the models on the Ubuntu and modified DailyDialog test datasets (the table above shows the results on the Ubuntu dataset, the one below on DailyDialog). The baseline HRED cannot control the timing of talking, so its accuracy and macro-F1 are left empty. All the best results are shown in bold.
['[BOLD] Models', '[BOLD] PPL', '[BOLD] BLEU1', '[BOLD] BLEU2', '[BOLD] BLEU3', '[BOLD] BLEU4', '[BOLD] BERTScore', '[BOLD] Dist-1', '[BOLD] Dist-2', '[BOLD] Acc', '[BOLD] Macro-F1']
[['[BOLD] HRED', '63.06', '0.0774', '0.0486', '0.0427', '0.0399', '0.8308', '0.0427', '0.1805', '-', '-'], ['[BOLD] HRED-CF', '[BOLD] 60.28', '[BOLD] 0.0877', '[BOLD] 0.0605', '[BOLD] 0.0547', '[BOLD] 0.0521', '0.8343', '0.0327', '0.1445', '0.6720', '0.6713'], ['[BOLD] W2T-GCN', '62.18', '0.0844', '0.0579', '0.0521', '0.0504', '0.8342', '0.0237', '0.1028', '0.6835', '0.6741'], ['[BOLD] W2T-GAT', '60.89', '0.0746', '0.0519', '0.0477', '0.0461', '0.8320', '0.0017', '0.0029', '0.7137', '0.6686'], ['[BOLD] W2T-GGAT', '64.46', '0.0816', '0.0559', '0.0504', '0.0480', '[BOLD] 0.8344', '0.0552', '0.2651', '0.7348', '0.6936'], ['[BOLD] W2T-DGGNN', '64.59', '0.0796', '0.0548', '0.0496', '0.0473', '0.8328', '[BOLD] 0.0623', '[BOLD] 0.2790', '[BOLD] 0.7192', '[BOLD] 0.7015']]
For all the models, we adopt early stopping to avoid overfitting. Generation quality: the strong baseline HRED-CF, which explicitly models the timing of talking, is much better than HRED on almost all metrics such as PPL and BLEU; especially in terms of BLEU and Embedding Average, HRED-CF exceeds HRED by up to 1.5%. The experimental results demonstrate that controlling the timing of talking can significantly improve the quality of dialogue generation. The other two modified datasets show the same conclusion, which can be found in the appendix. Compared with simple graph neural networks such as GCN and GAT (third and fourth rows), our proposed gated graph neural network is much better, which indicates that simply applying graph neural networks is not enough and that the gated mechanism is very important for modeling the timing of talking. Compared with the strong baseline HRED-CF, the response quality of our proposed graph neural networks is weaker; we attribute this to the negative transfer problem (Liu et al.) and leave it to future research. The quality of the responses is still much better than HRED, and although the BLEU, PPL and embedding average scores of the GNN-based models are lower than HRED-CF, the diversity of the generations is better than the strong baseline HRED-CF (distinct-1 and distinct-2). To test the effectiveness of the gated mechanism, we also replace the first gated mechanism with a self-attention mechanism and propose another strong baseline, W2T-GGAT (fifth row); except for PPL and accuracy, its performance is far worse than our proposed double-gated graph neural network. Decision performance: our proposed graph neural network models significantly outperform the strong baseline HRED-CF by up to 5.77% in accuracy and 2.07% in macro-F1. Furthermore, compared with the graph neural networks without the gated mechanism, our proposed model achieves better accuracy and macro-F1, which demonstrates that the gated mechanism is very important for modeling the timing of talking.
When to Talk: Chatbot Controls the Timing of Talking during Multi-turn Open-domain Dialogue Generation
1912.09879
Table 4: Automatic evaluation results of all the models on the Ubuntu and modified DailyDialog test datasets (the table above shows the results on the Ubuntu dataset, the one below on DailyDialog). The baseline HRED cannot control the timing of talking, so its accuracy and macro-F1 are left empty. All the best results are shown in bold.
['[BOLD] Models', '[BOLD] PPL', '[BOLD] BLEU1', '[BOLD] BLEU2', '[BOLD] BLEU3', '[BOLD] BLEU4', '[BOLD] BERTScore', '[BOLD] Dist-1', '[BOLD] Dist-2', '[BOLD] Acc', '[BOLD] Macro-F1']
[['[BOLD] HRED', '21.91', '0.1922', '0.1375', '0.1266', '0.1235', '[BOLD] 0.8735', '0.0573', '0.2236', '-', '-'], ['[BOLD] HRED-CF', '[BOLD] 20.01', '[BOLD] 0.1950', '0.1397', '[BOLD] 0.1300', '[BOLD] 0.1270', '0.8718', '0.0508', '0.2165', '0.8321', '0.8162'], ['[BOLD] W2T-GCN', '23.21', '0.1807', '0.1262', '0.1179', '0.1160', '0.8668', '0.0437', '0.1887', '0.8273', '0.8149'], ['[BOLD] W2T-GAT', '21,98', '0.1724', '0.1181', '0.1109', '0.1129', '0.8645', '0.0077', '0.0187', '0.7290', '0.7024'], ['[BOLD] W2T-GGAT', '21.12', '0.1909', '0.1374', '0.1276', '0.1249', '0.8712', '[BOLD] 0.0698', '[BOLD] 0.3319', '0.8281', '0.8168'], ['[BOLD] W2T-DGGNN', '21.52', '0.1944', '[BOLD] 0.1399', '0.1298', '0.1269', '0.8721', '0.0673', '0.3296', '[BOLD] 0.8369', '[BOLD] 0.8248']]
For all the models, we adopt early stopping to avoid overfitting. Generation quality: the strong baseline HRED-CF, which explicitly models the timing of talking, is much better than HRED on almost all metrics such as PPL and BLEU; especially in terms of BLEU and Embedding Average, HRED-CF exceeds HRED by up to 1.5%. The experimental results demonstrate that controlling the timing of talking can significantly improve the quality of dialogue generation. The other two modified datasets show the same conclusion, which can be found in the appendix. Compared with simple graph neural networks such as GCN and GAT (third and fourth rows), our proposed gated graph neural network is much better, which indicates that simply applying graph neural networks is not enough and that the gated mechanism is very important for modeling the timing of talking. Compared with the strong baseline HRED-CF, the response quality of our proposed graph neural networks is weaker; we attribute this to the negative transfer problem (Liu et al.) and leave it to future research. The quality of the responses is still much better than HRED, and although the BLEU, PPL and embedding average scores of the GNN-based models are lower than HRED-CF, the diversity of the generations is better than the strong baseline HRED-CF (distinct-1 and distinct-2). To test the effectiveness of the gated mechanism, we also replace the first gated mechanism with a self-attention mechanism and propose another strong baseline, W2T-GGAT (fifth row); except for PPL and accuracy, its performance is far worse than our proposed double-gated graph neural network. Decision performance: our proposed graph neural network models significantly outperform the strong baseline HRED-CF by up to 5.77% in accuracy and 2.07% in macro-F1. Furthermore, compared with the graph neural networks without the gated mechanism, our proposed model achieves better accuracy and macro-F1, which demonstrates that the gated mechanism is very important for modeling the timing of talking.
Medical Exam Question Answering with Large-scale Reading Comprehension
1802.10279
Table 3: Results (accuracy) of SeaReader and other approaches on MedQA task
['[EMPTY]', 'Valid set', 'Test set']
[['Iterative Attention', '60.7', '59.3'], ['Neural Reasoner', '54.8', '53.0'], ['R-NET', '65.2', '64.5'], ['SeaReader', '[BOLD] 73.9', '[BOLD] 73.6'], ['SeaReader (ensemble)', '[BOLD] 75.8', '[BOLD] 75.3'], ['Human passing score', '60.0 (360/600)', '60.0 (360/600)']]
Quantitative Results We evaluated model performance by the accuracy of choosing the best candidate. Our SeaReader clearly outperforms the baseline models by a large margin. We also include the human passing score as a reference. As MedQA is not a commonsense question answering task, human performance relies heavily on individual skill and knowledge. A passing score is the minimum score required to be certified as a Licensed Physician and should reflect decent expertise in medicine.
Medical Exam Question Answering with Large-scale Reading Comprehension
1802.10279
Table 4: Test performance with different number of documents given per candidate answer
['Number of documents', 'top-1', 'top-5', 'top-10', 'top-20']
[['SeaReader accuracy', '57.8', '71.7', '73.6', '74.4'], ['Relevant document ratio', '0.90', '0.54', '0.46', '0.29']]
To discover the general relevancy of documents returned by our retrieval system, we hand-labeled the relevancy of 1000 retrieved documents for 100 problems. We notice that there is still performance gain using as many as 20 documents per candidate answer, while the ratio of relevant documents is already low. This illustrates SeaReader’s ability to discern useful information from large-scale text and integrate them in reasoning and decision-making.
Bridging the Gap for Tokenizer-Free Language Models
1908.10322
Table 2: Comparing recent language model results on lm1b.
['[EMPTY]', 'Segmentation', 'Context Length', '# of params', 'Perplexity', 'Bits/Byte']
[['shazeer2017outrageously', 'Word', 'Fixed', '6.0B', '28.0', '0.929'], ['shazeer2018mesh', 'Word-Piece', 'Fixed', '4.9B', '24.0', '0.886'], ['baevski2018adaptive', 'Word', 'Fixed', '1.0B', '23.0', '0.874'], ['transformerxl', 'Word', 'Arbitrary', '0.8B', '21.8', '0.859'], ['bytelmaaai', 'Byte', 'Fixed', '0.2B', '40.6', '1.033'], ['Ours', 'Byte', 'Fixed', '0.8B', '23.0', '0.874']]
We observe that tokenizer-free LM performance improves significantly (40.6 to 23.0) when the model capacity is increased from 0.2B to 0.8B parameters. With sufficient capacity our byte-level LM is competitive with word based models (ranging from 21.8 to 28.0). Note, our model is able to achieve comparable performance without any explicit signal of word boundaries.
Classical Structured Prediction Losses for Sequence to Sequence Learning
1711.04956
Table 5: Comparison to Beam Search Optimization. We report the best likelihood (MLE) and BSO results from Wiseman and Rush (2016), as well as results from our MLE reimplementation and training with Risk. Results based on unnormalized beam search (k=5).
['[EMPTY]', '[BOLD] BLEU', 'Δ']
[['MLE', '24.03', '[EMPTY]'], ['+ BSO', '26.36', '+2.33'], ['MLE Reimplementation', '23.93', '[EMPTY]'], ['+ Risk', '26.68', '+2.75']]
Risk significantly improves BLEU compared to our baseline, at +2.75 BLEU, which is slightly better than the +2.33 BLEU improvement reported for Beam Search Optimization. This shows that classical objectives for structured prediction are still very competitive.
Classical Structured Prediction Losses for Sequence to Sequence Learning
1711.04956
Table 1: Test accuracy in terms of BLEU on IWSLT’14 German-English translation with various loss functions cf. Figure 1. W & R (2016) refers to Wiseman and Rush (2016), B (2016) to Bahdanau et al. (2016), [S] indicates sequence level-training and [T] token-level training. We report averages and standard deviations over five runs with different random initialization.
['[EMPTY]', '[BOLD] test', '[BOLD] std']
[['MLE (W & R, 2016) [T]', '24.03', '[EMPTY]'], ['BSO (W & R, 2016) [S]', '26.36', '[EMPTY]'], ['Actor-critic (B, 2016) [S]', '28.53', '[EMPTY]'], ['Huang et\xa0al. ( 2017 ) [T]', '28.96', '[EMPTY]'], ['Huang et\xa0al. ( 2017 ) (+LM) [T]', '29.16', '[EMPTY]'], ['TokNLL\xa0[T]', '31.78', '0.07'], ['TokLS\xa0[T]', '32.23', '0.10'], ['SeqNLL\xa0[S]', '32.68', '0.09'], ['Risk\xa0[S]', '32.84', '0.08'], ['MaxMargin\xa0[S]', '32.55', '0.09'], ['MultiMargin\xa0[S]', '32.59', '0.07'], ['SoftmaxMargin\xa0[S]', '32.71', '0.07']]
Our baseline token-level results are several points above other figures in the literature and we further improve these results by up to 0.61 BLEU with Risk training.
Classical Structured Prediction Losses for Sequence to Sequence Learning
1711.04956
Table 2: Validation and test BLEU for loss combination strategies. We either use token-level TokLS and sequence-level Riskindividually or combine them as a weighted combination, a constrained combination, a random choice for each sample, cf. §3.3.
['[EMPTY]', '[BOLD] valid', '[BOLD] test']
[['TokLS', '33.11', '32.21'], ['Risk\xa0only', '33.55', '32.45'], ['Weighted', '33.91', '32.85'], ['Constrained', '33.77', '32.79'], ['Random', '33.70', '32.61']]
Next, we compare various strategies to combine sequence-level and token-level objectives (cf. §3.3). For these experiments we use 5 candidate sequences per training example for faster experimental turnaround. We consider Risk as the sequence-level loss and label smoothing as the token-level loss. We also compare to randomly choosing between token-level and sequence-level updates and find that it underperforms the more principled constrained strategy. In the remaining experiments we use the weighted strategy.
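A minimal sketch of two of the combination strategies compared above, applied per training example: a fixed weighted sum of the token-level and sequence-level losses, and a random per-sample choice between them (the constrained variant is omitted). The loss values and mixing weight below are placeholders, not values from the paper.

import random

def weighted_combination(tok_loss, seq_loss, weight=0.5):
    # fixed interpolation of the two objectives
    return weight * tok_loss + (1 - weight) * seq_loss

def random_choice(tok_loss, seq_loss):
    # pick one objective at random for this sample
    return tok_loss if random.random() < 0.5 else seq_loss

print(weighted_combination(2.3, 0.8), random_choice(2.3, 0.8))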
Classical Structured Prediction Losses for Sequence to Sequence Learning
1711.04956
Table 3: Effect of initializing sequence-level training (Risk) with parameters from token-level likelihood (TokNLL) or label smoothing (TokLS).
['[EMPTY]', '[BOLD] valid', '[BOLD] test']
[['TokNLL', '32.96', '31.74'], ['Risk\xa0init with TokNLL', '33.27', '32.07'], ['Δ', '+0.31', '+0.33'], ['TokLS', '33.11', '32.21'], ['Risk\xa0init with TokLS', '33.91', '32.85'], ['Δ', '+0.8', '+0.64']]
So far we initialized sequence-level models with parameters from a token-level model trained with label smoothing. The improvement from initializing with TokNLL is only 0.3 BLEU with respect to the TokNLL baseline, whereas the improvement from initializing with TokLS is 0.6-0.8 BLEU. We believe that the regularization provided by label smoothing leads to models with less sharp distributions that are a better starting point for sequence-level training.
Classical Structured Prediction Losses for Sequence to Sequence Learning
1711.04956
Table 4: Generating candidates online or offline.
['[EMPTY]', '[BOLD] valid', '[BOLD] test']
[['Online generation', '33.91', '32.85'], ['Offline generation', '33.52', '32.44']]
Next, we consider whether refreshing the candidate subset at every training step (online) results in better accuracy than generating candidates before training and keeping the set static throughout training (offline). However, the online setting is much slower, since regenerating the candidate set requires incremental (left-to-right) inference with our model, which is very slow compared to efficient forward/backward passes over large batches of pre-generated hypotheses.
Classical Structured Prediction Losses for Sequence to Sequence Learning
1711.04956
Table 6: Accuracy on Gigaword abstractive summarization in terms of F-measure Rouge-1 (RG-1), Rouge-2 (RG-2), and Rouge-L (RG-L) for token-level label smoothing, and Risk optimization of all three ROUGE F1 metrics. [T] indicates a token-level objective and [S] indicates a sequence level objectives. ABS+ refers to Rush et al. (2015), RNN MLE/MRT (Ayana et al., 2016), WFE (Suzuki and Nagata, 2017), SEASS (Zhou et al., 2017), DRGD (Li et al., 2017).
['[EMPTY]', '[BOLD] RG-1', '[BOLD] RG-2', '[BOLD] RG-L']
[['ABS+ [T]', '29.78', '11.89', '26.97'], ['RNN MLE [T]', '32.67', '15.23', '30.56'], ['RNN MRT [S]', '36.54', '16.59', '33.44'], ['WFE [T]', '36.30', '17.31', '33.88'], ['SEASS [T]', '36.15', '17.54', '33.63'], ['DRGD [T]', '36.27', '17.57', '33.62'], ['TokLS', '36.53', '18.10', '33.93'], ['+ Risk\xa0RG-1', '36.96', '17.61', '34.18'], ['+ Risk\xa0RG-2', '36.65', '18.32', '34.07'], ['+ Risk\xa0RG-L', '36.70', '17.88', '34.29']]
We optimize all three ROUGE metrics separately and find that Risk can further improve our strong baseline. We also tried optimizing Risk on its own, but accuracy was generally lower on the validation set than with the weighted combination: RG-1 (36.59 Risk only vs. 36.67 Weighted), RG-2 (17.34 vs. 18.05), and RG-L (33.66 vs. 33.98).
Automatic Severity Classification of Coronary Artery Disease via Recurrent Capsule Network
1807.06718
TABLE IV: Class-wise Performance (in Terms of F1-score) of Our RCN Model and Baseline Methods
['Method', 'Input', 'r:modifier (e1,e2)', 'r:negative (e2,e1)', 'r:position (e1,e2)', 'r:percentage (e1,e2)', 'r:percentage (e2,e1)']
[['CNN + MaxPooling\xa0[nguyen2015relation]', 'Word + Position', '96.33', '[BOLD] 99.54', '95.45', '80.60', '97.53'], ['CNN + MaxPooling\xa0[nguyen2015relation]', 'Word + Position + Entity Type*', '96.28', '[BOLD] 99.54', '[BOLD] 96.21', '80.00', '98.77'], ['BiLSTM + MaxPooling\xa0[zhang2015relation]', 'Word Only', '95.33', '99.09', '95.35', '73.85', '95.65'], ['BiLSTM + MaxPooling\xa0[zhang2015relation]', 'Word + Entity Type*', '96.66', '99.09', '94.62', '81.16', '96.20'], ['BiLSTM + Attention\xa0[zhou2016attention]', 'Word Only', '94.87', '99.09', '93.44', '73.85', '93.17'], ['BiLSTM + Attention\xa0[zhou2016attention]', 'Word + Entity Type*', '[BOLD] 97.15', '99.09', '93.85', '[BOLD] 83.87', '98.14'], ['CRNN + MaxPooling\xa0[raj2017learning]', 'Word Only', '95.28', '98.62', '93.54', '77.61', '98.16'], ['CRNN + MaxPooling\xa0[raj2017learning]', 'Word + Entity Type*', '96.23', '[BOLD] 99.54', '94.70', '80.00', '96.34'], ['CRNN + Attention\xa0[raj2017learning]', 'Word Only', '94.77', '98.64', '93.94', '81.69', '96.89'], ['CRNN + Attention\xa0[raj2017learning]', 'Word + Entity Type*', '96.50', '[BOLD] 99.54', '94.62', '80.60', '98.16'], ['[BOLD] Our RCN model', 'Word Only', '96.35', '[BOLD] 99.54', '93.08', '76.47', '98.14'], ['[BOLD] Our RCN model', 'Word + Entity Type*', '[BOLD] 97.15', '[BOLD] 99.54', '94.74', '82.35', '[BOLD] 99.38'], ['The entity type features are proposed by us.', 'The entity type features are proposed by us.', 'The entity type features are proposed by us.', 'The entity type features are proposed by us.', 'The entity type features are proposed by us.', '[EMPTY]', '[EMPTY]']]
Furthermore, we compare the class-wise performance of our RCN model with the baseline methods. Firstly, we clearly observe that our model ranks first on three relations (i.e. r:modifier[e1,e2], r:negative[e2,e1] and r:percentage[e2,e1]) and second on one relation (i.e. r:percentage[e1,e2]) among all five relations. The F1-scores are higher than 90.00 except for the r:percentage(e1,e2) relation, because of its low frequency (only 70 instances) in the training set. Secondly, we again observe, as mentioned above, that the proposed entity type features help improve the performance of the original word embedding features.
Automatic Severity Classification of Coronary Artery Disease via Recurrent Capsule Network
1807.06718
TABLE III: Comparative Results of Our RCN Model and Baseline Methods
['Method', 'Input', 'P', 'R', 'F1']
[['CNN + MaxPooling\xa0[nguyen2015relation]', 'Word + Position', '95.54', '96.40', '95.97'], ['CNN + MaxPooling\xa0[nguyen2015relation]', 'Word + Position + Entity Type*', '[BOLD] 96.53', '95.95', '96.24'], ['BiLSTM + MaxPooling\xa0[zhang2015relation]', 'Word Only', '94.45', '94.30', '94.87'], ['BiLSTM + MaxPooling\xa0[zhang2015relation]', 'Word + Entity Type*', '95.52', '95.95', '95.74'], ['BiLSTM + Attention\xa0[zhou2016attention]', 'Word Only', '94.27', '93.70', '93.98'], ['BiLSTM + Attention\xa0[zhou2016attention]', 'Word + Entity Type*', '96.11', '96.40', '96.26'], ['CRNN + MaxPooling\xa0[raj2017learning]', 'Word Only', '93.97', '95.80', '94.88'], ['CRNN + MaxPooling\xa0[raj2017learning]', 'Word + Entity Type*', '94.18', '97.00', '95.57'], ['CRNN + Attention\xa0[raj2017learning]', 'Word Only', '93.70', '95.80', '94.74'], ['CRNN + Attention\xa0[raj2017learning]', 'Word + Entity Type*', '95.68', '96.25', '95.96'], ['[BOLD] Our RCN model', 'Word Only', '95.14', '96.85', '95.99'], ['[BOLD] Our RCN model', 'Word + Entity Type*', '95.59', '[BOLD] 97.45', '[BOLD] 96.51'], ['The entity type features are proposed by us.', 'The entity type features are proposed by us.', 'The entity type features are proposed by us.', 'The entity type features are proposed by us.', 'The entity type features are proposed by us.']]
First of all, we can observe that our model with entity type features outperforms these reference algorithms, with 95.59% Precision, 97.45% Recall and 96.51% F1-score. Compared with the original baselines without entity type features, the improvements are 1.5, 1.5, 1.05, 0.45 and 1.2 points in Recall and 0.54, 1.64, 2.53, 1.63 and 1.77 points in F1-score, respectively. Without entity type features, our model also outperforms the reference algorithms without entity type features: both its Recall and F1-score are the best, and its Precision is just below that of the CNN + MaxPooling method, which uses an extra position feature while our model does not. Secondly, the entity type features proposed by us help improve the performance of the original baselines; the gains in F1-score brought by the entity type features are 0.27, 0.87, 2.28, 0.69 and 1.22 points, respectively. However, the improved performance is still worse than that of our model. Thirdly, among the baselines, it is interesting to note that without entity type features the attention-based pooling technique performs worse than the conventional max-pooling strategy, which has also been observed earlier by Sahu and Anand [sahu2017drug] and Raj et al. [raj2017learning], while with entity type features attention-based pooling performs better than max-pooling.
Automatic Severity Classification of Coronary Artery Disease via Recurrent Capsule Network
1807.06718
TABLE V: Performance of Our Automatic Severity Classification Method
['[EMPTY]', 'P', 'R', 'F1']
[['Mild Stenosis', '100.00', '98.62', '99.31'], ['Moderate Stenosis', '93.33', '93.33', '93.33'], ['Severe Stenosis', '75.00', '90.00', '81.82'], ['Overall Accuracy', '97.00', '97.00', '97.00']]
To evaluate the effectiveness of our proposed severity classification method, we randomly select 200 coronary arteriography texts for evaluation. First of all, our method obtains an overall Accuracy of 97.00%; only six texts are classified into the wrong level. Secondly, most of the coronary arteriography texts (72.5%) belong to mild stenosis in practice, and our method achieves relatively high Precision (100%), Recall (98.62%) and F1-score (99.31%) on this class. Thirdly, our method occasionally confuses moderate and severe stenosis. The Precision, Recall and F1-score of moderate stenosis are all 93.33%, which is still acceptable. As for severe stenosis, which appears rarely (5%) in practice, the Precision is only 75.00% but the Recall is 90%; that is, only one severe-stenosis text is not recognized by our method.
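Purely as an illustration of the final classification step, the sketch below maps the worst extracted stenosis percentage of a report to a severity level; the cut-off values are assumptions chosen for the example and are not taken from the paper.

def classify_severity(stenosis_percentages):
    # assumed illustrative thresholds, not the rule used in the paper
    worst = max(stenosis_percentages) if stenosis_percentages else 0
    if worst >= 70:
        return "severe"
    if worst >= 50:
        return "moderate"
    return "mild"

print(classify_severity([30, 55]))   # 'moderate' under these assumed cut-offs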
Automatic Severity Classification of Coronary Artery Disease via Recurrent Capsule Network
1807.06718
TABLE VI: Comparisons between Different Input Features with Different Routing Iterations
['r', 'Word Only P', 'Word Only R', 'Word Only F1', 'Word + Entity Type P', 'Word + Entity Type R', 'Word + Entity Type F1']
[['1', '94.53', '95.80', '95.16', '94.97', '96.25', '95.61'], ['2', '[BOLD] 95.14', '96.85', '[BOLD] 95.99', '94.74', '97.30', '96.01'], ['3', '94.69', '96.25', '95.46', '95.15', '97.15', '96.14'], ['4', '94.95', '95.80', '95.37', '[BOLD] 95.59', '[BOLD] 97.45', '[BOLD] 96.51'], ['5', '93.25', '[BOLD] 97.30', '95.23', '95.43', '97.00', '96.21']]
To study the effect of the input features and the routing iterations in our model, we experimentally compare the performance of different input features with different numbers of routing iterations. The best Precision, Recall and F1-score are 95.59%, 97.45% and 96.51%, respectively. Comparing the model inputs, we observe that the performance of our model with entity type features is better than that without them; the gains are 0.66 in Precision, 0.63 in Recall and 0.65 in F1-score on average. Comparing the five rows of the table, we observe that whatever inputs are used, the performance (in terms of F1-score) first grows and then drops as the number of routing iterations r increases. When only words are used, our model achieves its best performance with r=2, and when both words and entity types are used, it achieves its best performance with r=4.
Automatic Severity Classification of Coronary Artery Disease via Recurrent Capsule Network
1807.06718
TABLE VII: Comparison of the Interest of Uni/Bi-LSTMs and Capsules
['[EMPTY]', 'P', 'R', 'F1']
[['All Bi-LSTMs', '94.76', '[BOLD] 97.60', '96.16'], ['Softmax layer', '95.16', '97.30', '96.22'], ['[BOLD] Uni/Bi-LSTMs & Capsule layer', '[BOLD] 95.59', '97.45', '[BOLD] 96.51']]
To analyze the interest of the Uni/Bi-LSTMs and capsules, we compare our model with a variant that uses Bi-LSTMs throughout and with a variant that replaces the capsule layer by a fully-connected layer with a softmax function. From the table, we observe that the F1-score of our model is higher than that of the all-Bi-LSTM variant by 0.35 points and higher than that of the softmax-layer variant by 0.29 points. This indicates the interest of the proposed Uni/Bi-LSTMs and capsules.
Select-Additive Learning: Improving Generalization in Multimodal Sentiment Analysis
1609.05244
Table 2: Across data set experiments
['[EMPTY]', 'Youtube [BOLD] CNN', 'Youtube [BOLD] SAL-CNN', 'MOUD [BOLD] CNN', 'MOUD [BOLD] SAL-CNN']
[['Verbal', '0.605', '[BOLD] 0.657', '0.522', '[BOLD] 0.569'], ['Acoustic', '0.441', '[BOLD] 0.564', '0.455', '[BOLD] 0.549'], ['Visual', '0.492', '[BOLD] 0.549', '[BOLD] 0.555', '0.548'], ['Ver+Acou', '0.642', '[BOLD] 0.652', '0.515', '[BOLD] 0.574'], ['Ver+Vis', '0.642', '[BOLD] 0.667', '0.542', '[BOLD] 0.574'], ['Acou+Vis', '0.452', '[BOLD] 0.559', '0.533', '[BOLD] 0.554'], ['All', '0.611', '[BOLD] 0.667', '0.531', '[BOLD] 0.574']]
First, it is noteworthy that in some cases the performance of the CNN is worse than mere chance. This inferior performance substantiates the existence of the non-generalization problems we are targeting.
FRAGE: Frequency-Agnostic Word Representation
1809.06858
Table 1: Results on three word similarity datasets.
['[BOLD] RG65 [BOLD] Orig.', '[BOLD] RG65 [BOLD] with FRAGE', '[BOLD] WS [BOLD] Orig.', '[BOLD] WS [BOLD] with FRAGE', '[BOLD] RW [BOLD] Orig.', '[BOLD] RW [BOLD] with FRAGE']
[['75.63', '[BOLD] 78.78', '66.74', '[BOLD] 69.35', '52.67', '[BOLD] 58.12']]
Word Similarity From the table, we can see that our method consistently outperforms the baseline on all datasets. In particular, we outperform the baseline by about 5.4 points on the rare word dataset RW. This result shows that our method improves the representation of words, especially rare words.
FRAGE: Frequency-Agnostic Word Representation
1809.06858
Table 2: Perplexity on validation and test sets on Penn Treebank and WikiText2. The smaller the perplexity, the better the result. Baseline results are obtained from DBLP:journals/corr/abs-1708-02182 ; DBLP:journals/corr/abs-1711-03953 . “Paras” denotes the number of model parameters.
['[EMPTY]', '[EMPTY]', '[BOLD] Paras', '[BOLD] Orig. Validation', '[BOLD] Orig. Test', '[BOLD] with FRAGE Validation', '[BOLD] with FRAGE Test']
[['[BOLD] PTB', 'AWD-LSTM w/o finetune DBLP:journals/corr/abs-1708-02182 ', '24M', '60.7', '58.8', '60.2', '58.0'], ['[BOLD] PTB', 'AWD-LSTM DBLP:journals/corr/abs-1708-02182 ', '24M', '60.0', '57.3', '58.1', '56.1'], ['[BOLD] PTB', 'AWD-LSTM + continuous cache pointer DBLP:journals/corr/abs-1708-02182 ', '24M', '53.9', '52.8', '[BOLD] 52.3', '[BOLD] 51.8'], ['[BOLD] PTB', 'AWD-LSTM-MoS w/o finetune DBLP:journals/corr/abs-1711-03953 ', '24M', '58.08', '55.97', '57.55', '55.23'], ['[BOLD] PTB', 'AWD-LSTM-MoS DBLP:journals/corr/abs-1711-03953 ', '24M', '56.54', '54.44', '55.52', '53.31'], ['[BOLD] PTB', 'AWD-LSTM-MoS + dynamic evaluation DBLP:journals/corr/abs-1711-03953 ', '24M', '48.33', '47.69', '[BOLD] 47.38', '[BOLD] 46.54'], ['[BOLD] WT2', 'AWD-LSTM w/o finetune DBLP:journals/corr/abs-1708-02182 ', '33M', '69.1', '67.1', '67.9', '64.8'], ['[BOLD] WT2', 'AWD-LSTM DBLP:journals/corr/abs-1708-02182 ', '33M', '68.6', '65.8', '66.5', '63.4'], ['[BOLD] WT2', 'AWD-LSTM + continuous cache pointer DBLP:journals/corr/abs-1708-02182 ', '33M', '53.8', '52.0', '[BOLD] 51.0', '[BOLD] 49.3'], ['[BOLD] WT2', 'AWD-LSTM-MoS w/o finetune DBLP:journals/corr/abs-1711-03953 ', '35M', '66.01', '63.33', '64.86', '62.12'], ['[BOLD] WT2', 'AWD-LSTM-MoS DBLP:journals/corr/abs-1711-03953 ', '35M', '63.88', '61.45', '62.68', '59.73'], ['[BOLD] WT2', 'AWD-LSTM-MoS + dynamic evaluation DBLP:journals/corr/abs-1711-03953 ', '35M', '42.41', '40.68', '[BOLD] 40.85', '[BOLD] 39.14']]
Language Modeling: In all these settings, our method outperforms the two baselines. On the PTB dataset, our method improves over the AWD-LSTM and AWD-LSTM-MoS baselines by 0.8/1.2/1.0 and 0.74/1.13/1.15 points of test perplexity at the three checkpoints (without finetuning, finetuned, and with the continuous cache pointer or dynamic evaluation). On the WT2 dataset, which contains more rare words, our method achieves larger improvements: it improves the results of AWD-LSTM and AWD-LSTM-MoS by 2.3/2.4/2.7 and 1.21/1.72/1.54 points of test perplexity, respectively.
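For clarity, these improvements are simply differences between the "Orig. Test" and "with FRAGE Test" columns of the table; for example, for AWD-LSTM:

\[
58.8 - 58.0 = 0.8,\qquad 57.3 - 56.1 = 1.2,\qquad 52.8 - 51.8 = 1.0 \quad \text{(PTB)},
\]
\[
67.1 - 64.8 = 2.3,\qquad 65.8 - 63.4 = 2.4,\qquad 52.0 - 49.3 = 2.7 \quad \text{(WT2)}.
\]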
FRAGE: Frequency-Agnostic Word Representation
1809.06858
Table 3: BLEU scores on test set on WMT2014 English-German and IWSLT German-English tasks.
['[BOLD] WMT En→De [BOLD] Method', '[BOLD] WMT En→De [BOLD] BLEU', '[BOLD] IWSLT De→En [BOLD] Method', '[BOLD] IWSLT De→En [BOLD] BLEU']
[['ByteNet kalchbrenner2016neural ', '23.75', 'DeepConv gehring2016convolutional ', '30.04'], ['ConvS2S gehring2017convolutional ', '25.16', 'Dual transfer learning Wang2018Dual ', '32.35'], ['Transformer Base vaswani2017attention ', '27.30', 'ConvS2S+SeqNLL edunov2017classical ', '32.68'], ['Transformer Base with FRAGE', '[BOLD] 28.36', 'ConvS2S+Risk edunov2017classical ', '32.93'], ['Transformer Big vaswani2017attention ', '28.40', 'Transformer', '33.12'], ['Transformer Big with FRAGE', '[BOLD] 29.11', 'Transformer with FRAGE', '[BOLD] 33.97']]
Machine Translation: We outperform the baselines by 1.06 and 0.71 BLEU in the transformer_base and transformer_big settings on the WMT14 English-German task, respectively. The model learned with adversarial training also outperforms the original one on the IWSLT14 German-English task by 0.85 BLEU. These results show that improving word embeddings also yields gains on more complex tasks and larger datasets.
FRAGE: Frequency-Agnostic Word Representation
1809.06858
Table 10: BLEU scores on test set of the WMT14 English-German task and IWSLT14 German-English task. Our method is denoted as “FRAGE”, “Reweighting” denotes reweighting the loss of each word by reciprocal of its frequency, and “Weight Decay” denotes putting weight decay rate (0.2) on embeddings.
['[BOLD] WMT En→De [BOLD] Method', '[BOLD] WMT En→De [BOLD] BLEU', '[BOLD] IWSLT De→En [BOLD] Method', '[BOLD] IWSLT De→En [BOLD] BLEU']
[['Transformer Base ', '27.30', 'Transformer', '33.12'], ['Transformer Base + Reweighting', '26.04', 'Transformer + Reweighting', '31.04'], ['Transformer Base + Weight Decay', '26.76', 'Transformer + Weight Decay', '32.52'], ['Transformer Base with FRAGE', '[BOLD] 28.36', 'Transformer with FRAGE', '[BOLD] 33.97']]
We compare our method with other simple approaches on the machine translation tasks, namely reweighting the loss by word frequency and l2 regularization on the embeddings (weight decay). We find that these simple methods do not help on these tasks and can even have negative effects.
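For concreteness, the "Reweighting" baseline in the caption scales each token's loss by the reciprocal of its corpus frequency. Below is a minimal PyTorch-style sketch of that idea, not code from the paper; `logits`, `targets`, and `token_counts` are hypothetical inputs.

```python
import torch
import torch.nn.functional as F

def reweighted_loss(logits, targets, token_counts):
    # logits: (batch, vocab) decoder scores; targets: (batch,) gold token ids;
    # token_counts: (vocab,) float tensor of raw corpus frequencies.
    nll = F.cross_entropy(logits, targets, reduction="none")  # per-token loss
    weights = 1.0 / token_counts[targets].clamp(min=1.0)      # 1 / frequency
    return (weights * nll).mean()
```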
Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Autoencoders
1904.02142
Table 6: F1 for different model variants on the binary WSJ validation set with included punctuation. The binary trees are as-is (∅) or modified according to the post-processing heuristic (+PP). The mean F1 is shown across three random seeds.
['[BOLD] Composition', '[BOLD] Loss', 'F1 [ITALIC] μ ∅', 'F1 [ITALIC] μ +PP']
[['TreeLSTM', 'Margin', '49.9', '53.1'], ['TreeLSTM', 'Softmax', '52.0', '52.9'], ['MLP', 'Margin', '49.7', '54.4'], ['MLP', 'Softmax', '52.6', '55.5'], ['MLPKernel', 'Softmax', '51.8', '54.8'], ['MLPShared', 'Softmax', '50.8', '56.7']]
We see that MLP composition consistently performs better than TreeLSTM composition, that the MLP benefits from the Softmax loss, and that the best performance comes from sharing parameters. All other experimental results use this best-performing setting unless otherwise specified.
Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Autoencoders
1904.02142
Table 1: Full WSJ (test set) unsupervised unlabeled binary constituency parsing including punctuation. † indicates trained to optimize NLI task. Mean and max are calculated over five random restarts. PRPN F1 was calculated using the parse trees and results provided by Htut et al. (2018). The depth (δ) is the average tree height. +PP refers to post-processing heuristic that attaches trailing punctuation to the root of the tree. The top F1 value in each column is bolded.
['[BOLD] Model', 'F1 [ITALIC] μ', 'F1 [ITALIC] max', '[ITALIC] δ']
[['LB', '13.1', '13.1', '12.4'], ['RB', '16.5', '16.5', '12.4'], ['Random', '21.4', '21.4', '5.3'], ['Balanced', '21.3', '21.3', '4.6'], ['RL-SPINN†', '13.2', '13.2', '-'], ['ST-Gumbel - GRU†', '22.8 ±1.6', '25.0', '-'], ['PRPN-UP', '38.3 ±0.5', '39.8', '5.9'], ['PRPN-LM', '35.0 ±5.4', '42.8', '6.2'], ['ON-LSTM', '47.7 ±1.5', '49.4', '5.6'], ['DIORA', '48.9 ±0.5', '49.6', '8.0'], ['PRPN-UP+PP', '-', '45.2', '6.7'], ['PRPN-LM+PP', '-', '42.4', '6.3'], ['DIORA+PP', '[BOLD] 55.7 ±0.4', '[BOLD] 56.2', '8.5']]
This model achieves a mean F1 7 points higher than ON-LSTM and an increase of over 6.5 max F1 points. We also see that DIORA exhibits much less variance between random seeds than ON-LSTM. Additionally, we find that PRPN-UP and DIORA benefit much more from the +PP heuristic than PRPN-LM. This is consistent with qualitative analysis showing that DIORA and PRPN-UP incorrectly attach trailing punctuation much more often than PRPN-LM.
Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Autoencoders
1904.02142
Table 2: NLI unsupervised unlabeled binary constituency parsing comparing to CoreNLP predicted parses. PRPN F1 was calculated using the parse trees and results provided by Htut et al. (2018). F1 median and max are calculated over five random seeds and the top F1 value in each column is bolded. Note that we use median rather than mean in order to compare with previous work.
['[BOLD] Model', 'F1 [ITALIC] median', 'F1 [ITALIC] max', '[ITALIC] δ']
[['Random', '27.0', '27.0', '4.4'], ['Balanced', '21.3', '21.3', '3.9'], ['PRPN-UP', '48.6', '-', '4.9'], ['PRPN-LM', '50.4', '-', '5.1'], ['DIORA', '51.2', '53.3', '6.4'], ['PRPN-UP+PP', '-', '54.8', '5.2'], ['PRPN-LM+PP', '-', '50.4', '5.1'], ['DIORA+PP', '59.0', '[BOLD] 59.1', '6.7']]
Using the heuristic, DIORA greatly surpasses both variants of PRPN. A second caveat is that the NLI reference parses (SNLI; Bowman et al.) are predicted by CoreNLP rather than annotated by hand, and syntactic parsers often suffer significant performance drops when predicting outside of the newswire domain that the models were trained on.
Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Autoencoders
1904.02142
Table 3: WSJ-10 and WSJ-40 unsupervised non-binary unlabeled constituency parsing with punctuation removed. † indicates that the model predicts a full, non-binary parse with additional resources. ‡ indicates model was trained on WSJ data and PRPNNLI was trained on MultiNLI data. CCM uses predicted POS tags while CCMgold uses gold POS tags. PRPN F1 was calculated using the parse trees and results provided by Htut et al. (2018). LB and RB are the left and right-branching baselines. UB is the upper bound attainable by a model that produces binary trees.
['[BOLD] Model', '[BOLD] WSJ-10 F1 [ITALIC] μ', '[BOLD] WSJ-10 F1 [ITALIC] max', '[BOLD] WSJ-40 F1 [ITALIC] μ', '[BOLD] WSJ-40 F1 [ITALIC] max']
[['UB', '87.8', '87.8', '85.7', '85.7'], ['LB', '28.7', '28.7', '12.0', '12.0'], ['RB', '61.7', '61.7', '40.7', '40.7'], ['CCM†', '-', '63.2', '-', '-'], ['CCM [ITALIC] gold†', '-', '71.9', '-', '33.7'], ['PRLG †', '-', '[BOLD] 72.1', '-', '54.6'], ['PRPN [ITALIC] NLI', '66.3 ±0.8', '68.5', '-', '-'], ['PRPN‡', '70.5 ±0.4', '71.3', '-', '52.4'], ['ON-LSTM‡', '65.1 ±1.7', '66.8', '-', '-'], ['DIORA', '67.7 ±0.7', '68.5', '60.6 ±0.2', '[BOLD] 60.9']]
We also evaluate our model on two subsets of the WSJ dataset that were used in previous unsupervised parsing evaluations. WSJ-10 and WSJ-40 contain sentences up to length 10 and 40, respectively, after punctuation removal. The WSJ-10 split has been difficult for latent tree parsers such as DIORA, PRPN, and ON-LSTM, none of which (including our model) is able to improve upon previous non-neural methods. However, when we compare trends between WSJ-10 and WSJ-40, we see that DIORA extends better to longer sequences.
Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Autoencoders
1904.02142
Table 4: Segment recall from WSJ separated by phrase type. The 10 most frequent phrase types are shown above, and the highest value in each row is bolded. P-UP=PRNP-UP, P-LM=PRPN-LM
['Label', 'Count', 'DIORA', 'P-UP', 'P-LM']
[['NP', '297,872', '[BOLD] 0.767', '0.687', '0.598'], ['VP', '168,605', '[BOLD] 0.628', '0.393', '0.316'], ['PP', '116,338', '0.595', '0.497', '[BOLD] 0.602'], ['S', '87,714', '[BOLD] 0.798', '0.639', '0.657'], ['SBAR', '24,743', '[BOLD] 0.613', '0.403', '0.554'], ['ADJP', '12,263', '[BOLD] 0.604', '0.342', '0.360'], ['QP', '11,441', '[BOLD] 0.801', '0.336', '0.545'], ['ADVP', '5,817', '[BOLD] 0.693', '0.392', '0.500'], ['PRN', '2,971', '[BOLD] 0.546', '0.127', '0.144'], ['SINV', '2,563', '0.926', '0.904', '[BOLD] 0.932']]
In many scenarios, one is only concerned with extracting particular constituent phrases rather than a full parse. Common use cases include identifying entities, noun phrases, or verb phrases for downstream analysis. To get an idea of how well our model can perform on phrase segmentation, we consider the maximum recall of spans in our predicted parse tree. We leave methods for cutting the tree to future work and instead report this maximum recall, which serves as an upper bound on segmentation performance. Recall here is the percentage of labeled constituents in the gold tree whose spans appear in our predicted tree. DIORA achieves the highest recall on most phrase types and is the only model to perform effectively on verb phrases. Interestingly, DIORA performs worse than PRPN-LM on prepositional phrases.
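A minimal sketch of the per-label span recall described above, assuming spans are represented as (start, end) token offsets; the data layout is hypothetical, not the authors' code.

```python
from collections import defaultdict

def per_label_span_recall(gold_labeled_spans, predicted_spans):
    # gold_labeled_spans: iterable of (label, start, end) gold constituents;
    # predicted_spans: iterable of (start, end) spans from the induced tree.
    predicted = set(predicted_spans)
    hits, totals = defaultdict(int), defaultdict(int)
    for label, start, end in gold_labeled_spans:
        totals[label] += 1
        if (start, end) in predicted:
            hits[label] += 1
    return {label: hits[label] / totals[label] for label in totals}
```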
Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Autoencoders
1904.02142
Table 5: P@1, P@10, and P@100 for labeled chunks from CoNLL-2000 and CoNLL 2012 datasets. For all metrics, higher is better. The top value in each column is bolded. Diora uses the concatenation of the inside and outside vector at each cell which performed better than either in isolation.
['[BOLD] Model', 'Dim', '[BOLD] CoNLL 2000 P@1', '[BOLD] CoNLL 2000 P@10', '[BOLD] CoNLL 2000 P@100', '[BOLD] CoNLL 2012 P@1', '[BOLD] CoNLL 2012 P@10', '[BOLD] CoNLL 2012 P@100']
[['Random', '800', '0.684', '0.683', '0.680', '0.137', '0.133', '0.135'], ['ELMo [ITALIC] CI', '1024', '0.962', '0.955', '0.957', '0.708', '0.643', '0.544'], ['ELMo [ITALIC] SI', '4096', '0.970', '0.964', '0.955', '0.660', '0.624', '0.533'], ['ELMo', '4096', '0.987', '0.983', '0.974', '[BOLD] 0.896', '[BOLD] 0.847', '[BOLD] 0.716'], ['DIORA [ITALIC] In/ [ITALIC] Out', '800', '[BOLD] 0.990', '[BOLD] 0.985', '[BOLD] 0.979', '0.860', '0.796', '0.646']]
For each labeled span with length greater than one, we first generate its phrase representation. We then compute its cosine similarity to all other labeled spans, and check whether the label of the query span matches the labels of each of its K most similar spans in the dataset.
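The following is a minimal sketch of this precision-at-K retrieval evaluation under the assumption that span vectors and labels are stored in NumPy arrays; it is an illustration, not the authors' implementation.

```python
import numpy as np

def precision_at_k(vectors, labels, k=10):
    # vectors: (n, d) array of span representations; labels: (n,) array of
    # chunk labels. For each query span, retrieve the k most cosine-similar
    # other spans and report how often their labels match the query's label.
    X = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = X @ X.T
    np.fill_diagonal(sims, -np.inf)            # never retrieve the query itself
    topk = np.argsort(-sims, axis=1)[:, :k]    # indices of the k nearest spans
    matches = labels[topk] == labels[:, None]  # (n, k) boolean match matrix
    return float(matches.mean())
```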
FQuAD: French Question Answering Dataset
2002.06071
Table 3: The number of articles, paragraphs and questions for FQuAD1.1
['Dataset', 'Articles', 'Paragraphs', 'Questions']
[['Train', '271', '12123', '50741'], ['Development', '30', '1387', '5668'], ['Test', '25', '1398', '5594']]
The guidelines for writing question and answer pairs for each paragraph are the same as for SQuAD1.1 [rajpurkar-etal-2016-squad]. First, the paragraph is presented to the student on the platform and the student reads it. Second, the student thinks of a question whose answer is a span of text within the context. Third, the student selects the smallest span in the paragraph that contains the answer. The process is repeated until 3 to 5 questions are generated and correctly answered. The students were asked to spend on average 1 minute on each question and answer pair, which amounts to an average of 3-5 minutes per annotated paragraph. The total number of questions amounts to 62003. The FQuAD1.1 training, development and test sets are composed of 271 (83%), 30 (9%), and 25 (8%) articles, respectively. The difference from the first annotation process is that the workers were specifically asked to come up with complex questions by varying style and question types in order to increase difficulty. The additional answer collection process remains the same.
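As a quick sanity check, the totals quoted above follow directly from the counts in Table 3:

\[
50741 + 5668 + 5594 = 62003, \qquad
\frac{271}{326} \approx 83\%, \quad
\frac{30}{326} \approx 9\%, \quad
\frac{25}{326} \approx 8\%.
\]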
FQuAD: French Question Answering Dataset
2002.06071
Table 7: Human Performance on FQuAD
['Dataset', 'F1 [%]', 'EM [%]']
[['FQuAD1.0-test.', '92.1', '78.4'], ['FQuAD1.1-test', '91.2', '75.9'], ['"FQuAD1.1-test new samples"', '90.5', '74.1'], ['FQuAD1.0-dev', '92.6', '79.5'], ['FQuAD1.1-dev', '92.1', '78.3'], ['"FQuAD1.1-dev new samples"', '91.4', '76.7']]
The human score on FQuAD1.0 reaches 92.1% F1 and 78.4% EM on the test set, and 92.6% F1 and 79.5% EM on the development set. On FQuAD1.1, it reaches 91.2% F1 and 75.9% EM on the test set, and 92.1% F1 and 78.3% EM on the development set. We observe a noticeable gap between human performance on the FQuAD1.0 test set and on the new FQuAD1.1 samples: 78.4% EM on the 2189 questions of the FQuAD1.0 test set versus 74.1% EM on the 3405 new questions of the FQuAD1.1 test set. This gap indicates that the new FQuAD1.1 questions are globally more difficult to answer than the FQuAD1.0 questions, making the final FQuAD1.1 dataset even more challenging.
FQuAD: French Question Answering Dataset
2002.06071
Table 8: Answer type comparison for the development sets of FQuAD1.1 and SQuAD1.1
['Answer type', 'FQuAD1.1 [%]', 'SQuAD1.1 [%]']
[['Common noun', '26.6', '31.8'], ['Person', '14.6', '12.9'], ['Other proper nouns', '13.8', '15.3'], ['Location', '14.1', '4.4'], ['Date', '7.3', '8.9'], ['Other numeric', '13.6', '10.9'], ['Verb', '6.6', '5.5'], ['Adjective', '2.6', '3.9'], ['Other', '0.9', '2.7']]
Answer type distribution: For both datasets, the most represented answer type is Common Noun, at 26.6% for FQuAD1.1 and 31.8% for SQuAD1.1. The least represented types are Adjective and Other, which have a noticeably higher proportion in SQuAD1.1 than in FQuAD1.1. Compared to SQuAD1.1, a significant difference exists on structured entities such as Person, Location, and Other Numeric, where FQuAD1.1 consistently scores above SQuAD1.1, with the exception of the Date category where FQuAD scores lower. Based on these observations, it is difficult to explain the difference in human score between the two datasets.
Backpropagating through Structured Argmax using a spigot
1805.04658
Table 2: Test accuracy of sentiment classification on Stanford Sentiment Treebank. Bold font indicates the best performance.
['[BOLD] Model', '[BOLD] Accuracy (%)']
[['BiLSTM', '84.8'], ['pipeline', '85.7'], ['ste', '85.4'], ['spigot', '[BOLD] 86.3']]
Pipelined semantic dependency prediction brings a 0.9% absolute improvement in classification accuracy, and spigot outperforms all baselines. On this task, ste achieves slightly worse performance than the fixed pre-trained pipeline.
Backpropagating through Structured Argmax using a spigot
1805.04658
Table 3: Syntactic parsing performance (in unlabeled attachment score, UAS) and DM semantic parsing performance (in labeled F1) on different groups of the development data. Both systems predict the same syntactic parses for instances from Same, and they disagree on instances from Diff (§5).
['[BOLD] Split', '[BOLD] # Sent.', '[BOLD] Model', '[BOLD] UAS', '[BOLD] DM']
[['Same', '1011', 'pipeline', '97.4', '94.0'], ['Same', '1011', 'spigot', '97.4', '94.3'], ['Diff', '681', 'pipeline', '91.3', '88.1'], ['Diff', '681', 'spigot', '89.6', '89.2']]
We consider the development set instances where both syntactic and semantic annotations are available, and partition them based on whether the two systems’ syntactic predictions agree (Same), or not (Diff). The second group includes sentences with much lower syntactic parsing accuracy (91.3 vs. 97.4 UAS), and spigot further reduces this to 89.6. Even though these changes hurt syntactic parsing accuracy, they lead to a 1.1% absolute gain in labeled F1 for semantic parsing. Furthermore, spigot has an overall less detrimental effect on the intermediate parser than ste: using spigot, intermediate dev. parsing UAS drops to 92.5 from the 92.9 pipelined performance, while ste reduces it to 91.8.
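For illustration, the Same/Diff partition described above can be computed from the two systems' syntactic predictions; the sketch below assumes a hypothetical data layout in which each system provides one head sequence per sentence.

```python
def split_by_parse_agreement(pipeline_heads, spigot_heads):
    # pipeline_heads / spigot_heads: lists of per-sentence head sequences
    # predicted by the two systems. Sentences with identical predictions go
    # to "Same"; the rest go to "Diff".
    same, diff = [], []
    for idx, (h1, h2) in enumerate(zip(pipeline_heads, spigot_heads)):
        (same if h1 == h2 else diff).append(idx)
    return same, diff
```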
DAWT: Densely Annotated Wikipedia Texts across multiple languages
1703.00948
Table 7: Accuracy of Semantic Analogy
['[BOLD] Relation', '[BOLD] GloVe Word dimensionality 50', '[BOLD] GloVe Word dimensionality 100', '[BOLD] GloVe Word dimensionality 200', '[BOLD] GloVe Word dimensionality 300', '[BOLD] DAWT Entity dimensionality 50', '[BOLD] DAWT Entity dimensionality 300', '[BOLD] DAWT Entity dimensionality 1000']
[['Capital-World', '74.43', '92.77', '97.05', '97.94', '93.24', '93.95', '91.81'], ['City-in-State', '23.22', '40.10', '63.90', '72.59', '68.39', '88.98', '87.90'], ['Capital-Common-Countries', '80.04', '95.06', '96.64', '97.23', '78.66', '79.64', '71.54'], ['Currency', '17.29', '30.05', '37.77', '35.90', '43.88', '13.56', '2.93'], ['Family', '71.05', '85.09', '89.18', '91.23', '66.96', '72.51', '75.15'], ['Average', '53.21', '68.61', '76.91', '78.98', '70.23', '69.73', '65.87']]
For comparison, the table also shows the accuracy of GloVe word embeddings with vector sizes of 50, 100, 200, and 300. Entity embeddings perform better at a vector size of 50. As the vector size increases, word embeddings improve significantly and outperform entity embeddings once the vector size reaches 200 or higher. The weaker performance of entity embeddings may be due to less training data: our entity embeddings were obtained from 2.2B tokens, whereas GloVe's word embeddings were obtained from 6B tokens.
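Accuracy on these semantic analogy categories is conventionally measured by answering queries of the form "a is to b as c is to ?" with vector arithmetic (3CosAdd). The sketch below illustrates that standard protocol under the assumption that embeddings are stored in a dict of NumPy vectors; it is not the authors' code.

```python
import numpy as np

def solve_analogy(emb, a, b, c):
    # emb: dict mapping word/entity -> np.ndarray vector.
    # Returns the vocabulary item whose vector is closest (by cosine) to
    # v_b - v_a + v_c, excluding the three query items themselves.
    words = list(emb)
    M = np.stack([emb[w] for w in words])
    M = M / np.linalg.norm(M, axis=1, keepdims=True)
    q = emb[b] - emb[a] + emb[c]
    q = q / np.linalg.norm(q)
    scores = M @ q
    for w in (a, b, c):
        scores[words.index(w)] = -np.inf
    return words[int(np.argmax(scores))]
```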