Dataset schema (column name: type, with observed string-length range): paper: string (0 to 839); paper_id: string (1 to 12); table_caption: string (3 to 2.35k); table_column_names: large_string (13 to 1.76k); table_content_values: large_string (2 to 11.9k); text: large_string (69 to 2.82k).
Semantic Parsing to Probabilistic Programs for Situated Question Answering
1606.07046
Table 3: Accuracy of P3 when trained and evaluated with labeled logical forms, food webs, or both.
['Model', 'Accuracy', 'Δ']
[['[ITALIC] P3', '69.1', '[EMPTY]'], ['+ gold logical form', '75.1', '+6.0'], ['+ gold food web', '82.3', '+13.2'], ['+ both', '91.6', '+22.5']]
Our third experiment analyses sources of error by training and evaluating P3 while providing the gold logical form, food web, or both as input. The final entry shows the maximum accuracy possible given our domain theory and answer selection. The larger accuracy improvement with gold food webs suggests that the execution model is responsible for more error than semantic parsing, though both components contribute.
Semantic Parsing to Probabilistic Programs for Situated Question Answering
1606.07046
Table 4: Accuracy on the scene data set. KK2013 results are from Krishnamurthy and Kollar (2013).
['Model', 'Supervision QA', 'Supervision QA+E', 'Supervision QA+E+LF']
[['[ITALIC] P3', '68', '75', '–'], ['KK2013', '67', '–', '70']]
The evaluation metric is exact match accuracy between the predicted and labeled sets of objects. We consider three supervision conditions: QA trains with question/answer pairs, QA+E further includes labeled environments, and QA+E+LF further includes labeled logical forms. We trained P3 in the first two conditions, while prior work trained in the first and third conditions. KK2013 is a possible worlds model with a max-margin training objective. P3 slightly outperforms KK2013 in the QA condition, and P3 trained with labeled environments (QA+E) outperforms prior work trained with additional logical form labels (QA+E+LF).
An empirical study on large scale text classification with skip-gram embeddings
1606.06623
Table 3: Classification performance of the different representations. The upper part of the table presents one-hot-encoding methods, while the bottom part presents methods that depend on the dimensionality of the distributed representations. We report the best performance obtained over training data sizes N ∈ {1, 50, 100, 150, 200}×10^3. The best performance per classification problem is shown in bold. We performed statistical significance tests to check whether improvements over the tf-idf representation are statistically significant, which is indicated by a dagger (†) next to the accuracy scores.
['[ITALIC] hash', 'PubMed1,000 0.63', 'PubMed1,000 0.63', 'PubMed1,000 0.63', 'PubMed5,000 0.427', 'PubMed5,000 0.427', 'PubMed5,000 0.427', 'PubMed10,000 0.456', 'PubMed10,000 0.456', 'PubMed10,000 0.456']
[['[ITALIC] tf-idf', '0.65', '0.65', '0.65', '0.469', '0.469', '0.469', '0.492', '0.492', '0.492'], ['[EMPTY]', 'D=100', 'D=200', 'D=400', 'D=100', 'D=200', 'D=400', 'D=100', 'D=200', 'D=400'], ['[ITALIC] x_conc', '0.592', '0.614', '0.626', '0.362', '0.41', '0.436', '0.386', '0.434', '0.454'], ['[ITALIC] hash+x_conc', '0.651', '0.654', '0.646', '0.464', '0.476', '0.473', '0.488', '0.495', '0.491'], ['[ITALIC] tfidf+x_conc', '[BOLD] 0.66†', '[BOLD] 0.66†', '0.656', '0.484†', '0.486†', '[BOLD] 0.487†', '0.507†', '0.507†', '[BOLD] 0.512†']]
We first discuss the performance when single representations are used: x_conc, tf-idf and hash. Notice that tf-idf performs better than both x_conc and hash, with the latter achieving the lowest performance. The performance of the three representations on PubMed1,000 is comparable, but in the bigger classification problems tf-idf performs considerably better. We thus consider the tf-idf representation as our baseline model and examine how the models with concatenated representations behave compared to it. We now investigate whether fusing the distributed document representations with the one-hot-encoding representations benefits classification performance. In the experiments, both hash and tf-idf consistently achieve better performance when combined with the distributed representations. For instance, for PubMed10,000 and D=400 the tf-idf (resp. hash) representations improve in absolute terms by 2 (resp. 3.5) F1 points. We performed two-sided Student's t-tests (p<0.01) to check whether the improvements obtained for each classification problem are statistically significant compared to using tf-idf representations. In addition to these improvements, note that in the fused representations the effect of D diminishes: in PubMed1,000, for instance, one can obtain the optimal performance with embedding dimensions D<400, and similar observations hold for the rest of the datasets.
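A minimal sketch of how such a significance test could be run with SciPy, assuming per-fold (or per-run) F1 scores are available for both systems; the paired-test choice, variable names and values are illustrative assumptions, not necessarily the authors' exact protocol:

```python
from scipy import stats

# Hypothetical per-fold F1 scores for one classification problem
# (e.g. PubMed10,000, D=400); real values would come from repeated runs.
f1_tfidf       = [0.490, 0.493, 0.494, 0.491, 0.492]
f1_tfidf_xconc = [0.510, 0.513, 0.511, 0.514, 0.512]

# Two-sided paired t-test; the improvement would be flagged with a dagger if p < 0.01.
res = stats.ttest_rel(f1_tfidf_xconc, f1_tfidf)
print(f"t = {res.statistic:.3f}, p = {res.pvalue:.4f}, significant: {res.pvalue < 0.01}")
```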
Efficient summarization with read-again and copy mechanism
1611.03382
Table 2: Rouge-N limited-length recall on DUC2004. Size denotes the size of decoder vocabulary in a model.
['Models', 'Size', 'Rouge-1', 'Rouge-2', 'Rouge-L']
[['ZOPIARY (Zajic et\xa0al. ( 2004 ))', '-', '25.12', '6.46', '20.12'], ['ABS (Rush et\xa0al. ( 2015 ))', '69K', '26.55', '7.06', '23.49'], ['ABS+ (Rush et\xa0al. ( 2015 ))', '69K', '28.18', '8.49', '23.81'], ['RAS-LSTM (Chopra et\xa0al. ( 2016 ))', '69K', '27.41', '7.69', '23.06'], ['RAS-Elman (Chopra et\xa0al. ( 2016 ))', '69K', '28.97', '8.26', '24.06'], ['big-words-lvt2k-1sent (Nallapati et\xa0al. ( 2016 ))', '69K', '28.35', '[BOLD] 9.46', '24.59'], ['big-words-lvt5k-1sent (Nallapati et\xa0al. ( 2016 ))', '200K', '28.61', '9.42', '25.24'], ['Ours-GRU (C)', '15K', '29.08', '9.20', '25.25'], ['Ours-LSTM (C)', '15K', '[BOLD] 29.89', '9.37', '25.93'], ['Ours-Opt-2 (C)', '15K', '29.74', '9.44', '[BOLD] 25.94']]
Evaluation on DUC2004: In DUC 2004 (Over et al.), each article is paired with 4 different human-generated reference summaries, capped at 75 characters. This dataset is evaluation-only. Similar to Rush et al., we train our neural model on the Gigaword training set and report the models' performance on DUC2004. Following convention, we also use ROUGE limited-length recall as our evaluation metric and set the capping length to 75 characters. We generate summaries of 15 words using a beam size of 10. Furthermore, our model uses only a 15K decoder vocabulary, while previous methods use 69K or 200K.
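A self-contained illustration of limited-length ROUGE-1 recall with a 75-character cap; the whitespace tokenization, the averaging over references and the toy strings are assumptions (published numbers use the official ROUGE toolkit):

```python
def rouge1_limited_recall(candidate, references, cap=75):
    """Unigram recall of the candidate (truncated to `cap` characters),
    averaged over the reference summaries."""
    cand_tokens = candidate[:cap].lower().split()
    scores = []
    for ref in references:
        ref_tokens = ref.lower().split()
        overlap = sum(min(cand_tokens.count(t), ref_tokens.count(t))
                      for t in set(ref_tokens))
        scores.append(overlap / len(ref_tokens) if ref_tokens else 0.0)
    return sum(scores) / len(scores)

# Toy example with one candidate and two reference summaries.
print(rouge1_limited_recall(
    "markets rally after central bank cuts rates",
    ["markets rally as central bank cuts interest rates",
     "central bank rate cut lifts markets"]))
```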
Efficient summarization with read-again and copy mechanism
1611.03382
Table 1: Different Read-Again Model. Ours denotes Read-Again models. C denotes copy mechanism. Ours-Opt-1 and Ours-Opt-2 are the models described in section 3.1.3. Size denotes the size of decoder vocabulary in a model.
['#Input', 'Model', 'Size', 'Rouge-1', 'Rouge-2', 'Rouge-L']
[['1 sent', 'ABS (baseline)', '69K', '24.12', '10.24', '22.61'], ['1 sent', 'GRU (baseline)', '69K', '26.79', '12.03', '25.14'], ['1 sent', 'Ours-GRU', '69K', '27.26', '12.28', '25.48'], ['1 sent', 'Ours-LSTM', '69K', '[BOLD] 27.82', '[BOLD] 12.74', '[BOLD] 26.01'], ['1 sent', 'GRU (baseline)', '15K', '24.67', '11.30', '23.28'], ['1 sent', 'Ours-GRU', '15K', '25.04', '11.40', '23.47'], ['1 sent', 'Ours-LSTM', '15K', '25.30', '11.76', '23.71'], ['1 sent', 'Ours-GRU (C)', '15K', '[BOLD] 27.41', '12.58', '[BOLD] 25.74'], ['1 sent', 'Ours-LSTM (C)', '15K', '27.37', '[BOLD] 12.64', '25.69'], ['2 sent', 'Ours-Opt-1 (C)', '15K', '27.95', '[BOLD] 12.65', '26.10'], ['2 sent', 'Ours-Opt-2 (C)', '15K', '[BOLD] 27.96', '12.65', '[BOLD] 26.18']]
Results on Gigaword: We compare the performance of different architectures and report ROUGE scores in Tab. 1. Our baselines include the ABS model of Rush et al. and a GRU encoder-decoder. We allow the decoder to generate variable-length summaries. As shown in Tab. 1, our Read-Again models outperform both baselines. We also observe that adding the copy mechanism further helps to improve performance: even though the decoder vocabulary size of our approach with copy (15K) is much smaller than ABS (69K) and GRU (69K), it achieves a higher ROUGE score. Besides, our multiple-sentences model achieves the best performance.
Exploiting Multi-typed Treebanks for Parsing with Deep Multi-task Learning
1606.01161
Table 3: Parsing accuracies of Multilingual (Univ→Univ). Significance tests with MaltEval yield p-values < 0.01 for (MTL vs. Sup) on all languages.
['[EMPTY]', 'Multilingual (Univ → Univ) Sup', 'Multilingual (Univ → Univ) Sup', 'Multilingual (Univ → Univ) \\textsc [ITALIC] CasEN', 'Multilingual (Univ → Univ) \\textsc [ITALIC] CasEN', 'Multilingual (Univ → Univ) \\textsc [ITALIC] SMTLEN', 'Multilingual (Univ → Univ) \\textsc [ITALIC] SMTLEN', 'Multilingual (Univ → Univ) \\textsc [ITALIC] MTLEN', 'Multilingual (Univ → Univ) \\textsc [ITALIC] MTLEN']
[['[EMPTY]', 'UAS', 'LAS', 'UAS', 'LAS', 'UAS', 'LAS', 'UAS', 'LAS'], ['DE', '84.24', '78.40', '84.24', '78.65', '84.37', '79.07', '[BOLD] 84.93', '[BOLD] 79.34'], ['ES', '85.31', '81.23', '85.42', '81.42', '85.78', '81.54', '[BOLD] 86.78', '[BOLD] 82.92'], ['FR', '85.55', '81.13', '84.57', '80.14', '86.13', '81.77', '[BOLD] 86.44', '[BOLD] 82.01'], ['PT', '88.40', '86.54', '88.88', '87.07', '89.08', '87.24', '[BOLD] 89.24', '[BOLD] 87.50'], ['IT', '86.53', '83.72', '86.58', '83.67', '86.53', '83.64', '[BOLD] 87.26', '[BOLD] 84.27'], ['SV', '84.91', '79.88', '86.43', '81.92', '[BOLD] 86.79', '[BOLD] 82.31', '85.98', '81.35'], ['1-9 Avg', '[ITALIC] 85.82', '[ITALIC] 81.82', '[ITALIC] 86.02', '[ITALIC] 82.15', '[ITALIC] 86.45', '[ITALIC] 82.60', '[ITALIC] 86.77', '[ITALIC] 82.90']]
Cas yields slightly better performance than Sup, especially for SV (+1.52% UAS and +2.04% LAS), indicating that pre-training on the EN training data indeed provides a better initialization of the parameters for cascaded training. SMTL in turn outperforms Cas overall (it is comparable for IT), which implies that training two treebanks jointly helps even with a single model.
Exploiting Multi-typed Treebanks for Parsing with Deep Multi-task Learning
1606.01161
Table 4: SMTL for Swedish without sharing BiLSTM(chars).
['SV', 'SMTL', 'UAS 86.79', 'LAS 82.31']
[['SV', '– [ITALIC] shared-BiLSTM(chars)', '86.06', '81.50']]
To verify the first issue, we conduct tests on SMTL without sharing the Char-BiLSTMs. Removing the shared character BiLSTM degrades performance on SV (86.79 → 86.06 UAS and 82.31 → 81.50 LAS). This observation also indicates that MTL has the potential to reach higher performance through language-specific tuning of the parameter-sharing strategy.
Exploiting Multi-typed Treebanks for Parsing with Deep Multi-task Learning
1606.01161
Table 5: Low resource setup (3K tokens), evaluated with LAS.
['[EMPTY]', 'DE', 'ES', 'FR']
[['Sup', '58.93', '61.99', '60.45'], ['Cas', '64.08', '[BOLD] 70.45', '[BOLD] 68.72'], ['SMTL', '63.57', '69.01', '65.04'], ['[ITALIC] + weighted sampling', '[ITALIC] 63.50', '[ITALIC] 70.17', '[ITALIC] 68.52'], ['MTL', '62.43', '66.67', '64.23'], ['[ITALIC] + weighted sampling', '[BOLD] 64.22', '[ITALIC] 68.42', '[ITALIC] 66.67'], ['Duong et al.', '61.2', '69.1', '65.3'], ['Duong et al. + Dict', '61.8', '70.5', '67.2']]
To verify the second issue, we consider a low-resource setup following \newcite{duong-EtAl:2015:EMNLP}, where the target language has a small treebank (3K tokens). We train our models on the identical sampled datasets shared by \newcite{duong-EtAl:2015:EMNLP} for DE, ES and FR. Although not the primary focus of this work, we find that SMTL and MTL can be significantly improved in the low-resource setting through weighted sampling of tasks during training. In this way, the two tasks are encouraged to converge at a similar rate.
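The exact weighting scheme is not specified here; the sketch below is one simple instantiation of weighted task sampling that boosts the low-resource target treebank so that both tasks converge at a similar rate (the interpolation scheme and names are assumptions, not the authors' procedure):

```python
import random

def sample_task(batches_per_task, target_weight=0.5):
    """Pick which task supplies the next mini-batch.
    `target_weight` interpolates between size-proportional and uniform sampling,
    so the small target treebank is visited more often than its size suggests."""
    tasks = list(batches_per_task)
    sizes = [batches_per_task[t] for t in tasks]
    props = [s / sum(sizes) for s in sizes]
    weights = [(1 - target_weight) * p + target_weight / len(tasks) for p in props]
    return random.choices(tasks, weights=weights, k=1)[0]

# Source treebank (large) vs. low-resource target treebank (3K tokens).
counts = {"source": 12000, "target": 150}
print(sample_task(counts))
```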
Exploiting Multi-typed Treebanks for Parsing with Deep Multi-task Learning
1606.01161
Table 6: Monolingual (Conll↔Univ) performance. SV∗ is used for computing the Avg values.
['[EMPTY]', 'Sup UAS', 'Sup LAS', 'Cas UAS', 'Cas LAS', 'MTL UAS', 'MTL LAS']
[['[EMPTY]', 'Monolingual (Conll→Univ)', 'Monolingual (Conll→Univ)', 'Monolingual (Conll→Univ)', 'Monolingual (Conll→Univ)', 'Monolingual (Conll→Univ)', 'Monolingual (Conll→Univ)'], ['DE', '84.24', '78.40', '85.02', '80.05', '[BOLD] 85.73', '[BOLD] 80.64'], ['ES', '85.31', '81.23', '[BOLD] 85.90', '[BOLD] 81.73', '85.80', '81.45'], ['PT', '88.40', '86.54', '89.12', '87.32', '[BOLD] 89.40', '[BOLD] 87.60'], ['SV', '84.91', '79.88', '87.17', '82.83', '[BOLD] 87.27', '[BOLD] 83.52'], ['SV∗', '82.61', '77.42', '[BOLD] 85.39', '80.60', '85.29', '[BOLD] 81.22'], ['1-7 Avg', '[ITALIC] 85.14', '[ITALIC] 80.90', '[ITALIC] 86.35', '[ITALIC] 82.43', '[ITALIC] 86.56', '[ITALIC] 82.73'], ['[EMPTY]', 'Monolingual (Univ→Conll)', 'Monolingual (Univ→Conll)', 'Monolingual (Univ→Conll)', 'Monolingual (Univ→Conll)', 'Monolingual (Univ→Conll)', 'Monolingual (Univ→Conll)'], ['DE', '89.06', '86.48', '89.64', '86.66', '[BOLD] 89.98', '[BOLD] 87.50'], ['ES', '85.41', '80.50', '[BOLD] 86.46', '81.37', '86.07', '[BOLD] 81.41'], ['PT', '[BOLD] 90.16', '[BOLD] 85.53', '89.50', '85.03', '89.98', '85.23'], ['SV', '88.49', '81.98', '89.07', '82.91', '[BOLD] 91.60', '[BOLD] 85.22'], ['SV∗', '79.61', '72.71', '82.91', '74.96', '[BOLD] 84.86', '[BOLD] 77.36'], ['1-7 Avg', '[ITALIC] 86.06', '[ITALIC] 81.31', '[ITALIC] 87.13', '[ITALIC] 82.01', '[ITALIC] 87.72', '[ITALIC] 82.88']]
Overall, MTL systems outperform the supervised baselines by significant margins in both conditions, showing the mutual benefits of the UDT and CoNLL-X treebanks.
Context-Aware Neural Machine Translation Learns Anaphora Resolution
1805.10163
Table 1: Automatic evaluation: BLEU. Significant differences at p<0.01 are in bold.
['[BOLD] model', '[BOLD] BLEU']
[['baseline', '29.46'], ['concatenation (previous sentence)', '29.53'], ['context encoder (previous sentence)', '[BOLD] 30.14'], ['context encoder (next sentence)', '29.31'], ['context encoder (random context)', '29.69']]
We use the traditional automatic metric BLEU on a general test set to get an estimate of the overall performance of the discourse-aware model, before turning to more targeted evaluation in the next section. The ‘baseline’ is the discourse-agnostic version of the Transformer. As another baseline we use the standard Transformer applied to the concatenation of the previous and source sentences, as proposed by \newcite{tiedemann_neural_2017}. \newcite{tiedemann_neural_2017} only used a special symbol to mark where the context sentence ends and the source sentence begins. This technique performed badly with the non-recurrent Transformer architecture in preliminary experiments, resulting in a substantial degradation of performance (over 1 BLEU). Instead, we use a binary flag at every word position in our concatenation baseline telling the encoder whether the word belongs to the context sentence or to the source sentence.
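A small sketch of how such a flagged concatenation input could be constructed, one binary flag per token position marking context versus source; the whitespace tokenization and flag encoding are assumptions for illustration:

```python
def build_concat_input(context_sentence, source_sentence):
    """Concatenate context and source tokens and attach a flag per position:
    0 = token belongs to the previous (context) sentence, 1 = source sentence."""
    ctx = context_sentence.split()
    src = source_sentence.split()
    tokens = ctx + src
    flags = [0] * len(ctx) + [1] * len(src)
    return tokens, flags

tokens, flags = build_concat_input("She opened the door .", "Then she smiled .")
print(list(zip(tokens, flags)))
```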
Context-Aware Neural Machine Translation Learns Anaphora Resolution
1805.10163
Table 9: Performance of CoreNLP and our model’s attention mechanism compared to human assessment (%). Examples with ≥1 noun in context sentence.
['[EMPTY]', 'CoreNLP right', 'CoreNLP wrong']
[['attn right', '53', '19'], ['attn wrong', '24', '04']]
The agreement between our model and the ground truth is 72%. Though this is 5% below the coreference system, it is substantially higher than the best heuristic (+18%). This confirms our conclusion that our model performs latent anaphora resolution. Nevertheless, there is room for improvement, and improving the attention component is likely to boost translation performance.
Augmenting Non-Collaborative Dialog Systems with Explicit Semantic and Strategic Dialog History
1909.13425
Table 2: Human evaluation ratings (on a scale from 1 to 5) for FeHED, FeHED−FST-DA, FeHED−FST-S, and HED. We conducted third-person and second-person ratings. Sale price is normalized.
['Second-person Rating [BOLD] Models', 'Second-person Rating Persuasive', 'Second-person Rating Coherent', 'Second-person Rating Natural', 'Second-person Rating Sale Price', 'Third-person Rating Persuasive', 'Third-person Rating Coherent', 'Third-person Rating Natural', 'Third-person Rating Sale Price']
[['FeHED', '[BOLD] 2.6', '[BOLD] 2.8', '[BOLD] 3.0', '[BOLD] 0.84', '[BOLD] 3.2', '[BOLD] 3.9', '3.5', '[BOLD] 0.68'], ['−FST-DA', '2.5', '2.2', '2.4', '0.70', '3.0', '3.4', '3.5', '0.64'], ['−FST-S', '2.0', '2.4', '2.4', '0.64', '2.9', '3.4', '3.3', '0.59'], ['HED+RNN', '2.3', '2.5', '2.6', '0.49', '2.8', '3.8', '[BOLD] 3.6', '0.44'], ['HED', '1.8', '1.9', '1.9', '0.62', '2.9', '3.4', '3.1', '0.50']]
For third-person rating, we asked an expert to generate 20 dialogs by negotiating with FeHED, FeHED−FST-DA, FeHED−FST-S, HED+RNN and HED respectively (5 dialogs each). We then recruited 50 people on AMT to rate these generated dialogs. The results show that FeHED is more persuasive, coherent and natural than all the baselines. For second-person rating, we asked 50 participants on AMT to play the role of the buyer and negotiate with FeHED, FeHED−FST-DA, FeHED−FST-S, HED+RNN and HED respectively. The results show that FeHED outperforms HED+RNN and the other models on all the metrics except naturalness. This is likely because the RNN is trained jointly with HED but is not good at explicitly tracking and preserving the dialog history. Although the FST is learned separately, it forces the model to learn the history through a list of traversed states. We analyze the dialogs generated for human evaluation and find that the baselines are more likely to accept unfair offers and apply inappropriate strategies.
The NYU-CUBoulder Systems forSIGMORPHON 2020 Task 0 and Task 2
2006.11830
Table 3: Results for all test languages on the official test sets for Task 2.
['System Test Set', 'Baseline 1 Test Set', 'Baseline 1 Test Set', 'Baseline 2 Test Set', 'Baseline 2 Test Set', 'Sub-1 Test Set', 'Sub-1 Test Set', 'Sub-2 Test Set', 'Sub-2 Test Set', 'Sub-3 Test Set', 'Sub-3 Test Set']
[['[EMPTY]', 'slots', 'macro', 'slots', 'macro', 'slots', 'macro', 'slots', 'macro', 'slots', 'macro'], ['Basque', '30', '0.0006', '27', '0.0006', '30', '0.0005', '30', '0.0005', '30', '[BOLD] 0.0007'], ['Bulgarian', '35', '0.283', '34', '[BOLD] 0.3169', '35', '0.2769', '35', '0.2894', '35', '0.2789'], ['English', '4', '0.656', '4', '[BOLD] 0.662', '4', '0.502', '4', '0.528', '4', '0.512'], ['Finnish', '21', '0.0533', '21', '[BOLD] 0.055', '21', '0.0536', '21', '0.0547', '21', '0.0535'], ['German', '9', '0.2835', '9', '[BOLD] 0.29', '9', '0.273', '9', '0.2735', '9', '0.2735'], ['Kannada', '172', '0.1549', '172', '[BOLD] 0.1512', '172', '0.111', '172', '0.1116', '172', '0.111'], ['Navajo', '3', '0.0323', '3', '[BOLD] 0.0327', '3', '0.004', '3', '0.0043', '3', '0.0043'], ['Spanish', '29', '0.2296', '29', '[BOLD] 0.2367', '29', '0.2039', '29', '0.2056', '29', '0.203'], ['Turkish', '104', '0.1421', '104', '[BOLD] 0.1553', '104', '0.1488', '104', '0.1539', '104', '0.1513'], ['[BOLD] All', '[EMPTY]', '0.2039', '[EMPTY]', '[BOLD] 0.2112', '[EMPTY]', '0.1749', '[EMPTY]', '0.1802', '[EMPTY]', '0.1765']]
All three systems produce relatively similar results. NYU-CUBoulder-2, our vanilla transformer ensemble, performed slightly better overall, with an average best-match accuracy of 18.02%. Since our systems are close to the baseline models, they perform similarly, achieving slightly worse results overall. For Basque, our all-round ensemble NYU-CUBoulder-3 outperformed both baselines with a best-match accuracy of 0.07%, the highest result in the shared task.
The NYU-CUBoulder Systems forSIGMORPHON 2020 Task 0 and Task 2
2006.11830
Table 1: The hyperparameters used in our inflection models for both Task 0 and Task 2.
['[BOLD] Hyperparameter', '[BOLD] Value']
[['Embedding dimension', '256'], ['Encoder layers', '4'], ['Decoder layers', '4'], ['Encoder hidden dimension', '1024'], ['Decoder hidden dimension', '1024'], ['Attention heads', '4']]
For Task 2, we use ensembling for all submissions. NYU-CUBoulder-1 is an ensemble of six pointer-generator transformers, NYU-CUBoulder-2 is an ensemble of six vanilla transformers, and NYU-CUBoulder-3 is an ensemble of all twelve models. For all models in both tasks, we use the hyperparameters described in Table 1.
The NYU-CUBoulder Systems forSIGMORPHON 2020 Task 0 and Task 2
2006.11830
Table 2: Macro-averaged results over all languages on the official development and test sets for Task 0. Low=languages with less than 1000 train instances, Other=all other languages, All=all languages.
['[EMPTY]', 'Sub-1', 'Sub-2', 'Sub-3', 'Sub-4', 'Base']
[['Development Set', 'Development Set', 'Development Set', 'Development Set', 'Development Set', 'Development Set'], ['Low', '[BOLD] 88.71', '88.02', '84.90', '84.07', '-'], ['Other', '90.46', '90.63', '90.20', '[BOLD] 90.94', '-'], ['All', '[BOLD] 90.06', '90.02', '88.96', '89.34', '-'], ['Test Set', 'Test Set', 'Test Set', 'Test Set', 'Test Set', 'Test Set'], ['Low', '84.8', '84.8', '85.5', '83.9', '[BOLD] 89.77'], ['Other', '89.7', '89.8', '89.8', '90.2', '[BOLD] 92.43'], ['All', '88.6', '88.7', '88.8', '88.8', '[BOLD] 91.81']]
All four systems produce relatively similar results. NYU-CUBoulder-3, our five-model ensemble, performs best overall with 88.8% accuracy on average. We further look at the results for low-resource (<1000 training examples) and high-resource (≥1000 training examples) languages separately. This way, we are able to see the advantage of the pointer-generator transformer in the low-resource setting, where all pointer-generator systems achieve accuracy at least 0.9% higher than the vanilla transformer model. However, in the setting where training data is abundant, the effect of the copy mechanism vanishes, as NYU-CUBoulder-4 – our only vanilla transformer – achieves the best results for the high-resource languages.
The NYU-CUBoulder Systems forSIGMORPHON 2020 Task 0 and Task 2
2006.11830
Table 4: Results on the official development data for our low-resource experiment. Trm=Vanilla transformer, Trm-PG=Pointer-generator transformer, Baseline=neural transducer by Makarov and Clematide (2018).
['System', 'Trm', 'Trm-PG', 'Baseline']
[['All', '63.06', '67.61', '[BOLD] 70.06']]
Results. The pointer-generator transformer clearly outperforms the vanilla transformer (67.61 vs. 63.06 on average); for some languages, such as Chichicapan Zapotec, the difference is up to 14%. While the neural transducer achieves a higher accuracy, our model performs only 2.45% worse than this state-of-the-art model. We are also able to observe the use of the copy mechanism for copying OOV characters in the test sets of some languages.
Coach: A Coarse-to-Fine Approach for Cross-domain Slot Filling
2004.11727
Table 3: F1-scores on the NER target domain (CBS SciTech News).
['Target Samples', '0', '50']
[['CT\xa0(Bapna et al. ( 2017 ))', '61.43', '65.85'], ['RZT\xa0(Shah et al. ( 2019 ))', '61.94', '65.21'], ['BiLSTM-CRF', '61.77', '66.57'], ['Coach', '64.08', '[BOLD] 68.35'], ['Coach + TR', '[BOLD] 64.54', '67.45']]
However, we observe that template regularization loses its effectiveness in this task, since the text in NER is relatively more open, which makes it hard to capture the templates for each label type.
Coach: A Coarse-to-Fine Approach for Cross-domain Slot Filling
2004.11727
Table 1: Slot F1-scores based on the standard BIO structure for SNIPS. Scores in each row represent the performance on the leftmost target domain, and TR denotes template regularization.
['Training Setting Domain ↓ Model →', 'Zero-shot CT', 'Zero-shot RZT', 'Zero-shot Coach', 'Zero-shot +TR', 'Few-shot on 20 (1%) samples CT', 'Few-shot on 20 (1%) samples RZT', 'Few-shot on 20 (1%) samples Coach', 'Few-shot on 20 (1%) samples +TR', 'Few-shot on 50 (2.5%) samples CT', 'Few-shot on 50 (2.5%) samples RZT', 'Few-shot on 50 (2.5%) samples Coach', 'Few-shot on 50 (2.5%) samples +TR']
[['AddToPlaylist', '38.82', '42.77', '45.23', '[BOLD] 50.90', '58.36', '[BOLD] 63.18', '58.29', '62.76', '68.69', '[BOLD] 74.89', '71.63', '74.68'], ['BookRestaurant', '27.54', '30.68', '33.45', '[BOLD] 34.01', '45.65', '50.54', '61.08', '[BOLD] 65.97', '54.22', '54.49', '72.19', '[BOLD] 74.82'], ['GetWeather', '46.45', '50.28', '47.93', '[BOLD] 50.47', '54.22', '58.86', '67.61', '[BOLD] 67.89', '63.23', '58.87', '[BOLD] 81.55', '79.64'], ['PlayMusic', '32.86', '[BOLD] 33.12', '28.89', '32.01', '46.35', '47.20', '53.82', '[BOLD] 54.04', '54.32', '59.20', '62.41', '[BOLD] 66.38'], ['RateBook', '14.54', '16.43', '[BOLD] 25.67', '22.06', '64.37', '63.33', '[BOLD] 74.87', '74.68', '76.45', '76.87', '[BOLD] 86.88', '84.62'], ['SearchCreativeWork', '39.79', '44.45', '43.91', '[BOLD] 46.65', '57.83', '[BOLD] 63.39', '60.32', '57.19', '66.38', '[BOLD] 67.81', '65.38', '64.56'], ['FindScreeningEvent', '13.83', '12.25', '[BOLD] 25.64', '25.63', '48.59', '49.18', '66.18', '[BOLD] 67.38', '70.67', '74.58', '78.10', '[BOLD] 83.85'], ['Average F1', '30.55', '32.85', '35.82', '[BOLD] 37.39', '53.62', '56.53', '63.17', '[BOLD] 64.27', '64.85', '66.67', '74.02', '[BOLD] 75.51']]
The CT framework suffers from the difficulty of capturing the whole slot entity, while our framework is able to recognize the slot entity tokens by sharing its parameters across all slot types. Based on the CT framework, the performance of RZT is still limited, and Coach outperforms RZT by a ∼3% F1-score in the zero-shot setting. Additionally, template regularization further improves the adaptation robustness by helping the model cluster the utterance representations into a similar vector space based on their corresponding template representations.
Coach: A Coarse-to-Fine Approach for Cross-domain Slot Filling
2004.11727
Table 2: Averaged F1-scores for seen and unseen slots over all target domains. ‡ denotes the number of training samples utilized for the target domain.
['Target Samples‡', '0 samples unseen', '0 samples seen', '20 samples unseen', '20 samples seen', '50 samples unseen', '50 samples seen']
[['CT', '27.1', '44.18', '50.13', '61.21', '62.05', '69.64'], ['RZT', '28.28', '47.15', '52.56', '63.26', '63.96', '73.10'], ['Coach', '32.89', '50.78', '61.96', '73.78', '74.65', '76.95'], ['Coach+TR', '[BOLD] 34.09', '[BOLD] 51.93', '[BOLD] 64.16', '[BOLD] 73.85', '[BOLD] 76.49', '[BOLD] 80.16']]
We take a further step to test the models on seen and unseen slots in target domains to analyze the effectiveness of our approaches. To test the performance, we split the test set into “unseen” and “seen” parts. An utterance is categorized into the “unseen” part as long as there is an unseen slot (i.e., the slot does not exist in the remaining six source domains) in it. Otherwise we categorize it into the “seen” part. We observe that our approaches generally improve on both unseen and seen slot types compared to the baseline models. For the improvements in the unseen slots, our models are better able to capture the unseen slots since they explicitly learn the general pattern of slot entities. Interestingly, our models also bring large improvements in the seen slot types. We conjecture that it is also challenging to adapt models to seen slots due to the large variance between the source and target domains. For example, slot entities belonging to the “object type” in the “RateBook” domain are different from those in the “SearchCreativeWork” domain. Hence, the baseline models might fail to recognize these seen slots in the target domain, while our approaches can adapt to the seen slot types more quickly in comparison. In addition, we observe that template regularization improves performance in both seen and unseen slots, which illustrates that clustering representations based on templates can boost the adaptation ability.
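A sketch of the seen/unseen split described above, assuming each test utterance carries the set of slot types it contains and that the slot inventory of the six source domains is known; the data layout and names are illustrative assumptions:

```python
def split_seen_unseen(test_utterances, source_slot_types):
    """An utterance goes to 'unseen' if it contains at least one slot type
    that never occurs in the source domains; otherwise it is 'seen'."""
    seen, unseen = [], []
    for utt in test_utterances:
        if any(slot not in source_slot_types for slot in utt["slots"]):
            unseen.append(utt)
        else:
            seen.append(utt)
    return seen, unseen

source_slots = {"music_item", "artist", "playlist"}
utts = [{"text": "rate this book five stars", "slots": {"object_type", "rating_value"}},
        {"text": "add this song to my playlist", "slots": {"music_item", "playlist"}}]
seen, unseen = split_seen_unseen(utts, source_slots)
print(len(seen), len(unseen))
```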
Language Model Bootstrapping Using Neural Machine Translation for Conversational Speech Recognition
1912.00958
Table 4: PPL and relative WERRs (%) with varying floor interpolation weights for the translation component in the 180 hour setup.
['[BOLD] Floor', '[BOLD] Interpolated', '[BOLD] WERR %']
[['[BOLD] weight', '[BOLD] PPL', '[EMPTY]'], ['0.1', '50.28', '5.78'], ['0.15', '51.24', '7.04'], ['0.25', '52.36', '7.86'], ['0.3', '53.37', '7.49'], ['0.4', '56.34', '6.58']]
The purpose of this investigation is to observe the effect of changing the floor weight parameter for the translation component, which provides a lever to override its relative importance in the interpolated LM. WERR fluctuates with the floor weight: a low weight renders the translation component ineffective, whereas a high value undermines the transcription component. A sweep over floor weights therefore provides empirical guidance for adjusting this parameter.
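The paper interpolates several component LMs; the two-component sketch below only illustrates the floor-weight idea, clamping the translation component's weight from below and renormalizing (the clamping-and-renormalizing scheme is an assumption, not the exact recipe used here):

```python
def interpolate(p_transcription, p_translation, est_translation_weight, floor):
    """Linearly interpolate two component LM probabilities for a word.
    The translation component's weight (e.g. estimated on held-out data)
    is clamped from below by `floor`; the remaining mass goes to transcription."""
    w_trans = max(est_translation_weight, floor)
    w_transc = 1.0 - w_trans
    return w_transc * p_transcription + w_trans * p_translation

# Example: an estimated weight of 0.05 is raised to the 0.25 floor.
print(interpolate(p_transcription=0.012, p_translation=0.004,
                  est_translation_weight=0.05, floor=0.25))
```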
Language Model Bootstrapping Using Neural Machine Translation for Conversational Speech Recognition
1912.00958
Table 1: Relative WERRs (%) with different post-editing techniques. Perplexity (PPL) is evaluated on a held-out in-domain dataset. Relative WERR captures the WER reduction w.r.t baseline trained on transcribed data only.
['[BOLD] Postprocessing', '[BOLD] Approach', '[BOLD] PPL', '[BOLD] Relative WERR %']
[['None', 'Raw translations', '11941.08', '-1.81'], ['Post-editing', 'NE copy-over', '2889.45', '2.36'], ['[EMPTY]', 'NE resampling', '1241.52', '4.62'], ['[EMPTY]', 'Code mixing + NE resampling', '936.64', '5.83']]
The approach of ingesting raw translations without any postprocessing yields a high perplexity translation component, resulting in negative WERR. We observe consistent improvements by introducing attention weight based post-editing. NE copy-over alone reduces the perplexity significantly. This, coupled with NE resampling and code mixing, results in a 5.83% WERR.
Language Model Bootstrapping Using Neural Machine Translation for Conversational Speech Recognition
1912.00958
Table 2: Relative WERRs (%) with different NMT adaptation strategies. Note that these results include the effect of NE resampling and code mixing techniques.
['[BOLD] Adaptation', '[BOLD] Approach', '[BOLD] PPL', '[BOLD] Relative WERR %']
[['Data selection', 'Unweighted avg. (BLEU: 29.1)', '662.33', '6.94'], ['(BLEU original model: 43.8)', 'SIF (BLEU: 37.8)', '686.97', '7.23'], ['[EMPTY]', 'LASER (BLEU: 37.4)', '704.12', '7.14'], ['Rescoring', 'beam-size=5', '792.92', '6.28'], ['[EMPTY]', 'beam-size=20', '852.16', '5.88'], ['Model finetuning', 'n-epochs=3', '726.62', '6.84'], ['[EMPTY]', 'n-epochs=10', '983.64', '5.23'], ['Filtering translations', 'MT score - top 85%', '1109.44', '4.82'], ['[EMPTY]', 'MT score - top 75%', '1327.56', '3.37'], ['[EMPTY]', 'MT score - top 65%', '1426.18', '2.16'], ['[EMPTY]', 'SLM score - top 85%', '793.73', '6.33'], ['[EMPTY]', 'SLM score - top 75%', '892.92', '6.82'], ['[EMPTY]', 'SLM score - top 65%', '878.16', '5.94'], ['Combined', '(i) SIF selection + Rescoring + SLM score - top 75%', '584.24', '7.62'], ['[EMPTY]', '(i) + Model finetuning', '564.06', '7.86']]
For MT training data selection, we retain only the top 25% (out of 8.4M) sentences with respect to their relative similarity to in-domain data. This reduction in training data impacts the BLEU score adversely. Amongst the sentence representation techniques, LASER and SIF embeddings outperform the unweighted averaging approach in terms of BLEU score. Interestingly, while unweighted averaging achieves the lowest perplexity on the held-out in-domain dataset, the gains do not carry over to overall ASR performance. SIF embedding based selection achieves the highest WERR of 7.23%, followed closely by the LASER encoder representation.
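A sketch of similarity-based data selection of this kind, scoring each candidate MT training sentence by cosine similarity between its embedding and the centroid of in-domain sentence embeddings and keeping the top fraction; the centroid-based scoring and the precomputed embeddings (e.g. SIF or LASER vectors) are assumptions:

```python
import numpy as np

def select_top_fraction(candidate_embs, in_domain_embs, fraction=0.25):
    """Return indices of the candidate sentences most similar to the
    in-domain centroid, keeping the given fraction."""
    centroid = in_domain_embs.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    cand = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    sims = cand @ centroid                      # cosine similarity per sentence
    k = int(len(sims) * fraction)
    return np.argsort(-sims)[:k]

# Toy example: 8 candidate sentences, 4 in-domain sentences, 16-dim embeddings.
rng = np.random.default_rng(0)
keep = select_top_fraction(rng.normal(size=(8, 16)), rng.normal(size=(4, 16)))
print(keep)
```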
Language Model Bootstrapping Using Neural Machine Translation for Conversational Speech Recognition
1912.00958
Table 3: Relative WERR % by interaction scenario, along with the extent of coverage in transcribed collections. The named entity proportion in the utterances is given by NE %. Adaptation contribution % captures the relative contribution of adaptation towards WERR. For example, with a 5.75% post-editing WERR, adaptation yields an additional 1.51% WERR towards a combined WERR of 7.26%, i.e. 20.80%.
['[BOLD] Coverage', '[BOLD] Interaction', '[BOLD] Post-editing', '[BOLD] Combined', '[BOLD] Adaptation', '[BOLD] NE%']
[['[BOLD] (In transcribed collections)', '[BOLD] scenario', '[BOLD] WERR %', '[BOLD] WERR %', '[BOLD] contribution %', '[EMPTY]'], ['Low', 'Books', '5.75', '7.26', '20.80', '34.74'], ['[EMPTY]', 'Communication', '3.82', '5.98', '36.12', '11.19'], ['[EMPTY]', 'Weather', '3.23', '6.85', '52.84', '7.63'], ['[EMPTY]', 'Shopping', '7.86', '10.84', '27.49', '52.94'], ['Moderate', 'Knowledge', '6.36', '9.54', '33.34', '31.60'], ['[EMPTY]', 'Video', '6.44', '8.52', '24.41', '39.36'], ['[EMPTY]', 'Home Automation', '5.68', '7.94', '22.63', '5.81'], ['High', 'Notifications', '4.65', '7.06', '34.14', '5.66'], ['[EMPTY]', 'Music', '5.74', '7.48', '23.26', '47.62']]
Cleo prompts cover multiple interaction use cases. In order to derive fine-grained insights into the effect of translations, we study WERR on test utterances manually categorized into scenarios. In order to isolate the gains obtained from post-editing and adaptation, we report both the post-editing WERR and the combined WERR. We also analyse the proportion of named entities in the utterances belonging to each of these scenarios. The following observations can be made from this analysis.
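Assuming the definition given in the table caption's example, the adaptation contribution can be written out as a worked equation:

```latex
\text{adaptation contribution}
  \;=\; \frac{\text{WERR}_{\text{combined}} - \text{WERR}_{\text{post-edit}}}{\text{WERR}_{\text{combined}}}
  \;=\; \frac{7.26 - 5.75}{7.26}
  \;\approx\; 20.80\%
```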
Language Model Bootstrapping Using Neural Machine Translation for Conversational Speech Recognition
1912.00958
Table 5: Relative WERRs (%) with varying levels of in-domain transcribed data.
['[BOLD] Transcribed', '[BOLD] WERR %']
[['[BOLD] Volume', '[EMPTY]'], ['10K', '15.65'], ['20K', '13.18'], ['50K', '9.42'], ['100K', '8.98'], ['200K', '7.86']]
We now attempt to address the following question: what are the relative gains provided by the translation data during different phases of bootstrapping? In particular, we measure the WERR between the baseline and translation-augmented LMs while varying the amount of in-domain transcribed utterances from 10K to 200K. We observe that the combined WERR after post-editing and adaptation increases from 7.86% to 15.65% as the amount of in-domain data is reduced. Note that in this experiment we use the same AM trained on 180 hours of data, in order to precisely study the effect of data augmentation for the LM. The WERR we report is hence an underestimate and would probably be much higher if the AM were trained with similar levels of transcribed data.
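For reference, the relative WERR used throughout is presumably the standard relative word error rate reduction with respect to the transcription-only baseline (consistent with the caption of Table 1):

```latex
\text{WERR}
  \;=\; \frac{\text{WER}_{\text{baseline}} - \text{WER}_{\text{translation-augmented}}}{\text{WER}_{\text{baseline}}}
  \times 100\%
```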
Neural Multi-task Learning in Automated Assessment
1801.06830
Table 1: Mapping of FCE to essay scores
['Exam Score', 'Essay Score']
[['1.1', '1'], ['1.2', '4'], ['1.3', '8'], ['2.1', '9'], ['2.2', '10'], ['2.3', '11'], ['3.1', '12'], ['..', '..'], ['..', '..'], ['5.3', '20']]
In particular, we modified the neural network such that the outputs of the hidden states in each direction are concatenated and averaged over all time-steps before being fed into the AES output layer. The AES output layer maps this concatenated vector to a single value, feeds this value into a sigmoid function and scales it appropriately. The AES loss used is the squared error between the predicted score and the gold-standard essay score. We use the public FCE dataset for our experiments. The dataset contains short essays written by ESL learners for the First Certificate in English (FCE), an upper-intermediate level exam. To our knowledge, this is the only public dataset that has annotations for both GED and AES on the same texts. The dataset contains 1244 scripts, where each script contains two essays. Our final training, development, and test sets contain 2059, 198, and 194 essays respectively.
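A minimal PyTorch-style sketch of the AES head described above, assuming BiLSTM outputs of shape (batch, time, 2*hidden) and an essay score range of 1 to 20 (the score range, layer sizes and shapes are assumptions made for illustration):

```python
import torch
import torch.nn as nn

class AESHead(nn.Module):
    """Average the concatenated forward/backward BiLSTM states over time,
    map to a scalar, squash with a sigmoid and scale to the score range."""
    def __init__(self, hidden_size, min_score=1.0, max_score=20.0):
        super().__init__()
        self.linear = nn.Linear(2 * hidden_size, 1)
        self.min_score, self.max_score = min_score, max_score

    def forward(self, bilstm_out):               # (batch, time, 2*hidden)
        pooled = bilstm_out.mean(dim=1)          # average over all time-steps
        score = torch.sigmoid(self.linear(pooled)).squeeze(-1)
        return self.min_score + score * (self.max_score - self.min_score)

head = AESHead(hidden_size=128)
pred = head(torch.randn(4, 50, 256))             # 4 essays, 50 tokens each
loss = nn.MSELoss()(pred, torch.tensor([8.0, 12.0, 4.0, 20.0]))  # squared-error AES loss
```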
Neural Multi-task Learning in Automated Assessment
1801.06830
Table 2: Performance of BiLSTM model on public-FCE test set for GED with multi-task training objectives (cumulatively) for entire essay contexts.
['Error Detection Cost', 'Error Detection Precision', 'Error Detection Recall', 'Error Detection [ITALIC] F0.5']
[['[ITALIC] Eged', '0.492', '0.251', '0.413'], ['+ [ITALIC] Elm', '0.588', '0.221', '0.442'], ['+ [ITALIC] Eaes', '0.543', '0.265', '0.449']]
Table 2 shows the effect of cumulatively adding the training objectives, including the AES objective (+ Eaes), for the GED task. We see a small but statistically insignificant increase in performance (F0.5) when using the AES training objective.
Neural Multi-task Learning in Automated Assessment
1801.06830
Table 3: Performance of BiLSTM AES task on public-FCE test set with and without multi-task training objectives (cumulatively) for entire essay context. * indicates that the results are statistically significant when compared to + Elm
['Automated Essay Scoring Cost', 'Automated Essay Scoring Spearman', 'Automated Essay Scoring QWK']
[['[ITALIC] Eaes', '0.334', '0.324'], ['+ [ITALIC] Elm', '0.376', '0.347'], ['+ [ITALIC] Eged', '0.537*', '0.459*']]
In particular, we see that the semi-supervised language modelling objective (Elm) improves the performance of the model. However, more strikingly we see that when the GED objective is added it increases the performance of our model on the AES task substantially. This is an encouraging result as it means that a neural model can utilise multiple signals in an end-to-end manner to improve AES.
Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine Translation
1809.04686
Table 4: Results of the control experiment on zero-shot performance on the Amazon German test set.
['[BOLD] Model', 'Amazon (De)']
[['Zero-shot [ITALIC] Encoder-Classifier', '52.33'], ['+ Pre-trained Encoder', '52.98'], ['+ Freeze Encoder', '57.72']]
The third row of the table shows only a small deviation of about 7% over random, which likely comes from common sub-words having similar meanings across languages. This control experiment suggests that although having a shared sub-word vocabulary is necessary, we still need to train the NMT system on parallel data from the language of interest for the system to perform zero-shot classification.
Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine Translation
1809.04686
Table 1: Transfer learning results of the classification accuracy on all the datasets. Amazon (En) and Amazon (Fr) are the English and French versions of the task, training the models on the data for each language. The state-of-the-art results are cited from ijcai2018-802 for both Amazon Reviews tasks and from NIPS2017_7209 for SST and SNLI.
['[BOLD] Model', 'Amazon (En)', 'Amazon (Fr)', 'SST (En)', 'SNLI (En)']
[['Proposed model: [ITALIC] Encoder-Classifier', '76.60', '82.50', '79.63', '76.70'], ['+ Pre-trained Encoder', '80.70', '83.18', '84.18', '84.42'], ['+ Freeze Encoder', '84.13', '85.65', '84.51', '84.41'], ['State-of-the-art Models', '83.50', '87.50', '90.30', '88.10']]
The first row in the table shows the baseline accuracy of our system for all four datasets. The second row shows the result of initializing with a pre-trained multilingual NMT encoder. It can be seen that this provides a significant improvement in accuracy, an average of 4.63% across all the tasks. This illustrates that the multilingual NMT encoder has successfully learned transferable contextualized representations that are leveraged by the classifier component of our proposed system. These results are in line with \citeauthor{NIPS2017_7209} \shortcite{NIPS2017_7209}, where the authors used the representations from the top NMT encoder layer as an additional input to the task-specific system. However, in our setup we reuse all of the layers of the encoder as a single pre-trained component in the task-specific system. The third row shows the results of freezing the pre-trained encoder after initialization and training only the classifier component. For the Amazon English and French tasks, freezing the encoder after initialization significantly improves performance further. We hypothesize that since the Amazon dataset is a document-level classification task, the long input sequences are very different from the short sequences consumed by the NMT system, and hence freezing the encoder has a positive effect. This hypothesis is also supported by the SNLI and SST results, which contain sentence-level input sequences, where we did not find any significant difference between freezing and not freezing the encoder. In this section, we explore the zero-shot classification task in French for our systems. We assume that we do not have any French training data for the three tasks and test how well our proposed method generalizes to the unseen French language without any further training. A reasonable upper bound to which zero-shot performance should be compared is bridging: translating a French test text to English and then applying the English classifier to the translated text. If we assume the translation to be perfect, we should expect this approach to perform as well as the English classifier. The Amazon Reviews and SNLI tasks have a French test set available, and we evaluate the performance of the bridged and zero-shot systems on each French set. For SST, which has no French test set, the machine-translated (‘pseudo French’) test set is used to evaluate zero-shot performance, while the English accuracy is reported in the bridged column. We do this since translating the ‘pseudo French’ back to English would involve two distinct translation steps and hence more errors.
Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine Translation
1809.04686
Table 2: Zero-Shot performance on all French test sets. ∗Note that we use the English accuracy in the bridged column for SST.
['[BOLD] Model', 'Amazon (Fr) Bridged', 'Amazon (Fr) Zero-Shot', 'SST (Fr) Bridged∗', 'SST (Fr) Zero-Shot', 'SNLI (Fr) Bridged', 'SNLI (Fr) Zero-Shot']
[['Proposed model: [ITALIC] Encoder-Classifier', '73.30', '51.53', '79.63', '59.47', '74.41', '37.62'], ['+ Pre-trained Encoder', '79.23', '75.78', '84.18', '81.05', '80.65', '72.35'], ['+ Freeze Encoder', '83.10', '81.32', '84.51', '83.14', '81.26', '73.88']]
It can be seen that just by using the pre-trained NMT encoder, the zero-shot performance increases drastically from almost random to within 10% of the bridged system. Freezing the encoder further pushes this performance closer to the bridged system. On the Amazon Review task, our zero-shot system is within 2% of the best bridged system. On the SST task, our zero-shot system obtains an accuracy of 83.14% which is within 1.5% of the bridged equivalent (in this case the English system).
Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine Translation
1809.04686
Table 3: Comparison of our best zero-shot result on the French SNLI test set to other baselines. See text for details.
['[BOLD] Model', 'SNLI (Fr)']
[['Our best zero-shot [ITALIC] Encoder-Classifier', '[BOLD] 73.88'], ['INVERT\xa0', '62.60'], ['BiCVM\xa0', '59.03'], ['RANDOM\xa0', '63.21'], ['RATIO\xa0', '58.64']]
Finally, on SNLI, we compare our best zero-shot system with the bilingual and multilingual embedding based methods (INVERT, BiCVM, RANDOM and RATIO) evaluated on the same French test set in \citeauthor{lrec2018} \shortcite{lrec2018}. Our system significantly outperforms all of these methods, by 10.66% to 15.24%, demonstrating the effectiveness of our proposed approach.
Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine Translation
1809.04686
Table 5: Effect of machine translation data on our proposed Encoder-Classifier on the SNLI tasks. The SNLI (Fr) results show the zero-shot performance of our system.
['[BOLD] Parallel data type for NMT', 'SNLI (En)', 'SNLI (Fr)']
[['Symmetric data (full)', '84.13', '73.88'], ['Symmetric data (half)', '80.79', '66.72'], ['Asymmetric data (half)', '81.15', '67.63']]
We explore two dimensions of the multilingual NMT training data that could affect zero-shot performance. First, we investigate the effect of using symmetric training data to train both directions in the multilingual NMT system: we take half of the sentences from the En→Fr training set and use the swapped version of the other half for training the model. Second, we want to see the effect of training data size, so we run an experiment where we use only half of the training set in a symmetric fashion. Halving the training data reduces both English and zero-shot French accuracy. However, the symmetric and asymmetric versions of the data perform comparably on both tasks. This shows that the multilingual NMT system is able to learn an effective interlingua without needing symmetric data across the language pairs involved.
Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine Translation
1809.04686
Table 6: Zero-shot analyses of classifier network model capacity. The SNLI (Fr) results report the zero-shot performance.
['[BOLD] Encoder components', 'Simpler classifier SNLI (En)', 'Simpler classifier SNLI (Fr)', 'Complex classifier SNLI (En)', 'Complex classifier SNLI (Fr)']
[['Embeddings only', '65.18', '49.66', '82.43', '56.66'], ['+ bi-directional layer 1', '67.99', '58.19', '83.40', '64.74'], ['+ layer 2', '67.00', '61.01', '83.63', '72.81'], ['+ layer 3', '67.26', '60.55', '84.17', '74.33'], ['+ layer 4', '67.26', '61.61', '84.41', '74.11']]
Effect of Encoder/Classifier Capacity. We study the effect of the capacity of the two parts of our model on the final accuracies. Next, we experimented with reusing different parts of the multilingual encoder in a bottom-up fashion. It can be seen that, as expected, going from a simple linear classifier to a complex classifier significantly improves both English and zero-shot French performance on the SNLI tasks. However, even a simple linear classifier can achieve significant zero-shot performance when provided with rich enough encodings (from 49.66 to 61.61 accuracy). Changing the encoder capacity tells an interesting story: as we selectively reuse parts of the encoder from the embedding layer upwards, the English performance increases by only about 2%, whereas the zero-shot performance increases by about 18% at most with the complex classifier. This means that the additional layers in the encoder are essential for the proposed system to model a language-agnostic representation (interlingua) which enables it to perform better zero-shot classification. Moreover, it should be noted that the best zero-shot performance is obtained using the complex classifier and up to layer 3 of the encoder. Although this gap is not big enough to be significant, we hypothesize that the top layer of the encoder could be very specific to the MT task and hence might not be best suited for zero-shot classification.
Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine Translation
1809.04686
Table 7: Effect of parameter smoothing on the English SNLI test set and zero-shot performance on the French test set.
['Smoothing Range (steps)', 'SNLI (En)', 'SNLI (Fr)']
[['1', '84.41', '74.11'], ['400', '84.62', '75.02'], ['1K', '84.67', '75.48'], ['20K', '84.65', '75.93'], ['35K', '84.46', '75.63']]
Parameter smoothing is a technique which aims to smooth point estimates of the learned parameters by averaging them over the last n steps of the training run and using the average for inference. This is aimed at improving generalization and being less susceptible to the effects of over-fitting at inference time. We hypothesize that a system with enhanced generalization might be better suited for zero-shot classification, since zero-shot classification is a measure of the ability of the model to generalize to a new task.
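One common realization of this idea is checkpoint averaging, sketched below: the parameter tensors of the checkpoints saved over the smoothing range are averaged before inference. The checkpoint format and file names are assumptions, and the paper's smoothing may instead use a running average over steps.

```python
import torch

def average_checkpoints(paths):
    """Average the parameter tensors of several saved checkpoints
    (e.g. those written during the last 20K training steps)."""
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    return {k: v / len(paths) for k, v in avg.items()}

# Hypothetical usage with illustrative checkpoint names:
# smoothed = average_checkpoints(["ckpt_180k.pt", "ckpt_190k.pt", "ckpt_200k.pt"])
# model.load_state_dict(smoothed)   # use the smoothed parameters for inference
```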
Towards Controllable and Personalized Review Generation
1910.03506
Table 3: Confusion Matrix of Empirical Test
['Empirical Test', 'Actual Value Human-Written', 'Actual Value RevGAN']
[['Human', '119', '61'], ['Machine', '102', '78']]
Besides the statistical and semantic metrics, we also design an empirical study to test the personalized performance of our generated reviews. We randomly select 15 reviews to include in each questionnaire, 5 from the original dataset, 5 from RevGAN results generated with personalization, and 5 from RevGAN results generated without personalization, and ask participants to judge which ones are generated by a machine and which ones are really written by humans. They are also asked to assess the helpfulness of each review on a scale from 1 to 5. To test whether the RevGAN generated reviews are indeed statistically indistinguishable from the original ones, we run a chi-squared test for significance: χ2=0.012.
Towards Controllable and Personalized Review Generation
1910.03506
Table 1: Comparison of Experimental Results on Amazon Review Dataset (** stands for significance under 99% confidence, * stands for 95% confidence)
['Models', 'Log-Likelihood', 'WMD', 'PPL', 'BLEU-4(%)', 'ROUGE-L(%)']
[['SeqGAN', '-86699', '1.869', '22.60', '15.06', '38.30'], ['LeakGAN', '-108581', '2.324', '24.09', '14.98', '37.73'], ['RankGAN', '-73309', '1.862', '22.45', '14.92', '37.72'], ['charRNN', '-100430', '1.976', '22.07', '11.46', '33.60'], ['MLE', '-54338', '2.106', '17.15', '9.62', '31.89'], ['Attr2Seq', '-56298', '2.077', '21.00', '11.48', '32.22'], ['RevGAN+CD', '-80386', '2.097', '19.71', '21.32', '39.47'], ['RevGAN+CD+SA', '-51549', '2.030', '17.45', '24.44', '41.32'], ['RevGAN+CD+SA+PD', '[BOLD] -34305**', '[BOLD] 1.762**', '[BOLD] 17.00*', '[BOLD] 27.16**', '[BOLD] 44.63**']]
To illustrate the superiority and generalizability of our RevGAN model, we evaluate it on three different domains of the Amazon Review Dataset: musical instruments, automotive and patio products. On average, we observe a 5% improvement in Word Mover's Distance (WMD), an 80% improvement in BLEU and a 10% rise in ROUGE. Besides, the comparison between different variations of the RevGAN model verifies that the combination of all three novel components indeed gives the best generation performance. Using the bootstrap re-sampling techniques introduced in the previous section, we conduct hypothesis tests, all of which confirm the significant improvement of our RevGAN model. In that sense, we claim that our model achieves state-of-the-art results on review generation. We also showcase some generated reviews at the end of this section.
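A sketch of a generic paired bootstrap resampling test of the kind referred to above, estimating how often the full model fails to beat a baseline on resampled test sets; the metric function, sample scores and one-sided formulation are placeholders, not the paper's exact procedure:

```python
import random

def paired_bootstrap(scores_full, scores_base, n_resamples=10000, seed=0):
    """Fraction of bootstrap resamples in which the full model's total
    per-example score is not better than the baseline's (a one-sided p-value)."""
    rng = random.Random(seed)
    n, worse = len(scores_full), 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        if sum(scores_full[i] for i in idx) <= sum(scores_base[i] for i in idx):
            worse += 1
    return worse / n_resamples

# Placeholder per-review BLEU-4 scores for the full model and a baseline.
full = [0.29, 0.31, 0.24, 0.33, 0.27, 0.30, 0.26, 0.32]
base = [0.15, 0.16, 0.14, 0.18, 0.13, 0.17, 0.15, 0.16]
print(paired_bootstrap(full, base))
```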
Towards Controllable and Personalized Review Generation
1910.03506
Table 3: Confusion Matrix of Empirical Test
['Empirical Test', 'Actual Value Human-Written', 'Actual Value RevGAN+PD']
[['Human', '119', '61'], ['Machine', '118', '62']]
Besides the statistical and semantic metrics, we also design an empirical study to test the personalized performance of our generated reviews. We randomly select 15 reviews to include in each questionnaire, 5 from the original dataset, 5 from RevGAN results generated with personalization, and 5 from RevGAN results generated without personalization, and ask participants to judge which ones are generated by a machine and which ones are really written by humans. They are also asked to assess the helpfulness of each review on a scale from 1 to 5. To test whether the RevGAN generated reviews are indeed statistically indistinguishable from the original ones, we run a chi-squared test for significance: χ2=0.012.
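The reported χ² can be reproduced from this confusion matrix with a standard chi-squared independence test; the SciPy call below (without Yates' continuity correction) is only one way to run it and is an assumption about the tooling, not the paper's code:

```python
from scipy.stats import chi2_contingency

# Rows: rater judgment (Human / Machine); columns: actual source (Human-written / RevGAN+PD).
table = [[119, 61],
         [118, 62]]
chi2, p_value, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 3), round(p_value, 3))   # chi2 ≈ 0.012, i.e. no significant difference
```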
Graph Sequential Network for Reasoning over Sequences
2004.02001
Table 2: Results comparison with average and standard deviation of five runs. “ANS”, “SUP” and “JOINT” indicate the jointly trained models’ performance in terms of measurements on answer span prediction, supporting sentence prediction and joint tasks.
['[EMPTY]', 'ANS EM', 'ANS [ITALIC] F1', 'SUP EM', 'SUP [ITALIC] F1', 'JOINT EM', 'JOINT [ITALIC] F1']
[['baseline', '62.99±0.16', '76.90±0.31', '61.35±0.17', '88.73±0.09', '41.76±0.40', '69.64±0.28'], ['GSN', '63.56±0.31', '77.26±0.11', '63.26±0.16', '89.35±0.04', '43.51±0.27', '70.43±0.14']]
Compared to the baseline model, larger improvements are attained than in the single-task setting, especially for the supporting sentence prediction task. With joint training, the performance on answer span prediction drops while the performance on supporting sentence prediction increases. In fact, we observed better joint EM and F1 scores with joint training than with separate training for both the baseline models and the GSN-based models. Thus, joint training still boosts the overall performance, because the jointly trained models find the correct answer and supporting sentences of a question simultaneously.
Graph Sequential Network for Reasoning over Sequences
2004.02001
Table 1: Results comparison with average and standard deviation of five runs. “ANS-only” and “SUP-only” indicate the model is only trained on two separate tasks.
['[EMPTY]', 'ANS-only EM', 'ANS-only [ITALIC] F1', 'SUP-only EM', 'SUP-only [ITALIC] F1']
[['baseline', '63.87±0.16', '77.69±0.16', '62.14±0.16', '88.94±0.07'], ['GSN', '64.39±0.06', '78.27±0.10', '62.96±0.14', '89.29±0.07']]
4.2.2 Results. We report the results using the best hyperparameters for each experimental setting. The results show that the proposed GSN-based models perform better on both tasks, with strong statistical significance, compared to the baseline GCN-based model. The improvement in EM score is slightly more pronounced, indicating that GSN-based models are better at finding the complete answer span or supporting sentences than the baseline models. When jointly training both tasks, the 2-layer GSN gives the best performance. This pattern is reasonable because the powerful BERT-based encoder likely learns good token-level contextual representations. These representations benefit the answer prediction task, which is also at the token level, so less graph-based reasoning is required; similar observations have been made in some recent studies (Min et al.). However, this is not the case for supporting sentence prediction, as it requires the model to find sentences that can be far from each other in the context. In contrast, our proposed GSN is suitable for modeling the relational information among sentences, and thus we observe that more layers give better results for supporting sentence prediction.
Graph Sequential Network for Reasoning over Sequences
2004.02001
Table 5: Results on FEVER development and test sets.
['[EMPTY]', 'dev ACC', 'dev FEVER', 'test ACC', 'test FEVER']
[['baseline Zhou et\xa0al. ( 2019 )', '73.67', '68.69', '71.01', '65.64'], ['baseline (ours)', '73.72', '69.26', '70.80', '65.88'], ['GSN', '[BOLD] 74.89', '[BOLD] 70.51', '[BOLD] 72.00', '[BOLD] 67.13']]
Our re-implementation of the baseline system obtains slightly better numbers than those reported by Zhou et al., and our proposed GSN-based system improves over the baseline by more than 1% in terms of both ACC and FEVER score. Since the fact verification task can be regarded as a graph classification task, this further shows that the proposed GSN achieves better performance than GNN-based models when the graph nodes are sequences.
Gaussian Mixture Latent Vector Grammars
1805.04688
Table 2: Token accuracy (T) and sentence accuracy (S) for POS tagging on the testing data.
['Model', 'WSJ T', 'WSJ S', 'English T', 'English S', 'French T', 'French S', 'German T', 'German S', 'Russian T', 'Russian S', 'Spanish T', 'Spanish S', 'Indonesian T', 'Indonesian S', 'Finnish T', 'Finnish S', 'Italian T', 'Italian S']
[['LVG-D-16', '96.62', '48.74', '92.31', '52.67', '93.75', '34.90', '87.38', '20.98', '81.91', '12.25', '92.47', '24.82', '89.27', '20.29', '83.81', '19.29', '94.81', '45.19'], ['LVG-G-16', '96.78', '50.88', '93.30', '57.54', '94.52', '34.90', '88.92', '24.05', '84.03', '16.63', '93.21', '27.37', '90.09', '21.19', '85.01', '20.53', '95.46', '48.26'], ['GM-LVeG-D', '96.99', '53.10', '[BOLD] 93.66', '[BOLD] 59.46', '94.73', '[BOLD] 39.60', '89.11', '24.77', '[BOLD] 84.21', '17.84', '[BOLD] 93.76', '[BOLD] 32.48', '[BOLD] 90.24', '[BOLD] 21.72', '85.27', '[BOLD] 23.30', '95.61', '[BOLD] 50.72'], ['GM-LVeG-S', '[BOLD] 97.00', '[BOLD] 53.11', '93.55', '58.11', '[BOLD] 94.74', '39.26', '[BOLD] 89.14', '[BOLD] 25.58', '84.06', '[BOLD] 18.44', '93.52', '30.66', '90.12', '[BOLD] 21.72', '[BOLD] 85.35', '22.07', '[BOLD] 95.62', '49.69']]
It can be seen that, on all the testing data, GM-LVeGs consistently surpass LVGs in terms of both token accuracy and sentence accuracy. GM-LVeG-D is slightly better than GM-LVeG-S in sentence accuracy, producing the best sentence accuracy on 5 of the 9 testing datasets, while GM-LVeG-S performs slightly better than GM-LVeG-D in token accuracy on 5 of the 9 datasets. Overall, there is no significant difference between GM-LVeG-D and GM-LVeG-S. However, GM-LVeG-S admits more efficient learning than GM-LVeG-D in practice since it has fewer parameters.
Gaussian Mixture Latent Vector Grammars
1805.04688
Table 3: Parsing accuracy on the testing data of WSJ. EX indicates the exact match score.
['Model', 'dev (all) F1', 'test≤40 F1', 'test≤40 EX', 'test (all) F1', 'test (all) EX']
[['LVG-G-16', '[EMPTY]', '[EMPTY]', '[EMPTY]', '88.70', '35.80'], ['LVG-D-16', '[EMPTY]', '[EMPTY]', '[EMPTY]', '89.30', '[BOLD] 39.40'], ['Multi-Scale', '[EMPTY]', '89.70', '39.60', '89.20', '37.20'], ['Berkeley Parser', '[EMPTY]', '90.60', '39.10', '90.10', '37.10'], ['CVG (SU-RNN)', '91.20', '91.10', '[EMPTY]', '90.40', '[EMPTY]'], ['GM-LVeG-S', '[BOLD] 91.24', '[BOLD] 91.38', '[BOLD] 41.51', '[BOLD] 91.02', '39.24']]
It can be seen that GM-LVeG-S produces the best F1 scores on both the development data and the testing data. It surpasses the Berkeley parser by 0.92% in F1 score on the testing data. Its exact match score on the testing data is only slightly lower than that of LVG-D-16.
Gaussian Mixture Latent Vector Grammars
1805.04688
Table 7: Token accuracy (T) and sentence accuracy (S) for POS tagging on the testing data. The numerical postfix of each LVG model indicates the number of nonterminal subtypes, and hence LVG-G-1 denotes HMM.
['Model', 'WSJ T', 'WSJ S', 'English T', 'English S', 'French T', 'French S', 'German T', 'German S', 'Russian T', 'Russian S', 'Spanish T', 'Spanish S', 'Indonesian T', 'Indonesian S', 'Finnish T', 'Finnish S', 'Italian T', 'Italian S']
[['LVG-D-1', '96.50', '48.04', '91.80', '50.79', '93.55', '30.20', '86.52', '16.99', '81.21', '9.24', '91.79', '22.63', '89.08', '18.85', '83.15', '16.82', '94.00', '37.42'], ['LVG-D-2', '96.57', '47.60', '92.17', '52.05', '93.86', '33.56', '86.93', '18.32', '81.46', '10.04', '92.10', '24.82', '89.16', '19.21', '83.34', '18.52', '94.45', '40.90'], ['LVG-D-4', '96.57', '48.76', '92.30', '52.34', '93.96', '34.90', '87.18', '19.86', '81.95', '11.85', '92.37', '24.82', '89.28', '19.57', '83.76', '18.83', '94.60', '42.54'], ['LVG-D-8', '96.60', '49.14', '92.31', '53.06', '93.78', '34.90', '87.52', '21.60', '81.54', '11.25', '92.26', '23.72', '89.23', '19.39', '83.68', '18.67', '94.70', '42.95'], ['LVG-D-16', '96.62', '48.74', '92.31', '52.67', '93.75', '34.90', '87.38', '20.98', '81.91', '12.25', '92.47', '24.82', '89.27', '20.29', '83.81', '19.29', '94.81', '45.19'], ['LVG-G-1', '96.11', '43.68', '90.84', '44.92', '92.69', '26.51', '86.71', '17.40', '81.22', '10.22', '91.85', '22.63', '88.93', '18.31', '82.94', '16.36', '93.64', '33.74'], ['LVG-G-2', '96.27', '45.57', '92.11', '51.37', '93.28', '28.19', '87.87', '19.86', '81.51', '11.45', '92.29', '23.36', '89.19', '18.49', '83.29', '17.44', '94.20', '38.45'], ['LVG-G-4', '96.50', '48.19', '92.90', '54.31', '94.06', '32.55', '88.31', '20.78', '82.64', '11.85', '92.58', '24.45', '89.58', '19.03', '83.76', '19.44', '95.00', '45.40'], ['LVG-G-8', '96.76', '50.38', '93.29', '56.67', '94.57', '37.25', '88.75', '21.70', '82.85', '14.86', '92.95', '29.20', '89.78', '20.29', '84.69', '21.76', '95.42', '46.83'], ['LVG-G-16', '96.78', '50.88', '93.30', '57.54', '94.52', '34.90', '88.92', '24.05', '84.03', '16.63', '93.21', '27.37', '90.09', '21.19', '85.01', '20.53', '95.46', '48.26'], ['GM-LVeG-D', '96.99', '53.10', '[BOLD] 93.66', '[BOLD] 59.46', '94.73', '[BOLD] 39.60', '89.11', '24.77', '[BOLD] 84.21', '17.84', '[BOLD] 93.76', '[BOLD] 32.48', '[BOLD] 90.24', '[BOLD] 21.72', '85.27', '[BOLD] 23.30', '95.61', '[BOLD] 50.72'], ['GM-LVeG-S', '[BOLD] 97.00', '[BOLD] 53.11', '93.55', '58.11', '[BOLD] 94.74', '39.26', '[BOLD] 89.14', '[BOLD] 25.58', '84.06', '[BOLD] 18.44', '93.52', '30.66', '90.12', '[BOLD] 21.72', '[BOLD] 85.35', '22.07', '[BOLD] 95.62', '49.69']]
In addition to the results shown in the paper, this table includes the tagging results of LVGs with 1, 2, 4, 8 subtypes for each nonterminal.
A Benchmark Dataset of Check-worthy Factual Claims
2004.14425
Table 3: Distribution of sentences over classes
['Assigned label', '#sent', '%']
[['CFS', '5,318', '23.87'], ['UFS', '2,328', '10.45'], ['NFS', '14,635', '65.68'], ['total', '22,281', '100.00']]
We collected 88,313 labels, among which 62,404 (70.6%) are from top-quality participants. There are 22,281 (99.02%) sentences which satisfy the above stopping condition. The remaining 220 sentences, although they received many responses from top-quality participants, did not reach the labeling agreement required by the stopping condition. We assign each sentence the label with the majority count.
Energy-Based Models for Text
2004.10188
Table 9: Validation and test perplexity on CC-News and Toronto Book Corpus. * denotes models initialized with RoBERTa trained on additional data. The joint model perplexity ranges are estimated using 100,000 samples, see Eq. 5. The number of parameters of each model is shown in parentheses.
['Model (#parameters)', 'CC-News Val', 'CC-News Test', 'Toronto Book Corpus Val', 'Toronto Book Corpus Test']
[['base LM (203M)', '18.41', '17.57', '16.16', '18.29'], ['RALM (LM+203M)', '17.01', '16.17', '15.71', '17.85'], ['BALM (408M)', '16.50', '15.74', '15.00', '16.99'], ['joint UniT (LM+203M)', '16.42-16.44', '15.57-15.58', '15.12-15.13', '16.98-17.00'], ['joint BiT-Base (LM+125M)', '15.32-15.35', '14.61-14.64', '-', '-'], ['joint BiT-Base* (LM+125M)', '15.40-15.46', '14.75-14.76', '14.63-14.63', '16.36-16.37'], ['joint BiT-Base* (LM+125M)', '15.11-15.17', '14.37-14.42', '14.14-14.16', '15.72-15.74'], ['joint BiT-Large* (LM+355M)', '[BOLD] 14.59- [BOLD] 14.61', '[BOLD] 13.97- [BOLD] 14.00', '[BOLD] 13.80- [BOLD] 13.83', '[BOLD] 15.33- [BOLD] 15.36'], ['Base LM-24L (203M)', '15.71', '14.89', '15.61', '18.14'], ['RALM (LM-24L+203M)', '15.70', '14.89', '15.63', '18.17'], ['BALM-24L (408M)', '14.58', '13.92', '15.20', '18.24'], ['joint UniT (LM-24L+203M)', '14.59-14.61', '13.81-13.82', '15.12−15.16', '17.46-17.48'], ['joint BiT-Base (LM-24L+125M)', '13.68-13.69', '13.01-13.03', '-', '-'], ['joint BiT-Base* (LM-24L+125M)', '13.60-13.62', '12.93-12.95', '14.11-14.12', '16.17-16.18'], ['joint BiT-Med (LM-24L+203M)', '12.97-13.01', '12.38-12.42', '-', '-'], ['joint BiT-Large* (LM-24L+355M)', '[BOLD] 12.71- [BOLD] 12.77', '[BOLD] 12.10- [BOLD] 12.16', '[BOLD] 13.30- [BOLD] 13.34', '[BOLD] 15.17- [BOLD] 15.22']]
We can see that on both datasets, the residual EBM with causal attention (joint UniT) outperforms the baseline RALM with approximately the same number of parameters. The non-residual baseline BALM performs similarly to joint UniT, which might be due to the limitation that Pϕ is not trained jointly with the residual model in either joint UniT or RALM. However, with our EBM approach we can remove the causal attention mask and use bi-directional models, which achieves better performance than both baselines and joint UniT: without external data, joint BiT-Base reaches higher performance than joint UniT with fewer parameters. By initializing from the state-of-the-art pretrained bi-directional transformers RoBERTa-Base and RoBERTa-Large, joint BiT-Base* and joint BiT-Large* reach even better performance than joint BiT-Base.
Energy-Based Models for Text
2004.10188
Table 3: Number of parameters in millions for the discriminator. The computational cost is directly related to the number of parameters in other layers than the input embedding layer (second row).
['[EMPTY]', '[BOLD] Discriminators Linear', '[BOLD] Discriminators BiLSTM', '[BOLD] Discriminators BiLSTM Big', '[BOLD] Discriminators UniT', '[BOLD] Discriminators BiT']
[['embed.', '0.1', '26', '39', '51', '51'], ['others', '0', '23', '90', '151', '304'], ['total', '0.1', '49', '129', '203', '355']]
We use data-parallel synchronous multi-GPU training with up to 24 nodes, each with 8 Nvidia V100 GPUs. The Wikitext dataset has lower accuracy because the discriminator overfits on this smaller dataset.
Energy-Based Models for Text
2004.10188
Table 7: Cross-corpora generalization accuracy using TransfBig generator and UniT discriminator (except for the last row which used a bidirectional transformer). Each row specifies the corpora used at training time, Ctrain. Each column shows the corpus used at test time, Ctest.
['train corpora', 'test corpora Books', 'test corpora CCNews', 'test corpora Wiki']
[['Wiki', '70.9', '73.6', '76.4'], ['Books', '91.7', '63.5', '59.1'], ['Books + Wiki', '91.5', '73.6', '78.3'], ['CCNews', '60.6', '88.4', '65.5'], ['Books + CCNews', '90.4', '88.5', '68.3'], ['CCNews + Wiki', '73.5', '88.3', '81.0'], ['ALL (UniT)', '90.4', '88.5', '80.9'], ['ALL (BiT)', '94.1', '94.1', '-']]
We observe that models generalize less well across corpora; for instance, when testing on Wikitext a discriminator trained on either Books or CCNews, the accuracy is 59.1% or 65.5%, respectively. However, training on the union of two of the corpora gives a large benefit over training on just one or the other when testing on the third. To provide an automatic way to assess generation quality, we also tested the false positive rate, that is, the fraction of machine-generated samples that are deemed human-generated text, using the BiT model as the discriminator. We found that the baseline language model Pϕ (Base LM) has a false positive rate of 17.8%, while the joint language model Pθ (joint BiT-Med) has a much higher false positive rate of 31.8%.
Energy-Based Models for Text
2004.10188
Table 8: Generalization in the wild of the discriminator to unconditional generation from various GPT2 models (model size in parentheses, followed by sampling method used). Each row contains the accuracy on the corresponding test set. TF-IDF results are taken from Radford and Wu (2019). Results in parentheses are taken from https://openai.com/blog/gpt-2-1-5b-release/.
['Discriminator → Test setting →', 'TF-IDF∗ in-domain', 'BiT in-domain', 'BiT cross-architecture', 'BiT wild']
[['Small (137) top-k', '96.79', '99.09 (99.3)', '-', '93.25'], ['Small (137) temp=1', '88.29', '99.80', '-', '66.04'], ['Med (380) top-k', '95.22', '98.07 (98.5)', '97.37 (96.6)', '88.19'], ['Med (380) temp=1', '88.94', '99.43', '97.35', '55.06'], ['Big (762) top-k', '94.43', '96.50 (97.9)', '93.58 (90.9)', '83.88'], ['Big (762) temp=1', '77.16', '99.42', '95.96', '64.03'], ['Huge (1542) top-k', '94.43', '95.01 (96.0)', '90.17 (79.3)', '79.18'], ['Huge (1542) temp=1', '77.31', '99.00', '91.76', '61.29']]
In this case, we finetune the discriminator on the training set of each of the datasets, following the same protocol used by the provided TF-IDF baseline. We notice that the BiT discriminator has consistently superior performance, with an accuracy greater than 95%. While the discriminator still works much better than a random predictor, it lags behind the simple (in-domain) linear baseline. That suggests that matching the domain of the training set is more important than matching the model complexity.
Feature Generation for Robust Semantic Role Labeling
1702.07046
Table 1: Performance of our automatic feature selection vs prior work. In general our local model with automatic feature selection is a few points behind joint inference models but matches or exceeds other local inference models. Results for FrameNet are on top, Propbank below.
['[EMPTY]', 'Global', 'P', 'R', 'F1']
[['This work', '✗', '[BOLD] 73.9', '55.8', '63.6'], ['Das:2012 local', '✗', '67.7', '[BOLD] 59.8', '63.5'], ['Das:2012 constrained', '✓', '70.4', '59.5', '[BOLD] 64.6'], ['This work', '✗', '[BOLD] 87.5', '69.1', '77.2'], ['pradhan2013towards', '✗', '81.3', '70.5', '75.5'], ['pradhan2013towards (revised)', '✗', '78.5', '76.7', '77.5'], ['tackstrom-etal-2015', '✓', '80.6', '78.2', '79.4'], ['fitzgerald:2015:EMNLP', '✓', '80.9', '[BOLD] 78.4', '79.6'], ['ZhouXu15', '✓', '-', '-', '[BOLD] 81.3']]
Overall, our method seems to work about as well as experts manually designing features for SRL. Other systems achieve better performance, but those models all use global information, which is an issue orthogonal to the choice of local feature set.
Feature Generation for Robust Semantic Role Labeling
1702.07046
Table 2: Columns are the dataset used for training and testing; rows are the dataset used for feature selection.
['[EMPTY]', 'FN', 'PB']
[['FN', '63.6', '74.8'], ['PB', '61.7', '77.2']]
but it is another question whether this matters for system performance. It could be that there are many different feature sets which lead to good performance on either task/dataset, and only one is needed (possibly created manually). The performance on the diagonal is considerably higher, indicating empirically that there likely is not one universal “SRL feature set”. Weighting both datasets equally, the average increase in error due to this domain shift is 7.9%. This is the case even for FrameNet, where one might expect that selecting features on Propbank, a much larger resource, could yield gains because of much lower variance without much bias.
Feature Generation for Robust Semantic Role Labeling
1702.07046
Table 3: Columns show how many features were used for argument identification and rows show how many features were used for role classification. FrameNet (FN) is on top (β=10); Propbank (PB) is below (β=0.01).
['FN', '0', '320', '640', '1280']
[['0', '[EMPTY]', '50.1', '56.4', '61.5'], ['320', '54.7', '55.7', '58.8', '61.9'], ['640', '57.8', '59.6', '61.4', '62.4'], ['1280', '58.1', '59.5', '61.0', '63.6'], ['PB', '0', '320', '640', '1280'], ['0', '[EMPTY]', '59.9', '65.8', '73.6'], ['320', '61.3', '62.8', '70.2', '74.5'], ['640', '67.6', '68.1', '74.1', '75.3'], ['1280', '70.6', '72.8', '75.1', '77.2']]
Given that we can automatically generate feature sets, we can easily determine how adding or removing features from each stage will affect performance. This is useful for choosing a feature set which balances prediction-time cost against performance, a process that is labor intensive and error prone when done manually. The grid suggests that performance depends more on the argument identification features than on the role classification ones.
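As an illustration of how such a performance grid could be produced, the sketch below truncates ranked feature lists for the two stages to different budgets and records the score returned by a training-and-evaluation routine; the ranked lists and the evaluation function are hypothetical placeholders, not the authors' code.

    def feature_budget_grid(id_features, role_features, budgets, train_and_eval):
        # Return {(n_id, n_role): score} for ranked feature lists and budgets.
        grid = {}
        for n_id in budgets:
            for n_role in budgets:
                grid[(n_id, n_role)] = train_and_eval(id_features[:n_id],
                                                      role_features[:n_role])
        return grid

    def fake_eval(id_feats, role_feats):
        # Stand-in for training and evaluating the SRL model with the selected features.
        return len(id_feats) + len(role_feats)

    print(feature_budget_grid(list("abcd"), list("wxyz"), [0, 2, 4], fake_eval))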
Improving Fine-grained Entity Typing with Entity Linking
1909.12079
Table 1: Fine-grained entity typing performance. The performance of “Ours (DirectTrain)” on BBN is omitted since this dataset does not have fine-grained types for person.
['Dataset Approach', 'FIGER (GOLD) Accuracy', 'FIGER (GOLD) Macro F1', 'FIGER (GOLD) Micro F1', 'BBN Accuracy', 'BBN Macro F1', 'BBN Micro F1']
[['AFET', '53.3', '69.3', '66.4', '67.0', '72.7', '73.5'], ['AAA', '65.8', '81.2', '77.4', '73.3', '79.1', '79.2'], ['NFETC', '68.9', '81.9', '79.0', '72.1', '77.1', '77.5'], ['CLSC', '-', '-', '-', '74.7', '80.7', '80.5'], ['Ours (NonDeep NoEL)', '65.9', '81.7', '78.0', '69.3', '81.4', '81.5'], ['Ours (NonDeep)', '72.3', '85.4', '82.6', '79.1', '87.9', '88.4'], ['Ours (DirectTrain)', '69.1', '85.2', '82.2', '-', '-', '-'], ['Ours (NoEL)', '69.8', '82.7', '80.4', '80.5', '87.5', '88.0'], ['Ours (LocAttEL)', '75.1', '86.3', '83.9', '[BOLD] 82.8', '88.9', '89.5'], ['Ours (Full)', '[BOLD] 75.5', '[BOLD] 87.1', '[BOLD] 84.6', '82.5', '[BOLD] 89.2', '[BOLD] 89.6']]
As we can see, our approach performs much better than existing approaches on both datasets.
Language Modeling with Deep Transformers
1905.04226
Table 4: Effect of activation functions. Perplexity after 1 epoch (10 sub-epochs in our setup) for (24, 2048, 512, 8).
['Activation', 'Perplexity Train', 'Perplexity Dev']
[['ReLU ', '76.4', '72.5'], ['GLU ', '76.5', '72.8'], ['GELU ', '[BOLD] 75.7', '[BOLD] 72.2']]
Using 16 heads, the largest number we try in this setup, gives the best performance. As opposed to previous work on feed-forward language models using GLUs, we do not observe a clear benefit from GLUs here. As the impact of the choice of activation function on perplexity is overall limited, all our other models use the standard ReLU.
Language Modeling with Deep Transformers
1905.04226
Table 4: Effect of activation functions. Perplexity after 1 epoch (10 sub-epochs in our setup) for (24, 2048, 512, 8).
['[ITALIC] H', 'Params. in M', 'Perplexity Train', 'Perplexity Dev']
[['1', '243', '71.9', '70.8'], ['4', '243', '69.1', '68.6'], ['8', '243', '67.6', '67.1'], ['16', '243', '[BOLD] 66.9', '[BOLD] 66.6']]
Using 16 heads, the largest number we try in this setup, gives the best performance. As opposed to previous work on feed-forward language models using GLUs, we do not observe a clear benefit from GLUs here. As the impact of the choice of activation function on perplexity is overall limited, all our other models use the standard ReLU.
Language Modeling with Deep Transformers
1905.04226
Table 7: WERs (%) for hybrid systems on LibriSpeech 960hr. A 4-gram model is used in the first pass to generate lattices for rescoring. The row “Lattice” shows oracle WERs of the lattices.
['LM', '[ITALIC] L', 'Para. in M', 'dev clean', 'dev clean', 'dev other', 'dev other', 'test clean', 'test clean', 'test other', 'test other']
[['LM', '[ITALIC] L', 'in M', 'PPL', 'WER', 'PPL', 'WER', 'PPL', 'WER', 'PPL', 'WER'], ['4-gram', '-', '230', '151.7', '3.4', '140.6', '8.3', '158.1', '3.8', '145.7', '8.8'], ['Lattice', '-', '-', '-', '1.0', '-', '2.3', '-', '1.3', '-', '2.6'], ['LSTM', '2', '1048', '60.2', '2.3', '60.2', '5.4', '64.8', '2.6', '61.7', '5.9'], ['Transformer', '24', '281', '57.8', '2.2', '58.3', '[BOLD] 5.2', '62.2', '[BOLD] 2.5', '59.4', '5.7'], ['Transformer', '42', '338', '54.5', '[BOLD] 2.1', '55.5', '[BOLD] 5.2', '59.1', '[BOLD] 2.5', '56.4', '5.7'], ['Transformer', '96', '431', '[BOLD] 53.2', '[BOLD] 2.1', '[BOLD] 54.2', '[BOLD] 5.2', '[BOLD] 57.6', '[BOLD] 2.5', '[BOLD] 55.0', '[BOLD] 5.6']]
We apply our word-level Transformer language models to conventional hybrid speech recognition by lattice rescoring. We obtain consistent improvements in terms of WER over the LSTM baselines.
Language Modeling with Deep Transformers
1905.04226
Table 8: WERs (%) for attention-based models on LibriSpeech 960hr dataset. Perplexities are on the 10K BPE level.
['LM', 'Beam', 'dev clean', 'dev clean', 'dev other', 'dev other', 'test clean', 'test clean', 'test other', 'test other']
[['LM', 'Beam', 'PPL', 'WER', 'PPL', 'WER', 'PPL', 'WER', 'PPL', 'WER'], ['None', '12', '-', '4.3', '-', '12.9', '-', '4.4', '-', '13.5'], ['LSTM', '64', '43.7', '2.9', '46.4', '8.9', '47.1', '3.2', '47.2', '9.9'], ['Transfo.', '64', '[BOLD] 35.9', '[BOLD] 2.6', '[BOLD] 38.9', '[BOLD] 8.4', '[BOLD] 38.8', '[BOLD] 2.8', '[BOLD] 39.0', '[BOLD] 9.3']]
The 10K BPE level training data has a longer average length of 24 tokens per sentence, with a longest sentence length of 1343, which is still manageable without any truncation for self-attention. We use the Transformer architecture of (24, 4096, 1024, 8). The LSTM model has 4 layers with 2048 nodes. Again, we obtain consistent improvements over the LSTM baseline, and these results are better than previously reported WERs.
Language Modeling with Deep Transformers
1905.04226
Table 9: Effect of sinusoidal positional encoding. Perplexity after 5 epochs (13M updates) for (L, 2048, 512, 8) models.
['[ITALIC] L', 'Position. encoding', 'Params. in M.', 'Perplexity Train', 'Perplexity Dev', 'Perplexity Test']
[['12', 'Sinusoidal', '243', '61.8', '63.1', '66.1'], ['12', 'None', '243', '58.0', '[BOLD] 60.5', '[BOLD] 63.4'], ['24', 'Sinusoidal', '281', '55.6', '58.0', '60.8'], ['24', 'None', '281', '52.7', '[BOLD] 56.6', '[BOLD] 59.2'], ['42', 'Sinusoidal', '338', '51.2', '55.0', '57.7'], ['42', 'None', '338', '50.5', '[BOLD] 54.2', '[BOLD] 56.8']]
In the autoregressive problem where a new token is provided to the model at each time step, the amount of information the model has access to strictly increases from left to right at the lowest level of the network; the deeper layers should be able to recognize this structure, which should provide the model with some positional information on its own. To check this hypothesis, we train models without any positional encoding.
Fast(er) Exact Decoding and Global Training for Transition-Based Dependency Parsing via a Minimal Feature Set
1708.09403
Table 2: Test set performance for different training regimes and feature sets. The models use the same decoders for testing and training. For each setting, the average and standard deviation across 5 runs with different random initializations are reported. Boldface: best (averaged) result per dataset/measure.
['Model', 'Training', 'Features', 'PTB UAS (%)', 'PTB UEM (%)', 'CTB UAS (%)', 'CTB UEM (%)']
[['Arc-standard', 'Local', '{ts2, ts1, ts0, tb0}', '93.95±0.12', '52.29±0.66', '88.01±0.26', '36.87±0.53'], ['Arc-hybrid', 'Local', '{ts2, ts1, ts0, tb0}', '93.89±0.10', '50.82±0.75', '87.87±0.17', '35.47±0.48'], ['Arc-hybrid', 'Local', '{ts0, tb0}', '93.80±0.12', '49.66±0.43', '87.78±0.09', '35.09±0.40'], ['Arc-hybrid', 'Global', '{ts0, tb0}', '94.43±0.08', '53.03±0.71', '88.38±0.11', '36.59±0.27'], ['Arc-eager', 'Local', '{ts2, ts1, ts0, tb0}', '93.80±0.12', '49.66±0.43', '87.49±0.20', '33.15±0.72'], ['Arc-eager', 'Local', '{ts0, tb0}', '93.77±0.08', '49.71±0.24', '87.33±0.11', '34.17±0.41'], ['Arc-eager', 'Global', '{ts0, tb0}', '[BOLD] 94.53±0.05', '53.77±0.46', '[BOLD] 88.62±0.09', '[BOLD] 37.75±0.87'], ['Edge-factored', 'Global', '{th, tm}', '94.50±0.13', '[BOLD] 53.86±0.78', '88.25±0.12', '36.42±0.52']]
All models use the same decoder for testing as during the training process. Though no global decoder for the arc-standard system has been explored in this paper, its local models are listed for comparison. We also include an edge-factored graph-based model, which is conventionally trained globally. The edge-factored model scores bi-LSTM features for each head-modifier pair; a maximum spanning tree algorithm is used to find the tree with the highest sum of edge scores.
Multi-Task Learning with Contextualized Word Representations for Extented Named Entity Recognition
1902.10118
Table 3: Results in F1 scores for FG-NER (We run each setting five times and report the average F1 scores.)
['Model', 'FG-NER', '+Chunk', '+NER (CoNLL)', '+POS', '+NER (Ontonotes)']
[['Base Model (GloVe)', '81.51', '-', '-', '-', '-'], ['RNN-Shared Model (GloVe)', '-', '80.53', '81.38', '80.55', '81.13'], ['Embedding-Shared Model (GloVe)', '-', '81.49', '81.21', '81.59', '81.24'], ['Hierarchical-Shared Model (GloVe)', '-', '81.65', '[BOLD] 82.14', '81.27', '81.67'], ['Base Model (ELMo)', '82.74', '-', '-', '-', '-'], ['RNN-Shared Model (ELMo)', '-', '82.60', '82.09', '81.77', '82.12'], ['Embedding-Shared Model (ELMo)', '-', '82.75', '82.45', '82.34', '81.94'], ['Hierarchical-Shared Model (ELMo)', '-', '[BOLD] 83.04', '82.72', '82.76', '82.96'], ['Base Model (GloVe) + LM\xa0', '81.77', '-', '-', '-', '-'], ['RNN-Shared Model (GloVe) + Shared-LM', '-', '80.83', '81.34', '80.69', '81.45'], ['Embedding-Shared Model (GloVe) + Shared-LM', '-', '81.54', '81.95', '81.86', '81.34'], ['Hierarchical-Shared Model (GloVe) + Shared-LM', '-', '81.69', '[BOLD] 81.96', '81.42', '81.78'], ['Base Model (ELMo) + LM', '82.91', '-', '-', '-', '-'], ['RNN-Shared Model (ELMo) + Shared-LM', '-', '82.68', '82.64', '81.61', '82.36'], ['Embedding-Shared Model (ELMo) + Shared-LM', '-', '82.61', '82.32', '82.46', '82.45'], ['Hierarchical-Shared Model (ELMo) + Shared-LM', '-', '82.87', '82.82', '82.85', '[BOLD] 82.99'], ['Hierarchical-Shared Model (GloVe) + Unshared-LM', '-', '81.77', '81.80', '81.72', '81.88'], ['Hierarchical-Shared Model (ELMo) + Unshared-LM', '-', '[BOLD] 83.35', '83.14', '83.06', '82.82'], ['', '83.14', '-', '-', '-', '-']]
Deep Contextualized Word Representations: In the first experiment, we investigate the effectiveness of contextualized word representations (ELMo) compared to uncontextualized word representations (GloVe) when incorporated into our FG-NER systems (Base Model (GloVe) vs. Base Model (ELMo)). In both settings, the hierarchical-shared model gives the best performance. In particular, it achieves an F1 score of 82.14% when learning with NER (CoNLL) in the GloVe setting and an F1 score of 83.04% when learning with chunking in the ELMo setting, compared to F1 scores of 81.51% and 82.74% for the base model in the GloVe and ELMo settings, respectively. The same-level-shared models also achieve better results than the base model, but the differences are not very large. These results indicate that learning FG-NER with other sequence labeling tasks under different parameter sharing schemes helps to improve the performance of the FG-NER system. Also, in most cases, it is more beneficial to learn the auxiliary and the main tasks at different levels (hierarchical-shared model) than at the same level (RNN-shared and embedding-shared models).
Multi-Task Learning with Contextualized Word Representations for Extented Named Entity Recognition
1902.10118
Table 2: Hyper-parameters used in our systems.
['[EMPTY]', 'Hyper-parameter', 'Value']
[['LSTM', 'hidden size', '256'], ['CNN', 'window size', '3'], ['CNN', '#filter', '30'], ['Dropout', 'input dropout', '0.33'], ['Dropout', 'BLSTM dropout', '0.5'], ['Embedding', 'GloVe dimension', '300'], ['Embedding', 'ELMo dimension', '1024'], ['Embedding', '[ITALIC] γ', '1'], ['Language Model', '[ITALIC] λ', '0.05'], ['Training', 'batch size', '16'], ['Training', 'initial learning rate', '0.01'], ['Training', 'decay rate', '0.05']]
The training procedure for multi-task sequence labeling models is as follows. For same-level-shared models, at each iteration we first sample a task (the main or an auxiliary task) by a Bernoulli trial based on the sizes of the datasets. Next, we sample a batch of training examples from the chosen task and then update both the shared parameters and the task-specific parameters according to the loss function of that task. We use the stochastic gradient descent algorithm with decay rate 0.05.
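A minimal sketch of this training loop, assuming proportional (Bernoulli-style) task sampling and a per-epoch learning-rate decay of the form lr/(1 + decay*epoch); the dataset sizes, the decay form, and the update step are placeholders rather than the authors' implementation.

    import random

    datasets = {"FG-NER": 14000, "NER (CoNLL)": 15000}   # hypothetical dataset sizes

    def sample_task(datasets):
        # Sample a task with probability proportional to its dataset size.
        tasks, sizes = zip(*datasets.items())
        return random.choices(tasks, weights=sizes, k=1)[0]

    def learning_rate(initial_lr=0.01, decay_rate=0.05, epoch=0):
        # Assumed decay schedule consistent with the listed hyper-parameters.
        return initial_lr / (1.0 + decay_rate * epoch)

    for epoch in range(2):
        lr = learning_rate(epoch=epoch)
        for step in range(3):
            task = sample_task(datasets)
            # batch = next_batch(task)
            # SGD update of shared + task-specific parameters at rate `lr`
            print(epoch, step, task, round(lr, 4))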
Multi-Task Learning with Contextualized Word Representations for Extented Named Entity Recognition
1902.10118
Table 4: 5 most improved NE types when using ELMo.
['Named Entity', 'GloVe', 'ELMo', 'Token Length']
[['Book', '48.65', '76.92', '3.2'], ['Printing Other', '60.38', '83.33', '3.5'], ['Spaceship', '61.90', '80.00', '2.7'], ['Earthquake', '75.00', '90.20', '3.8'], ['Public Institution', '80.00', '95.00', '4.2']]
While the average token length of NEs in our dataset is 1.9, the average token lengths of these NE types are much longer. This shows that ELMo helps to improve the performance of our system when identifying NEs that are long sequences. This result is understandable because Base Model (GloVe) relies only on the BLSTM layer to learn the dependencies among words in a sequence to predict NE labels, while Base Model (ELMo) learns these dependencies through both the embedding and BLSTM layers. Unlike in NER, NE types in FG-NER are often more complex and longer, so using only the BLSTM layer is not sufficient to capture these dependencies.
What do Neural Machine Translation Models Learn about Morphology?
1704.03471
Table 6: Impact of changing the target language on POS tagging accuracy. Self = German/Czech in rows 1/2 respectively.
['SourceTarget', 'English', 'Arabic', 'Self']
[['German', '93.5', '92.7', '89.3'], ['Czech', '75.7', '75.2', '71.8']]
We report here results that were omitted from the paper due to the space limit. As noted in the paper, all the results consistently show that i) layer 1 performs better than layers 0 and 2; and ii) char-based representations are better than word-based for learning morphology.
What do Neural Machine Translation Models Learn about Morphology?
1704.03471
Table 2: POS accuracy on gold and predicted tags using word-based and character-based representations, as well as corresponding BLEU scores.
['[EMPTY]', 'Gold', 'Pred', 'BLEU']
[['[EMPTY]', 'Word/Char', 'Word/Char', 'Word/Char'], ['Ar-En', '80.31/93.66', '89.62/95.35', '24.7/28.4'], ['Ar-He', '78.20/92.48', '88.33/94.66', '9.9/10.7'], ['De-En', '87.68/94.57', '93.54/94.63', '29.6/30.4'], ['Fr-En', '–', '94.61/95.55', '37.8/38.8'], ['Cz-En', '–', '75.71/79.10', '23.2/25.4']]
Char-based models always generate better representations for POS tagging, especially in the case of morphologically-richer languages like Arabic and Czech. We observed a similar pattern in the full morphological tagging task. For example, we obtain morphological tagging accuracy of 65.2/79.66 and 67.66/81.66 using word/char-based representations from the Arabic-Hebrew and Arabic-English encoders, respectively. The inherent difficulty in translating Arabic to Hebrew/German may affect the ability to learn good representations of word structure. To probe this more, we trained an Arabic-Arabic autoencoder on the same training data. However, its word representations are actually inferior for the purpose of POS/morphological tagging. This implies that higher BLEU does not necessarily entail better morphological representations. In other words, a better translation model learns more informative representations, but only when it is actually learning to translate rather than merely memorizing the data as in the autoencoder case. We found this to be consistently true also for char-based experiments, and in other language pairs.
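A rough sketch of the extract-then-classify protocol implied here: representations taken from a frozen encoder are fed to a simple classifier that predicts POS tags. The random features below stand in for real encoder states, and a scikit-learn linear probe stands in for the paper's classifier, so this is only an illustration of the setup.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Stand-ins for word representations extracted from a frozen NMT encoder
    # and their gold POS tags (5 hypothetical tag classes).
    X_train = rng.normal(size=(200, 64))
    y_train = rng.integers(0, 5, size=200)
    X_test = rng.normal(size=(50, 64))
    y_test = rng.integers(0, 5, size=50)

    # The paper trains a small neural classifier; a linear probe is used here
    # only as a minimal stand-in for the same extract-then-classify protocol.
    probe = LogisticRegression(max_iter=1000)
    probe.fit(X_train, y_train)
    print("POS tagging accuracy of the probe:", probe.score(X_test, y_test))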
What do Neural Machine Translation Models Learn about Morphology?
1704.03471
Table 3: POS tagging accuracy using encoder and decoder representations with/without attention.
['Attn', 'POS Accuracy ENC', 'POS Accuracy DEC', 'BLEU Ar-En', 'BLEU En-Ar']
[['✓', '89.62', '86.71', '24.69', '13.37'], ['✗', '74.10', '85.54', '11.88', '5.04']]
There is a modest drop in representation quality with the decoder. This drop may be correlated with lower BLEU scores when translating English to Arabic vs. Arabic to English. To test this hypothesis, we train NMT models with and without attention and compare the quality of their learned representations. It seems that the decoder does not rely on the attention mechanism to obtain good target word representations, contrary to our hypothesis.
What do Neural Machine Translation Models Learn about Morphology?
1704.03471
Table 4: POS tagging accuracy using word-based and char-based encoder/decoder representations.
['[EMPTY]', 'POS Accuracy ENC', 'POS Accuracy DEC', 'BLEU Ar-En', 'BLEU En-Ar']
[['Word', '89.62', '86.71', '24.69', '13.37'], ['Char', '95.35', '91.11', '28.42', '13.00']]
In both cases, char-based representations perform better. BLEU scores behave differently: the char-based model leads to better translations in Arabic-to-English, but not in English-to-Arabic. A possible explanation for this phenomenon is that the decoder’s predictions are still done at word level even with the char-based model (which encodes the target input but not the output). In practice, this can lead to generating unknown words. Indeed, in Arabic-to-English the char-based model reduces the number of generated unknown words in the MT test set by 25%, while in English-to-Arabic the number of unknown words remains roughly the same between word-based and char-based models.
What do Neural Machine Translation Models Learn about Morphology?
1704.03471
Table 5: POS and morphology accuracy on predicted tags using word- and char-based representations from different layers of *-to-En systems.
['[EMPTY]', 'Layer 0', 'Layer 1', 'Layer 2']
[['[EMPTY]', 'Word/Char (POS)', 'Word/Char (POS)', 'Word/Char (POS)'], ['De', '91.1/92.0', '93.6/95.2', '93.5/94.6'], ['Fr', '92.1/92.9', '95.1/95.9', '94.6/95.6'], ['Cz', '76.3/78.3', '77.0/79.1', '75.7/80.6'], ['[EMPTY]', 'Word/Char (Morphology)', 'Word/Char (Morphology)', 'Word/Char (Morphology)'], ['De', '87.6/88.8', '89.5/91.2', '88.7/90.5']]
We report here results that were omitted from the paper due to the space limit. As noted in the paper, all the results consistently show that i) layer 1 performs better than layers 0 and 2; and ii) char-based representations are better than word-based for learning morphology.
What do Neural Machine Translation Models Learn about Morphology?
1704.03471
Table 7: POS accuracy and BLEU using decoder representations from different language pairs.
['[EMPTY]', 'En-De', 'En-Cz', 'De-En', 'Fr-En']
[['POS', '94.3', '71.9', '93.3', '94.4'], ['BLEU', '23.4', '13.9', '29.6', '37.8']]
There is a modest drop in representation quality with the decoder. This drop may be correlated with lower BLEU scores when translating English to Arabic vs. Arabic to English. We report here results that were omitted from the paper due to the space limit. As noted in the paper, all the results consistently show that i) layer 1 performs better than layers 0 and 2; and ii) char-based representations are better than word-based for learning morphology.
Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T LossThis is the final version of the paper submitted to the ICASSP 2020 on Oct 21, 2019.
2002.02562
Table 1: Transformer encoder parameter setup.
['Input feature/embedding size', '512']
[['Dense layer 1', '2048'], ['Dense layer 2', '1024'], ['Number attention heads', '8'], ['Head dimension', '64'], ['Dropout ratio', '0.1']]
Our Transformer Transducer model architecture has 18 audio encoder layers and 2 label encoder layers. Every layer is identical for both the audio and label encoders. All the models for the experiments presented in this paper are trained on an 8x8 TPU with a per-core batch size of 16 (effective batch size of 2048). The learning rate schedule ramps up linearly from 0 to 2.5e−4 during the first 4K steps, is then held constant until 30K steps, and then decays exponentially to 2.5e−6 by 200K steps. We train this model to output grapheme units in all our experiments. We found that the Transformer Transducer models trained much faster (≈1 day) than an LSTM-based RNN-T model (≈3.5 days) with a similar number of parameters.
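A sketch of the described learning-rate schedule (linear ramp to 2.5e-4 over 4K steps, constant until 30K, exponential decay to 2.5e-6 by 200K); the exact parameterization of the decay is not given here, so the form below is an assumption.

    import math

    def transformer_transducer_lr(step, peak=2.5e-4, final=2.5e-6,
                                  warmup=4_000, hold=30_000, end=200_000):
        # Linear warmup, constant hold, then exponential decay (assumed form).
        if step <= warmup:
            return peak * step / warmup
        if step <= hold:
            return peak
        # Exponential decay from `peak` at `hold` to `final` at `end`.
        frac = (step - hold) / (end - hold)
        return peak * math.exp(frac * math.log(final / peak))

    print(transformer_transducer_lr(2_000))    # mid-warmup
    print(transformer_transducer_lr(100_000))  # mid-decay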
Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T LossThis is the final version of the paper submitted to the ICASSP 2020 on Oct 21, 2019.
2002.02562
Table 2: Comparison of WERs for Hybrid (streamable), LAS (e2e), RNN-T (e2e & streamable) and Transformer Transducer models (e2e & streamable) on LibriSpeech test sets.
['Model', 'Param size', 'No LM (%) clean', 'No LM (%) other', 'With LM (%) clean', 'With LM (%) other']
[['Hybrid ', '-', '-', '-', '2.26', '4.85'], ['LAS', '361M', '2.8', '6.8', '2.5', '5.8'], ['BiLSTM RNN-T', '130M', '3.2', '7.8', '-', '-'], ['FullAttn T-T (Ours)', '139M', '2.4', '5.6', '[BOLD] 2.0', '[BOLD] 4.6']]
We first compared the performance of Transformer Transducer (T-T) models with full attention on audio to an RNN-T model using a bidirectional LSTM audio encoder. We also observed that T-T models can achieve recognition accuracy competitive with existing wordpiece-based end-to-end models of similar model size. The Transformer LM used for shallow fusion (6 layers; 57M parameters) had a perplexity of 2.49 on the dev-clean set; the use of dropout, and of larger models, did not improve either perplexity or WER. Shallow fusion was then performed using that LM and both the trained T-T system and the trained bidirectional LSTM-based RNN-T baseline, with scaling factors on the LM output and on the non-blank symbol sequence length tuned on the LibriSpeech dev sets. The shallow fusion result for the T-T system is competitive with corresponding results for top-performing existing systems.
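A sketch of the shallow-fusion scoring described above: the transducer's log-probability is combined with a scaled LM log-probability and a length term on the non-blank symbols. The weight names and values below are placeholders that would be tuned on the dev sets, not the authors' tuned values.

    def shallow_fusion_score(logp_transducer, logp_lm, num_non_blank,
                             lm_weight=0.3, length_weight=1.0):
        # Hypothesis score during beam search; weights are hypothetical.
        return logp_transducer + lm_weight * logp_lm + length_weight * num_non_blank

    # Example: compare two partial hypotheses.
    print(shallow_fusion_score(-12.3, -20.1, 7))
    print(shallow_fusion_score(-11.8, -25.4, 6))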
Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T LossThis is the final version of the paper submitted to the ICASSP 2020 on Oct 21, 2019.
2002.02562
Table 3: Limited left context per layer for audio encoder.
['Audio Mask left', 'Audio Mask right', 'Label Mask left', 'WER (%) Test-clean', 'WER (%) Test-other']
[['10', '0', '20', '4.2', '11.3'], ['6', '0', '20', '4.3', '11.8'], ['2', '0', '20', '4.5', '14.5']]
Next, we ran training and decoding experiments using T-T models with limited attention windows over audio and text, with a view to building online streaming speech recognition systems with low latency. Similarly to the use of unidirectional RNN audio encoders in online models, where activations for time t are computed with conditioning only on audio frames before t, here we constrain the AudioEncoder to attend to the left of the current frame by masking the attention scores to the right of the current frame. In order to make one-step inference for the AudioEncoder tractable (i.e., to have constant time complexity), we further limit its attention to a fixed window of previous states by again masking the attention scores. Due to limited computation resources, we used the same mask for all Transformer layers, but the use of different contexts (masks) for different layers is worth exploring. As we can see, using more audio history gives a lower WER, but to keep the model streamable with reasonable inference time complexity, we experimented with left contexts of up to 10 frames per layer.
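A minimal NumPy sketch of the per-layer attention mask this describes: frame i may attend to at most `left` previous frames and `right` future frames (right = 0 in the streamable setting). This is an illustration only, not the authors' implementation.

    import numpy as np

    def limited_context_mask(num_frames, left, right):
        # Boolean mask: True where frame i may attend to frame j.
        idx = np.arange(num_frames)
        rel = idx[None, :] - idx[:, None]          # j - i
        return (rel >= -left) & (rel <= right)

    # Small example; Table 3 uses left contexts of 2, 6, or 10 frames with right = 0.
    mask = limited_context_mask(num_frames=6, left=2, right=0)
    print(mask.astype(int))
    # Attention scores outside the mask would be set to -inf before the softmax.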
Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T LossThis is the final version of the paper submitted to the ICASSP 2020 on Oct 21, 2019.
2002.02562
Table 4: Limited right context per layer for audio encoder.
['Audio Mask left', 'Audio Mask right', 'Label Mask left', 'WER (%) Test-clean', 'WER (%) Test-other']
[['512', '512', '20', '2.4', '5.6'], ['512', '10', '20', '2.7', '6.6'], ['512', '6', '20', '2.8', '6.9'], ['512', '2', '20', '3.0', '7.7'], ['10', '0', '20', '4.2', '11.3']]
Similarly, we explored the use of limited right context to allow the model to see some future audio frames, in the hope of bridging the gap between a streamable T-T model (left = 10, right = 0) and a full attention T-T model (left = 512, right = 512). Since we apply the same mask at every layer, the latency introduced by using right context accumulates over all the layers. To isolate the impact of right context, we ran comparisons with a fixed left context of 512 frames per layer against the full-attention T-T model. Compared with the streamable T-T model, a right context of 2 frames per layer (around 1 second of total latency) brings around 30% relative improvement.
Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T LossThis is the final version of the paper submitted to the ICASSP 2020 on Oct 21, 2019.
2002.02562
Table 5: Limited left context per layer for label encoder.
['Audio Mask left', 'Audio Mask right', 'Label Mask left', 'WER (%) Test-clean', 'WER (%) Test-other']
[['10', '0', '20', '4.2', '11.3'], ['10', '0', '4', '4.2', '11.4'], ['10', '0', '3', '4.2', '11.4'], ['10', '0', '2', '4.3', '11.5'], ['10', '0', '1', '4.4', '12']]
In addition, we evaluated how the left context used in the T-T LabelEncoder affects performance. The results show that a very limited left context for the label encoder is good enough for the T-T model. We see a similar trend when limiting the left label states while using a full-attention T-T audio encoder.
Answering Complex Open-domain Questions Through Iterative Query Generation
1910.07000
Table 6: Span prediction and IR performance of the query generator models for Hop 1 (G1) and Hop 2 (G2) evaluated separately on the HotpotQA dev set.
['[BOLD] Model', '[BOLD] Span EM', '[BOLD] Span F1', '[BOLD] R@5']
[['[ITALIC] G1', '51.40', '78.75', '85.86'], ['[ITALIC] G2', '52.29', '63.07', '64.83']]
To evaluate the query generators, we begin by determining how well they emulate the oracles. We evaluate them using Exact Match (EM) and F1 on the span prediction task, and compare their queries’ retrieval performance against the oracle queries. When we combine them into a pipeline, the generated queries perform only slightly better on d1 when a total of 10 documents are retrieved (89.91% vs 87.85%), but are significantly more effective for d2 (61.01% vs 36.91%). If we further zoom in on the retrieval performance on non-comparison questions, for which finding the two entities involved is less trivial, we can see that the recall on d2 improves from 27.88% to 53.23%, almost doubling the number of questions for which we have the complete gold context to answer. We note that the IR performance we report for the full pipeline is different from that obtained when we evaluate the query generators separately. We attribute this difference to the fact that the generated queries sometimes retrieve both gold documents in one step.
Answering Complex Open-domain Questions Through Iterative Query Generation
1910.07000
Table 2: End-to-end QA performance of baselines and our GoldEn Retriever model on the HotpotQA fullwiki test set. Among systems that were not published at the time of submission of this paper, “BERT pip.” was submitted to the official HotpotQA leaderboard on May 15th (thus contemporaneous), while “Entity-centric BERT Pipeline” and “PR-Bert” were submitted after the paper submission deadline.
['[BOLD] System', '[BOLD] Answer EM', '[BOLD] Answer F1', '[BOLD] Sup Fact EM', '[BOLD] Sup Fact F1', '[BOLD] Joint EM', '[BOLD] Joint F1']
[['Baseline Yang et\xa0al. ( 2018 )', '25.23', '34.40', '05.07', '40.69', '02.63', '17.85'], ['GRN + BERT', '29.87', '39.14', '13.16', '49.67', '08.26', '25.84'], ['MUPPET Feldman and El-Yaniv ( 2019 )', '30.61', '40.26', '16.65', '47.33', '10.85', '27.01'], ['CogQA Ding et\xa0al. ( 2019 )', '37.12', '48.87', '22.82', '57.69', '12.42', '34.92'], ['PR-Bert', '43.33', '53.79', '21.90', '59.63', '14.50', '39.11'], ['Entity-centric BERT Pipeline', '41.82', '53.09', '26.26', '57.29', '17.01', '39.18'], ['BERT pip. (contemporaneous)', '45.32', '57.34', '38.67', '70.83', '25.14', '47.60'], ['GoldEn Retriever', '37.92', '48.58', '30.69', '64.24', '18.04', '39.13']]
We compare the end-to-end performance of GoldEn Retriever against several QA systems on the HotpotQA dataset, including the baseline presented in Yang et al. (2018) and CogQA (Ding et al., 2019). However, our QA performance is handicapped because we do not make use of the pretrained contextualization models (e.g., BERT) that these systems use. We expect a boost in QA performance from adopting these more powerful question answering models, especially ones that are tailored to perform few-document multi-hop reasoning. We leave this to future work.
Answering Complex Open-domain Questions Through Iterative Query Generation
1910.07000
Table 3: Question answering and IR performance amongst different IR settings on the dev set. We observe that although improving the IR engine is helpful, most of the performance gain results from the iterative retrieve-and-read strategy of GoldEn Retriever. (*: for GoldEn Retriever, the 10 paragraphs are combined from both hops, 5 from each hop.)
['[BOLD] Setting', '[BOLD] Ans F1', '[BOLD] Sup F1', '[BOLD] R@10∗']
[['GoldEn Retriever', '49.79', '64.58', '75.46'], ['Single-hop query', '38.19', '54.82', '62.38'], ['HotpotQA IR', '36.34', '46.78', '55.71']]
In all cases, we use the QA component of GoldEn Retriever for the final question answering step. Further inspection reveals that although Elasticsearch improves the overall recall of gold documents, it is only able to retrieve both gold documents for 36.91% of the dev set questions, in comparison to 28.21% for the IR engine in Yang et al. (2018). In contrast, GoldEn Retriever improves this percentage to 61.01%, almost doubling the recall over the single-hop baseline and providing the QA component with a much better set of context documents to predict answers from.
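A small sketch of the statistic discussed above (the fraction of questions for which both gold documents are retrieved); the example titles and data structures are hypothetical.

    def both_gold_retrieved_rate(examples):
        # examples: iterable of (retrieved_titles, gold_titles) pairs, where
        # gold_titles holds the two gold paragraphs of a HotpotQA question.
        hits = sum(1 for retrieved, gold in examples
                   if set(gold) <= set(retrieved))
        return 100.0 * hits / len(examples)

    # Toy example with hypothetical titles.
    examples = [
        (["A", "B", "C"], ["A", "B"]),   # both gold docs retrieved
        (["A", "D", "E"], ["A", "B"]),   # only one retrieved
    ]
    print(both_gold_retrieved_rate(examples))  # 50.0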
Answering Complex Open-domain Questions Through Iterative Query Generation
1910.07000
Table 4: Pipeline ablative analysis of GoldEn Retriever end-to-end QA performance by replacing each query generator with a query oracle.
['[BOLD] System', '[BOLD] Ans F1', '[BOLD] Sup F1', '[BOLD] Joint F1']
[['GoldEn Retriever', '49.79', '64.58', '40.21'], ['w/ Hop 1 oracle', '52.53', '68.06', '42.68'], ['w/ Hop 1 & 2 oracles', '62.32', '77.00', '52.18']]
Lastly, we perform an ablation study in which we replace our query generator models with our query oracles and observe the effect on end-to-end performance. Replacing G1 with its oracle only slightly improves end-to-end performance, but further substituting G2 with the oracle yields a significant improvement. This illustrates that the performance loss is largely attributable to G2 rather than G1, because G2 solves a harder span selection problem over a longer retrieval context. In the next section, we examine the query generation models more closely by evaluating their performance without the QA component.
Answering Complex Open-domain Questions Through Iterative Query Generation
1910.07000
Table 7: IR performance (recall in percentages) of various Elasticsearch setups on the HotpotQA dev set using the original question.
['[BOLD] IR System', '[BOLD] R@10 for [ITALIC] d1', '[BOLD] R@10 for [ITALIC] d2']
[['Final system', '87.85', '36.91'], ['w/o Title Boosting', '86.85', '32.64'], ['w/o Reranking', '86.32', '34.77'], ['w/o Both', '84.67', '29.55']]
To this end, we propose to rerank the query results with a simple but effective heuristic that alleviates this issue. We first retrieve at least 50 candidate documents for each query, and boost the scores of documents whose title exactly matches the search query or is a substring of it. Specifically, we multiply the document score by a heuristic constant between 1.05 and 1.5, depending on how well the document title matches the query, before reranking all search results. This results in a significant improvement in these cases: for the query “George W. Bush”, the page for the former US president is ranked at the top after reranking.
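A sketch of the title-boosting heuristic; the matching rules and the 1.05-1.5 range follow the description above, while the specific constant assigned to each match type is an assumption.

    def rerank_with_title_boost(query, hits):
        # hits: list of (title, score) pairs from the search engine
        # (at least 50 candidates per query, as described).
        reranked = []
        for title, score in hits:
            if title.lower() == query.lower():
                score *= 1.5          # exact title match (assumed constant)
            elif title.lower() in query.lower():
                score *= 1.05         # title is a substring of the query
            reranked.append((title, score))
        return sorted(reranked, key=lambda x: x[1], reverse=True)

    hits = [("George W. Bush (disambiguation)", 9.1), ("George W. Bush", 8.7)]
    print(rerank_with_title_boost("George W. Bush", hits)[0][0])  # "George W. Bush"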
GEAR: Graph-based Evidence Aggregating and Reasoning for Fact Verification
1908.01843
Table 3: Document retrieval evaluation on dev set (%). (’-’ denotes a missing value)
['[BOLD] Model', '[BOLD] OFEVER']
[['Athene', '[BOLD] 93.55'], ['UCL MRG', '-'], ['UNC NLP', '92.82'], ['Our Model', '93.33']]
We use the OFEVER metric to evaluate the document retrieval component. After running the same model proposed by Hanselowski et al. (2018), we find that our OFEVER score is slightly lower, which may be due to random factors.
GEAR: Graph-based Evidence Aggregating and Reasoning for Fact Verification
1908.01843
Table 4: Sentence selection evaluation and average label accuracy of GEAR with different thresholds on dev set (%).
['[ITALIC] τ', '[BOLD] OFEVER', '[BOLD] Precision', '[BOLD] Recall', '[BOLD] F1', '[BOLD] GEAR LA']
[['0', '[BOLD] 91.10', '24.08', '[BOLD] 86.72', '37.69', '74.84'], ['10−4', '91.04', '30.88', '86.63', '45.53', '74.86'], ['10−3', '90.86', '40.60', '86.36', '55.23', '[BOLD] 74.91'], ['10−2', '90.27', '53.12', '85.47', '65.52', '74.89'], ['10−1', '87.70', '[BOLD] 70.61', '81.64', '[BOLD] 75.72', '74.81']]
We find that the model with threshold 0 achieves the highest recall and OFEVER score. As the threshold increases, the recall and OFEVER score drop gradually while the precision and F1 score increase. The results are consistent with our intuition: if we do not filter out evidence, more claims are provided with their full evidence sets, and if we increase the threshold, more pieces of noisy evidence are filtered out, which contributes to the increase in precision and F1.
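A sketch of the kind of threshold sweep behind Table 4: sentences scoring below τ are filtered out and the precision/recall of the retained evidence are computed against the gold evidence. The relevance scores here are toy values, and the sentence selection model itself is not shown.

    def precision_recall_at_threshold(scored_sents, gold, tau):
        # scored_sents: list of (sentence_id, relevance_score); gold: set of ids.
        kept = {sid for sid, score in scored_sents if score >= tau}
        if not kept:
            return 0.0, 0.0
        tp = len(kept & gold)
        return tp / len(kept), tp / len(gold)

    scored = [("s1", 0.9), ("s2", 0.04), ("s3", 0.0008)]
    gold = {"s1", "s2"}
    for tau in (0, 1e-3, 1e-2, 1e-1):
        print(tau, precision_recall_at_threshold(scored, gold, tau))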
GEAR: Graph-based Evidence Aggregating and Reasoning for Fact Verification
1908.01843
Table 5: Label accuracy on the difficult dev set with different ERNet layers and evidence aggregators (%).
['[BOLD] ERNet Layers', '[BOLD] Aggregator [BOLD] Attention', '[BOLD] Aggregator [BOLD] Max', '[BOLD] Aggregator [BOLD] Mean']
[['0', '66.17', '65.36', '65.03'], ['1', '67.13', '66.63', '66.76'], ['2', '[BOLD] 67.44', '[BOLD] 67.24', '[BOLD] 67.56'], ['3', '66.53', '66.72', '66.89']]
We find that our models with ERNet perform better than models without ERNet, with a minimum improvement of 1.27%. We can also see from the table that models with 2 ERNet layers achieve the best results, which indicates that claims from the difficult subset require multi-step evidence propagation. This result demonstrates the ability of our framework to deal with claims that need multiple pieces of evidence.
Multitask Learning with CTC and Segmental CRF for Speech Recognition
1702.06378
Table 2: Results of three types of acoustic features.
['Model', 'Features', 'Dim', 'dev', 'eval']
[['SRNN', 'FBANK', '250', '18.1', '20.0'], ['+MTL', 'FBANK', '250', '17.5', '18.7'], ['SRNN', 'fMLLR', '250', '16.6', '17.9'], ['+MTL', 'fMLLR', '250', '15.9', '17.5'], ['CTC', 'FBANK', '250', '17.7', '19.9'], ['+MTL', 'FBANK', '250', '17.2', '18.9'], ['CTC', 'fMLLR', '250', '16.7', '17.8'], ['+MTL', 'fMLLR', '250', '16.2', '17.4']]
We only show results using LSTMs with 250-dimensional hidden states. The interpolation weight was set to 0.5; in our experiments, tuning it did not further improve the recognition accuracy. The improvement for FBANK features is much larger than that for fMLLR features. In particular, with multitask learning, the recognition accuracy of our CTC system with best path decoding is comparable to the results obtained by Graves et al.
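For reference, a plausible form of the interpolated multitask objective with weight 0.5; which of the two losses carries λ is an assumption, as it is not stated in this excerpt:

    \mathcal{L}_{\mathrm{MTL}} = \lambda\,\mathcal{L}_{\mathrm{SRNN}} + (1-\lambda)\,\mathcal{L}_{\mathrm{CTC}}, \qquad \lambda = 0.5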
Multitask Learning with CTC and Segmental CRF for Speech Recognition
1702.06378
Table 1: Phone error rates of baseline CTC and SRNN models.
['Model', 'Features', '#Layer', 'Dim', 'dev', 'eval']
[['SRNN', 'FBANK', '3', '128', '19.2', '20.5'], ['SRNN', 'fMLLR', '3', '128', '17.6', '19.2'], ['SRNN', 'FBANK', '3', '250', '18.1', '20.0'], ['SRNN', 'fMLLR', '3', '250', '16.6', '17.9'], ['CTC', 'FBANK', '3', '128', '20.0', '21.8'], ['CTC', 'fMLLR', '3', '128', '17.7', '18.4'], ['CTC', 'FBANK', '3', '250', '17.7', '19.9'], ['CTC', 'fMLLR', '3', '250', '16.7', '17.8']]
The FBANK features are 120-dimensional with delta and delta-delta coefficients, and the fMLLR features are 40-dimensional; both were obtained from a Kaldi baseline system. We used 3-layer bidirectional LSTMs for feature extraction and the greedy best path decoding algorithm for both models. Our SRNN and CTC systems achieved comparable phone error rates (PER) for both kinds of features. However, for the CTC system, Graves et al. Apart from the implementation difference of using different code bases, Graves et al.
Improving Fluency of Non-Autoregressive Machine Translation
2004.03227
Table 1: Quantitative results of the models in terms of BLEU score and average decoding times per sentence in milliseconds. Results on WMT14 English-German translation and results without back-translation are in the Appendix.
['Method', 'German WMT15 en → de', 'German WMT15 de → en', 'Romanian WMT16 en → ro', 'Romanian WMT16 ro → en', 'Czech WMT18 en → cs', 'Czech WMT18 cs → en', 'Decoding time [ms]']
[['Non-autoregressive', '21.67', '25.57', '19.88', '28.99', '16.27', '17.63', '0233'], ['Transformer, greedy', '29.84', '32.62', '25.89', '33.54', '21.57', '27.89', '1664'], ['Transformer, beam 5', '30.23', '33.43', '26.46', '34.06', '22.20', '28.49', '3848'], ['Ours, beam 1', '22.68', '26.44', '19.74', '29.65', '16.98', '18.78', '0337'], ['Ours, beam 5', '25.50', '29.45', '22.46', '33.01', '19.31', '23.33', '0408'], ['Ours, beam 10', '25.93', '30.05', '23.33', '33.29', '19.47', '23.95', '0526'], ['Ours, beam 20', '26.03', '30.15', '24.11', '33.51', '19.58', '24.32', '1097']]
We observe that the beam search greatly improves the translation quality over the CTC-based nAR models (“Non-autoregressive” vs. “Ours”). Additionally, we have control over the speed/quality trade-off by either lowering or increasing the beam size.
Improving Fluency of Non-Autoregressive Machine Translation
2004.03227
Table 2: BLEU scores for English-to-German translation for different beam sizes and feature sets: CTC score (c), language model (l), ratio of the blank symbols (r), and the number of trailing blank symbols (t).
['Beam Size', '1', '5', '10', '20']
[['[ITALIC] c+ [ITALIC] l+ [ITALIC] r+ [ITALIC] t', '22.68', '25.50', '25.93', '26.03'], ['[ITALIC] c+ [ITALIC] l+ [ITALIC] r', '22.21', '24.92', '25.12', '25.35'], ['[ITALIC] c+ [ITALIC] l', '22.05', '24.64', '24.77', '25.12'], ['[ITALIC] c', '21.67', '22.06', '22.13', '22.17']]
We can see that combining the features is beneficial and that the improvement is substantial with larger beam sizes. The feature weights were trained separately for each beam size.
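A sketch of the linear feature combination used to rescore beam hypotheses (CTC score c, language-model score l, blank-symbol ratio r, number of trailing blanks t); the weight values are placeholders, since the paper trains them separately for each beam size.

    def hypothesis_score(features, weights):
        # features, weights: dicts over the feature names c, l, r, t.
        return sum(weights[name] * value for name, value in features.items())

    # Hypothetical weights; in the paper they are trained per beam size.
    weights = {"c": 1.0, "l": 0.5, "r": -0.2, "t": -0.1}
    features = {"c": -4.2, "l": -7.9, "r": 0.35, "t": 3}
    print(hypothesis_score(features, weights))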
Joint Copying and Restricted Generation for Paraphrase
1611.09235
Table 2: Target word coverage ratio (%) on the test set.
['Vocabulary', 'Summarization', 'Simplification']
[['[BOLD] X', '79.2', '78.1'], ['[BOLD] X∪ [BOLD] A( [BOLD] X)', '89.2', '85.8'], ['[BOLD] X∪ [BOLD] A( [BOLD] X)∪ [BOLD] U', '95.3', '96.0'], ['| [BOLD] V|=30000', '96.3', '95.4']]
In this paper, we develop a novel Seq2Seq model called CoRe, which captures the two core writing modes in paraphrase, i.e., copying and rewriting. CoRe fuses a copying decoder and a restricted generative decoder, so the weights learned by the attention mechanism have an explicit meaning in the copying mode. Meanwhile, the generative decoder produces words restricted to a source-specific vocabulary. This vocabulary is composed of a source-target word alignment table and a small set of frequent words. The alignment table is trained in advance, and many frequent rewriting patterns are included in it. It might seem better to update the alignment table according to the learned attention weights; however, with the supplement of a few frequent words, experiments (see Table 2) show that the fixed vocabulary already covers most target words. While the output dimension of our generative decoder is just one tenth of the output dimension used by common Seq2Seq models, it is able to generate highly relevant words for rewriting. To combine the two decoders and determine the final output, we develop a predictor of the writing mode, copying or rewriting. Since we know the actual mode at each output position in a training instance, we introduce a binary sequence labeling task to guide the learning of this predictor, which takes advantage of the supervision derived from the writing modes. It appears that both datasets have a high copying ratio. When we restrict the generative decoder to produce the source alignments, more than 85% of target words can be covered. When combined with 2000 frequent words, the coverage ratio of our model is already close to that of a vocabulary of 30,000 words. We conduct text quality evaluation from several points of view; a lower PPL usually means higher readability. We also perform statistical analysis of the average length of the target text, the UNK ratio, and the copy ratio.
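A minimal sketch of how the source-specific output vocabulary X ∪ A(X) ∪ U from Table 2 could be assembled from a pre-trained alignment table and a frequent-word list; the toy alignments and frequent words below are placeholders.

    def restricted_vocabulary(source_tokens, alignment_table, frequent_words):
        # X ∪ A(X) ∪ U, as in the coverage table.
        vocab = set(source_tokens)                       # X
        for tok in source_tokens:                        # A(X)
            vocab.update(alignment_table.get(tok, []))
        vocab.update(frequent_words)                     # U
        return vocab

    alignment_table = {"purchase": ["buy"], "automobile": ["car"]}  # toy alignments
    frequent_words = {"the", "a", "of"}                             # toy frequent set
    print(sorted(restricted_vocabulary(["he", "made", "a", "purchase"],
                                       alignment_table, frequent_words)))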
Joint Copying and Restricted Generation for Paraphrase
1611.09235
Table 3: Performance of different models. ∗Moses simply ignore the unknown words.
['Data', 'Model', 'Informativeness ROUGE-1(%)', 'Informativeness ROUGE-2(%)', 'Text Quality PPL', 'Text Quality Length', 'Text Quality UNK(%)', 'Text Quality Copy(%)']
[['Summarization', 'LEAD', '28.1', '14.1', '176', '19.9', '0', '100'], ['Summarization', 'Moses', '27.8', '14.1', '214', '73.0', '0∗', '99.6'], ['Summarization', 'ABS', '28.1', '12.4', '113', '13.7', '0.88', '92.0'], ['Summarization', 'CoRe', '[BOLD] 30.5', '[BOLD] 16.2', '[BOLD] 95', '14.0', '0.14', '88.6'], ['Simplification', 'LEAD', '66.4', '49.4', '66.5', '20.8', '0', '100'], ['Simplification', 'Moses', '70.9', '52.1', '70.3', '24.4', '0∗', '97.6'], ['Simplification', 'ABS', '68.4', '50.3', '69.5', '22.7', '5.6', '87.7'], ['Simplification', 'CoRe', '[BOLD] 72.7', '[BOLD] 55.3', '[BOLD] 60.9', '19.6', '2.3', '85.9']]
In this table, the metrics that measure informativeness and text quality are separated. Let’s look at the informativeness performance first. As can be seen, CoRe achieves the highest ROUGE scores on both summarization and text simplification. In contrast, the standard attentive Seq2Seq model ABS is slightly inferior to Moses; it even performs worse than the simple baseline LEAD in terms of ROUGE-2 on summarization. Apparently, introducing the copying and restricted generation mechanisms is critical for the paraphrase-oriented tasks. Next, we check the quality of the generated sentences. According to PPL, the sentences produced by CoRe resemble the target language the most. It is interesting that LEAD extracts human-written text from the source; nevertheless, its PPL is considerably higher than that of CoRe on both datasets. It seems that CoRe indeed captures some characteristics of the target language, such as the diction. We also find that the PPL of Moses is the largest, and its generated length reaches the length of the source text; Moses seems to conduct word-to-word translation. This practice is acceptable in text simplification, but clearly violates the summarization requirement. Although not manually controlled, the output lengths of ABS and CoRe are both similar to the actual one, which demonstrates the learning ability of Seq2Seq models. The former verifies the power of our two decoders, while the latter may be attributed to the additional supervision derived from the writing modes.
The Paradigm Discovery Problem
2005.01630
Table 4: PDP and PCFP results for all languages and models, averaged over 4 runs. Metrics are defined in § 3.3. An refers to the Analogy metric and LE to the Lexicon Expansion metric.
['[EMPTY]', 'Cells', 'Paradigms', '[BOLD] PDP F_{\\mathrm{cell}}', '[BOLD] PDP F_{\\mathrm{par}}', '[BOLD] PDP F_{\\mathrm{grid}}', '[BOLD] PCFP An', '[BOLD] PCFP LE']
[['[BOLD] Arabic nouns – 8,732 forms', '[BOLD] Arabic nouns – 8,732 forms', '[BOLD] Arabic nouns – 8,732 forms', '[BOLD] Arabic nouns – 8,732 forms', '[BOLD] Arabic nouns – 8,732 forms', '[BOLD] Arabic nouns – 8,732 forms', '[BOLD] Arabic nouns – 8,732 forms', '[BOLD] Arabic nouns – 8,732 forms'], ['sup', '27', '4,283', '[EMPTY]', '[EMPTY]', '[EMPTY]', '85.9', '87.0'], ['bench', '12.8', '5,279.3', '39.9', '48.5', '43.7', '16.8', '49.5'], ['gold k', '27', '4,930.3', '25.9', '46.4', '33.1', '16.1', '57.2'], ['[BOLD] German nouns – 19,481 forms', '[BOLD] German nouns – 19,481 forms', '[BOLD] German nouns – 19,481 forms', '[BOLD] German nouns – 19,481 forms', '[BOLD] German nouns – 19,481 forms', '[BOLD] German nouns – 19,481 forms', '[BOLD] German nouns – 19,481 forms', '[BOLD] German nouns – 19,481 forms'], ['sup', '8', '17,018', '[EMPTY]', '[EMPTY]', '[EMPTY]', '72.2', '74.9'], ['bench', '7.3', '17,073.3', '35.2', '59.4', '43.3', '14.2', '56.7'], ['gold k', '8', '16,836.0', '29.4', '66.6', '40.8', '14.8', '60.4'], ['[BOLD] English verbs – 3,330 forms', '[BOLD] English verbs – 3,330 forms', '[BOLD] English verbs – 3,330 forms', '[BOLD] English verbs – 3,330 forms', '[BOLD] English verbs – 3,330 forms', '[BOLD] English verbs – 3,330 forms', '[BOLD] English verbs – 3,330 forms', '[BOLD] English verbs – 3,330 forms'], ['sup', '5', '1,801', '[EMPTY]', '[EMPTY]', '[EMPTY]', '80.4', '80.7'], ['bench', '7.5', '1,949.5', '64.0', '80.1', '71.1', '52.0', '67.5'], ['gold k', '5', '1,977.3', '79.6', '82.1', '80.8', '54.7', '69.4'], ['[BOLD] Latin nouns – 6,903 forms', '[BOLD] Latin nouns – 6,903 forms', '[BOLD] Latin nouns – 6,903 forms', '[BOLD] Latin nouns – 6,903 forms', '[BOLD] Latin nouns – 6,903 forms', '[BOLD] Latin nouns – 6,903 forms', '[BOLD] Latin nouns – 6,903 forms', '[BOLD] Latin nouns – 6,903 forms'], ['sup', '12', '3,013', '[EMPTY]', '[EMPTY]', '[EMPTY]', '80.0', '88.0'], ['bench', '13.0', '3,746.5', '38.8', '73.2', '50.6', '17.2', '72.9'], ['gold k', '12', '3,749.0', '39.9', '71.6', '51.3', '17.5', '72.6'], ['[BOLD] Russian nouns – 36,321 forms', '[BOLD] Russian nouns – 36,321 forms', '[BOLD] Russian nouns – 36,321 forms', '[BOLD] Russian nouns – 36,321 forms', '[BOLD] Russian nouns – 36,321 forms', '[BOLD] Russian nouns – 36,321 forms', '[BOLD] Russian nouns – 36,321 forms', '[BOLD] Russian nouns – 36,321 forms'], ['sup', '14', '14,502', '[EMPTY]', '[EMPTY]', '[EMPTY]', '94.7', '96.8'], ['bench', '16.5', '19,792.0', '44.5', '72.2', '55.0', '31.9', '86.2'], ['gold k', '14', '20,944.0', '45.7', '69.1', '55.0', '31.6', '84.3']]
For reference, we also report a supervised benchmark, sup, which assumes a gold grid as input, then solves the PCFP exactly as the benchmark does. In terms of the PDP, clustering assigns lexicon forms to paradigms (46–82%) more accurately than to cells (26–80%). Results are high for English, which has the fewest gold cells, and lower elsewhere. In German, Latin, and Russian, our benchmark proposes nearly as many cells as gold k and thus performs similarly. For English, it overestimates the true number and performs worse. For Arabic, it severely underestimates k but performs better, likely due to the orthography: without diacritics, the three case distinctions become obscured in almost all instances. In general, fixing the true number of cells can be unhelpful because syncretism and the Zipfian distribution of cells create situations where certain gold cells are too difficult to detect. Allowing the system to choose its own number of cells lets it focus on distinctions for which there is sufficient distributional evidence.
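The exact F_cell, F_par, and F_grid definitions live in the paper's Section 3.3 and are not reproduced here. Purely as an illustration of scoring a predicted grouping of forms against a gold grouping, one could use a one-to-one matching F-score as in the sketch below; this is an assumed, generic metric, not the paper's.

import numpy as np
from scipy.optimize import linear_sum_assignment

def matched_f1(pred_clusters, gold_clusters):
    """One-to-one match predicted clusters to gold clusters (Hungarian algorithm),
    then report F1 over the total overlap. A sketch only."""
    overlap = np.array([[len(p & g) for g in gold_clusters] for p in pred_clusters])
    rows, cols = linear_sum_assignment(-overlap)  # maximize total overlap
    hits = overlap[rows, cols].sum()
    prec = hits / sum(len(p) for p in pred_clusters)
    rec = hits / sum(len(g) for g in gold_clusters)
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# Toy example with made-up word forms.
pred = [{"wugs", "wug"}, {"blicks"}]
gold = [{"wug", "wugs", "wugged"}, {"blick", "blicks"}]
print(round(matched_f1(pred, gold), 3))  # 0.75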
The Paradigm Discovery Problem
2005.01630
Table 7: Benchmark variations demonstrating the effects of various factors, averaged over 4 runs.
['Paradigms', 'Paradigms', '[BOLD] PDP F_{\\mathrm{cell}}', '[BOLD] PDP F_{\\mathrm{par}}', '[BOLD] PDP F_{\\mathrm{grid}}', '[BOLD] PCFP An', '[BOLD] PCFP LE']
[['[BOLD] Arabic nouns – 27 cells', '[BOLD] Arabic nouns – 27 cells', '[BOLD] Arabic nouns – 27 cells', '[BOLD] Arabic nouns – 27 cells', '[BOLD] Arabic nouns – 27 cells', '[BOLD] Arabic nouns – 27 cells', '[BOLD] Arabic nouns – 27 cells'], ['Gold k', '4,930.3', '25.9', '46.4', '33.1', '16.1', '57.2'], ['larger corpus', '5,039.5', '29.1', '37.5', '32.8', '20.4', '49.2'], ['smaller corpus', '5,004.0', '18.8', '37.7', '24.9', '9.5', '42.1'], ['no affix bias', '4,860.3', '21.5', '47.7', '29.7', '16.3', '43.5'], ['no window bias', '4,978.5', '24.0', '47.5', '31.8', '17.6', '55.8'], ['\\omega(x,c)=1', '3,685.0', '[EMPTY]', '34.4', '28.8', '5.2', '35.5'], ['\\omega(x,c)=0', '1,310.5', '[EMPTY]', '10.0', '13.9', '0.1', '5.8'], ['random sources', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '16.3', '55.9'], ['[BOLD] Latin nouns – 12 cells', '[BOLD] Latin nouns – 12 cells', '[BOLD] Latin nouns – 12 cells', '[BOLD] Latin nouns – 12 cells', '[BOLD] Latin nouns – 12 cells', '[BOLD] Latin nouns – 12 cells', '[BOLD] Latin nouns – 12 cells'], ['Gold k', '3,749.0', '39.9', '71.6', '51.3', '17.5', '72.6'], ['larger corpus', '3,529.5', '42.8', '79.1', '55.5', '16.2', '69.9'], ['smaller corpus', '4,381.5', '30.7', '49.1', '37.8', '14.6', '51.1'], ['no affix bias', '3,906.8', '37.1', '68.2', '48.1', '22.7', '66.6'], ['no window bias', '3,756.5', '42.0', '71.2', '52.8', '17.9', '70.9'], ['\\omega(x,c)=1', '3,262.5', '[EMPTY]', '67.1', '49.6', '11.0', '52.9'], ['\\omega(x,c)=0', '1,333.3', '[EMPTY]', '26.3', '31.7', '0.7', '7.1'], ['random sources', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '16.5', '72.3']]
We consider augmenting and shrinking the corpus. We also reset the FastText hyperparameters used to achieve a morphosyntactic inductive bias to their default values (no affix/window bias) and consider two constant exponent penalty weights (\omega(x_{f},c)=1 and \omega(x_{f},c)=0). Finally, we consider selecting random sources for PCFP reinflection instead of identifying reliable sources. For all variants, the number of cells is fixed to the ground truth.
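A rough gensim sketch of the kind of FastText configuration being varied here: a narrow context window and short, affix-oriented character n-grams versus default-style values. The specific hyperparameter values and the toy corpus are illustrative assumptions, not the ones used in the paper.

from gensim.models import FastText

tokenized_corpus = [["the", "wugs", "slept"], ["a", "wug", "sleeps"]]  # toy corpus

# Biased setting: a small window emphasizes local (morphosyntactic) context,
# and short character n-grams emphasize affix-like substrings.
biased = FastText(sentences=tokenized_corpus, vector_size=100,
                  window=2, min_n=2, max_n=4, min_count=1, epochs=5)

# "No affix / window bias" variant: fall back to default-style values.
default_like = FastText(sentences=tokenized_corpus, vector_size=100,
                        window=5, min_n=3, max_n=6, min_count=1, epochs=5)

print(biased.wv["wugged"][:3])  # OOV forms get vectors from character n-grams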
The PhotoBook Dataset:Building Common Ground through Visually-Grounded Dialogue
1906.01530
Table 2: Number of reference chains, dialogue segments, and image types (targets and non-targets) in each data split.
['[BOLD] Split', '[BOLD] Chains', '[BOLD] Segments', '[BOLD] Targets', '[BOLD] Non-Targets']
[['Train', '12,694', '30,992', '40,898', '226,993'], ['Val', '2,811', '6,801', '9,070', '50,383'], ['Test', '2,816', '6,876', '9,025', '49,774']]
The automatically extracted co-reference chains per target image were split into three disjoint sets for training (70%), validation (15%) and testing (15%), aiming at an equal distribution of target image domains in all three sets. The results show that the resolution capabilities of our model are well above the baseline. The History model achieves higher recall and F-score than the No-History model, while precision is comparable across these two conditions.
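A minimal sketch of the kind of split described above, grouping chains by their target image's domain and sampling roughly 70/15/15 within each domain. Function and variable names are illustrative assumptions.

import random

def split_chains(chains, get_domain, seed=0):
    """chains: list of co-reference chains; get_domain maps a chain to its
    target image's domain. Returns (train, val, test) with ~70/15/15 per domain."""
    rng = random.Random(seed)
    by_domain = {}
    for chain in chains:
        by_domain.setdefault(get_domain(chain), []).append(chain)
    train, val, test = [], [], []
    for domain_chains in by_domain.values():
        rng.shuffle(domain_chains)
        n = len(domain_chains)
        n_train, n_val = int(0.70 * n), int(0.15 * n)
        train += domain_chains[:n_train]
        val += domain_chains[n_train:n_train + n_val]
        test += domain_chains[n_train + n_val:]
    return train, val, test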
The PhotoBook Dataset:Building Common Ground through Visually-Grounded Dialogue
1906.01530
Table 1: Avg. token counts in COCO captions and the first and last descriptions in PhotoBook, plus their cosine distance to the caption’s cluster mean vector. The distance between first and last descriptions is 0.083.
['[BOLD] Source', '[BOLD] # Tokens', '[BOLD] # Content', '[BOLD] Distance']
[['COCO captions', '11.167', '5.255', '–'], ['First description', '9.963', '5.185', '0.091'], ['Last description', '5.685', '5.128', '0.156']]
In a small-scale pilot study, Ilinykh et al. We argue that in the PhotoBook task referring expressions are adapted not only to the goal-oriented nature of the interaction but also to the developing common ground between the participants. This effect becomes most apparent when collecting all referring expressions for a specific target image produced during the different rounds of a game into its coreference chain. The following excerpt displays such a coreference chain extracted from the PhotoBook dataset: A: Do you have a boy with a teal coloured shirt with yellow holding a bear with a red shirt? B: Boy with teal shirt and bear with red shirt? A: Teal shirt boy? To quantify the effect of referring expression refinement, we compare participants' first and last descriptions of a given target image with the image's captions provided in the MS COCO dataset. For this purpose we manually annotated the first and last expressions referring to a set of six target images across ten random games in the PhotoBook dataset. Before filtering, first referring expressions do not significantly differ in length from the COCO captions. Last descriptions, however, are significantly shorter than both the COCO captions and the first descriptions. After filtering for content words, no significant differences remain. We also calculate the cosine distance between the three different descriptions based on their average word vectors, expecting refined descriptions to drift away from the captions' mean word vector, which is confirmed in our results. Comparing the distribution of word classes in the captions and referring expressions finally revealed a similar distribution in first referring expressions and COCO captions, and a significantly different distribution in last referring expressions, among other things doubling the relative frequency of nouns.
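A small sketch of the comparison described above: represent each description by its average word vector and measure cosine distance to the mean caption vector. The embedding lookup `embed` is assumed to be given (e.g., any pretrained token-to-vector mapping); it is not part of the original description.

import numpy as np

def avg_vector(tokens, embed):
    """Mean of the available word vectors; assumes at least one token is in `embed`."""
    vecs = [embed[t] for t in tokens if t in embed]
    return np.mean(vecs, axis=0)

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# e.g., distance between a first/last referring expression and the mean of the
# image's COCO caption vectors:
# d = cosine_distance(avg_vector(last_description, embed), caption_mean_vector)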
The PhotoBook Dataset:Building Common Ground through Visually-Grounded Dialogue
1906.01530
Table 3: Results for the target images in the test set.
['[BOLD] Model', '[BOLD] Precision', '[BOLD] Recall', '[BOLD] F1']
[['Random baseline', '15.34', '49.95', '23.47'], ['No-History', '56.65', '75.86', '64.86'], ['History', '56.66', '77.41', '65.43'], ['History / No image', '35.66', '63.18', '45.59']]
Every candidate image contributes individually to the scores, i.e., the task is not treated as multi-label for evaluation purposes. Random baseline scores are obtained by taking the average of 10 runs with a model that predicts targets and non-targets randomly for the images in the test set.
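A sketch of the evaluation protocol as described: each candidate image is scored as an independent binary decision, and the random baseline averages 10 runs of random target/non-target predictions. It uses scikit-learn's standard precision/recall/F1 routine; the toy labels are illustrative assumptions.

import random
from sklearn.metrics import precision_recall_fscore_support

def evaluate(y_true, y_pred):
    p, r, f, _ = precision_recall_fscore_support(y_true, y_pred,
                                                 average="binary", zero_division=0)
    return p, r, f

def random_baseline(y_true, runs=10, seed=0):
    rng = random.Random(seed)
    scores = [evaluate(y_true, [rng.randint(0, 1) for _ in y_true]) for _ in range(runs)]
    return [sum(s[i] for s in scores) / runs for i in range(3)]

y_true = [1, 0, 0, 0, 1, 0]           # target vs. non-target per candidate image
y_pred = [1, 0, 1, 0, 1, 0]           # model decisions
print(evaluate(y_true, y_pred))       # precision, recall, F1
print(random_baseline(y_true))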
The PhotoBook Dataset:Building Common Ground through Visually-Grounded Dialogue
1906.01530
Table 6: Results for target images in the validation set.
['[BOLD] Model', '[BOLD] Precision', '[BOLD] Recall', '[BOLD] F1']
[['No History', '56.37', '75.91', '64.70'], ['History', '56.32', '78.10', '65.45'], ['No image', '34.61', '62.49', '44.55']]
Non-target images constitute the large majority of candidate images, and thus results are substantially higher for this class.
Chinese Named Entity Recognition Augmented with Lexicon Memory
1912.08282
Table 7: Results on the Weibo NER dataset
['[BOLD] Model', '[BOLD] P (%)', '[BOLD] R (%)', '[BOLD] F1 (%)']
[['[peng2016improving]', '-', '-', '58.99'], ['[he2017unified]', '-', '-', '58.23'], ['[zhang2018lattice]', '-', '-', '58.79'], ['LEMON', '70.86', '55.42', '[BOLD] 62.19']]
LEMON-2 achieved state-of-the-art results on all four datasets. Our model also achieved the highest F1-score. Note that the Weibo NER data is extracted from social media: it is full of non-standard expressions and contains only about 1.4k samples. The problems of out-of-vocabulary words and ambiguity of word boundaries become more serious for NER on this dataset.
Chinese Named Entity Recognition Augmented with Lexicon Memory
1912.08282
Table 2: Results on OntoNotes-4 development set with different model architectures.
['[BOLD] fragment \\ Character', '[BOLD] fragment \\ Character', '[BOLD] P (%) [BOLD] Baseline', '[BOLD] R (%) [BOLD] Baseline', '[BOLD] F1 (%) [BOLD] Baseline', '[BOLD] P (%) [BOLD] Transformer', '[BOLD] R (%) [BOLD] Transformer', '[BOLD] F1 (%) [BOLD] Transformer', '[BOLD] P (%) [BOLD] Bi-RNN', '[BOLD] R (%) [BOLD] Bi-RNN', '[BOLD] F1 (%) [BOLD] Bi-RNN']
[['[BOLD] Gold', '[BOLD] BOW', '72.40', '62.03', '66.81', '-', '-', '-', '73.60', '69.08', '71.27'], ['[BOLD] Gold', '[BOLD] FOFE', '75.52', '64.86', '69.78', '64.35', '54.04', '58.74', '76.93', '70.43', '73.54'], ['[BOLD] Gold', '[BOLD] Bi-RNN', '73.68', '69.74', '71.66', '59.92', '54.87', '57.28', '71.51', '73.66', '72.57'], ['[EMPTY]', '[BOLD] BOW + Lex', '78.77', '70.40', '74.35 (+7.54)', '76.73', '73.48', '75.07 (+8.26)', '78.27', '75.34', '76.78 (+5.51)'], ['[EMPTY]', '[BOLD] FOFE + Lex', '77.33', '71.90', '74.52 (+4.74)', '79.92', '72.65', '76.11 (+17.37)', '79.49', '73.77', '76.53 (+2.99)'], ['[EMPTY]', '[BOLD] Bi-RNN + Lex', '77.40', '74.39', '75.87 (+4.21)', '79.62', '73.87', '76.64 (+19.36)', '81.12', '75.18', '[BOLD] 78.04 (+5.47)'], ['[BOLD] Auto', '[BOLD] BOW', '76.67', '56.24', '64.88', '-', '-', '-', '75.39', '61.36', '66.92'], ['[BOLD] Auto', '[BOLD] FOFE', '71.66', '58.21', '64.24', '73.17', '61.75', '66.98', '76.20', '61.65', '68.16'], ['[BOLD] Auto', '[BOLD] Bi-RNN', '74.60', '63.67', '68.70', '72.05', '63.52', '67.52', '76.73', '63.70', '69.61'], ['[EMPTY]', '[BOLD] BOW + Lex', '76.33', '64.75', '70.06 (+5.18)', '73.96', '64.69', '69.02 (+4.14)', '78.42', '67.06', '72.30 (+5.38)'], ['[EMPTY]', '[BOLD] FOFE + Lex', '77.24', '63.91', '69.95 (+5.71)', '78.46', '62.93', '69.85 (+2.87)', '76.24', '68.76', '72.31 (+4.15)'], ['[EMPTY]', '[BOLD] Bi-RNN + Lex', '77.62', '66.32', '71.53 (+2.83)', '76.79', '67.09', '71.61 (+4.09)', '76.57', '69.54', '[BOLD] 72.89 (+3.28)']]
The performance of all models decreases by approximately 4% in F1-score when we use the word segmentation and POS-tagging results automatically generated by the THULAC toolkit instead of the ground truth. This shows that NER performance is significantly affected by the upstream tasks through error propagation.
Chinese Named Entity Recognition Augmented with Lexicon Memory
1912.08282
Table 3: Results on the OntoNotes-4 development set with different features
['[BOLD] Features \\ Data', '[BOLD] Features \\ Data', '[BOLD] P (%) [BOLD] Ground truth', '[BOLD] R (%) [BOLD] Ground truth', '[BOLD] F1 (%) [BOLD] Ground truth', '[BOLD] P (%) [BOLD] Automatically labelled', '[BOLD] R (%) [BOLD] Automatically labelled', '[BOLD] F1 (%) [BOLD] Automatically labelled']
[['[BOLD] NCRF', '[BOLD] char', '66.37', '60.21', '63.14', '-', '-', '-'], ['[BOLD] NCRF', '[BOLD] char + seg', '70.58', '69.96', '70.27', '70.77', '63.33', '66.85'], ['[BOLD] NCRF', '[BOLD] char + pos', '71.81', '74.48', '73.12', '70.20', '[BOLD] 70.26', '70.23'], ['[BOLD] NCRF', '[BOLD] char + seg + pos', '75.63', '72.35', '73.08', '72.88', '68.18', '70.45'], ['[BOLD] No Lex', '[BOLD] char', '67.6', '55.03', '60.67', '-', '-', '-'], ['[BOLD] No Lex', '[BOLD] char + seg', '72.16', '66.09', '68.99', '70.48', '62.65', '66.33'], ['[BOLD] No Lex', '[BOLD] char + pos', '74.39', '65.44', '69.63', '72.87', '63.73', '67.99'], ['[BOLD] No Lex', '[BOLD] char + seg + pos', '74.97', '72.23', '73.58', '76.29', '64.43', '69.86'], ['[BOLD] Lex', '[BOLD] char', '77.27', '60.73', '68.01', '-', '-', '-'], ['[BOLD] Lex', '[BOLD] char + seg', '78.40', '70.75', '74.38', '76.11', '64.63', '69.91'], ['[BOLD] Lex', '[BOLD] char + pos', '77.71', '72.35', '74.93', '77.46', '66.89', '71.79'], ['[BOLD] Lex', '[BOLD] char + seg + pos', '[BOLD] 78.70', '[BOLD] 74.95', '[BOLD] 76.78', '[BOLD] 76.41', '68.61', '[BOLD] 72.30']]
We also trained an LSTM-CRF model as a traditional approach for comparison using NCRF++, an open-source neural sequence labeling toolkit [yang2018ncrf]. The experimental results demonstrate that the features derived from word segmentation and POS-tagging consistently benefit all models, regardless of whether they are labeled by humans or produced by an automatic toolkit.
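A minimal PyTorch-style sketch of the "char + seg + pos" input features being compared here: embeddings for the character, its segmentation tag, and its POS tag are concatenated before the encoder. Dimensions, vocabulary sizes, and class names are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class CharSegPosEmbedding(nn.Module):
    def __init__(self, n_chars, n_seg_tags, n_pos_tags,
                 char_dim=100, seg_dim=20, pos_dim=20):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.seg_emb = nn.Embedding(n_seg_tags, seg_dim)
        self.pos_emb = nn.Embedding(n_pos_tags, pos_dim)

    def forward(self, chars, seg_tags, pos_tags):
        # Each input: (batch, seq_len) of ids; output: (batch, seq_len, char+seg+pos dims).
        return torch.cat([self.char_emb(chars),
                          self.seg_emb(seg_tags),
                          self.pos_emb(pos_tags)], dim=-1)

# Toy usage
emb = CharSegPosEmbedding(n_chars=5000, n_seg_tags=4, n_pos_tags=30)
x = emb(torch.zeros(2, 7, dtype=torch.long),
        torch.zeros(2, 7, dtype=torch.long),
        torch.zeros(2, 7, dtype=torch.long))
print(x.shape)  # torch.Size([2, 7, 140])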
Chinese Named Entity Recognition Augmented with Lexicon Memory
1912.08282
Table 4: Results on the MSRA dataset
['[BOLD] Model', '[BOLD] P (%)', '[BOLD] R (%)', '[BOLD] F1 (%)']
[['[chen2006chinese]', '91.22', '81.71', '86.20'], ['[zhang2006sighan]', '92.20', '90.08', '91.18'], ['[lu2016multi]', '-', '-', '87.94'], ['[dong2016character]', '91.28', '90.62', '90.95'], ['[zhang2018lattice]', '93.57', '[BOLD] 92.79', '93.18'], ['LEMON', '[BOLD] 95.39', '91.77', '[BOLD] 93.55']]
LEMON-2 achieved state-of-the-art results on all four datasets. Our model also achieved the highest F1-score. Note that the Weibo NER data is extracted from social media: it is full of non-standard expressions and contains only about 1.4k samples. The problems of out-of-vocabulary words and ambiguity of word boundaries become more serious for NER on this dataset.
Chinese Named Entity Recognition Augmented with Lexicon Memory
1912.08282
Table 5: Results on the Resume NER dataset
['[BOLD] Model', '[BOLD] P (%)', '[BOLD] R (%)', '[BOLD] F1 (%)']
[['word{\\dagger}', '93.72', '93.44', '93.58'], ['word+char+bichar{\\dagger}', '94.07', '94.42', '94.24'], ['char {\\dagger}', '93.66', '93.31', '93.48'], ['char+bichar+softword{\\dagger}', '94.53', '[BOLD] 94.29', '94.41'], ['[zhang2018lattice]', '94.81', '94.11', '94.46'], ['LEMON', '[BOLD] 95.59', '94.07', '[BOLD] 94.82']]
LEMON-2 achieved state-of-the-art results on all four datasets. Our model also achieved the highest F1-score. Note that the Weibo NER data is extracted from social media: it is full of non-standard expressions and contains only about 1.4k samples. The problems of out-of-vocabulary words and ambiguity of word boundaries become more serious for NER on this dataset.
Chinese Named Entity Recognition Augmented with Lexicon Memory
1912.08282
Table 6: Results on the OntoNotes-4 dataset
['[BOLD] Model', '[BOLD] P (%)', '[BOLD] R (%)', '[BOLD] F1 (%)']
[['[wang2013effective] {\\dagger}', '76.43', '72.32', '74.32'], ['[che2013named] {\\dagger}', '77.71', '72.51', '75.02'], ['[yang2016combining] {\\dagger}', '72.98', '[BOLD] 80.15', '76.40'], ['LEMON {\\dagger}', '79.27', '78.29', '[BOLD] 78.78'], ['[zhang2018lattice]', '76.35', '71.56', '73.88'], ['LEMON', '[BOLD] 80.61', '71.05', '[BOLD] 75.53']]
LEMON-2 achieved state-of-the-art results on all four datasets. Our model also achieved the highest F1-score. Note that the Weibo NER data is extracted from social media: it is full of non-standard expressions and contains only about 1.4k samples. The problems of out-of-vocabulary words and ambiguity of word boundaries become more serious for NER on this dataset.
Self-Attention and Ingredient-Attention Based Model for Recipe Retrieval from Image Queries
1911.01770
Table 1. Comparison between our method, our Joint Neural Embedding (JNE) (Marín et al., 2018) and AdaMine (Carvalho et al., 2018) re-implementations. For all models we used the selected matching pairs generated by reducing noisy instruction sentences as described above. Recall rates are averaged over the evaluation batches.
['Image to Recipe', 'Image to Recipe', 'Image to Recipe MedR', 'Image to Recipe R@1', 'Image to Recipe R@5', 'Image to Recipe R@10']
[['1k samples', 'Random (Marín et al., 2018 )', '500.0', '0.001', '0.005', '0.01'], ['1k samples', 'JNE (Marín et al., 2018 )', '5.0±0.1', '25.9', '52.6', '64.1'], ['1k samples', 'AdaMine (Carvalho et al., 2018 )', '3.0±0.1', '33.1', '64.3', '75.2'], ['1k samples', 'IA', '2.9±0.3', '34.6', '66.0', '76.6']]
Each sample in these subsets consists of a text embedding and an image embedding in the shared latent space. Since our interest lies in the recipe retrieval task, we optimized and evaluated our model by using each image embedding in the subsets as a query against all text embeddings. By ranking the candidate text embeddings according to their cosine distance to the query, we estimate the median rank. The model performs best if the matching text embedding is found at the first rank. Further, we estimate the recall at top K over all queries, i.e., the percentage of queries for which the matching text embedding is ranked among the K closest results.
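A sketch of the retrieval evaluation described above: rank candidate recipe embeddings by cosine similarity to each image query, then report the median rank of the matching recipe and recall at K. Array names and the random toy data are illustrative assumptions.

import numpy as np

def retrieval_metrics(image_embs, text_embs, ks=(1, 5, 10)):
    """Row i of image_embs matches row i of text_embs (e.g., a 1k-sample subset)."""
    img = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = img @ txt.T                       # cosine similarity matrix
    order = np.argsort(-sims, axis=1)        # best candidate first
    ranks = np.array([np.where(order[i] == i)[0][0] + 1 for i in range(len(img))])
    medr = np.median(ranks)
    recalls = {k: float(np.mean(ranks <= k)) for k in ks}
    return medr, recalls

rng = np.random.default_rng(0)
img, txt = rng.normal(size=(1000, 64)), rng.normal(size=(1000, 64))
print(retrieval_metrics(img, txt))           # random embeddings -> MedR near 500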
Assertion Detection in Multi-Label Clinical Text using Scope Localization
2005.09246
Table 2: Distribution of Assertion classes in the data.
['[BOLD] Class', '[BOLD] Dataset-I [BOLD] Train', '[BOLD] Dataset-I [BOLD] Val', '[BOLD] Dataset-I [BOLD] Test', '[BOLD] Dataset-II [BOLD] Train', '[BOLD] Dataset-II [BOLD] Val', '[BOLD] Dataset-II [BOLD] Test']
[['Present', '3711', '511', '524', '17407', '2215', '2452'], ['Absent', '596', '73', '73', '6136', '708', '805'], ['Conditional', '169', '31', '19', '393', '44', '49'], ['Hypothetical', '147', '22', '18', '69', '10', '5'], ['Possibility', '62', '5', '11', '219', '37', '25'], ['AWSE', '15', '3', '2', '21', '4', '2']]
The annotations were done using the BRAT tool. Rules for annotation were generated after consulting the Radiologist supervising the annotators. Other Radiologists were consulted to annotate any mentions that were previously unseen or ambiguous, and also for the final review. For a fair comparison with the baseline, the box predictions from our model are converted to a sequence of labels per token. At first glance, performance seems to be affected by the quantity of data available for training, with the best performance on the Present class and the worst on the AWSE class. Further analysis suggests that the scope lengths found in the training set are also a crucial factor. As shown, model performance for the Present class declines at scope lengths 7, 10, and 20, reflecting the sparsity of this class at these scopes in the training set. In contrast, the model performs well on the Hypothetical class at scope length 7, reflecting the better distribution of this class at this scope relative to other scopes.
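A sketch of the conversion mentioned above: flatten predicted (start, end, class) scopes into one label per token, which allows token-level comparison against a sequence-labeling baseline. The tie-breaking and overlap handling here, as well as the toy sentence and class names, are simplifying assumptions.

def boxes_to_token_labels(n_tokens, boxes, default="O"):
    """boxes: list of (start, end, label) with inclusive token indices.
    Later boxes overwrite earlier ones on overlap (a simplification)."""
    labels = [default] * n_tokens
    for start, end, label in boxes:
        for i in range(start, min(end, n_tokens - 1) + 1):
            labels[i] = label
    return labels

tokens = "no evidence of pneumothorax , possible effusion".split()
boxes = [(0, 3, "Absent"), (5, 6, "Possibility")]
print(list(zip(tokens, boxes_to_token_labels(len(tokens), boxes))))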