paper (string, 0-839 chars) | paper_id (string, 1-12 chars) | table_caption (string, 3-2.35k chars) | table_column_names (large_string, 13-1.76k chars) | table_content_values (large_string, 2-11.9k chars) | text (large_string, 69-2.82k chars) |
---|---|---|---|---|---|
Cross-lingual Word Analogies using Linear Transformations between Semantic Spaces | 1807.04175 | (b) state-currency | ['[EMPTY]', 'En', 'De', 'Es', 'It', 'Cs', 'Hr'] | [['En', '11.1', '7.4', '3.9', '4.4', '2.1', '5.3'], ['De', '5.8', '6.7', '1.5', '3.2', '1.5', '3.4'], ['Es', '6.5', '3.7', '2.8', '3.4', '1.8', '1.4'], ['It', '6.3', '4.9', '2.8', '3.7', '3.0', '3.1'], ['Cs', '3.4', '2.7', '1.7', '2.5', '1.0', '1.6'], ['Hr', '5.3', '5.7', '1.5', '1.6', '1.5', '4.3']] | Again, rows represent the source language a and columns the target language b. The results were achieved using B-CCA-cu transformation with dictionaries of size n=20,000. Each language seems to have strengths and weaknesses. |
Cross-lingual Word Analogies using Linear Transformations between Semantic Spaces | 1807.04175 | (c) capital-common-countries | ['[EMPTY]', 'En', 'De', 'Es', 'It', 'Cs', 'Hr'] | [['En', '95.3', '81.7', '86.7', '86.8', '48.8', '53.3'], ['De', '91.9', '82.6', '85.9', '89.2', '55.0', '49.3'], ['Es', '93.8', '82.8', '83.3', '84.5', '54.7', '47.8'], ['It', '93.6', '83.2', '85.7', '88.9', '54.3', '53.1'], ['Cs', '91.1', '77.9', '79.5', '80.0', '44.9', '43.9'], ['Hr', '71.1', '55.0', '64.4', '55.6', '25.9', '32.2']] | Again, rows represent the source language a and columns the target language b. The results were achieved using B-CCA-cu transformation with dictionaries of size n=20,000. Each language seems to have strengths and weaknesses. |
Cross-lingual Word Analogies using Linear Transformations between Semantic Spaces | 1807.04175 | (d) state-adjective | ['[EMPTY]', 'En', 'De', 'Es', 'It', 'Cs', 'Hr'] | [['En', '91.2', '58.7', '90.2', '91.8', '86.3', '88.0'], ['De', '91.1', '75.9', '86.0', '92.8', '73.5', '80.2'], ['Es', '90.5', '71.5', '87.4', '94.1', '83.8', '83.6'], ['It', '90.5', '61.5', '89.8', '89.1', '90.1', '85.2'], ['Cs', '88.5', '44.6', '86.6', '90.4', '92.7', '80.2'], ['Hr', '86.6', '66.8', '82.0', '85.4', '82.7', '86.5']] | Again, rows represent the source language a and columns the target language b. The results were achieved using B-CCA-cu transformation with dictionaries of size n=20,000. Each language seems to have strengths and weaknesses. |
Cross-lingual Word Analogies using Linear Transformations between Semantic Spaces | 1807.04175 | (e) adjective-comparative | ['[EMPTY]', 'En', 'De', 'Es', 'It', 'Cs', 'Hr'] | [['En', '78.5', '55.1', '1.7', '10.0', '34.5', '31.6'], ['De', '68.4', '59.1', '2.2', '7.3', '17.4', '16.8'], ['Es', '34.8', '29.2', '25.0', '12.0', '4.5', '9.6'], ['It', '41.3', '31.9', '8.0', '13.3', '4.3', '5.5'], ['Cs', '76.4', '49.7', '2.0', '15.3', '48.4', '33.1'], ['Hr', '67.6', '45.3', '8.3', '15.6', '32.6', '32.2']] | Again, rows represent the source language a and columns the target language b. The results were achieved using B-CCA-cu transformation with dictionaries of size n=20,000. Each language seems to have strengths and weaknesses. |
Cross-lingual Word Analogies using Linear Transformations between Semantic Spaces | 1807.04175 | (f) adjective-superlative | ['[EMPTY]', 'En', 'De', 'Es', 'It', 'Cs', 'Hr'] | [['En', '68.9', '15.3', '11.5', '20.2', '12.0', '19.0'], ['De', '63.8', '32.9', '12.5', '20.1', '15.6', '19.8'], ['Es', '4.6', '0.4', '32.8', '37.3', '0.0', '0.2'], ['It', '5.9', '0.5', '24.9', '62.1', '0.1', '0.2'], ['Cs', '54.6', '21.8', '4.9', '24.3', '28.5', '17.3'], ['Hr', '57.2', '21.6', '7.5', '10.8', '21.6', '29.2']] | Again, rows represent the source language a and columns the target language b. The results were achieved using B-CCA-cu transformation with dictionaries of size n=20,000. Each language seems to have strengths and weaknesses. |
Cross-lingual Word Analogies using Linear Transformations between Semantic Spaces | 1807.04175 | (g) adjective-opposite | ['[EMPTY]', 'En', 'De', 'Es', 'It', 'Cs', 'Hr'] | [['En', '51.4', '39.4', '40.7', '38.2', '79.8', '49.8'], ['De', '49.5', '33.5', '42.9', '37.9', '78.7', '47.4'], ['Es', '46.2', '37.2', '40.3', '37.3', '76.7', '47.2'], ['It', '49.6', '38.9', '43.3', '38.9', '79.2', '47.4'], ['Cs', '49.6', '35.6', '33.9', '34.3', '78.9', '41.0'], ['Hr', '46.6', '40.2', '41.4', '36.4', '77.8', '51.9']] | Again, rows represent the source language a and columns the target language b. The results were achieved using B-CCA-cu transformation with dictionaries of size n=20,000. Each language seems to have strengths and weaknesses. |
Cross-lingual Word Analogies using Linear Transformations between Semantic Spaces | 1807.04175 | (h) noun-plural | ['[EMPTY]', 'En', 'De', 'Es', 'It', 'Cs', 'Hr'] | [['En', '66.8', '48.2', '67.6', '45.0', '32.6', '40.7'], ['De', '66.1', '49.0', '65.2', '41.8', '33.2', '40.4'], ['Es', '68.9', '48.6', '71.7', '55.4', '32.1', '45.0'], ['It', '68.6', '48.4', '72.5', '52.6', '33.3', '42.8'], ['Cs', '62.2', '43.8', '61.7', '36.2', '39.4', '31.3'], ['Hr', '66.8', '47.6', '63.2', '41.8', '32.9', '44.2']] | Again, rows represent the source language a and columns the target language b. The results were achieved using B-CCA-cu transformation with dictionaries of size n=20,000. Each language seems to have strengths and weaknesses. |
Interactive Variance Attention based Online Spoiler Detection for Time-Sync Comments | 1908.03451 | Table 4. F1-score, Precision, and Recall with different IVA parameters | ['Parameters Metrics', 'R=1 Precision', 'R=1 Recall', 'R=1 F1-score', 'R=2 Precision', 'R=2 Recall', 'R=2 F1-score', 'R=3 Precision', 'R=3 Recall', 'R=3 F1-score', 'R=4 Precision', 'R=4 Recall', 'R=4 F1-score'] | [['P=1', '0.489', '0.472', '0.480', '0.525', '0.566', '0.525', '0.537', '0.589', '0.540', '0.545', '0.607', '0.555'], ['P=2', '0.583', '0.563', '0.561', '0.638', '0.662', '0.649', '0.642', '0.682', '0.652', '0.628', '0.695', '0.673'], ['P=3', '0.688', '0.733', '0.712', '0.759', '0.762', '0.740', '0.817', '0.797', '0.807', '0.805', '0.811', '0.807'], ['P=4', '0.730', '0.762', '0.744', '0.770', '0.824', '0.797', '0.822', '0.833', '0.819', '0.829', '0.828', '0.814'], ['P=5', '0.777', '0.781', '0.769', '0.817', '0.845', '0.841', '0.825', '0.860', '0.846', '0.819', '0.869', '0.850'], ['P=6', '0.779', '0.808', '0.773', '0.820', '0.853', '0.839', '0.830', '0.854', '0.845', '0.827', '0.855', '0.844']] | Finally, we change the number of former neighbors R and keyframe P to see their influence on the experiment result in the validation set. We can find the F1-score increase with the number of former neighbors R and keyframe P. This result proves that our application of the interactive and real-time properties of TSCs is correct. When we increase former neighbors, TSCs establish semantics association with more surrounding neighbors. When we increase keyframes, TSCs can be compared with more important plots in the video. |
Interactive Variance Attention based Online Spoiler Detection for Time-Sync Comments | 1908.03451 | Table 3. F1-score, Precision, and Recall of each method | ['[BOLD] (a) TV-series Baselines', '[BOLD] (a) TV-series Precision', '[BOLD] (a) TV-series Recall', '[BOLD] (a) TV-series F1-score'] | [['KM', '0.443', '[BOLD] 0.892', '0.577'], ['LDA', '0.563', '0.656', '0.606'], ['LI-NPP', '0.677', '0.713', '0.695'], ['DN-GAA', '0.730', '0.798', '0.747'], ['SBN', '0.782', '0.791', '0.779'], ['SBN-WT', '0.762', '0.743', '0.754'], ['SBN-IVA', '[BOLD] 0.843', '0.856', '[BOLD] 0.850'], ['[BOLD] (b) Movies', '[BOLD] (b) Movies', '[BOLD] (b) Movies', '[BOLD] (b) Movies'], ['Baselines', 'Precision', 'Recall', 'F1-score'], ['KM', '0.398', '[BOLD] 0.912', '0.545'], ['LDA', '0.497', '0.604', '0.562'], ['LI-NPP', '0.654', '0.727', '0.688'], ['DN-GAA', '0.714', '0.792', '0.751'], ['SBN', '0.722', '0.833', '0.785'], ['SBN-WT', '0.708', '0.776', '0.732'], ['SBN-IVA', '[BOLD] 0.753', '0.856', '[BOLD] 0.811'], ['[BOLD] (c) Sports', '[BOLD] (c) Sports', '[BOLD] (c) Sports', '[BOLD] (c) Sports'], ['Baselines', 'Precision', 'Recall', 'F1-score'], ['KM', '0.373', '0.797', '0.509'], ['LDA', '0.525', '0.589', '0.555'], ['LI-NPP', '0.668', '0.778', '0.719'], ['DN-GAA', '0.718', '0.758', '0.738'], ['SBN', '0.782', '0.809', '0.789'], ['SBN-WT', '0.722', '0.749', '0.741'], ['SBN-IVA', '[BOLD] 0.810', '[BOLD] 0.841', '[BOLD] 0.825']] | The experiments of all the models are repeated for 10 times, and we use the average values as the final results. Compared to the-state-of-art DN-GAA method proposed by Chang et al In other baselines, keyword-matching (KM) method achieves high Recall and low Precision, since it treats many non-spoiler TSCs as spoilers. Latent Dirichlet Allocation (LDA) based method does not perform well because the LDA model is not suitable to process short texts like TSC data. LI-NPP is an SVM-based method and has better performance than the unsupervised learning methods. DN-GAA is the state-of-the-art method and has the highest performance among the baselines. The results show that the neural network has powerful feature extraction capabilities when there is sufficient data. |
Relation Discovery with Out-of-Relation Knowledge Base as Supervision | 1905.01959 | Table 4: Comparison of prediction results based on encoder using NYT122, NYT71, and NYT27 datasets with different KB regularization strategies. | ['[BOLD] Model', '[BOLD] NYT122 F1', '[BOLD] NYT122 F1', '[BOLD] NYT122 NMI', '[BOLD] NYT122 NMI', '[BOLD] NYT71 F1', '[BOLD] NYT71 F1', '[BOLD] NYT71 NMI', '[BOLD] NYT71 NMI', '[BOLD] NYT27 F1', '[BOLD] NYT27 F1', '[BOLD] NYT27 NMI', '[BOLD] NYT27 NMI'] | [['[EMPTY]', 'Mean', 'Std', 'Mean', 'Std', 'Mean', 'Std', 'Mean', 'Std', 'Mean', 'Std', 'Mean', 'Std'], ['Majority', '0.355', '-', '0', '-', '0.121', '-', '0', '-', '0.549', '-', '0', '-'], ['DVAE', '0.417', '0.011', '0.339', '0.009', '0.325', '0.011', '0.375', '0.023', '0.433', '0.018', '0.384', '0.021'], ['DVAE+E', '0.385', '0.021', '0.341', '0.043', '0.339', '0.021', '0.418', '0.022', '0.396', '0.034', '0.381', '0.039'], ['DVAE+D', '0.452', '0.033', '0.438', '0.022', '0.352', '0.038', '0.339', '0.009', '0.499', '0.040', '0.469', '0.027'], ['RegDVAE', '0.469', '0.014', '0.430', '0.020', '0.377', '0.020', '0.466', '0.036', '0.587', '0.005', '0.451', '0.005'], ['RegDVAE+D', '[BOLD] 0.499', '0.022', '[BOLD] 0.497', '0.013', '[BOLD] 0.432', '0.028', '[BOLD] 0.589', '0.071', '[BOLD] 0.665', '0.022', '[BOLD] 0.562', '0.038']] | S4SS3SSS0Px2 Comparison on Different Datasets. We also compare our algorithm on the three datasets with different baseline settings. In order to evaluate our model rigorously, besides the original DVAE model, we compare two additional augmented baseline models with the same hyper-parameter setting: DVAE with TransE embeddings appended to encoder input features (DVAE+E) and DVAE with decoder entity vectors replaced by pre-trained KB embeddings (DVAE+D). For our method, we report RegDVAE with the best setting where we use Euclidean distance based constraints to regularize the encoder. Moreover, we report a setting with fixed embeddings in the decoder as the ones obtained from TransE (RegDVAE+D). This also makes sense since even though the TransE embeddings are not trained with the observation of the same relations as the text corpus, the embeddings already contain much semantic information about entities. Then by fixing the embeddings of entities in the decoder, we can significantly reduce the number of parameters that need to be trained. As we can see that, RegDVAE+D can outperform the original DVAE by 8∼23 points on F1. DVAE+D is also good but may fail when there are a lot of out-of-sample entities in the training corpus. |
Relation Discovery with Out-of-Relation Knowledge Base as Supervision | 1905.01959 | Table 3: Comparison results on NYT122 with different prediction and regularization strategies (using encoder or decoder). | ['[BOLD] Model', '[BOLD] Metrics Prediction based on encoder', '[BOLD] Metrics Prediction based on encoder', '[BOLD] Metrics Prediction based on encoder', '[BOLD] Metrics Prediction based on encoder', '[BOLD] Metrics Prediction based on decoder', '[BOLD] Metrics Prediction based on decoder', '[BOLD] Metrics Prediction based on decoder', '[BOLD] Metrics Prediction based on decoder'] | [['[EMPTY]', 'F1', 'F1', 'NMI', 'NMI', 'F1', 'F1', 'NMI', 'NMI'], ['[EMPTY]', 'Mean', 'Std', 'Mean', 'Std', 'Mean', 'Std', 'Mean', 'Std'], ['DVAE', '0.417', '0.011', '0.339', '0.009', '0.419', '0.011', '0.337', '0.014'], ['RegDVAE (Euclidean at encoder)', '[BOLD] 0.469', '0.014', '[BOLD] 0.430', '0.020', '[BOLD] 0.448', '0.020', '[BOLD] 0.384', '0.020'], ['RegDVAE (KL at encoder)', '0.375', '0.009', '0.359', '0.014', '0.380', '0.011', '0.355', '0.014'], ['RegDVAE (JS at encoder)', '0.435', '0.038', '0.370', '0.042', '0.409', '0.012', '0.336', '0.005'], ['RegDVAE (Euclidean at decoder)', '0.416', '0.019', '0.329', '0.017', '0.350', '0.012', '0.201', '0.054']] | From the table, we can see that regularization with Euclidean distance performs the best compared to KL and JS. Moreover, the regularization over encoder is better than the regularization over decoder. This may be because the way that we put constraints only over sampled sentences in a batch may hurt the regularization of decoder, since sampled unique pairs may be less than sample sentences. If we look at results comparing original DVAE prediction based on the encoder and the decoder, both result in similar F1 and NMI numbers. Thus, we can only conclude that currently in the way we do sampling, constraining over encoder is a better choice. |
Hierarchical Contextualized Representation for Named Entity Recognition | 1911.02257 | Table 2: F1 scores on CoNLL-2003. † refers to models trained on both training and development datasets. | ['[BOLD] Models', '[ITALIC] F1'] | [['', '90.94'], ['', '91.21'], ['', '91.62'], ['', '91.24 ± 0.12'], ['', '91.35'], ['', '91.57'], ['', '91.64 ± 0.17'], ['', '91.10'], ['', '91.44 ± 0.10'], ['', '91.74'], ['', '91.54'], ['Ours', '[BOLD] 91.96 ± 0.03'], ['[BOLD] + Language Models / External knowledge', '[BOLD] + Language Models / External knowledge'], ['†', '91.62 ± 0.33'], ['', '91.71 ± 0.10'], [' (ELMo)', '92.20'], ['', '92.61'], [' (BERT)', '92.80'], ['†', '93.09'], ['†', '93.18'], [' (BERT)', '93.23'], ['Ours + BERT', '[BOLD] 93.37 ± 0.04']] | Our model surpasses previous state-of-the-art approaches on all the three datasets. On CoNLL-2002 Spanish dataset, our model achieves 87.08 F1 score without external knowledge, which surpasses previous best score by 0.4. Considering that the above two datasets are relatively small, we further conduct experiment on a much more large OntoNotes 5.0 dataset, which also has more entity types. Overall, the comparisons on these three benchmark datasets well demonstrate that our model truly learns and benefits from useful sentence-level and document-level representation without the support from external knowledge. |
Hierarchical Contextualized Representation for Named Entity Recognition | 1911.02257 | Table 3: F1 scores on CoNLL-2002. | ['[BOLD] Models', '[ITALIC] F1'] | [['', '82.95'], ['', '85.75'], ['', '85.77'], ['', '86.68 ± 0.35'], ['Ours', '[BOLD] 87.08 ± 0.16']] | Our model surpasses previous state-of-the-art approaches on all the three datasets. On CoNLL-2002 Spanish dataset, our model achieves 87.08 F1 score without external knowledge, which surpasses previous best score by 0.4. Considering that the above two datasets are relatively small, we further conduct experiment on a much more large OntoNotes 5.0 dataset, which also has more entity types. Overall, the comparisons on these three benchmark datasets well demonstrate that our model truly learns and benefits from useful sentence-level and document-level representation without the support from external knowledge. |
Hierarchical Contextualized Representation for Named Entity Recognition | 1911.02257 | Table 4: F1 scores on OntoNotes 5.0. | ['[BOLD] Models', '[ITALIC] F1'] | [['', '84.04'], ['', '86.28 ± 0.26'], ['', '86.63 ± 0.49'], ['', '86.84 ± 0.19'], ['', '87.44'], ['', '87.67 ± 0.17'], ['Ours', '[BOLD] 87.98 ± 0.05'], ['[BOLD] + Language Models / External knowledge', '[BOLD] + Language Models / External knowledge'], ['', '87.95'], ['', '88.88'], ['', '89.71'], ['Ours + BERT', '[BOLD] 90.30']] | Our model surpasses previous state-of-the-art approaches on all the three datasets. On CoNLL-2002 Spanish dataset, our model achieves 87.08 F1 score without external knowledge, which surpasses previous best score by 0.4. Considering that the above two datasets are relatively small, we further conduct experiment on a much more large OntoNotes 5.0 dataset, which also has more entity types. Overall, the comparisons on these three benchmark datasets well demonstrate that our model truly learns and benefits from useful sentence-level and document-level representation without the support from external knowledge. |
Hierarchical Contextualized Representation for Named Entity Recognition | 1911.02257 | Table 5: Ablation study on the three benchmark datasets. | ['[EMPTY]', 'CoNLL03', 'CoNLL02', 'OntoNotes'] | [['base model', '91.60', '86.65', '87.58'], ['+ sentence-level', '91.80', '86.95', '87.86'], ['+ document-level', '91.79', '86.76', '87.81'], ['+ ALL', '[BOLD] 91.96', '[BOLD] 87.08', '[BOLD] 87.98']] | In this experiment, we individually adopt two hierarchical contextualized representations to enhance the representation of tokens: sentence-level representation for assigning the sentence state to each token and document-level representation for inference. We discover that both sentence-level and document-level representations enhance the baseline. By combing these two representations together, we get a larger gain of 0.36 / 0.43 / 0.40, respectively. |
Hierarchical Contextualized Representation for Named Entity Recognition | 1911.02257 | Table 6: Comparison of different strategies on CoNLL-2003 dataset. ERR is the relative error rate reduction of our model compared to the baseline. | ['[EMPTY]', 'Strategy', '[ITALIC] F1', 'ERR'] | [['base model', '-', '91.60', '-'], ['sentence-level', 'mean-pooling', '91.65', '0.60'], ['sentence-level', 'label-embedding', '91.80', '2.23'], ['document-level', 'dot-product', '91.63', '1.55'], ['document-level', 'scaled dot-product', '91.75', '1.79'], ['document-level', 'cosine similarity', '91.79', '2.38'], ['ALL', '-', '[BOLD] 91.96', '[BOLD] 3.81']] | We further analyze the two hierarchical representations by adopting different strategies. We further conduct experiments to investigate the three compatibility functions used to employ memorized information. Among the three compatibility functions to compute the weight of query word and memorized slots, cosine similarity performs best, while dot-product performs worst. Cosine similarity calculates the inner product of word vectors with unit length, and can further solve the inconsistency between the embeddings and the similarity measurement. Thus, we eventually adopt cosine similarity as the compatibility function. |
Hierarchical Contextualized Representation for Named Entity Recognition | 1911.02257 | Table 7: Detailed results on the CoNLL-2003 dataset for IV, OOTV, OOEV, OOBV. | ['[EMPTY]', '[BOLD] Baseline [ITALIC] P', '[BOLD] Baseline [ITALIC] R', '[BOLD] Baseline [ITALIC] F1', '[BOLD] Ours [ITALIC] P', '[BOLD] Ours [ITALIC] R', '[BOLD] Ours [ITALIC] F1'] | [['IV', '94.58', '93.16', '93.87', '94.96', '93.58', '[BOLD] 94.26'], ['OOTV', '93.46', '91.57', '92.51', '94.07', '91.85', '[BOLD] 92.95'], ['OOEV', '94.12', '94.12', '94.12', '94.12', '94.12', '94.12'], ['OOBV', '88.42', '84.81', '86.58', '88.51', '85.56', '[BOLD] 87.01']] | out-of-training-vocabulary words (OOTV), out-of-embedding-vocabulary words (OOEV), and out-of-both-vocabulary words (OOBV) on CoNLL-2003 datset. According to our statistic, 63.40% / 52.43% / 84.68% of the NEs in the test set of CoNLL-2003, CoNLL-2002, and OntoNotes datasets are located in the IV part, respectively. Therefore, it is of great importance to focus on this part. We adopt memory network to memorize and retrieve the global representations and use the memorized training instances directly to participate in inference, which greatly improves both the precision and recall of the NEs in IV part, in which our model outperforms baseline by 0.39 in terms of F1 score. For OOV NEs, sentence-level representation can help these concerned tokens aware of the entire sentence, thus enhance the performance. The improvement is 0.44 / 0.43 F1 score for OOTV NEs and OOBV NEs, respectively. |
Graph Pattern Entity Ranking Model for Knowledge Graph Completion | 1904.02856 | Table 2: Mean Reciprocal Rank (MRR) and HITS@n scores obtained for the link prediction tasks on the WN18, FB15k, WN18RR, and FB15k-237 datasets. The highest result for each column is shown in bold. The results of TransE and TorusE were reported by Ebisu and Ichise Ebisu and Ichise (2018), the results of RESCAL were reported by Nickel et al. Nickel et al. (2016), the results of DistMult and ComplEx were reported by Trouillon et al. Trouillon et al. (2016), the results of R-GCN and ConvE were reported by Dettmers et al. Dettmers et al. (2018), the results of PRA were reported by Liu et al. Liu et al. (2016), and the results of Node+LinkFeat were reported by Toutanova and Chen Toutanova and Chen (2015). | ['[EMPTY]', 'WN18 MRR', 'WN18 HITS@', 'WN18 HITS@', 'WN18 HITS@', 'FB15k MRR', 'FB15k HITS@', 'FB15k HITS@', 'FB15k HITS@', 'WN18RR MRR', 'WN18RR HITS@', 'WN18RR HITS@', 'WN18RR HITS@', 'FB15k-237 MRR', 'FB15k-237 HITS@', 'FB15k-237 HITS@', 'FB15k-237 HITS@'] | [['Model', '[EMPTY]', '1', '3', '10', '[EMPTY]', '1', '3', '10', '[EMPTY]', '1', '3', '10', '[EMPTY]', '1', '3', '10'], ['TransE', '0.397', '0.040', '0.745', '0.923', '0.414', '0.247', '0.534', '0.688', '0.182', '0.027', '0.295', '0.444', '0.257', '0.174', '0.284', '0.420'], ['TorusE', '0.947', '0.943', '0.950', '0.954', '0.733', '0.674', '0.771', '0.832', '–', '–', '–', '–', '–', '–', '–', '–'], ['RESCAL', '0.890', '0.842', '0.904', '0.928', '0.354', '0.235', '0.409', '0.587', '–', '–', '–', '–', '–', '–', '–', '–'], ['DistMult', '0.822', '0.728', '0.914', '0.936', '0.654', '0.546', '0.733', '0.824', '0.43', '0.39', '0.44', '0.49', '0.241', '0.155', '0.263', '0.419'], ['ComplEx', '0.941', '0.936', '0.945', '0.947', '0.692', '0.599', '0.759', '0.840', '0.44', '0.41', '0.46', '0.51', '0.240', '0.152', '0.263', '0.419'], ['R-GCN', '0.814', '0.686', '0.928', '0.955', '0.651', '0.541', '0.736', '0.825', '–', '–', '–', '–', '0.248', '0.153', '0.258', '0.417'], ['ConvE', '0.942', '0.935', '0.947', '0.955', '0.745', '0.670', '0.801', '0.873', '0.46', '0.39', '0.43', '0.48', '0.316', '[BOLD] 0.239', '0.350', '[BOLD] 0.491'], ['PRA', '0.458', '0.422', '–', '0.481', '0.336', '0.303', '–', '0.392', '–', '–', '–', '–', '–', '–', '–', '–'], ['Node+LinkFeat', '0.940', '–', '–', '0.943', '0.822', '–', '–', '0.870', '–', '–', '–', '–', '0.272', '–', '–', '0.414'], ['GPro', '[BOLD] 0.950', '[BOLD] 0.946', '[BOLD] 0.954', '[BOLD] 0.959', '0.793', '0.759', '0.810', '0.858', '0.467', '0.430', '[BOLD] 0.485', '[BOLD] 0.543', '0.229', '0.163', '0.250', '0.360'], ['GRank (dMAP)', '[BOLD] 0.950', '[BOLD] 0.946', '0.953', '0.957', '0.841', '0.814', '0.855', '0.890', '0.466', '0.434', '0.480', '0.530', '0.312', '0.233', '0.340', '0.473'], ['GRank(fdMAP)', '[BOLD] 0.950', '[BOLD] 0.946', '[BOLD] 0.954', '0.958', '[BOLD] 0.842', '[BOLD] 0.816', '[BOLD] 0.856', '[BOLD] 0.891', '[BOLD] 0.470', '[BOLD] 0.437', '0.482', '0.539', '[BOLD] 0.322', '[BOLD] 0.239', '[BOLD] 0.352', '0.489']] | The Node+LinkFeat model performed well on WN18 and FB15k because these datasets often contain the reverse relations of other relations. In other words, it shows that knowledge graph embedding models failed to capture this redundancy. On the other hand, our proposed models, GPro and GRank, generally yield better results than the knowledge graph embedding models and results which are better than or comparable to Node+LinkFeat, which means that our models can also handle such redundancy. In particular, GRank with dMAP and fdMAP yielded the best results on FB15k. This indicates that taking the multiplicity of matchings and deeper information into account is important for knowledge graphs such as FreeBase that contain miscellaneous relations and are not well curated like WordNet. As a result, GRank performed well. For FB15k-237, the performance of Node+LinkFeat is comparable with most of the other more sophisticated knowledge graph models and GPro does not yield good results because FB15k-237 has less redundancy. GRank also performs better than most of the other models on the FB15k-237 dataset for the same reason as the FB15k dataset. However, our models do not utilize the information related to the co-occurrence of entities and relations in triples (node features; Toutanova and Chen (2015)). We also limited the size and the shapes of graph patterns because of the calculation time; we will address these and improve our models further in our future work. |
Massively Multilingual Neural Grapheme-to-Phoneme Conversion | 1708.01464 | Table 5: High Resource Results | ['Model', 'WER', 'WER 100', 'PER'] | [['wFST', '[BOLD] 44.17', '21.97', '[BOLD] 14.70'], ['LangID-High', '47.88', '[BOLD] 15.50', '16.89'], ['LangID-All', '48.76', '15.78', '17.35'], ['NoLangID-High', '69.72', '29.24', '35.16'], ['NoLangID-All', '69.82', '29.27', '35.47']] | Having shown that our model exceeds the performance of the wFST-adaptation approach, we next compare it to the baseline models for just high resource languages. The wFST models here are purely monolingual – they do not use data adaptation because there is sufficient training data for each of them. We omit models trained on the Adapted languages because they were not trained on high resource languages with unique writing systems, such as Georgian and Greek, and consequently performed very poorly on them. |
Massively Multilingual Neural Grapheme-to-Phoneme Conversion | 1708.01464 | Table 4: Adapted Results | ['Model', 'WER', 'WER 100', 'PER'] | [['wFST', '88.04', '69.80', '48.01'], ['LangID-High', '74.99', '46.18', '42.64'], ['LangID-Adapted', '75.06', '46.39', '41.77'], ['LangID-All', '[BOLD] 74.10', '[BOLD] 43.23', '[BOLD] 37.85'], ['NoLangID-High', '82.14', '50.17', '54.05'], ['NoLangID-Adapted', '85.11', '48.24', '55.93'], ['NoLangID-All', '83.65', '47.13', '51.87']] | On the 229 languages for which Deri and Knight The best performance came with the version of our model that was trained on data in all available languages, not just the languages it was tested on. Using a language ID token improves results considerably, but even NoLangID beats the baseline in WER and WER 100. |
Massively Multilingual Neural Grapheme-to-Phoneme Conversion | 1708.01464 | Table 6: Results on languages not in the training corpus | ['Model', 'WER', 'WER 100', 'PER'] | [['LangID-High', '[BOLD] 85.94', '58.10', '[BOLD] 53.06'], ['LangID-Adapted', '87.78', '68.40', '65.62'], ['LangID-All', '86.27', '62.31', '54.33'], ['NoLangID-High', '88.52', '58.21', '62.02'], ['NoLangID-Adapted', '91.27', '57.61', '74.07'], ['NoLangID-All', '89.96', '[BOLD] 56.29', '62.79']] | The unseen languages are any that are present in the test corpus but absent from the training data. Deri and Knight did not report results specifically on these languages. Although the NoLangID models sometimes do better on WER 100, even here the LangID models have a slight advantage in WER and PER. This is somewhat surprising because the LangID models have not learned embeddings for the language ID tokens of unseen languages. Perhaps negative associations are also being learned, driving the model towards predicting more common pronunciations for unseen languages. |
A Read-Write Memory Network for Movie Story Understanding | 1709.09345 | Table 2: Performance comparison for the video+subtitle task on MovieQA public validation/test dataset. (–) means that the method does not participate on the task. Baselines include DEMM (Deep embedded memory network), OVQAP (Only video question answer pairs) and VCFSM (Video clip features with simple MLP). | ['Methods', 'Video+Subtitle val', 'Video+Subtitle test'] | [['OVQAP', '–', '23.61'], ['Simple MLP', '–', '24.09'], ['LSTM + CNN', '–', '23.45'], ['LSTM + Discriminative CNN', '–', '24.32'], ['VCFSM', '–', '24.09'], ['DEMN\xa0', '–', '29.97'], ['MEMN2N\xa0', '34.20', '–'], ['RWMN-noRW', '34.20', '–'], ['RWMN-noR', '36.50', '–'], ['RWMN-noQ', '38.17', '–'], ['RWMN-noVid', '37.20', '–'], ['RWMN', '[BOLD] 38.67', '[BOLD] 36.25'], ['RWMN-bag', '38.37', '35.69'], ['RWMN-ensemble', '38.30', '–']] | Results of VQA task. We observe that RWMN achieves the best performance on both validation and test sets. For example, in the test set, RWMN attains 36.25%, which is significantly better than the runner-up DEMN of 29.97%. |
A Read-Write Memory Network for Movie Story Understanding | 1709.09345 | Table 4: Performance of the RWMN on the video+subtitle task, according to the structure parameters of write/read networks. νw/r: the number of layers for write/read networks, (fw/rvi,sw/rvi,fw/rci): the height and the stride of convolution filters, and the number of output channels. | ['# Layers [ITALIC] νw', '# Layers [ITALIC] νr', 'Write network ( [ITALIC] fwvi, [ITALIC] swvi, [ITALIC] fwci)', 'Read network ( [ITALIC] frvi, [ITALIC] srvi, [ITALIC] fwri)', 'Acc.'] | [['0', '0', '–', '–', '34.2'], ['1', '0', '(40,7,1)', '–', '33.9'], ['1', '0', '(40,30,3)', '–', '36.5'], ['1', '1', '(40,30,3)', '(3,1,1)', '[BOLD] 38.6'], ['1', '1', '(40,60,3)', '(3,1,1)', '33.6'], ['2', '1', '(40,10,3), (10,5,3)', '(3,1,1)', '37.2'], ['2', '1', '(5,3,1), (5,3,1)', '(3,1,1)', '37.3'], ['2', '2', '(4,2,1), (4,2,1)', '(3,1,1), (3,1,1)', '36.9'], ['2', '2', '(4,2,1), (4,2,1)', '(4,2,1), (4,2,1)', '37.3'], ['3', '1', '(10,3,3), (40,3,3), (100,3,3)', '(3,1,1)', '35.1'], ['3', '1', '(40,3,3), (10,3,3), (10,3,3)', '(3,1,1)', '37.9'], ['3', '1', '(40,3,3), (40,3,3), (40,3,3)', '(3,1,1)', '35.7'], ['3', '1', '(100,3,3), (40,3,3), (10,3,3)', '(3,1,1)', '35.8']] | We make several observations from the results. First, as the number of CNN layers in read /write network increases, the capacity of memory interaction may increase as well; yet the performance becomes worsen. Presumably, the main reason may be overfitting due to a relative small dataset size of MovieQA as discussed. It is hinted by our results that the two-layer CNN is the best for training performance, while the one-layer CNN is the best for validation. Second, we observe that there is no absolute magic number of how many memory slots should be read/written as a single chunk and how many strides the memory controller moves. If the stride height is too small or too large compared to the height of a convolution filter, the performance decreases. It means that the performance can be degraded when too much information is read/written as a single abstracted slot, when too much information is overlapped in adjacent reads/writes (due to a small stride), or when the information overlap is too coarse (due to a high stride). We present more ablation results to the supplementary file. |
Exploring Gap Filling as a Cheaper Alternative to Reading Comprehension Questionnaires when Evaluating Machine Translation for Gisting | 1809.00315 | Table 1: A comparison of BLEU and NIST scores, RCQ marks in the three possible weightings, and GF success rates at different densities. | ['[EMPTY]', 'BLEU', 'NIST', 'RCQ scores Simple', 'RCQ scores Weighted', 'RCQ scores Literal', 'GF scores Overall', 'GF scores 10%', 'GF scores 20%'] | [['Google', '[BOLD] 0.306', '[BOLD] 4.66', '[BOLD] 0.753', '[BOLD] 0.748', '0.776', '0.592', '0.565', '0.619'], ['Bing', '0.281', '4.40', '0.709', '0.695', '0.734', '[BOLD] 0.618', '[BOLD] 0.595', '[BOLD] 0.640'], ['Homebrew', '0.241', '4.51', '0.594', '0.577', '0.608', '0.550', '0.547', '0.553'], ['Systran', '0.203', '3.05', '0.680', '0.670', '0.701', '0.569', '0.544', '0.595'], ['MT Average', '[EMPTY]', '[EMPTY]', '0.684', '0.673', '0.705', '0.582', '0.563', '0.602'], ['Human', '1.000', '10.0', '0.813', '0.810', '0.872', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['No hint (random)', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '0.258', '0.302', '0.213'], ['No hint (entropy)', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '0.193', '0.191', '0.195'], ['No hint (average)', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '0.225', '0.247', '0.204']] | In view that score distributions are actually very far from normality, the usual significance tests (such as Welch’s t-test) are not applicable; therefore, statistical significances of differences between RCQ and GF scores will be reported throughout using the distribution-agnostic Kolmogorov–Smirnov test. . Jordan-Núñez et al. |
Exploring Gap Filling as a Cheaper Alternative to Reading Comprehension Questionnaires when Evaluating Machine Translation for Gisting | 1809.00315 | Table 2: Effect in success rates of allowing for synonyms in GF | ['System', 'GF scores with synonyms Overall', 'GF scores with synonyms 10%', 'GF scores with synonyms 20%', 'GF scores without synonyms Overall', 'GF scores without synonyms 10%', 'GF scores without synonyms 20%'] | [['Google', '0.757', '0.711', '0.776', '0.592', '0.565', '0.619'], ['Bing', '[BOLD] 0.795', '[BOLD] 0.785', '[BOLD] 0.804', '[BOLD] 0.618', '[BOLD] 0.595', '[BOLD] 0.640'], ['Homebrew', '0.704', '0.711', '0.697', '0.550', '0.547', '0.553'], ['Systran', '0.765', '0.750', '0.781', '0.569', '0.544', '0.595'], ['MT Average', '0.755', '0.746', '0.765', '0.582', '0.563', '0.602'], ['No hint (random)', '0.339', '0.379', '0.299', '0.258', '0.302', '0.213'], ['No hint (entropy)', '0.306', '0.322', '0.290', '0.193', '0.191', '0.195'], ['No hint (average)', '0.322', '0.350', '0.294', '0.225', '0.247', '0.204']] | Allowing for synonyms: The GF success scores reported thus far have been computed by giving credit only to exact matches. We have studied giving credit to synonyms observed in informant work, namely to those appearing at least twice (in the work of all informants) that, according to one of the authors, preserved the meaning of the problem sentence, or were trivial spelling or case variations. A total of 124 frequent valid substitutions were considered. The relative ranking of MT systems is maintained; the statistical significance of the homebrew Moses results versus Bing results is maintained, and two additional statistically significant differences appear: Google vs. homebrew Moses and Systran vs. homebrew Moses. The statistical significance of the effect of gap density disappears when allowing for synonyms. This indicates that it would be beneficial to assign credit to synonyms if the necessary language resources are available or if further analysis of actual GF results is feasible. |
Natural Language Generation enhances human decision-making with uncertain information | 1606.03254 | Table 3: Average Monetary gains and Confidence scores (Females). | ['[EMPTY]', '[BOLD] Monetary gains', '[BOLD] Confidence'] | [['Graphs only', '60.83', '74.6%'], ['Multi-modal', '118.41', '81.3%'], ['NLG only', '113.86', '65.8%']] | We found that females score significantly higher at the decision task when exposed to either of the NLG output presentations, when compared to the graphics-only presentation (p<0.05, effect = +53.03). In addition, the same group of users scores significantly higher when presented with the multi-modal output as compared to graphics only (p=0.05, effect =60.74%). Interestingly, for this group, the multi-modal presentation adds little more in effectiveness of decision-making than the NLG-only condition, but the multi-modal presentations do enhance their confidence (+15%). We furthermore found that educated (i.e. holding a BSc or higher degree) females, who also correctly answered the risk literacy test, feel significantly more confident when presented with the multi-modal representations than with NLG only (p=0.01, effect = 16.7%). |
Natural Language Generation enhances human decision-making with uncertain information | 1606.03254 | Table 2: Average Monetary gains and Confidence scores (All Adults). | ['[EMPTY]', '[BOLD] Monetary gains', '[BOLD] Confidence'] | [['Graphs only', '81.15', '78.5%'], ['Multi-modal', '117.51', '83.7%'], ['NLG only', '101.33', '66%']] | Multi-modal vs. Graphics-only : We found that use of multi-modal representations leads to gaining significantly higher game scores (i.e. better decision-making) than the Graphics-only representation (p=0.03, effect = +36.36). This is a 44% average increase in game score. Multi-modal vs. NLG-only: However, there is no significant difference between the NLG only and the multi-modal representation, for game score. NLG vs. Graphics-only There was no significant difference found between the WMO and NATURAL NLG conditions. Confidence: For confidence, the multi-modal representation is significantly more effective than NLG only (p<0.01, effect = 17.7%). |
Learning Visual Question Answering by Bootstrapping Hard Attention | 1808.00300 | Table 4: Results, in %, on CLEVR. SAN denotes the SAN [9] implementation of [34]. SAN* denotes the SAN implementation of [25]. Object RN** [55] and Stack-NMNs** [52] report the results only on the validation set, whereas others report on the test set. Overall performance of Stack-NMNs** [52] is measured with the “expert layout” (similar to N2NMN) yielding 96.6 and without it (93.0). DDRprog− [71], PG+EE (700k)− [70], TbD−, and TbD+hres− [50] are trained with a privileged state-description, while others are trained directly from images-questions-answers. TbD+hres [50] uses high-resolution (28x28) spatial tensor, while majority uses either 8x8 or 14x14. HAN+Sum/RN+ denotes a larger relational model, or a different hyper-parameters setup, than the model of [25]. HAN+RN++ denotes HAN+RN+ with larger input images with spatial dimensions 224x224 as opposed to 128x128, and larger image tensors with spatial dimension 14x14 as opposed to 8x8. | ['Model', '[BOLD] Overall', 'Count', 'Exist', 'Compare Numbers', 'Query Attribute', 'Compare Attribute'] | [['Human\xa0', '92.6', '86.7', '96.6', '86.5', '95.0', '96.0'], ['Q-type baseline\xa0', '41.8', '34.6', '50.2', '51.0', '36.0', '51.3'], ['LSTM-only\xa0', '46.8', '41.7', '61.1', '69.8', '36.8', '51.8'], ['CNN+LSTM\xa0', '52.3', '43.7', '65.2', '67.1', '49.3', '53.0'], ['SAN\xa0', '68.5', '52.2', '71.1', '73.5', '85.3', '52.3'], ['SAN*\xa0', '76.6', '64.4', '82.7', '77.4', '82.6', '75.4'], ['LBP-SIG\xa0', '78.0', '61.3', '79.6', '80.7', '88.6', '76.3'], ['N2NMN\xa0', '83.7', '68.5', '85.7', '85.0', '90.0', '88.9'], ['PG+EE (700k)−\xa0', '96.9', '92.7', '97.1', '98.7', '98.1', '98.9'], ['RN\xa0', '95.5', '90.1', '97.8', '93.6', '97.9', '97.1'], ['Hyperbolic RN\xa0', '95.7', '-', '-', '-', '-', '-'], ['Object RN**\xa0', '94.5', '93.6', '94.7', '93.3', '95.2', '94.4'], ['Stack-NMNs**\xa0', '96.6 (93.0)', '-', '-', '-', '-', '-'], ['FiLM\xa0', '97.6', '94.5', '99.2', '93.8', '99.2', '99.0'], ['DDRprog−\xa0', '98.3', '96.5', '98.8', '98.4', '99.1', '99.0'], ['MAC\xa0', '98.9', '97.2', '99.5', '99.4', '99.3', '99.5'], ['TbD−\xa0', '98.7', '96.8', '98.9', '99.1', '99.4', '99.2'], ['TbD+hres−\xa0', '99.1', '97.6', '99.2', '99.4', '99.5', '99.6'], ['HAN+Sum+ (Ours)', '94.7', '88.9', '97.3', '88.0', '98.1', '97.0'], ['HAN+RN+ (Ours)', '96.9', '92.8', '98.6', '94.9', '98.9', '98.2'], ['HAN+RN++ (Ours)', '98.8', '97.2', '99.6', '96.9', '99.6', '99.6']] | (TbD+hres) have noted increasing the spatial resolution definitely helps in achieving better performance. Finally, through a visual inspection, we have observed that the fraction of input cells that we have experimented with (k=16 for 8x8 spatial tensor, and k=64 for 14x14 spatial tensor) is sufficient to cover all the important objects in the image, and thus the mechanism resembles more the saliency mechanism. It is worth noting, the hard-attention mechanism often selects a few cells that correspond to the object as this is sufficient to recognize the object’s properties such as size, material, color, type, and spatial location. |
Learning Visual Question Answering by Bootstrapping Hard Attention | 1808.00300 | Table 1: Comparison between different number of attended cells (percentage of the whole input), and aggregation operation. We consider a simple summation, and non-local pairwise computations as the aggregation tool. | ['[EMPTY]', 'Percentage', 'Overall', 'Yes/No', 'Number', 'Other'] | [['[EMPTY]', 'of cells', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['HAN+sum', '16%', '26.99', '40.53', '11.38', '24.15'], ['HAN+sum', '32%', '27.43', '41.05', '11.38', '24.68'], ['HAN+sum', '48%', '27.94', '41.35', '11.93', '25.27'], ['HAN+sum', '64%', '27.80', '40.74', '11.29', '25.52'], ['sum', '100%', '27.96', '43.23', '12.09', '24.29'], ['HAN+pairwise', '16%', '26.81', '41.24', '10.87', '23.61'], ['HAN+pairwise', '32%', '27.45', '40.91', '11.48', '24.75'], ['HAN+pairwise', '48%', '28.23', '41.23', '11.40', '25.98'], ['Pairwise', '100%', '28.06', '44.10', '13.20', '23.71'], ['SAN ', '-', '24.96', '38.35', '11.14', '21.74'], ['SAN (ours)', '-', '26.60', '39.69', '11.25', '23.92'], ['SAN+pos (ours)', '-', '27.77', '40.73', '11.31', '25.47'], ['GVQA ', '-', '31.30', '57.99', '13.68', '22.14']] | We begin with the most basic hard attention architecture, which applies hard attention and then does sum pooling over the attended cells, followed by a small MLP. For each experiment, we take the top k cells, out of 100, according to our L2-norm criterion, where k ranges from 16 to 100 (with 100, there is no attention, and the whole image is summed). Considering that the hard attention selects only a subset of the input cells, we might expect that the algorithm would lose important information and be unable to recover. In fact, however, the performance is almost the same with less than half of the units attended. Even with just 16 units, the performance loss is less than 1%, suggesting that hard attention is quite capable of capturing the important parts of the image. The fact that hard attention can work is interesting itself, but it should be especially useful for models that devote significant processing to each attended cell. actually boosts performance over an analogous model without hard attention. Surprisingly, soft attention does not outperform basic sum pooling, even with careful implementation that outperforms the previously reported results with the same method on this dataset; in fact, it performs slightly worse. The non-local pairwise aggregation performs better than SAN on its own, although the best result includes hard attention. SAN’s Our implementation has 2 attention hops, 1024 dimensional multimodal embedding size, a fixed learning rate 0.0001, and ResNet-101. In these experiments we pool the attended representations by weighted average with the attention weights. We use 2 heads, with embedding size 512. |
Learning Visual Question Answering by Bootstrapping Hard Attention | 1808.00300 | Table 2: Comparison between different adaptive hard-attention techniques with average number of attended parts, and aggregation operation. We consider a simple summation, and the non-local pairwise aggregation. Since AdaHAN adaptively selects relevant features, based on the fixed threshold 1w∗h, we report here the average number of attended parts. | ['[EMPTY]', 'Percentage', 'Overall', 'Yes/No', 'Number', 'Other'] | [['[EMPTY]', 'of cells', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['AdaHAN+sum', '25.66%', '27.40', '40.70', '11.13', '24.86'], ['AdaHAN+pairwise', '32.63%', '28.65', '52.25', '13.79', '20.33'], ['HAN+sum', '32%', '27.43', '41.05', '11.38', '24.68'], ['HAN+sum', '48%', '27.94', '41.35', '11.93', '25.27'], ['HAN+pairwise', '32%', '27.45', '40.91', '11.48', '24.75'], ['HAN+pairwise', '48%', '28.23', '41.23', '11.40', '25.98']] | Thus far, our experiments have dealt with networks that have a fixed threshold for all images. However, some images and questions may require reasoning about more entities than others. Therefore, we explore a simple adaptive method, where the network chooses how many cells to attend to for each image. We can see that on average, the adaptive mechanism uses surprisingly few cells: 25.66 out of 100 when sum pooling is used, and 32.63 whenever the non-local pairwise aggregation mechanism is used. For sum pooling, this is on-par with a non-adaptive network that uses more cells on average (HAN+sum 32); for the non-local pairwise aggregation mechanism, just 32.63 cells are enough to outperform our best non-adaptive model, which uses roughly 50% more cells. This shows that even very simple methods of adapting hard attention to the image and the question can lead to both computation and performance gains, suggesting that more sophisticated methods will be an important direction for future work. |
Learning Visual Question Answering by Bootstrapping Hard Attention | 1808.00300 | Table 3: Comparison between different number of the attended cells as the percentage of the whole input. The results are reported on VQA-CP v2. The second column denotes the percentage of the attended input. The third column denotes number of layers of the MLP (Equations 2 and 3). | ['[EMPTY]', 'Percentage of cells', 'Number of layers', 'Overall', 'Yes/No', 'Number', 'Other'] | [['HAN+sum', '25%', '0', '26.38', '43.21', '13.12', '21.17'], ['HAN+sum', '50%', '0', '26.75', '41.42', '10.94', '23.38'], ['HAN+sum', '75%', '0', '26.82', '41.30', '11.48', '23.42'], ['HAN+sum', '25%', '2', '26.99', '40.53', '11.38', '24.15'], ['HAN+sum', '50%', '2', '27.43', '41.05', '11.38', '24.68'], ['HAN+sum', '75%', '2', '27.94', '41.35', '11.93', '25.27']] | In this section, we briefly analyze an important architectural choice: the number of layers used on top of the pretrained embeddings. That is, before the question and image representations are combined, we perform a small amount of processing to “align” the information, so that the embedding can easily tell the relevance of the visual information to the question. We consistently see a drop of about 1% without the layers, suggesting that deciding which cells to attend to requires different information than the classification-tuned ResNet is designed to provide. |
An Empirical Study of Discriminative Sequence Labeling Models for Vietnamese Text Processing | 1708.09163 | TABLE V: Performance of the CRF model for PoS tagging with two feature sets | ['[BOLD] Feature set', '[BOLD] Test Acc. [ITALIC] n≤20', '[BOLD] Test Acc. [ITALIC] n≤25', '[BOLD] Training Acc. [ITALIC] n≤20', '[BOLD] Training Acc. [ITALIC] n≤25'] | [['{Word identities, word shapes}', '87.62', '88.93', '90.75', '91.56'], ['{Word identities, word shapes, word embeddings}', '[BOLD] 88.97', '[BOLD] 90.26', '91.64', '92.29']] | The table indicates that incorporating word embedding features helps to improves the accuracy of the CRF model 1.35% from 87.62% to 88.97%. The CRF model outperformed LSTMs when we do not use word embedding features. However, its accuracy is lower than that of LSTMs when word embedding features are incorporated. |
An Empirical Study of Discriminative Sequence Labeling Models for Vietnamese Text Processing | 1708.09163 | TABLE III: Performance of LSTMs for PoS tagging using word identities and word shapes | ['[BOLD] Hidden Units', '[BOLD] Test Acc. [ITALIC] n≤20', '[BOLD] Test Acc. [ITALIC] n≤25', '[BOLD] Training Acc. [ITALIC] n≤20', '[BOLD] Training Acc. [ITALIC] n≤25'] | [['16', '84.85', '86.45', '94.91', '96.40'], ['32', '83.41', '83.89', '94.43', '93.18'], ['64', '83.82', '85.93', '96.04', '96.23'], ['100', '83.49', '85.53', '93.93', '95.74'], ['128', '84.67', '86.75', '97.49', '97.52'], ['150', '85.84', '86.33', '97.49', '97.27'], ['200', '[BOLD] 85.98', '[BOLD] 87.46', '97.27', '97.46']] | We train different LSTMs with varying number hidden units in the range from 32 to 200. We see that the larger number of hidden units is, the better result the tagger can achieve on the test set. The LSTMs tagger achieves 85.98% of accuracy on the test set when the network has 200 hidden units. |
An Empirical Study of Discriminative Sequence Labeling Models for Vietnamese Text Processing | 1708.09163 | TABLE X: Performances of NER systems at VLSP 2016 | ['[BOLD] Team', '[BOLD] Model', '[BOLD] Performance'] | [['Le-Hong\xa0', 'ME', '88.78'], ['[Anonymous]', 'CRF', '86.62'], ['Nguyen et al.\xa0', 'ME', '84.08'], ['Nguyen et al.\xa0', 'LSTM', '83.80'], ['Le et al.\xa0', 'CRF', '78.40']] | In VLSP 2016 workshop, several different systems have been proposed for Vietnamese NER. That system used many hand-crafted features to improve the performance of MEMM. Most approaches in VLSP 2016 used the CRF and maximum entropy models, whose performance is heavily dependent on feature engineering. We observe that although the models studied in this work only rely on word features, their performance is very competitive. |
MS-UEdin Submission to the WMT2018 APE Shared Task: Dual-Source Transformer for Automatic Post-Editing | 1809.00188 | Table 1: Experiments with WMT 2017 data, correcting a phrase-base system. | ['Model', 'dev 2016 [BOLD] TER↓', 'dev 2016 BLEU↑', 'test 2016 [BOLD] TER↓', 'test 2016 BLEU↑', 'test 2017 [BOLD] TER↓', 'test 2017 BLEU↑'] | [['Uncorrected', '24.81', '62.92', '24.76', '62.11', '24.48', '62.49'], ['WMT17: FBK Primary', '19.22', '71.89', '19.32', '70.88', '19.60', '70.07'], ['WMT17: AMU Primary', '—', '—', '19.21', '70.51', '19.77', '69.50'], ['Baseline (single model)', '19.77', '70.54', '20.10', '69.25', '20.43', '68.48'], ['+Tied embeddings', '19.39', '70.70', '19.82', '68.87', '20.09', '69.06'], ['+Shared encoder', '19.23', '71.14', '19.44', '70.06', '20.15', '69.04'], ['Transformer-base (Tied+Shared)', '18.73', '71.71', '18.92', '70.86', '19.49', '69.72'], ['Transformer-base x4', '18.22', '72.34', '18.86', '71.04', '19.03', '70.46']] | During the WMT2017 APE shared task we submitted a dual-source model with soft and hard attention which placed second right after a very similar dual-source model by the FBK team. FBK and WMT17:AMU (ours). An ensemble of four identical models trained with different random initializations strongly improves over last year’s best models on all indicators. |
MS-UEdin Submission to the WMT2018 APE Shared Task: Dual-Source Transformer for Automatic Post-Editing | 1809.00188 | Table 2: Experiments with WMT 2017+eSCAPE data for SMT system. | ['Model', 'dev 2016 [BOLD] TER↓', 'dev 2016 BLEU↑', 'test 2016 [BOLD] TER↓', 'test 2016 BLEU↑', 'test 2017 [BOLD] TER↓', 'test 2017 BLEU↑'] | [['Transformer all', '17.84', '73.45', '17.81', '72.79', '18.10', '71.72'], ['Transformer 1M', '17.59', '73.45', '18.29', '72.20', '18.42', '71.50'], ['Transformer 2M', '17.92', '73.37', '18.02', '72.41', '18.35', '71.57'], ['Transformer 4M', '17.75', '73.51', '17.89', '72.70', '18.09', '71.78'], ['[BOLD] Transformer x4 (all above)', '[BOLD] 17.31', '[BOLD] 74.14', '[BOLD] 17.34', '[BOLD] 73.43', '[BOLD] 17.47', '[BOLD] 72.84']] | So far, we only trained on data that was available during WMT2017. This year, the task organizers added a new large corpus created for automatic post-editing across many domains. We experimented with domain selection algorithms for this corpus and tried to find subsets that would be better suited to the given IT domain. We trained an 5-gram language model on a 10M words randomly sampled subset of the German IT training data and a similarly size language model on the eSCAPE data. Next we applied cross-entropy filtering Moore and Lewis We sorted eSCAPE by these scores and selected different sizes of subsets. Smaller subsets should be more in-domain. We experimented with 1M, 2M, 4M and all sentences (nearly 8M). Adding eSCAPE to the training data was generally helpful, but we did not see a clear winner across subsets and test sets. In the end we use all the experimental models as components of a 4x ensemble. The different training sets might as well serve as additional randomization factors potentially beneficial for ensembling. |
MS-UEdin Submission to the WMT2018 APE Shared Task: Dual-Source Transformer for Automatic Post-Editing | 1809.00188 | (a) PBSMT sub-task | ['Systems', '[BOLD] TER↓', 'BLEU↑'] | [['[BOLD] MS-UEdin (Ours)', '[BOLD] 18.00', '[BOLD] 72.52'], ['FBK', '18.62', '71.04'], ['POSTECH', '19.63', '69.87'], ['USAAR DFKI', '22.69', '66.16'], ['DFKI-MLT', '24.19', '63.40'], ['Baseline', '24.24', '62.99']] | For full results with information concerning statistical significance see the full shared task description Chatterjee et al. As expected, improvements are quite significant for the SMT-based system, and much smaller for the NMT-based system. Our submissions to the PBSMT sub-task strongly outperforms all submissions by other teams in terms of TER and BLEU and established the new state-of-the-art for the field. The improvements over the PBSMT baseline approach impressive 10 BLEU points. |
MS-UEdin Submission to the WMT2018 APE Shared Task: Dual-Source Transformer for Automatic Post-Editing | 1809.00188 | (b) NMT sub-task | ['Systems', '[BOLD] TER↓', 'BLEU↑'] | [['FBK', '16.46', '75.53'], ['[BOLD] MS-UEdin (Ours)', '[BOLD] 16.50', '[BOLD] 75.44'], ['POSTECH', '16.70', '75.14'], ['Baseline', '16.84', '74.73'], ['USAAR DFKI', '17.23', '74.22'], ['DFKI-MLT', '18.84', '70.87']] | For full results with information concerning statistical significance see the full shared task description Chatterjee et al. As expected, improvements are quite significant for the SMT-based system, and much smaller for the NMT-based system. Our submissions to the PBSMT sub-task strongly outperforms all submissions by other teams in terms of TER and BLEU and established the new state-of-the-art for the field. The improvements over the PBSMT baseline approach impressive 10 BLEU points. |
Words are not Equal: Graded Weighting Model for building Composite Document Vectors | 1512.03549 | Table 5: Results on IMDB Movie Review Dataset | ['[BOLD] Method', '[BOLD] Accuracy'] | [['Maas et al.(2011)', '88.89'], ['NBSVM-bi (Wang & Manning, 2012)', '91.22'], ['NBSVM-uni (Wang & Manning, 2012)', '88.29'], ['SVM-uni (Wang & Manning, 2012)', '89.16'], ['Paragraph Vector (Le and Mikolov(2014))', '92.58'], ['Weighted WordVector+Wiki(Our Method)', '88.60'], ['Weighted WordVector+TfIdf(Our Method)', '90.67'], ['Composite Document Vector', '[BOLD] 93.91']] | The main contributor for improvement in results is our new document vector which overcomes the weaknesses of BOW and document vectors taken separately. |
Words are not Equal: Graded Weighting Model for building Composite Document Vectors | 1512.03549 | Table 1: Comparison of accuracies on 3 Datasets (IMDB, Amazon Electronics Review and Hindi Movie Reviews (IITB)) for various types of document composition models. The state of the art for these tasks are: IMDB: 92.58% [Le and Mikolov2014]; Amazon:85.90% [Dredze et al.2008], Hindi:79.0% [Bakliwal et al.2012]. | ['[BOLD] Method', '[BOLD] IMDB', '[BOLD] Amazon', '[BOLD] Hindi'] | [['RNNLM (Baseline)', '86.45', '90.03', '78.84'], ['Paragraph Vector ', '92.58', '91.30', '74.57'], ['Averaged Vector', '88.42', '88.52', '79.62'], ['Weighted Average Vector', '89.56', '88.63', '85.90'], ['Composite Document Vector', '93.91', '92.17', '90.30']] | A surprising event in Information Theory has higher information content than an expected event (Shanon, 1948). The same happens when we give weights to word vectors. We give more weight to events which evoke surprise and less weight to events which are expected. In this work we present an early experiment on the possibilities of distributional semantic models (word vectors) for low-resource, highly inflected languages such as Hindi. What is interesting is that our word vector averaging method along with tf-idf results in improvements of accuracy compared to existing state-of-the art methods for sentiment analysis in Hindi (from 80.2% to 90.3% on IITB Movie Review Dataset). The size of the corpus is also small to learn paragraph vectors. Thus, our model overcomes these weaknesses with a better document representation. We observe that pruning high-frequency stop words improves the accuracy by around 0.45%. This is most likely because such words tend to occur in most of the documents and don’t contribute to sentiment. For example, the word \dnEPSm(Film) occurs in 139/252 documents in Movie Reviews(55.16%) and has little effect on sentiment. Similarly words such as \dnEs\388wAT\0(Siddharth) occur in 2/252 documents in Movie Reviews(0.79%). These words don’t provide much information. |
Words are not Equal: Graded Weighting Model for building Composite Document Vectors | 1512.03549 | Table 2: Results of Vector Composition with different Operations | ['Composition', 'Accuracy'] | [['Multiplication', '50.30'], ['Average', '88.42'], ['Idf Graded Weighted Average', '[BOLD] 89.56']] | We, therefore, adopt both simple and idf weighted average methods in our work. The advantage with addition is that, it doesnot increase the dimension of the vector and captures high level semantics with ease. They also use additive composition to reflect semantic dependencies. queen - king ≈ clearly show that vectors of Neural Language Model and Distributed Model when used with additive composition outperform those with multiplicative composition in Paraphrase Classification task. DM vectors outperform by nearly giving accuracy difference of 6%. They also perform very well on Phrase similarity tasks. We, therefore, propose graded weighting schema for better composition of vectors which is described below. |
Words are not Equal: Graded Weighting Model for building Composite Document Vectors | 1512.03549 | Table 3: Accuracies on our newly released 700-Movie Review Dataset | ['[BOLD] Method', '[BOLD] Feature Selection', '[BOLD] Accuracy'] | [['Document Vector + tfidf', 'None', '74.57'], ['Document Vector + tfidf', 'PCA(n=50)', '76.33'], ['Document Vector + tfidf', 'ANOVA-F', '88.07'], ['Weighted Word Vector + tfidf', 'None', '76.43'], ['Weighted Word Vector + tfidf', 'ANOVA-F', '90.37'], ['Weighted Word Vector + tfidf', 'PCA(n=50)', '78.61']] | Dimensionality Reduction is the process of reducing the number of random variables in such a way that the remaining variables effectively reproduce most of the variability of the dataset. The reason for using such techniques is the curse of dimensionality, a phenomenon that occurs in high dimensions but not in low dimensions. With ANOVA-F, we selected around 4k features, but with PCA this number was just 50. So, the low accuracy with PCA can be attributed to the fact that we may have lost some important features in the low-dimensional projection. Also, PCA cannot work when the number of dimensions d exceeds the size of the learning set. This sharp decrease in accuracy in both cases happens because ANOVA-F selects features with larger variance across groups and thus reduces noise to a larger extent, whereas PCA reduces angular variance, which is not effective in this case due to the distribution of data points in high-dimensional space.
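For reference, a hedged scikit-learn sketch of the two reduction routes compared above (ANOVA-F selection of ~4k features vs. PCA with 50 components); the random `X` and `y` are placeholders for the actual document features and labels.

```python
# Sketch of the two dimensionality-reduction routes: ANOVA-F feature selection
# (keep ~4k features) vs. PCA projection to 50 components.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA

X = np.random.rand(252, 5000)                 # toy stand-in for document features
y = np.random.randint(0, 2, size=252)

X_anova = SelectKBest(f_classif, k=4000).fit_transform(X, y)  # ANOVA-F route
X_pca = PCA(n_components=50).fit_transform(X)                 # PCA route
# Note: PCA can keep at most min(n_samples, n_features) components.
```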
Words are not Equal: Graded Weighting Model for building Composite Document Vectors | 1512.03549 | Table 9: Accuracies for Product Review and Movie Review Datasets. | ['[BOLD] Features', '[BOLD] Accuracy(1)', '[BOLD] Accuracy(2)'] | [['WordVector Averaging', '78.0', '79.62'], ['WordVector+tf-idf', '90.73', '89.52'], ['WordVector+tf-idf without stop words', '91.14', '89.97'], ['Weighted WordVector', '89.71', '85.90'], ['Weighted WordVector+tfidf', '[BOLD] 92.89', '[BOLD] 90.30']] | We see that there is a slight improvement in accuracy on both datasets once we remove stop words, but the major breakthrough occurs once we use the weighted averaging technique for constructing document vectors from word vectors.
Sequential Short-Text Classification with Recurrent and Convolutional Neural Networks | 1603.03827 | Table 4: Accuracy (%) of our models and other methods from the literature. The majority class model predicts the most frequent class. SVM: [Dernoncourt et al.2016]. Graphical model: [Ji and Bilmes2006]. Naive Bayes: [Lendvai and Geertzen2007]. HMM: [Stolcke et al.2000]. Memory-based Learning: [Rotaru2002]. All five models use features derived from transcribed words, as well as previous predicted dialog acts except for Naive Bayes. The interlabeler agreement could be obtained only for SwDA. For the CNN and LSTM models, the presented results are the test set accuracy of the run with the highest accuracy on the validation set. | ['[BOLD] Model', 'DSTC\xa04', 'MRDA', 'SwDA'] | [['CNN', '65.5', '[BOLD] 84.6', '[BOLD] 73.1'], ['LSTM', '[BOLD] 66.2', '84.3', '69.6'], ['Majority class', '25.8', '59.1', '33.7'], ['SVM', '57.0', '–', '–'], ['Graphical model', '–', '81.3', '–'], ['Naive Bayes', '–', '82.0', '–'], ['HMM', '–', '–', '71.0'], ['Memory-based Learning', '–', '–', '72.3'], ['Interlabeler agreement', '–', '–', '84.0']] | Overall, our model shows competitive results, while requiring no human-engineered features. Rigorous comparisons are challenging to draw, as many important details such as text preprocessing and train/valid/test split may vary, and many studies fail to perform several runs despite the randomness in some parts of the training process, such as weight initialization. |
Training Augmentation with Adversarial Examples for Robust Speech Recognition | 1806.02782 | Table 2: WER comparison on CHiME-4 single-channel track evaluation sets with adversarial examples (AdvEx) (ϵ=0.1). | ['system', 'et05_simu BUS', 'et05_simu CAF', 'et05_simu PED', 'et05_simu STR', 'et05_simu [BOLD] AVE.', 'et05_real BUS', 'et05_real CAF', 'et05_real PED', 'et05_real STR', 'et05_real [BOLD] AVE.'] | [['Baseline', '20.25', '30.69', '26.62', '28.74', '26.57', '43.95', '33.64', '25.95', '18.68', '30.55'], ['AdvEx', '19.65', '29.29', '24.75', '26.95', '[BOLD] 25.16', '41.00', '31.34', '24.74', '18.23', '[BOLD] 28.82']] | We see that within a reasonable range (ϵ<0.25) the proposed approach brings consistent gains. Relative WER reductions obtained on the et05_real and et05_simu sets were 5.7% and 5.3%, respectively. The proposed approach was able to bring consistent improvements for all types of noise, whether in simulated or real environments.
Training Augmentation with Adversarial Examples for Robust Speech Recognition | 1806.02782 | Table 1: WER comparison on the Aurora-4 evaluation set with adversarial examples (AdvEx) (ϵ=0.3) | ['[EMPTY]', 'A', 'B', 'C', 'D', 'AVG.'] | [['Baseline', '3.21', '6.08', '6.41', '18.11', '11.05'], ['AdvEx', '3.51', '5.84', '5.79', '14.75', '9.49'], ['WER reduction (%)', '-9.4', '3.9', '9.7', '18.6', '14.1']] | Based on the results, ϵ=0.3 is chosen as the best perturbation weight to train the Aurora-4 model. The model trained on WSJ0m serves as the baseline. With ϵ=0.3, the augmented data training achieves 9.49% WER averaged across the four test sets, a 14.1% relative improvement over the baseline. For the test set with the highest WER on the baseline system, D, in which both noise and channel distortion are present, the proposed method reduced the WER by 18.6% relative. We also experimented with dropout training, and it gave only very small gains over the baseline.
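The augmentation above relies on gradient-based adversarial examples with a perturbation weight ϵ; below is a generic FGSM-style PyTorch sketch of that idea (an assumption on my part, not necessarily the authors' exact recipe), with a toy linear `model` standing in for the acoustic model and random tensors standing in for feature frames.

```python
# Hedged sketch of FGSM-style input perturbation with weight eps, one common
# way to generate adversarial training examples for acoustic features.
import torch
import torch.nn as nn

def fgsm_augment(model, features, targets, eps=0.3):
    features = features.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(features), targets)
    loss.backward()
    # move each input in the direction that increases the loss
    return (features + eps * features.grad.sign()).detach()

# toy usage with placeholder model, features, and targets
model = nn.Linear(40, 10)                      # stand-in acoustic model
feats = torch.randn(8, 40)                     # e.g. filterbank frames
targets = torch.randint(0, 10, (8,))
adv_feats = fgsm_augment(model, feats, targets, eps=0.3)
```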
SPEECH-TO-SPEECH TRANSLATION BETWEEN UNTRANSCRIBED UNKNOWN LANGUAGES | 1910.00795 | Table 2: Our experiment results based on BTEC Japanese-English speech-to-speech translation. | ['[BOLD] Model (JA-EN) [BOLD] Baseline Tacotron with MFCC source', '[BOLD] Model (JA-EN) [BOLD] Baseline Tacotron with MFCC source', '[BOLD] BLEU -', '[BOLD] METEOR -'] | [['[BOLD] Proposed Speech2Code', '[BOLD] Proposed Speech2Code', '[BOLD] Proposed Speech2Code', '[BOLD] Proposed Speech2Code'], ['Codebook', 'Time Reduction', '[EMPTY]', '[EMPTY]'], ['32', '4', '14.8', '15'], ['32', '8', '14.2', '15.6'], ['32', '12', '16', '16'], ['64', '4', '10.8', '12.1'], ['64', '8', '14.2', '14.7'], ['64', '12', '14.7', '14.8'], ['128', '4', '11.9', '13.5'], ['128', '8', '15.3', '15.3'], ['128', '12', '14.9', '14.5'], ['[BOLD] Topline (Cascade ASR ->TTS)', '[BOLD] Topline (Cascade ASR ->TTS)', '37.4', '32.8']] | We tried several hyperparameters, including codebook size and time-reduction factor. Our best performance was produced by a codebook of 128 and a time-reduction factor 8 with a score of 15.3 BLEU and 15.3 METEOR. |
SPEECH-TO-SPEECH TRANSLATION BETWEEN UNTRANSCRIBED UNKNOWN LANGUAGES | 1910.00795 | Table 1: Our experiment results based on BTEC French-English speech-to-speech translation: | ['[BOLD] Model (FR-EN) [BOLD] Baseline Tacotron with MFCC input', '[BOLD] Model (FR-EN) [BOLD] Baseline Tacotron with MFCC input', '[BOLD] BLEU -', '[BOLD] METEOR -'] | [['[BOLD] Proposed Speech2Code', '[BOLD] Proposed Speech2Code', '[BOLD] Proposed Speech2Code', '[BOLD] Proposed Speech2Code'], ['Codebook', 'Time Reduction', '[EMPTY]', '[EMPTY]'], ['32', '4', '19.4', '19.1'], ['32', '8', '23.8', '22.2'], ['32', '12', '23.2', '22.1'], ['64', '4', '16.1', '16.9'], ['64', '8', '24.4', '22.9'], ['64', '12', '25.0', '23.2'], ['128', '4', '16.9', '17.4'], ['128', '8', '23.3', '22.1'], ['128', '12', '24.2', '21.9'], ['[BOLD] Topline (Cascade ASR ->TTS)', '[BOLD] Topline (Cascade ASR ->TTS)', '47.4', '41.2']] | We tried several hyperparameters, including codebook size and time-reduction factor. Our best performance was produced by codebook of 64 and a time-reduction factor of 12 with a score of 25.0 BLEU and 23.2 METEOR. |
Many Languages, One Parser | 1602.01595 | Table 5: Effect of automatically predicting language ID and POS tags with MaLOPa on LAS scores. | ['LAS', 'language ID', 'coarse POS', 'target language de', 'target language en', 'target language es', 'target language fr', 'target language it', 'target language pt', 'target language sv', 'average'] | [['[EMPTY]', 'gold', 'gold', '78.6', '84.2', '83.4', '82.4', '89.1', '84.2', '82.6', '83.5'], ['[EMPTY]', 'predicted', 'gold', '78.5', '80.2', '83.4', '82.1', '88.9', '83.9', '82.5', '82.7'], ['[EMPTY]', 'gold', 'predicted', '71.2', '79.9', '80.5', '78.5', '85.0', '78.4', '75.5', '78.4'], ['[EMPTY]', 'predicted', 'predicted', '70.8', '74.1', '80.5', '78.2', '84.7', '77.1', '75.5', '77.2']] | The macro average language ID prediction accuracy on the test set across sentences is 94.7%. The macro average accuracy of the POS tagger is 93.3%. We report results for the four configurations: {gold language ID, predicted language ID} × {gold POS tags, predicted POS tags}. The performance of the parser suffers mildly (–0.8 LAS points) when using predicted language IDs, but more (–5.1 LAS points) when using predicted POS tags. The disparity in parsing results with gold vs. predicted POS tags is an important open problem, and has been previously discussed by Tiedemann (2015). Without using block dropout, we lose an extra 0.2 LAS points in both configurations using predicted POS tags.
Adapting Word Representations Across Corpora | 1906.02688 | Table 8: Micro and Macro accuracies over all classes and Macro accuracy for the 50% rarest classes. | ['Method', 'Micro', 'Macro', 'Rare classes'] | [['Tgt', '73.4', '47.8', '25.7'], ['Yang17C', '74.9', '50.2', '28.0'], ['Reg-Our', '75.0', '51.8', '31.0'], ['Src-Tune', '75.0', '52.6', '32.3'], ['WECT', '[BOLD] 75.4', '[BOLD] 53.4', '[BOLD] 33.8']] | We also show the macro accuracy of the 50% rarest classes. We find that our snippet selection method WECT provides the highest gains over the baseline. Further, regularizing using our word stability measure provides more useful embeddings than Yang17’s stability scores. Also, while Src-Tune performs better than the regularization-based method, it is still inferior to WECT on all three metrics. The X-axis lists the classes in increasing order of frequency. We find that our method achieves impressive gains on the left side, corresponding to low-frequency classes. In the first class, Tgt alone gave 50% accuracy, whereas WECT gave 100%. For the second class, Tgt alone got 0% accuracy whereas WECT got 25%.
Adapting Word Representations Across Corpora | 1906.02688 | Table 3: Perplexity of the trained language model on various target domains (lower is better). | ['[EMPTY]', 'Physics', 'Gaming', 'Android', 'Unix'] | [['Tgt', '113.7', '163.9', '116.5', '123.1'], ['Src-Tune', '114.9', '161.9', '116.6', '122.1'], ['Yang17C', '114.0', '164.9', '116.9', '122.1'], ['Reg-Our', '112.1', '162.2', '118.9', '122.7'], ['WECT:word', '111.2', '162.0', '114.2', '121.9'], ['WECT:ctxt', '113.2', '166.5', '116.5', '123.6'], ['WECT', '[BOLD] 110.6', '[BOLD] 161.6', '[BOLD] 113.4', '[BOLD] 121.7']] | We observe that perplexity is lowest with our method of jointly training with target and selected source snippets. Surprisingly, the Src-Tune method does not perform as well on the LM task. On two of the four domains Src-Tune increases perplexity beyond the Tgt baseline. Likewise, Yang17C while providing gains over Tgt in the question deduplication task, increases perplexity for the LM task. Using our word stability measure instead of Yang17’s helps significantly reduce perplexity. |
Studio Ousia’s Quiz Bowl Question Answering System | 1803.08652 | Table 2: Results for Neural Type Predictor. | ['Model Name', 'Metric', 'Sent 1', 'Sent 1–2', 'Sent 1–3', 'Full'] | [['Coarse-grained CNN', 'Accuracy', '0.95', '0.96', '0.97', '0.98'], ['Fine-grained CNN', 'Precision@1', '0.93', '0.95', '0.96', '0.97'], ['Fine-grained CNN', 'Accuracy', '0.56', '0.64', '0.69', '0.73'], ['Fine-grained CNN', 'F1', '0.83', '0.87', '0.89', '0.91']] | The coarse-grained model performed very accurately; the accuracies exceeded 95% for incomplete questions and 98% for full questions. The fine-grained model also achieved good results; its Precision@1 scores were comparable to the accuracies of the coarse-grained model. However, the model suffered when it came to predicting all the fine-grained entity types, resulting in the relatively degraded performance in its accuracy and its F1 score. |
Studio Ousia’s Quiz Bowl Question Answering System | 1803.08652 | Table 3: Accuracies of our question answering system. NQS and NTP stand for Neural Quiz Solver and Neural Type Predictor, respectively. | ['Name', 'Sent 1', 'Sent 1–2', 'Sent 1–3', 'Full'] | [['Full model (NQS + NTP + IR)', '0.56', '0.78', '0.88', '0.97'], ['NQS', '0.31', '0.54', '0.70', '0.88'], ['NQS + coarse-grained NTP', '0.33', '0.56', '0.72', '0.89'], ['NQS + fine-grained NTP', '0.33', '0.57', '0.73', '0.89'], ['NQS + NTP', '0.34', '0.57', '0.73', '0.89'], ['NQS + NTP + IR-Wikipedia', '0.48', '0.71', '0.84', '0.95'], ['NQS + NTP + IR-Dataset', '0.49', '0.73', '0.86', '0.96']] | Here, we tested the performance using Dataset QA, and used the output of the Answer Scorer to predict the answer. Our system performed very accurately; it achieved 56% accuracy when given only a single sentence and 97% accuracy given the full set of sentences. To further evaluate the effectiveness of each sub-model presented above, we added the sub-models incrementally to the Answer Scorer. Note that the features not based on sub-models (e.g., the number of words in a question) were included in all instances. As a result, all of the sub-models effectively contributed to the performance. We also observed that the neural network models (i.e., Neural Quiz Solver and Neural Type Predictor) achieved good performance only for longer questions. Further, the IR models substantially improved the performance, especially for shorter questions. |
Studio Ousia’s Quiz Bowl Question Answering System | 1803.08652 | Table 4: Accuracies of the top three QA systems submitted in the competition. | ['Name', 'Accuracy'] | [['Our system', '[BOLD] 0.85'], ['Acelove', '0.675'], ['Lunit.io', '0.6'], ['Baseline', '0.55']] | Our system achieved the best performance by a wide margin. To further evaluate the actual performance of the systems in the quiz bowl, the competition organizers performed simulated pairwise matches between the systems following the official quiz bowl rules. Our system outperformed the Acelove system (our system: 1220 points; the Acelove system: 60 points) and the Lunit.io system (our system: 1145 points; the Lunit.io system: 105 points) by considerably wide margins. |
Neural Net Models of Open-domain Discourse Coherence | 1606.01545 | Table 2: Performance on the open-domain binary classification dataset of 984 Wikipedia paragraphs. | ['Model', 'Accuracy'] | [['VLV-GM (MMI)', '[BOLD] 0.873'], ['VLV-GM (bi)', '0.860'], ['VLV-GM (uni)', '0.839'], ['LDA-HMM-GM (MMI)', '0.847'], ['LDA-HMM-GM (bi)', '0.837'], ['LDA-HMM-GM (uni)', '0.814'], ['Seq2Seq (MMI)', '0.840'], ['Seq2Seq (bi)', '0.821'], ['Seq2Seq (uni)', '0.803'], ['Discriminative Model', '0.715'], ['Entity Grid Model', '0.686'], ['foltz1998measurement-Glove', '0.597'], ['foltz1998measurement-LDA', '0.575']] | Contrary to the findings on the domain-specific dataset in the previous subsection, the discriminative model does not yield compelling results, performing only slightly better than the entity grid model. We believe the poor performance is due to the sentence-level negative sampling used by the discriminative model. Due to the huge semantic space in the open-domain setting, the sampled instances can only cover a tiny proportion of the possible negative candidates, and therefore don’t cover the space of possible meanings. By contrast, the dataset of Barzilay and Lapata (2008) is very domain-specific, and the semantic space is thus relatively small. By treating all other sentences in the document as negative, the discriminative strategy’s negative samples form a much larger proportion of the semantic space, leading to good performance.
Neural Net Models of Open-domain Discourse Coherence | 1606.01545 | Table 4: Adversarial Success for different models. | ['Model', 'adver-1', 'adver-2', 'adver-3'] | [['VLV-GM (MMI)', '[BOLD] 0.174', '[BOLD] 0.120', '[BOLD] 0.054'], ['LDA-HMM-GM (MMI)', '0.130', '0.104', '0.043'], ['Seq2Seq (MMI)', '0.120', '0.090', '0.039'], ['Seq2Seq (bi)', '0.108', '0.078', '0.030'], ['Seq2Seq (uni)', '0.101', '0.068', '0.024']] | As can be seen, the latent variable model VLV-GM is able to generate chunks of text that are the most indistinguishable from coherent texts written by humans. This is due to its ability to handle the dependency between neighboring sentences. Performance declines as the number of turns increases due to the accumulation of errors and current models’ inability to model long-term sentence-level dependency. All models perform poorly on the adver-3 evaluation metric, with the best adversarial success value being 0.081 (the trained evaluator is able to distinguish between human-generated and machine-generated dialogues with greater than 90 percent accuracy for all models).
Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN) | 1412.6632 | Table 9: Performance comparison of different versions of m-RNN models on the Flickr30K dataset. All the models adopt VggNet as the image representation. See Figure 5 for details of the models. | ['[EMPTY]', 'B-1', 'B-2', 'B-3', 'B-4'] | [['m-RNN', '0.600', '0.412', '0.278', '0.187'], ['m-RNN-NoEmbInput', '0.592', '0.408', '0.277', '0.188'], ['m-RNN-OneLayerEmb', '0.594', '0.406', '0.274', '0.184'], ['m-RNN-EmbOneInput', '0.590', '0.406', '0.274', '0.185'], ['m-RNN-visInRnn', '0.466', '0.267', '0.157', '0.101'], ['m-RNN-visInRnn-both', '0.546', '0.333', '0.191', '0.120'], ['m-RNN-visInRnn-both-shared', '0.478', '0.279', '0.171', '0.110']] | To validate its efficiency, we train three different m-RNN networks: m-RNN-NoEmbInput, m-RNN-OneLayerEmb, m-RNN-EmbOneInput. “m-RNN-NoEmbInput” denotes the m-RNN model whose connection between the word embedding layer II and the multimodal layer is cut off. Thus the multimodal layer has only two inputs: the recurrent layer and the image representation. “m-RNN-OneLayerEmb” denotes the m-RNN model whose two word embedding layers are replaced by a single 256-dimensional word-embedding layer. There are many more parameters in the word-embedding layers of m-RNN-OneLayerEmb than in the original m-RNN (256⋅M vs. 128⋅M+128⋅256) if the dictionary size M is large. “m-RNN-EmbOneInput” denotes the m-RNN model whose connection between the word embedding layer II and the multimodal layer is replaced by the connection between the word embedding layer I and the multimodal layer. It verifies the effectiveness of the two word embedding layers. How to connect the vision and the language part of the model: we train three variants of m-RNN models where the image representation is inputted into the recurrent layer: m-RNN-VisualInRNN, m-RNN-VisualInRNN-both, and m-RNN-VisualInRNN-Both-Shared. For m-RNN-VisualInRNN, we only input the image representation to the word embedding layer II, while for the latter two models, we input the image representation to both the multimodal layer and word embedding layer II. The weights of the two connections V(1)I, V(2)I are shared for m-RNN-VisualInRNN-Both-Shared.
Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN) | 1412.6632 | Table 3: Results of R@K and median rank (Med r) for Flickr8K dataset. “-AlexNet” denotes the image representation based on AlexNet extracted from the whole image frame. “-RCNN” denotes the image representation extracted from possible objects detected by the RCNN algorithm. | ['[EMPTY]', 'Sentence Retrival (Image to Text) R@1', 'Sentence Retrival (Image to Text) R@5', 'Sentence Retrival (Image to Text) R@10', 'Sentence Retrival (Image to Text) Med r', 'Image Retrival (Text to Image) R@1', 'Image Retrival (Text to Image) R@5', 'Image Retrival (Text to Image) R@10', 'Image Retrival (Text to Image) Med r'] | [['Random', '0.1', '0.5', '1.0', '631', '0.1', '0.5', '1.0', '500'], ['SDT-RNN-AlexNet', '4.5', '18.0', '28.6', '32', '6.1', '18.5', '29.0', '29'], ['Socher-avg-RCNN', '6.0', '22.7', '34.0', '23', '6.6', '21.6', '31.7', '25'], ['DeViSE-avg-RCNN', '4.8', '16.5', '27.3', '28', '5.9', '20.1', '29.6', '29'], ['DeepFE-AlexNet', '5.9', '19.2', '27.3', '34', '5.2', '17.6', '26.5', '32'], ['DeepFE-RCNN', '12.6', '32.9', '44.0', '14', '9.7', '29.6', '[BOLD] 42.5', '[BOLD] 15'], ['Ours-m-RNN-AlexNet', '[BOLD] 14.5', '[BOLD] 37.2', '[BOLD] 48.5', '[BOLD] 11', '[BOLD] 11.5', '[BOLD] 31.0', '42.4', '[BOLD] 15']] | This dataset was widely used as a benchmark dataset for image and sentence retrieval. We compare our model with several state-of-the-art methods, including SDT-RNN (Socher et al.), DeViSE (Frome et al.), and DeepFE (Karpathy et al.), with various image representations. Our model outperforms these methods by a large margin when using the same image representation (e.g. AlexNet). “-avg-RCNN” denotes methods with features of the average CNN activation of all objects above a detection confidence threshold. DeepFE-RCNN (Karpathy et al.) uses image representations extracted from objects detected by the RCNN algorithm. The results show that using these features improves the performance. Even without the help from the object detection methods, however, our method performs better than these methods in almost all the evaluation metrics. We will develop our framework using better image features based on object detection in future work.
Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN) | 1412.6632 | Table 8: Results of m-RNN-shared model after applying consensus reranking using nearest neighbors as references (m-RNN-shared-NNref), compared with those of the original m-RNN model on our validation set and MS COCO test server. | ['MS COCO val for consensus reranking', 'MS COCO val for consensus reranking B1', 'MS COCO val for consensus reranking B2', 'MS COCO val for consensus reranking B3', 'MS COCO val for consensus reranking B4', 'MS COCO val for consensus reranking CIDEr', 'MS COCO val for consensus reranking ROUGE_L', 'MS COCO val for consensus reranking METEOR'] | [['m-RNN-shared', '0.686', '0.511', '0.375', '0.280', '0.842', '0.500', '0.228'], ['m-RNN-shared-NNref-BLEU', '0.718', '0.550', '0.409', '0.305', '0.909', '0.519', '0.235'], ['m-RNN-shared-NNref-CIDEr', '0.714', '0.543', '0.406', '0.304', '0.938', '0.519', '0.239'], ['m-RNN-shared-NNref-BLEU-Orcale', '0.792', '0.663', '0.543', '0.443', '1.235', '0.602', '0.287'], ['m-RNN-shared-NNref-CIDEr-Oracle', '0.784', '0.648', '0.529', '0.430', '1.272', '0.593', '0.287'], ['MS COCO 2014 test server', 'MS COCO 2014 test server', 'MS COCO 2014 test server', 'MS COCO 2014 test server', 'MS COCO 2014 test server', 'MS COCO 2014 test server', 'MS COCO 2014 test server', 'MS COCO 2014 test server'], ['[EMPTY]', 'B1', 'B2', 'B3', 'B4', 'CIDEr', 'ROUGE_L', 'METEOR'], ['m-RNN-shared', '0.685', '0.512', '0.376', '0.279', '0.819', '0.504', '0.229'], ['m-RNN-shared-NNref-BLEU', '0.720', '0.553', '0.410', '0.302', '0.886', '0.524', '0.238'], ['m-RNN-shared-NNref-CIDEr', '0.716', '0.545', '0.404', '0.299', '0.917', '0.521', '0.242']] | For BLEU-based consensus reranking, we get an improvement of 3.5 points on our validation set and 3.3 points on the MS COCO test 2014 set in terms of BLEU4 score. For the CIDEr-based consensus reranking, we get an improvement of 9.4 points on our validation set and 9.8 points on the MS COCO test 2014 set in terms of CIDEr. We also show the oracle performance of the ten hypotheses, which is the upper bound of the consensus reranking. More specifically, for each image in our validation set, we rerank the hypotheses according to the scores (BLEU or CIDEr) w.r.t to the groundtruth captions. (see rows with “-oracle”). The oracle performance is surprisingly high, indicating that there is still room for improvement, both for the m-RNN model itself and the reranking strategy. |
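A hedged sketch of the BLEU-based consensus reranking described above: each generated hypothesis is scored by its average sentence-level BLEU against reference captions drawn from nearest-neighbour images, and the top-scoring hypothesis is kept. The captions below are toy examples, and NLTK's `sentence_bleu` stands in for whichever BLEU implementation the authors used.

```python
# Sketch of consensus reranking: pick the hypothesis whose average BLEU
# against nearest-neighbour captions is highest.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def consensus_rerank(hypotheses, neighbor_captions):
    smooth = SmoothingFunction().method1
    refs = [c.split() for c in neighbor_captions]
    def consensus_score(hyp):
        hyp_tokens = hyp.split()
        return sum(sentence_bleu([r], hyp_tokens, smoothing_function=smooth)
                   for r in refs) / len(refs)
    return max(hypotheses, key=consensus_score)

best = consensus_rerank(
    ["a man riding a horse", "a person on an animal"],
    ["a man rides a brown horse", "a man on a horse in a field"],
)
```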
Order-free Learning Alleviating Exposure Bias in Multi-label Classification | 1909.03434 | Table 2: Performance on AAPD | ['Models (a) Seq2set (simp.)', 'Models (a) Seq2set (simp.)', 'maF1', 'miF1', 'ebF1', 'ACC', 'HA', 'Average'] | [['', '', '-', '0.705', '-', '-', '0.9753', '-'], ['(b) Seq2set', '(b) Seq2set', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['', '', '-', '0.698', '-', '-', '0.9751', '-'], ['(c) SGM+GE', '(c) SGM+GE', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['', '', '-', '0.710', '-', '-', '0.9755', '-'], ['Baselines', 'Baselines', 'Baselines', 'Baselines', 'Baselines', 'Baselines', 'Baselines', 'Baselines'], ['(d) BR', '(d) BR', '0.523', '0.694', '0.695', '0.368', '0.9741', '0.651'], ['(e) BR++', '(e) BR++', '0.521', '0.700', '0.703', '0.390', '0.9750', '0.658'], ['(f) Seq2seq', '(f) Seq2seq', '0.511', '0.695', '0.707', '[BOLD] 0.421', '0.9743', '0.662'], ['(g) Seq2seq + SS', '(g) Seq2seq + SS', '0.541', '0.703', '0.713', '0.406', '0.9742', '0.667'], ['(h) Order-free RNN', '(h) Order-free RNN', '0.539', '0.696', '0.708', '0.413', '0.9742', '0.666'], ['(i) Order-free RNN + SS', '(i) Order-free RNN + SS', '0.548', '0.699', '0.709', '0.416', '0.9743', '0.669'], ['Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods'], ['(j) OCD', '(j) OCD', '0.541', '0.707', '0.723', '0.403', '0.9740', '0.670'], ['OCD + MTL', '(k) RNN dec.', '0.578', '0.711', '0.727', '0.391', '0.9742', '0.676'], ['OCD + MTL', '(l) BR dec.', '0.562', '0.711', '0.718', '0.382', ' [BOLD] 0.9760', '0.670'], ['OCD + MTL', '(m) Logistic rescore', ' [BOLD] 0.585', ' [BOLD] 0.720', ' [BOLD] 0.736', '0.395', '0.9749', ' [BOLD] 0.682'], ['OCD + MTL', '(n) Logistic joint dec.', '0.580', '0.719', '0.731', '0.399', '0.9753', '0.681']] | In the following, we show results of the baseline models and the proposed method on three text datasets. For MTL models, we show the results of the four different decoding strategies described in section Decoder Integration. For a simple comparison, we also compute averages of the five metrics as a reference. Note the blue and bold texts in the table. We see that different models are skilled at different metrics. For example, RNN-decoder-based models, i.e. Seq2seq in row (f) and Order-free RNN in row (h), perform well on ACC, whereas BR and BR++ have better results in terms of HA but show clear weaknesses in predicting rare labels (cf. especially maF1). However, OCD in row (j) performs better than all the baselines (rows (d)–(i)).
Order-free Learning Alleviating Exposure Bias in Multi-label Classification | 1909.03434 | Table 3: Performance comparisons on Reuters-21578 | ['Models SVM', 'Models SVM', 'maF1', 'miF1', 'ebF1', 'ACC', 'HA', 'Average'] | [['', '', '0.468', '0.787', '-', '-', '-', '-'], ['EncDec', 'EncDec', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['', '', '0.457', '0.855', '0.891', '0.828', '0.996', '0.805'], ['Baselines', 'Baselines', 'Baselines', 'Baselines', 'Baselines', 'Baselines', 'Baselines', 'Baselines'], ['BR', 'BR', '0.442', '0.861', '0.878', '0.817', '0.9964', '0.799'], ['BR++', 'BR++', '0.407', '0.852', '0.861', '0.812', '0.9962', '0.786'], ['Seq2seq', 'Seq2seq', '0.465', '0.862', '0.895', '0.834', '0.9965', '0.811'], ['Seq2seq+SS', 'Seq2seq+SS', '0.464', '0.856', '0.895', '0.834', '0.9965', '0.809'], ['Order-free RNN', 'Order-free RNN', '0.445', '0.862', '0.901', '0.835', '0.9963', '0.806'], ['Order-free RNN + SS', 'Order-free RNN + SS', '0.452', '0.859', '0.896', '0.836', '0.9962', '0.808'], ['Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods'], ['OCD', 'OCD', '0.458', '0.872', '0.903', '0.839', '0.9966', '0.814'], ['OCD + MTL', 'RNN dec.', '0.475', '0.874', ' [BOLD] 0.905', ' [BOLD] 0.844', '0.9966', '0.819'], ['OCD + MTL', 'BR dec.', '0.459', ' [BOLD] 0.877', '0.898', '0.835', '0.9966', '0.813'], ['OCD + MTL', 'Logistic rescore', '0.477', '0.875', '0.903', '0.842', ' [BOLD] 0.9967', '0.819'], ['OCD + MTL', 'Logistic joint dec.', ' [BOLD] 0.490', '0.874', '0.904', '0.843', ' [BOLD] 0.9967', ' [BOLD] 0.822']] | These results demonstrate again the superiority of OCD and the performance gains afforded by the MTL framework. Since over 80% of the test samples in this corpus have only one label, to truly assess the effect of the proposed approaches on multi-label classification, we also provide results only on test samples with more than one label in the section Analysis of Reuters-21578 in the Appendix.
Order-free Learning Alleviating Exposure Bias in Multi-label Classification | 1909.03434 | Table 4: Performance comparisons on Audio set. | ['Models Baselines', 'Models Baselines', 'maF1 Baselines', 'miF1 Baselines', 'ebF1 Baselines', 'ACC Baselines', 'HA Baselines', 'Average Baselines'] | [['BR', 'BR', '0.349', '0.480', '0.416', '0.086', '[BOLD] 0.9957', '0.465'], ['Seq2seq', 'Seq2seq', '0.345', '0.448', '0.421', '[BOLD] 0.140', '0.9942', '0.470'], ['Seq2seq + SS', 'Seq2seq + SS', '0.340', '0.448', '0.419', '0.137', '0.9943', '0.468'], ['Order-free RNN', 'Order-free RNN', '0.310', '0.438', '0.410', '0.096', '0.9940', '0.450'], ['Order-free RNN + SS', 'Order-free RNN + SS', '0.310', '0.437', '0.408', '0.095', '0.9947', '0.449'], ['Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods'], ['OCD', 'OCD', '0.353', '0.465', '0.435', '0.117', '0.9941', '0.473'], ['OCD + MTL', 'RNN dec.', '0.359', '0.466', '0.438', '0.115', '0.9940', '0.474'], ['OCD + MTL', 'BR dec.', '0.353', '0.485', '0.420', '0.075', '0.9950', '0.466'], ['OCD + MTL', 'Logistic rescore', ' [BOLD] 0.378', '0.487', ' [BOLD] 0.456', '0.096', '0.9940', '0.482'], ['OCD + MTL', 'Logistic joint dec.', '0.377', ' [BOLD] 0.488', '0.454', '0.119', '0.9945', ' [BOLD] 0.487']] | In the following, we show results of the baseline models and the proposed method on three text datasets. For MTL models, we show the results of the four different decoding strategies described in section Decoder Integration. For a simple comparison, we also compute averages of the five metrics as a reference. Note the blue and bold texts in the table. In this experiment, all models have similar performance in HA. Surprisingly, BR is a competitive baseline model and performs especially well in miF1. Seq2seq achieves the best performance in terms of ACC, which is the same as the observation on AAPD. Overall, OCD performs better than all the baselines and MTL indeed improves the performance. OCD outperforms other RNN decoder-based models in maF1, miF1 and ebF1 and performs worse than BR only in terms of miF1.
Order-free Learning Alleviating Exposure Bias in Multi-label Classification | 1909.03434 | Table 5: Performance comparison on resplited AAPD, whose test set contains 2000 samples whose label sets occur in the training set (Seen test set) and 2000 samples are not (Unseen test set). OCD (correct prefix) means we only sample correct labels in the training phase. | ['Models', 'Seen test set miF1', 'Seen test set ebF1', 'Unseen test set miF1', 'Unseen test set ebF1'] | [['Seq2seq', '0.730', '0.749', '0.508', '0.503'], ['Seq2seq + SS', '0.736', '0.754', '0.517', '0.515'], ['Order-free RNN', '0.732', '0.746', '0.496', '0.494'], ['Order-free RNN + SS', '0.724', '0.740', '0.520', '0.517'], ['OCD (correct prefix)', '0.726', '0.741', '0.513', '0.515'], ['OCD', '[BOLD] 0.746', '[BOLD] 0.771', '[BOLD] 0.521', '[BOLD] 0.530']] | OCD (correct prefix) means we only sample correct labels in the training phase, so this model has not encountered wrong prefixes during training. Clearly, all models perform worse on the unseen test set. We can see that SS improves the performance significantly on the unseen test set for both Seq2seq and Order-free RNN. Additionally, OCD with correct prefix, which suffers from the exposure bias, performs worse in both cases than OCD. These results all demonstrate that sampling wrong labels from the predicted distribution helps models become more robust when encountering rare situations.
Order-free Learning Alleviating Exposure Bias in Multi-label Classification | 1909.03434 | Table 12: Performance comparisons on Reuters-21578 with more than one label. | ['Models Baselines', 'Models Baselines', 'maF1 Baselines', 'miF1 Baselines', 'ebF1 Baselines', 'ACC Baselines', 'HA Baselines', 'Average Baselines'] | [['BR', 'BR', '0.315', '0.706', '0.712', '0.365', '0.9850', '0.617'], ['Seq2seq', 'Seq2seq', '0.316', '0.712', '0.718', '0.405', '0.9855', '0.627'], ['Seq2seq+SS', 'Seq2seq+SS', '0.325', '0.718', '0.722', '0.380', '0.9859', '0.626'], ['Order-free RNN', 'Order-free RNN', '0.331', '0.730', '0.735', '0.425', '0.9862', '0.641'], ['Order-free RNN + SS', 'Order-free RNN + SS', '0.324', '0.699', '0.711', '0.400', '0.9849', '0.624'], ['Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods', 'Proposed methods'], ['OCD', 'OCD', '0.319', '0.734', '0.741', '0.415', '0.9864', '0.639'], ['OCD + MTL', 'RNN dec.', '0.335', '0.745', '0.749', ' [BOLD] 0.440', ' [BOLD] 0.9870', '0.651'], ['OCD + MTL', 'BR dec.', '0.322', '0.739', '0.737', '0.430', '0.9869', '0.643'], ['OCD + MTL', 'Logistic rescore', '0.337', ' [BOLD] 0.750', ' [BOLD] 0.752', '0.435', '0.9869', ' [BOLD] 0.652'], ['OCD + MTL', 'Logistic joint dec.', ' [BOLD] 0.342', '0.743', '0.746', '0.435', ' [BOLD] 0.9870', '0.651']] | The smaller test set has 405 samples. However, the performance gap between the baseline models and the proposed methods is larger, which further demonstrates the superiority of OCD and MTL.
Few-Shot Representation Learning for Out-Of-Vocabulary Words | 1907.00505 | Table 2: Performance on Named Entity Recognition and Part-of-Speech Tagging tasks. All methods are evaluated on test data containing OOV words. Results demonstrate that the proposed approach, HiCE + Morph + MAML, improves the downstream model by learning better representations for OOV words. | ['Methods', 'Named Entity Recognition (F1-score) Rare-NER', 'Named Entity Recognition (F1-score) Bio-NER', 'POS Tagging (Acc) Twitter POS'] | [['Word2vec', '0.1862', '0.7205', '0.7649'], ['FastText', '0.1981', '0.7241', '0.8116'], ['Additive', '0.2021', '0.7034', '0.7576'], ['nonce2vec', '0.2096', '0.7289', '0.7734'], ['` [ITALIC] a\xa0la\xa0carte', '0.2153', '0.7423', '0.7883'], ['HiCE w/o Morph', '0.2394', '0.7486', '0.8194'], ['HiCE + Morph', '0.2375', '0.7522', '0.8227'], ['HiCE + Morph + MAML', '[BOLD] 0.2419', '[BOLD] 0.7636', '[BOLD] 0.8286']] | HiCE outperforms the baselines in all the settings. Compared to the best baseline, à la carte, the relative improvements are 12.4%, 2.9% and 5.1% for Rare-NER, Bio-NER, and Twitter POS, respectively. As aforementioned, the ratio of OOV words in Rare-NER is high. As a result, all the systems perform worse on Rare-NER than Bio-NER, while HiCE achieves the largest improvement among all the methods. Besides, our baseline embedding is trained on a Wikipedia corpus (WikiText-103), which is quite different from the bio-medical texts and the social media domain. The experiment demonstrates that HiCE trained on DT is already able to leverage general language knowledge which can be transferred across different domains, and adaptation with MAML can further reduce the domain gap and enhance the performance.
Few-Shot Representation Learning for Out-Of-Vocabulary Words | 1907.00505 | Table 1: Performance on the Chimera benchmark dataset with different numbers of context sentences, which is measured by Spearman correlation. Baseline results are from the corresponding papers. | ['Methods', '2-shot', '4-shot', '6-shot'] | [['Word2vec', '0.1459', '0.2457', '0.2498'], ['FastText', '0.1775', '0.1738', '0.1294'], ['Additive', '0.3627', '0.3701', '0.3595'], ['Additive, no stop words', '0.3376', '0.3624', '0.4080'], ['nonce2vec', '0.3320', '0.3668', '0.3890'], ['` [ITALIC] a\xa0la\xa0carte', '0.3634', '0.3844', '0.3941'], ['HiCE w/o Morph', '0.3710', '0.3872', '0.4277'], ['HiCE + Morph', '[BOLD] 0.3796', '0.3916', '0.4253'], ['HiCE + Morph + Fine-tune', '0.1403', '0.1837', '0.3145'], ['HiCE + Morph + MAML', '0.3781', '[BOLD] 0.4053', '[BOLD] 0.4307'], ['Oracle Embedding', '0.4160', '0.4381', '0.4427']] | In particular, for our method (HiCE+Morph+MAML), compared with the current state-of-the-art method, à la carte, the relative improvements (i.e., the performance difference divided by the baseline performance) are 4.0%, 5.4% and 9.3% for 2-, 4- and 6-shot learning, respectively. We also compare our results with those of the oracle embedding, i.e., the embeddings trained from DT and used as ground truth to train HiCE. These results can be regarded as an upper bound. As is shown, when the number of context sentences (K) is relatively large (i.e., K=6), the performance of HiCE is on a par with the upper bound (Oracle Embedding) and the relative performance difference is merely 2.7%. This indicates the significance of using an advanced aggregation model.
To Normalize, or Not to Normalize: The Impact of Normalization on Part-of-Speech Tagging | 1707.05116 | Table 3: Effect of different models on canonical/non-canonical words. | ['[EMPTY]', 'Bilty', '+Norm', '+Vecs', '+Comb'] | [['canonical', '86.1', '85.6', '91.2', '90.1'], ['non-canon.', '50.8', '70.3', '71.1', '78.5']] | Word embeddings have a higher impact on standard, canonical tokens. It is interesting to note that word embeddings and normalization both have a similar yet complementary effect on the words to be normalized (non-canonical). The improvements on non-canonical words seem to be complementary. The combined model additionally improves on words which need normalization, whereas it scores almost 1% lower on canonical words. This suggests that both strategies have the potential to complement each other. Embeddings work considerably better than normalization, which confirms what we found on the Dev data. The combined approach yields the highest accuracy over all evaluation sets; however, it significantly differs from embeddings only on Test_L. This can be explained by our earlier observation: Test_L does indeed contain the highest proportion of non-canonical tokens.
The Role of Pragmatic and Discourse Context in Determining Argument Impact | 2004.03034 | Table 6: F1 scores of each model for the claims with various context length values. | ['[EMPTY]', 'C [ITALIC] l=1', 'C [ITALIC] l=2', 'C [ITALIC] l=3', 'C [ITALIC] l=4'] | [['BERT models', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Claim only', '48.61±3.16', '53.15±1.95', '54.51±1.91', '50.89±2.95'], ['Claim + Parent', '51.49±2.63', '54.78±2.95', '54.94±2.72', '51.94±2.59'], ['Claim + Context [ITALIC] f(2)', '52.84±2.55', '53.77±1.00', '55.24±2.52', '57.04±1.19'], ['Claim + Context [ITALIC] f(3)', '[BOLD] 54.88± [BOLD] 2.49', '54.71±1.74', '52.93±2.07', '[BOLD] 58.17± [BOLD] 1.89'], ['Claim + Context [ITALIC] f(4)', '54.47±2.95', '[BOLD] 54.88± [BOLD] 1.53', '[BOLD] 57.11± [BOLD] 3.38', '57.02±2.22']] | To understand for what kinds of claims the best-performing contextual model is more effective, we evaluate the BERT model with flat context representation for claims with context length values 1, 2, 3 and 4 separately. For the claims with context length 1, adding the Contextf(3) and Contextf(4) representations along with the claim achieves significantly better (p<0.05) F1 scores than modeling the claim only. Similarly, for the claims with context length 3 and 4, Contextf(4) and Contextf(3) perform significantly better than BERT with the claim only (p<0.05 and p<0.01, respectively). We see that models with larger context are helpful even for claims which have limited context (e.g. Cl=1). This may suggest that when we train the models with larger context, they learn how to represent the claims and their context better.
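A hedged sketch (an assumption, not the authors' released code) of the flat context representation mentioned above: the ancestor claims on the argument path are joined into one string and encoded together with the claim as a BERT sentence pair for 3-way impact classification. The model name, toy claims, and label count are placeholders.

```python
# Sketch: encode (flattened context, claim) as a BERT text pair and score
# the three impact classes with a sequence-classification head.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

context_path = ["Thesis: social media does more harm than good.",
                "Parent: it spreads misinformation quickly."]
claim = "Fact-checking cannot keep up with the volume of posts."

inputs = tokenizer(" ".join(context_path), claim,
                   truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits      # scores for the 3 impact classes
```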
The Role of Pragmatic and Discourse Context in Determining Argument Impact | 2004.03034 | Table 1: Number of claims for the given range of number of votes. There are 19,512 claims in the dataset with 3 or more votes. Out of the claims with 3 or more votes, majority of them have 5 or more votes. | ['# impact votes', '# claims'] | [['[3,5)', '4,495'], ['[5,10)', '5,405'], ['[10,15)', '5,338'], ['[15,20)', '2,093'], ['[20,25)', '934'], ['[25,50)', '992'], ['[50,333)', '255']] | Distribution of impact votes. There are 19,512 claims in total with 3 or more votes. Out of the claims with 3 or more votes, the majority have 5 or more votes. We limit our study to the claims with at least 5 votes to have a more reliable assignment for the accumulated impact label for each claim.
The Role of Pragmatic and Discourse Context in Determining Argument Impact | 2004.03034 | Table 2: Number of claims, with at least 5 votes, above the given threshold of agreement percentage for 3-class and 5-class cases. When we combine the low impact and high impact classes, there are more claims with high agreement score. | ['[EMPTY]', '3-class case', '5-class case'] | [['Agreement score', 'Number of claims', 'Number of claims'], ['>50%', '10,848', '7,304'], ['>60%', '7,386', '4,329'], ['>70%', '4,412', '2,195'], ['>80%', '2,068', '840']] | We see that when we combine the low impact and high impact classes, there are more claims with a high agreement score. This may imply that distinguishing between no impact-low impact and high impact-very high impact classes is difficult. To decrease the sparsity issue, in our experiments, we use the 3-class representation for the impact labels. Moreover, to have a more reliable assignment of impact labels, we consider only the claims that have more than 60% agreement.
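A small hedged sketch of the filtering described above: keep only claims with at least 5 impact votes whose majority label reaches more than 60% agreement. The vote labels below are toy placeholders.

```python
# Sketch: filter claims by vote count and majority-label agreement.
from collections import Counter

def keep_claim(votes, min_votes=5, min_agreement=0.60):
    if len(votes) < min_votes:
        return False
    _, top_count = Counter(votes).most_common(1)[0]
    return top_count / len(votes) > min_agreement

print(keep_claim(["high", "high", "high", "medium", "high"]))  # True (80% agreement)
print(keep_claim(["high", "low", "medium", "high", "low"]))    # False (40% agreement)
```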
The Role of Pragmatic and Discourse Context in Determining Argument Impact | 2004.03034 | Table 3: Number of votes for the given impact label. There are 241,884 total votes and majority of them belongs to the category medium impact. | ['Impact label', '# votes- all claims'] | [['No impact', '32,681'], ['Low impact', '37,457'], ['Medium impact', '60,136'], ['High impact', '52,764'], ['Very high impact', '58,846'], ['Total # votes', '241,884']] | Impact label statistics. The claims have 241,884 total votes. The majority of the impact votes belong to the medium impact category. We observe that users assign more high impact and very high impact votes than low impact and no impact votes, respectively.
The Role of Pragmatic and Discourse Context in Determining Argument Impact | 2004.03034 | Table 4: Number of claims for the given range of context length, for claims with more than 5 votes and an agreement score greater than 60%. | ['Context length', '# claims'] | [['1', '1,524'], ['2', '1,977'], ['3', '1,181'], ['[4,5]', '1,436'], ['(5,10]', '1,115'], ['>10', '153']] | Context length (Cl) for a particular claim C is defined by the number of claims included in the argument path starting from the thesis up to the claim C. We observe that more than half of these claims have a context length of 3 or higher.
GECToR – Grammatical Error Correction: Tag, Not Rewrite | 2005.12592 | Table 6: Varying encoders from pretrained Transformers in our sequence labeling system. Training was done on data from training stage II only. | ['[BOLD] Encoder', '[BOLD] CoNLL-2014 (test) [BOLD] P', '[BOLD] CoNLL-2014 (test) [BOLD] R', '[BOLD] CoNLL-2014 (test) [BOLD] F0.5', '[BOLD] BEA-2019 (dev) [BOLD] P', '[BOLD] BEA-2019 (dev) [BOLD] R', '[BOLD] BEA-2019 (dev) [BOLD] F0.5'] | [['LSTM', '51.6', '15.3', '35.0', '-', '-', '-'], ['ALBERT', '59.5', '31.0', '50.3', '43.8', '22.3', '36.7'], ['BERT', '65.6', '36.9', '56.8', '48.3', '29.0', '42.6'], ['GPT-2', '61.0', '6.3', '22.2', '44.5', '5.0', '17.2'], ['RoBERTa', '[BOLD] 67.5', '38.3', '[BOLD] 58.6', '[BOLD] 50.3', '30.5', '[BOLD] 44.5'], ['XLNet', '64.6', '[BOLD] 42.6', '58.5', '47.1', '[BOLD] 34.2', '43.8']] | Encoders from pretrained transformers. We fine-tuned encoders from BERT (Devlin et al.), GPT-2 (Radford et al.), XLNet (Yang et al.), RoBERTa, and ALBERT. We also added an LSTM with randomly initialized embeddings (dim=300) as a baseline. BERT, RoBERTa and XLNet encoders perform better than GPT-2 and ALBERT, so we used only them in our next experiments. We hypothesize that encoders from Transformers which were pretrained as a part of the entire encoder-decoder pipeline are less useful for GECToR.
GECToR – Grammatical Error Correction: Tag, Not Rewrite | 2005.12592 | Table 7: Comparison of single models and ensembles. The M2 score for CoNLL-2014 (test) and ERRANT for the BEA-2019 (test) are reported. In ensembles we simply average output probabilities from single models. | ['[BOLD] GEC system', '[BOLD] Ens.', '[BOLD] CoNLL-2014 (test) [BOLD] P', '[BOLD] CoNLL-2014 (test) [BOLD] R', '[BOLD] CoNLL-2014 (test) [BOLD] F0.5', '[BOLD] BEA-2019 (test) [BOLD] P', '[BOLD] BEA-2019 (test) [BOLD] R', '[BOLD] BEA-2019 (test) [BOLD] F0.5'] | [['Zhao et al. ( 2019 )', '[EMPTY]', '67.7', '40.6', '59.8', '-', '-', '-'], ['Awasthi et al. ( 2019 )', '[EMPTY]', '66.1', '43.0', '59.7', '-', '-', '-'], ['Kiyono et al. ( 2019 )', '[EMPTY]', '67.9', '[BOLD] 44.1', '61.3', '65.5', '[BOLD] 59.4', '64.2'], ['Zhao et al. ( 2019 )', '✓', '74.1', '36.3', '61.3', '-', '-', '-'], ['Awasthi et al. ( 2019 )', '✓', '68.3', '43.2', '61.2', '-', '-', '-'], ['Kiyono et al. ( 2019 )', '✓', '72.4', '[BOLD] 46.1', '65.0', '74.7', '56.7', '70.2'], ['Kantor et al. ( 2019 )', '✓', '-', '-', '-', '78.3', '58.0', '73.2'], ['GECToR (BERT)', '[EMPTY]', '72.1', '42.0', '63.0', '71.5', '55.7', '67.6'], ['GECToR (RoBERTa)', '[EMPTY]', '73.9', '41.5', '64.0', '77.2', '55.1', '71.5'], ['GECToR (XLNet)', '[EMPTY]', '[BOLD] 77.5', '40.1', '[BOLD] 65.3', '[BOLD] 79.2', '53.9', '[BOLD] 72.4'], ['GECToR (RoBERTa + XLNet)', '✓', '76.6', '42.3', '66.0', '[BOLD] 79.4', '57.2', '[BOLD] 73.7'], ['GECToR (BERT + RoBERTa + XLNet)', '✓', '[BOLD] 78.2', '41.5', '[BOLD] 66.5', '78.9', '[BOLD] 58.2', '73.6']] | Finally, our best single model, GECToR (XLNet), achieves F0.5 = 65.3 on CoNLL-2014 (test) and F0.5 = 72.4 on BEA-2019 (test). Our best ensemble model, GECToR (BERT + RoBERTa + XLNet), reaches F0.5 = 66.5 on CoNLL-2014 (test) and F0.5 = 73.6 on BEA-2019 (test).
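The caption above states that ensembles simply average output probabilities from single models; a minimal sketch of that averaging step follows, with toy per-token probability matrices standing in for the real model outputs.

```python
# Sketch: average per-token tag probabilities from several single models,
# then pick the highest-scoring tag per token.
import numpy as np

def ensemble_probs(prob_list):
    # prob_list: one (seq_len x n_tags) probability matrix per single model
    return np.mean(np.stack(prob_list, axis=0), axis=0)

p_roberta = np.array([[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]])   # toy outputs
p_xlnet   = np.array([[0.5, 0.4, 0.1], [0.2, 0.5, 0.3]])
avg = ensemble_probs([p_roberta, p_xlnet])
predicted_tags = avg.argmax(axis=-1)   # tag index per token
```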
GECToR – Grammatical Error Correction: Tag, Not Rewrite | 2005.12592 | Table 8: Inference time for NVIDIA Tesla V100 on CoNLL-2014 (test), single model, batch size=128. | ['[BOLD] GEC system', '[BOLD] Time (sec)'] | [['Transformer-NMT, beam size = 12', '4.35'], ['Transformer-NMT, beam size = 4', '1.25'], ['Transformer-NMT, beam size = 1', '0.71'], ['GECToR (XLNet), 5 iterations', '0.40'], ['GECToR (XLNet), 1 iteration', '0.20']] | Speed comparison. We measured the model’s average inference time on NVIDIA Tesla V100 on batch size 128. For sequence tagging we don’t need to predict corrections one-by-one as in autoregressive transformer decoders, so inference is naturally parallelizable and therefore runs many times faster. Our sequence tagger’s inference speed is up to 10 times as fast as the state-of-the-art Transformer from Zhao et al. |
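The timing row above distinguishes 1-iteration from 5-iteration inference; the following hedged sketch shows the general iterative tagging loop, where `predict_and_apply_edits` is a hypothetical stand-in for the tagger plus its edit-application step, not an actual GECToR API.

```python
# Sketch: re-apply a sequence tagger to its own output until it proposes no
# further edits or a fixed iteration budget is exhausted.
def iterative_correct(tokens, predict_and_apply_edits, max_iter=5):
    for _ in range(max_iter):
        new_tokens = predict_and_apply_edits(tokens)
        if new_tokens == tokens:      # no more edits proposed
            break
        tokens = new_tokens
    return tokens

# toy usage with a trivial stand-in "model"
fix_article = lambda toks: ["an" if t == "a" else t for t in toks]
print(iterative_correct("this is a apple".split(), fix_article))
```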
Structurally Sparsified Backward Propagation for Faster Long Short-Term Memory Training | 1806.00512 | Table 3: Performance of trained image captioning models. Higher BLEU-4 score means better performance. | ['Training Method', 'BLEU-4', 'Improvement'] | [['Dense', '31.0', '0'], ['Coarse-grained', '30.3', '-2.26%'], ['3/4 coarse + 1/4 dense', '30.8', '-0.65%'], ['Fine-grained', '30.6', '-1.29%'], ['3/4 fine + 1/4 dense', '31.1', '+0.32%']] | Compared to the 5.92% performance loss in language modeling, this result demonstrates generality to applications beyond language model datasets. Moreover, the performance gap between the sparse training and the dense training can be mitigated by the dense after sparse approach. Interestingly, the 3-to-1 sparse-to-dense ratio used in the word language model also works well for the image captioning. |
Structurally Sparsified Backward Propagation for Faster Long Short-Term Memory Training | 1806.00512 | Table 2: Performance of trained language models by dense after sparse method. Lower perplexity means better performance. | ['Training Method', 'Perplexity', 'Improvement'] | [['Dense', '93.101', '0'], ['Coarse-grained', '98.611', '-5.92%'], ['1/2 coarse + 1/2 dense', '88.675', '+4.75%'], ['3/4 coarse + 1/4 dense', '91.388', '+1.84%'], ['5/6 coarse + 1/6 dense', '99.47', '-6.84%'], ['Fine-grained', '96.410', '-3.55%'], ['1/2 fine + 1/2 dense', '88.607', '+4.83%'], ['3/4 fine + 1/4 dense', '91.118', '+2.13%'], ['5/6 fine + 1/6 dense', '96.151', '-3.28%']] | The model is first trained with the coarse-grained method or the fine-grained method, and then trained with the regular dense training method. With 75% of the steps using sparse training and 25% using dense training, the resulting model achieves slightly better performance than the baseline model. The number of total training steps remains constant across the methods. These results demonstrate that mixing the two methods compensates for the quality gap between the pure sparse training and pure dense training methods. A key parameter of applying this mixed training is the ratio of sparse steps to dense steps. Intuitively, the resulting model will perform better with more dense time. Our results show that a 3-to-1 ratio of sparse to dense is sufficient.
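A hedged sketch of the dense-after-sparse schedule described above: the first 75% of steps use the sparsified backward pass and the remaining 25% use regular dense training. `train_step_sparse` and `train_step_dense` are hypothetical stand-ins for the two training routines, which are not specified here.

```python
# Sketch: switch from the sparsified backward pass to dense training after a
# fixed fraction (default 3:1 sparse-to-dense) of the total training steps.
def train(total_steps, train_step_sparse, train_step_dense, sparse_ratio=0.75):
    switch_point = int(total_steps * sparse_ratio)
    for step in range(total_steps):
        if step < switch_point:
            train_step_sparse()    # e.g. 50%-sparse gate gradients
        else:
            train_step_dense()     # regular dense backward pass

train(100, lambda: None, lambda: None)   # toy usage with no-op steps
```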
Structurally Sparsified Backward Propagation for Faster Long Short-Term Memory Training | 1806.00512 | Table 4: Performance of trained NMT models. Higher BLEU score means better performance. | ['Training Method', 'BLEU', 'Improvement'] | [['Dense', '20.32', '0'], ['Coarse-grained', '19.60', '-3.5%'], ['3/4 coarse + 1/4 dense', '20.30', '-0.1%'], ['5/6 coarse + 1/6 dense', '20.18', '-0.7%'], ['Fine-grained', '19.92', '-2.0%'], ['3/4 fine + 1/4 dense', '20.45', '+0.64%'], ['5/6 fine + 1/6 dense', '20.17', '-0.74%']] | The BLEU scores are the validation BLEU scores at the end of the training (after 600,000 steps). Although our sparsifying methods enforce 50% sparsity in the gate gradients, the resulting models achieve acceptable BLEU scores. Even compared to the dense SGD, the coarse-grain sparsifying only suffers a 3.5% BLEU score decrease while the fine-grain method is slightly better with just a 2.0% decrease. |
Multi-Domain Dialogue Acts and Response Co-Generation | 2004.12363 | Table 3: Results of different act generation methods, where BiLSTM, Word-CNN and Transformer are baselines from Chen et al. (2019). MarCo is our act generator trained jointly with the response generator and Transformer (GEN) is that without joint training. | ['Method', 'F1'] | [['BiLSTM', '71.4'], ['Word-CNN', '71.5'], ['Transformer', '73.1'], ['Transformer (GEN)', '73.2'], ['MarCo', '[BOLD] 73.9']] | To evaluate the performance of our act generator, we compare it with several baseline methods mentioned in Chen et al. (2019). We use MarCo to represent our act generator, which is trained jointly with the response generator, and use Transformer (GEN) to denote our act generator without joint training. After being trained jointly with the response generator, MarCo shows the best performance, confirming the effect of the co-generation.
Multi-Domain Dialogue Acts and Response Co-Generation | 2004.12363 | Table 1: Overall results on the MultiWOZ 2.0 dataset. | ['Dialog Act', 'Model', 'Inform', 'Success', 'BLEU', 'Combined Score'] | [['Without Act', 'LSTM', '71.29', '60.96', '18.80', '84.93'], ['Without Act', 'Transformer', '71.10', '59.90', '19.10', '84.60'], ['Without Act', 'TokenMoE', '75.30', '59.70', '16.81', '84.31'], ['Without Act', 'Structured Fusion', '82.70', '72.10', '16.34', '93.74'], ['One-hot Act', 'SC-LSTM', '74.50', '62.50', '20.50', '89.00'], ['One-hot Act', 'HDSA (MarCo)', '76.50', '62.30', '21.85', '91.25'], ['One-hot Act', 'HDSA', '82.90', '68.90', '[BOLD] 23.60', '99.50'], ['Sequential Act', 'MarCo', '90.30', '75.20', '19.45', '102.20'], ['Sequential Act', 'MarCo (BERT)', '[BOLD] 92.30', '[BOLD] 78.60', '20.02', '[BOLD] 105.47']] | From the table we can notice that our co-generation model (MarCo) outperforms all the baselines in Inform Rate, Request Success, and especially in combined score which is an overall metric. By comparing the two HDSA models, we can find HDSA derives its main performance from the external BERT, which can also be used to improve our MarCo considerably (MarCo (BERT)). These results confirm the success of MarCo by modeling act prediction as a generation problem and training it jointly with response generation. |
Unsupervised Pretraining for Sequence to Sequence Learning | 1611.02683 | Table 1: English→German performance on WMT test sets. Our pretrained model outperforms all other models. Note that the model without pretraining uses the LM objective. | ['[ITALIC] System', '[ITALIC] ensemble?', '[ITALIC] BLEU [ITALIC] newstest2014', '[ITALIC] BLEU [ITALIC] newstest2015'] | [['Phrase Based MT (Williams et\xa0al., 2016 )', '-', '21.9', '23.7'], ['Supervised NMT (Jean et\xa0al., 2015 )', 'single', '-', '22.4'], ['Edit Distance Transducer NMT (Stahlberg et\xa0al., 2016 )', 'single', '21.7', '24.1'], ['Edit Distance Transducer NMT (Stahlberg et\xa0al., 2016 )', 'ensemble 8', '22.9', '25.7'], ['Backtranslation (Sennrich et\xa0al., 2015a )', 'single', '22.7', '25.7'], ['Backtranslation (Sennrich et\xa0al., 2015a )', 'ensemble 4', '23.8', '26.5'], ['Backtranslation (Sennrich et\xa0al., 2015a )', 'ensemble 12', '[BOLD] 24.7', '27.6'], ['No pretraining', 'single', '21.3', '24.3'], ['Pretrained seq2seq', 'single', '[BOLD] 24.0', '[BOLD] 27.0'], ['Pretrained seq2seq', 'ensemble 5', '[BOLD] 24.7', '[BOLD] 28.1']] | Equally impressive is the fact that our best single model outperforms the previous state of the art ensemble of 4 models. Our ensemble of 5 models matches or exceeds the previous best ensemble of 12 models. |
An Interpretable Knowledge Transfer Model for Knowledge Base Completion | 1704.05908 | Table 5: Different λ’s effect on our model performance. The compared models are trained for 2000 epochs | ['[BOLD] Method', '[BOLD] WN18 MR', '[BOLD] WN18 H10', '[BOLD] FB15k MR', '[BOLD] FB15k H10'] | [['[ITALIC] λ=0.0003', '[BOLD] 217', '95.0', '[BOLD] 68', '80.4'], ['[ITALIC] λ=0.001', '223', '[BOLD] 95.2', '73', '80.6'], ['[ITALIC] λ=0.003', '239', '[BOLD] 95.2', '82', '[BOLD] 80.9']] | We compare in the table how different values of λ influence our model’s performance. With a larger λ and a higher domain sampling probability, our model’s Hits@10 increases while the mean rank also increases. The rise in mean rank is due to the higher probability of generating a valid triple as a negative sample, which causes the energy of valid triples to increase and leads to a higher overall rank of a correct entity. However, the reasoning capability is boosted with higher Hits@10, as shown in the table.
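One plausible reading of the domain sampling mechanism above, written as a heavily hedged sketch: with a probability tied to λ, the corrupted entity of a negative triple is drawn from entities seen with the same relation (its domain) rather than from all entities. The entity lists, relation name, and `p_domain` below are illustrative placeholders, not the paper's actual sampler.

```python
# Hedged sketch of domain-based negative sampling for knowledge base triples.
import random

def sample_negative(head, rel, tail, all_entities, domain_of_rel, p_domain):
    # with probability p_domain, restrict corruption to the relation's domain
    pool = domain_of_rel[rel] if random.random() < p_domain else all_entities
    corrupt_tail = random.choice([e for e in pool if e != tail])
    return (head, rel, corrupt_tail)

entities = ["Paris", "Berlin", "Rome", "Einstein"]
domain = {"capital_of": ["Paris", "Berlin", "Rome"]}
neg = sample_negative("France", "capital_of", "Paris", entities, domain, p_domain=0.9)
```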
An Interpretable Knowledge Transfer Model for Knowledge Base Completion | 1704.05908 | Table 2: Link prediction results on two datasets. Higher Hits@10 or lower Mean Rank indicates better performance. Following Nguyen et al. (2016b) and Shen et al. (2016), we divide the models into two groups. The first group contains intrinsic models without using extra information. The second group make use of additional information. Results in the brackets are another set of results STransE reported. | ['[BOLD] Model', '[BOLD] Additional Information', '[BOLD] WN18 Mean Rank', '[BOLD] WN18 Hits@10', '[BOLD] FB15k Mean Rank', '[BOLD] FB15k Hits@10'] | [['SE Bordes et\xa0al. ( 2011 )', 'No', '985', '80.5', '162', '39.8'], ['Unstructured Bordes et\xa0al. ( 2014 )', 'No', '304', '38.2', '979', '6.3'], ['TransE (Bordes et\xa0al., 2013 )', 'No', '251', '89.2', '125', '47.1'], ['TransH (Wang et\xa0al., 2014 )', 'No', '303', '86.7', '87', '64.4'], ['TransR (Lin et\xa0al., 2015b )', 'No', '225', '92.0', '77', '68.7'], ['CTransR (Lin et\xa0al., 2015b )', 'No', '218', '92.3', '75', '70.2'], ['KG2E (He et\xa0al., 2015 )', 'No', '348', '93.2', '59', '74.0'], ['TransD (Ji et\xa0al., 2015 )', 'No', '212', '92.2', '91', '77.3'], ['TATEC (García-Durán et\xa0al., 2016 )', 'No', '-', '-', '[BOLD] 58', '76.7'], ['NTN (Socher et\xa0al., 2013 )', 'No', '-', '66.1', '-', '41.4'], ['DISTMULT (Yang et\xa0al., 2015 )', 'No', '-', '94.2', '-', '57.7'], ['STransE\xa0(Nguyen et\xa0al., 2016b )', 'No', '206 (244)', '93.4 (94.7)', '69', '79.7'], ['ITransF', 'No', '[BOLD] 205', '94.2', '65', '81.0'], ['ITransF (domain sampling)', 'No', '223', '[BOLD] 95.2', '77', '[BOLD] 81.4'], ['rTransE García-Durán et\xa0al. ( 2015 )', 'Path', '-', '-', '50', '76.2'], ['PTransE Lin et\xa0al. ( 2015a )', 'Path', '-', '-', '58', '84.6'], ['NLFeat Toutanova and Chen ( 2015 )', 'Node + Link Features', '-', '94.3', '-', '87.0'], ['Random Walk (Wei et\xa0al., 2016 )', 'Path', '-', '94.8', '-', '74.7'], ['IRN\xa0(Shen et\xa0al., 2016 )', 'External Memory', '249', '[ITALIC] 95.3', '[ITALIC] 38', '[ITALIC] 92.7']] | The overall link prediction results show that our model consistently outperforms previous models without external information on both metrics of WN18 and FB15k. On WN18, we even achieve a much better mean rank with comparable Hits@10 than the current state-of-the-art model IRN, which employs external information.
An Interpretable Knowledge Transfer Model for Knowledge Base Completion | 1704.05908 | Table 3: Performance of the model with dense attention vectors or sparse attention vectors. MR, H10 and Time denote mean rank, Hits@10 and training time per epoch respectively | ['[BOLD] Method', '[BOLD] WN18 MR', '[BOLD] WN18 H10', '[BOLD] WN18 Time', '[BOLD] FB15k MR', '[BOLD] FB15k H10', '[BOLD] FB15k Time'] | [['Dense', '[BOLD] 199', '94.0', '4m34s', '69', '79.4', '4m30s'], ['Dense + ℓ1', '228', '[BOLD] 94.2', '4m25s', '131', '78.9', '5m47s'], ['Sparse', '207', '94.1', '[BOLD] 2m32s', '[BOLD] 67', '[BOLD] 79.6', '[BOLD] 1m52s']] | Generally, ITransF with sparse attention achieves slightly better or comparable performance compared to dense attention. We see that ℓ1 regularization does not produce sparse attention, especially on FB15k. |
An Interpretable Knowledge Transfer Model for Knowledge Base Completion | 1704.05908 | Table 4: Different methods to obtain sparse representations | ['[BOLD] Method', '[BOLD] WN18 MR', '[BOLD] WN18 H10', '[BOLD] FB15k MR', '[BOLD] FB15k H10'] | [['Sparse Encoding', '211', '86.6', '66', '79.1'], ['ITransF', '[BOLD] 205', '[BOLD] 94.2', '[BOLD] 65', '[BOLD] 81.0']] | On both benchmarks, ITransF achieves a significant improvement over sparse encoding on the pretrained model. This performance gap is to be expected, since the objective of sparse encoding methods is to minimize the reconstruction loss rather than to optimize the criterion for link prediction. |
AMUSED: A Multi-Stream Vector Representation Method for Use in Natural Dialogue | 1912.10160 | Table 3: Human-based evaluation is conducted for 5 different components in the network as well as KV memory networks. AMUSED achieves the highest percent gain over the specified baseline model. The scale is 1-10. | ['[BOLD] Model', '[BOLD] Coherence', '[BOLD] Context Aware', '[BOLD] Non Monotonicity', '[BOLD] Average Rating', '[BOLD] %gain'] | [['Bi-GRU & GCN only', '6.82', '7.35', '6.77', '6.98', 'Baseline'], ['BERT only', '7.61', '7.24', '6.33', '7.06', '1.14'], ['BERT with Bi-GRU & GCN', '7.54', '6.91', '7.38', '7.27', '4.15'], ['BERT and External KB', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['with Bi-GRU & GCN', '7.16', '7.34', '7.72', '7.40', '6.01'], ['KV Memory Networks(Zhang et al., 2018b )', '7.56', '8.09', '7.84', '7.83', '12.18'], ['BERT & External KB with Bi-GRU,', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['GCN and memory networks', '8.21', '8.34', '7.82', '8.12', '16.33']] | To monitor the effect of each neural component, we have it rated by experts either in isolation or in conjunction with other components. Such a study helps us understand the impact of the different modules on a human-based conversation. The dialogue system proposed by Zhang et al. (2018b), based on KV memory networks, is also included in the comparison. |
AMUSED: A Multi-Stream Vector Representation Method for Use in Natural Dialogue | 1912.10160 | Table 2: Precision@1 comparison between different methods. Precision@1 % tells us the number of times the correct response from the dataset comes up. Details in Section 5.4.2 | ['[BOLD] Method', '[BOLD] Precision@1'] | [['Seq2Seq', '0.092'], ['Profile Memory', '0.092'], ['IR Baseline', '0.214'], ['AMUSED(Persona Chat)', '[BOLD] 0.326'], ['AMUSED(DSTC)', '[BOLD] 0.78']] | Accuracy on this binary classification problem has been used to select the best network. Furthermore, we perform ablation studies using different modules to understand the effect of each component in the network. A 4-layer neural network with ReLU activations in its hidden layers and a softmax in the final layer is used as the classifier. External knowledge in conjunction with the memory and GCN modules gives the best accuracy when the embeddings of the query and the response are concatenated together. Precision@1 is another metric used to judge the effectiveness of our network, and it is different from the next sentence prediction task accuracy. It measures, over n trials, the number of times a relevant response is returned with the highest confidence value. |
First-Pass Large Vocabulary Continuous Speech Recognition using Bi-Directional Recurrent DNNs | 1408.2873 | Table 2: Train and test set character error rate (CER) results for a deep neural network (DNN) without recurrence, recurrent deep neural network with forward temporal connections (RDNN), and a bi-directional recurrent deep neural network (BRDNN). All models have 5 hidden layers. The DNN and RDNN both have 2,048 hidden units in each hidden layer while the BRDNN has 1,824 hidden units per hidden layer to keep its total number of free parameters similar to the other models. For all models we choose the most likely character at each timestep and apply CTC collapsing to obtain a character-level transcript hypothesis. | ['Model', 'Parameters (M)', 'Train CER', 'Test CER'] | [['DNN', '16.8', '3.8', '22.3'], ['RDNN', '22.0', '4.2', '13.5'], ['BRDNN', '20.9', '2.8', '10.7']] | Previous experiments with DNN-HMM systems found minimal benefits from recurrent connections in DNN acoustic models. It is natural to wonder whether recurrence, and especially bi-directional recurrence, is an essential aspect of our architecture. To evaluate the impact of recurrent connections we compare the train and test CERs of DNN, RDNN, and BRDNN models while roughly controlling for the total number of free parameters in the model. |
First-Pass Large Vocabulary Continuous Speech Recognition using Bi-Directional Recurrent DNNs | 1408.2873 | Table 1: Word error rate (WER) and character error rate (CER) results from a BDRNN trained with the CTC loss function. As a baseline (No LM) we decode by choosing the most likely label at each timestep and performing standard collapsing as done in CTC training. We compare this baseline against our modified prefix-search decoder using a dictionary constraint and bigram language model. | ['Model', 'CER', 'WER'] | [['No LM', '10.0', '35.8'], ['Dictionary LM', '8.5', '24.4'], ['Bigram LM', '5.7', '14.1']] | We trained a BRDNN with 5 hidden layers, all with 1824 hidden units, for a total of 20.9M free parameters. The third hidden layer of the network has recurrent connections. We use the Nesterov accelerated gradient optimization algorithm as described in Sutskever et al. After each full pass through the training set we divide the learning rate by 1.2 to ensure the overall learning rate decreases over time. We train the network for a total of 20 passes over the training set, which takes about 96 hours using our Python GPU implementation. For decoding with prefix search we use a beam size of 200 and cross-validate with a held-out set to find a good setting of the parameters α and β. |
Incorporating Interlocutor-Aware Context into Response Generation on Multi-Party Chatbots | 1910.13106 | Table 6: Performances over different memory types. | ['[BOLD] Memory Type', '[BOLD] Referenced', '[BOLD] Referenced', '[BOLD] Unreferenced', '[BOLD] Unreferenced'] | [['[BOLD] Memory Type', '[BOLD] BLEU', '[BOLD] ROUGE', '[BOLD] Length', '[BOLD] #Noun'], ['[BOLD] addressee memory', '[BOLD] 10.63', '8.73', '11.34', '[BOLD] 1.68'], ['all utterance memory', '10.39', '[BOLD] 8.78', '[BOLD] 11.38', '1.37'], ['latest memory', '10.43', '8.40', '10.16', '1.28'], ['speaker memory', '10.03', '8.28', '10.72', '1.66'], ['w/o memory', '10.25', '8.23', '10.73', '1.27']] | Comparative Results. We can see that our method, the addressee memory, achieves the best or near-best performance on all metrics. Although memorizing all utterances is competitive, the complexity of the all-utterance memory is n times that of the addressee memory, where n is the number of utterances in a context. The speaker memory performs similarly to the model without memory, which indicates that not all memories can improve performance. |
Incorporating Interlocutor-Aware Context into Response Generation on Multi-Party Chatbots | 1910.13106 | Table 4: Performances on sparse and plentiful learning data with different numbers of interlocutor’s dialogue turns, where the test data is divided into different intervals according to the number of dialogue turns in the training dataset said by the target addressee (named as interlocutor’s dialogue turns). | ['[BOLD] Interlocutor’s', '[BOLD] Persona Model', '[BOLD] Persona Model', '[BOLD] ICRED (ours)', '[BOLD] ICRED (ours)'] | [['[BOLD] Dialogue Turns', '[BOLD] \xa0BLEU', '[BOLD] ROUGE', '[BOLD] BLEU', '[BOLD] ROUGE'], ['[0, 100]', '8.47', '6.72', '10.63', '8.60'], ['(100, 1000]', '8.87', '7.14', '10.50', '8.61'], ['(1000, 5000]', '9.48', '7.74', '[BOLD] 10.77', '[BOLD] 8.90'], ['(5000, +∞)', '[BOLD] 9.51', '[BOLD] 7.80', '10.60', '8.79']] | Comparative Results. We can clearly see that the persona model has a sparsity issue: it performs very poorly on sparse learning data (e.g., BLEU score = 8.47 on “[0, 100]”) while it achieves good performance on plentiful learning data (e.g., BLEU score = 9.51 on “(5000, +∞)”), which demonstrates that the fixed person vectors in the persona model need to be learned from large-scale training data for each interlocutor. In contrast, ICRED exploits an interactive interlocutor representation learned from the current dialog context rather than fixed person vectors obtained from all training dialog utterances. Therefore, ICRED has no sparsity issue and performs comparably on sparse and plentiful learning data. |
Incorporating Interlocutor-Aware Context into Response Generation on Multi-Party Chatbots | 1910.13106 | Table 5: Ablation Experiments by removing the main components. | ['[BOLD] Model', '[BOLD] Referenced', '[BOLD] Referenced', '[BOLD] Unreferenced', '[BOLD] Unreferenced'] | [['[BOLD] Model', '[BOLD] BLEU', '[BOLD] ROUGE', '[BOLD] Length', '[BOLD] #Noun'], ['[BOLD] ICRED', '[BOLD] 10.63', '[BOLD] 8.73', '[BOLD] 11.34', '[BOLD] 1.68'], ['w/o Adr_Mem', '10.25', '8.23', '10.73', '1.27'], ['w/o Ctx_Spk_Vec', '10.13', '8.22', '10.86', '1.59'], ['w/o Ctx_Adr_Vec', '9.95', '8.18', '10.93', '1.26']] | Comparative Results. We can see that removing any component causes obvious performance degradation. In particular, “w/o Ctx_Adr_Vec” performs the worst on almost all of the metrics, which demonstrates the importance of contextual information for the target addressee. |
Cross-Lingual Machine Reading Comprehension | 1909.00361 | Table 3: Zero-shot cross-lingual machine reading comprehension results on Japanese and French SQuAD data. † are extracted from Asai et al. (2018). | ['[EMPTY]', '[BOLD] Japanese [BOLD] EM', '[BOLD] Japanese [BOLD] F1', '[BOLD] French [BOLD] EM', '[BOLD] French [BOLD] F1'] | [['Back-Translation†', '24.8', '42.6', '23.5', '44.0'], ['+Runtime MT†', '37.0', '52.2', '40.7', '61.9'], ['GNMT+BERT [ITALIC] Len', '26.9', '46.2', '39.1', '67.0'], ['+SimpleMatch', '37.3', '58.0', '47.4', '71.5'], ['BERT [ITALIC] SQ− [ITALIC] Bmul', '61.3', '73.4', '57.6', '77.1']] | In this paper, we propose a simple but effective approach called SimpleMatch to align the translated answer to the original passage span. One may argue that using neural machine translation attention to project the source answer onto the original target passage span, as done in Asai et al., is ideal. However, extracting attention values from a neural machine translation system and applying them to recover the original passage span is cumbersome and computationally inefficient. To demonstrate the effectiveness of using SimpleMatch instead of NMT attention to extract the original passage span in the zero-shot condition, we applied SimpleMatch to the Japanese and French SQuAD data (304 samples for each), which is exactly what is used in Asai et al. |
Cross-Lingual Machine Reading Comprehension | 1909.00361 | Table 2: Experimental results on CMRC 2018 and DRCD. † indicates unpublished works (some of the systems are using development set for training, which makes the results not directly comparable.). ♠ indicates zero-shot approach. We mark our system with an ID in the first column for reference simplicity. | ['[BOLD] #', '[BOLD] System', '[BOLD] CMRC 2018 [BOLD] Dev', '[BOLD] CMRC 2018 [BOLD] Dev', '[BOLD] CMRC 2018 [BOLD] Test', '[BOLD] CMRC 2018 [BOLD] Test', '[BOLD] CMRC 2018 [BOLD] Challenge', '[BOLD] CMRC 2018 [BOLD] Challenge', '[BOLD] DRCD [BOLD] Dev', '[BOLD] DRCD [BOLD] Dev', '[BOLD] DRCD [BOLD] Test', '[BOLD] DRCD [BOLD] Test'] | [['[BOLD] #', '[BOLD] System', '[BOLD] EM', '[BOLD] F1', '[BOLD] EM', '[BOLD] F1', '[BOLD] EM', '[BOLD] F1', '[BOLD] EM', '[BOLD] F1', '[BOLD] EM', '[BOLD] F1'], ['[EMPTY]', '[ITALIC] Human Performance', '[ITALIC] 91.1', '[ITALIC] 97.3', '[ITALIC] 92.4', '[ITALIC] 97.9', '[ITALIC] 90.4', '[ITALIC] 95.2', '-', '-', '[ITALIC] 80.4', '[ITALIC] 93.3'], ['[EMPTY]', 'P-Reader (single model)†', '59.9', '81.5', '65.2', '84.4', '15.1', '39.6', '-', '-', '-', '-'], ['[EMPTY]', 'Z-Reader (single model)†', '79.8', '92.7', '74.2', '88.1', '13.9', '37.4', '-', '-', '-', '-'], ['[EMPTY]', 'MCA-Reader (ensemble)†', '66.7', '85.5', '71.2', '88.1', '15.5', '37.1', '-', '-', '-', '-'], ['[EMPTY]', 'RCEN (ensemble)†', '76.3', '91.4', '68.7', '85.8', '15.3', '34.5', '-', '-', '-', '-'], ['[EMPTY]', 'r-net (single model)†', '-', '-', '-', '-', '-', '-', '-', '-', '29.1', '44.4'], ['[EMPTY]', 'DA (Yang et al., 2019 )', '49.2', '65.4', '-', '-', '-', '-', '55.4', '67.7', '-', '-'], ['1', 'GNMT+BERT [ITALIC] SQ− [ITALIC] Ben♠', '15.9', '40.3', '20.8', '45.4', '4.2', '20.2', '28.1', '50.0', '26.6', '48.9'], ['2', 'GNMT+BERT [ITALIC] SQ− [ITALIC] Len♠', '16.8', '42.1', '21.7', '47.3', '5.2', '22.0', '28.9', '52.0', '28.7', '52.1'], ['3', 'GNMT+BERT [ITALIC] SQ− [ITALIC] Len+SimpleMatch♠', '26.7', '56.9', '31.3', '61.6', '9.1', '35.5', '36.9', '60.6', '37.0', '61.2'], ['4', 'GNMT+BERT [ITALIC] SQ− [ITALIC] Len+Aligner', '46.1', '66.4', '49.8', '69.3', '16.5', '40.9', '60.1', '70.5', '59.5', '70.7'], ['5', 'GNMT+BERT [ITALIC] SQ− [ITALIC] Len+Verifier', '64.7', '84.7', '68.9', '86.8', '20.0', '45.6', '83.5', '90.1', '82.6', '89.6'], ['6', 'BERT [ITALIC] Bcn', '63.6', '83.9', '67.8', '86.0', '18.4', '42.1', '83.4', '90.1', '81.9', '89.0'], ['7', 'BERT [ITALIC] Bmul', '64.1', '84.4', '68.6', '86.8', '18.6', '43.8', '83.2', '89.9', '82.4', '89.5'], ['8', '[BOLD] Dual BERT', '65.8', '86.3', '70.4', '88.1', '23.8', '47.9', '84.5', '90.8', '83.7', '90.3'], ['9', 'BERT [ITALIC] SQ− [ITALIC] Bmul♠', '56.5', '77.5', '59.7', '79.9', '18.6', '41.4', '66.7', '81.0', '65.4', '80.1'], ['10', 'BERT [ITALIC] SQ− [ITALIC] Bmul + Cascade Training', '66.6', '87.3', '71.8', '89.4', '25.6', '52.3', '85.2', '91.4', '84.4', '90.8'], ['11', 'BERT [ITALIC] Bmul + Mixed Training', '66.8', '87.5', '72.6', '89.8', '26.7', '53.4', '85.3', '91.6', '84.7', '91.2'], ['12', '[BOLD] Dual BERT (w/ SQuAD)', '68.0', '88.1', '73.6', '90.2', '27.8', '55.2', '86.0', '92.1', '85.4', '91.6']] | As we can see that, without using any alignment approach, the zero-shot results are quite lower regardless of using English BERT-base (#1) or BERT-large (#2). When we apply SimpleMatch (#3), we observe significant improvements demonstrating its effectiveness. 
The Answer Aligner (#4) further improves performance beyond the SimpleMatch approach, demonstrating that a machine learning approach can dynamically adjust the output span by learning the semantic relationship between the translated answer and the target passage. Also, the Answer Verifier (#5) further boosts performance and surpasses the multi-lingual BERT baseline (#7) that only uses the target training data, demonstrating that it is beneficial to adopt a rich-resourced language to improve machine reading comprehension in other languages. |
On NMT Search Errors and Model Errors: Cat Got Your Tongue? | 1908.10090 | Table 4: Length normalization fixes translation lengths, but prevents exact search from matching the BLEU score of Beam-10. Experiment conducted on 48.3% of the test set. | ['[BOLD] Search', '[BOLD] W/o length norm. [BOLD] BLEU', '[BOLD] W/o length norm. [BOLD] Ratio', '[BOLD] With length norm. [BOLD] BLEU', '[BOLD] With length norm. [BOLD] Ratio'] | [['Beam-10', '37.0', '1.00', '36.3', '1.03'], ['Beam-30', '36.7', '0.98', '36.3', '1.04'], ['Exact', '27.2', '0.74', '36.4', '1.03']] | We can find the globally best translations under length normalization by generalizing our exact inference scheme to length-dependent lower bounds γk (e.g., zero to 1.2 times the source sentence length). Exact search under length normalization no longer suffers from the length deficiency (last row in Tab. 4), but it is not able to match our best BLEU score under Beam-10 search. This suggests that while length normalization biases search towards translations of roughly the correct length, it does not fix the fundamental modelling problem. |
On NMT Search Errors and Model Errors: Cat Got Your Tongue? | 1908.10090 | Table 1: NMT with exact inference. In the absence of search errors, NMT often prefers the empty translation, causing a dramatic drop in length ratio and BLEU. | ['[BOLD] Search', '[BOLD] BLEU', '[BOLD] Ratio', '[BOLD] #Search errors', '[BOLD] #Empty'] | [['Greedy', '29.3', '1.02', '73.6%', '0.0%'], ['Beam-10', '30.3', '1.00', '57.7%', '0.0%'], ['Exact', '2.1', '0.06', '0.0%', '51.8%']] | Our main result is shown in Tab. 1. Large beam sizes reduce the number of search errors, but the BLEU score drops because translations are too short. Even a large beam size of 100 produces 53.62% search errors. For example, Beam-10 yields 15.9% fewer search errors (absolute) than greedy decoding (57.68% vs. 73.58%), but Beam-100 improves search only slightly (53.62% search errors) despite being 10 times slower than Beam-10. |
Adversarial Multi-task Learning for Text Classification | 1704.05742 | Table 3: Error rates of our models on 16 datasets against vanilla multi-task learning. ϕ (Books) means that we transfer the knowledge of the other 15 tasks to the target task Books. | ['[BOLD] Source Tasks', '[BOLD] Single Task LSTM', '[BOLD] Single Task BiLSTM', '[BOLD] Single Task sLSTM', '[BOLD] Single Task Avg.', '[BOLD] Transfer Models SP-MTL-SC', '[BOLD] Transfer Models SP-MTL-BC', '[BOLD] Transfer Models ASP-MTL-SC', '[BOLD] Transfer Models ASP-MTL-BC'] | [['[ITALIC] ϕ (Books)', '20.5', '19.0', '18.0', '19.2', '17.8(−1.4)', '16.3(−2.9)', '16.8(−2.4)', '16.3(−2.9)'], ['[ITALIC] ϕ (Electronics)', '19.5', '21.5', '23.3', '21.4', '15.3(−6.1)', '14.8(−6.6)', '17.8(−3.6)', '16.8(−4.6)'], ['[ITALIC] ϕ (DVD)', '18.3', '19.5', '22.0', '19.9', '14.8(−5.1)', '15.5(−4.4)', '14.5(−5.4)', '14.3(−5.6)'], ['[ITALIC] ϕ (Kitchen)', '22.0', '18.8', '19.5', '20.1', '15.0(−5.1)', '16.3(−3.8)', '16.3(−3.8)', '15.0(−5.1)'], ['[ITALIC] ϕ (Apparel)', '16.8', '14.0', '16.3', '15.7', '14.8(−0.9)', '12.0(−3.7)', '12.5(−3.2)', '13.8(−1.9)'], ['[ITALIC] ϕ (Camera)', '14.8', '14.0', '15.0', '14.6', '13.3(−1.3)', '12.5(−2.1)', '11.8(−2.8)', '10.3(−4.3)'], ['[ITALIC] ϕ (Health)', '15.5', '21.3', '16.5', '17.8', '14.5(−3.3)', '14.3(−3.5)', '12.3(−5.5)', '13.5(−4.3)'], ['[ITALIC] ϕ (Music)', '23.3', '22.8', '23.0', '23.0', '20.0(−3.0)', '17.8(−5.2)', '17.5(−5.5)', '18.3(−4.7)'], ['[ITALIC] ϕ (Toys)', '16.8', '15.3', '16.8', '16.3', '13.8(−2.5)', '12.5(−3.8)', '13.0(−3.3)', '11.8(−4.5)'], ['[ITALIC] ϕ (Video)', '18.5', '16.3', '16.3', '17.0', '14.3(−2.7)', '15.0(−2.0)', '14.8(−2.2)', '14.8(−2.2)'], ['[ITALIC] ϕ (Baby)', '15.3', '16.5', '15.8', '15.9', '16.5(+0.6)', '16.8(+0.9)', '13.5(−2.4)', '12.0(−3.9)'], ['[ITALIC] ϕ (Magazines)', '10.8', '8.5', '12.3', '10.5', '10.5(+0.0)', '10.3(−0.2)', '8.8(−1.7)', '9.5(−1.0)'], ['[ITALIC] ϕ (Software)', '15.3', '14.3', '14.5', '14.7', '13.0(−1.7)', '12.8(−1.9)', '14.5(−0.2)', '11.8(−2.9)'], ['[ITALIC] ϕ (Sports)', '18.3', '16.0', '17.5', '17.3', '16.3(−1.0)', '16.3(−1.0)', '13.3(−4.0)', '13.5(−3.8)'], ['[ITALIC] ϕ (IMDB)', '18.3', '15.0', '18.5', '17.3', '12.8(−4.5)', '12.8(−4.5)', '12.5(−4.8)', '13.3(−4.0)'], ['[ITALIC] ϕ (MR)', '27.3', '25.3', '28.0', '26.9', '26.0(−0.9)', '26.5(−0.4)', '24.8(−2.1)', '23.5(−3.4)'], ['AVG', '18.2', '17.4', '18.3', '18.0', '15.6(−2.4)', '15.2(−2.8)', '14.7(−3.3)', '14.3(−3.7)']] | , we can see the shared layer from ASP-MTL achieves a better performance compared with SP-MTL. Besides, for the two kinds of transfer strategies, the Bi-Channel model performs better. The reason is that the task-specific layer introduced in the Bi-Channel model can store some private features. Overall, the results indicate that we can save the existing knowledge into a shared recurrent layer using adversarial multi-task learning, which is quite useful for a new task. |
Adversarial Multi-task Learning for Text Classification | 1704.05742 | Table 2: Error rates of our models on 16 datasets against typical baselines. The numbers in brackets represent the improvements relative to the average performance (Avg.) of three single task baselines. | ['[BOLD] Task', '[BOLD] Single Task LSTM', '[BOLD] Single Task BiLSTM', '[BOLD] Single Task sLSTM', '[BOLD] Single Task Avg.', '[BOLD] Multiple Tasks MT-DNN', '[BOLD] Multiple Tasks MT-CNN', '[BOLD] Multiple Tasks FS-MTL', '[BOLD] Multiple Tasks SP-MTL', '[BOLD] Multiple Tasks ASP-MTL'] | [['Books', '20.5', '19.0', '18.0', '19.2', '17.8(−1.4)', '15.5(−3.7)', '17.5(−1.7)', '18.8(−0.4)', '16.0(−3.2)'], ['Electronics', '19.5', '21.5', '23.3', '21.4', '18.3(−3.1)', '16.8(−4.6)', '14.3(−7.1)', '15.3(−6.1)', '13.2(−8.2)'], ['DVD', '18.3', '19.5', '22.0', '19.9', '15.8(−4.1)', '16.0(−3.9)', '16.5(−3.4)', '16.0(−3.9)', '14.5(−5.4)'], ['Kitchen', '22.0', '18.8', '19.5', '20.1', '19.3(−0.8)', '16.8(−3.3)', '14.0(−6.1)', '14.8(−5.3)', '13.8(−6.3)'], ['Apparel', '16.8', '14.0', '16.3', '15.7', '15.0(−0.7)', '16.3(+0.6)', '15.5(−0.2)', '13.5(−2.2)', '13.0(−2.7)'], ['Camera', '14.8', '14.0', '15.0', '14.6', '13.8(−0.8)', '14.0(−0.6)', '13.5(−1.1)', '12.0(−2.6)', '10.8(−3.8)'], ['Health', '15.5', '21.3', '16.5', '17.8', '14.3(−3.5)', '12.8(−5.0)', '12.0(−5.8)', '12.8(−5.0)', '11.8(−6.0)'], ['Music', '23.3', '22.8', '23.0', '23.0', '15.3(−7.7)', '16.3(−6.7)', '18.8(−4.2)', '17.0(−6.0)', '17.5(−5.5)'], ['Toys', '16.8', '15.3', '16.8', '16.3', '12.3(−4.0)', '10.8(−5.5)', '15.5(−0.8)', '14.8(−1.5)', '12.0(−4.3)'], ['Video', '18.5', '16.3', '16.3', '17.0', '15.0(−2.0)', '18.5(+1.5)', '16.3(−0.7)', '16.8(−0.2)', '15.5(−1.5)'], ['Baby', '15.3', '16.5', '15.8', '15.9', '12.0(−3.9)', '12.3(−3.6)', '12.0(−3.9)', '13.3(−2.6)', '11.8(−4.1)'], ['Magazines', '10.8', '8.5', '12.3', '10.5', '10.5(+0.0)', '12.3(+1.8)', '7.5(−3.0)', '8.0(−2.5)', '7.8(−2.7)'], ['Software', '15.3', '14.3', '14.5', '14.7', '14.3(−0.4)', '13.5(−1.2)', '13.8(−0.9)', '13.0(−1.7)', '12.8(−1.9)'], ['Sports', '18.3', '16.0', '17.5', '17.3', '16.8(−0.5)', '16.0(−1.3)', '14.5(−2.8)', '12.8(−4.5)', '14.3(−3.0)'], ['IMDB', '18.3', '15.0', '18.5', '17.3', '16.8(−0.5)', '13.8(−3.5)', '17.5(+0.2)', '15.3(−2.0)', '14.5(−2.8)'], ['MR', '27.3', '25.3', '28.0', '26.9', '24.5(−2.4)', '25.5(−1.4)', '25.3(−1.6)', '24.0(−2.9)', '23.3(−3.6)'], ['AVG', '18.2', '17.4', '18.3', '18.0', '15.7(−2.2)', '15.5(−2.5)', '15.3(−2.7)', '14.9(−3.1)', '13.9(−4.1)']] | The column of “Single Task” shows the results of vanilla LSTM, bidirectional LSTM (BiLSTM), stacked LSTM (sLSTM) and the average error rates of previous three models. The column of “Multiple Tasks” shows the results achieved by corresponding multi-task models. From this table, we can see that the performance of most tasks can be improved with a large margin with the help of multi-task learning, in which our model achieves the lowest error rates. More concretely, compared with SP-MTL, ASP-MTL achieves 4.1% average improvement surpassing SP-MTL with 1.0%, which indicates the importance of adversarial learning. It is noteworthy that for FS-MTL, the performances of some tasks are degraded, since this model puts all private and shared information into a unified space. |
Reciprocal Attention Fusion for Visual Question Answering | 1805.04247 | Table 3: Ablation Study on VQAv2 val-set. | ['Cat.', 'Methods', 'Val-set'] | [['I', 'RAF-I(ResNet)', '53.9'], ['[EMPTY]', 'HieCoAtt Lu et\xa0al. ( 2016 ); Goyal et\xa0al. ( 2016 )', '54.6'], ['[EMPTY]', 'RAF-I(ResNeXt)', '58.0'], ['[EMPTY]', 'MCB Fukui et\xa0al. ( 2016 ); Goyal et\xa0al. ( 2016 )', '59.1'], ['[EMPTY]', 'MUTAN Ben-Younes et\xa0al. ( 2017 )', '60.1'], ['II', 'Up-DownAnderson et\xa0al. ( 2018 )', '63.2'], ['[EMPTY]', 'RAF-O(ResNet)', '63.9'], ['III', 'RAF-IO(ResNet-ResNet)', '64.0'], ['[EMPTY]', 'RAF-IO(ResNeXt-ResNet)', '64.2']] | We perform an extensive ablation study of the proposed model on VQAv2 (Goyal et al.). This ablation study helps to better understand the contribution of the different components of our model towards the overall performance on the VQA task. Its objective is to show that when the language features are combined with both image-grid and object-level visual features, the accuracy of the high-level visual reasoning task (i.e., VQA) increases compared with combining language with only image-level or only object-level features. We observe that RAF-I achieves comparable performance in this category. In Category II, the RAF-O model extracts only 36 object-level features but outperforms the models in Category I as well as the Up-Down model of Anderson et al. (2018). When we combine image- and object-level features together in Category III, we observe that the best results are obtained. This supports our hypothesis that the questions relate to objects, object parts and local attributes, which should be attended to jointly for improved VQA performance. |
Reciprocal Attention Fusion for Visual Question Answering | 1805.04247 | Table 1: Comparison of the state-of-the-art methods with our single model performance on VQAv1.0 test-dev and test-standard server. | ['Methods', 'Test-dev Y/N', 'Test-dev No.', 'Test-dev Other', 'Test-dev All', 'Test-standard Y/N', 'Test-standard No.', 'Test-standard Other', 'Test-standard All'] | [['RAF (Ours)', '[BOLD] 85.9', '[BOLD] 41.3', '[BOLD] 58.7', '[BOLD] 68.0', '[BOLD] 85.8', '41.4', '58.9', '[BOLD] 68.2'], ['ReasonNetIlievski and Feng ( 2017 )', '-', '-', '-', '-', '84.0', '38.7', '[BOLD] 60.4', '67.9'], ['MFB+CoAtt+Glove Yu et\xa0al. ( 2018 )', '85.0', '39.7', '57.4', '66.8', '85.0', '39.5', '57.4', '66.9'], ['Dual-MFA Lu et\xa0al. ( 2017 )', '83.6', '40.2', '56.8', '66.0', '83.4', '40.4', '56.9', '66.1'], ['MLB+VG Kim et\xa0al. ( 2016 )', '84.1', '38.0', '54.9', '65.8', '-', '-', '-', '-'], ['MCB+Att+GloVe Fukui et\xa0al. ( 2016 )', '82.3', '37.2', '57.4', '65.4', '-', '-', '-', '-'], ['MLAN Yu et\xa0al. ( 2017 )', '81.8', '41.2', '56.7', '65.3', '81.3', '[BOLD] 41.9', '56.5', '65.2'], ['MUTAN Ben-Younes et\xa0al. ( 2017 )', '84.8', '37.7', '54.9', '65.2', '-', '-', '-', '-'], ['DAN (ResNet) Nam et\xa0al. ( 2016 )', '83.0', '39.1', '53.9', '64.3', '82.8', '38.1', '54.0', '64.2'], ['HieCoAtt Lu et\xa0al. ( 2016 )', '79.7', '38.7', '51.7', '61.8', '-', '-', '-', '62.1'], ['A+C+K+LSTMWu et\xa0al. ( 2016 )', '81.0', '38.4', '45.2', '59.2', '81.1', '37.1', '45.8', '59.4'], ['VQA LSTM Q+I Antol et\xa0al. ( 2015 )', '80.5', '36.8', '43.1', '57.8', '80.6', '36.5', '43.7', '58.2'], ['SANYang et\xa0al. ( 2016 )', '79.3', '36.6', '46.1', '58.7', '-', '-', '-', '58.9'], ['AYN Malinowski et\xa0al. ( 2017 )', '78.4', '36.4', '46.3', '58.4', '78.2', '37.1', '45.8', '59.4'], ['NMN Andreas et\xa0al. ( 2016 )', '81.2', '38.0', '44.0', '58.6', '-', '-', '-', '58.7'], ['DMN+ Xiong et\xa0al. ( 2016 )', '60.3', '80.5', '48.3', '56.8', '-', '-', '-', '60.4'], ['iBowling Zhou et\xa0al. ( 2015 )', '76.5', '35.0', '42.6', '55.7', '76.8', '35.0', '42.6', '55.9']] | Remarkably, our model outperforms all other models in the overall accuracy. We report a significant performance boost of 1.2% on the test-dev set and 0.3% on the test-standard set. It is to be noted that using multiple ensembles and data augmentation with complementary training in Visual Genome QA pairs can increase the accuracy performance of the VQA models. For instance, MCB Fukui et al. , MUTAN Ben-Younes et al. et al. It is interesting to note that except for MFB (7) all other ensemble models are ∼1% less than our reported single model performance. We do not ensemble our model or use data augmentation with complementary dataset as it makes the best results irreproducible and most of the models in the literature do not adopt this strategy. |