Dataset schema: paper (title), paper_id, table_caption, table_column_names, table_content_values, text.
Assertion Detection in Multi-Label Clinical Text using Scope Localization
2005.09246
Table 2: Distribution of Assertion classes in the data.
['[BOLD] Split', '[BOLD] Dataset-I [BOLD] Max', '[BOLD] Dataset-I [BOLD] Min', '[BOLD] Dataset-I [BOLD] Mean', '[BOLD] Dataset-II [BOLD] Max', '[BOLD] Dataset-II [BOLD] Min', '[BOLD] Dataset-II [BOLD] Mean']
[['train', '661', '19', '440', '1028', '82', '610'], ['val', '642', '289', '452', '911', '82', '630'], ['test', '560', '228', '432', '968', '336', '642']]
The annotations were done using the BRAT tool. Rules for annotation were generated after consulting with the radiologist supervising the annotators. Other radiologists were consulted to annotate any mentions that were previously unseen or ambiguous, and also for the final review. For a fair comparison with the baseline, the box predictions from our model are converted to a sequence of labels per token. On first impression, the performance seems to be affected by the quantity of data available for training, with the best performance on the present class and the worst on the AWSE class. After further analysis, it appears that the scope lengths found in the training set are also a crucial factor. As shown, model performance for the present class declines for scope lengths 7, 10, and 20, which reflects the sparsity of this class at these scope lengths in the training set. In contrast, the model performs well on the hypothetical class at scope length 7, reflecting the better distribution of this class at this scope relative to other scopes.
Assertion Detection in Multi-Label Clinical Text using Scope Localization
2005.09246
Table 2: Distribution of Assertion classes in the data.
['[BOLD] Class', '[BOLD] Dataset-I [BOLD] Train', '[BOLD] Dataset-I [BOLD] Val', '[BOLD] Dataset-I [BOLD] Test', '[BOLD] Dataset-II [BOLD] Train', '[BOLD] Dataset-II [BOLD] Val', '[BOLD] Dataset-II [BOLD] Test']
[['1', '3.36±2.7', '3.23±2.59', '3.39±2.95', '3.48±2.15', '3.38±2.06', '3.48±2.16'], ['2', '2.79±1.36', '2.68±1.25', '2.68±1.07', '3.15±2.26', '3.10±2.18', '3.09±2.13'], ['3', '2.85±1.04', '2.87±0.87', '2.68±0.65', '3.24±2.92', '3.20±1.95', '2.60±1.74'], ['4', '5.05±2.44', '4.5±2.44', '5.0±2.43', '2.19±1.21', '2.60±0.92', '2.84±2.47'], ['5', '3.14±3.69', '1.67±0.47', '3.27±2.41', '2.96±2.83', '2.40±1.82', '3.27±2.41'], ['6', '2.47±0.72', '1.67±0.47', '5.5±2.5', '1.71±1.35', '2.00±1.73', '1.0±0.0']]
The annotations were done using the BRAT tool. Rules for annotation were generated after consulting with the radiologist supervising the annotators. Other radiologists were consulted to annotate any mentions that were previously unseen or ambiguous, and also for the final review. For a fair comparison with the baseline, the box predictions from our model are converted to a sequence of labels per token. On first impression, the performance seems to be affected by the quantity of data available for training, with the best performance on the present class and the worst on the AWSE class. After further analysis, it appears that the scope lengths found in the training set are also a crucial factor. As shown, model performance for the present class declines for scope lengths 7, 10, and 20, which reflects the sparsity of this class at these scope lengths in the training set. In contrast, the model performs well on the hypothetical class at scope length 7, reflecting the better distribution of this class at this scope relative to other scopes.
Assertion Detection in Multi-Label Clinical Text using Scope Localization
2005.09246
Table 2: Distribution of Assertion classes in the data.
['[BOLD] Class', '[BOLD] Model [BOLD] Baseline', '[BOLD] Model [BOLD] Baseline', '[BOLD] Model [BOLD] Scope Localization model', '[BOLD] Model [BOLD] Scope Localization model']
[['[BOLD] Class', '[BOLD] Dataset-I', '[BOLD] Dataset-II', '[BOLD] Dataset-I', '[BOLD] Dataset-II'], ['Present', '0.97', '0.92', '0.90', '0.84'], ['Absent', '0.27', '0.34', '0.84', '0.93'], ['Conditional', '0.39', '0.45', '0.74', '0.65'], ['Hypothetical', '0.76', '0.69', '0.87', '0.75'], ['Possibility', '0.0', '0.07', '0.0', '0.13'], ['AWSE', '0.42', '0.39', '0.60', '0.0'], ['None', '0.81', '0.89', '0.96', '0.95'], ['Macro', '0.52', '0.53', '0.70', '0.61']]
The annotations were done using the BRAT tool. Rules for annotation were generated after consulting with the radiologist supervising the annotators. Other radiologists were consulted to annotate any mentions that were previously unseen or ambiguous, and also for the final review. For a fair comparison with the baseline, the box predictions from our model are converted to a sequence of labels per token. On first impression, the performance seems to be affected by the quantity of data available for training, with the best performance on the present class and the worst on the AWSE class. After further analysis, it appears that the scope lengths found in the training set are also a crucial factor. As shown, model performance for the present class declines for scope lengths 7, 10, and 20, which reflects the sparsity of this class at these scope lengths in the training set. In contrast, the model performs well on the hypothetical class at scope length 7, reflecting the better distribution of this class at this scope relative to other scopes.
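For concreteness, here is a minimal sketch of the kind of span-to-token conversion described above; the function name, label scheme, and score-based tie-breaking are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: convert predicted scope "boxes" (start, end, label, score) into
# a per-token label sequence so span predictions can be scored against a token-level
# baseline. Overlapping boxes are resolved in favour of the higher-scored box.
def boxes_to_token_labels(num_tokens, boxes, outside_label="None"):
    """boxes: list of (start, end, label, score) with end exclusive."""
    labels = [outside_label] * num_tokens
    # lower-scored boxes first, so higher-scored boxes overwrite them on overlap
    for start, end, label, score in sorted(boxes, key=lambda b: b[3]):
        for i in range(max(0, start), min(num_tokens, end)):
            labels[i] = label
    return labels

# Example: a 6-token sentence with two predicted assertion scopes
print(boxes_to_token_labels(6, [(0, 2, "present", 0.9), (3, 6, "absent", 0.8)]))
# ['present', 'present', 'None', 'absent', 'absent', 'absent']
```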
DisSent: Learning Sentence Representations from Explicit Discourse Relations
1710.04334
Table 9: Ngram Bag-of-words baseline sentence embeddings performance on DisSent training task: test recall / precision for each discourse marker on the classification task, and overall accuracy. Average metric reports the weighted average of all classes.
['Marker', 'All Prec', 'All Rec', 'Books 8 Prec', 'Books 8 Rec', 'Books 5 Prec', 'Books 5 Rec']
[['and', '60.1', '65.0', '65.6', '70.1', '70.1', '71.4'], ['but', '49.9', '65.3', '55.2', '69.4', '59.7', '69.9'], ['because', '34.7', '10.2', '42.1', '11.1', '42.8', '10.6'], ['if', '54.6', '56.9', '58.8', '56.4', '64.4', '60.0'], ['when', '43.2', '40.1', '52.1', '52.2', '58.4', '54.3'], ['so', '35.5', '11.3', '38.5', '11.0', '—', '—'], ['though', '40.6', '20.8', '56.2', '25.2', '—', '—'], ['before', '47.8', '29.1', '56.6', '35.4', '—', '—'], ['as', '51.9', '63.1', '—', '—', '—', '—'], ['while', '33.4', '11.6', '—', '—', '—', '—'], ['after', '41.0', '17.6', '—', '—', '—', '—'], ['although', '11.9', '0.4', '—', '—', '—', '—'], ['still', '34.7', '2.5', '—', '—', '—', '—'], ['also', '16.7', '0.4', '—', '—', '—', '—'], ['then', '36.2', '2.1', '—', '—', '—', '—'], ['Average', '40.2', '40.3', '46.2', '44.5', '53.3', '50.7'], ['Accuracy', '51.8', '51.8', '58.1', '58.1', '63.3', '63.3']]
As a reference point for training-task performance, we present baseline results. Note that a model which simply chose the most common class would achieve 21.79% accuracy on the ALL task, 28.35% on the Books 8 task, and 31.87% on the Books 5 task. Using either unigram, bigram, and trigram bag-of-words features or Arora et al.'s baseline sentence representations as input to a logistic regression results in much lower performance than our DisSent classifier.
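As an illustration of this kind of baseline, the following is a minimal scikit-learn sketch of an n-gram bag-of-words logistic regression classifier; the toy data, separator token, and hyperparameters are placeholders, not the authors' exact setup.

```python
# Sketch of an n-gram bag-of-words + logistic regression baseline for discourse
# marker classification (scikit-learn; data and hyperparameters are illustrative).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each example is the two sentences (with the marker removed); the label is the marker.
train_texts = ["she was tired <SEP> she kept walking",
               "it rained <SEP> the game was cancelled"]
train_labels = ["but", "so"]

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 3)),   # unigram, bigram and trigram features
    LogisticRegression(max_iter=1000),
)
clf.fit(train_texts, train_labels)
print(clf.predict(["he stayed home <SEP> he was sick"]))
```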
DisSent: Learning Sentence Representations from Explicit Discourse Relations
1710.04334
Table 5: Discourse classification task performance: Unweighted average F1 across discourse markers on the test set, and overall accuracy. Ngram-bow is a bag-of-words model built on mixture of ngram features. GloVe-bow averages word embedding with correction to frequency Arora et al. (2017). BiLSTM is the DisSent sentence encoder model. BERT is finetuned on all of the DisSent tasks.
['Model', 'All F1', 'All Acc', 'Books 8 F1', 'Books 8 Acc', 'Books 5 F1', 'Books 5 Acc']
[['GloVe-bow', '17.1', '41.8', '27.6', '47.3', '41.7', '52.5'], ['Ngram-bow', '28.1', '51.8', '44.0', '58.1', '54.1', '63.3'], ['BiLSTM', '47.2', '67.5', '64.4', '73.5', '72.1', '77.3'], ['BERT', '60.1', '77.5', '76.2', '82.9', '82.6', '86.1']]
As shown in Table 5, we achieve high levels of test performance for all discourse markers. (Though it is interesting that because, perhaps the conceptually deepest relation, is also systematically the hardest for our model.) The larger the set of discourse markers, the more difficult the task becomes, and we therefore see lower test accuracy despite the larger dataset size. We conjecture that as we increase the number of discourse markers, we also increase the ambiguity between them (semantic overlap in the discourse markers' meanings), which may further explain the drop in performance. We provide per-discourse-marker performance in the Appendix.
DisSent: Learning Sentence Representations from Explicit Discourse Relations
1710.04334
Table 6: SentEval Task Results Using Fixed Sentence Encoder. We report the best results for generalization tasks. † indicates models that we trained. FastSent, FastSent + AE Hill et al. (2016), SkipThought Kiros et al. (2015), SkipThought-LN, DictRep (bow), and InferSent are reported from Conneau et al. (2017). LSMTL is reported from Subramanian et al. (2018). Globally best results are shown in bold, best DisSent results are underlined.
['Model', 'MR', 'CR', 'SUBJ', 'MPQA', 'SST', 'TREC', 'SICK-R', 'SICK-E', 'MRPC']
[['Self-supervised training methods', 'Self-supervised training methods', 'Self-supervised training methods', 'Self-supervised training methods', 'Self-supervised training methods', 'Self-supervised training methods', 'Self-supervised training methods', 'Self-supervised training methods', 'Self-supervised training methods', 'Self-supervised training methods'], ['DisSent Books 5†', '80.2', '85.4', '93.2', '90.2', '82.8', '91.2', '0.845', '83.5', '76.1'], ['DisSent Books 8†', '79.8', '85.0', '93.4', '90.5', '83.9', '93.0', '0.854', '83.8', '76.1'], ['DisSent Books ALL†', '80.1', '84.9', '93.6', '90.1', '84.1', '[BOLD] 93.6', '0.849', '83.7', '75.0'], ['Disc BiGRU', '—', '—', '88.6', '—', '—', '81.0', '—', '—', '71.6'], ['Unsupervised training methods', 'Unsupervised training methods', 'Unsupervised training methods', 'Unsupervised training methods', 'Unsupervised training methods', 'Unsupervised training methods', 'Unsupervised training methods', 'Unsupervised training methods', 'Unsupervised training methods', 'Unsupervised training methods'], ['FastSent', '70.8', '78.4', '88.7', '80.6', '—', '76.8', '—', '—', '72.2'], ['FastSent + AE', '71.8', '76.7', '88.8', '81.5', '—', '80.4', '—', '—', '71.2'], ['Skipthought', '76.5', '80.1', '93.6', '87.1', '82.0', '92.2', '0.858', '82.3', '73.0'], ['Skipthought-LN', '79.4', '83.1', '93.7', '89.3', '82.9', '88.4', '0.858', '79.5', '—'], ['Supervised training methods', 'Supervised training methods', 'Supervised training methods', 'Supervised training methods', 'Supervised training methods', 'Supervised training methods', 'Supervised training methods', 'Supervised training methods', 'Supervised training methods', 'Supervised training methods'], ['DictRep (bow)', '76.7', '78.7', '90.7', '87.2', '—', '81.0', '—', '—', '—'], ['InferSent', '81.1', '86.3', '92.4', '90.2', '[BOLD] 84.6', '88.2', '0.884', '86.1', '76.2'], ['Multi-task training methods', 'Multi-task training methods', 'Multi-task training methods', 'Multi-task training methods', 'Multi-task training methods', 'Multi-task training methods', 'Multi-task training methods', 'Multi-task training methods', 'Multi-task training methods', 'Multi-task training methods'], ['LSMTL', '[BOLD] 82.5', '[BOLD] 87.7', '[BOLD] 94.0', '[BOLD] 90.9', '83.2', '93.0', '[BOLD] 0.888', '[BOLD] 87.8', '[BOLD] 78.6']]
Similar generalization performance was achieved when training on 5, 8, and all 15 discourse markers.
DisSent: Learning Sentence Representations from Explicit Discourse Relations
1710.04334
Table 7: Discourse Generalization Tasks using PDTB: We report test accuracy for sentence embedding and state-of-the-art models.
['Model', 'IMP', 'MVU']
[['Sentence Encoder Models', 'Sentence Encoder Models', 'Sentence Encoder Models'], ['SkipThought Kiros et\xa0al. ( 2015 )', '9.3', '57.2'], ['InferSent Conneau et\xa0al. ( 2017 )', '39.3', '84.5'], ['Patterson and Kehler ( 2013 )', '—', '86.6'], ['DisSent Books 5', '40.7', '86.5'], ['DisSent Books 8', '41.4', '[BOLD] 87.9'], ['DisSent Books ALL', '[BOLD] 42.9', '87.6'], ['Fine-tuned Models', 'Fine-tuned Models', 'Fine-tuned Models'], ['BERT', '52.7', '80.5'], ['BERT + MNLI', '53.7', '80.7'], ['BERT + SNLI + MNLI', '51.3', '79.8'], ['BERT + DisSent Books 5', '[BOLD] 54.7', '81.6'], ['BERT + DisSent Books 8', '52.4', '80.6'], ['BERT + DisSent Books ALL', '53.2', '[BOLD] 81.8'], ['Previous Single Task Models', 'Previous Single Task Models', 'Previous Single Task Models'], ['Word Vectors Qin et\xa0al. ( 2017 )', '36.9', '74.8'], ['Lin et\xa0al. ( 2009 ) + Brown Cluster', '40.7', '—'], ['Adversarial Net (Qin et\xa0al., 2017 )', '[BOLD] 46.2', '—']]
Much to our surprise, fine-tuned BERT models are not able to perform better than the BiLSTM sentence encoder model. We leave explorations of this phenomenon to future work.
DisSent: Learning Sentence Representations from Explicit Discourse Relations
1710.04334
Table 10: Corrected GloVe Bag-of-words sentence embeddings performance on DisSent training task: test recall / precision for each discourse marker on the classification task, and overall accuracy. Average metric reports the weighted average of all classes.
['Marker', 'All Prec', 'All Rec', 'Books 8 Prec', 'Books 8 Rec', 'Books 5 Prec', 'Books 5 Rec']
[['and', '46.9', '59.4', '52.9', '63.6', '58.0', '64.3'], ['but', '38.1', '57.9', '43.5', '62.3', '48.9', '62.4'], ['because', '24.1', '0.5', '20.2', '0.3', '27.7', '0.47'], ['if', '41.8', '37.1', '46.2', '37.9', '50.5', '38.2'], ['when', '36.8', '25.8', '45.6', '40.0', '58.3', '41.3'], ['so', '37.0', '2.5', '39.5', '2.9', '—', '—'], ['though', '27.2', '1.4', '29.7', '1.3', '—', '—'], ['before', '42.0', '10.0', '48.8', '11.8', '—', '—'], ['as', '43.4', '55.6', '—', '—', '—', '—'], ['while', '29.1', '3.4', '—', '—', '—', '—'], ['after', '37.1', '4.8', '—', '—', '—', '—'], ['although', '0.0', '0.0', '—', '—', '—', '—'], ['still', '0.0', '0.0', '—', '—', '—', '—'], ['also', '0.0', '0.0', '—', '—', '—', '—'], ['then', '0.0', '0.0', '—', '—', '—', '—'], ['Avg', '50.1', '51.1', '57.5', '56.4', '63.0', '62.2'], ['Accuracy', '41.8', '41.8', '47.3', '47.3', '52.5', '52.5']]
As a reference point for training-task performance, we present baseline results. Note that a model which simply chose the most common class would achieve 21.79% accuracy on the ALL task, 28.35% on the Books 8 task, and 31.87% on the Books 5 task. Using either unigram, bigram, and trigram bag-of-words features or Arora et al.'s baseline sentence representations as input to a logistic regression results in much lower performance than our DisSent classifier.
Exploring Transformers for Large-Scale Speech Recognition
2005.09684
Table 4: Results of deeper Transformer models. L denotes the model depth.
['Model', 'IC', 'Size(M)', '[ITALIC] L', 'Context', 'dev', 'eval']
[['BLSTM', '–', '55.0', '6', '[-∞,∞]', '19.5', '12.7'], ['LC-BLSTM', '–', '55.0', '6', '[-1, 40]', '20.2', '12.9'], ['[EMPTY]', '✗', '53.5', '12', '[-∞,∞]', '18.4', '11.9'], ['Transformer', '✗', '97.0', '12', '[-∞,∞]', '18.3', '–'], ['[EMPTY]', '✗', '101.7', '24', '[-∞,∞]', '17.8', '11.7'], ['[EMPTY]', '✗', '53.5', '12', '[-40, 40]', '21.0', '12.9'], ['Tranformer-XL', '✗', '101.7', '24', '[-40, 40]', '19.1', '12.4'], ['[EMPTY]', '✓', '50.5', '12', '[-40, 40]', '20.4', '12.9'], ['[EMPTY]', '✓', '95.5', '24', '[-40, 40]', '19.3', '12.6'], ['[EMPTY]', '✓', '185.7', '48', '[-40, 40]', '18.5', '12.2']]
We observe a similar trend on the eval set. We also investigated the trade-off between increasing the dimension of the hidden state in the self-attention layers and increasing the depth of the model. For the offline 12-layer Transformer, we increased dk to 960, which resulted in a model with 97 million parameters. However, we only achieved a very marginal improvement on the dev set. The gain from increasing the model depth to 24 layers is more considerable on the dev set, but the gain on the eval set is still small. It is possible that the model is overfitting, and increasing the dropout ratio may yield further gains. As for Transformer-XL, we can obtain accuracy gains by increasing the depth up to 48 layers. However, the gains are not as large as we had expected, and regularizing the deep Transformers may result in further improvements.
Exploring Transformers for Large-Scale Speech Recognition
2005.09684
Table 1: Results of the Transformers with convolution layers in the offline mode. The number of heads is 4, and the number of layers is 12. IC stands for interleaved 1D convolution. All models have around 50 million (M) model parameters.
['Model', 'IC', 'Size(M)', 'Encode layer', '[ITALIC] dk', 'dev']
[['[EMPTY]', '✗', '51.5', 'Linear', '620', '34.7'], ['Transformer', '✓', '50.0', 'Linear', '512', '20.2'], ['[EMPTY]', '✓', '51.5', 'VGG', '512', '19.6'], ['[EMPTY]', '✗', '52.0', 'VGG', '620', '19.4']]
The self-attention operation cannot maintain the monotonicity of the input sequence, which is particularly harmful for a time-synchronous acoustic model such as the hybrid model studied in this paper. The kernel size for the 1D convolution is 3, while the VGG net has 4 layers of 2D convolutions with 3x3 filters. When the VGG encoder was applied, we used features at a 10 ms frame rate and employed a max-pooling layer to down-sample the features by a factor of 2. Both the interleaved convolution and the VGG net can significantly improve the accuracy of Transformer models. In addition, when applying a VGG net as the encoder, it is more beneficial to remove the interleaved convolutions and instead increase the model dimension of the self-attention layers if the model size is constrained to be the same.
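A rough PyTorch sketch of a VGG-style convolutional front-end of this kind (4 layers of 3x3 2D convolutions with one max-pooling stage that halves the frame rate); the channel counts, feature dimension, and output projection are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch of a VGG-style 2D-convolution front-end for acoustic features
# (assumed PyTorch; layer sizes are illustrative, not the paper's exact config).
import torch
import torch.nn as nn

class VGGFrontEnd(nn.Module):
    def __init__(self, in_channels=1, out_dim=512, n_mels=80):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 2)),                       # downsample time (and freq) by 2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.proj = nn.Linear(64 * (n_mels // 2), out_dim)   # flatten channels x freq

    def forward(self, feats):                 # feats: (batch, time, n_mels)
        x = self.conv(feats.unsqueeze(1))     # (batch, 64, time//2, n_mels//2)
        x = x.permute(0, 2, 1, 3).flatten(2)  # (batch, time//2, 64 * n_mels//2)
        return self.proj(x)                   # (batch, time//2, out_dim)

print(VGGFrontEnd()(torch.randn(2, 100, 80)).shape)   # torch.Size([2, 50, 512])
```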
Exploring Transformers for Large-Scale Speech Recognition
2005.09684
Table 2: Results of the 12-layer Transformer model with different number of attention heads. VGG net was used as the encoding layer for all the Transformers. N denotes the number of attention heads, and dk is the model dimension as in Eq(1).
['Model', 'IC', 'Size (M)', '[ITALIC] N', '[ITALIC] dk', 'dev']
[['[EMPTY]', '✓', '50.5', '4', '512', '19.6'], ['[EMPTY]', '✓', '50.5', '8', '512', '19.7'], ['[EMPTY]', '✓', '50.5', '16', '512', '18.8'], ['Transformer', '✗', '52.0', '4', '620', '19.4'], ['[EMPTY]', '✗', '53.5', '8', '624', '18.4'], ['[EMPTY]', '✗', '53.5', '16', '624', '18.6'], ['BLSTM', '–', '55.0', '–', '–', '19.5']]
With the interleaved convolution, the Transformer with 16 attention heads achieved the lowest WER, while for the vanilla Transformer, 8 attention heads are sufficient. We did not further increase the number of attention heads in our experiments due to memory constraints. Compared to the BLSTM with around 55 million model parameters, the Transformer can achieve around 6% relative WER reduction with a similar model size.
Exploring Transformers for Large-Scale Speech Recognition
2005.09684
Table 3: Results of streaming Transformer models. The number of layers is 12.
['Model', 'IC', 'Size (M)', '[ITALIC] N', '[ITALIC] dk', 'Context', 'dev']
[['[EMPTY]', '✓', '50.5', '16', '512', '[-∞,∞]', '18.8'], ['[EMPTY]', '✓', '50.5', '16', '512', '[-∞,16]', '20.6'], ['[EMPTY]', '✓', '50.5', '16', '512', '[-∞,28]', '20.7'], ['[EMPTY]', '✓', '50.5', '16', '512', '[-∞,40]', '20.0'], ['Transformer', '✗', '53.5', '8', '624', '[-∞,∞]', '18.4'], ['[EMPTY]', '✗', '53.5', '8', '624', '[-∞,4]', '23.0'], ['[EMPTY]', '✗', '53.5', '8', '624', '[-∞,16]', '21.1'], ['[EMPTY]', '✗', '53.5', '8', '624', '[-∞,28]', '21.8'], ['[EMPTY]', '✗', '53.5', '8', '624', '[-∞,40]', '19.8'], ['Transformer-XL', '✓', '50.5', '16', '512', '[-40, 40]', '20.4'], ['[EMPTY]', '✗', '53.5', '8', '624', '[-40, 40]', '21.0'], ['BLSTM', '–', '55.0', '–', '–', '[−∞,∞]', '19.5'], ['LC-BLSTM', '–', '55.0', '–', '–', '[-1, 40]', '20.2']]
The previous experiments focused on the offline scenario. In this section, we evaluate the accuracy of Transformers in the streaming condition. For instance, [−∞,40] corresponds to looking ahead 3 frames for each self-attention layer in a 12-layer Transformer without interleaved convolution, with an additional 4 frames of latency from the VGG encoder. Since our Transformers operate at a 20 ms frame rate, 40 frames correspond to 800 ms of latency. For the attention-mask-based Transformer, we did not limit the left context, so it is marked as −∞, while a context window of [−∞,∞] refers to the offline system. For Transformer-XL, we set the chunk size to 40, and since the model takes the hidden states from the previous chunk as features, we denote the context size as [−40,40]. Note that there is no overlap between chunks during either training or inference, and the model emits 40 outputs at a time during inference. For LC-BLSTM, the chunk size is also 40, and because it takes the previous hidden state as the representation of the history, we arguably denote the context size as [−1,40]. The chunks in LC-BLSTM are overlapped by 20 frames, so it only emits 20 outputs at a time during inference. First, without the interleaved convolution, the streaming Transformers based on attention masks degrade recognition accuracy by 18% - 25% relative to the offline baseline model in the low-latency condition. With the interleaved convolution, the accuracy loss is much smaller. When the latency constraint is less tight, the effect of the interleaved convolution layers diminishes. Second, Transformer-XL still lags behind the vanilla Transformer with a context window of [−∞,40]. This is not surprising, as in Transformer-XL the previous chunk is only an approximation of the full history. Third, the gap between the streaming Transformer (or Transformer-XL) and its offline counterpart is larger than that between LC-BLSTM and BLSTM. The offline Transformer outperforms BLSTM by a considerable margin, while Transformer-XL is only comparable with LC-BLSTM in terms of WER. Though the Transformer with an attention mask of [−∞,40] can outperform LC-BLSTM, it is not computationally feasible during inference. This observation aligns well with the argument that Transformers are more powerful at capturing long-term correlations in sequential signals. In a scenario with limited feature context, however, Transformers are hindered from fully exploiting their modeling power.
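The masking idea behind these context windows can be sketched as follows; this is an illustration of limited-right-context attention masking in PyTorch, not the authors' code.

```python
# Sketch of a per-layer attention mask for a streaming Transformer: each frame may
# attend to the full past and to at most `right_context` future frames.
import torch

def streaming_mask(num_frames, right_context):
    """Boolean mask; True = key position may be attended to by the query position."""
    idx = torch.arange(num_frames)
    # query i can see key j iff j <= i + right_context (left context unlimited)
    return idx.unsqueeze(1) + right_context >= idx.unsqueeze(0)

mask = streaming_mask(6, right_context=2)
print(mask.int())   # row i shows which key frames query frame i can attend to
```

Stacking 12 such layers with a 3-frame look-ahead each, plus the 4-frame latency of the VGG encoder, yields the [−∞,40] configuration discussed above.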
A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning
1908.05514
Table 3: Ablation tests of different architecture choices using MTMSN\textscLARGE.
['Model', 'EM', 'F1']
[['MTMSN', '76.7', '80.5'], ['w/o Q/P Vectors', '75.1', '79.2'], ['w/o CLS Vector', '74.0', '78.4'], ['Q/P Vectors Using Last Hidden', '76.5', '80.2'], ['w/o Gated Span Prediction', '75.8', '79.7'], ['Combine Add/Sub with Negation', '75.5', '79.4']]
Architecture ablation: First, we investigate the effects of some “global vectors” used in our model. Specifically, we find that removing the question and passage vectors from all involved computations leads to a 1.3% drop in F1. Ablating the representation of the [CLS] token leads to even worse results. We also try to use the last hidden representation (denoted as M3) to calculate the question and passage vectors, but find that it does not work. Next, we remove the gating mechanism used during span prediction and observe a nearly 0.8% decline on both metrics. Finally, we share parameters between the arithmetic expression component and the negation component, and find that performance drops by 1.1% in F1.
A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning
1908.05514
Table 1: The performance of MTMSN and other competing approaches on DROP dev and test set.
['Model', 'Dev EM', 'Dev F1', 'Test EM', 'Test F1']
[['Heuristic Baseline\xa0Dua et\xa0al. ( 2019 )', '4.28', '8.07', '4.18', '8.59'], ['Semantic Role Labeling\xa0Carreras and Màrquez ( 2004 )', '11.03', '13.67', '10.87', '13.35'], ['BiDAF\xa0Seo et\xa0al. ( 2017 )', '26.06', '28.85', '24.75', '27.49'], ['QANet+ELMo\xa0Yu et\xa0al. ( 2018 )', '27.71', '30.33', '27.08', '29.67'], ['BERT \\textsc [ITALIC] BASE\xa0Devlin et\xa0al. ( 2019 )', '30.10', '33.36', '29.45', '32.70'], ['NAQANet\xa0Dua et\xa0al. ( 2019 )', '46.20', '49.24', '44.07', '47.01'], ['NABERT \\textsc [ITALIC] BASE', '55.82', '58.75', '-', '-'], ['NABERT \\textsc [ITALIC] LARGE', '64.61', '67.35', '-', '-'], ['MTMSN \\textsc [ITALIC] BASE', '68.17', '72.81', '-', '-'], ['MTMSN \\textsc [ITALIC] LARGE', '[BOLD] 76.68', '[BOLD] 80.54', '[BOLD] 75.85', '[BOLD] 79.88'], ['Human Performance\xa0Dua et\xa0al. ( 2019 )', '-', '-', '92.38', '95.98']]
MTMSN outperforms all existing approaches by a large margin and sets new state-of-the-art results, achieving an EM score of 75.85 and an F1 score of 79.88 on the test set. Since our best model utilizes BERT\textscLARGE as the encoder, we compare MTMSN\textscLARGE with the NABERT\textscLARGE baseline. As we can see, our model obtains an absolute gain of 12.07/13.19 EM/F1 over the baseline, demonstrating the effectiveness of our approach. However, as humans achieve 95.98 F1 on the test set, our results suggest that there is still room for improvement.
A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning
1908.05514
Table 2: Ablation tests of base and large models on the DROP dev set.
['Model', 'BASE EM', 'BASE F1', 'LARGE EM', 'LARGE F1']
[['MTMSN', '68.2', '72.8', '76.7', '80.5'], ['w/o Add/Sub', '46.7', '51.3', '53.8', '58.0'], ['w/o Count', '62.5', '66.4', '71.8', '75.6'], ['w/o Negation', '59.4', '63.6', '67.2', '70.9'], ['w/o Multi-Span', '67.5', '70.7', '75.6', '78.4'], ['w/o Reranking', '66.9', '71.2', '74.9', '78.7']]
Component ablation: To analyze the effect of the proposed components, we conduct ablation studies on the development set. Predicting count numbers is also an important component that contributes nearly 5% gain on both metrics. Moreover, enhancing the model with the negation type significantly increases F1 by roughly 9 points for both models. In brief, the above results show that multi-type answer prediction is vitally important for handling different forms of answers, especially in cases where discrete reasoning abilities are required.
A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning
1908.05514
Table 4: Performance breakdown of NABERT\textscLARGE and MTMSN\textscLARGE by gold answer types.
['Type', '(%)', 'NABERT EM', 'NABERT F1', 'MTMSN EM', 'MTMSN F1']
[['Date', '1.6', '55.7', '60.8', '55.7', '69.0'], ['Number', '61.9', '63.8', '64.0', '80.9', '81.1'], ['Single Span', '31.7', '75.9', '80.6', '77.5', '82.8'], ['Multi Span', '4.8', '0', '22.7', '25.1', '62.8']]
Performance breakdown: We now provide a quantitative analysis by showing the performance breakdown on the development set. Moreover, significant improvements are also obtained in the multi-span category, where the F1 score increases by more than 40 points. This result further proves the validity of our multi-span extraction method.
A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning
1908.05514
Table 5: Performance breakdown of NABERT\textscLARGE and MTMSN\textscLARGE by predicted answer types.
['Type', 'NABERT (%)', 'NABERT EM', 'NABERT F1', 'MTMSN (%)', 'MTMSN EM', 'MTMSN F1']
[['Span', '43.0', '67.9', '74.2', '42.7', '72.2', '81.0'], ['Add/Sub', '43.6', '62.0', '62.1', '32.4', '78.1', '78.2'], ['Count', '13.4', '62.4', '62.4', '13.4', '70.4', '70.4'], ['Negation', '0', '0', '0', '11.5', '96.3', '96.3']]
As shown in the Table, the main improvements are due to the addition/subtraction and negation types. We conjecture that there are two reasons for these improvements. First, our proposed expression reranking mechanism helps validate candidate expressions. Second, a new inductive bias that enables the model to perform logical negation has been introduced. The impressive performance on the negation type confirms our judgement, and suggests that the model is able to find most of negation operations. In addition, we also observe promising gains brought by the span and count types. We think the gains are mainly due to the multi-span extraction method as well as architecture designs.
Vector Embedding of Wikipedia Concepts and Entities
1702.03470
Table 4: Comparing the results in Phrase Similarity dataset for common entries between all approaches. Rho is Spearmans’s correlation.
['Datasets #', 'Datasets Dataset Name', 'Datasets #Pairs', 'Wikipedia Miner Rho', 'HitSim Rho', 'ConVec Rho', 'ConVec (Heuristic) Rho', 'ConVec (Only Anchors) Rho']
[['1', 'WS-REL', '130', '0.6662', '0.5330', '0.6022', '0.6193', '0.6515'], ['2', 'SIMLEX', '406', '0.2405', '0.3221', '0.3011', '0.3087', '0.2503'], ['3', 'WS-MAN ', '224', '0.6762', '0.6854', '0.6331', '0.6371', '0.6554'], ['4', 'WS-411 ', '314', '0.7311', '0.7131', '0.7126', '0.7136', '0.7308'], ['5', 'WS-SIM', '108', '0.7538', '0.6968', '0.7492', '0.7527', '0.7596'], ['6', 'RWD', '268', '0.3072', '0.2906', '0.1989', '0.1864', '0.1443'], ['7', 'WS-ALL', '192', '0.6656', '0.6290', '0.6372', '0.6482', '0.6733'], ['8', 'RG', '20', '0.7654', '0.7805', '0.6647', '0.7338', '0.6301'], ['9', 'MC', '9', '0.3667', '0.5667', '0.2667', '0.2167', '0.2833'], ['10', 'MTurk', '122', '0.6627', '0.5175', '0.6438', '0.6453', '0.6432'], ['-', 'Average', '179', '0.5333', '0.5216', '0.5114', '0.5152', '0.5054']]
We also compared the results with another structure-based similarity approach called HitSim. The comparable result of our approach to structure-based methods is further evidence that we embed the Wikipedia link structure properly. The result of the heuristic-based approach is slightly better than our base model. This shows that we could increase the coverage without sacrificing accuracy, i.e., with the proposed heuristic we have a vector representation for more Wikipedia pages. The results for the anchors-only version of ConVec are slightly lower on average, which shows it is better to learn the vectors of Wikipedia concepts in the context of other words (words that are not anchors) and, as a result, to have the same vector space for both concepts and words.
Vector Embedding of Wikipedia Concepts and Entities
1702.03470
Table 2: Comparing the results of three different versions of ConVec (trained on Wikipedia 2.1B tokens) with Google Freebase pre-trained vectors over Google 100B tokens news dataset in the Phrase Analogy task. The Accuracy (All), shows the coverage and performance of each approach for answering questions. The accuracy for common questions (Accuracy (Commons)), is for fair comparison of each approach. #phrases shows the number of top frequent words of each approach that are used to calculate the accuracy. #found is the number of questions that all 4 words of them are present in the approach dictionary.
['Embedding Name', '#phrases', 'Accuracy (All) #found', 'Accuracy (All) Accuracy', 'Accuracy (Commons) #found', 'Accuracy (Commons) Accuracy']
[['Google Freebase', 'Top 30,000', '1048', '55.7%', '89', '52.8%'], ['Google Freebase', 'Top 300,000', '1536', '47.0%', '800', '48.5%'], ['Google Freebase', 'Top 3,000,000', '1838', '42.1%', '1203', '42.7%'], ['ConVec', 'Top 30,000', '202', '81.7%', '89', '82.0%'], ['ConVec', 'Top 300,000', '1702', '68.0%', '800', '72.1%'], ['ConVec', 'Top 3,000,000', '2238', '56.4%', '1203', '61.1%'], ['ConVec (Fine Tuned)', 'Top 30,000', '202', '80.7%', '89', '79.8%'], ['ConVec (Fine Tuned)', 'Top 300,000', '1702', '68.3%', '800', '73.0%'], ['ConVec (Fine Tuned)', 'Top 3,000,000', '2238', '56.8%', '1203', '63.6%'], ['ConVec (Heuristic)', 'Top 30,000', '242', '81.4%', '89', '80.9%'], ['ConVec (Heuristic)', 'Top 300,000', '1804', '65.6%', '800', '68.9%'], ['ConVec (Heuristic)', 'Top 3,000,000', '2960', '46.6%', '1203', '58.7%']]
The first accuracy compares the coverage and performance of each approach over all questions in the test dataset (Accuracy All). The second accuracy compares the methods over only the common questions (Accuracy Commons); only the questions shared between the methods are used in this scenario. The results of ConVec (Heuristic) on the common questions argue that this heuristic does not have a significant impact on the quality of the base ConVec model; it just improves the coverage (adds more concepts to the list of concept vectors). Moreover, ConVec is trained on only 2.1B tokens yet compares favourably with the Google Freebase vectors trained on a 100B-token news corpus, and consequently the quality of the training corpus is more important than its size.
Vector Embedding of Wikipedia Concepts and Entities
1702.03470
Table 3: Comparing the results in Phrase Similarity dataset. Rho is Spearman’s correlation to the human evaluators. !Found is the number of pairs not found in each approach dataset.
['Datasets #', 'Datasets Dataset Name', 'Datasets #Pairs', 'Wikipedia Miner !Found', 'Wikipedia Miner Rho', 'Google Freebase !Found', 'Google Freebase Rho', 'ConVec !Found', 'ConVec Rho', 'ConVec (Heuristic) !Found', 'ConVec (Heuristic) Rho']
[['1', 'WS-REL ', '251', '114', '0.6564', '87', '0.3227', '104', '0.5594', '57', '0.5566'], ['2', 'SIMLEX ', '961', '513', '0.2166', '369', '0.1159', '504', '0.3406', '357', '0.2152'], ['3', 'WS-SIM ', '200', '83', '0.7505', '58', '0.4646', '81', '0.7524', '41', '0.6101'], ['4', 'RW ', '1182', '874', '0.2714', '959', '0.1777', '753', '0.2678', '469', '0.2161'], ['5', 'WS-ALL ', '349', '142', '0.6567', '116', '0.4071', '136', '0.6348', '74', '0.5945'], ['6', 'RG ', '62', '35', '0.7922', '14', '0.3188', '36', '0.6411', '25', '0.5894'], ['7', 'MC ', '28', '15', '0.7675', '9', '0.3336', '16', '0.2727', '12', '0.4706'], ['8', 'MTurk ', '283', '155', '0.6558', '123', '0.5132', '128', '0.5591', '52', '0.5337'], ['-', 'Average', '414', '241', '0.4402', '217', '0.2693', '219', '0.4391', '136', '0.3612']]
In these datasets, each row consists of two words with their relatedness as assigned by human judges. Spearman's correlation is used to compare the results of the different approaches with the human evaluations. These datasets contain words rather than Wikipedia concepts, so we replaced each word with its corresponding Wikipedia page when the surface form and the Wikipedia concept matched. We used the simple but effective most-frequent-sense disambiguation method to disambiguate words that may correspond to several Wikipedia concepts. This method of assigning words to concepts is not error-free, but the error affects all approaches equally. The average correlation for the heuristic-based approach is less than that of the other approaches, but the average number of not-found entries for this approach is much lower than for the others. This shows that using the heuristic can increase the coverage of the Wikipedia concepts.
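A minimal sketch of this evaluation protocol (most-frequent-sense word-to-concept lookup, cosine similarity between concept vectors, and Spearman correlation over the pairs that are found); the lookup tables and helper names are hypothetical placeholders, not the authors' code.

```python
# Sketch: score word pairs with concept vectors and compare to human ratings.
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate(pairs, human_scores, word_to_concept, concept_vectors):
    """pairs: list of (word1, word2); word_to_concept: most-frequent-sense lookup."""
    model_scores, gold = [], []
    for (w1, w2), h in zip(pairs, human_scores):
        c1, c2 = word_to_concept.get(w1), word_to_concept.get(w2)
        if c1 in concept_vectors and c2 in concept_vectors:   # skip "not found" pairs
            model_scores.append(cosine(concept_vectors[c1], concept_vectors[c2]))
            gold.append(h)
    rho, _ = spearmanr(model_scores, gold)
    return rho, len(pairs) - len(gold)    # Spearman's rho and the !Found count
```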
Igbo-English Machine Translation:An Evaluation Benchmark
2004.00648
Table 3: Data Sources and Counts
['[BOLD] Source', '[BOLD] Sentences', '[BOLD] Tokens', '[BOLD] UniqToks']
[['eze-goes-to-school.txt', '1272', '25413', '2616'], ['mmadu-ka-a-na-aria.txt', '2023', '39731', '3292'], ['bbc-igbo.txt', '34056', '566804', '28459'], ['igbo-radio.txt', '5131', '191450', '13391'], ['jw-ot-igbo.txt', '32251', '712349', '13417'], ['jw-nt-igbo.txt', '10334', '253806', '6731'], ['jw-books.txt', '142753', '1879755', '25617'], ['jw-teta.txt', '14097', '196818', '7689'], ['jw-ulo-nche.txt', '27760', '392412', '10868'], ['jw-ulo-nche-naamu.txt', '113772', '1465663', '17870'], ['[BOLD] Total', '[BOLD] 383,449', '[BOLD] 5,724,201', '[BOLD] 69,091']]
A large chunk of the data is collected from Jehovah's Witnesses (JW) publications. Though we included the Bible, more contemporary content (books and magazines, e.g. Teta! (Awake!), Ulo Nche! (Watchtower)) was the main focus. This phase is still on-going, but we have so far collected and cleaned the ≈5.7 million tokens summarised above. It is important to point out that we have also collected data in other formats (e.g. audio, non-electronic texts) from local media houses, which we hope to also transcribe and include in our collection.
Igbo-English Machine Translation:An Evaluation Benchmark
2004.00648
Table 2: Splits of the Benchmark Evaluation Parallel Data
['[BOLD] Type', '[BOLD] Sent pairs', '[BOLD] Sources']
[['[ITALIC] Igbo-English', '5,836', 'https://www.bbc.com/igbo'], ['[ITALIC] English-Igbo', '5,748', 'Mostly from local newspapers (e.g. Punch)'], ['[ITALIC] Total', '11,584', '[EMPTY]']]
To achieve the objectives above, the task was broken down into the following phases. Phase 1: Raw data collection and pre-processing. This phase is to produce a minimum of 10,000 cleaned and pre-processed sentences: 5,000 English and 5,000 Igbo. It involved the collection, cleaning, and pre-processing (normalisation, diacritic restoration, spelling correction, etc.) of Igbo and English sentences from freely available electronic texts (e.g. Wikipedia, CommonCrawl, local government materials, local TV/radio stations, etc.). Phase 2: Translation and correction. In this phase, the 10,000 sentence pairs are created through manual translation and correction. The key tasks include translating English sentences to Igbo (EN-IG), translating Igbo sentences to English (IG-EN), and correcting the translations. Five Igbo speakers were engaged for the bidirectional translations, while three other Igbo speakers, including an Igbo linguist, are assisting with the on-going corrections. Chunks (≈250 each) of sentences are given to each translator in each direction (i.e. IG-EN and EN-IG).
Igbo-English Machine Translation:An Evaluation Benchmark
2004.00648
Table 2: Splits of the Benchmark Evaluation Parallel Data
['[BOLD] Evaluation Splits', '[BOLD] IG-EN', '[BOLD] EN-IG']
[['[ITALIC] Development Set', '5000', '5000'], ['[ITALIC] Test set', '500', '500'], ['[ITALIC] Hidden Test', '336', '248']]
To achieve the objectives above, the task was broken down into the following phases. Phase 1: Raw data collection and pre-processing. This phase is to produce a minimum of 10,000 cleaned and pre-processed sentences: 5,000 English and 5,000 Igbo. It involved the collection, cleaning, and pre-processing (normalisation, diacritic restoration, spelling correction, etc.) of Igbo and English sentences from freely available electronic texts (e.g. Wikipedia, CommonCrawl, local government materials, local TV/radio stations, etc.). Phase 2: Translation and correction. In this phase, the 10,000 sentence pairs are created through manual translation and correction. The key tasks include translating English sentences to Igbo (EN-IG), translating Igbo sentences to English (IG-EN), and correcting the translations. Five Igbo speakers were engaged for the bidirectional translations, while three other Igbo speakers, including an Igbo linguist, are assisting with the on-going corrections. Chunks (≈250 each) of sentences are given to each translator in each direction (i.e. IG-EN and EN-IG).
Integrate Image Representation to Text Model on Sentence Level: a Semi-supervised Framework
1912.00336
Table 5: Result on SICK. Here, * stands for multimodal NLU models.
['[BOLD] Models', '[BOLD] Acc(%)']
[['*Cap2Both 2018:NAACL_sentence_visual', '81.7'], ['BERT', '90.4'], ['BERT + STVF', '[BOLD] 90.6']]
Similar to the performance on SNLI, we can see that the framework brings an improvement to the SOTA NLU model and achieves higher performance after being integrated into the pre-trained NLU model.
Integrate Image Representation to Text Model on Sentence Level: a Semi-supervised Framework
1912.00336
Table 1: Results on SemEval 2018 Task 11 (MCScript). Here, models with * are ensemble models.
['[BOLD] Models', '[BOLD] Dev', '[BOLD] Test']
[['TriAN', '82.78', '81.08'], ['TriAN + ConceptNet', '83.84', '81.94'], ['*TriAN + ConceptNet', '85.27', '83.84'], ['Integrated TriAN', '83.90', '82.10'], ['Integrated TriAN + ConceptNet', '[BOLD] 84.42', '[BOLD] 83.12'], ['*Integrated TriAN + ConceptNet', '[BOLD] 85.94', '[BOLD] 84.41']]
Integrated Reading Comprehension: As shown in Table 1, after integrating the semi-supervised visual integration framework, the TriAN model outperforms the previous SOTA performance in both the single-model and ensemble conditions. Because we could not access the test data, we verify the improvement of the framework on the development set and observe a statistically significant improvement under a one-tailed paired t-test at the 99% significance level. To show the ability of the semi-supervised visual integration framework, we also compare how much performance gain the framework and ConceptNet conceptnet:2012 give to the TriAN base model.
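A minimal sketch of the significance test mentioned here, assuming SciPy ≥ 1.6 and per-example correctness indicators for the two systems; the arrays below are random placeholders, not the actual predictions.

```python
# Sketch: one-tailed paired t-test over per-example correctness on the dev set.
import numpy as np
from scipy import stats

baseline_correct   = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0], dtype=float)  # placeholder
integrated_correct = np.array([1, 1, 1, 1, 0, 1, 1, 1, 1, 0], dtype=float)  # placeholder

# H1: the integrated model is better than the baseline (one-tailed)
t, p = stats.ttest_rel(integrated_correct, baseline_correct, alternative="greater")
print(f"t = {t:.3f}, one-tailed p = {p:.4f}")   # significant at the 99% level if p < 0.01
```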
Integrate Image Representation to Text Model on Sentence Level: a Semi-supervised Framework
1912.00336
Table 2: Results on six most frequent question types.
['[BOLD] Models', '[BOLD] y/n', '[BOLD] what', '[BOLD] why', '[BOLD] who', '[BOLD] where', '[BOLD] when']
[['TriAN', '81.4', '85.4', '[BOLD] 84.9', '89.7', '84.7', '78.9'], ['Integrated', '[BOLD] 81.6', '[BOLD] 86.3', '84.4', '89.7', '[BOLD] 85.3', '78.9']]
Analysis by Question Categories: To figure out how the semi-supervised visual integration framework helps reading comprehension, we analyze the performance change on the six most frequent question types. The framework does not help in answering the abstract why, who, and when questions, which supports our hypothesis that the framework mainly provides the real-world state of objects and scenes.
Integrate Image Representation to Text Model on Sentence Level: a Semi-supervised Framework
1912.00336
Table 3: Result of SemEval 2018 Task 11 (MCScript) on different train/test image bases.
['[BOLD] Train Base', '[BOLD] Test Base', '[BOLD] Test']
[['MSCOCO', 'MSCOCO', '83.12'], ['MSCOCO', 'Flicker 30k', '83.09'], ['Flicker 30k', 'MSCOCO', '82.81'], ['Flicker 30k', 'Flicker 30k', '82.81']]
In order to study the effect of the image memory base size on the experimental results, we alternate the image base between the training and test processes, as shown in Table 3. When we shrink the test image memory base to Flicker30k, the integrated model's performance drops, but it is still higher than using the 30k image base in both training and testing. More importantly, when the model is trained on the Flicker30k image base, using a bigger base at test time does not contribute to test performance. These results suggest that the framework works better as the size of the image base used in both training and testing increases.
Integrate Image Representation to Text Model on Sentence Level: a Semi-supervised Framework
1912.00336
Table 4: Result on SNLI. Here, * stands for multimodal NLU models.
['[BOLD] Models', '[BOLD] Test']
[['*GroundSent 2018:NAACL_sentence_visual', '76.1'], ['*Picturebook hinton:2018', '86.5'], ['ESIM + ELMo SNLI_ELMO', '88.7'], ['300D DMAN SNLI_MAN', '88.8'], ['SLRC SNLI_SLRC', '89.1'], ['LMTransformer SNLI_2', '89.9'], ['MT-DNN SNLI_1', '91.1'], ['BERT', '91.2'], ['BERT + STVF', '[BOLD] 91.3']]
Integrated NLI Model: Table 4 shows the results of integrating the mechanism into a language inference model based on BERT. We first report the state-of-the-art performance of the BERT-large model (91.2%) on the SNLI dataset. Based on the trained BERT-large encoder, we then train the integrated visualization language inference model using the three-step training process and achieve the best performance reported so far (91.3%) on the SNLI dataset. By integrating with a pre-trained NLU model, the framework also largely outperforms previous multimodal NLU models.
Improving Sparse Word Representations with Distributional Inference for Semantic Composition
1608.06794
Table 3: Effect of the magnitude of the shift parameter k in SPPMI on the word similarity tasks. Boldface means best performance per dataset.
['Apt [BOLD] s', '[BOLD] MEN [ITALIC] without DI', '[BOLD] MEN [ITALIC] with DI', '[BOLD] SimLex-999 [ITALIC] without DI', '[BOLD] SimLex-999 [ITALIC] with DI', '[BOLD] WordSim-353 (rel) [ITALIC] without DI', '[BOLD] WordSim-353 (rel) [ITALIC] with DI', '[BOLD] WordSim-353 (sub) [ITALIC] without DI', '[BOLD] WordSim-353 (sub) [ITALIC] with DI']
[['[ITALIC] [ITALIC] k=1', '0.54', '0.52', '0.31', '0.30', '0.34', '0.27', '0.62', '0.60'], ['[ITALIC] [ITALIC] k=5', '0.64', '0.65', '0.35', '[BOLD] 0.36', '0.56', '0.51', '0.74', '0.73'], ['[ITALIC] [ITALIC] k=10', '0.63', '0.66', '0.35', '[BOLD] 0.36', '0.56', '0.55', '0.75', '0.74'], ['[ITALIC] [ITALIC] k=40', '0.63', '[BOLD] 0.68', '0.30', '0.32', '0.55', '[BOLD] 0.61', '0.75', '[BOLD] 0.76'], ['[ITALIC] [ITALIC] k=100', '0.61', '0.67', '0.26', '0.29', '0.47', '0.60', '0.71', '0.72']]
For the Apt model, a value of k=40 performs best (except for SimLex-999, where smaller shifts give better results), with a performance drop-off for larger shifts. In our experiments we find that a shift of k=1 results in top performance for the untyped vector space model. It appears that shifting the PPMI scores in the Apt model has the effect of cleaning the vectors from noisy PPMI artefacts, which reinforces the predominant sense, while other senses get suppressed. Subsequently, this results in a cleaner neighbourhood around the word vector, dominated by a single sense. This explains why distributional inference slightly degrades performance for smaller values of k.
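For reference, shifted PPMI is $\mathrm{SPPMI}_k(w,c) = \max(\mathrm{PMI}(w,c) - \log k, 0)$. The following is a small NumPy sketch of this weighting (dense for clarity, whereas a real Apt space is sparse and far higher-dimensional).

```python
# Sketch of shifted PPMI from a raw word-context co-occurrence count matrix.
import numpy as np

def sppmi(counts, k=40):
    total = counts.sum()
    p_wc = counts / total
    p_w = counts.sum(axis=1, keepdims=True) / total
    p_c = counts.sum(axis=0, keepdims=True) / total
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_wc / (p_w * p_c))
    pmi[~np.isfinite(pmi)] = 0.0            # zero counts contribute nothing
    return np.maximum(pmi - np.log(k), 0)   # larger k prunes weaker associations

counts = np.array([[10., 0., 2.], [1., 8., 0.], [0., 1., 5.]])
print(sppmi(counts, k=1))
print(sppmi(counts, k=5))                   # more entries shifted to zero
```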
Improving Sparse Word Representations with Distributional Inference for Semantic Composition
1608.06794
Table 1: Example feature spaces for the lexemes white and clothes extracted from the dependency tree of Figure 1. Not all features are displayed for space reasons. Offsetting ¯¯¯¯¯¯¯¯¯¯¯amod:shoes by amod results in an empty dependency path, leaving just the word co-occurrence :shoes as feature.
['[ITALIC] [BOLD] white [BOLD] Distributional Features', '[ITALIC] [BOLD] white [BOLD] Offset Features (by amod)', '[ITALIC] [BOLD] white [BOLD] Co-occurrence Count', '[ITALIC] [BOLD] clothes [BOLD] Distributional Features', '[ITALIC] [BOLD] clothes [BOLD] Co-occurrence Count']
[['¯¯¯¯¯¯¯¯¯¯¯amod: [ITALIC] shoes', ': [ITALIC] shoes', '1', 'amod: [ITALIC] clean', '1'], ['¯¯¯¯¯¯¯¯¯¯¯amod:¯¯¯¯¯¯¯¯¯¯¯dobj: [ITALIC] bought', '¯¯¯¯¯¯¯¯¯¯¯dobj: [ITALIC] bought', '1', '¯¯¯¯¯¯¯¯¯¯¯dobj: [ITALIC] like', '1'], ['¯¯¯¯¯¯¯¯¯¯¯amod:¯¯¯¯¯¯¯¯¯¯¯dobj: [ITALIC] folded', '¯¯¯¯¯¯¯¯¯¯¯dobj: [ITALIC] folded', '1', '¯¯¯¯¯¯¯¯¯¯¯dobj: [ITALIC] folded', '1'], ['¯¯¯¯¯¯¯¯¯¯¯amod:¯¯¯¯¯¯¯¯¯¯¯dobj:nsubj: [ITALIC] we', '¯¯¯¯¯¯¯¯¯¯¯dobj:nsubj: [ITALIC] we', '1', '¯¯¯¯¯¯¯¯¯¯¯dobj:nsubj: [ITALIC] we', '1']]
For composing the adjective white with the noun clothes via the dependency relation amod we need to consider how the adjective interacts with the noun in the vector space. The distributional features of white describe things that are white via their first order relations such as ¯¯¯¯¯¯¯¯¯¯¯amod, and things that can be done to white things, such as bought via ¯¯¯¯¯¯¯¯¯¯¯amod:¯¯¯¯¯¯¯¯¯¯¯dobj in the example above. However through the inclusion of inverse and higher order dependency paths we can observe that the second order features of the adjective align with the first order features of the noun.
Improving Sparse Word Representations with Distributional Inference for Semantic Composition
1608.06794
Table 2: Comparison of composition by union and composition by intersection. Not all features are displayed for space reasons.
['[BOLD] Composition by [ITALIC] union [BOLD] Distributional Features', '[BOLD] Composition by [ITALIC] union [BOLD] Co-occurrence Count', '[BOLD] Composition by [ITALIC] intersection [BOLD] Distributional Features', '[BOLD] Composition by [ITALIC] intersection [BOLD] Co-occurrence Count']
[[': [ITALIC] shoes', '1', '[EMPTY]', '[EMPTY]'], ['amod: [ITALIC] clean', '1', '[EMPTY]', '[EMPTY]'], ['¯¯¯¯¯¯¯¯¯¯¯dobj: [ITALIC] bought', '1', '[EMPTY]', '[EMPTY]'], ['¯¯¯¯¯¯¯¯¯¯¯dobj: [ITALIC] folded', '2', '¯¯¯¯¯¯¯¯¯¯¯dobj: [ITALIC] folded', '2'], ['¯¯¯¯¯¯¯¯¯¯¯dobj: [ITALIC] like', '1', '[EMPTY]', '[EMPTY]'], ['¯¯¯¯¯¯¯¯¯¯¯dobj:nsubj: [ITALIC] we', '2', '¯¯¯¯¯¯¯¯¯¯¯dobj:nsubj: [ITALIC] we', '2']]
We are then in a position to compose the offset representation of white with the vector for clothes by the union or the intersection of their features. It is worth noting that any arithmetic operation can be used to combine the counts of the aligned features; for this paper, however, we use pointwise addition for both composition functions. One of the advantages of this approach to composition is that the inherent interpretability of count-based models naturally expands beyond the word level, allowing us to study the distributional semantics of phrases in the same space as words. Because one of the constituents is offset, the composition operation is not commutative and hence avoids identical representations for house boat and boat house. However, the typed nature of our vector space results in extreme sparsity: for example, while the untyped VSM has 130k dimensions, our Apt model can have more than 3 million dimensions. We therefore need to enrich the elementary vector representations with the distributional information of their nearest neighbours to ease the sparsity effect and infer missing information. Due to the syntactic nature of our composition operation, it is not straightforward to apply common dimensionality reduction techniques such as SVD, as the type information needs to be preserved.
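A minimal sketch of the offset-and-compose idea on the toy example from Tables 1 and 2, writing inverse relations as "rel^-1" instead of the overline notation; the helper names and the simplified offsetting are illustrative, not the authors' implementation.

```python
# Sketch: offset the adjective's features by the connecting relation, then compose
# with the noun's features by union or intersection, adding counts pointwise.
from collections import Counter

def offset(features, relation):
    """Strip the leading inverse relation (e.g. 'amod^-1:') from every feature path."""
    prefix = relation + "^-1:"
    return Counter({path[len(prefix):]: c for path, c in features.items()
                    if path.startswith(prefix)})

def compose(a, b, mode="union"):
    if mode == "union":
        return a + b                                     # pointwise addition, all features
    shared = a.keys() & b.keys()
    return Counter({f: a[f] + b[f] for f in shared})     # intersection: shared features only

white = Counter({"amod^-1:shoes": 1, "amod^-1:dobj^-1:bought": 1,
                 "amod^-1:dobj^-1:folded": 1, "amod^-1:dobj^-1:nsubj:we": 1})
clothes = Counter({"amod:clean": 1, "dobj^-1:like": 1,
                   "dobj^-1:folded": 1, "dobj^-1:nsubj:we": 1})

print(compose(offset(white, "amod"), clothes, mode="union"))
print(compose(offset(white, "amod"), clothes, mode="intersection"))
```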
Improving Sparse Word Representations with Distributional Inference for Semantic Composition
1608.06794
Table 4: Neighbour retrieval function comparison. Boldface means best performance on a dataset per VSM type. *) With 3 significant figures, the density window approach (0.713) is slightly better than the baseline without DI (0.708), static top n (0.710) and WordNet (0.710).
['Apt [BOLD] s ( [ITALIC] k=40)', '[BOLD] No Distributional Inference', '[BOLD] Density Window', '[BOLD] Static Top [ITALIC] n', '[BOLD] WordNet']
[['[ITALIC] MEN', '0.63', '0.67', '[BOLD] 0.68', '0.63'], ['[ITALIC] SimLex-999', '0.30', '0.32', '0.32', '[BOLD] 0.38'], ['[ITALIC] WordSim-353 (rel)', '0.55', '[BOLD] 0.62', '0.61', '0.56'], ['[ITALIC] WordSim-353 (sub)', '0.75', '[BOLD] 0.78', '0.76', '0.77'], ['[BOLD] Untyped VSM ( [ITALIC] k=1)', '[BOLD] No Distributional Inference', '[BOLD] Density Window', '[BOLD] Static Top [ITALIC] n', '[BOLD] WordNet'], ['[ITALIC] MEN*', '[BOLD] 0.71', '[BOLD] 0.71', '[BOLD] 0.71', '[BOLD] 0.71'], ['[ITALIC] SimLex-999', '0.30', '0.29', '0.30', '[BOLD] 0.36'], ['[ITALIC] WordSim-353 (rel)', '0.60', '[BOLD] 0.64', '[BOLD] 0.64', '0.52'], ['[ITALIC] WordSim-353 (sub)', '0.70', '[BOLD] 0.73', '0.72', '0.67']]
The improvements are typically larger for the Apt model, suggesting that its elementary representations are missing more distributional knowledge than those of untyped models. The density window and static top n neighbour retrieval functions perform very similarly; however, the static approach is more consistent and never underperforms the baseline for either model type on any dataset. The WordNet-based neighbour retrieval function performs particularly well on SimLex-999. This can be explained by the fact that antonyms, which frequently happen to be among the nearest neighbours in distributional vector spaces, are regarded as dissimilar in SimLex-999, whereas the WordNet neighbour retrieval function only returns synonyms. The results furthermore confirm that untyped models perform better on datasets modelling relatedness, whereas typed models work better for substitutability tasks.
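A minimal sketch of the static top-n variant of distributional inference (each elementary vector is enriched by adding the vectors of its n nearest neighbours); the dense toy matrix and unweighted addition are simplifying assumptions, not the paper's exact procedure.

```python
# Sketch: enrich each vector with its n nearest neighbours (cosine similarity).
import numpy as np

def static_top_n_inference(vectors, n=30):
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = unit @ unit.T
    np.fill_diagonal(sims, -np.inf)              # a word is not its own neighbour
    enriched = vectors.copy()
    for i in range(len(vectors)):
        neighbours = np.argsort(sims[i])[-n:]    # indices of the n most similar words
        enriched[i] = vectors[i] + vectors[neighbours].sum(axis=0)
    return enriched

V = np.random.default_rng(0).random((100, 50))   # placeholder word-by-feature matrix
print(static_top_n_inference(V, n=30).shape)     # (100, 50)
```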
Improving Sparse Word Representations with Distributional Inference for Semantic Composition
1608.06794
Table 6: Neighbour retrieval function. Underlined means best performance per phrase type, boldface means best average performance overall.
['Apt [BOLD] s', '[BOLD] No Distributional Inference [ITALIC] intersection', '[BOLD] No Distributional Inference [ITALIC] union', '[BOLD] Density Window [ITALIC] intersection', '[BOLD] Density Window [ITALIC] union', '[BOLD] Static Top [ITALIC] n [ITALIC] intersection', '[BOLD] Static Top [ITALIC] n [ITALIC] union', '[BOLD] WordNet [ITALIC] intersection', '[BOLD] WordNet [ITALIC] union']
[['[ITALIC] Adjective-Noun', '0.10', '0.41', '0.31', '0.39', '0.25', '0.40', '0.12', '0.41'], ['[ITALIC] Noun-Noun', '0.18', '0.42', '0.34', '0.38', '0.37', '0.45', '0.24', '0.36'], ['[ITALIC] Verb-Object', '0.17', '0.36–––––', '0.36', '0.36', '0.34', '0.35', '0.25', '0.36'], ['[ITALIC] [BOLD] Average', '0.15', '[BOLD] 0.40', '0.34', '0.38', '0.32', '[BOLD] 0.40', '0.20', '0.38']]
For a quantitative analysis of distributional inference for semantic composition, we evaluate our model on the composition dataset of Mitchell and Lapata \shortciteMitchell_2010, consisting of 108 adjective-noun, 108 noun-noun, and 108 verb-object pairs. The task is to compare the model's similarity estimates with the human judgements by computing Spearman's ρ. For comparing the performance of the different neighbour retrieval functions, we choose the same parameter settings as in the word similarity experiments (k=40 and using 30 neighbours for DI). The density window retrieval function outperforms static top n for composition by intersection, and vice versa for composition by union. The WordNet approach is competitive for composition by union, but significantly underperforms the other approaches for composition by intersection. For further experiments we use the static top n approach, as it is computationally cheap and easy to interpret due to the fixed number of neighbours.
Data Augmentation for Hypernymy Detection
2005.01854
Table 2: Accuracy scores for the data augmentation and the two dataset extension strategies in comparison to the same FF model without any augmentation or extension.
['[BOLD] Model', '[BOLD] Weeds', '[BOLD] LEDS', '[BOLD] HP4K']
[['Baseline - No Augmentation/Extension', '0.72', '0.77', '0.67'], ['Distributional Composition Augmentation', '[BOLD] 0.76', '[BOLD] 0.83', '0.70'], ['[ITALIC] GANDALF Augmentation', '0.75', '0.80', '[BOLD] 0.71'], ['WordNet Extension', '0.75', '[BOLD] 0.83', '0.69'], ['Hearst Patterns Extension', '0.74', '0.81', '0.68'], ['Weeds et al.Weeds_2014b', '0.75', '-', '-'], ['Carmona and Riedel\xa0Carmona_2017', '0.63', '0.81', '-']]
Our techniques are able to outperform a non-augmented model by 4-6 points in accuracy, representing a relative error reduction of 14%-26%. While the primary objective in this work is to improve an existing model setup with data augmentation, our augmented models compare favourably with previously published results.
Data Augmentation for Hypernymy Detection
2005.01854
Table 4: Accuracy for the hypernym-only and full models on the Weeds dataset with no, DC or GAN augmentation.
['[BOLD] Augmentation', '[BOLD] Hypernym-Only', '[BOLD] Full Model']
[['None', '0.59', '0.72'], ['DC (size=100)', '0.60', '[BOLD] 0.74'], ['DC (size=500)', '[BOLD] 0.57', '0.71'], ['GAN (size=500)', '[BOLD] 0.58', '[BOLD] 0.75'], ['GAN (size=1000)', '0.60', '0.73']]
Ideally, we would hope to see weak performance for the hypernym-only and strong performance on the full model. This would indicate that the classifier does not rely on prototypical features in the hypernym, but is able to focus on specific features in a given hyponym-hypernym pair.
Natural Language Multitasking Analyzing and Improving Syntactic Saliency of Latent Representations
1801.06024
Table 3: Best-of-100 clustering errors with fewer training examples
['Model', 'R-D-1024', 'R-D-256', 'R-D-F-1024', 'R-D-P-256']
[['Full training set', '22', '29', '24', '3'], ['1/2 training set', '37', '33', '-', '-'], ['1/3 training set', '-', '-', '19', '1']]
Note that the perplexities these models achieved are all comparable to their reference models (trained with the full training set), and none of the models overfit the training data.
Natural Language Multitasking Analyzing and Improving Syntactic Saliency of Latent Representations
1801.06024
Table 2: Clustering errors by model
['Model', 'REP', 'REP-FR', 'REP-FR-DE', 'REP-DE', 'REP-POS', 'REP-DE-POS']
[['Error', '51', '26', '24', '22', '8', '0']]
Each sentence prototype is randomly populated by common English words 100 times. The syntax of each sentence in such a category is very similar or identical to all others in the same category, but different from sentences in other categories. These sentences are then fed into our models. We record every resulting representation and pair it with its input sentence. Using K-means clustering with K=14 we cluster the representation-sentence pairs in the representation space. For each resulting cluster, we count how many sentences of each prototype it contains. This yields a list such as this: [30,3,1,7,88,0,0,0,0,0,0,0,0,0], which shows the content of one of 14 clusters: 30 sentences of type one, 3 of type two and so on. Since most sentences in this cluster are of type five, this cluster is assigned to be the cluster of sentence category five. However, 41 sentences of type other than five were “falsely” assigned to this cluster. Therefore, the error of this cluster is 41. The sum of errors of all 14 clusters is the clustering error, which is our quality measure for this experiment. Since K-means clustering is nondeterministic, we run the algorithm 100 times.
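A minimal sketch of this clustering-error measure using scikit-learn's K-means; the representations and prototype labels below are random placeholders, and in the paper the procedure is repeated 100 times with the best run reported.

```python
# Sketch: cluster sentence representations, assign each cluster to its majority
# prototype, and count the sentences that were "falsely" assigned to it.
import numpy as np
from sklearn.cluster import KMeans

def clustering_error(representations, prototype_ids, k=14):
    clusters = KMeans(n_clusters=k, n_init=10).fit_predict(representations)
    error = 0
    for c in range(k):
        members = prototype_ids[clusters == c]
        if len(members):
            counts = np.bincount(members, minlength=k)
            error += len(members) - counts.max()   # sentences not of the majority type
    return error

reps = np.random.default_rng(0).normal(size=(1400, 64))   # 14 prototypes x 100 sentences
labels = np.repeat(np.arange(14), 100)
print(clustering_error(reps, labels))
```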
Data Augmenting Contrastive Learning of Speech Representationsin the Time Domain
2007.00991
Table S3: Additive noise augmented CPC, ABX errors (Libri-light dev set). Within- and across-speaker phoneme discriminability scores (lower is better) on the Libri-light clean and other dev sets for CPC training as a function of varying types of additive noise augmentation.
['[EMPTY]', 'Within spk. dev', 'Within spk. dev', 'Across spk. dev', 'Across spk. dev']
[['System', 'clean', 'other', 'clean', 'other'], ['MFCC Baseline', '10.95', '13.55', '20.94', '29.41'], ['CPC LL-60k', '6.11', '8.17', '8.05', '12.83'], ['[ITALIC] CPC2 – Trained on LibriSpeech clean 80h', '[ITALIC] CPC2 – Trained on LibriSpeech clean 80h', '[ITALIC] CPC2 – Trained on LibriSpeech clean 80h', '[ITALIC] CPC2 – Trained on LibriSpeech clean 80h', '[EMPTY]'], ['no augmentation', '6.06', '8.18', '7.59', '12.8'], ['[ITALIC] Band pass – Musan – past only', '[ITALIC] Band pass – Musan – past only', '[ITALIC] Band pass – Musan – past only', '[ITALIC] Band pass – Musan – past only', '[EMPTY]'], ['no filtering', '5.81', '7.40', '8.03', '12.7'], ['[0,80] Hz', '5.55', '7.56', '6.82', '12.0'], ['[80,240] Hz', '5.38', '7.58', '6.99', '12.1'], ['[240,720] Hz', '6.22', '8.32', '7.89', '12.9'], ['[720,2160] Hz', '6.71', '9.11', '8.52', '13.8'], ['[2160,8000] Hz', '6.64', '8.74', '8.30', '13.4'], ['[ITALIC] Band pass – Musan – past + future', '[ITALIC] Band pass – Musan – past + future', '[ITALIC] Band pass – Musan – past + future', '[ITALIC] Band pass – Musan – past + future', '[EMPTY]'], ['no filtering', '6.52', '8.79', '8.20', '13.5'], ['[0,80] Hz', '5.28', '7.48', '6.83', '12.1'], ['[80,240] Hz', '[BOLD] 5.16', '[BOLD] 7.33', '[BOLD] 6.77', '[BOLD] 11.7'], ['[240,720] Hz', '6.01', '8.36', '7.45', '12.9'], ['[720,2160] Hz', '7.40', '9.83', '9.06', '14.2'], ['[2160,8000] Hz', '7.40', '9.72', '9.00', '14.2']]
We discovered accidentally that for additive noise, low frequencies are more effective than high frequencies. We therefore explored systematically the effect of the spectral characteristics of noise by filtering sounds from the MUSAN dataset [musan2015] in successive frequency bands. We selected 5 broad bands, defined by 4 cutoff points spaced by a tripling of the frequency (80 Hz, 240 Hz, 720 Hz, 2160 Hz). We found that the optimal additive noise was obtained by bandpass filtering MUSAN sounds in the [80,240] Hz band. Here, we explore how frequency filtering affects additive noise data augmentation. We ran two experiments: band-pass filtering and low-pass filtering. For band-pass, we applied the following frequency bands to the MUSAN dataset: [0,80] Hz, [80,240] Hz, [240,720] Hz, [720,2160] Hz, [2160,8000] Hz. The second band corresponds roughly to the range of human pitch (F0), the third to the range of the first formant (F1), and the fourth to the range of the second formant (F2). The extreme ranges (very low or very high frequencies) do not typically carry much information. The optimal range appears to be [80,240] Hz. For low-pass, we selected successive 100 Hz bands, starting from zero.
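A rough sketch of the band-pass additive-noise augmentation described above, using a Butterworth filter from scipy; the filter order, the SNR value, and the mixing scheme are assumptions rather than the paper's exact settings:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(noise: np.ndarray, low_hz: float, high_hz: float, sr: int = 16000) -> np.ndarray:
    """Keep only the [low_hz, high_hz] band of a noise waveform."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
    return sosfiltfilt(sos, noise)

def add_noise(speech: np.ndarray, noise: np.ndarray, snr_db: float = 10.0) -> np.ndarray:
    """Mix filtered noise into speech at an assumed SNR; noise is assumed at least as long as speech."""
    noise = noise[: len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# e.g. the best-performing band reported above:
# augmented = add_noise(speech, bandpass(musan_clip, 80, 240))
```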
Data Augmenting Contrastive Learning of Speech Representationsin the Time Domain
2007.00991
Table 1: ABX errors on data-augmented CPC features (Libri-light dev set). Within- and across-speaker phoneme discriminability scores (lower is better) on the Libri-light clean and other dev sets for CPC training as a function of types of data augmentation, in isolation or combination (see Section 4.1).
['[EMPTY]', 'Within spk. dev', 'Within spk. dev', 'Across spk. dev', 'Across spk. dev']
[['System', 'clean', 'other', 'clean', 'other'], ['MFCC Baseline', '10.95', '13.55', '20.94', '29.41'], ['CPC LL-60k [kahn2020]', '6.11', '8.17', '8.05', '12.83'], ['[ITALIC] Single augmentations (CPC2 on LibriSpeech clean 100h)', '[ITALIC] Single augmentations (CPC2 on LibriSpeech clean 100h)', '[ITALIC] Single augmentations (CPC2 on LibriSpeech clean 100h)', '[ITALIC] Single augmentations (CPC2 on LibriSpeech clean 100h)', '[ITALIC] Single augmentations (CPC2 on LibriSpeech clean 100h)'], ['no augmentation', '6.06', '8.18', '7.59', '12.84'], ['pitch-past', '4.90', '6.28', '6.84', '11.04'], ['pitch-past+future', '5.03', '6.35', '7.11', '11.30'], ['add-past', '5.47', '7.58', '6.97', '12.17'], ['add-past+future', '5.16', '7.33', '6.77', '11.71'], ['reverb-past', '5.55', '7.61', '7.16', '12.19'], ['reverb-past+future', '5.58', '7.91', '7.77', '13.07'], ['bandrej-past', '5.83', '7.88', '7.07', '12.21'], ['bandrej-past+future', '5.92', '7.81', '7.19', '12.24'], ['tdrop-past', '5.78', '7.92', '7.18', '12.56'], ['[ITALIC] 2-way combinations, past only (same model and train set)', '[ITALIC] 2-way combinations, past only (same model and train set)', '[ITALIC] 2-way combinations, past only (same model and train set)', '[ITALIC] 2-way combinations, past only (same model and train set)', '[ITALIC] 2-way combinations, past only (same model and train set)'], ['pitch+add', '4.81', '6.03', '6.79', '10.90'], ['pitch+reverb', '4.74', '6.75', '[BOLD] 6.06', '10.99'], ['pitch+tdrop', '4.83', '6.15', '6.90', '11.08'], ['add+reverb', '5.41', '6.87', '7.41', '11.97'], ['add+tdrop', '5.38', '6.97', '7.70', '12.22'], ['reverb+tdrop', '5.41', '6.93', '7.32', '12.05'], ['[ITALIC] 3-way combinations, past only (same model and train set)', '[ITALIC] 3-way combinations, past only (same model and train set)', '[ITALIC] 3-way combinations, past only (same model and train set)', '[ITALIC] 3-way combinations, past only (same model and train set)', '[ITALIC] 3-way combinations, past only (same model and train set)'], ['pitch + add + reverb', '[BOLD] 4.66', '[BOLD] 5.81', '6.62', '[BOLD] 10.60'], ['pitch + add + tdrop', '4.86', '6.09', '6.70', '10.78'], ['pitch + reverb + tdrop', '4.72', '6.02', '6.53', '10.70'], ['add + reverb + tdrop', '5.40', '6.87', '7.47', '11.98'], ['[ITALIC] 4-way Combinations, past only (same model and train set)', '[ITALIC] 4-way Combinations, past only (same model and train set)', '[ITALIC] 4-way Combinations, past only (same model and train set)', '[ITALIC] 4-way Combinations, past only (same model and train set)', '[ITALIC] 4-way Combinations, past only (same model and train set)'], ['pitch+add+reverb+tdrop', '4.87', '6.08', '6.79', '10.76']]
The only augmentation performing better on past+future is add. According to their average performance, the individual augmentations can be sorted, from most to least useful: pitch, add, reverb, tdrop, and bandrej. Next, we study the performance of combinations of augmentations. We decided to drop bandrej from consideration due to its poor results. We only consider augmenting past, as this gives roughly the same quality of representations, but requires less computation. As a result, we have 6 possible two-way, 4 three-way, and 1 four-way combination of effects.
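The counts of combinations mentioned above follow directly from choosing among the four retained effects; a quick check:

```python
from itertools import combinations

effects = ["pitch", "add", "reverb", "tdrop"]
for k in (2, 3, 4):
    print(k, len(list(combinations(effects, k))))
# -> 6 two-way, 4 three-way, and 1 four-way combination, matching the rows of Table 1.
```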
Data Augmenting Contrastive Learning of Speech Representationsin the Time Domain
2007.00991
Table 3: ABX errors on the ZeroResource Speech Challenge 2017 (120s). Within- (“W.”) and across-speaker (“A.”) phoneme discriminability scores on English, French and Mandarin speech for CPC features with and without data augmentation. For comparison, the best systems plus supervised topline of the ZeroSpeech leaderboard trained on the provided datasets.
['[EMPTY]', 'English W.', 'English A.', 'French W.', 'French A.', 'Mandarin W.', 'Mandarin A.', 'AVG']
[['[ITALIC] Trained on ZeroSpeech2017 (45h, 24h, 2h30, resp.)', '[ITALIC] Trained on ZeroSpeech2017 (45h, 24h, 2h30, resp.)', '[ITALIC] Trained on ZeroSpeech2017 (45h, 24h, 2h30, resp.)', '[ITALIC] Trained on ZeroSpeech2017 (45h, 24h, 2h30, resp.)', '[ITALIC] Trained on ZeroSpeech2017 (45h, 24h, 2h30, resp.)', '[ITALIC] Trained on ZeroSpeech2017 (45h, 24h, 2h30, resp.)', '[ITALIC] Trained on ZeroSpeech2017 (45h, 24h, 2h30, resp.)', '[EMPTY]'], ['Superv. topline\xa0[dunbar2017]', '5.3', '6.9', '6.8', '9.1', '4.2', '5.7', '6.33'], ['Heck et al.\xa0[heck2017]', '6.2', '8.7', '8.7', '11.7', '7.9', '[BOLD] 7.4', '8.43'], ['Chorow. et al.\xa0[chor2019]', '5.5', '8.0', '[BOLD] 7.5', '[BOLD] 10.8', '10.7', '11.2', '8.95'], ['CPC2', '8.6', '12.0', '12.2', '16.4', '12.0', '14.0', '12.53'], ['CPC2+WavAug', '6.6', '9.3', '9.3', '14.1', '11.2', '11.9', '10.4'], ['[ITALIC] Trained on out-of-domain (100h, 76h, 80h, resp.)', '[ITALIC] Trained on out-of-domain (100h, 76h, 80h, resp.)', '[ITALIC] Trained on out-of-domain (100h, 76h, 80h, resp.)', '[ITALIC] Trained on out-of-domain (100h, 76h, 80h, resp.)', '[ITALIC] Trained on out-of-domain (100h, 76h, 80h, resp.)', '[ITALIC] Trained on out-of-domain (100h, 76h, 80h, resp.)', '[ITALIC] Trained on out-of-domain (100h, 76h, 80h, resp.)', '[EMPTY]'], ['CPC2', '6.1', '8.7', '10.3', '12.9', '9.3', '9.6', '9.48'], ['CPC2+WavAug', '4.7', '6.5', '8.6', '11.1', '7.9', '7.8', '7.77'], ['CPC2-3L+WavAug', '[BOLD] 4.6', '[BOLD] 5.8', '7.6', '10.9', '[BOLD] 7.8', '8.0', '[BOLD] 7.45']]
As can be seen, while noise augmentation improves the score on all three languages, we cannot reach the SOTA with the small training datasets provided by the challenge. We can, however, be on par with or improve over the best-performing baseline with our out-of-domain training sets (same languages, larger datasets), in particular with the larger model. This shows that while our technique scales with dataset size, it is still less data-efficient than the techniques described in Heck et al. [heck2017] and Chorowski et al. [chor2019]. Note, however, that both studies used speaker adaptation, which is outside the scope of what can be done with standard time-domain data augmentation techniques.
Data Augmenting Contrastive Learning of Speech Representationsin the Time Domain
2007.00991
Table 4: Phone Error Rate (PER) in the semi-supervised setting. A linear classifier is added on top of Librispeech-100 pretrained CPC2 models and fine tuned with either 10min, 1h or 10h of Libri-light labelled data with a CTC loss. For comparison, reference Libri-light results plus the untrained CPC2 architecture fully fined-tuned with 10 h.
['System', 'Augmented fine-tuning', 'dev- clean', 'dev- other', 'test- clean', 'test- other']
[['[ITALIC] Reference', '[ITALIC] Reference', '[ITALIC] Reference', '[ITALIC] Reference', '[ITALIC] Reference', '[EMPTY]'], ['CPC unlab-60k+train-10h-full', 'CPC unlab-60k+train-10h-full', '28.4', '41.4', '27.9', '43.6'], ['CPC no pretraining - 10h-full', 'CPC no pretraining - 10h-full', '45.9', '55.7', '43.7', '58.6'], ['CPC2 no pretraining - 10h-full', 'CPC2 no pretraining - 10h-full', '41.3', '52.3', '39.3', '56.1'], ['[ITALIC] Frozen features - classifier trained on 10min', '[ITALIC] Frozen features - classifier trained on 10min', '[ITALIC] Frozen features - classifier trained on 10min', '[ITALIC] Frozen features - classifier trained on 10min', '[ITALIC] Frozen features - classifier trained on 10min', '[EMPTY]'], ['CPC2', 'No', '47.8', '60.9', '47.0', '60.1'], ['CPC2', 'Yes', '49.4', '57.9', '49.4', '59.2'], ['CPC2+WavAug', 'No', '39.5', '51.3', '39.1', '52.4'], ['CPC2+WavAug', 'Yes', '41.6', '51.7', '41.7', '52.9'], ['[ITALIC] Frozen features - classifier trained on 1h', '[ITALIC] Frozen features - classifier trained on 1h', '[ITALIC] Frozen features - classifier trained on 1h', '[ITALIC] Frozen features - classifier trained on 1h', '[ITALIC] Frozen features - classifier trained on 1h', '[EMPTY]'], ['CPC2', 'No', '34.6', '47.5', '32.9', '50.0'], ['CPC2', 'Yes', '33.5', '46.9', '32.7', '49.4'], ['CPC2+WavAug', 'No', '29.1', '42.4', '28.8', '44.3'], ['CPC2+WavAug', 'Yes', '28.0', '41.3', '27.8', '43.3'], ['[ITALIC] Frozen features - classifier trained on 10h', '[ITALIC] Frozen features - classifier trained on 10h', '[ITALIC] Frozen features - classifier trained on 10h', '[ITALIC] Frozen features - classifier trained on 10h', '[ITALIC] Frozen features - classifier trained on 10h', '[EMPTY]'], ['CPC2', 'No', '29.3', '43.7', '29.0', '47.1'], ['CPC2', 'Yes', '31.1', '44.9', '30.6', '48.3'], ['CPC2+WavAug', 'No', '26.1', '39.9', '25.7', '41.6'], ['CPC2+WavAug', 'Yes', '25.7', '39.3', '25.3', '41.2'], ['[ITALIC] Full fine-tuning, 10h of data', '[ITALIC] Full fine-tuning, 10h of data', '[ITALIC] Full fine-tuning, 10h of data', '[ITALIC] Full fine-tuning, 10h of data', '[ITALIC] Full fine-tuning, 10h of data', '[EMPTY]'], ['CPC2', 'No', '27.8', '42.6', '26.5', '45.0'], ['CPC2', 'Yes', '26.3', '39.9', '25.4', '43.9'], ['CPC2+WavAug', 'No', '24.5', '39.0', '24.1', '40.8'], ['CPC2+WavAug', 'Yes', '23.5', '37.6', '23.1', '41.0'], ['CPC2-L3+WavAug', 'No', '22.9', '37.3', '22.8', '[BOLD] 39.9'], ['CPC2-L3+WavAug', 'Yes', '[BOLD] 22.5', '[BOLD] 36.8', '[BOLD] 22.2', '[BOLD] 39.9']]
For the supervised fine-tuning phase, we found that we got the best results by using only pitch augmentation; other methods had low or negative effects in this case. The combined effect of data augmentation on pretraining and fine-tuning adds up to a 12-15% relative improvement across the different training sets. Interestingly, we find that with data augmentation we can beat the reference baseline (pretraining on 60k hours plus fine-tuning on 10 hours) on frozen features with substantially less data (pretraining on 100 hours plus fine-tuning on 1 hour). Another point worth mentioning is that with data augmentation, 10 minutes of data on frozen features is sufficient to outperform the no-pretraining reference trained with 10 hours of labels.
Data Augmenting Contrastive Learning of Speech Representationsin the Time Domain
2007.00991
Table S1: Architecture ablations, ABX errors (Libri-light dev set). We compare the original CPC model described in [riviere2020multi] with modifications including more LSTM layers and a single Multi-Head prediction model for all time steps (MH). The bottom model is the one we refer to as CPC2 in the paper.
['[EMPTY]', 'Within spk. dev', 'Within spk. dev', 'Across spk. dev', 'Across spk. dev']
[['System', 'clean', 'other', 'clean', 'other'], ['CPC LS-100 [riviere2020multi]', '6.81', '8.91', '8.46', '13.70'], ['CPC + 2 layers LSTM', '5.97', '8.12', '7.39', '12.79'], ['CPC + 3 layers LSTM', '5.93', '8.41', '7.76', '13.14'], ['CPC + MH', '6.84', '9.10', '8.68', '14.08'], ['CPC + MH + 2 layers LSTM', '[BOLD] 6.02', '[BOLD] 8.11', '[BOLD] 7.56', '[BOLD] 12.91']]
CPC2 is a modified version of the CPC architecture in [kahn2020, riviere2020multi]. The encoder architecture is unchanged (5 convolutional layers with kernel sizes [10,8,4,4,4], strides [5,4,2,2,2] and hidden dimension 256). For the recurrent context network, we use a 2-layer LSTM, as a tradeoff between feature quality and training speed. In the prediction network, we replace the k independent transformers in [kahn2020, riviere2020multi], each one predicting a specific time step ahead, with a single multi-head transformer layer with k classifiers at its heads. This has a limited impact on accuracy but dramatically decreases training time. We started from the model described in [riviere2020multi]: the encoder network is composed of 5 convolutional layers with kernel sizes [10,8,4,4,4], strides [5,4,2,2,2] and hidden dimension 256. We worked with ReLU activations and inserted a channel normalization procedure between each pair of convolutional layers. For the context network, we used a 2-layer LSTM. Finally, we used a single-layer multi-head transformer to do the prediction instead of several single-head transformers. We observe that in the cases of the 3h and 45h datasets, the architecture with 2 LSTM layers still performs best. However, with 100h of data, increasing the model depth turns out to be beneficial.
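A minimal PyTorch sketch of the encoder and context stack described above (5 conv layers with kernel sizes [10,8,4,4,4], strides [5,4,2,2,2], hidden dimension 256, followed by a 2-layer LSTM); the channel normalization and the prediction heads are simplified stand-ins, so this is an illustration rather than the authors' exact code:

```python
import torch
import torch.nn as nn

class CPC2Encoder(nn.Module):
    """Convolutional encoder + 2-layer LSTM context network (simplified sketch)."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        kernels, strides = [10, 8, 4, 4, 4], [5, 4, 2, 2, 2]
        layers, in_ch = [], 1
        for k, s in zip(kernels, strides):
            layers += [nn.Conv1d(in_ch, hidden, kernel_size=k, stride=s),
                       nn.GroupNorm(1, hidden),   # stand-in for the channel normalization
                       nn.ReLU()]
            in_ch = hidden
        self.encoder = nn.Sequential(*layers)
        self.context = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)

    def forward(self, wav: torch.Tensor):
        # wav: (batch, 1, samples) -> z: (batch, frames, hidden) -> c: context features
        z = self.encoder(wav).transpose(1, 2)
        c, _ = self.context(z)
        return z, c

# z, c = CPC2Encoder()(torch.randn(2, 1, 16000))  # roughly 1 second of 16 kHz audio
```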
Data Augmenting Contrastive Learning of Speech Representationsin the Time Domain
2007.00991
Table S2: Architecture ablations, ABX errors (Libri-light dev set). We compare modifications of the CPC architecture across different dataset sizes. In all cases, we apply the best data augmentation reported in the main text.
['[EMPTY]', 'Within spk. dev', 'Within spk. dev', 'Across spk. dev', 'Across spk. dev']
[['System', 'clean', 'other', 'clean', 'other'], ['[ITALIC] 3h Libri-light', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['CPC2 + 2 layers LSTM', '12.82', '13.81', '17.21', '20.85'], ['CPC2 + 3 layers LSTM', '13.22', '14.30', '18.30', '22.27'], ['[ITALIC] 45h Libri-light', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['CPC2 + 2 layers LSTM', '6.31', '8.37', '8.52', '13.38'], ['CPC2 + 3 layers LSTM', '6.38', '8.29', '8.43', '13.72'], ['[ITALIC] 100h LibriSpeech', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['CPC2 + 2 layers LSTM', '4.66', '6.62', '5.81', '10.60'], ['CPC2 + 3 layers LSTM', '4.24', '6.38', '5.76', '10.43']]
In this experiment, we test whether our data augmentation technique can build better speech features for downstream tasks. Here, we use the Libri-light limited-supervision phone classification task [kahn2020], which contains intentionally small training sets (10 min, 1h or 10 hours of labelled data). We fine-tune a linear phone classifier built on top of the CPC features with a CTC loss (frozen features). On 10 hours of data, we also fine-tune the entire network. Again, we additionally experiment with an architecture that has a 3-layer LSTM (CPC2-L3). In the next experiment, we study the performance of our model as a function of the amount of available data and the architecture size (controlled by the number of LSTM layers). We simulate the amounts of data available at ZeroSpeech2017 for Mandarin (3h), French (45h), and English (100h) by sub-sampling from Libri-light (3h and 45h) and using LibriSpeech (100h). In all experiments, we use the best data augmentation found in the main text (pitch+add+reverb-past).
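A condensed sketch of the frozen-feature setup described above: a single linear layer trained with a CTC loss on top of detached CPC features. The phone inventory size, learning rate, and tensor shapes are assumptions for illustration:

```python
import torch
import torch.nn as nn

n_phones = 40                                # assumed phone inventory size
classifier = nn.Linear(256, n_phones + 1)    # +1 output for the CTC blank symbol
ctc = nn.CTCLoss(blank=n_phones)
optim = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def train_step(frozen_features, targets, target_lengths):
    """frozen_features: (batch, frames, 256), detached output of the pretrained CPC model.
    targets: (batch, max_target_len) padded tensor of phone ids in [0, n_phones)."""
    logits = classifier(frozen_features)                  # (batch, frames, n_phones + 1)
    log_probs = logits.log_softmax(-1).transpose(0, 1)    # CTC expects (frames, batch, classes)
    input_lengths = torch.full((frozen_features.size(0),), frozen_features.size(1),
                               dtype=torch.long)
    loss = ctc(log_probs, targets, input_lengths, target_lengths)
    optim.zero_grad(); loss.backward(); optim.step()
    return loss.item()
```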
Extracting Parallel Paragraphs from Common Crawl
1804.10413
Table 1: Prealigned Data (CzEng 1.0); Experiment: Effectiveness
['[BOLD] Recall (%)', '63.02']
[['[BOLD] Precision (%)', '93.74']]
In the preliminary alignments of the tail, 50.30% of all the top candidates are exact matches. However, this ratio is more satisfactory in the scored alignments—71.30%. This means that the scoring of the preliminary alignments is an important step in the whole process. Of all the existing pairs of parallel sentences 63.02% were detected, and 93.74% of all the detected pairs were correct.
Towards Hate Speech Detection at Large via Deep Generative Modeling
2005.06370
Table 4: Cross-Dataset performance, comparing detector training using baseline to augmented-baseline training sets
['Trainset- Testset', 'Accuracy Baseline', 'Accuracy Augmented', 'Accuracy (%)', 'Precision Baseline', 'Precision Augmented', 'Precision (%)', 'Recall Baseline', 'Recall Augmented', 'Recall (%)', 'F1 Baseline', 'F1 Augmented', 'F1 (%)']
[['FN-SE', '0.613', '0.645', '+5.22', '0.689', '0.570', '-17.27', '0.155', '0.644', '+315.48', '0.253', '0.605', '+139.13'], ['FN-WH', '0.846', '0.850', '+0.47', '0.636', '0.528', '-16.98', '0.063', '0.507', '+704.76', '0.114', '0.517', '+353.50'], ['DV-WH', '0.794', '0.820', '+3.27', '0.278', '0.441', '+58.63', '0.189', '0.507', '+168.25', '0.225', '0.472', '+109.77'], ['DV-WS', '0.652', '0.832', '+27.60', '0.471', '0.870', '+84.71', '0.107', '0.599', '+459.81', '0.174', '0.709', '+307.47'], ['SE-WS', '0.662', '0.754', '+13.89', '0.875', '0.872', '-0.34', '0.017', '0.331', '+1,847.06', '0.034', '0.480', '+1,311.76'], ['SE-FN', '0.926', '0.924', '-0.21', '0.694', '0.489', '-29.53', '0.031', '0.226', '+629.03', '0.059', '0.309', '+423.72'], ['WH-SE', '0.584', '0.593', '+1.54', '0.572', '0.522', '-8.74', '0.069', '0.450', '+552.17', '0.124', '0.484', '+290.32'], ['WH-DV', '0.752', '0.777', '+3.32', '0.714', '0.610', '-14.56', '0.053', '0.364', '+586.79', '0.099', '0.456', '+360.60'], ['WS-WH', '0.749', '0.824', '+10.01', '0.332', '0.450', '+35.54', '0.579', '0.489', '-15.54', '0.422', '0.469', '+11.13'], ['WS-FN', '0.822', '0.895', '+8.88', '0.135', '0.233', '+72.59', '0.255', '0.176', '-30.98', '0.177', '0.201', '+13.55'], ['Average', '0.740', '0.791', '[BOLD] +6.95', '0.539', '0.559', '[BOLD] +3.50', '0.151', '0.429', '[BOLD] +182.81', '0.168', '0.470', '[BOLD] +179.71']]
In another set of experiments, we re-trained the detector with each of the corresponding augmented training sets, and evaluated on the same cross-dataset combinations as in the first experiment. The average metrics over all cross-dataset pairs reveal a consistent increase in accuracy (+6.9%), precision (+3.5%), recall (+182.8%) and F1 (+179.7%). That is, augmenting the original labeled training sets with the automatically generated text sequences resulted in a dramatic increase in recall, while maintaining good levels of precision. Overall, this trend is reflected by a steep increase in the combined F1 measure. These results thus provide further confirmation of the benefits of augmenting the data with generated sentences and of using a broader data distribution by utilizing a pre-trained language model.
Towards Hate Speech Detection at Large via Deep Generative Modeling
2005.06370
Table 2: ROUGE-L scores of data generated by the GPT-2 language model per dataset. These values indicate on low similarity between the generated sequences and the sequences on which the model has been fine-tuned, for each dataset and class.
['Source dataset', 'Generated hate', 'Generated non-hate']
[['WS', '0.12', '0.05'], ['DV', '0.07', '0.05'], ['FN', '0.11', '0.14'], ['WH', '0.09', '0.16'], ['SE', '0.05', '0.03']]
We further wished to verify that the generated sequences depart from the seed labeled examples, forming new and diverse language usage, as opposed to duplicating or repeating parts of the input examples. The ROUGE-L similarity ranges from 0 (nothing in common) to 1 (identical sequences). As shown, the similarity scores are low, indicating that the rich language model encoded within GPT-2 was effectively leveraged for generating new and different text sequences.
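For reference, a bare-bones ROUGE-L F-measure between two token sequences, based on the longest common subsequence; real evaluations typically rely on an existing ROUGE package, so this is only to make the similarity measure concrete:

```python
def lcs_length(a, b):
    """Classic dynamic-programming longest common subsequence length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(candidate: str, reference: str) -> float:
    c, r = candidate.split(), reference.split()
    lcs = lcs_length(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)   # F1; 0 = nothing shared, 1 = identical sequences
```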
Towards Hate Speech Detection at Large via Deep Generative Modeling
2005.06370
Table 3: Intra-Dataset classification results, comparing the baseline and augmented-baseline training sets
['Dataset', 'Accuracy Baseline', 'Accuracy Augmented', 'Accuracy (%)', 'Precision Baseline', 'Precision Augmented', 'Precision (%)', 'Recall Baseline', 'Recall Augmented', 'Recall (%)', 'F1 Baseline', 'F1 Augmented', 'F1 (%)']
[['WS', '0.967', '0.977', '+1.03', '0.968', '0.989', '+2.17', '0.936', '0.943', '+0.75', '0.952', '0.966', '+1.47'], ['WH', '0.891', '0.872', '-2.13', '0.862', '0.600', '-30.40', '0.375', '0.582', '+55.20', '0.523', '0.591', '+13.00'], ['SE', '0.715', '0.764', '+6.85', '0.901', '0.767', '-14.87', '0.367', '0.635', '+73.02', '0.522', '0.695', '+33.14'], ['DV', '0.922', '0.935', '+1.41', '0.929', '0.923', '-0.65', '0.753', '0.814', '+8.10', '0.832', '0.865', '+3.97'], ['FN', '0.956', '0.942', '-1.46', '0.874', '0.644', '-26.32', '0.337', '0.515', '+52.82', '0.486', '0.573', '+17.90'], ['Combined', '0.904', '0.905', '+0.11', '0.895', '0.718', '-19.77', '0.433', '0.642', '+48.26', '0.584', '0.678', '+16.10']]
Intra-Dataset Experiments. The results indicate that augmenting the training sets with automatically generated examples leads to improvement in Recall and F1 in most cases, peaking at +73.0% and +33.1%, respectively, for the SE dataset. Precision, on the other hand, decreases for most datasets, and Accuracy changes only mildly. This is not surprising, as the training and test sets belong to the same distribution and are highly similar in this setup, whereas the generated text sequences introduce language diversity, as well as some label noise. For the combined dataset, which is relatively diverse, there is still a boost from training-set augmentation of +48.2% in Recall and +16.1% in F1; Precision decreases by 19.7% and Accuracy is almost unchanged. Therefore, dataset augmentation achieves a dramatic drop in false negatives (missed hate speech sequences); that is, significantly more hate speech is detected, which is also evident in the improved F1. The decrease in Precision indicates more false positives (non-hate sequences classified as hate); however, these are less severe than missed hate speech sequences. These curves clearly demonstrate that training-data augmentation yields significantly improved Recall (and consequently F1) across this range.
The Importance of Automatic Syntactic Features in Vietnamese Named Entity Recognition
1705.10610
Table 4: Performance of our model when using one and two layers
['Entity', 'Bi-LSTM Pre.', 'Bi-LSTM Rec.', 'Bi-LSTM [ITALIC] F1', 'LSTM Pre.', 'LSTM Rec.', 'LSTM [ITALIC] F1']
[['LOC', '83.63', '82.48', '83.05', '74.60', '77.38', '75.96'], ['MISC', '84.14', '78.37', '81.07', '2.15', '2.04', '2.09'], ['ORG', '49.85', '50.51', '50.07', '32.22', '34.60', '33.60'], ['PER', '72.77', '65.73', '69.06', '67.95', '60.73', '64.12'], ['ALL', '75.88', '72.26', '[BOLD] 74.02', '66.61', '65.04', '65.80']]
Effect of Bidirectional Learning. In the second experiment, we examine the benefit of accessing both past and future contexts by comparing the performance of RNN, LSTM and Bi-LSTM models. In this task, the RNN model fails because it faces the gradient vanishing/exploding problem when training with long-range dependencies (132 time steps), leading to unstable values of the cost function.
The Importance of Automatic Syntactic Features in Vietnamese Named Entity Recognition
1705.10610
Table 5: Performance of our model when using one and two layers
['Entity', 'Two layers Pre.', 'Two layers Rec.', 'Two layers [ITALIC] F1', 'One layer Pre.', 'One layer Rec.', 'One layer [ITALIC] F1']
[['LOC', '83.63', '82.48', '83.05', '82.22', '80.64', '81.41'], ['MISC', '84.14', '78.37', '81.07', '85.15', '74.29', '79.32'], ['ORG', '49.85', '50.51', '50.07', '44.10', '40.88', '42.39'], ['PER', '72.77', '65.73', '69.06', '72.70', '62.15', '66.91'], ['ALL', '75.88', '72.26', '[BOLD] 74.02', '74.83', '68.91', '71.74']]
Number of Bi-LSTM Layers. In the third experiment, we investigate the improvement gained by adding more Bi-LSTM layers. We observe a significant improvement when using two layers of Bi-LSTM: the performance increases from 71.74% to 74.02%.
The Importance of Automatic Syntactic Features in Vietnamese Named Entity Recognition
1705.10610
Table 7: Performance of our model when adding more features
['Features', 'Pre.', 'Rec.', '[ITALIC] F1']
[['Word', '75.88', '72.26', '74.02'], ['Word+POS', '84.23', '87.64', '85.90'], ['Word+Chunk', '90.73', '83.18', '86.79'], ['Word+Case', '83.68', '84.45', '84.06'], ['Word+Regex', '76.58', '71.86', '74.13'], ['Word+POS+Chunk+Case+Regex', '90.25', '92.55', '91.39'], ['Word+POS+Chunk+Regex', '91.09', '93.03', '[BOLD] 92.05']]
As shown in the previous experiments, using only word features in deep learning models is not enough to achieve the state-of-the-art result. In particular, the accuracy of this model is only 74.02%. This result is far lower in comparison to that of state-of-the-art systems for Vietnamese NER. In the following experiments, we add more useful features to enhance the performance of our deep learning model.
FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization
2005.03754
Table 8: Pearson (P) and Spearman (S) correlation between human-annotated faithfulness scores and ROUGE scores of content selection (computed between the reference and the output sentence). High content selection scores (typical ROUGE score for summarization) do not necessarily imply faithfulness of the summary.
['Metric', 'CNN/DM P', 'CNN/DM S', 'XSum P', 'XSum S']
[['ROUGE-1', '15.31^{**}', '14.92^{**}', '5.44', '5.79'], ['ROUGE-2', '15.10^{**}', '16.39^{**}', '8.25', '6.79'], ['ROUGE-L', '13.33^{**}', '13.35^{**}', '4.61', '3.97']]
Current evaluation metrics for summarization produce a single measure of the overall quality of the summary. Typically, the output summary is compared against the reference summary in terms of n-gram overlap. These metrics mainly evaluate content selection, i.e., whether the content of the output is similar to the content of the reference. In contrast, to evaluate faithfulness, we compare the output summary against the source document. One natural question that follows is whether high content matching is sufficient for faithfulness. We compute the correlation coefficients between human-annotated faithfulness scores and ROUGE scores computed from the reference and the output sentence. For XSum, there is no significant correlation between the content selection metrics and faithfulness. This suggests that content selection and faithfulness should be measured separately as opposed to using a unified score.
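A minimal sketch of how such correlations are typically computed with scipy; the per-summary faithfulness and ROUGE arrays are hypothetical placeholders, and scaling by 100 only mirrors the table's presentation:

```python
from scipy.stats import pearsonr, spearmanr

def correlate(faithfulness_scores, rouge_scores):
    """Return Pearson and Spearman correlations (x100, as in the table) with their p-values."""
    p_r, p_p = pearsonr(faithfulness_scores, rouge_scores)
    s_r, s_p = spearmanr(faithfulness_scores, rouge_scores)
    return {"pearson": 100 * p_r, "pearson_p": p_p,
            "spearman": 100 * s_r, "spearman_p": s_p}

# e.g. correlate(human_faithfulness, rouge_l_scores)  # hypothetical per-summary score arrays
```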
Neural Naturalist: Generating Fine-Grained Image Comparisons
1909.04101
Table 2: Experimental results for comparative paragraph generation on the proposed dataset. For human captions, mean and standard deviation are given for a one-vs-rest scheme across twenty-five runs. We observed that CIDEr-D scores had little correlation with description quality. The Neural Naturalist model benefits from a strong joint encoding and Transformer-based comparative module, achieving the highest BLEU-4 and ROUGE-L scores.
['[EMPTY]', 'Dev BLEU-4', 'Dev ROUGE-L', 'Dev CIDEr-D', 'Test BLEU-4', 'Test ROUGE-L', 'Test CIDEr-D']
[['Most Frequent', '0.20', '0.31', '[BOLD] 0.42', '0.20', '0.30', '[BOLD] 0.43'], ['Text-Only', '0.14', '0.36', '0.05', '0.14', '0.36', '0.07'], ['Nearest Neighbor', '0.18', '0.40', '0.15', '0.14', '0.36', '0.06'], ['CNN + LSTM Vinyals et al. ( 2015 )', '0.22', '0.40', '0.13', '0.20', '0.37', '0.07'], ['CNN + Attn. + LSTM Xu et al. ( 2015 )', '0.21', '0.40', '0.14', '0.19', '0.38', '0.11'], ['Neural Naturalist\xa0– Simple Joint Encoding', '0.23', '0.44', '0.23', '-', '-', '-'], ['Neural Naturalist\xa0– No Comparative Module', '0.09', '0.27', '0.09', '-', '-', '-'], ['Neural Naturalist\xa0– Small Decoder', '0.22', '0.42', '0.25', '-', '-', '-'], ['Neural Naturalist\xa0– Full', '[BOLD] 0.24', '[BOLD] 0.46', '0.28', '[BOLD] 0.22', '[BOLD] 0.43', '0.25'], ['Human', '0.26 +/- 0.02', '0.47 +/- 0.01', '0.39 +/- 0.04', '0.27 +/- 0.01', '0.47 +/- 0.01', '0.42 +/- 0.03']]
We observe improvement across BLEU-4 and ROUGE-L scores compared to baselines. Curiously, we observe that the CIDEr-D metric is susceptible to common patterns in the data; our model, when stopped at its highest CIDEr-D score, outputs a variant of “these animals appear exactly the same” for 95% of paragraphs, nearly mimicking the behavior of the most frequent paragraph (Freq.) baseline. The corpus-level behavior of CIDEr-D gives these outputs a higher score. We observed anecdotally that higher-quality outputs correlated with the ROUGE-L score, which we verify using a human evaluation (paragraph after next).
Neural Naturalist: Generating Fine-Grained Image Comparisons
1909.04101
Table 4: Human evaluation results on 120 test set samples, twenty per column. Scale: -1 (perfectly wrong) to 1 (perfectly correct). Columns are ordered left-to-right by increasing distance. Our model outperforms baselines for several distances, though highly similar comparisons still prove difficult.
['[EMPTY]', 'Visual', 'Species', 'Genus', 'Family', 'Order', 'Class']
[['Freq.', '0.00', '0.00', '0.00', '0.00', '0.00', '0.00'], ['Text-Only', '0.00', '-0.10', '-0.05', '0.00', '0.15', '-0.15'], ['CNN + LSTM', '-0.15', '[BOLD] 0.20', '0.15', '[BOLD] 0.50', '0.40', '0.15'], ['CNN + Attn. + LSTM', '[BOLD] 0.15', '0.15', '0.15', '-0.05', '0.05', '0.20'], ['Neural Naturalist', '0.10', '-0.10', '[BOLD] 0.35', '0.40', '[BOLD] 0.45', '[BOLD] 0.55'], ['Human', '0.55', '0.55', '0.85', '1.00', '1.00', '1.00']]
In this measure, we see that the frequency and text-only baselines now fall flat, as expected. The frequency baseline never receives any points, and the text-only baseline is often penalized for incorrectly guessing. Our model is successful at making distinctions between visually distinct species (the Genus column and ones further right), which is near the challenge level of current fine-grained visual classification tasks. However, it struggles on the two data subsets with the highest visual similarity (Visual, Species). The significant gap between all methods and human performance in these columns indicates that ultra fine-grained distinctions are still possible for humans to describe, but pose a challenge for current models to capture.
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
1701.06538
Table 8: Model comparison on 100 Billion Word Google News Dataset
['Model', 'Test Perplexity', 'Test Perplexity', 'ops/timestep (millions)', '#Params excluding embed. & softmax', 'Total #Params', 'TFLOPS per GPU']
[['[EMPTY]', '.1 epochs', '1 epoch', '[EMPTY]', '(millions)', '(billions)', '(observed)'], ['Kneser-Ney 5-gram', '67.1', '45.3', '0.00001', '[EMPTY]', '76.0', '[EMPTY]'], ['4xLSTM-512', '54.5', '47.0', '8.4', '8.4', '0.1', '[BOLD] 1.23'], ['MoE-32', '48.5', '40.4', '8.4', '37.8', '0.1', '0.83'], ['MoE-256-h', '42.8', '35.3', '8.4', '272.9', '0.4', '1.11'], ['MoE-1024-h', '40.3', '32.7', '8.5', '1079.0', '1.2', '1.14'], ['MoE-4096-h', '38.9', '30.9', '8.6', '4303.4', '4.4', '1.07'], ['MoE-16384-h', '[BOLD] 38.2', '29.7', '8.8', '17201.0', '17.3', '0.96'], ['MoE-65536-h', '[BOLD] 38.2', '[BOLD] 28.9', '9.2', '68791.0', '68.9', '0.72'], ['MoE-131072-h', '39.8', '29.2', '9.7', '137577.6', '137.7', '0.30']]
We evaluate our model using perplexity on a holdout dataset. Perplexity after 100 billion training words is 39% lower for the 68-billion-parameter MoE model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs.
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
1701.06538
Table 6: Experiments with different combinations of losses.
['[ITALIC] wimportance', '[ITALIC] wload', 'Test Perplexity', '[ITALIC] CV( [ITALIC] Importance( [ITALIC] X))', '[ITALIC] CV( [ITALIC] Load( [ITALIC] X))', '[ITALIC] max( [ITALIC] Load( [ITALIC] X)) [ITALIC] mean( [ITALIC] Load( [ITALIC] X))']
[['0.0', '0.0', '39.8', '3.04', '3.01', '17.80'], ['0.2', '0.0', '[BOLD] 35.6', '0.06', '0.17', '1.47'], ['0.0', '0.2', '35.7', '0.22', '0.04', '1.15'], ['0.1', '0.1', '[BOLD] 35.6', '0.06', '0.05', '1.14'], ['0.01', '0.01', '35.7', '0.48', '0.11', '1.37'], ['1.0', '1.0', '35.7', '0.03', '0.02', '[BOLD] 1.07']]
All the combinations containing at least one of the two losses led to very similar model quality, whereas having no loss was much worse. Models with higher values of wload had lower loads on the most overloaded expert.
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
1701.06538
Table 7: Model comparison on 1 Billion Word Language Modeling Benchmark. Models marked with * are from (Jozefowicz et al., 2016).
['Model', 'Test Perplexity', 'Test Perplexity', 'ops/timestep (millions)', '#Params excluding embed. & softmax', 'Total #Params', '[ITALIC] Drop- [ITALIC] Prob', 'TFLOPS per GPU']
[['[EMPTY]', '10 epochs', '(final)', '[EMPTY]', '(millions)', '(billions)', '[EMPTY]', '(observed)'], ['Kneser-Ney 5-gram*', '[EMPTY]', '67.6', '0.00001', '[EMPTY]', '1.8', '[EMPTY]', '[EMPTY]'], ['LSTM-512-512*', '[EMPTY]', '54.1', '2.4', '2.4', '0.8', '0.1', '[EMPTY]'], ['LSTM-1024-512*', '[EMPTY]', '48.2', '4.7', '4.7', '0.8', '0.1', '[EMPTY]'], ['LSTM-2048-512*', '45.0', '43.7', '9.4', '9.4', '0.8', '0.1', '0.61'], ['LSTM-2048-512', '44.7', '[EMPTY]', '9.4', '9.4', '0.8', '0.1', '1.21'], ['4xLSTM-512', '46.0', '[EMPTY]', '8.4', '8.4', '0.8', '0.1', '1.07'], ['MoE-1-Wide', '46.1', '[EMPTY]', '8.4', '8.4', '0.8', '0.1', '1.29'], ['MoE-1-Deep', '45.7', '[EMPTY]', '8.4', '8.4', '0.8', '0.1', '1.29'], ['MoE-4', '45.0', '[EMPTY]', '8.4', '8.4', '0.8', '0.1', '0.52'], ['MoE-32', '39.7', '[EMPTY]', '8.4', '37.8', '0.9', '0.1', '0.87'], ['MoE-256', '35.7', '[EMPTY]', '8.6', '272.9', '1.1', '0.1', '0.81'], ['MoE-256-h', '36.0', '[EMPTY]', '8.4', '272.9', '1.1', '0.1', '0.89'], ['MoE-1024-h', '34.6', '[EMPTY]', '8.5', '1079.0', '1.9', '0.2', '0.90'], ['MoE-4096-h', '34.1', '[EMPTY]', '8.9', '4303.4', '5.1', '0.2', '0.74'], ['2xLSTM-8192-1024*', '34.7', '30.6', '151.0', '151.0', '1.8', '0.25', '1.09'], ['MoE-34M', '31.3', '[EMPTY]', '33.8', '4313.9', '6.0', '0.3', '1.22'], ['MoE-143M', '[BOLD] 28.0', '[EMPTY]', '142.7', '4371.1', '6.0', '0.4', '[BOLD] 1.56']]
For our baseline models with no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computational efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model, which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA.
We Built a Fake News & Click-bait Filter:What Happened Next Will Blow Your Mind!
1803.03786
Table 2: Performance of the individual groups of hand-crafted features.
['[BOLD] Features', '[BOLD] P', '[BOLD] R', '[BOLD] F1', '[BOLD] Acc']
[['Lexical', '75.53', '74.59', '75.02', '79.89'], ['Stylometric', '74.35', '65.99', '67.68', '77.52'], ['Grammatical', '73.23', '50.60', '42.99', '71.48'], ['Embeddings', '61.48', '53.95', '51.67', '71.22']]
We can see that, among the hand-crafted features, the lexical features yield the best results, i.e., words are the most indicative features. The good results of the stylometric features indicate that the intricacies of language use are highly discriminative. The next group is the grammatical features, which show good performance in terms of Precision. The last group is the embedding features, which, although having low individual performance, contribute to the overall performance of the system, as shown in the next paragraph.
We Built a Fake News & Click-bait Filter:What Happened Next Will Blow Your Mind!
1803.03786
Table 1: Words most strongly associated with the fake news class.
['[BOLD] Original word', '[BOLD] Translation', '[BOLD] PMI']
[['chemtrails', 'chemtrails', '0.92'], ['феноменните', 'the phenomenal', '0.94'], ['следете в', 'follow in', '0.97'], ['тайнствена', 'mysterious', '0.95'], ['скрит', 'hidden', '0.84']]
Fact-checking lexicon: Using lexicons of sentiment words has been shown to be very successful for the task of sentiment analysis (Mohammad and Turney), and we applied the same idea to extract a fact-checking lexicon. In particular, we use point-wise mutual information (PMI) to find terms (words, word bi-grams, and named entities) that are highly correlated with the fake/factual news class. We calculated the PMI scores for uni-grams, bi-grams, and extracted named entities. We can see in the table some words that grab people's attention, but are not very informative by themselves, such as mysterious or phenomenon. These words are largely context-independent and are likely to remain stable in their usage across different domains and even over an extended period of time. Thus, they should be useful beyond this task and this dataset.
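A compact sketch of the PMI-based lexicon extraction described above, assuming term occurrences per document have already been tokenized; the smoothing, candidate filtering, and any normalization of the final PMI values are simplified away:

```python
import math
from collections import Counter

def pmi_lexicon(fake_docs, all_docs, top_k=20):
    """Score terms by PMI with the fake-news class: log p(term, fake) / (p(term) * p(fake))."""
    term_fake = Counter(t for doc in fake_docs for t in set(doc))
    term_all = Counter(t for doc in all_docs for t in set(doc))
    n_all, n_fake = len(all_docs), len(fake_docs)
    p_fake = n_fake / n_all
    scores = {}
    for term, df_all in term_all.items():
        df_fake = term_fake.get(term, 0)
        if df_fake == 0:
            continue
        p_term = df_all / n_all          # document frequency of the term
        p_joint = df_fake / n_all        # joint frequency of (term, fake class)
        scores[term] = math.log(p_joint / (p_term * p_fake))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# docs are token lists; the same routine can be run over bi-grams or extracted named entities.
```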
We Built a Fake News & Click-bait Filter:What Happened Next Will Blow Your Mind!
1803.03786
Table 3: Performance of different models.
['[BOLD] Feature Group', '[BOLD] P', '[BOLD] R', '[BOLD] F1', '[BOLD] Acc']
[['Baseline', '35.61', '50.00', '41.59', '71.22'], ['TF.IDF', '75.53', '74.59', '75.02', '79.89'], ['AttNN', '78.52', '78.74', '78.63', '81.99'], ['TF.IDF & AttNN', '79.89', '79.40', '79.63', '83.44'], ['TF.IDF & Feats & AttNN', '80.07', '79.49', '79.77', '83.57']]
Evaluating the final model, we set as a baseline the prediction of the majority class, i.e., the fake news class. This baseline has an F1 of 41.59% and accuracy of 71.22%. Another stable baseline, apart from just taking the majority class, is the TF.IDF bag-of-words approach, which sets a high bar for the general model score. We then observe how much the attention mechanism embeddings improve the score (AttNN). Finally, we add the hand-crafted features (Feats), which further improve the performance. From the results, we can conclude that both the attention-based task-specific embeddings and the manual features are important for the task of finding fake news.
Controllable Length Control Neural Encoder-Decoder via Reinforcement Learning
1909.09492
Table 1: Example summaries of four LC models. (Note that “gunners” is a nickname of arsenal)
['Source article Reference summary', 'arsenal chairman peter hill-wood revealed thursday that he fears french striker thierry henry will leave highbury at the end of the season . arsenal boss fears losing henry', 'arsenal chairman peter hill-wood revealed thursday that he fears french striker thierry henry will leave highbury at the end of the season . arsenal boss fears losing henry']
[['model', 'desired length and sampled summaries (true length)', 'desired length and sampled summaries (true length)'], ['[EMPTY]', '25', 'arsenal chief quits to leave (24)'], ['LenLInit', '45', 'gunners chief fears french striker will leave the end (45)'], ['[EMPTY]', '65', 'gunners chief says he will leave as he fears french striker will leave (58)'], ['[EMPTY]', '25', 'arsenal fears henry henry (22)'], ['LenInit', '45', 'arsenal fears french striker henry will leave arsenal (46)'], ['[EMPTY]', '65', 'arsenal fears french striker henry will leave arsenal says arsenal chairman (65)'], ['[EMPTY]', '25', 'arsenal fear henry will leave (25)'], ['LenMC', '45', 'arsenal ’s arsenal worried about henry ’s return home (45)'], ['[EMPTY]', '65', 'arsenal ’s arsenal worried about french striker henry will leave wednesday (64)'], ['[EMPTY]', '25', 'arsenal ’s henry to quit again (25)'], ['LenEmb', '45', 'arsenal chairman fears henry ’s fate of henry ’s boots (45)'], ['[EMPTY]', '65', 'arsenal chairman fears french striker henry says he ’s will leave retirement (65)']]
It is also observed that LenLInit and LenMC perform better on short-sentence summaries in this case.
Controllable Length Control Neural Encoder-Decoder via Reinforcement Learning
1909.09492
Table 2: Results of ML training on standard “test-1951”
['model name', 'R-1', 'R-2', 'R-L', 'svar']
[['Summarization models', 'Summarization models', 'Summarization models', 'Summarization models', 'Summarization models'], ['ABS', '29.55', '11.32', '26.42', '-'], ['ABS+', '29.76', '11.88', '26.96', '-'], ['Luong-NMT', '33.10', '14.45', '30.71', '-'], ['RAS-LSTM', '32.55', '14.70', '30.03', '-'], ['RAS-ELman', '33.78', '15.97', '31.15', '-'], ['seq2seq (our impl.)', '32.24', '14.92', '30.21', '14.23'], ['Length-control models', 'Length-control models', 'Length-control models', 'Length-control models', 'Length-control models'], ['LenLInit (our)', '[BOLD] 30.47', '[BOLD] 13.35', '[BOLD] 28.40', '2.15'], ['LenInit', '29.97', '13.03', '28.07', '2.11'], ['LenMC (our)', '29.45', '12.65', '27.41', '0.87'], ['LenEmb', '28.83', '11.89', '26.92', '[BOLD] 0.85']]
Although the evaluation score is not the sole objective of this research, it is of interest how exactly the score is degraded by the LC capacity. After individually comparing the two WLI models and the two RLI models, we find that the two proposed models, LenLInit and LenMC, only slightly weaken LC capacity while clearly improving the scores.
Controllable Length Control Neural Encoder-Decoder via Reinforcement Learning
1909.09492
Table 3: Performance of length control RL in “test-4k” (ML results also included for comparison). Obviously highest scores (0.4 larger than the second best) are in bolded font, the scores in italic font are significantly worse score (2 lower than best socre).
['model', 'parameter', '25 R-1', '25 R-2', '25 R-L', '45 R-1', '45 R-2', '45 R-L', '65 R-1', '65 R-2', '65 R-L', 'svar(±std)']
[['ML', 'ML', 'ML', 'ML', 'ML', 'ML', 'ML', 'ML', 'ML', 'ML', 'ML', 'ML'], ['LenLInit', '[EMPTY]', '[BOLD] 39.03', '[BOLD] 17.68', '[BOLD] 37.46', '42.04', '20.47', '39.87', '[BOLD] 39.40', '[BOLD] 18.71', '[BOLD] 36.96', '3.96'], ['LenInit', '[EMPTY]', '37.36', '16.76', '35.92', '42.11', '20.55', '39.83', '38.67', '18.23', '36.33', '2.98'], ['LenMC', '[EMPTY]', '37.10', '16.68', '35.72', '41.38', '19.98', '38.99', '37.93', '17.87', '35.51', '1.05'], ['LenEmb', '[EMPTY]', '[ITALIC] 34.77', '[ITALIC] 14.85', '[ITALIC] 33.41', '[ITALIC] 40.00', '[ITALIC] 18.43', '[ITALIC] 37.74', '[ITALIC] 36.96', '[ITALIC] 16.89', '[ITALIC] 34.49', '[BOLD] 0.97'], ['SCST', 'SCST', 'SCST', 'SCST', 'SCST', 'SCST', 'SCST', 'SCST', 'SCST', 'SCST', 'SCST', 'SCST'], ['LenLInit', '[EMPTY]', '[BOLD] 42.90', '[BOLD] 20.10', '[BOLD] 40.78', '[BOLD] 43.48', '[BOLD] 20.83', '[BOLD] 40.99', '[BOLD] 42.61', '[BOLD] 20.37', '[BOLD] 40.06', '[ITALIC] 11.0±2.57'], ['LenInit', '[EMPTY]', '39.55', '[ITALIC] 17.45', '[ITALIC] 37.85', '42.75', '20.20', '40.35', '40.79', '19.01', '38.37', '[ITALIC] 8.90±2.12'], ['LenMC', '[EMPTY]', '40.38', '18.14', '38.52', '42.14', '19.98', '39.48', '[ITALIC] 38.36', '[ITALIC] 17.75', '[ITALIC] 35.65', '2.46±0.47'], ['LenEmb', '[EMPTY]', '[ITALIC] 37.77', '[ITALIC] 15.42', '[ITALIC] 35.88', '[ITALIC] 40.40', '[ITALIC] 18.24', '[ITALIC] 37.75', '[ITALIC] 37.48', '[ITALIC] 16.87', '[ITALIC] 34.77', '[BOLD] 1.59±0.12'], ['MTS-RL', 'MTS-RL', 'MTS-RL', 'MTS-RL', 'MTS-RL', 'MTS-RL', 'MTS-RL', 'MTS-RL', 'MTS-RL', 'MTS-RL', 'MTS-RL', 'MTS-RL'], ['LenLInit', '[ITALIC] dth=16', '[BOLD] 42.64', '[BOLD] 20.13', '[BOLD] 40.50', '[BOLD] 43.12', '20.80', '[BOLD] 40.62', '[BOLD] 41.44', '[BOLD] 19.81', '[BOLD] 38.91', '[ITALIC] 8.54±0.78'], ['LenLInit', '[ITALIC] dth=8', '41.43', '19.01', '39.46', '42.63', '20.55', '40.23', '39.81', '19.03', '37.43', '5.14±0.60'], ['LenLInit', '[ITALIC] dth=4', '40.66', '18.43', '38.85', '42.46', '20.45', '40.02', '39.13', '18.61', '36.70', '[BOLD] 3.87±0.10'], ['LenInit', '[ITALIC] dth=16', '[BOLD] 40.22', '[BOLD] 17.88', '[BOLD] 38.42', '[BOLD] 42.77', '20.36', '[BOLD] 40.31', '[BOLD] 40.32', '[BOLD] 18.83', '[BOLD] 37.69', '[ITALIC] 6.17±0.46'], ['LenInit', '[ITALIC] dth=8', '39.52', '17.75', '37.79', '42.42', '20.19', '39.95', '39.16', '18.28', '36.57', '3.50±0.56'], ['LenInit', '[ITALIC] dth=4', '38.62', '17.31', '36.98', '42.26', '20.29', '39.82', '38.52', '18.00', '36.04', '[BOLD] 2.79±0.13'], ['LenMC', '[ITALIC] dth=1', '38.56', '16.53', '36.89', '41.33', '19.67', '39.04', '37.83', '17.66', '35.39', '1.01±0.07'], ['LenMC', '[ITALIC] dth=0', '38.60', '[BOLD] 16.98', '36.98', '[BOLD] 41.89', '20.08', '39.37', '38.18', '17.93', '35.68', '0.89±0.02'], ['SCD-RL', 'SCD-RL', 'SCD-RL', 'SCD-RL', 'SCD-RL', 'SCD-RL', 'SCD-RL', 'SCD-RL', 'SCD-RL', 'SCD-RL', 'SCD-RL', 'SCD-RL'], ['LenLInit', '[ITALIC] λ=0.1', '[BOLD] 41.22', '[BOLD] 18.95', '[BOLD] 39.31', '[BOLD] 42.77', '20.65', '[BOLD] 40.34', '[BOLD] 40.14', '[BOLD] 19.22', '[BOLD] 37.76', '6.05±0.45'], ['LenLInit', '[ITALIC] λ=0.8', '40.12', '18.15', '38.41', '42.25', '20.46', '39.93', '39.12', '18.59', '36.67', '[BOLD] 3.84±0.05'], ['LenInit', '[ITALIC] λ=0.1', '[BOLD] 40.45', '[BOLD] 17.88', '[BOLD] 38.48', '[BOLD] 42.88', '20.08', '[BOLD] 40.30', '[BOLD] 40.26', '[BOLD] 18.47', '[BOLD] 37.38', '4.52±0.14'], ['LenInit', '[ITALIC] λ=0.8', '38.39', '17.07', '36.75', '42.14', '20.21', '39.75', '38.45', '17.98', '35.88', '[BOLD] 2.64±0.06'], ['LenMC', '[ITALIC] λ=0.1', '[BOLD] 39.86', '[BOLD] 17.69', '[BOLD] 38.13', '[BOLD] 42.27', '20.19', '[BOLD] 39.78', '[BOLD] 38.33', '18.02', '[BOLD] 35.78', '1.46±0.04'], ['LenMC', '[ITALIC] λ=0.4', '38.87', '17.28', '37.28', '41.64', '19.95', '39.20', '37.75', '17.83', '35.41', '[BOLD] 1.15±0.02']]
We evaluate our models with sentence lengths of 25, 45 and 65, which represent short, medium and long sentences, respectively. Results may vary between training runs since RL is usually unstable, so we repeat training multiple times for each model and average the results. We run MTS-RL experiments on the three LC models. For the WLI models LenLInit and LenInit, accuracy and svar both rise as the selected dth increases, which means the hyper-parameter dth in MTS-RL can be used to adjust the LC capacity. However, for the RLI model LenMC, the results show no obvious distinction in scores when we use different dth. Hence, we adopt the SCD-RL training algorithm for LenMC; the results show that our SCD-RL algorithm can control LC capacity for the RLI model as MTS-RL does for the WLI models, and SCD-RL can also manage the LC capacity for WLI models. Overall, the two RL training algorithms prevent the model from length-control collapse and make this capacity controllable via their own hyper-parameters.
Calculating the similarity between words and sentences using a lexical database and corpus statistics
1802.05667
TABLE III: Synsets and corresponding shortest path distances from WordNet
['Synset Pair', 'Shortest Path Distance']
[['Synset(‘river.n.01’) - Synset(‘bank.n.01’)', '8'], ['Synset(‘river.n.01’) - Synset(‘bank.n.09’)', '10'], ['Synset(‘river.n.01’) - Synset(‘bank.n.06’)', '11']]
When comparing two sentences, we have many such word pairs which have multiple synsets. Therefore, not considering the proper synset in the context of the sentence could introduce errors at an early stage of the similarity calculation. Hence, the sense of the word significantly affects the overall similarity measure. Identifying the sense of a word is part of the 'word sense disambiguation' research area. We use the 'max similarity' algorithm, Eq. (1): $\arg\max_{synset(a)} \left( \sum_{i}^{n} \max_{synset(i)} \, sim(i,a) \right)$ (1)
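A sketch of the 'max similarity' sense-selection rule in Eq. (1) using NLTK's WordNet interface; path similarity is used here as the pairwise measure, which is an assumption, since the paper may combine several similarity measures:

```python
# Requires: import nltk; nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def max_similarity_sense(target_word: str, context_words):
    """Pick the synset of target_word that maximizes summed similarity to the context (Eq. 1)."""
    best_synset, best_score = None, float("-inf")
    for cand in wn.synsets(target_word):
        score = 0.0
        for ctx in context_words:
            sims = [cand.path_similarity(s) for s in wn.synsets(ctx)]
            sims = [s for s in sims if s is not None]
            if sims:
                score += max(sims)   # max over the context word's synsets, as in Eq. (1)
        if score > best_score:
            best_synset, best_score = cand, score
    return best_synset

# e.g. max_similarity_sense("bank", ["river", "water", "flow"]) should favour the riverbank sense.
```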
Upcycle Your OCR: Reusing OCRs for Post-OCR Text Correction in Romanised Sanskrit
1809.02147
Table 4: Performance in terms of CRR, WRR for Google OCR
['[BOLD] Model', '[BOLD] Bhagavad Gītā Ins', '[BOLD] Bhagavad Gītā Del', '[BOLD] Bhagavad Gītā Sub', '[BOLD] Sahaśranāma Ins', '[BOLD] Sahaśranāma Del', '[BOLD] Sahaśranāma Sub', '[BOLD] System errors Ins', '[BOLD] System errors Del', '[BOLD] System errors Sub']
[['OCR', '23', '63', '1868', '73', '696', '1596', '–', '–', '–'], ['PCRF', '22', '57', '641', '72', '663', '932', '0', '73', '209'], ['[BOLD] CopyNet', '22', '[BOLD] 45', '629', '72', '[BOLD] 576', '561', '10', '5', '52']]
We analyse the reduction in specific error types for PCRF and CopyNet after aligning the predicted string with the ground truth, in terms of insertion, deletion and substitution. We also report the system-induced errors, where a correct component at the input (OCR output) is mispredicted to a wrong output by the model. CopyNet outperforms PCRF in correcting the errors and it also introduces fewer errors of its own. Both CopyNet and PCRF (Schnober et al.) perform well in handling substitution errors, the type which dominated the strings in OCRTest, though neither of the systems was able to correct the insertion errors. Insertion can be seen as a special case of 1-to-many insertion matches, which both systems are ideally capable of handling. We see that for Sahaśranāma, CopyNet corrects about 17.24% of the deletion errors, as against <5% of the deletion errors corrected by PCRF.
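For clarity, a small sketch of how insertion, deletion, and substitution counts can be obtained by aligning a predicted string to the ground truth with Levenshtein dynamic programming; the authors' exact alignment tool may differ, so this only illustrates the error typing:

```python
def error_counts(pred: str, truth: str):
    """Return (insertions, deletions, substitutions) from a minimum edit-distance alignment."""
    m, n = len(pred), len(truth)
    # dp[i][j] = (cost, ins, dele, sub) for pred[:i] vs truth[:j]
    dp = [[(0, 0, 0, 0)] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = (i, i, 0, 0)            # extra predicted characters count as insertions
    for j in range(1, n + 1):
        dp[0][j] = (j, 0, j, 0)            # missing ground-truth characters count as deletions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if pred[i - 1] == truth[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                c, a, d, s = dp[i - 1][j]
                ins_opt = (c + 1, a + 1, d, s)           # consume a predicted char: insertion
                c, a, d, s = dp[i][j - 1]
                del_opt = (c + 1, a, d + 1, s)           # consume a truth char: deletion
                c, a, d, s = dp[i - 1][j - 1]
                sub_opt = (c + 1, a, d, s + 1)           # consume both: substitution
                dp[i][j] = min(ins_opt, del_opt, sub_opt)
    _, ins, dele, sub = dp[m][n]
    return ins, dele, sub

# error_counts("arsenl", "arsenal") -> (0, 1, 0): one character missing from the prediction.
```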
Upcycle Your OCR: Reusing OCRs for Post-OCR Text Correction in Romanised Sanskrit
1809.02147
Table 1: OCR performances for different languages with overall CRR, total Insertion, Deletion and Substitution errors.
['[BOLD] Language', '[BOLD] Bhagavad Gītā CRR', '[BOLD] Bhagavad Gītā Ins', '[BOLD] Bhagavad Gītā Del', '[BOLD] Bhagavad Gītā Sub', '[BOLD] Sahaśranāma CRR', '[BOLD] Sahaśranāma Ins', '[BOLD] Sahaśranāma Del', '[BOLD] Sahaśranāma Sub', '[BOLD] Combined CRR']
[['English', '84.92', '23', '63', '1868', '64.06', '73', '696', '1596', '80.08'], ['French', '84.90', '21', '102', '1710', '63.91', '91', '702', '1670', '80.04'], ['Finnish', '82.61', '15', '141', '1902', '61.31', '80', '730', '1821', '78.81'], ['Italian', '83.45', '20', '73', '1821', '62.19', '84', '690', '1673', '79.03'], ['Irish', '84.52', '12', '78', '1810', '63.81', '72', '709', '1841', '79.93'], ['German', '84.40', '33', '72', '1821', '63.79', '87', '723', '1874', '79.12']]
The French alphabet has the highest grapheme overlap with the Sanskrit alphabet (37 of 50), while all other languages have one grapheme less in common with Sanskrit. Hence, we arbitrarily take 5 languages in addition to French and perform our analysis. The table also shows the count of error types made by the OCR after alignment (Jiampojamarn et al.; D'hondt et al.). All the languages have a similar CRR, with English and French leading the list. Based on our observations of the OCR performance, we select English for our further experiments.
Word-based Domain Adaptation for Neural Machine Translation
1906.03129
Table 4: Study on the effect of different smoothing methods for word weights generation. Baseline is the same as before. w.w. without smoothing means the word weights (w.w.) are computed without smoothing in the log domain. w.w. (mean smooth.) indicates smoothing the word scores via using a mean average filter before thresholding and w.w. (gauss. smooth.) indicates using a normal distributed filter before thresholding. The approaches regarding different smoothing methods are described in Section 2.2.
['System', 'BLEU [%]', 'TER [%]']
[['Baseline', '24.37', '61.66'], ['+w.w. without smooth.', '21.38', '66.25'], ['+w.w. (mean smooth.)', '25.99', '60.70'], ['+w.w. (gauss. smooth.)', '26.14', '60.34']]
The word weights generated without any smoothing, where $\hat{s}_t = s_t$, degrade translation quality from 24.37% to 21.38% BLEU and from 61.66% to 66.25% TER, respectively. We need to smooth the word scores before thresholding because the values of $\log P_I(y_t \mid y_{t-n}^{t-1}) - \log P_O(y_t \mid y_{t-n}^{t-1})$ are noisy. If isolated words such as ',' are selected because they have higher scores than the surrounding text, this may cause a rare-vocabulary problem after training.
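A rough sketch of the word-score smoothing step: the raw scores are per-token log-probability differences between the in-domain and out-of-domain language models, smoothed with a mean or Gaussian filter before thresholding. Filter width, sigma, and threshold are assumptions, not the paper's exact settings:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d, gaussian_filter1d

def word_weights(logp_in, logp_out, mode="gauss", threshold=0.0):
    """logp_*: per-token log-probabilities under the in-/out-of-domain language models."""
    scores = np.asarray(logp_in, dtype=float) - np.asarray(logp_out, dtype=float)  # s_t
    if mode == "mean":
        smoothed = uniform_filter1d(scores, size=5)        # \hat{s}_t via a mean filter
    else:
        smoothed = gaussian_filter1d(scores, sigma=1.5)    # \hat{s}_t via a Gaussian filter
    return (smoothed > threshold).astype(float)            # binary word weights w_t

# Without smoothing (\hat{s}_t = s_t), isolated high-scoring tokens such as "," get selected,
# which is the failure mode penalized in the first row of the table above.
```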
Word-based Domain Adaptation for Neural Machine Translation
1906.03129
Table 3: E-commerce English → Chinese BLEU results on test set. Baseline is trained on mixed in-domain and out-of-domain data. No. 2 is continuing training from baseline with objective defined as Eq. 1. No. 3 is continuing training from baseline with sentence-level weights and No. 4 is with word weights, as defined in Section 2.2. No. 5 refers to assigning wt using LCW method described in Section 2.3. No. 6 is equivalent to directly fine-tuning on in-domain datasets starting from the baseline model and No. 7 is equivalent to fine-tuning on in-domain datasets after No. 5 is finished.
['No.', 'System description', 'Item descriptions BLEU [%]', 'Item descriptions TER [%]']
[['1', 'Baseline', '24.37', '61.66'], ['2', '1 + continue training without word weights', '24.31', '61.69'], ['3', '1 + continue training with sentence weights', '25.79', '60.82'], ['4', '1 + continue training with word weights', '26.14', '60.34'], ['5', '1 + continue training with chunk weights', '26.42', '60.10'], ['6', '1 + fine-tuning on in-domain', '26.06', '59.93'], ['7', '5 + fine-tuning on in-domain', '27.30', '58.29']]
First, the baseline trained on mixed in-domain and out-of-domain data gives 24.37% BLEU and 61.66% TER, respectively. Directly fine-tuning on the in-domain dataset already improves the model due to its bias towards in-domain data.
Evaluating historical text normalization systems: How well do they generalize?
1804.02545
Table 2: Tokens normalized correctly (%) for each dataset. Upper half: results on (A)ll tokens reported by Pettersson et al. (2014) for a hybrid model (apply memorization baseline to seen tokens and an edit-distance-based model to unseen tokens) and two SMT models (which align character unigrams and bigrams, respectively). Lower half: results from our experiments, including accuracy reported separately on (S)een and (U)nseen tokens.
['[EMPTY]', 'English A', 'English S', 'English U', 'German A', 'German S', 'German U', 'Hungarian A', 'Hungarian S', 'Hungarian U', 'Icelandic A', 'Icelandic S', 'Icelandic U', 'Swedish A', 'Swedish S', 'Swedish U']
[['Hybrid', '92.9', '[EMPTY]', '[EMPTY]', '95.1', '[EMPTY]', '[EMPTY]', '76.4', '[EMPTY]', '[EMPTY]', '[BOLD] 84.6', '[EMPTY]', '[EMPTY]', '90.8', '[EMPTY]', '[EMPTY]'], ['GIZA++ un', '[BOLD] 94.3', '[EMPTY]', '[EMPTY]', '[BOLD] 96.6', '[EMPTY]', '[EMPTY]', '79.9', '[EMPTY]', '[EMPTY]', '71.8', '[EMPTY]', '[EMPTY]', '[BOLD] 92.9', '[EMPTY]', '[EMPTY]'], ['GIZA++ bi', '92.4', '[EMPTY]', '[EMPTY]', '95.5', '[EMPTY]', '[EMPTY]', '80.1', '[EMPTY]', '[EMPTY]', '71.5', '[EMPTY]', '[EMPTY]', '92.5', '[EMPTY]', '[EMPTY]'], ['Mem.\xa0baseline', '91.5', '[BOLD] 96.9', '30.5', '94.1', '96.9', '30.5', '73.6', '[BOLD] 96.0', '2.9', '80.3', '[BOLD] 86.8', '28.3', '85.4', '[BOLD] 98.1', '41.4'], ['Soft attention', '89.9', '93.7', '46.9', '94.3', '98.1', '72.4', '79.8', '89.4', '49.6', '83.1', '85.9', '60.1', '89.7', '97.2', '63.8'], ['Hard attention', '93.0', '96.6', '[BOLD] 52.4', '96.5', '[BOLD] 99.3', '[BOLD] 80.5', '[BOLD] 88.0', '95.3', '[BOLD] 65.0', '83.5', '86.2', '[BOLD] 61.4', '90.7', '97.9', '[BOLD] 65.7']]
The split into seen/unseen highlights the fact that neither of the neural models does as well on seen items as the baseline; indeed the soft attention model is considerably worse in English and Hungarian, the two largest datasets. The result is that this model actually underperforms the baseline when applied to all tokens, although a hybrid model (baseline for seen, soft attention for unseen) would outperform the baseline. Nevertheless, the hard attention model performs best on unseen tokens in all cases, often by a wide margin, and also yields competitive overall performance.
A Simple Approach to Case-Based Reasoning in Knowledge Bases
2006.14198
Table 3: Link prediction results on WN18RR dataset.
['[BOLD] Metric', '[BOLD] TransE', '[BOLD] DistMult', '[BOLD] ComplEx', '[BOLD] ConvE', '[BOLD] RotatE', '[BOLD] GNTP', '[BOLD] CBR']
[['hits@1', '-', '0.39', '0.41', '0.40', '[BOLD] 0.43', '0.41', '0.38'], ['hits@3', '-', '0.44', '0.46', '0.44', '[BOLD] 0.49', '0.44', '0.46'], ['hits@10', '0.50', '0.49', '0.51', '0.52', '[BOLD] 0.57', '0.48', '0.51'], ['MRR', '0.23', '0.43', '0.44', '0.43', '[BOLD] 0.48', '0.43', '0.43']]
WN18RR: CBR performs competitively with GNTPs and most embedding-based methods, with the exception of RotatE (sun2019rotate). Upon further analysis, we find that for 210 triples in the test set the query entity was not present in the graph, and hence no answers were returned for those queries.
A Simple Approach to Case-Based Reasoning in Knowledge Bases
2006.14198
Table 1: Link prediction results on the FB122 dataset.
['[EMPTY]', '[BOLD] Model', 'hits@3', 'hits@5', 'hits@10', 'MRR']
[['With Rules', 'KALE-Pre guo2016jointly', '0.358', '0.419', '0.498', '0.291'], ['With Rules', 'KALE-Joint guo2016jointly', '0.384', '[BOLD] 0.447', '[BOLD] 0.522', '0.325'], ['With Rules', '[ITALIC] ASR-DistMult minervini2017adversarial', '0.363', '0.403', '0.449', '0.330'], ['With Rules', '[ITALIC] ASR-ComplEx minervini2017adversarial', '0.373', '0.410', '0.459', '0.338'], ['Without Rules', 'TransE bordes2013translating', '0.360', '0.415', '0.481', '0.296'], ['Without Rules', 'DistMult yang2017differentiable', '0.360', '0.403', '0.453', '0.313'], ['Without Rules', 'ComplEx trouillon2016complex', '0.370', '0.413', '0.462', '0.329'], ['Without Rules', 'GNTPs minervini2019differentiable', '0.337', '0.369', '0.412', '0.313'], ['Without Rules', 'CBR (Ours)', '[BOLD] 0.400', '[BOLD] 0.445', '[BOLD] 0.488', '[BOLD] 0.359']]
FB122: Next we consider the FB122 dataset of guo2016jointly. Comparing results on FB122 is attractive for two reasons: (a) first, this dataset comes with a set of logical rules hand-coded by the authors that can be used for logical inference, so it is interesting to see whether our CBR approach can automatically uncover these rules from the data; (b) second, there is recent work on a neural model for logical inference (GNTPs) minervini2019differentiable that scales neural theorem provers (ntp) to this dataset, so we can compare with it directly. We compare with several baselines. CBR significantly outperforms GNTPs and even outperforms most models that have access to the hand-coded rules during training. We also find that CBR is able to uncover correct rules for 27 out of 31 (87%) query relations.
A Simple Approach to Case-Based Reasoning in Knowledge Bases
2006.14198
Table 4: Link prediction results on NELL-995 for few shot relations.
['[BOLD] Model', 'hits@1', 'hits@10', 'MRR']
[['NeuralLP yang2017differentiable', '0.048', '0.351', '0.179'], ['NTP- [ITALIC] λ ntp', '0.102', '0.334', '0.155'], ['MINERVA das2018go', '0.162', '0.283', '0.201'], ['MultiHop(DistMult) LinRX2018:MultiHopKG', '0.145', '0.306', '0.200'], ['MultiHop(ConvE) LinRX2018:MultiHopKG', '0.178', '0.329', '0.231'], ['Meta-KGR(DistMult) lv2019adapting', '0.197', '0.345', '0.248'], ['Meta-KGR(ConvE) lv2019adapting', '0.197', '0.347', '0.253'], ['CBR (ours)', '[BOLD] 0.234', '[BOLD] 0.403', '[BOLD] 0.293']]
As mentioned before, our CBR-based approach needs no training and gathers reasoning patterns from a few similar entities. Therefore, it should ideally perform well for query relations for which we do not have much data. Recently, lv2019adapting studied this problem and proposed a meta-learning (finn2017model) based solution (Meta-KGR) for few-shot relations. We compare with their model to see whether the CBR-based approach generalizes to such few-shot relations.
Slim Embedding Layers for Recurrent Neural Language Models
1711.09873
Table 4: Time Usage Comparison
['Model', 'CPU(seconds)', 'GPU (milliseconds)']
[['Uncompressed', '2.7', '38'], ['HashNet', '80.6', '-'], ['SE', '0.7', '25']]
We report the time used on both CPU and GPU. All the computations use 32 bit floating point numbers. On CPU, HashNet is slower than the normal uncompressed model, mainly because of two reasons: 1) The uncompressed model uses optimized matrix multiplication subroutines, 2) The hash function used in HashNet is cheap, but it still has overhead compared with the uncompressed model. The SE model runs faster mainly because it uses matrix multiplication subroutines and has lower time complexity with the help of dynamic programming.
Slim Embedding Layers for Recurrent Neural Language Models
1711.09873
Table 2: Perplexity results for single models on BillionW. Bold number denotes results on a single GPU.
['Model', 'Perplexity', '#P[Billions]']
[['Interpolated Kneser-Ney 5-gram ', '67.6', '1.76'], ['4-layer IRNN-512 ', '69.4', '[EMPTY]'], ['RNN-2048 + BlackOut sampling ', '[BOLD] 68.3', '[EMPTY]'], ['RNN-1024 + MaxEnt 9-gram ', '51.3', '20'], ['LSTM-2048-512 ', '43.7', '0.83'], ['LightRNN ', '[BOLD] 66.0', '0.041'], ['LSTM-2048-512 ', '43.7', '0.83'], ['2-layer LSTM-8192-1024 ', '30.6', '1.8'], ['2-layer LSTM-8192-1024 + CNN inputs ', '30.0', '1.04'], ['2-layer LSTM-8192-1024 + CNN inputs + CNN softmax ', '39.8', '0.29'], ['LSTM-2048 Adaptive Softmax ', '[BOLD] 43.9', '>0.29'], ['2-layer LSTM-2048 Adaptive Softmax ', '[BOLD] 39.8', '[EMPTY]'], ['GCNN-13 ', '[BOLD] 38.1', '[EMPTY]'], ['MOE ', '28.0', '>4.37'], ['SE (2-layer 2048 LSTM NCE)', '[BOLD] 39.9', '0.32'], ['SE (3-layer 2048 LSTM NCE)', '[BOLD] 39.5', '0.25'], ['SE (3-layer 2048 LSTM IS )', '[BOLD] 38.3', '0.25']]
In the one billion word experiments, the total memory used on the GPU during training is about 7GB, and is smaller if a larger compression rate is used. We use a fixed smoothed unigram distribution (the unigram distribution raised to the power 0.75) as the noise distribution. For the two-layer model, the compression rate for the input layer is 1/32 and for the output layer 1/8, and the total number of parameters is 322 million. For the three-layer model, the compression rates for the input and output layer are 1/32 and 1/16, and the total number of parameters is 254 million. Both experiments using NCE take about seven days of training on a GTX 1080 GPU. Jozefowicz et al. (2016) suggest that importance sampling (IS) could perform better than NCE, so we ran the experiment using IS with 4000 noise samples for each mini-batch. The PPL decreased to 38.3 after training for 8 days. As far as we know, the 3-layer model is the most compact recurrent neural language model that has a perplexity below 40 on this dataset.
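The smoothed unigram noise distribution mentioned above is easy to make concrete; the short sketch below is generic (not the authors' code), and the vocabulary counts are made up for illustration.

import numpy as np

def smoothed_unigram_noise(unigram_counts, power=0.75):
    # NCE noise distribution: unigram counts raised to the power 0.75, renormalized.
    p = np.asarray(unigram_counts, dtype=float) ** power
    return p / p.sum()

# Toy vocabulary of three word types with counts 100, 10 and 1.
print(smoothed_unigram_noise([100, 10, 1]))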
Neural Baby Talk
1803.09845
Table 2: Performance on the test portion of Karpathy et al. [20]’s splits on COCO dataset. ∗ directly optimizes the CIDEr Metric, † uses better image features, and are thus not directly comparable.
['Method', 'BLEU1', 'BLEU4', 'METEOR', 'CIDEr', 'SPICE']
[['Adaptive ', '74.2', '32.5', '26.6', '[BOLD] 108.5', '19.5'], ['Att2in ', '-', '31.3', '26.0', '101.3', '-'], ['Up-Down ', '74.5', '33.4', '26.1', '105.4', '19.2'], ['Att2in∗ ', '-', '33.3', '26.3', '111.4', '-'], ['Up-Down† ', '79.8', '36.3', '27.7', '120.1', '21.4'], ['NBT', '[BOLD] 75.5', '[BOLD] 34.7', '[BOLD] 27.1', '107.2', '[BOLD] 20.1'], ['NBToracle', '75.9', '34.9', '27.4', '108.9', '20.4']]
Our method outperforms the state of the art on 4 out of 5 automatic evaluation metrics. Interestingly, NBToracle shows little improvement over NBT. We suspect the reason is that explicit ground-truth annotation is absent for visual words. Our model could be further improved with explicit co-reference supervision, where the ground-truth location annotation of the visual word is provided. Qualitatively, we see that our model learns to correctly identify the visual word and ground it in image regions even under weak supervision (COCO). Our model is also robust to erroneous detections and produces correct captions (3rd column).
Neural Baby Talk
1803.09845
Table 1: Performance on the test portion of Karpathy et al. [20]’s splits on Flickr30k Entities dataset.
['Method', 'BLEU1', 'BLEU4', 'METEOR', 'CIDEr', 'SPICE']
[['Hard-Attention ', '66.9', '19.9', '18.5', '-', '-'], ['ATT-FCN ', '64.7', '23.0', '18.9', '-', '-'], ['Adaptive ', '67.7', '25.1', '20.4', '53.1', '14.5'], ['NBT', '[BOLD] 69.0', '[BOLD] 27.1', '[BOLD] 21.7', '[BOLD] 57.5', '[BOLD] 15.6'], ['NBToracle', '72.0', '28.5', '23.1', '64.8', '19.6']]
When using ground truth proposals, NBToracle significantly outperforms previous methods, improving 5.1 on SPICE, which implies that our method could further benefit from improved object detectors.
Neural Baby Talk
1803.09845
Table 3: Performance on the test portion of the robust image captioning split on COCO dataset.
['Method', 'BLEU4', 'METEOR', 'CIDEr', 'SPICE', 'Accuracy']
[['Att2in ', '31.5', '24.6', '90.6', '17.7', '39.0'], ['Up-Down ', '31.6', '25.0', '92.0', '18.1', '39.7'], ['NBT', '[BOLD] 31.7', '[BOLD] 25.2', '[BOLD] 94.1', '[BOLD] 18.3', '[BOLD] 42.4'], ['NBToracle', '31.9', '25.5', '95.5', '18.7', '45.7']]
Results and analysis. As we can see, all models perform worse on the robust-COCO split than on the Karpathy split, by 2∼3 points in general. The oracle setting (NBToracle) shows consistent improvements on all metrics, improving by 3.3 on the proposed metric.
Neural Baby Talk
1803.09845
Table 4: Evaluation of captions generated using the proposed method. G means greedy decoding, and T1−2 means using constrained beam search [2] with 1−2 top detected concepts. ∗ is the result using VGG-16 [41] and † is the result using ResNet-101.
['Method', 'Out-of-Domain Test Data bottle', 'Out-of-Domain Test Data bus', 'Out-of-Domain Test Data couch', 'Out-of-Domain Test Data microwave', 'Out-of-Domain Test Data pizza', 'Out-of-Domain Test Data racket', 'Out-of-Domain Test Data suitcase', 'Out-of-Domain Test Data zebra', 'Out-of-Domain Test Data Avg', 'Out-of-Domain Test Data SPICE', 'Out-of-Domain Test Data METEOR', 'Out-of-Domain Test Data CIDEr', 'In-Domain Test Data SPICE', 'In-Domain Test Data METEOR', 'In-Domain Test Data CIDER']
[['DCC ', '4.6', '29.8', '45.9', '28.1', '64.6', '52.2', '13.2', '79.9', '39.8', '13.4', '21.0', '59.1', '15.9', '23.0', '77.2'], ['NOC ', '17.8', '68.8', '25.6', '24.7', '69.3', '68.1', '39.9', '89.0', '49.1', '-', '21.4', '-', '-', '-', '-'], ['C-LSTM ', '29.7', '74.4', '38.8', '27.8', '68.2', '70.3', '44.8', '91.4', '55.7', '-', '23.0', '-', '-', '-', '-'], ['Base+T4 ', '16.3', '67.8', '48.2', '29.7', '77.2', '57.1', '49.9', '85.7', '54.0', '15.9', '23.3', '77.9', '18.0', '24.5', '86.3'], ['NBT∗+G', '7.1', '73.7', '34.4', '61.9', '59.9', '20.2', '42.3', '88.5', '48.5', '15.7', '22.8', '77.0', '17.5', '24.3', '87.4'], ['NBT†+G', '14.0', '74.8', '42.8', '63.7', '74.4', '19.0', '44.5', '92.0', '53.2', '16.6', '23.9', '84.0', '[BOLD] 18.4', '25.3', '94.0'], ['NBT†+T1', '36.2', '77.7', '43.9', '65.8', '70.3', '19.8', '51.2', '93.7', '57.3', '16.7', '23.9', '85.7', '[BOLD] 18.4', '[BOLD] 25.5', '[BOLD] 95.2'], ['NBT†+T2', '[BOLD] 38.3', '[BOLD] 80.0', '[BOLD] 54.0', '[BOLD] 70.3', '[BOLD] 81.1', '[BOLD] 74.8', '[BOLD] 67.8', '[BOLD] 96.6', '[BOLD] 70.3', '[BOLD] 17.4', '[BOLD] 24.1', '[BOLD] 86.0', '18.0', '25.0', '92.1']]
Specifically, NBT†+T2 outperforms the previous state-of-the-art model C-LSTM by 14.6% on average F1 score. From the per-category F1 scores, we can see that our model is less likely to select small objects, e.g. “bottle” and “racket”, when using only greedy decoding; constrained beam search with the top detected concepts recovers these cases.
FusionNet: Fusing via Fully-Aware Attention with Application to Machine Comprehension
1711.07341
Table 10: The performance (accuracy) of ESIM with our proposed attention enhancement on MultiNLI (Williams et al., 2017) development set. (d is the output hidden size of BiLSTM)
['[EMPTY]', '[BOLD] Cross-Domain', '[BOLD] In-Domain']
[['Our ESIM without CoVe ( [ITALIC] d=300)', '73.4', '73.3'], ['Our ESIM without CoVe + fully-aware ( [ITALIC] d=250)', '76.9', '76.2'], ['Our ESIM without CoVe + fully-aware + multi-level ( [ITALIC] d=250)', '78.2', '77.9'], ['Our ESIM ( [ITALIC] d=300)', '73.9', '73.7'], ['Our ESIM + fully-aware ( [ITALIC] d=250)', '77.3', '76.5'], ['Our ESIM + fully-aware + multi-level ( [ITALIC] d=250)', '[BOLD] 78.4', '[BOLD] 78.2']]
Augmenting with fully-aware attention yields the biggest improvement, which demonstrates the usefulness of this simple enhancement. Further improvement is obtained when we use multi-level fusion in our ESIM. Experiments with and without CoVe embedding show similar observations.
Morphological Inflection Generationwith Hard Monotonic Attention
1611.01487
Table 2: Results on the Wiktionary datasets
['[EMPTY]', 'DE-N', 'DE-V', 'ES-V', 'FI-NA', 'FI-V', 'FR-V', 'NL-V', 'Avg.']
[['durrettdenero2013', '88.31', '94.76', '99.61', '92.14', '97.23', '98.80', '90.50', '94.47'], ['nicolai-cherry-kondrak:2015:NAACL-HLT', '88.6', '97.50', '99.80', '93.00', '[BOLD] 98.10', '[BOLD] 99.20', '96.10', '96.04'], ['faruquiTND15', '88.12', '[BOLD] 97.72', '[BOLD] 99.81', '95.44', '97.81', '98.82', '96.71', '96.34'], ['yu2016online', '87.5', '92.11', '99.52', '95.48', '[BOLD] 98.10', '98.65', '95.90', '95.32'], ['Soft', '88.18', '95.62', '99.73', '93.16', '97.74', '98.79', '96.73', '95.7'], ['Hard', '[BOLD] 88.87', '97.35', '99.79', '[BOLD] 95.75', '98.07', '99.04', '[BOLD] 97.03', '[BOLD] 96.55']]
This shows the robustness of our model also with large amounts of training examples, and the advantage that the hard attention mechanism provides over the encoder-decoder approach of faruquiTND15, which does not employ an attention mechanism. Our model is also significantly more accurate than the model of yu2016online, which shows the advantage of using independently learned alignments to guide the network’s attention from the beginning of the training process. While our soft-attention implementation outperformed the models of yu2016online and durrettdenero2013, it still fell short of the hard attention model.
Morphological Inflection Generationwith Hard Monotonic Attention
1611.01487
Table 1: Results on the CELEX dataset
['[EMPTY]', '13SIA', '2PIE', '2PKE', 'rP', 'Avg.']
[['med Kann and Schütze ( 2016a )', '83.9', '95', '87.6', '84', '87.62'], ['nwfst Rastogi et\xa0al. ( 2016 )', '86.8', '94.8', '87.9', '81.1', '87.65'], ['lat Dreyer et\xa0al. ( 2008 )', '[BOLD] 87.5', '93.4', '87.4', '84.9', '88.3'], ['Soft', '83.1', '93.8', '88', '83.2', '87'], ['Hard', '85.8', '[BOLD] 95.1', '[BOLD] 89.5', '[BOLD] 87.2', '[BOLD] 89.44']]
In addition, it significantly outperforms our implementation of the soft attention model (Soft). It is also, to our knowledge, the first model to surpass the latent variable model in overall accuracy on this dataset. We attribute our advantage over the soft attention models to the ability of the hard attention control mechanism to harness the monotonic alignments found in the data. The advantage over the FST models may be explained by our conditioning on the entire output history, which is not available in those models. While both models perform similarly on the train set (with the soft attention model fitting it slightly faster), the hard attention model performs significantly better on the dev set. This shows the soft attention model’s tendency to overfit on the small dataset, as it does not enforce the monotonic assumption of the hard attention model. Our work was mainly inspired by faruquiTND15, which trained an independent encoder-decoder neural network for every inflection type in the training data, alleviating the need for feature engineering. kann2016 and kann2016medtl tackled the task with a single soft attention model (Bahdanau et al.). In another closely related work, Rastogi et al. (2016) model the task with a WFST in which the arc weights are learned by optimizing a global loss function over all possible paths in the state graph, while modeling contextual features with bi-directional LSTMs. This is similar to our approach, except that instead of learning to mimic a single greedy alignment as we do, they sum over all possible alignments.
Morphological Inflection Generationwith Hard Monotonic Attention
1611.01487
Table 3: Results on the SIGMORPHON 2016 morphological inflection dataset. The text above each language lists the morphological phenomena it includes: circ.=circumfixing, agg.=agglutinative, v.h.=vowel harmony, c.h.=consonant harmony
['[EMPTY]', 'suffixing+stem changes RU', 'suffixing+stem changes DE', 'suffixing+stem changes ES', 'circ. GE', 'suffixing+agg.+v.h. FI', 'suffixing+agg.+v.h. TU', 'suffixing+agg.+v.h. HU', 'c.h.', 'templatic AR', 'templatic MA', 'Avg.']
[['med', '91.46', '95.8', '98.84', '98.5', '95.47', '98.93', '96.8', '91.48', '[BOLD] 99.3', '[BOLD] 88.99', '95.56'], ['Soft', '92.18', '96.51', '98.88', '[BOLD] 98.88', '[BOLD] 96.99', '[BOLD] 99.37', '[BOLD] 97.01', '[BOLD] 95.41', '[BOLD] 99.3', '88.86', '[BOLD] 96.34'], ['Hard', '[BOLD] 92.21', '[BOLD] 96.58', '[BOLD] 98.92', '98.12', '95.91', '97.99', '96.25', '93.01', '98.77', '88.32', '95.61']]
As different languages exhibit different morphological phenomena, we also examine how our model copes with these phenomena using the morphological inflection dataset from the SIGMORPHON 2016 shared task (Cotterell et al.). We compare our model to two soft-attention baselines on this dataset: med (Kann and Schütze) and our own soft-attention implementation. On this dataset our model performs better than both soft-attention baselines for the suffixing+stem-change languages (Russian, German and Spanish) and is slightly less accurate than our implementation of the soft attention model on the rest of the languages; the latter is now, to our knowledge, the best performing model on this dataset. We explain this by looking at the languages from a linguistic typology point of view, as detailed in cotterell-sigmorphon2016. Since Russian, German and Spanish employ a suffixing morphology with internal stem changes, they are more suitable for monotonic alignment, as the transformations that need to be modeled are the addition of suffixes and character changes in the stem. The rest of the languages in the dataset exhibit more context-sensitive morphological phenomena such as vowel harmony and consonant harmony, which require modeling long-range dependencies in the input sequence and thus better suit the soft attention mechanism. While our implementation of the soft attention model and med are very similar model-wise, we hypothesize that our soft attention results are better because we trained the model for 100 epochs and picked the best performing model on the development set, while the med system was trained for a fixed 20 epochs (although on more data: both the train and development sets).
Bilateral Multi-Perspective Matching for Natural Language Sentences
1702.03814
Table 1: Ablation studies on the dev set.
['Models', 'Accuracy']
[['Only [ITALIC] P→ [ITALIC] Q', '87.74'], ['Only [ITALIC] P← [ITALIC] Q', '87.47'], ['w/o Full-Matching', '87.86'], ['w/o Maxpooling-Matching', '87.64'], ['w/o Attentive-Matching', '87.87'], ['w/o MaxAttentive-Matching', '87.98'], ['Full Model', '[BOLD] 88.69']]
Second, to check the effectiveness of bilateral matching, we build two ablation models that match sentences in only a single direction: 1) “Only P→Q”, which only matches P against Q; and 2) “Only P←Q”, which only matches Q against P. Comparing the two ablation models with the “Full Model”, we observe that single-direction matching hurts performance by about 1 percent. Therefore, matching sentences in both directions is necessary for acquiring better performance. Third, we evaluate the effectiveness of different matching strategies. To this end, we construct four ablation models (w/o Full-Matching, w/o Maxpooling-Matching, w/o Attentive-Matching, w/o Max-Attentive-Matching) by eliminating one matching strategy at a time. We can see that eliminating any of the matching strategies hurts the performance significantly.
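All four matching strategies are built on the same multi-perspective cosine matching operation; the sketch below (PyTorch, with illustrative names and shapes, not the authors' code) shows that core operation, where each row of a trainable weight matrix defines one perspective.

import torch
import torch.nn.functional as F

def multi_perspective_match(v1, v2, W):
    # v1, v2: d-dimensional vectors to be matched (e.g. a time step of P and a
    # vector derived from Q). W: trainable matrix of shape (l, d), one row per
    # perspective. Returns an l-dimensional matching vector with
    # m_k = cosine(W_k * v1, W_k * v2).
    p1 = W * v1                                   # (l, d) element-wise re-weighting
    p2 = W * v2
    return F.cosine_similarity(p1, p2, dim=1)     # (l,)

d, l = 8, 4
W = torch.randn(l, d, requires_grad=True)
m = multi_perspective_match(torch.randn(d), torch.randn(d), W)
print(m.shape)   # torch.Size([4])

Roughly, the four strategies then differ only in how the comparison vector from the other sentence is formed (its last time step, a maximum over per-time-step matches, an attention-weighted mean, or the single most similar time step).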
Bilateral Multi-Perspective Matching for Natural Language Sentences
1702.03814
Table 2: Performance for paraphrase identification on the Quora dataset.
['Models', 'Accuracy']
[['Siamese-CNN', '79.60'], ['Multi-Perspective-CNN', '81.38'], ['Siamese-LSTM', '82.58'], ['Multi-Perspective-LSTM', '83.21'], ['L.D.C.', '85.55'], ['BiMPM', '[BOLD] 88.17']]
Our “BiMPM” model outperforms the “L.D.C.” model by more than two percent. Therefore, our model is very effective for the paraphrase identification task.
Bilateral Multi-Perspective Matching for Natural Language Sentences
1702.03814
Table 3: Performance for natural language inference on the SNLI dataset.
['Models', 'Accuracy']
[['', '77.6'], ['', '81.4'], ['', '82.1'], ['', '83.5'], ['', '85.0'], ['', '85.1'], ['', '86.1'], ['', '86.3'], ['', '86.8'], ['', '87.3'], ['', '87.5'], [' (Single)', '87.7'], [' (Ensemble)', '88.3'], ['Only [ITALIC] P→ [ITALIC] Q', '85.6'], ['Only [ITALIC] P← [ITALIC] Q', '86.3'], ['BiMPM', '86.9'], ['BiMPM (Ensemble)', '[BOLD] 88.8']]
First, we can see that “Only P←Q” works significantly better than “Only P→Q”, which tells us that, for natural language inference, matching the hypothesis against the premise is more effective than the other way around. Second, our “BiMPM” model works much better than “Only P←Q”, which reveals that matching the premise against the hypothesis also brings some benefit. Third, our “BiMPM (Ensemble)” model reaches 88.8% accuracy, the best result in the table. Therefore, our models achieve state-of-the-art performance in both single and ensemble scenarios for the natural language inference task.
Bilateral Multi-Perspective Matching for Natural Language Sentences
1702.03814
Table 4: Performance for answer sentence selection on TREC-QA and WikiQA datasets.
['Models', 'TREC-QA MAP', 'TREC-QA MRR', 'WikiQA MAP', 'WikiQA MRR']
[['', '0.695', '0.763', '0.652', '0.665'], ['', '0.728', '0.832', '–', '–'], ['Wang and Itty. wang2015faq', '0.746', '0.820', '–', '–'], ['', '0.753', '0.851', '0.689', '0.696'], ['', '–', '–', '0.692', '0.711'], ['', '–', '–', '0.689', '0.707'], ['', '0.771', '0.845', '0.706', '0.723'], ['', '0.777', '0.836', '0.709', '0.723'], ['', '0.801', '[BOLD] 0.877', '0.701', '0.718'], ['', '–', '–', '0.734', '0.742'], ['', '–', '–', '[BOLD] 0.743', '[BOLD] 0.755'], ['BiMPM', '[BOLD] 0.802', '0.875', '0.718', '0.731']]
In this subsection, we study the effectiveness of our model for answer sentence selection tasks. The answer sentence selection task is to rank a list of candidate answer sentences based on their similarities to the question, and performance is measured by mean average precision (MAP) and mean reciprocal rank (MRR). We can see that the performance of our model is on par with the state-of-the-art models. Therefore, our model is also effective for answer sentence selection tasks.
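For reference, MAP and MRR over ranked candidate lists can be computed as in the short generic sketch below (not tied to any system in the table); each question contributes one list of 0/1 relevance flags ordered by model score.

def average_precision(relevance):
    # relevance: 0/1 flags for a ranked candidate list, best-scored first.
    hits, score = 0, 0.0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            score += hits / rank
    return score / max(hits, 1)

def map_and_mrr(relevance_lists):
    aps, rrs = [], []
    for relevance in relevance_lists:
        aps.append(average_precision(relevance))
        first_hit = next((r for r, rel in enumerate(relevance, 1) if rel), None)
        rrs.append(1.0 / first_hit if first_hit else 0.0)
    return sum(aps) / len(aps), sum(rrs) / len(rrs)

# Two toy questions; 1 marks a correct answer sentence.
print(map_and_mrr([[0, 1, 0, 1], [1, 0, 0]]))   # (0.75, 0.75)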
TextNAS: A Neural Architecture Search Space tailored for Text Representation
1912.10729
Table 4: Test accuracy on the text classification datasets. For each dataset, we conduct significance test against the best reproducible model, and * means that the improvement is significant at 0.05 significance level.
['Model', 'AG', 'Sogou', 'DBP', 'Yelp-B', 'Yelp', 'Yahoo', 'Amz', 'Amz-B']
[['Zhang ET AL., 2015', '92.36', '[BOLD] 97.19', '98.69', '95.64', '62.05', '71.20', '59.57', '95.07'], ['Joulin ET AL., 2016', '[BOLD] 92.50', '96.80', '98.60', '95.70', '63.90', '72.30', '60.20', '94.60'], ['Conneau ET AL., 2016', '91.33', '96.82', '98.71', '[BOLD] 95.72', '[BOLD] 64.72', '[BOLD] 73.43', '[BOLD] 63.00', '[BOLD] 95.72'], ['24-Layers Transformer', '92.17', '94.65', '[BOLD] 98.77', '94.07', '61.22', '72.67', '62.65', '95.59'], ['ENAS-macro', '92.39', '96.79', '[BOLD] 99.01', '[BOLD] 96.07', '64.60', '73.16', '62.64', '[BOLD] 95.80'], ['ENAS-micro', '92.27', '[BOLD] 97.24', '99.00', '96.01', '64.72', '70.63', '58.27', '94.89'], ['DARTS', '92.24', '97.18', '98.90', '95.84', '65.12', '73.12', '62.06', '95.48'], ['SMASH', '90.88', '96.72', '98.86', '95.62', '[BOLD] 65.26', '[BOLD] 73.63', '[BOLD] 62.72', '95.58'], ['One-Shot', '92.06', '96.92', '98.89', '95.78', '64.78', '73.20', '61.30', '95.20'], ['Random Search', '[BOLD] 92.54', '97.13', '98.98', '96.00', '65.23', '72.47', '60.91', '94.87'], ['textnas', '[BOLD] 93.14', '[BOLD] 96.76', '[BOLD] 99.01', '[BOLD] 96.41∗', '[BOLD] 66.56∗', '[BOLD] 73.97∗', '[BOLD] 63.14∗', '[BOLD] 95.94∗']]
The results demonstrate that the TextNAS model outperforms state-of-the-art methods on all text classification datasets except Sogou. One potential reason is that Sogou is a Chinese-language dataset, while the GloVe embedding vectors are trained on an English corpus. One could improve the performance by adding Chinese-language embeddings or char-embeddings, but we do not add them to keep the solution neat. In addition, we pay specific attention to the comparison of TextNAS with the 29-layer CNN (Conneau et al., 2016) and the 24-layer Transformer (Vaswani et al., 2017). As shown in the table, the TextNAS network improves over both baselines by a large margin, indicating the advantage of mixing different layer types.
TextNAS: A Neural Architecture Search Space tailored for Text Representation
1912.10729
Table 3: Results on SST dataset. For each dataset, we conduct significance test against the best reproducible model, and * means that the improvement is significant at 0.05 significance level.
['Model', 'SST', 'SST-B']
[['Lai ET AL., 2015', '47.21', '-'], ['Zhou ET AL., 2015', '49.20', '87.80'], ['Liu ET AL., 2016', '49.60', '87.90'], ['Tai ET AL., 2016', '51.00', '88.00'], ['Kumar ET AL., 2016', '[BOLD] 52.10', '[BOLD] 88.60'], ['24-layers Transformer', '49.37', '86.66'], ['ENAS-macro', '51.55', '[BOLD] 88.90'], ['ENAS-micro', '47.00', '87.52'], ['DARTS', '[BOLD] 51.65', '87.12'], ['SMASH', '46.65', '85.94'], ['One-Shot', '50.37', '87.08'], ['Random Search', '49.20', '87.15'], ['TextNAS', '[BOLD] 52.51', '[BOLD] 90.33∗']]
We can see that the neural architecture discovered by TextNAS achieves competitive performance compared with state-of-the-art manual architectures, including the 24-layer Transformer adopted by BERT. At the same time, it outperforms the network architectures discovered automatically by other search spaces and algorithms. Specifically, the accuracy is improved by 11.7% over ENAS-micro and 1.9% over ENAS-macro on the SST dataset, which shows the superiority of our novel search space for text representation. It should be noted that other publications have reported higher accuracies; however, they are not directly comparable to our scenario since they incorporate various kinds of external knowledge.
TextNAS: A Neural Architecture Search Space tailored for Text Representation
1912.10729
Table 6: Detailed settings for experiments of text classification.
['Exp', 'batch size', 'max length', '[ITALIC] l2', 'lr', 'sliding window', 'hidden size']
[['AG', '128', '256', '1×10−6', '0.02', 'no', '256'], ['Sogou', '64', '1024', '1×10−6', '0.02', 'yes', '32'], ['DBP', '128', '256', '1×10−6', '0.02', 'no', '64'], ['Yelp-B', '128', '512', '1×10−6', '0.02', 'no', '64'], ['Yelp', '128', '512', '1×10−6', '0.02', 'no', '64'], ['Yahoo', '64', '1024', '1×10−6', '0.02', 'yes', '32'], ['Amz', '128', '256', '1×10−6', '0.02', 'yes', '128'], ['Amz-B', '128', '256', '1×10−6', '0.02', 'yes', '128']]
In all the experiments, we apply dropout (ratio=0.5) to the embedding layers, the final output layers and the self-attention layers. In addition, in the bidirectional GRU layers, we apply dropout (ratio=0.5) to the input and output tensors. Besides, for several time-consuming experiments, we employ a sliding-window trick to accelerate the training procedure: a sliding window segments the long input sentence into several sub-sentences, where window_size and stride are pre-defined hyper-parameters. The sub-sentences are fed separately to the neural network to obtain a fixed-length vector representation for each sub-sentence, and a max-pooling operator is then applied on top to compute the vector representation for the entire sentence. In all experiments using the sliding window, we set window_size to 64 and stride to 32.
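A minimal PyTorch sketch of this sliding-window procedure is given below; encoder stands in for the searched network and is an assumed callable mapping a (1, window) tensor of token ids to a (1, hidden) vector.

import torch

def sliding_window_encode(token_ids, encoder, window_size=64, stride=32):
    # Split a long token-id sequence into overlapping sub-sentences, encode each
    # independently, and max-pool the resulting vectors into one representation.
    windows = [token_ids[start:start + window_size]
               for start in range(0, max(len(token_ids) - window_size, 0) + 1, stride)]
    reps = [encoder(torch.tensor([w])) for w in windows]   # each (1, hidden)
    return torch.cat(reps, dim=0).max(dim=0).values        # (hidden,)

# Quick smoke test with a dummy encoder that mean-pools random embeddings.
emb = torch.nn.Embedding(1000, 16)
dummy_encoder = lambda x: emb(x).mean(dim=1)
print(sliding_window_encode(list(range(150)), dummy_encoder).shape)   # torch.Size([16])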
TextNAS: A Neural Architecture Search Space tailored for Text Representation
1912.10729
Table 7: Detailed settings for experiments of natural language inference.
['Exp', 'lr', 'training epoch', '[ITALIC] l2', 'dropout rate', 'penalization']
[['SNLI', '2×10−4', '8', '2×10−2', '0.2', '0'], ['MNLI', '1×10−4', '20', '1×10−2', '0.2', '0']]
In the NLI experiments, we evaluate the result model of TextNAS by training it from scratch. We set the dimension of the hidden units to 512 for all layers in the sentence encoder and to 2400 for the three fully-connected layers before the softmax output. All 24 layers in the sentence encoder are linearly combined to produce the final sentence embedding vector. All convolutions follow an ordering of ReLU, convolution operation and batch normalization. We also employ layer normalization after the outputs of the bidirectional GRU and self-attention layers. Dropout is adopted on the output of each word-embedding, GRU and fully-connected layer. We set the batch size to 32 and the maximum input length to 128. The model is trained with the Adam optimizer, using cosine decay of the learning rate with warm-up over the first epoch. The hyper-parameter values considered were: learning rate: 1×10−4, 2×10−4, 3×10−4; training epochs: 8, 12, 16, 20, 30; l2 regularization: 1×10−2, 2×10−2, 5×10−2, 1×10−1, 2×10−1, 5×10−1; dropout ratio: 0.1, 0.2, 0.3; penalization: 0, 1×10−1, 1×10−2, 1×10−3.
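The exact schedule is not specified beyond cosine decay with warm-up over the first epoch; one plausible PyTorch realization (an assumption, not the authors' code) is:

import math
import torch

def make_optimizer(model, base_lr=1e-4, steps_per_epoch=1000, total_epochs=8):
    # Adam with linear warm-up over the first epoch, then cosine decay to zero.
    optimizer = torch.optim.Adam(model.parameters(), lr=base_lr)
    warmup, total = steps_per_epoch, steps_per_epoch * total_epochs

    def lr_lambda(step):
        if step < warmup:
            return step / max(1, warmup)                   # linear warm-up
        progress = (step - warmup) / max(1, total - warmup)
        return 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler   # call scheduler.step() after every optimizer.step()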
Polyglot Semantic Role Labeling
1805.11598
Table 3: Per-label breakdown of F1 scores for Catalan and Spanish. These numbers reflect labels for each argument; the combination is different from the overall semantic F1, which includes predicate sense disambiguation.
['[EMPTY]', '[BOLD] arg0', '[BOLD] arg1', '[BOLD] arg2', '[BOLD] arg3', '[BOLD] arg4', '[BOLD] arg [ITALIC] L', '[BOLD] arg [ITALIC] M']
[['Gold label count (cat)', '2117', '4296', '1713', '61', '71', '49', '2968'], ['Monolingual cat\xa0 [ITALIC] F1', '82.06', '79.06', '68.95', '28.89', '42.42', '39.51', '60.85'], ['+ eng\xa0improvement', '+2.75', '+2.58', '+4.53', '+18.17', '+9.81', '+1.35', '+1.10'], ['Gold label count (spa)', '2438', '4295', '1677', '49', '82', '46', '3237'], ['Monolingual spa\xa0 [ITALIC] F1', '82.44', '77.93', '70.24', '28.89', '41.15', '22.50', '58.89'], ['+ eng\xa0improvement', '+0.37', '+0.43', '+1.35', '-3.40', '-3.48', '+4.01', '+1.26']]
Label-wise results. In both languages, we find a small but consistent improvement in the most common label categories (e.g., arg1 and argM). Less common label categories are sensitive to small changes in performance; they have the largest changes in F1 in absolute value, but without a consistent direction. This could be attributed to the addition of English data, which improves learning of representations that are useful for the most common labels, but is essentially a random perturbation for the rarer ones. This pattern is seen across languages, and consistently results in overall gains from polyglot training.
Polyglot Semantic Role Labeling
1805.11598
Table 2: Semantic F1 scores (including predicate sense disambiguation) on the CoNLL 2009 dataset. State of the art for Catalan and Japanese is from Zhao2009, for German and Spanish from Roth2016-fn, for English and Chinese from marcheggiani2017gcn. Italics indicate use of syntax.
['[BOLD] Model', 'cat', 'ces', 'deu', 'eng', 'jpn', 'spa', 'zho']
[['marcheggiani2017lstm', '-', '86.00', '-', '87.60', '-', '80.30', '81.20'], ['Best previously reported', '[ITALIC] 80.32', '86.00', '[ITALIC] 80.10', '[ITALIC] 89.10', '[ITALIC] 78.15', '[ITALIC] 80.50', '[ITALIC] 81.20'], ['Monolingual', '77.31', '84.87', '66.71', '86.54', '74.99', '75.98', '81.26'], ['+ eng(simple polyglot)', '79.08', '84.82', '69.97', '–', '76.00', '76.45', '81.50'], ['+ eng(language ID)', '79.05', '85.14', '69.49', '–', '75.77', '77.32', '81.42'], ['+ eng(language-specific LSTMs)', '79.45', '84.78', '68.30', '–', '75.88', '76.86', '81.89']]
We observe that simple polyglot training improves over monolingual training, with the exception of Czech, where we observe no change in performance.
Polyglot Semantic Role Labeling
1805.11598
Table 4: Semantic F1 scores on the English test set for each language pair.
['eng-only', '+cat', '+ces', '+deu', '+jpn', '+spa', '+zho']
[['86.54', '86.79', '87.07', '87.07', '87.11', '87.24', '87.10']]
English SRL consistently benefits from polyglot training, with an increase of 0.25–0.7 absolute F1 points, depending on the language. Surprisingly, Czech provides the smallest improvement, despite the large amount of data added; the absence of crosslingual transfer in both directions for the English-Czech case, breaking the pattern seen in other languages, could therefore be due to differences in annotation rather than questions of dataset size.
Polyglot Semantic Role Labeling
1805.11598
Table 5: Unlabeled semantic F1 scores on the CoNLL 2009 dataset.
['[BOLD] Model', 'cat', 'ces', 'deu', 'eng', 'jpn', 'spa', 'zho']
[['Monolingual', '93.92', '91.92', '87.95', '92.87', '85.55', '93.61', '87.93'], ['+ eng', '94.09', '91.97', '89.01', '–', '86.17', '93.65', '87.90']]
Labeled vs. unlabeled F1. As can be seen here, the unlabeled F1 improvements are generally positive but small, indicating that polyglot training can help both in structure prediction and in labeling of arguments. The pattern of seeing the largest improvements on the languages with the smallest datasets generally holds here: the largest F1 gains are in German and Catalan, followed by Japanese, with minimal or no improvement elsewhere.
Strongly Incremental Repair Detection
1408.6788
Table 3: Comparison of performance of systems with different stack capacities
['[EMPTY]', 'F [ITALIC] rm', 'F [ITALIC] s', 'DA', 'EO', 'PO', 'TD [ITALIC] rp', 'TD [ITALIC] rm']
[['1-best [ITALIC] rmstart', '0.745', '0.707', '0.699', '3.780', '1.650', '1.0', '2.6'], ['2-best [ITALIC] rmstart', '0.758', '0.721', '0.701', '4.319', '1.665', '1.1', '2.7']]
Our experiments showed that different system settings perform better on different metrics, and no individual setting achieved the best result in all of them. STIR achieves 0.736 on the previously unevaluated Fs. The fastest average time to detection is 1 word for TDrp and 2.6 words for TDrm. We make a preliminary investigation into the effect of increasing the stack capacity by comparing stacks with 1-best rmstart hypotheses per rpstart against 2-best stacks. Moving to the 2-best condition yields a gain in overall accuracy in Frm and Fs, but at the cost of EO and of the time-to-detection scores TDrm and TDrp. The extent to which the stack can be increased without increasing jitter, latency and complexity will be investigated in future work.
Procedural Reasoning Networks for Understanding Multimodal Procedures
1909.08859
Table 1: Quantitative comparison of the proposed PRN model against the baselines.
['Model', 'Single-task Training Cloze', 'Single-task Training Coherence', 'Single-task Training Ordering', 'Single-task Training Average', 'Multi-task Training Cloze', 'Multi-task Training Coherence', 'Multi-task Training Ordering', 'Multi-task Training All']
[['Human∗', '77.60', '81.60', '64.00', '74.40', '–', '–', '–', '–'], ['Hasty Student', '27.35', '[BOLD] 65.80', '40.88', '44.68', '–', '–', '–', '–'], ['Impatient Reader', '27.36', '28.08', '26.74', '27.39', '–', '–', '–', '–'], ['BIDAF', '53.95', '48.82', '62.42', '55.06', '44.62', '36.00', '[BOLD] 63.93', '48.67'], ['BIDAF w/ static memory', '51.82', '45.88', '60.90', '52.87', '[BOLD] 47.81', '40.23', '62.94', '[BOLD] 50.59'], ['PRN', '[BOLD] 56.31', '53.64', '[BOLD] 62.77', '[BOLD] 57.57', '46.45', '[BOLD] 40.58', '62.67', '50.17'], ['∗ Taken from the RecipeQA project website, based on 100 questions sampled randomly from the validation set.', '∗ Taken from the RecipeQA project website, based on 100 questions sampled randomly from the validation set.', '∗ Taken from the RecipeQA project website, based on 100 questions sampled randomly from the validation set.', '∗ Taken from the RecipeQA project website, based on 100 questions sampled randomly from the validation set.', '∗ Taken from the RecipeQA project website, based on 100 questions sampled randomly from the validation set.', '∗ Taken from the RecipeQA project website, based on 100 questions sampled randomly from the validation set.', '∗ Taken from the RecipeQA project website, based on 100 questions sampled randomly from the validation set.', '∗ Taken from the RecipeQA project website, based on 100 questions sampled randomly from the validation set.', '[EMPTY]']]
In the single-task training setting, PRN gives state-of-the-art results compared to the other neural models. Moreover, it achieves the best performance on average. These results demonstrate the importance of having a dynamic memory and keeping track of entities extracted from the recipe. In the multi-task training setting, where a single model is trained to solve all the tasks at once, PRN and BIDAF w/ static memory perform comparably and give much better results than BIDAF. Note that the model performances in the multi-task training setting are worse than the single-task performances. We believe that this is due to the nature of the tasks, some being more difficult than others. We think that the performance could be improved by employing a carefully selected curriculum strategy (McCann et al.).
A Unified Linear-Time Framework for Sentence-Level Discourse Parsing
1905.05682
Table 4: Speed comparison of our systems with other open-sourced systems.
['[BOLD] System', '[BOLD] Speed (Sents/s)', '[BOLD] Speedup']
[['[BOLD] Only Segmenter', '[EMPTY]', '[EMPTY]'], ['CODRA Joty et\xa0al. ( 2015 )', '3.06', '1.0x'], ['WLY Wang et\xa0al. ( 2018 )', '4.30', '1.4x'], ['SPADE Soricut and Marcu ( 2003 )', '5.24', '1.7x'], ['Our (CPU)', '12.05', '3.9x'], ['Our (GPU)', '35.54', '11.6x'], ['[BOLD] Only Parser', '[EMPTY]', '[EMPTY]'], ['SPADE Soricut and Marcu ( 2003 )', '5.07', '1.0x'], ['CODRA Joty et\xa0al. ( 2015 )', '7.77', '1.5x'], ['Our (CPU)', '12.57', '2.5x'], ['Our (GPU)', '30.45', '6.0x'], ['[BOLD] End-to-End (Segmenter → Parser)', '[EMPTY]', '[EMPTY]'], ['CODRA Joty et\xa0al. ( 2015 )', '3.05', '1.0x'], ['SPADE Soricut and Marcu ( 2003 )', '4.90', '1.6x'], ['Our (CPU)', '11.99', '3.9x'], ['Our (GPU)', '28.96', '9.5x']]
As noted earlier, both our segmenter and parser operate in linear time with respect to the number of input units. We test all the systems with the same 100 sentences, randomly selected from our test set, on our machine (CPU: Intel Xeon W-2133, GPU: NVIDIA GTX 1080Ti). We include the model loading time for all the systems. In addition, CODRA’s DCRF parser has an O(n3) inference time. Our segmenter is 6.8x faster than SPADE. Compared to CODRA (the fastest parser as of yet), our parser is 3.9x faster. Finally, our end-to-end system is 5.9x faster than the fastest existing system (SPADE), making our system not only effective but also highly efficient. Even when tested only on CPU, our model is faster than all the other models.
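A minimal sketch of how such sentences-per-second figures can be measured, with model loading time included as described above; load_model and parse are assumed callables standing in for any of the compared systems.

import time

def sentences_per_second(load_model, parse, sentences):
    # Wall-clock throughput, with model loading included in the measured time.
    start = time.perf_counter()
    model = load_model()
    for sentence in sentences:
        parse(model, sentence)
    elapsed = time.perf_counter() - start
    return len(sentences) / elapsed

# e.g. sentences_per_second(my_loader, my_parser, test_sentences_100)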
A Unified Linear-Time Framework for Sentence-Level Discourse Parsing
1905.05682
Table 1: Discourse segmentation results. Superscript ⋆ indicates the model is significantly superior to the WLYELMo model with a p-value <0.01.
['[BOLD] Approach', '[BOLD] Precision', '[BOLD] Recall', '[BOLD] F1']
[['[BOLD] Human Agreement', '98.5', '98.2', '98.3'], ['[BOLD] Baselines', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['SPADE Soricut and Marcu ( 2003 )', '83.8', '86.8', '85.2'], ['F&R Fisher and Roark ( 2007 )', '91.3', '89.7', '90.5'], ['JCN Joty et\xa0al. ( 2012 )', '88.0', '92.3', '90.1'], ['SegBotglove Li et\xa0al. ( 2018 )', '91.08±0.46', '91.03±0.42', '91.05±0.11'], ['WLYELMo Wang et\xa0al. ( 2018 )', '92.04±0.43', '94.41±0.53', '93.21±0.33'], ['[BOLD] Our Segmenter', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Pointer Net (Glove)', '90.55±0.33', '92.29±0.09', '91.41±0.21'], ['Pointer Net (BERT)', '92.05±0.44', '95.03±0.28', '93.51±0.16'], ['Pointer Net (ELMo)', '94.12±0.20⋆', '96.63±0.12⋆', '95.35±0.10⋆'], ['+ Joint training', '[BOLD] 93.34±0.23⋆', '[BOLD] 97.88±0.16⋆', '[BOLD] 95.55±0.13⋆']]
Using encoder hidden states as decoder inputs and adopting the dot product as the attention score function together give a 0.40%-7.29% relative improvement in F1 over the first four baselines. Using ELMo, our segmenter outperforms all the baselines on all three measures. We achieve 2.3%-11.9%, 2.4%-11.3% and 2.3%-12.3% relative improvements in F1, Recall and Precision, respectively. Jointly training with the parser improves this further (95.55 F1). It is worth mentioning that our segmenter’s performance of 95.55 F1 is very close to the human agreement of 98.3 F1. ELMo, as a transfer learning method, provides notable improvements. A similar observation was reported in Wang et al. Surprisingly, the results with BERT were not as good. We suspect this is due to BERT’s special tokenization.
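To make the dot-product scoring concrete, here is a minimal PyTorch sketch of a pointer-network step that scores candidate boundary positions; dimensions and names are illustrative assumptions, not the paper's code.

import torch

def pointer_scores(decoder_state, encoder_states):
    # decoder_state: (hidden,) current decoder input, here an encoder hidden state
    # as described above. encoder_states: (seq_len, hidden) hidden states of the
    # candidate positions. Returns a distribution over positions; the argmax is
    # taken as the predicted EDU boundary.
    scores = encoder_states @ decoder_state      # dot-product attention scores
    return torch.softmax(scores, dim=0)

enc = torch.randn(10, 256)                       # 10 candidate positions
print(pointer_scores(enc[0], enc).argmax().item())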
Towards Generating Long and Coherent Text with Multi-Level Latent Variable Models
1902.00154
Table 7: Human evaluations on Yelp Reviews dataset. Each block is a head-to-head comparison of two models on grammatically, consistency, and non-redundancy.
['[BOLD] Model', '[BOLD] Grammar.', '[BOLD] Cons.', '[BOLD] Non-Red.', '[BOLD] Overall']
[['[ITALIC] ml-VAE', '52.0', '55.0', '53.7', '60.0'], ['[ITALIC] flat-VAE', '30.0', '33.0', '27.7', '32.3'], ['[ITALIC] ml-VAE', '75.3', '86.0', '76.7', '86.0'], ['AAE', '13.3', '10.3', '15.0', '12.0'], ['[ITALIC] flat-VAE', '19.7', '18.7', '14.3', '19.0'], ['Real data', '61.7', '74.7', '74.3', '77.7'], ['[ITALIC] ml-VAE', '28.0', '26.3', '25.0', '30.3'], ['Real data', '48.6', '58.7', '49.0', '61.3']]
As shown in the table, even though both models underperform when compared against the ground-truth real reviews, ml-VAE was rated higher than flat-VAE (raters find ml-VAE closer to human-generated text than flat-VAE) on all evaluation criteria. When compared against the AAE baseline using the same data preprocessing steps and hyperparameters, ml-VAE again produces more grammatically correct and semantically coherent samples. The human evaluations correlate with the automatic metrics, which indicates that our ml-VAE generates more coherent text than the baseline models. We leave further evaluations using embedding-based metrics as a possible extension of our work.
Towards Generating Long and Coherent Text with Multi-Level Latent Variable Models
1902.00154
Table 2: Language modeling results on Yelp and arXiv data. Upper block are baselines, and lower are our models.
['[BOLD] Model', '[BOLD] Yelp [BOLD] NLL', '[BOLD] Yelp [BOLD] KL', '[BOLD] Yelp [BOLD] PPL', '[BOLD] arXiv [BOLD] NLL', '[BOLD] arXiv [BOLD] KL', '[BOLD] arXiv [BOLD] PPL']
[['[ITALIC] flat-LM', '162.6', '-', '48.0', '218.7', '-', '57.6'], ['[ITALIC] flat-VAE', '≤ 163.1', '0.01', '≤ 49.2', '≤ 219.5', '0.01', '≤ 58.4'], ['[ITALIC] ml-LM', '162.4', '-', '47.9', '219.3', '-', '58.1'], ['[ITALIC] ml-VAE-S', '≤ 160.8', '3.6', '≤ 46.6', '≤ 216.8', '5.3', '≤ 55.6'], ['[ITALIC] ml-VAE-D', '≤ [BOLD] 160.2', '6.8', '≤ [BOLD] 45.8', '≤ [BOLD] 215.6', '12.7', '≤ [BOLD] 54.3']]
The flat-VAE model obtains slightly worse NLL and PPL relative to a flat LSTM-based language model. With a multi-level LSTM decoder, our ml-VAE-S yields a larger KL divergence, demonstrating that the VAE model tends to leverage more information from the latent variable in the decoding stage. The PPL of ml-VAE-S also decreases from 47.9 to 46.6 (compared to ml-LM), indicating that the sampled latent codes improve word-level predictions.
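The ≤ signs in the table reflect that a VAE only provides an upper bound on NLL (the negative ELBO, i.e. reconstruction NLL plus KL); the short sketch below shows how such a bound translates into the reported PPL bound, using made-up numbers rather than values from the table.

import math

def ppl_upper_bound(reconstruction_nll, kl, num_tokens):
    # Negative ELBO = reconstruction NLL + KL upper-bounds the true NLL,
    # so exp(bound / num_tokens) upper-bounds the per-word perplexity.
    nll_bound = reconstruction_nll + kl
    return math.exp(nll_bound / num_tokens)

# Illustrative paragraph: 40 tokens, reconstruction NLL 150 nats, KL 7 nats.
print(ppl_upper_bound(150.0, 7.0, 40))   # ~50.7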
Towards Generating Long and Coherent Text with Multi-Level Latent Variable Models
1902.00154
Table 3: Evaluation results for generated sequences by our models and baselines on corpus-level BLEU scores (B-n denotes the corpus-level BLEU-n score.)
['[BOLD] Model', '[BOLD] Yelp [BOLD] B-2', '[BOLD] Yelp [BOLD] B-3', '[BOLD] Yelp [BOLD] B-4', '[BOLD] arXiv [BOLD] B-2', '[BOLD] arXiv [BOLD] B-3', '[BOLD] arXiv [BOLD] B-4']
[['[ITALIC] ARAE', '0.684', '0.524', '0.350', '0.624', '0.475', '0.305'], ['[ITALIC] AAE', '0.735', '0.623', '0.383', '0.729', '0.564', '0.342'], ['[ITALIC] flat-VAE', '0.855', '0.705', '0.515', '0.784', '0.625', '0.421'], ['[ITALIC] ml-VAE-S', '0.901', '0.744', '0.531', '0.821', '[BOLD] 0.663', '0.447'], ['[ITALIC] ml-VAE-D', '[BOLD] 0.912', '[BOLD] 0.755', '[BOLD] 0.549', '[BOLD] 0.825', '0.657', '[BOLD] 0.460']]
VAE tends to be a stronger baseline for paragraph generation, exhibiting higher corpus-level BLEU scores than both AAE and ARAE. The VAE with multi-level decoder demonstrates better BLEU scores than the one with a flat decoder, indicating that the plan-ahead mechanism associated with the hierarchical decoding process indeed benefits the sampling quality. ml-VAE-D exhibits slightly better results than ml-VAE-S. We attribute this to the more flexible prior distribution of ml-VAE-D, which improves the ability of inference networks to extract semantic features from a paragraph, yielding more informative latent codes.
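Corpus-level BLEU-n scores like B-2/B-3/B-4 can be computed with NLTK's corpus_bleu using uniform n-gram weights; the snippet below is a generic sketch (the smoothing choice is an assumption, as the paper's exact evaluation script is not given).

from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def corpus_bleu_n(references, hypotheses, n):
    # references: one list of reference token lists per hypothesis;
    # hypotheses: list of generated token lists.
    weights = tuple([1.0 / n] * n)                 # uniform weights over 1..n-grams
    return corpus_bleu(references, hypotheses, weights=weights,
                       smoothing_function=SmoothingFunction().method1)

refs = [[["the", "food", "was", "great", "and", "the", "service", "friendly"]]]
hyps = [["the", "food", "is", "great", "and", "the", "service", "friendly"]]
print(corpus_bleu_n(refs, hyps, n=2))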