Dataset columns: paper (string), paper_id (string), table_caption (string), table_column_names (large string), table_content_values (large string), text (large string).
Towards Generating Long and Coherent Text with Multi-Level Latent Variable Models
1902.00154
Table 5: Evaluation of diversity of 1000 generated sentences using self-BLEU scores (B-n), unique n-gram percentages (n-gr), and the 2-gram entropy score (Etp-2).
['[BOLD] Model', '[BOLD] Yelp [BOLD] B-2', '[BOLD] Yelp [BOLD] B-3', '[BOLD] Yelp [BOLD] B-4', '[BOLD] Yelp [BOLD] 2gr', '[BOLD] Yelp [BOLD] 3gr', '[BOLD] Yelp [BOLD] 4gr', '[BOLD] Yelp [BOLD] Etp-2']
[['ARAE', '0.725', '0.544', '0.402', '36.2', '59.7', '75.8', '7.551'], ['AAE', '0.831', '0.672', '0.483', '33.2', '57.5', '71.4', '6.767'], ['[ITALIC] flat-VAE', '0.872', '0.755', '0.617', '23.7', '48.2', '69.0', '6.793'], ['[ITALIC] ml-VAE-S', '0.865', '0.734', '0.591', '28.7', '50.4', '70.7', '6.843'], ['[ITALIC] ml-VAE-D', '0.851', '0.723', '0.579', '30.5', '53.2', '72.6', '6.926']]
A small self-BLEU score together with a large corpus-level BLEU score indicates an effective model, i.e., one able to generate realistic-looking as well as diverse samples. Among all the VAE variants, ml-VAE-D shows the smallest self-BLEU scores and the largest unique n-gram percentages, demonstrating the effectiveness of hierarchically structured generative networks as well as latent variables. Even though AAE and ARAE yield better diversity according to both metrics, their corpus-level BLEU scores are much worse relative to ml-VAE-D. We leverage human evaluation for further comparison.
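The diversity metrics reported above (self-BLEU and unique n-gram percentage) can be made concrete with a small sketch. This is an illustrative implementation, not the authors' exact evaluation script: it assumes NLTK's sentence-level BLEU with smoothing, and the function names and toy sentences are made up for the example.

```python
# Illustrative diversity metrics: self-BLEU (lower = more diverse) and the
# percentage of unique n-grams over a set of generated sentences.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(sentences, n=2):
    """Average BLEU-n of each generated sentence against all the others as references."""
    weights = tuple(1.0 / n for _ in range(n))
    smooth = SmoothingFunction().method1
    scores = []
    for i, hyp in enumerate(sentences):
        refs = [s for j, s in enumerate(sentences) if j != i]
        scores.append(sentence_bleu(refs, hyp, weights=weights, smoothing_function=smooth))
    return sum(scores) / len(scores)

def unique_ngram_ratio(sentences, n=2):
    """Fraction of distinct n-grams among all n-grams in the generated corpus."""
    all_ngrams = []
    for sent in sentences:
        all_ngrams.extend(zip(*(sent[i:] for i in range(n))))
    return len(set(all_ngrams)) / max(len(all_ngrams), 1)

generated = [["the", "food", "was", "great"], ["great", "service", "and", "food"]]
print(self_bleu(generated, n=2), unique_ngram_ratio(generated, n=2))
```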
Bi-Decoder Augmented Network for Neural Machine Translation
2001.04586
Table 2: BLEU scores on WMT14 English-German for Transformer-based models.
['[BOLD] NMT Models', '[BOLD] En→De', '[BOLD] De→En']
[['Transformer', '27.5', '31.6'], ['Transformer + AD', '28.1', '32.0'], ['Transformer + AD (Denoising)', '28.0', '31.9'], ['Transformer + AD (RL)', '28.1', '[BOLD] 32.1'], ['[BOLD] BiDAN (Transformer)', '[BOLD] 28.1', '32.0']]
However, the improvement from reinforcement learning is quite modest, and the denoising variant even decreases the scores slightly. We conjecture that this is because the positional encoding of the Transformer reduces the model's capture of order dependencies, thus counteracting the effects of these approaches.
Bi-Decoder Augmented Network for Neural Machine Translation
2001.04586
Table 1: BLEU scores for NMT models on WMT14 English-German and English-French and IWSLT 2015 English-Vietnamese dataset. “AD” denotes auxiliary decoder, “RL” denotes reinforcement learning.
['[BOLD] NMT Models', '[BOLD] WMT14 [BOLD] En→De', '[BOLD] WMT14 [BOLD] De→En', '[BOLD] WMT14 [BOLD] En→Fr', '[BOLD] IWSLT15 [BOLD] En→Vi']
[['Baseline', '22.6', '26.8', '32.3', '24.9'], ['Baseline + AD', '24.0', '28.2', '33.6', '26.2'], ['Baseline + AD (Denoising)', '24.3', '28.4', '34.0', '26.6'], ['Baseline + AD (RL)', '24.4', '28.5', '33.9', '26.8'], ['BiDAN (All modules converge)', '24.6', '28.7', '34.1', '27.0'], ['[BOLD] BiDAN', '[BOLD] 24.7', '[BOLD] 28.9', '[BOLD] 34.2', '[BOLD] 27.1']]
In the middle part of the table, we conduct an ablation experiment to evaluate the individual contribution of each component of our model. First, we only add the auxiliary decoder D2 to the baseline model, and the BLEU scores on all the test sets rise by about 1.4 points, which shows the effectiveness of our bi-decoder architecture and the significance of the language-independent representation of the text. We then train our model with the denoising process alone, i.e., we remove the objective function JRL when optimizing our BiDAN model. We observe that the performance rises by around 0.3 points, which suggests that focusing on the internal structure of the languages is very helpful to the task. Next, we use reinforcement learning together with the original loss to train our auxiliary decoder, which means we do not use the objective function JD. The results show that this leads to about a 0.4-point improvement, which indicates that relaxing the grammatical limitation and capturing the keyword information is very useful in our bi-decoder architecture. Finally, instead of first jointly training the parameters of the whole model until the decoder D1 reaches 90% convergence and then fixing θ2 (the parameters of D2) and training Θ1 with J1 until the model fully converges, we directly train the model until all of its parts converge. The results show that jointly training all modules to full convergence may trap the optimization in a local minimum. In a further analysis, we keep λd and λr at 2 and vary the weight λa of the original objective. As we can see, on all of the datasets the performance drops sharply, even below the baseline model, when we set λa to 0, which means we only use the objective functions JD and JRL without the original one. This indicates that totally ignoring the grammatical structure of the input text is not helpful to the task. We also observe that the performance rises as λa increases, up to a value of 5 or 6. Beyond that point, the results get worse as λa is raised further, which confirms that multi-objective learning improves performance but the objectives need to be properly balanced.
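The weighting experiment above can be summarized by the combined objective. The sketch below only shows how λa, λd, and λr enter the total loss; the individual loss terms are placeholders, and the exact formulation in the paper may differ.

```python
# Hypothetical combination of the BiDAN objectives: the original translation
# loss weighted by lambda_a, plus the auxiliary denoising loss J_D and the
# reinforcement-learning loss J_RL weighted by lambda_d and lambda_r.
def bidan_total_loss(j_original, j_denoise, j_rl, lambda_a=5.0, lambda_d=2.0, lambda_r=2.0):
    # lambda_a = 0 reproduces the ablation that drops the original objective,
    # which the text reports as falling below the baseline.
    return lambda_a * j_original + lambda_d * j_denoise + lambda_r * j_rl

print(bidan_total_loss(j_original=2.1, j_denoise=1.4, j_rl=0.8))
```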
Bi-Decoder Augmented Network for Neural Machine Translation
2001.04586
Table 3: BLEU scores on WMT14 English-to-French for NMT models with different encoders.
['[BOLD] Encoder Source', '[BOLD] BLEU ( [ITALIC] p1)', '[BOLD] BLEU ( [ITALIC] p2)']
[['Random Encoder', '7.2', '0.1'], ['En→De Encoder', '11.1', '0.2'], ['En→De Encoder (BiDAN)', '27.8', '2.5'], ['En→Fr Encoder (Original)', '[BOLD] 62.4', '[BOLD] 39.1']]
We train the NMT model on the WMT14 English-to-French dataset, and then replace the well-trained encoder with encoders from different sources at test time. We first replace the encoder with one using random parameters, and the results drop considerably, which is not surprising. Then we use the encoder trained on the WMT14 English-to-German dataset to test cross-language performance. The results improve modestly compared to the random encoder but still show a huge gap to the original model. This indicates that coarse cross-language transfer does not work even though the languages are quite similar. Finally, we replace the original encoder with the encoder from our BiDAN framework, which is also trained on the WMT14 English-to-German dataset. Although the performance is still far from the level achieved by the original encoder, it is much better than with the previous two encoders. We point out that the only difference among these models is the parameters of the encoder, which determine the text representation. Thus the different performances demonstrate that our model provides a more language-independent text representation, and this may also explain the improvement of our structure over the general NMT model.
Game-Based Video-Context Dialogue
1809.04560
Table 8: Ablation of cross-entropy loss vs. cross-entropy + max-margin loss for our BiDAF-based generative model (on dev set).
['Models', 'recall@1', 'recall@2', 'recall@5']
[['Cross-entropy (XE)', '13.12', '23.45', '54.78'], ['XE+Max-margin', '15.61', '27.39', '57.02']]
Max-margin loss provides knowledge about the negative samples to the generative model and hence improves the retrieval-based recall@k scores.
Game-Based Video-Context Dialogue
1809.04560
Table 3: Performance of our baselines, discriminative models, and generative models for recall@k metrics on our Twitch-FIFA test set. C and V represent chat and video context, respectively.
['Models', 'r@1', 'r@2', 'r@5']
[['Baselines', 'Baselines', 'Baselines', 'Baselines'], ['Most-Frequent-Response', '10.0', '16.0', '20.9'], ['Naive Bayes', '9.6', '20.9', '51.5'], ['Logistic Regression', '10.8', '21.8', '52.5'], ['Nearest Neighbor', '11.4', '22.6', '53.2'], ['Chat-Response-Cosine', '11.4', '22.0', '53.2'], ['Discriminative Model', 'Discriminative Model', 'Discriminative Model', 'Discriminative Model'], ['Dual Encoder (C)', '17.1', '30.3', '61.9'], ['Dual Encoder (V)', '16.3', '30.5', '61.1'], ['Triple Encoder (C+V)', '18.1', '33.6', '68.5'], ['TriDAF+Self Attn (C+V)', '20.7', '35.3', '69.4'], ['Generative Model', 'Generative Model', 'Generative Model', 'Generative Model'], ['Seq2seq +Attn (C)', '14.8', '27.3', '56.6'], ['Seq2seq +Attn (V)', '14.8', '27.2', '56.7'], ['Seq2seq + Attn (C+V)', '15.7', '28.0', '57.0'], ['Seq2seq + Attn + BiDAF (C+V)', '16.5', '28.5', '57.7']]
We first discuss the results of our simple non-trained and trained baselines. The 'Most-Frequent-Response' baseline, which just ranks the 10-candidate response retrieval list based on their frequency in the training data, gets only around 10% recall@1. Our other non-trained baselines, 'Chat-Response-Cosine' and 'Nearest Neighbor', which rank the candidate responses by (Twitch-trained RNN encoder vector) cosine similarity with the chat context and with the K-best training contexts' response vectors, respectively, achieve slightly better scores. Our simple trained baselines (Naive Bayes and logistic regression) also achieve relatively low scores, indicating that a simple, shallow model will not work on this challenging dataset. Our dual encoder models are significantly better than random choice and all the simple baselines above, and the two context modalities carry complementary information, since using both of them together (in 'Triple Encoder') improves the overall performance of the model. Finally, our novel TriDAF model with self-attention performs significantly better than the triple encoder model. Next, we evaluate the performance of our generative models with both retrieval-based recall@k scores and phrase-matching-based metrics. Starting with a simple sequence-to-sequence attention model with video-only, chat-only, and video+chat encoders, the recall@k scores are better than all the simple baselines. Moreover, using both video+chat context is again better than using only one context modality. Finally, the addition of the bidirectional attention flow mechanism improves performance on all recall@k scores.
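The retrieval-based recall@k numbers in Table 3 are computed over a fixed candidate list per test instance (10 candidates here, one of which is the ground-truth response). A minimal sketch with illustrative variable names:

```python
# recall@k over ranked candidate lists: the fraction of instances whose
# ground-truth response appears in the model's top-k ranked candidates.
def recall_at_k(ranked_candidate_lists, gold_ids, k):
    hits = sum(1 for ranked, gold in zip(ranked_candidate_lists, gold_ids)
               if gold in ranked[:k])
    return 100.0 * hits / len(gold_ids)

# Two toy instances: candidate ids sorted by model score, plus the gold id.
ranked = [[3, 7, 1, 0, 9, 2, 4, 5, 6, 8], [5, 2, 8, 1, 0, 3, 4, 6, 7, 9]]
gold = [7, 0]
print(recall_at_k(ranked, gold, 1), recall_at_k(ranked, gold, 2), recall_at_k(ranked, gold, 5))
```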
Game-Based Video-Context Dialogue
1809.04560
Table 4: Performance of our generative models on phrase matching metrics.
['Models', 'METEOR', 'ROUGE-L']
[['Multiple References', 'Multiple References', 'Multiple References'], ['Seq2seq + Atten. (C)', '2.59', '8.44'], ['Seq2seq + Atten. (V)', '2.66', '8.34'], ['Seq2seq + Atten. (C+V) ⊗', '3.03', '8.84'], ['⊗ + BiDAF (C+V)', '3.70', '9.82']]
Again, our BiDAF model is statistically significantly better than the non-BiDAF model on both METEOR (p<0.01) and ROUGE-L (p<0.02). Since dialogue systems can have several diverse, non-overlapping valid responses, we consider a multi-reference setup where all the utterances in the 10-sec response window are treated as valid responses.
Game-Based Video-Context Dialogue
1809.04560
Table 7: Ablation of classification vs. max-margin loss on our TriDAF discriminative model (on dev set).
['Models', 'recall@1', 'recall@2', 'recall@5']
[['Classification loss', '19.32', '33.72', '66.60'], ['Max-margin loss', '22.20', '35.90', '68.09']]
We observe that max-margin loss performs better than the classification loss, which is intuitive because max-margin loss tries to differentiate between positive and negative training example triples.
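The contrast between the two losses can be sketched as follows. This is a generic hinge-style ranking loss over positive and negative (context, response) triples, assuming PyTorch; the scoring model and margin value are placeholders rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def max_margin_loss(pos_scores, neg_scores, margin=1.0):
    # Penalize whenever a negative triple scores within `margin` of the positive one.
    return F.relu(margin - pos_scores + neg_scores).mean()

pos = torch.tensor([2.3, 1.1, 0.4])   # scores of true (chat, video, response) triples
neg = torch.tensor([0.9, 1.4, 0.2])   # scores of sampled negative responses
print(max_margin_loss(pos, neg))
```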
Pretraining with Contrastive Sentence Objectives Improves Discourse Performance of Language Models
2005.10389
Table 1: Conpono improves the previous state-of-the-art on four DiscoEval tasks. The average accuracy across all tasks is also a new state-of-the-art, despite a small drop in accuracy for PDTB-E. BERT-Base and BERT-Large numbers are reported from Chen et al. (2019), while the rest were collected for this paper. We report standard deviations by running the evaluations 10 times with different seeds for the same Conpono model weights.
['Model', 'SP', 'BSO', 'DC', 'SSP', 'PDTB-E', 'PDTB-I', 'RST-DT', 'avg.']
[['BERT-Base', '53.1', '68.5', '58.9', '80.3', '41.9', '42.4', '58.8', '57.7'], ['BERT-Large', '53.8', '69.3', '59.6', '[BOLD] 80.4', '[BOLD] 44.3', '43.6', '59.1', '58.6'], ['RoBERTa-Base', '38.7', '58.7', '58.4', '79.7', '39.4', '40.6', '44.1', '51.4'], ['BERT-Base BSO', '53.7', '72.0', '71.9', '80.0', '42.7', '40.5', '[BOLD] 63.8', '60.6'], ['Conpono [ITALIC] isolated', '50.2', '57.9', '63.2', '79.9', '35.8', '39.6', '48.7', '53.6'], ['Conpono [ITALIC] uni-encoder', '59.9', '74.6', '72.0', '79.6', '40.0', '43.9', '61.9', '61.7'], ['Conpono (k=2)', '[BOLD] 60.7', '[BOLD] 76.8', '[BOLD] 72.9', '[BOLD] 80.4', '42.9', '[BOLD] 44.9', '63.1', '[BOLD] 63.0'], ['Conpono std.', '±.3', '±.1', '±.3', '±.1', '±.7', '±.6', '±.2', '-']]
We report results from versions of Conpono using each of these encoding approaches, labeled isolated to represent separate encoding and uni-encoder to represent joint encoding of the anchor and target without a separate anchor encoding. Results: Our model excels in particular on the sentence ordering and coherence tasks (SP, BSO, and DC). Note that our model parameter count is the same as BERT-Base but it outperforms BERT-Large, which has significantly more parameters and has used much more compute for pretraining. BERT-Base BSO scores tend to fall between those of BERT-Base and our model, implying that the sentence ordering objective is improving the models for this benchmark, but that binary sentence ordering is not sufficient to capture the added benefits of including more fine-grained ordering and negative examples.
Pretraining with Contrastive Sentence Objectives Improves Discourse Performance of Language Models
2005.10389
Table 3: Our model improves accuracy over BERT-Base for RTE and COPA benchmarks. Improvements are comparable to BERT-Large but still lag behind much larger models trained on more data, such as ALBERT. All scores are on the validation set.
['Model', 'RTE', 'COPA']
[['BERT-Base', '66.4', '62.0'], ['BERT-Base BSO', '71.1', '67.0'], ['Conpono', '70.0', '69.0'], ['BERT-Large', '70.4', '69.0'], ['ALBERT', '86.6', '-']]
Results: We believe that the coherence and ordering aspects of these evaluation tasks are well suited to demonstrating how our model can improve on strong baselines such as BERT-Base. Interestingly, we also observe improvements over the baseline with BERT-Base BSO, showing that even simple discourse-level objectives can lead to noticeable downstream effects. Though these improvements are modest compared to BERT-Large, they highlight that our model does not only improve results on artificial sentence-ordering tasks, but also on aspects of benchmarks used to evaluate pretrained language models and language understanding in general.
Pretraining with Contrastive Sentence Objectives Improves Discourse Performance of Language Models
2005.10389
Table 4: Conpono is more effective at classifying the most plausible sentence from the extended context than BERT-Base. We report the BERT-Large exact match score, where the model selects only the target entity from the context, for reference. All scores are on the validation set.
['Model', 'Accuracy']
[['BERT-Base', '61.2'], ['Conpono', '63.2'], ['BERT-Large', '69.8 [EM]']]
The task for the ReCoRD dataset is to select the correct entity from those that appear in the context to fill in the blank in the target. Previous models for ReCoRD have used a structure similar to that used for SQuAD (Rajpurkar et al.). We, instead, generate all possible target sentences by filling the blank with each marked entity and discriminatively choose the sentence most likely to be the true “plausible” sentence given the context. This modified task evaluates how our model compares to BERT-Base at choosing the most coherent sentence from a set of nearly identical sentences. The strong results from BERT-Large imply that the better text representation of a large model is able to subsume any improvement from learning plausible contexts for this task.
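A sketch of this modified evaluation, assuming the standard @placeholder blank marker used in ReCoRD; the plausibility scorer is a stand-in for the actual pretrained model:

```python
# Fill the blank with each candidate entity and keep the filled sentence the
# model judges most plausible given the context.
def most_plausible_entity(context, target_with_blank, entities, plausibility_score):
    candidates = [target_with_blank.replace("@placeholder", e) for e in entities]
    scores = [plausibility_score(context, cand) for cand in candidates]
    best = max(range(len(entities)), key=lambda i: scores[i])
    return entities[best]

# Toy scorer that prefers candidates sharing more words with the context.
def toy_scorer(context, candidate):
    return len(set(context.lower().split()) & set(candidate.lower().split()))

ctx = "The committee awarded the prize to Marie Curie for her work on radioactivity."
tgt = "@placeholder received the award for research on radioactivity."
print(most_plausible_entity(ctx, tgt, ["Marie Curie", "the committee"], toy_scorer))
```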
Pretraining with Contrastive Sentence Objectives Improves Discourse Performance of Language Models
2005.10389
Table 5: The ablation analysis shows the effects of different k values (i.e., window sizes) in our objective, of removing the MLM objective during pretraining, and of training with a small transformer encoder.
['Model', 'SP', 'BSO', 'DC', 'SSP', 'PDTB-E', 'PDTB-I', 'RST-DT', 'avg.']
[['k=4', '59.84', '76.05', '[BOLD] 73.62', '[BOLD] 80.65', '42.28', '44.25', '63.00', '62.81'], ['k=3', '60.47', '76.68', '72.74', '80.30', '[BOLD] 43.40', '44.28', '62.56', '62.92'], ['k=2', '[BOLD] 60.67', '[BOLD] 76.75', '72.85', '80.38', '42.87', '[BOLD] 44.87', '[BOLD] 63.13', '[BOLD] 63.07'], ['k=1', '47.56', '66.03', '72.62', '80.15', '42.79', '43.55', '62.31', '59.29'], ['- MLM', '54.92', '75.37', '68.35', '80.2', '41.67', '43.88', '61.27', '60.81'], ['Small', '45.41', '61.70', '67.71', '75.58', '35.26', '36.18', '46.58', '52.63']]
We observe that using a window size for our objective that is larger than 1 is key to seeing downstream improvements. We believe that this is due to the objective being harder for the model because there is more variation farther from the anchor. At the same time, increasing the window size beyond 2 seems to result in similar performance. This may be because larger distances from the anchor also lead to more ambiguity.
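To make the role of k concrete, the sketch below constructs training examples the way a window-based contrastive objective would: sentences within ±k positions of the anchor are targets, and random corpus sentences act as negatives. The encoder and loss are omitted, and the exact example construction in Conpono may differ.

```python
import random

def build_window_examples(doc_sentences, anchor_idx, k, num_negatives, corpus, seed=0):
    rng = random.Random(seed)
    positives = []
    for offset in range(-k, k + 1):
        j = anchor_idx + offset
        if offset != 0 and 0 <= j < len(doc_sentences):
            positives.append((doc_sentences[j], offset))   # target sentence + signed distance
    negatives = rng.sample(corpus, num_negatives)
    return positives, negatives

doc = ["s0", "s1", "s2", "s3", "s4"]
corpus = ["r%d" % i for i in range(100)]
print(build_window_examples(doc, anchor_idx=2, k=2, num_negatives=3, corpus=corpus))
```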
Unsupervised Opinion Summarization with Noising and Denoising
2004.10150
Table 4: ROUGE-L of our model and versions thereof with less synthetic data (second block), using only one noising method (third block), and without some modules (fourth block). A more comprehensive table and discussion can be found in the Appendix.
['Model', 'RT', 'Yelp']
[['DenoiseSum', '16.27', '17.65'], ['10% synthetic dataset', '15.39', '16.22'], ['50% synthetic dataset', '15.76', '17.54'], ['no segment noising', '16.03', '16.88'], ['no document noising', '16.22', '16.67'], ['no explicit denoising', '16.06', '17.06'], ['no partial copy', '15.89', '16.31'], ['no discriminator', '15.84', '16.64'], ['using human categories', '15.87', '15.86']]
Our experiments confirm that increasing the size of the synthetic data improves performance, and that both segment and document noising are useful. We also show that explicit denoising, partial copy, and the discriminator help achieve best results. Finally, human-labeled categories (instead of LDA topics) decrease model performance, which suggests that more useful labels can be approximated by automatic means.
Unsupervised Opinion Summarization with Noising and Denoising
2004.10150
Table 6: ROUGE-1/2/L F1 scores of our model and versions thereof with less synthetic data (second block), using only one noising method (third block), and without some modules (fourth block).
['[EMPTY]', 'Rotten Tomatoes', 'Rotten Tomatoes', 'Rotten Tomatoes', 'Yelp', 'Yelp', 'Yelp']
[['Model', 'ROUGE-1', 'ROUGE-2', 'ROUGE-L', 'ROUGE-1', 'ROUGE-2', 'ROUGE-L'], ['DenoiseSum', '[BOLD] 21.26', '4.61', '[BOLD] 16.27', '[BOLD] 30.14', '[BOLD] 4.99', '[BOLD] 17.65'], ['10% synthetic dataset', '20.16', '3.14', '15.39', '28.54', '3.63', '16.22'], ['50% synthetic dataset', '20.76', '3.91', '15.76', '29.16', '4.40', '17.54'], ['no segment noising', '20.64', '4.39', '16.03', '28.93', '4.31', '16.88'], ['no document noising', '21.23', '4.38', '16.22', '28.75', '4.06', '16.67'], ['no explicit denoising', '21.17', '4.18', '16.06', '28.60', '4.10', '17.06'], ['no partial copy', '20.76', '4.01', '15.89', '28.03', '4.58', '16.31'], ['no discriminator', '20.77', '4.48', '15.84', '29.09', '4.22', '16.64'], ['using human categories', '20.67', '[BOLD] 4.69', '15.87', '28.54', '4.02', '15.86']]
The final model consistently performs better on all metrics when compared to versions with less synthetic data (second block), versions with only one type of noise (third block), and versions with a module removed (fourth block). When using human-labeled categories, we see a slight improvement in ROUGE-2 on Rotten Tomatoes; however, the model performs substantially worse on the other metrics. We believe there are several reasons for this. Firstly, the available human-labeled categories are not fine-grained enough to capture the various aspects mentioned in the reviews and their sentiment (e.g., did the actors perform well? was the plot convoluted?). Secondly, the number of business types available on Yelp is very large (i.e., 898 types), which makes the discriminator loss Ldisc hard to optimize. This explains the relatively larger decrease in performance on Yelp.
Incorporating Pragmatic Reasoning Communication into Emergent Language
2006.04109
Table 2: Pragmatics communication accuracy using virtual interlocutors.
['[BOLD] Virtual Opponent', '[BOLD] SFide', '[BOLD] LFide', '[BOLD] ArgmaxL', '[BOLD] RSA', '[BOLD] IBR', '[BOLD] GT', '[BOLD] GTs']
[['Exact copy', '100%', '100%', '55.6±2.5', '55.1±2.1', '80.6±2.8', '75.5±2.0', '94.0±0.6'], ['Training 100k rnd', '96.6%', '97.0%', '53.0±2.3', '54.3±1.4', '68.6±1.6', '58.1±2.2', '69.9±1.8']]
In classic pragmatic frameworks, it is often assumed that interlocutors know each other very well. For example, the prior probabilities PS0 and PL0 are common knowledge to both. However, in practice, game information may be incomplete and one's assumptions about the other interlocutor may diverge from reality. Each person learns their own language model and reasons about others based on their own mental model. To check how this affects pragmatics, we assume agents S and L first learn to speak and listen, respectively, as before, yielding PS0 and PL0; in addition, S has its own listener model PL′0 and L has its own speaker model PS′0. We then train PS′0 and PL′0 by communicating with and simulating the outputs of PS0 and PL0 on game instances. In the end, PS′0 is similar to PS0 but not exactly the same, and likewise for the listener model. This amounts to policy reconstruction methods from the perspective of theory of mind and opponent modeling. During testing, S and L try to communicate using the pragmatic frameworks, each using their own model as the virtual interlocutor model. “Fide” here denotes the average fidelity with which a virtual model simulates the real one, computed as the cosine similarity between the output distributions of the two models on the same input. We observe that although the fidelities are fairly high after training, their minor differences substantially hamper the pragmatic performance. GameTable is most susceptible to this, while IBR and GameTable-sequential are fairly robust.
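The fidelity ("Fide") measure described above is just an averaged cosine similarity between output distributions. A minimal NumPy sketch with made-up distributions:

```python
import numpy as np

def fidelity(real_dists, virtual_dists):
    """Average cosine similarity between the real model's output distribution
    and its virtual copy's output distribution on the same inputs."""
    sims = [np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))
            for p, q in zip(real_dists, virtual_dists)]
    return float(np.mean(sims))

p_s0 = [np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.8, 0.1])]        # real speaker outputs
p_s0_prime = [np.array([0.6, 0.3, 0.1]), np.array([0.2, 0.7, 0.1])]  # L's model of the speaker
print(fidelity(p_s0, p_s0_prime))
```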
Learning to Learn Morphological Inflection for Resource-Poor Languages
2004.13304
Table 2: Inflection accuracy on the test languages; best results in bold. Languages listed by language family; from top to bottom: Romance, Slavic, Uralic.
['[EMPTY]', '[BOLD] PG [BOLD] MAML-PG', '[BOLD] PG [BOLD] MulPG+FT', '[BOLD] PG [BOLD] MulPG', '[BOLD] PG', '[BOLD] MED [BOLD] MAML-MED', '[BOLD] MED [BOLD] MulMED+FT', '[BOLD] MED [BOLD] MulMED', '[BOLD] UZH']
[['Asturian', '72.08', '[BOLD] 73.28', '66.98', '69.82', '68.94', '65.54', '58.62', '71.56'], ['French', '[BOLD] 64.16', '61.54', '49.10', '53.98', '46.18', '43.64', '24.68', '64.04'], ['Friulian', '[BOLD] 80.00', '77.00', '66.00', '69.40', '63.40', '59.20', '35.40', '78.20'], ['Italian', '53.12', '[BOLD] 54.30', '40.12', '45.06', '40.88', '35.22', '20.22', '53.12'], ['Ladin', '58.80', '58.60', '40.80', '66.20', '50.20', '47.80', '31.80', '[BOLD] 68.60'], ['Latin', '14.04', '13.90', '7.90', '13.36', '9.10', '8.28', '2.52', '[BOLD] 15.98'], ['Middle French', '83.18', '80.68', '65.76', '81.70', '71.18', '66.24', '40.36', '[BOLD] 85.16'], ['Neapolitan', '81.00', '82.20', '72.60', '77.00', '78.00', '75.20', '47.40', '[BOLD] 84.20'], ['Norman', '53.20', '54.80', '34.40', '[BOLD] 58.40', '[BOLD] 58.40', '50.00', '13.20', '50.40'], ['Occitan', '[BOLD] 77.20', '72.40', '66.00', '70.60', '70.20', '70.80', '46.40', '74.80'], ['Old French', '[BOLD] 44.48', '42.86', '27.82', '36.16', '34.58', '31.78', '16.84', '42.14'], ['Romanian', '40.94', '39.40', '31.44', '33.70', '29.32', '26.30', '5.12', '[BOLD] 42.28'], ['Spanish', '71.62', '[BOLD] 72.20', '66.84', '55.58', '65.42', '59.08', '52.02', '65.08'], ['Venetian', '74.72', '73.46', '68.06', '75.14', '71.08', '67.86', '40.32', '[BOLD] 76.40'], ['Belarusian', '26.32', '24.74', '17.24', '22.12', '14.68', '11.84', '1.12', '[BOLD] 27.56'], ['Czech', '[BOLD] 50.36', '49.08', '42.86', '35.46', '34.46', '33.06', '26.14', '41.56'], ['Kashubian', '59.20', '58.40', '56.00', '57.20', '58.00', '59.20', '19.20', '[BOLD] 63.60'], ['Lower Sorbian', '[BOLD] 53.52', '51.54', '47.22', '40.72', '40.36', '36.88', '15.36', '41.88'], ['Old Church Slavonic', '[BOLD] 50.00', '[BOLD] 50.00', '35.80', '47.00', '38.00', '29.40', '7.40', '42.40'], ['Polish', '[BOLD] 43.40', '42.60', '35.38', '29.62', '33.40', '31.56', '14.36', '41.82'], ['Russian', '[BOLD] 52.48', '51.20', '41.74', '40.66', '29.34', '23.80', '11.68', '48.22'], ['Serbo-Croatian', '[BOLD] 39.88', '37.04', '23.02', '35.88', '26.50', '21.62', '9.32', '38.38'], ['Slovene', '[BOLD] 59.68', '58.44', '45.94', '48.42', '43.62', '42.74', '27.92', '53.18'], ['Ukrainian', '48.30', '[BOLD] 48.72', '43.68', '36.38', '35.66', '28.12', '18.04', '46.78'], ['Hungarian', '27.64', '23.22', '12.46', '37.78', '20.02', '17.60', '4.04', '[BOLD] 38.04'], ['Ingrian', '[BOLD] 50.40', '46.40', '31.60', '44.00', '34.40', '40.00', '16.40', '32.80'], ['Karelian', '85.20', '84.80', '62.40', '[BOLD] 90.80', '79.60', '71.20', '29.20', '79.20'], ['Livonian', '28.00', '27.00', '22.20', '[BOLD] 32.80', '22.80', '22.60', '2.60', '29.80'], ['Votic', '[BOLD] 26.60', '24.60', '11.20', '22.60', '25.60', '25.20', '11.40', '23.00'], ['[BOLD] average', '[BOLD] 54.12', '52.91', '42.50', '49.23', '44.60', '41.44', '22.38', '52.42']]
We make the following observations. For both MED and PG, MAML-trained models outperform all other models of the same architecture: MAML-PG (resp. MAML-MED) obtains a 1.21% (resp. 3.16%) higher accuracy on average over languages than the second-best model MulPG+FT (resp. MulMED+FT). This demonstrates that MAML is more effective than multi-task training. For both architectures, models obtained by multi-task training and subsequent fine-tuning outperform models trained exclusively in a multi-task fashion: MulPG+FT (resp. MulMED+FT) obtains a 10.41% (resp. 19.06%) higher accuracy on average than MulPG (resp. MulMED). Since the differences in performance are substantial, we conclude that the use of fine-tuning—with or without MAML—is important. All PG models outperform their corresponding MED models. This is in line with the findings of Cotterell et al. (2018) that the pointer-generator network is a strong model for morphological inflection in the low-resource setting. Plain PG performs worse than the fine-tuned PG models, but better than MulPG. That both MAML-PG and MulPG+FT outperform PG shows the importance of fine-tuning. Multi-task training as suggested by Kann et al. (2017) seems to not work well in our setup. One possible reason is that we include languages from 3 different families, as opposed to the original experiments which focused on one family at a time. Thus, better ways to account for the number of unrelated examples are probably needed for MulPG. MulPG+FT and MAML-PG perform better than UZH, the state-of-the-art model. We conclude that cross-lingual transfer is a promising direction for improving morphological inflection for resource-poor languages. Looking at individual languages, MAML-PG performs better than the plain PG model in all cases, except for 3 Romance (Ladin, Norman, and Venetian) and 3 Uralic (Hungarian, Karelian, and Livonian) languages. There are no exceptions in the Slavic family. That relatively many monolingual models outperform cross-lingual transfer models for Uralic languages might be due to these languages being less similar to each other than those in the other families. We look into this in more detail in the next section.
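For readers unfamiliar with the training schemes being compared, the sketch below shows a generic first-order MAML step of the kind contrasted with plain multi-task training above: adapt a copy of the shared initialization on a support batch from one language, then accumulate the meta-gradient from a query batch. The model, loss function, data batches, and learning rates are placeholders, not the paper's exact setup.

```python
import copy
import torch

def maml_step(model, tasks, loss_fn, meta_optimizer, inner_lr=1e-2):
    """tasks: iterable of (support_batch, query_batch) pairs, one per language."""
    meta_optimizer.zero_grad()
    for support_batch, query_batch in tasks:
        adapted = copy.deepcopy(model)
        inner_loss = loss_fn(adapted, support_batch)
        grads = torch.autograd.grad(inner_loss, adapted.parameters())
        with torch.no_grad():
            for p, g in zip(adapted.parameters(), grads):
                p -= inner_lr * g                      # inner adaptation step
        query_loss = loss_fn(adapted, query_batch)     # evaluate the adapted copy
        query_loss.backward()                          # first-order: gradients land on `adapted`
        with torch.no_grad():
            for p, ap in zip(model.parameters(), adapted.parameters()):
                p.grad = ap.grad.clone() if p.grad is None else p.grad + ap.grad
    meta_optimizer.step()                              # update the shared initialization
```

After such MAML pretraining, the shared initialization is fine-tuned on the low-resource target language, which roughly mirrors the MulPG+FT pipeline except that the initialization was explicitly optimized to be fine-tunable.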
Investigation of Synthetic Speech Detection Using Frame- and Segment-Specific Importance Weighting
1610.03009
TABLE II: Performance of Each of the Sound-class Detectors Measured in Terms of Equal-error-rates (EERs) for the Development Data. Frequency of Observation in Development Utterances is also Shown for Each Class Type.
['Class', 'S1', 'S2', 'S3', 'S4', 'S5', 'All', 'Freq.']
[['Vowel', '3.52', '13.30', '[BOLD] 0.65', '[BOLD] 0.74', '7.26', '6.35', '0.542'], ['Nasal', '8.90', '20.82', '5.09', '5.86', '13.79', '11.62', '0.156'], ['Glide', '9.33', '21.69', '4.10', '4.44', '15.92', '12.15', '0.118'], ['Stop', '[BOLD] 2.24', '[BOLD] 4.78', '0.70', '0.78', '[BOLD] 6.77', '[BOLD] 3.68', '0.112'], ['Rest', '8.97', '10.58', '3.32', '3.78', '16.08', '9.43', '0.072']]
The performance of each class differs significantly from the others, and it changes substantially depending on the attack method. Also note that, even though the vowel class is observed more frequently than the other classes, its performance is better than the other class detectors only for the HMM-based TTS attacks. For the voice-conversion attacks, short-duration stop sounds become more informative even though they occur far less frequently than the vowels.
Read + Verify: Machine Reading Comprehension with Unanswerable Questions
1808.05759
Table 5: Comparison of different readers with fixed answer verifier.
['Configuration', 'All EM', 'All F1', 'NoAns ACC']
[['DocQA', '61.9', '64.8', '69.1'], ['+ Model-III', '[BOLD] 66.5', '[BOLD] 69.2', '[BOLD] 75.2'], ['DocQA + ELMo', '65.1', '67.6', '70.6'], ['+ Model-III', '[BOLD] 68.0', '[BOLD] 70.7', '[BOLD] 76.1']]
We find that the absolute improvements are even larger: the no-answer accuracy roughly increases by 6 points when adding Model-III to DocQA (from 69.1 to 75.2), and 5.5 points when adding Model-III to DocQA + ELMo (from 70.6 to 76.1).
Read + Verify: Machine Reading Comprehension with Unanswerable Questions
1808.05759
Table 1: Comparison of different approaches on the SQuAD 2.0 test set, extracted on Aug 28, 2018: 1: Levy et al. (2017), 2: Clark et al. (2018), 3: Liu et al. (2017), 4: Huang et al. (2017), and 5: Wang et al. (2018). † indicates unpublished works.
['Model', 'Dev EM', 'Dev F1', 'Test EM', 'Test F1']
[['BNA1', '59.8', '62.6', '59.2', '62.1'], ['DocQA2', '61.9', '64.8', '59.3', '62.3'], ['DocQA + ELMo', '65.1', '67.6', '63.4', '66.3'], ['ARRR†', '-', '-', '68.6', '71.1'], ['VS3−Net†', '-', '-', '68.4', '71.3'], ['SAN3', '-', '-', '68.6', '71.4'], ['FusionNet++(ensemble)4', '-', '-', '70.3', '72.6'], ['SLQA+5', '-', '-', '71.5', '[BOLD] 74.4'], ['RMR + ELMo + Verifier', '[BOLD] 72.3', '[BOLD] 74.8', '[BOLD] 71.7', '74.2'], ['Human', '86.3', '89.0', '86.9', '89.5']]
We use Model-III as the default answer verifier, and only report the best result. As we can see, our system obtains state-of-the-art results, achieving an EM score of 71.7 and an F1 score of 74.2 on the test set. Notice that SLQA+ reaches a result comparable to our approach.
Read + Verify: Machine Reading Comprehension with Unanswerable Questions
1808.05759
Table 2: Comparison of readers with different auxiliary losses.
['Configuration', 'HasAns EM', 'HasAns F1', 'All EM', 'All F1', 'NoAns ACC']
[['RMR', '72.6', '81.6', '66.9', '69.1', '73.1'], ['- indep-I', '71.3', '80.4', '66.0', '68.6', '72.8'], ['- indep-II', '72.4', '81.4', '64.0', '66.1', '69.8'], ['- both', '71.9', '80.9', '65.2', '67.5', '71.4'], ['RMR + ELMo', '79.4', '86.8', '71.4', '73.7', '77.0'], ['- indep-I', '78.9', '86.5', '71.2', '73.5', '76.7'], ['- indep-II', '79.5', '86.6', '69.4', '71.4', '75.1'], ['- both', '78.7', '86.2', '70.0', '71.9', '75.3']]
Next, we perform an ablation study on the SQuAD 2.0 development set to show the effect of each individual proposed component. Removing the independent span loss (indep-I) results in a performance drop on all answerable questions (HasAns), indicating that this loss helps the model better identify the answer boundary. Ablating the independent no-answer loss (indep-II), on the other hand, has little influence on HasAns, but leads to a severe decline in no-answer accuracy (NoAns ACC). This suggests that a conflict between answer extraction and no-answer detection indeed exists. Finally, removing both losses causes a degradation of more than 1.5 points on the overall F1 performance, with or without ELMo embeddings.
Read + Verify: Machine Reading Comprehension with Unanswerable Questions
1808.05759
Table 3: Comparison of different architectures for the answer verifier.
['Configuration', 'NoAns ACC']
[['Model-I', '74.5'], ['Model-II', '74.6'], ['Model-II + ELMo', '75.3'], ['Model-III', '[BOLD] 76.2'], ['Model-III + ELMo', '76.1']]
Model-III outperforms all other competitors, achieving a no-answer accuracy of 76.2. This illustrates that the combination of two different architectures brings further improvement. Adding ELMo embeddings, however, does not boost the performance.
Read + Verify: Machine Reading Comprehension with Unanswerable Questions
1808.05759
Table 4: Comparison of readers with different answer verifiers.
['Configuration', 'All EM', 'All F1', 'NoAns ACC']
[['RMR', '66.9', '69.1', '73.1'], ['+ Model-I', '68.3', '71.1', '76.2'], ['+ Model-II', '68.1', '70.8', '75.6'], ['+ Model-II + ELMo', '68.2', '70.9', '75.9'], ['+ Model-III', '[BOLD] 68.5', '[BOLD] 71.5', '[BOLD] 77.1'], ['+ Model-III + ELMo', '68.5', '71.2', '76.5'], ['RMR + ELMo', '71.4', '73.7', '77.0'], ['+ Model-I', '71.8', '74.4', '77.3'], ['+ Model-II', '71.8', '74.2', '78.1'], ['+ Model-II + ELMo', '72.0', '74.3', '78.2'], ['+ Model-III', '[BOLD] 72.3', '[BOLD] 74.8', '[BOLD] 78.6'], ['+ Model-III + ELMo', '71.8', '74.3', '78.3']]
The combination of the base reader with any answer verifier always yields considerable performance gains, and combining the reader with Model-III obtains the best result. We find that the improvement in no-answer accuracy is significant: this metric rises from 73.1 to 77.1 after adding Model-III to RMR, an increase of 4 absolute points. A similar observation holds when ELMo embeddings are used, demonstrating that the gains are consistent and stable.
Assigning Medical Codes at the Encounter Level by Paying Attention to Documents
1911.06848
Table 1: Document-level F1-score calculated by comparing document attention from ELDAN against human coders on 20 CPT codes. #enc is the number of encounters that contain the code. #doc is the number of documents within those encounters. #source is the number of documents labeled by human coders as the source documents for the code. Attention (from ELDAN) and Chance both report document-level F1-scores, and Diff is the difference between them.
['CPT Codes', '#enc', '#doc', '#source', 'Attention', 'Chance', 'Diff']
[['43239', '8', '19', '9', '88.89', '59.22', '29.67'], ['45380', '5', '11', '5', '90.91', '56.47', '34.44'], ['45385', '6', '13', '8', '85.71', '67.52', '18.20'], ['66984', '7', '13', '7', '100.00', '68.65', '31.35'], ['45378', '10', '20', '11', '90.91', '67.44', '23.47'], ['12001', '1', '3', '1', '100.00', '45.63', '54.37'], ['12011', '3', '8', '3', '57.14', '54.30', '2.85'], ['29125', '2', '9', '4', '72.73', '50.91', '21.81'], ['10060', '4', '9', '6', '100.00', '71.65', '28.35'], ['69436', '7', '18', '8', '87.50', '60.54', '26.96']]
We compare ELDAN's document attention with the human coders' source-document labels for the 20 most frequent encounter-level codes, with surprisingly strong results: a 100% F1-score on 7 out of the 19 available codes. However, even chance performance could be good if the number of possible documents to assign credit to is very small. Therefore we compare to the chance baseline. ELDAN is consistently better, usually by a large margin. These results support the conclusion that ELDAN’s document attention is effective in identifying signal from “source” documents for the targeted code — crucially, without training on document-level annotations.
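Because the Chance column plays a central role in this argument, here is an illustrative Monte-Carlo estimate of it: randomly mark the same number of documents as sources and compute document-level F1 against the human labels. The exact chance computation in the paper may differ; this is only meant to show why a small document pool inflates chance F1.

```python
import random

def chance_f1(n_docs, n_true_sources, n_predicted, trials=10000, seed=0):
    rng = random.Random(seed)
    true = set(range(n_true_sources))          # pretend the first documents are the true sources
    f1s = []
    for _ in range(trials):
        pred = set(rng.sample(range(n_docs), n_predicted))
        tp = len(pred & true)
        if tp == 0:
            f1s.append(0.0)
            continue
        precision, recall = tp / len(pred), tp / len(true)
        f1s.append(2 * precision * recall / (precision + recall))
    return 100.0 * sum(f1s) / trials

# e.g. CPT 51702 has only 2 documents, both of which are sources: chance F1 is 100.
print(chance_f1(n_docs=2, n_true_sources=2, n_predicted=2))
```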
Assigning Medical Codes at the Encounter Level by Paying Attention to Documents
1911.06848
Figure 1: Left: Encounter-level F1-scores of the 20 most frequent CPT codes. #Docs is the average number of documents found in the encounters that contain the code; prevalence is the percentage of all encounters that contain that code. Right: Macro average of encounter-level F1 scores for every 10 codes (from most to least frequent). ΔELDAN = ELDAN+transfer − ELDAN.
['CPT Codes', '#Docs', 'Prevalence', 'ELDN', 'ELDAN', 'ELDAN +transfer']
[['43239', '3.13', '4.15%', '84.59', '[BOLD] 86.21', '84.93'], ['45380', '2.78', '3.56%', '72.68', '[BOLD] 75.14', '74.02'], ['45385', '2.75', '2.44%', '71.33', '[BOLD] 72.33', '70.31'], ['66984', '2.51', '1.90%', '92.15', '92.87', '[BOLD] 93.00'], ['45378', '2.40', '1.89%', '62.67', '65.45', '[BOLD] 67.57'], ['12001', '2.20', '1.60%', '[BOLD] 46.96', '44.74', '43.62'], ['12011', '2.35', '1.19%', '41.03', '42.12', '[BOLD] 43.30'], ['29125', '2.85', '1.05%', '52.32', '[BOLD] 56.50', '54.10'], ['10060', '2.09', '1.00%', '45.15', '48.73', '[BOLD] 52.25'], ['69436', '3.01', '0.96%', '83.30', '85.18', '[BOLD] 88.32'], ['12002', '2.60', '0.92%', '25.53', '28.36', '[BOLD] 28.43'], ['59025', '1.86', '0.92%', '[BOLD] 73.82', '69.00', '67.73'], ['11042', '3.20', '0.88%', '64.38', '63.45', '[BOLD] 66.86'], ['47562', '4.36', '0.80%', '70.74', '76.25', '[BOLD] 77.67'], ['62323', '2.10', '0.79%', '61.17', '57.07', '[BOLD] 64.25'], ['Average', '2.62', '[EMPTY]', '58.02', '60.40', '[BOLD] 61.26']]
To show the trend across the full range of codes, we macro-average every 10 codes from most frequent to least frequent. ELDAN, with or without naïve transfer learning, consistently outperforms ELDN, even for extremely rare codes (<0.1%). As codes become rarer, ELDAN+transfer tends to outperform ELDAN more substantially; see the increasing trend for ΔELDAN. This improvement can be explained by viewing the embedding layer as a vector space model that maps sparse features extracted from the document (such as medical concepts, UMLS CUIs) to a dense representation, which can be effective for bootstrapping the training of rare codes. Our 80-10-10 dataset split results in 371,092 encounters for training, 46,387 encounters for development/tuning, and 46,387 encounters for testing. Note that no document-level annotations are available. Since we are in an imbalanced setting (some medical codes can be extremely rare, see Fig. 1), we resample only the training data; no resampling is done for the development set and test set. These hyperparameters were selected based on our results on the development set.
Assigning Medical Codes at the Encounter Level by Paying Attention to Documents
1911.06848
Figure 1: Left: Encounter-level F1-scores of the 20 most frequent CPT codes. #Docs is the average number of documents found in the encounters that contain the code; prevalence is the percentage of all encounters that contain that code. Right: Macro average of encounter-level F1 scores for every 10 codes (from most to least frequent). ΔELDAN = ELDAN+transfer − ELDAN.
['Average', 'Prevalence', 'ELDN', 'ELDAN', 'ELDAN +transfer', 'ΔELDAN']
[['1st to 10th', '1.97%', '65.22', '66.93', '[BOLD] 67.14', '0.22'], ['11st to 20th', '0.78%', '50.82', '53.87', '[BOLD] 55.38', '1.50'], ['21st to 30th', '0.51%', '55.93', '[BOLD] 63.07', '62.23', '-0.85'], ['31st to 40th', '0.40%', '44.93', '51.92', '[BOLD] 55.24', '3.32'], ['41st to 50th', '0.30%', '32.08', '38.61', '[BOLD] 39.35', '0.74'], ['51st to 60th', '0.26%', '33.83', '38.80', '[BOLD] 39.10', '0.30'], ['61st to 70th', '0.23%', '28.37', '35.05', '[BOLD] 36.62', '1.56'], ['71st to 80th', '0.21%', '25.66', '30.62', '[BOLD] 32.93', '2.31'], ['81st to 90th', '0.18%', '34.92', '42.03', '[BOLD] 43.26', '1.23'], ['91st to 100th', '0.16%', '24.54', '29.06', '[BOLD] 31.32', '2.25'], ['101st to 110th', '0.14%', '25.15', '33.17', '[BOLD] 34.57', '1.40'], ['111st to 120th', '0.12%', '24.87', '31.74', '[BOLD] 32.84', '1.09'], ['121st to 130th', '0.11%', '18.14', '24.10', '[BOLD] 28.09', '3.99'], ['131st to 140th', '0.10%', '20.39', '28.53', '[BOLD] 32.21', '3.68'], ['141st to 150th', '0.08%', '26.93', '33.13', '[BOLD] 40.94', '7.82']]
To show the trend across the full range of codes, we macro-average every 10 codes from most frequent to least frequent. ELDAN, with or without naïve transfer learning, consistently outperforms ELDN, even for extremely rare codes (<0.1%). As codes become rarer, ELDAN+transfer tends to outperform ELDAN more substantially; see the increasing trend for ΔELDAN. This improvement can be explained by viewing the embedding layer as a vector space model that maps sparse features extracted from the document (such as medical concepts, UMLS CUIs) to a dense representation, which can be effective for bootstrapping the training of rare codes. Our 80-10-10 dataset split results in 371,092 encounters for training, 46,387 encounters for development/tuning, and 46,387 encounters for testing. Note that no document-level annotations are available. Since we are in an imbalanced setting (some medical codes can be extremely rare, see Fig. 1), we resample only the training data; no resampling is done for the development set and test set. These hyperparameters were selected based on our results on the development set.
Assigning Medical Codes at the Encounter Level by Paying Attention to Documents
1911.06848
Table 1: Document-level F1-score calculated by comparing document attention from ELDAN against human coders on 20 CPT codes. #enc is the number of encounters that contain the code. #doc is the number of documents within those encounters. #source is the number of documents labeled by human coders as the source documents for the code. Attention (from ELDAN) and Chance both report document-level F1-scores, and Diff is the difference between them.
['CPT Codes', '#enc', '#doc', '#source', 'Attention', 'Chance', 'Diff']
[['12002', '4', '13', '6', '92.31', '56.02', '36.29'], ['59025', '0', '0', '0', '-', '-', '-'], ['11042', '5', '23', '16', '58.06', '64.89', '-6.82'], ['47562', '1', '5', '3', '100.00', '57.62', '42.38'], ['62323', '5', '11', '7', '87.50', '69.85', '17.65'], ['64483', '3', '8', '4', '100.00', '58.07', '41.93'], ['43235', '6', '18', '6', '83.33', '45.19', '38.15'], ['20610', '5', '9', '5', '100.00', '72.25', '27.75'], ['49083', '10', '27', '13', '85.71', '60.21', '25.50'], ['51702', '2', '2', '2', '100.00', '100.00', '0.00']]
We compare ELDAN's document attention with the human coders' source-document labels for the 20 most frequent encounter-level codes, with surprisingly strong results: a 100% F1-score on 7 out of the 19 available codes. However, even chance performance could be good if the number of possible documents to assign credit to is very small. Therefore we compare to the chance baseline. ELDAN is consistently better, usually by a large margin. These results support the conclusion that ELDAN’s document attention is effective in identifying signal from “source” documents for the targeted code — crucially, without training on document-level annotations.
Adversarial reconstruction for Multi-modal Machine Translation
1910.02766
Table 4: Ablation study of Q-WAAE model
['[BOLD] Test sets', '[BOLD] COCO-ambiguous [BOLD] BLEU', '[BOLD] COCO-ambiguous [BOLD] METEOR']
[['Baseline', '28.50', '48.80'], ['Baseline + [ITALIC] G + no [ITALIC] v', '29.43', '49.60'], ['Baseline + [ITALIC] G', '29.91', '49.24'], ['[ITALIC] Q-WAAE + no [ITALIC] v', '30.57', '50.15'], ['[ITALIC] Q-WAAE', '31.41', '50.95']]
To understand the success of Q-WAAE on the ambiguous COCO data-set, we perform an ablation study of the model. We first discard the adversarial discriminator so that we only train the reconstruction module with the MSE loss (+ G). We also discard the use of the features v in the translation model for both the ablated model and Q-WAAE (no v).
Adversarial reconstruction for Multi-modal Machine Translation
1910.02766
Table 2: Q-WAAE : Impact on the METEOR metric of the reconstruction and adversarial loss coefficient on the ambiguous COCO data-set
['[EMPTY]', '[EMPTY]', '[ITALIC] λr 0.2', '[ITALIC] λr 0.5', '[ITALIC] λr 0.8']
[['[origin=c]90 [ITALIC] λa', '0.2', '[BOLD] 50.95', '50.08', '49.33'], ['[origin=c]90 [ITALIC] λa', '0.5', '49.79', '49.62', '49.16'], ['[origin=c]90 [ITALIC] λa', '0.8', '49.70', '49.16', '48.02']]
The results show that if the auxiliary loss (adversarial and/or reconstruction) is made too important compared to the translation loss, the translation quality is impaired.
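The trade-off in Table 2 comes from how the auxiliary terms are weighted against the translation loss. A minimal sketch of the weighting, with placeholder loss values; only the coefficients follow the table, where λr = λa = 0.2 gives the best METEOR.

```python
def qwaae_loss(translation_loss, reconstruction_loss, adversarial_loss,
               lambda_r=0.2, lambda_a=0.2):
    # Larger lambda_r / lambda_a emphasize the auxiliary terms at the expense
    # of the translation objective, which Table 2 shows hurts METEOR.
    return translation_loss + lambda_r * reconstruction_loss + lambda_a * adversarial_loss

print(qwaae_loss(translation_loss=3.2, reconstruction_loss=0.7, adversarial_loss=0.5))
```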
Adversarial reconstruction for Multi-modal Machine Translation
1910.02766
Table 3: G-WGAN: Impact of the noise concatenated to the hidden state of size 512
['[EMPTY]', '[EMPTY]', '| [ITALIC] z| 64', '| [ITALIC] z| 128', '| [ITALIC] z| 256', '| [ITALIC] z| 512']
[['[origin=c]90', 'METEOR', '50.35', '[BOLD] 50.43', '49.71', '49.48']]
The G-WGAN also shows improvements over the baseline and obtains results similar to Q-WAAE. Nonetheless, a small discrepancy is noticeable on ambiguous COCO. We believe that the main advantage of the Q-WAAE loss is the presence of a direct mean-squared-error reconstruction loss alongside the adversarial loss.
Query-Focused Opinion Summarization for User-Generated Content
1606.05702
Table 6: Effect of different dispersion functions, content coverage, and dissimilarity metrics on our system. [Left] JSD values for different combinations on Yahoo! data, using LDA with 100 topics. All systems are significantly different from each other at significance level α=0.05. Systems using summation of distances for dispersion function (hsum) uniformly outperform the ones using minimum distance (hmin). [Right] ROUGE scores of different choices for TAC 2008 data. All systems use LDA with 40 topics. The parameters of our systems are adopted from the ones tuned on Yahoo! Answers.
['[BOLD] TAC 2008', '[BOLD] TAC 2008 Dispersion [ITALIC] sum', '[BOLD] TAC 2008 Dispersion [ITALIC] sum', '[BOLD] TAC 2008 Dispersion [ITALIC] min', '[BOLD] TAC 2008 Dispersion [ITALIC] min']
[['Dissimi', 'Cont [ITALIC] tfidf', 'Cont [ITALIC] sem', 'Cont [ITALIC] tfidf', 'Cont [ITALIC] sem'], ['[ITALIC] Semantic', '0.2216', '0.2169', '0.2772', '0.2579'], ['[ITALIC] Topical', '0.2128', '0.2090', '[BOLD] 0.3234', '0.3056'], ['[ITALIC] Lexical', '0.2167', '0.2129', '0.3117', '0.3160']]
Given that the text similarity metrics and dispersion functions play important roles in the framework, we further study the effectiveness of different content coverage functions (Cosine using TFIDF vs. Semantic), dispersion functions (hsum vs. hmin), and dissimilarity metrics used in the dispersion functions (Semantic vs. Topical vs. Lexical). Results on Yahoo! Answers show that systems using hsum uniformly outperform those using hmin (Table 6, Left). Meanwhile, Cosine using TFIDF is better at measuring content coverage than the WordNet-based semantic measurement, and this may be due to the limited coverage of WordNet on verbs. This is also true for the dissimilarity metrics. This indicates that the optimal dispersion function varies by genre. Topical-based dissimilarity also marginally outperforms the other two metrics on the blog data.
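The two dispersion functions compared here differ only in how pairwise dissimilarities within the selected summary are aggregated. A small sketch, with a toy Jaccard-based lexical dissimilarity standing in for the paper's metrics:

```python
from itertools import combinations

def dispersion_sum(sentences, dissim):
    """h_sum: total pairwise dissimilarity of the selected sentences."""
    return sum(dissim(a, b) for a, b in combinations(sentences, 2))

def dispersion_min(sentences, dissim):
    """h_min: smallest pairwise dissimilarity of the selected sentences."""
    return min(dissim(a, b) for a, b in combinations(sentences, 2))

def lexical_dissim(a, b):
    wa, wb = set(a.split()), set(b.split())
    return 1.0 - len(wa & wb) / len(wa | wb)

summary = ["the battery lasts long", "the screen is too dim", "battery life is great"]
print(dispersion_sum(summary, lexical_dissim), dispersion_min(summary, lexical_dissim))
```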
Query-Focused Opinion Summarization for User-Generated Content
1606.05702
Table 3: [Left] Summaries evaluated by Jensen-Shannon divergence (JSD) on Yahoo Answer for summaries of 100 words and 200 words. The average length of the best answer is 102.70. [Right] Value addition of each component in the objective function. The JSD on each line is statistically significantly lower than the JSD on the previous (α=0.05).
['[EMPTY]', '[BOLD] Length 100', '[BOLD] Length 200']
[['Best answer', '0.3858', '-'], ['Lin and Bilmes (2011)', '0.3398', '0.2008'], ['Lin and Bilmes (2011) + q', '0.3379', '0.1988'], ['Dasgupta et al. (2013)', '0.3316', '0.1939'], ['Our system', '[BOLD] 0.3017', '[BOLD] 0.1758']]
The topic number is tuned on the development set, and we find that varying the number of topics does not greatly affect performance. Meanwhile, both our system and Dasgupta et al. (2013) produce better JSD scores than the two variants of the Lin and Bilmes (2011) system, which suggests the effectiveness of the dispersion function.
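For reference, the JSD metric used throughout this evaluation compares the unigram distribution of the summary with that of the input; lower is better. A minimal NumPy sketch with illustrative tokenization and smoothing:

```python
import numpy as np
from collections import Counter

def jsd(summary_tokens, source_tokens, eps=1e-12):
    vocab = sorted(set(summary_tokens) | set(source_tokens))
    p_counts, q_counts = Counter(summary_tokens), Counter(source_tokens)
    p = np.array([p_counts[w] for w in vocab], dtype=float)
    q = np.array([q_counts[w] for w in vocab], dtype=float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

print(jsd("the phone is great".split(), "great phone but the battery is weak".split()))
```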
Query-Focused Opinion Summarization for User-Generated Content
1606.05702
Table 3: [Left] Summaries evaluated by Jensen-Shannon divergence (JSD) on Yahoo Answer for summaries of 100 words and 200 words. The average length of the best answer is 102.70. [Right] Value addition of each component in the objective function. The JSD on each line is statistically significantly lower than the JSD on the previous (α=0.05).
['[EMPTY]', '[BOLD] JSD100', '[BOLD] JSD200']
[['Rel(evance)', '0.3424', '0.2053'], ['Rel + Aut(hor)', '0.3375', '0.2040'], ['Rel + Aut + TM (Topic Models)', '0.3366', '0.2033'], ['Rel + Aut + TM + Pol(arity)', '0.3309', '0.1983'], ['Rel + Aut + TM + Pol + Cont(ent Coverage)', '0.3102', '0.1851'], ['Rel + Aut + TM + Pol + Cont + Disp(ersion)', '[BOLD] 0.3017', '[BOLD] 0.1758']]
The topic number is tuned on the development set, and we find that varying the number of topics does not greatly affect performance. Meanwhile, both our system and Dasgupta et al. (2013) produce better JSD scores than the two variants of the Lin and Bilmes (2011) system, which suggests the effectiveness of the dispersion function.
Query-Focused Opinion Summarization for User-Generated Content
1606.05702
Table 6: Effect of different dispersion functions, content coverage, and dissimilarity metrics on our system. [Left] JSD values for different combinations on Yahoo! data, using LDA with 100 topics. All systems are significantly different from each other at significance level α=0.05. Systems using summation of distances for dispersion function (hsum) uniformly outperform the ones using minimum distance (hmin). [Right] ROUGE scores of different choices for TAC 2008 data. All systems use LDA with 40 topics. The parameters of our systems are adopted from the ones tuned on Yahoo! Answers.
['[BOLD] Yahoo! Answer', '[BOLD] Yahoo! Answer Dispersion [ITALIC] sum', '[BOLD] Yahoo! Answer Dispersion [ITALIC] sum', '[BOLD] Yahoo! Answer Dispersion [ITALIC] min', '[BOLD] Yahoo! Answer Dispersion [ITALIC] min']
[['Dissimi', 'Cont [ITALIC] tfidf', 'Cont [ITALIC] sem', 'Cont [ITALIC] tfidf', 'Cont [ITALIC] sem'], ['[ITALIC] Semantic', '0.3143', '0.324 3', '0.3129', '0.3232'], ['[ITALIC] Topical', '0.3101', '0.3202', '0.3106', '0.3209'], ['[ITALIC] Lexical', '[BOLD] 0.3017', '0.3147', '0.3071', '0.3172']]
Given that the text similarity metrics and dispersion functions play important roles in the framework, we further study the effectiveness of different content coverage functions (Cosine using TFIDF vs. Semantic), dispersion functions (hsum vs. hmin), and dissimilarity metrics used in the dispersion functions (Semantic vs. Topical vs. Lexical). Results on Yahoo! Answers show that systems using hsum uniformly outperform those using hmin (Table 6, Left). Meanwhile, Cosine using TFIDF is better at measuring content coverage than the WordNet-based semantic measurement, and this may be due to the limited coverage of WordNet on verbs. This is also true for the dissimilarity metrics. This indicates that the optimal dispersion function varies by genre. Topical-based dissimilarity also marginally outperforms the other two metrics on the blog data.
Multilingual Neural Machine Translation with Knowledge Distillation
1902.10461
Table 6: BLEU scores of selective distillation (our method) and distillation all the time during the training process on the Ted talk dataset.
['[EMPTY]', 'Bg', 'Et', 'Fi', 'Fr', 'Gl', 'Hi', 'Hy', 'Ka']
[['distillation all the time', '28.07', '12.64', '15.13', '33.69', '30.28', '18.86', '19.88', '14.04'], ['selective distillation', '29.18', '15.63', '17.23', '34.32', '31.90', '21.00', '21.17', '18.27'], ['Δ', '+1.11', '+2.99', '+2.10', '+0.63', '+1.62', '+2.14', '+1.29', '+4.23'], ['[EMPTY]', 'Ku', 'Mk', 'My', 'Sl', 'Zh', 'Pl', 'Sk', 'Sv'], ['distillation all the time', '8.50', '32.10', '14.02', '22.10', '17.22', '25.05', '30.45', '37.88'], ['selective distillation', '13.38', '32.65', '15.17', '23.68', '19.39', '24.30', '29.91', '36.92'], ['Δ', '+4.88', '+0.55', '+1.15', '+1.58', '+2.17', '-0.75', '-0.54', '-0.96']]
Selective Distillation: We list the 16 languages on which the two methods (selective distillation and distillation all the time) differ by more than 0.5 BLEU points. It can be seen that selective distillation performs better on 13 out of the 16 languages, often with large BLEU improvements, which demonstrates the effectiveness of selective distillation.
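A rough sketch of the selection idea (the exact criterion used in the paper is not spelled out in this excerpt, so the teacher/student comparison below is only an assumption): keep the distillation term for a language while its individual teacher still beats the multilingual student on held-out data, otherwise fall back to plain NLL training.

```python
def training_loss(nll_loss, distill_loss, teacher_dev_bleu, student_dev_bleu, alpha=0.5):
    if teacher_dev_bleu > student_dev_bleu:   # teacher still helpful: keep distilling
        return (1 - alpha) * nll_loss + alpha * distill_loss
    return nll_loss                           # teacher no longer helpful: drop distillation

print(training_loss(nll_loss=2.4, distill_loss=1.9, teacher_dev_bleu=35.2, student_dev_bleu=33.8))
```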
Multilingual Neural Machine Translation with Knowledge Distillation
1902.10461
Table 14: BLEU scores of the individual and multilingual models on the 44 languages→English on the Ted talk dataset.
['Language', 'Ar', 'Bg', 'Cs', 'Da', 'De', 'El', 'Es', 'Et', 'Fa']
[['[ITALIC] Individual', '31.07', '38.64', '26.42', '38.21', '34.63', '36.69', '41.20', '7.43', '26.67'], ['[ITALIC] Multilingual (Baseline)', '27.84', '27.76', '27.17', '40.41', '32.85', '36.04', '39.80', '14.86', '24.93'], ['[ITALIC] Multilingual (Our method)', '29.57', '29.18', '28.30', '42.23', '34.53', '37.49', '41.43', '15.63', '26.76'], ['Language', 'Fi', 'Frca', 'Fr', 'Gl', 'He', 'Hi', 'Hr', 'Hu', 'Hy'], ['[ITALIC] Individual', '10.78', '18.52', '39.62', '12.64', '36.81', '10.84', '34.14', '24.67', '12.30'], ['[ITALIC] Multilingual (Baseline)', '16.12', '33.08', '38.27', '30.32', '32.96', '19.93', '34.39', '22.76', '20.25'], ['[ITALIC] Multilingual (Our method)', '17.22', '34.32', '39.75', '31.9', '35.22', '21.00', '35.6', '24.56', '21.17'], ['Language', 'Id', 'It', 'Ja', 'Ka', 'Ko', 'Ku', 'Lt', 'Mk', 'My'], ['[ITALIC] Individual', '29.20', '38.06', '13.31', '7.06', '18.54', '5.63', '18.19', '21.93', '7.53'], ['[ITALIC] Multilingual (Baseline)', '29.08', '36.02', '12.33', '16.71', '16.71', '11.83', '20.96', '31.85', '13.85'], ['[ITALIC] Multilingual (Our method)', '30.56', '37.50', '13.28', '18.26', '18.14', '13.38', '22.65', '32.65', '15.16'], ['Language', 'Nb', 'Nl', 'Pl', 'Ptbr', 'Pt', 'Ro', 'Ru', 'Sk', 'Sl'], ['[ITALIC] Individual', '27.28', '35.85', '22.98', '44.28', '33.81', '34.07', '24.36', '25.67', '11.80'], ['[ITALIC] Multilingual (Baseline)', '39.88', '33.97', '23.50', '42.96', '40.59', '33.03', '24.02', '28.97', '22.52'], ['[ITALIC] Multilingual (Our method)', '41.35', '35.65', '24.30', '44.41', '42.57', '34.73', '25.01', '29.90', '23.67'], ['Language', 'Sq', 'Sr', 'Sv', 'Th', 'Tr', 'Uk', 'Vi', 'Zh', '[EMPTY]'], ['[ITALIC] Individual', '29.70', '32.13', '34.53', '20.95', '24.46', '25.76', '26.38', '12.56', '[EMPTY]'], ['[ITALIC] Multilingual (Baseline)', '33.05', '32.27', '35.92', '21.50', '21.79', '26.82', '25.76', '18.81', '[EMPTY]'], ['[ITALIC] Multilingual (Our method)', '34.73', '33.71', '36.92', '22.12', '23.67', '27.80', '26.53', '19.39', '[EMPTY]']]
It can be seen that while the multilingual baseline performs worse than the individual models, the multilingual model based on our method nearly matches or even outperforms the individual models. Note that the multilingual model handles 44 languages in total, which means our method can reduce the total model size to 1/44 of that of the individual models without loss of accuracy.
Graph based Neural Networks for Event Factuality Prediction using Syntactic and Semantic Structures
1907.03227
Table 1: Test set performance. * denotes the models trained on separate datasets while ** indicates those trained on multiple datasets. †specifies the models in Rudinger et al. (2018) that are significantly improved with BERT.
['[EMPTY]', 'FactBank MAE', 'FactBank [ITALIC] r', 'UW MAE', 'UW [ITALIC] r', 'Meantime MAE', 'Meantime [ITALIC] r', 'UDS-IH2 MAE', 'UDS-IH2 [ITALIC] r']
[['Lee et\xa0al. ( 2015 )*', '-', '-', '0.511', '0.708', '-', '-', '-', '-'], ['Stanovsky et\xa0al. ( 2017 )*', '0.590', '0.710', '[BOLD] 0.420', '0.660', '0.340', '0.470', '-', '-'], ['L-biLSTM(2)-S*†', '0.427', '0.826', '0.508', '0.719', '0.427', '0.335', '0.960', '0.768'], ['L-biLSTM(2)-MultiBal**†', '0.391', '0.821', '0.496', '0.724', '0.278', '0.613', '-', '-'], ['L-biLSTM(1)-MultiFoc**†', '0.314', '0.846', '0.502', '0.710', '0.305', '0.377', '-', '-'], ['L-biLSTM(2)-MultiSimp w/UDS-IH2**†', '0.377', '0.828', '0.508', '0.722', '0.367', '0.469', '0.965', '0.771'], ['H-biLSTM(1)-MultiSimp**†', '0.313', '0.857', '0.528', '0.704', '0.314', '0.545', '-', '-'], ['H-biLSTM(2)-MultiSimp w/UDS-IH2**†', '0.393', '0.820', '0.481', '0.749', '0.374', '0.495', '0.969', '0.760'], ['L-biLSTM(2)-S+BERT*', '0.381', '0.85', '0.475', '0.752', '0.389', '0.394', '0.895', '0.804'], ['L-biLSTM(2)-MultiSimp w/UDS-IH2+BERT**', '0.343', '0.855', '0.476', '0.749', '0.358', '0.499', '0.841', '0.841'], ['H-biLSTM(1)-MultiSimp+BERT**', '0.310', '0.821', '0.495', '0.771', '0.281', '0.639', '0.822', '0.812'], ['H-biLSTM(2)-MultiSimp w/UDS-IH2+BERT**', '0.330', '0.871', '0.460', '0.798', '0.339', '0.571', '0.835', '0.802'], ['Graph-based (Ours)*', '0.315', '0.890', '0.451', '0.828', '0.350', '0.452', '0.730', '0.905'], ['Ours with multiple datasets**', '[BOLD] 0.310', '[BOLD] 0.903', '0.438', '[BOLD] 0.830', '[BOLD] 0.204', '[BOLD] 0.702', '[BOLD] 0.726', '[BOLD] 0.909']]
This section evaluates the effectiveness of the proposed model for EFP on the benchmark datasets. We compare the proposed model with the best reported systems in the literature that use linguistic features (Lee et al.; Stanovsky et al.). Importantly, to achieve a fair comparison, we obtain the actual implementation of the current state-of-the-art EFP models from Rudinger et al. Following prior work, we use MAE (Mean Absolute Error) and r (Pearson correlation) as the performance measures.
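For reference, a minimal sketch (not the authors' code) of the two reported measures, MAE and Pearson's r; the toy scores below are hypothetical, and the value range follows the factuality scale used by these benchmarks.

```python
import numpy as np

def mae(gold, pred):
    # Mean Absolute Error between gold and predicted factuality scores.
    gold, pred = np.asarray(gold, dtype=float), np.asarray(pred, dtype=float)
    return float(np.mean(np.abs(gold - pred)))

def pearson_r(gold, pred):
    # Pearson correlation coefficient between gold and predicted scores.
    gold, pred = np.asarray(gold, dtype=float), np.asarray(pred, dtype=float)
    return float(np.corrcoef(gold, pred)[0, 1])

# Hypothetical factuality scores in [-3, 3].
gold = [3.0, -1.5, 2.25, 0.0]
pred = [2.5, -1.0, 2.0, 0.5]
print(mae(gold, pred), pearson_r(gold, pred))
```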
Question-Answering with Grammatically-Interpretable Representations
1705.08432
Table 3: Symbol 27
['[BOLD] Token', '[BOLD] Similarity']
[['annexed', '0.836'], ['Brisbane', '0.8359'], ['European', '0.8341'], ['Scotland', '0.8321'], ['Cyprus', '0.8275'], ['governments', '0.8266'], ['Commonwealth', '0.8261'], ['Britain', '0.8243'], ['flexibility', '0.8227'], ['territories', '0.8219'], ['Switzerland', '0.821'], ['countries', '0.8206'], ['freedom', '0.819'], ['Germans', '0.8178'], ['north', '0.8173']]
To interpret the lexical-semantic content of the TPR symbols s(t) learned by the TPRN network, s(t) = S a_S(t) ∈ R^10 is calculated for all 120,950 word tokens w(t) in the validation set. The cosine similarity is computed between a_S(t) and the embedding vector of each symbol, and the symbol with maximum similarity is assigned to the corresponding token. For each symbol, all tokens assigned to it are sorted by their similarity to it; tokens of the same type are removed, and the top tokens of this list are examined to assess by inspection the semantic coherence of the symbol assignments. The results provide significant support for our hypothesis that each symbol corresponds to a particular meaning, assigned to a cloud of semantically related word tokens. For example, symbol 27 and symbol 6 can be respectively interpreted as meaning ‘occupation’ and ‘geopolitical unit’. Symbol 11 is assigned to multiple forms of the verb to be, e.g., was (85.8% of occurrences in the validation set), is (93.2%), being (100%) and be (98%). Symbol 29 is selected by 10 of the 12 month names (along with other word types; more details in the supplementary materials). Other symbols with semantically coherent token sets are reported in the supplementary materials.
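A minimal sketch of this assignment analysis (array shapes are assumptions, not the authors' implementation): each token's a_S(t) vector is compared against every symbol embedding by cosine similarity, and the most similar symbol wins.

```python
import numpy as np

def assign_symbols(a_s, symbol_emb):
    """a_s: (n_tokens, d) symbol-attention vectors; symbol_emb: (n_symbols, d)."""
    a = a_s / np.linalg.norm(a_s, axis=1, keepdims=True)
    s = symbol_emb / np.linalg.norm(symbol_emb, axis=1, keepdims=True)
    sim = a @ s.T                       # cosine similarities, (n_tokens, n_symbols)
    best = sim.argmax(axis=1)           # symbol with maximum similarity per token
    best_sim = sim[np.arange(len(a)), best]
    return best, best_sim

# Tokens assigned to one symbol can then be sorted by best_sim (descending),
# duplicate word types dropped, and the top of the list inspected for coherence.
```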
Question-Answering with Grammatically-Interpretable Representations
1705.08432
Table 1: Performance of the proposed TPRN model compared to BiDAF proposed in [Seo et al.2016]
['Single Model', 'EM(dev)', 'F1(dev)', 'EM(test)', 'F1(test)']
[['TPRN', '63.8', '74.4', '66.6', '76.3'], ['BiDAF', '62.8', '73.5', '67.1', '76.8']]
We compared the performance of single models. In the TPRN model tested here for performance comparison, we set the numbers of symbols and roles to 600 and 100, respectively, and their embedding sizes to 15 and 10. Each experiment for the TPRN model took about 13 hours on a single Tesla P100 GPU. Overall, the proposed TPRN gives results comparable to those of the state-of-the-art BiDAF model. Moreover, as we elaborate in the following sections, our model offers considerable interpretability thanks to the structure built into TPRs.
Question-Answering with Grammatically-Interpretable Representations
1705.08432
Table 3: Symbol 27
['[BOLD] Token', '[BOLD] Similarity']
[['printmaker', '0.9587'], ['composer', '0.8992'], ['who', '0.8726'], ['mathematician', '0.8675'], ['guitarist', '0.8622'], ['musician', '0.8055'], ['Whose', '0.7774'], ['engineer', '0.7753'], ['chemist', '0.7485'], ['how', '0.7335'], ['strict', '0.7207']]
To interpret the lexical-semantic content of the TPR symbols s(t) learned by the TPRN network, s(t) = S a_S(t) ∈ R^10 is calculated for all 120,950 word tokens w(t) in the validation set. The cosine similarity is computed between a_S(t) and the embedding vector of each symbol, and the symbol with maximum similarity is assigned to the corresponding token. For each symbol, all tokens assigned to it are sorted by their similarity to it; tokens of the same type are removed, and the top tokens of this list are examined to assess by inspection the semantic coherence of the symbol assignments. The results provide significant support for our hypothesis that each symbol corresponds to a particular meaning, assigned to a cloud of semantically related word tokens. For example, symbol 27 and symbol 6 can be respectively interpreted as meaning ‘occupation’ and ‘geopolitical unit’. Symbol 11 is assigned to multiple forms of the verb to be, e.g., was (85.8% of occurrences in the validation set), is (93.2%), being (100%) and be (98%). Symbol 29 is selected by 10 of the 12 month names (along with other word types; more details in the supplementary materials). Other symbols with semantically coherent token sets are reported in the supplementary materials.
Question-Answering with Grammatically-Interpretable Representations
1705.08432
Table 3: Symbol 27
['[BOLD] Token', '[BOLD] Similarity']
[['phrase', '0.817'], ['wrong', '0.8146'], ['mean', '0.7972'], ['constitutes', '0.7771'], ['call', '0.7621'], ['happens', '0.752'], ['the', '0.7477'], ['God', '0.7425'], ['nickname', '0.7368'], ['spelled', '0.7162'], ['name', '0.712'], ['happened', '0.6889'], ['as', '0.6699'], ['defines', '0.647']]
To interpret the lexical-semantic content of the TPR symbols s(t) learned by the TPRN network, s(t) = S a_S(t) ∈ R^10 is calculated for all 120,950 word tokens w(t) in the validation set. The cosine similarity is computed between a_S(t) and the embedding vector of each symbol, and the symbol with maximum similarity is assigned to the corresponding token. For each symbol, all tokens assigned to it are sorted by their similarity to it; tokens of the same type are removed, and the top tokens of this list are examined to assess by inspection the semantic coherence of the symbol assignments. The results provide significant support for our hypothesis that each symbol corresponds to a particular meaning, assigned to a cloud of semantically related word tokens. For example, symbol 27 and symbol 6 can be respectively interpreted as meaning ‘occupation’ and ‘geopolitical unit’. Symbol 11 is assigned to multiple forms of the verb to be, e.g., was (85.8% of occurrences in the validation set), is (93.2%), being (100%) and be (98%). Symbol 29 is selected by 10 of the 12 month names (along with other word types; more details in the supplementary materials). Other symbols with semantically coherent token sets are reported in the supplementary materials.
Question-Answering with Grammatically-Interpretable Representations
1705.08432
Table 3: Symbol 27
['[BOLD] Token', '[BOLD] Similarity']
[['abolished', '0.8777'], ['west', '0.8734'], ['nations', '0.8613'], ['Newcastle', '0.8588'], ['south', '0.8573'], ['Melbourne', '0.8558'], ['Australia', '0.8544'], ['World', '0.8526'], ['Belgium', '0.849'], ['donors', '0.8476'], ['Asian', '0.8404'], ['Greece', '0.8402'], ['Europe', '0.8397'], ['Thailand', '0.8393'], ['Constituency', '0.8361']]
To interpret the lexical-semantic content of the TPR symbols s(t) learned by the TPRN network, s(t) = S a_S(t) ∈ R^10 is calculated for all 120,950 word tokens w(t) in the validation set. The cosine similarity is computed between a_S(t) and the embedding vector of each symbol, and the symbol with maximum similarity is assigned to the corresponding token. For each symbol, all tokens assigned to it are sorted by their similarity to it; tokens of the same type are removed, and the top tokens of this list are examined to assess by inspection the semantic coherence of the symbol assignments. The results provide significant support for our hypothesis that each symbol corresponds to a particular meaning, assigned to a cloud of semantically related word tokens. For example, symbol 27 and symbol 6 can be respectively interpreted as meaning ‘occupation’ and ‘geopolitical unit’. Symbol 11 is assigned to multiple forms of the verb to be, e.g., was (85.8% of occurrences in the validation set), is (93.2%), being (100%) and be (98%). Symbol 29 is selected by 10 of the 12 month names (along with other word types; more details in the supplementary materials). Other symbols with semantically coherent token sets are reported in the supplementary materials.
Where’s My Head? Definition, Dataset and Models for Numeric Fused-Heads Identification and Resolution
1905.10886
Table 6: NFH Resolution accuracies on the development and test sets.
['Model', 'Development', 'Test']
[['Base', '65.6', '60.8'], ['+ Elmo', '[BOLD] 77.2', '[BOLD] 74.0']]
The complete model trained on the entire training data achieves 65.6% accuracy on the development set and 60.8% accuracy on the test set. The model with ELMo embeddings (Peters et al.) further improves these results, reaching 77.2% on the development set and 74.0% on the test set.
Where’s My Head? Definition, Dataset and Models for Numeric Fused-Heads Identification and Resolution
1905.10886
Table 2: NFH Identification corpus summary. The train and dev splits are noisy and the test set are gold annotations.
['[EMPTY]', 'train', 'dev', 'test', 'all']
[['pos', '71,821', '7865', '206', '79,884'], ['neg', '93,785', '10,536', '294', '104,623'], ['all', '165,606', '18,401', '500', '184,507']]
We improve NFH identification using machine learning. We create a large but noisy dataset by considering all the numbers in the corpus and treating the NFHs identified by the rule-based approach as positive (79,678 examples) and all other numbers as negative (104,329 examples). We randomly split the dataset into train and development sets with a 90%/10% split.
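A hedged sketch of this data construction (rule_based_nfh is a hypothetical stand-in for the paper's rule-based detector): numbers flagged by the rules become positives, all other numbers negatives, followed by a random 90%/10% split.

```python
import random

def build_identification_data(numbers, rule_based_nfh, seed=0):
    # Label every number occurrence: 1 if the rule-based detector fires, else 0.
    data = [(num, 1 if rule_based_nfh(num) else 0) for num in numbers]
    # Random 90% / 10% split into train and development sets.
    random.Random(seed).shuffle(data)
    cut = int(0.9 * len(data))
    return data[:cut], data[cut:]   # train, dev
```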
Keyphrase Generation: A Text Summarization Struggle
1904.00110
Table 2: Full-match scores of predicted keyphrases by various methods
['[BOLD] Method', '[BOLD] Hulth (500) F1@5', '[BOLD] Hulth (500) F1@7', '[BOLD] Krapivin (400) F1@5', '[BOLD] Krapivin (400) F1@7', '[BOLD] Meng (20K) F1@5', '[BOLD] Meng (20K) F1@7', '[BOLD] OAGK (100K) F1@5', '[BOLD] OAGK (100K) F1@7']
[['Yake!', '19.35', '21.47', '17.98', '17.4', '17.11', '15.19', '15.24', '14.57'], ['TopicRank', '16.5', '20.44', '6.93', '6.92', '11.93', '11.72', '11.9', '12.08'], ['Maui', '20.11', '20.56', '23.17', '23.04', '22.3', '19.63', '19.58', '18.42'], ['CopyRnn', '[BOLD] 29.2', '[BOLD] 33.6', '[BOLD] 30.2', '[BOLD] 25.2', '[BOLD] 32.8', '[BOLD] 25.5', '[BOLD] 33.06', '[BOLD] 31.92'], ['Merge', '6.85', '6.86', '4.92', '4.93', '8.75', '8.76', '11.12', '13.39'], ['Inject', '6.09', '6.08', '4.1', '4.11', '8.09', '8.09', '9.61', '11.22'], ['Abs', '14.75', '14.82', '10.24', '10.29', '12.17', '12.09', '14.54', '14.57'], ['PointCov', '22.19', '21.55', '19.87', '20.03', '20.45', '20.89', '22.72', '21.49']]
Among the unsupervised models, Yake! is consistently better than TopicRank. The two supervised models perform even better, with CopyRnn clearly superior to Maui.
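As a reference point, a small sketch of the full-match F1@k score used in this comparison (not the paper's evaluation script; real evaluations typically also apply stemming, which is omitted here, and the example phrases are made up).

```python
def f1_at_k(predicted, gold, k):
    # Normalise surface forms, take the top-k predictions, and compare to the gold set.
    norm = lambda s: " ".join(s.lower().split())
    pred = [norm(p) for p in predicted[:k]]
    gold_set = {norm(g) for g in gold}
    tp = sum(p in gold_set for p in pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1_at_k(["neural networks", "keyphrase generation", "bleu"],
              ["keyphrase generation", "text summarization"], k=5))
```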
Keyphrase Generation: A Text Summarization Struggle
1904.00110
Table 3: Rouge scores of predicted keyphrases by various methods
['[BOLD] Method', '[BOLD] Hulth (500) R1 F1', '[BOLD] Hulth (500) RL F1', '[BOLD] Krapivin (400) R1 F1', '[BOLD] Krapivin (400) RL F1', '[BOLD] Meng (20K) R1 F1', '[BOLD] Meng (20K) RL F1', '[BOLD] OAGK (100K) R1 F1', '[BOLD] OAGK (100K) RL F1']
[['Yake!', '37.48', '24.83', '26.19', '18.57', '26.47', '17.36', '20.38', '14.54'], ['TopicRank', '32.0', '20.36', '14.08', '11.47', '21.68', '15.94', '17.46', '13.28'], ['Maui', '36.88', '27.16', '28.29', '23.74', '34.33', '28.12', '32.16', '25.09'], ['CopyRnn', '[BOLD] 44.58', '[BOLD] 35.24', '[BOLD] 39.73', '[BOLD] 30.29', '[BOLD] 42.93', '34.62', '[BOLD] 43.54', '[BOLD] 36.09'], ['Merge', '15.19', '9.45', '9.66', '7.14', '16.53', '12.31', '17.3', '14.43'], ['Inject', '14.15', '8.81', '9.58', '6.79', '15.6', '11.21', '14.3', '11.08'], ['Abs', '27.54', '19.48', '25.59', '18.2', '28.31', '22.16', '29.05', '25.77'], ['PointCov', '37.16', '33.69', '35.81', '29.52', '38.47', '[BOLD] 35.06', '38.66', '34.04']]
Abs works slightly better, reaching scores from 10.24 to 14.75%. PointCov is the best of the text summarizers, producing keyphrase predictions that are usually clean and concise with few repetitions, probably thanks to the coverage mechanism. A considerable gap nevertheless remains between PointCov and CopyRnn: CopyRnn is still the best, although PointCov comes closest. Abs scores are also comparable to those of Maui and Yake!. TopicRank, Merge and Inject are again the worst.
“Did I Say Something Wrong?” A Word-Level Analysis of Wikipedia Articles for Deletion Discussions
1603.08048
Table 2: This table shows the average performance of the classifiers with the various timeframes. The values are the arithmetic mean of the classifiers’ results. These are an SVM, a NB classifier and an LM classifier. A bold font highlights the best result considering this metric. The plus and minus symbols indicate whether the performance metric was calculated for disruptive (+) or constructive contributions (−).
['[BOLD] time', '[BOLD] recall+', '[BOLD] recall−', '[BOLD] precision+', '[BOLD] precision−', '[BOLD] F1+', '[BOLD] F1−', '[BOLD] accuracy', '[BOLD] AUC']
[['13 hours', '[BOLD] 49.81', '75.35', '66.68', '60.26', '56.8', '66.87', '62.58', '0.551'], ['1 day', '49.29', '[BOLD] 76.63', '[BOLD] 67.83', '[BOLD] 60.27', '[BOLD] 56.93', '[BOLD] 67.42', '[BOLD] 62.96', '[BOLD] 0.562'], ['1.5 days', '47.45', '76.03', '66.46', '59.22', '55.18', '66.52', '61.74', '0.542'], ['2 days', '48.14', '75.0', '65.85', '59.24', '55.38', '66.11', '61.57', '0.542'], ['2.5 days', '46.57', '75.77', '65.88', '58.72', '54.35', '66.09', '61.17', '0.536'], ['3 days', '46.49', '74.04', '64.33', '58.12', '53.71', '65.03', '60.26', '0.529'], ['4 days', '45.35', '74.03', '63.72', '57.59', '52.76', '64.7', '59.69', '0.521'], ['5 days', '45.13', '74.51', '64.07', '57.64', '52.72', '64.92', '59.82', '0.522'], ['6 days', '42.95', '74.6', '63.02', '56.7', '50.85', '64.35', '58.77', '0.512']]
The values for almost all metrics peak at the 1-day timeframe and worsen the longer the timeframe becomes. The positive recall value is the only one that was not highest with the 1-day timeframe: it was 49.81% for 13 hours and 49.29% for 1 day. Nevertheless, this is negligible considering that the 1-day timeframe resulted in the greatest positive F1 score and accuracy. Thus, subsequent tests were performed using 1 day as the timeframe length.
“Did I Say Something Wrong?” A Word-Level Analysis of Wikipedia Articles for Deletion Discussions
1603.08048
Table 1: The table shows how commonly a term appears in disruptive or constructive posts. A bold font indicates that the term appears more frequently in that class.
['[BOLD] term', '[BOLD] share of words from disruptive posts (‰)', '[BOLD] share of words from constructive posts (‰)']
[['fucking', '[BOLD] 0.06', '0.00'], ['fuck', '[BOLD] 0.06', '0.01'], ['shit', '[BOLD] 0.09', '0.01'], ['i', '6.40', '[BOLD] 10.70'], ['you', '[BOLD] 10.64', '4.52'], ['me', '[BOLD] 2.43', '1.20'], ['my', '[BOLD] 3.00', '1.68'], ['your', '[BOLD] 3.05', '1.25'], ['myself', '[BOLD] 0.22', '0.13'], ['yourself', '[BOLD] 0.20', '0.10']]
All 3,467,402 posts have been considered for these values. Common swear words like “fucking”, “fuck” and “shit” are hardly used in disruptive posts: only 6.39‰ of disruptive posts contain any of the three swear words, and the few posts that do contain the terms multiple times. However, when they are used, they are quite expressive. For example, the term “shit” is 9.43 times likelier to appear in a disruptive than in a constructive post. In sum, with these swear words alone, a small recall but a high precision could potentially be achieved; hence, a collection of swear words would be unlikely to suffice for identifying many disruptive posts. The use of “I”- and “You”-messages can also be investigated without using a classifier. As initially stated, correctly detecting these messages is out of the scope of this thesis. Instead, terms indicative of “I”- and “You”-messages can be counted, such as “I” and “you”, but also “myself”, “yourself” and others. These terms are used far more frequently: 36.60% of all disruptive and 26.89% of all constructive posts contain at least one of the terms “I” and “you”. Additionally considering the terms “me”, “my”, “your”, “myself” and “yourself” increases this to 40.13% and 29.37% respectively, and 43.93% of all disruptive and 32.11% of all constructive posts contain at least two of the terms. Therefore, if these terms, and especially “I” and “you”, were strong indicators of constructiveness or disruptiveness, they would improve the recall noticeably.
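A hedged sketch of the word-share statistic reported in the table: each term's occurrences divided by the total number of words in that class, expressed in per mille (‰). Tokenisation here is a plain whitespace split, which is an assumption.

```python
from collections import Counter

def term_shares(posts, terms):
    # Count occurrences of the selected terms and the total number of words.
    counts, total = Counter(), 0
    for post in posts:
        tokens = post.lower().split()
        total += len(tokens)
        counts.update(t for t in tokens if t in terms)
    # Report shares in per mille of all words in this class of posts.
    return {t: 1000.0 * counts[t] / total for t in terms}

terms = {"i", "you", "shit"}
print(term_shares(["I told you", "you and you"], terms))
```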
“Did I Say Something Wrong?” A Word-Level Analysis of Wikipedia Articles for Deletion Discussions
1603.08048
Table 3: This table shows the performance of the SVM, NB and LM classifiers using the independent posts approach. Function words classifications are marked with “FW”. All others were full text classifications. The plus and minus symbols indicate whether the performance metric was calculated for disruptive (+) or constructive contributions (−).
['[BOLD] Classifier', '[BOLD] Recall+', '[BOLD] Recall−', '[BOLD] Precision+', '[BOLD] Precision−', '[BOLD] F1+', '[BOLD] F1−', '[BOLD] Accuracy', '[BOLD] AUC']
[['SVM', '59.91', '75.56', '71.03', '65.33', '65.00', '70.07', '67.73', '0.750'], ['SVM (FW)', '31.11', '85.58', '68.33', '55.40', '42.75', '67.26', '58.34', '0.620'], ['NB', '42.42', '80.78', '68.82', '58.38', '52.49', '67.78', '61.60', '0.390'], ['NB (FW)', '26.04', '85.43', '64.12', '53.60', '37.04', '65.87', '55.73', '0.600'], ['LM', '44.56', '72.35', '61.71', '56.61', '51.75', '63.52', '58.45', '0.518'], ['LM (FW)', '44.28', '67.34', '57.55', '54.72', '50.05', '60.38', '55.81', '0.542']]
Overall, the support vector machine performs best, the naïve Bayes classifier ranks second and the language model ranks last. The SVM is outperformed only in negative recall by the NB classifier; however, due to its lower negative precision, the NB classifier achieves a worse negative F1 score than the SVM. The SVM is significantly better in positive recall and AUC, whereas the NB and LM classifiers repeatedly perform similarly. Looking at the positive and negative F1 scores, all classifiers are better at predicting constructive posts than disruptive ones. We expected the SVM to outperform the NB classifier, yet we had also expected the language model to perform better than it did. All in all, the three classifiers hardly perform well enough to draw reliable conclusions from the results; F1, AUC and accuracy scores of about 80% and above would be needed.
“Did I Say Something Wrong?” A Word-Level Analysis of Wikipedia Articles for Deletion Discussions
1603.08048
Table 4: This table shows the performance of the SVM, NB and LM classifiers using the sliding window approach with linear sampling. RapidMiner did not return AUC values for the SVM and NB classifier, so they had to be left out. Function words classifications are marked with “FW”. All others were full text classifications. The plus and minus symbols indicate whether the performance metric was calculated for disruptive (+) or constructive contributions (−).
['[BOLD] Classifier', '[BOLD] Recall+', '[BOLD] Recall−', '[BOLD] Precision+', '[BOLD] Precision−', '[BOLD] F1+', '[BOLD] F1−', '[BOLD] Accuracy', '[BOLD] AUC']
[['SVM', '20.97', '65.39', '37.73', '45.28', '26.96', '53.51', '43.18', '—'], ['SVM (FW)', '29.63', '37.92', '32.31', '35.01', '30.91', '36.41', '33.77', '—'], ['NB', '64.14', '46.39', '54.47', '56.40', '58.91', '50.91', '55.26', '—'], ['NB (FW)', '70.11', '51.28', '59.00', '63.18', '64.08', '56.61', '60.70', '—'], ['LM', '13.21', '93.26', '66.23', '51.80', '22.03', '66.61', '53.24', '0.497'], ['LM (FW)', '4.69', '96.88', '60.05', '50.41', '8.70', '66.31', '50.78', '0.559']]
The number of constructive and disruptive posts was increased compared to the independent-posts classifications. This is a consequence of the sliding window algorithm, which regards every merged post containing a blocked post as disruptive. As this approach considers an editor's post history, the data is different, and the approach was therefore evaluated on newly sampled data. However, multiple runs of classifications using both the independent posts and the sliding window approach showed that the results always remained similar.
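A rough sketch, under explicit assumptions, of the sliding-window labelling described above: an editor's posts within a time window are merged into one document, which is labelled disruptive if any post inside it led to a block. The field names (author, time, blocked, text) and the exact windowing are hypothetical, not the thesis' implementation.

```python
from datetime import timedelta

def sliding_window_examples(posts, window=timedelta(days=1)):
    # Group posts per author, keeping them in chronological order.
    by_author = {}
    for p in sorted(posts, key=lambda p: (p["author"], p["time"])):
        by_author.setdefault(p["author"], []).append(p)

    examples = []
    for history in by_author.values():
        for i, anchor in enumerate(history):
            # Merge all of this author's posts that fall inside the window.
            window_posts = [q for q in history[: i + 1]
                            if anchor["time"] - q["time"] <= window]
            text = " ".join(q["text"] for q in window_posts)
            # A merged document containing any blocked post counts as disruptive.
            label = int(any(q["blocked"] for q in window_posts))
            examples.append((text, label))
    return examples
```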
“Did I Say Something Wrong?” A Word-Level Analysis of Wikipedia Articles for Deletion Discussions
1603.08048
Table 5: This table shows the performance of the SVM, NB and LM classifiers using the independent posts approach on the oldest discussion. Function words classifications are marked with “FW”. All others were full text classifications. The plus and minus symbols indicate whether the performance metric was calculated for disruptive (+) or constructive contributions (−).
['[BOLD] Classifier', '[BOLD] Recall+', '[BOLD] Recall−', '[BOLD] Precision+', '[BOLD] Precision−', '[BOLD] F1+', '[BOLD] F1−', '[BOLD] Accuracy', '[BOLD] AUC']
[['SVM', '68.26', '89.98', '87.20', '73.93', '76.58', '81.17', '79.12', '0.890'], ['SVM (FW)', '29.86', '90.68', '76.21', '56.39', '42.91', '69.54', '60.27', '0.660'], ['NB', '49.10', '90.44', '83.70', '63.99', '61.89', '74.95', '69.77', '0.470'], ['NB (FW)', '37.30', '82.43', '67.98', '56.80', '48.17', '67.26', '59.86', '0.630'], ['LM', '52.04', '81.08', '73.33', '62.83', '60.88', '70.80', '66.56', '0.541'], ['LM (FW)', '50.08', '68.04', '61.04', '57.68', '55.02', '62.43', '59.06', '0.582']]
Although we only present the data of a single run each, all tests have been executed multiple times with newly sampled data sets. This confirmed that the observed performances were not the result of a sampling bias. The results always remained comparable, with one exception: when operating on a data set chronologically sampled from the earliest discussions, we found that the classifiers yielded significantly improved performance. Chronologically sampling data from more recent discussions could not reproduce these results. However, due to the amount of existing data, we were not able to test all possible partitions.
“Did I Say Something Wrong?” A Word-Level Analysis of Wikipedia Articles for Deletion Discussions
1603.08048
Table 6: This table shows the performance of the SVM and NB classifier using the sliding window approach with stratified sampling. Function words classifications are marked with “FW”. All others were full text classifications. The plus and minus symbols indicate whether the performance metric was calculated for disruptive (+) or constructive contributions (−).
['[BOLD] Classifier', '[BOLD] Recall+', '[BOLD] Recall−', '[BOLD] Precision+', '[BOLD] Precision−', '[BOLD] F1+', '[BOLD] F1−', '[BOLD] Accuracy', '[BOLD] AUC']
[['SVM', '88.96', '84.60', '85.25', '88.46', '87.07', '86.49', '86.78', '0.950'], ['SVM (FW)', '78.73', '50.75', '61.52', '70.47', '69.07', '59.01', '64.74', '0.710'], ['NB', '69.57', '93.43', '91.37', '75.43', '78.99', '83.47', '81.50', '0.670'], ['NB (FW)', '42.27', '79.33', '67.16', '57.88', '51.88', '66.93', '60.80', '0.670']]
The accompanying bar chart shows the performance of the full-text SVM in contrast to the same classifier using only function words. The classifier used the sliding window approach with stratified sampling and therefore appeared to perform deceptively well. The values are given relative to a random classifier, with 0% expressing equal performance; the percentages refer to the overall performance, so that 50% equates to a perfect result.
Morphosyntactic Tagging with a Meta-BiLSTM Modelover Context Sensitive Token Encodings
1805.08237
Table 8: F1 score of char models and their performance on the dev. set for selected languages with different gather strategies, concatenate to gi (Equation 1). DQM shows results for our reimplementation of DBLP:conf/conll/DozatQM17 (cf. §3.2), where we feed in only the characters. The final column shows the number of xpos tags in the training set.
['[BOLD] dev. set. [BOLD] lang.', '[ITALIC] F [ITALIC] last [ITALIC] B1 [ITALIC] st', '[ITALIC] F1 [ITALIC] st [ITALIC] B [ITALIC] last', '[ITALIC] F [ITALIC] last [ITALIC] B [ITALIC] last', '[ITALIC] F1 [ITALIC] st [ITALIC] B1 [ITALIC] st', 'DQM', '|xpos|']
[['el', '[BOLD] 96.6', '[BOLD] 96.6', '96.2', '96.1', '95.9', '16'], ['grc', '[BOLD] 87.3', '87.1', '87.1', '86.8', '86.7', '3130'], ['la_ittb', '91.1', '91.5', '[BOLD] 91.9', '91.3', '91.0', '811'], ['ru', '95.6', '95.4', '95.6', '95.3', '[BOLD] 95.8', '49'], ['tr', '93.5', '93.3', '93.2', '92.5', '[BOLD] 93.9', '37']]
Thus, the proposed model concatenates all four of these and passes the result as input to a multilayer perceptron (MLP): g_i = concat(F_1st(w), F_last(w), B_1st(w), B_last(w)) (1), and m_i^chars = MLP(g_i). A tag can then be predicted with a linear classifier that takes as input the output of the MLP m_i^chars, applies a softmax function and chooses for each word the tag with highest probability. The table also contains a column with results for our reimplementation of DBLP:conf/conll/DozatQM17 (DQM). We removed, for all systems, the word model in order to assess each strategy in isolation. The performance is quite different per language; e.g., for Latin, the outputs of the forward and backward LSTMs of the last character scored highest.
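For illustration, a minimal sketch (assumed tensor shapes and layer sizes, not the released implementation) of the gather strategy in Eq. (1): the forward and backward character-LSTM states at the first and last characters are concatenated and passed through an MLP followed by a softmax classifier.

```python
import torch
import torch.nn as nn

class CharGatherTagger(nn.Module):
    def __init__(self, hidden, mlp_dim, n_tags):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(4 * hidden, mlp_dim), nn.ReLU())
        self.out = nn.Linear(mlp_dim, n_tags)

    def forward(self, fwd, bwd, lengths):
        # fwd, bwd: (batch, max_chars, hidden) outputs of the character Bi-LSTM;
        # lengths: (batch,) number of characters per word.
        idx = (lengths - 1).view(-1, 1, 1).expand(-1, 1, fwd.size(-1))
        f_first, f_last = fwd[:, 0, :], fwd.gather(1, idx).squeeze(1)
        b_first, b_last = bwd[:, 0, :], bwd.gather(1, idx).squeeze(1)
        g = torch.cat([f_first, f_last, b_first, b_last], dim=-1)   # Eq. (1)
        # Linear classifier over the MLP output, softmax over tags.
        return self.out(self.mlp(g)).softmax(dim=-1)
```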
Morphosyntactic Tagging with a Meta-BiLSTM Modelover Context Sensitive Token Encodings
1805.08237
Table 3: Results on WSJ test set.
['[BOLD] System', '[BOLD] Accuracy']
[['Sogaard:2011:SCN', '97.50'], ['DBLP:journals/corr/HuangXY15', '97.55'], ['choi:16a', '97.64'], ['andor2016globally.', '97.44'], ['DBLP:conf/conll/DozatQM17', '97.41'], ['ours', '[BOLD] 97.96']]
We also performed experiments on the Penn Treebank with the usual split into train, development and test sets. Our model significantly outperforms these systems, with an absolute difference of 0.32% in accuracy, which corresponds to an RRIE (relative reduction in error) of 12%.
Morphosyntactic Tagging with a Meta-BiLSTM Modelover Context Sensitive Token Encodings
1805.08237
Table 5: Comparison of optimization methods: Separate optimization of the word, character and meta model is more accurate on average than full back-propagation using a single loss function.The results are statistically significant with two-tailed paired t-test for xpos with p<0.001 and for morphology with p<0.0001.
['[BOLD] Optimization', 'Avg. F1 Score morphology', 'Avg. F1 Score xpos']
[['separate', '[BOLD] 94.57', '[BOLD] 94.85'], ['jointly', '94.15', '94.48']]
We first compare jointly training the three model components (Meta-BiLSTM, character model, word model) to training each separately. Separate optimization leads to better accuracy for 34 out of 40 treebanks on the morphological features task and for 30 out of 39 treebanks on xpos tagging. Separate optimization outperformed joint optimization by up to 2.1% absolute, while joint never outperformed separate by more than 0.5% absolute. We hypothesize that separately training the models forces each sub-model (word and character) to be strong enough to make high-accuracy predictions on its own, and in some sense serves as a regularizer in the same way that dropout does for individual neurons.
Morphosyntactic Tagging with a Meta-BiLSTM Modelover Context Sensitive Token Encodings
1805.08237
Table 6: F1 score for selected languages on sentence vs. word level character models for the prediction of morphology using late integration.
['[BOLD] dev. set', 'word char model', 'sentence char model']
[['el', '89.05', '93.41'], ['la_ittb', '93.22', '95.69'], ['ru', '88.94', '92.31'], ['tr', '87.78', '90.77']]
We compared this setup with a sentence-based character context. We selected a number of morphologically rich languages for these experiments. The accuracy of the word-based character model combined with a word-based model was significantly lower than that of a sentence-based character model. Comparing these results with those of the DQM reimplementation, we also conclude that early integration of a word-based character model performs much better than late integration via the Meta-BiLSTM.
Morphosyntactic Tagging with a Meta-BiLSTM Modelover Context Sensitive Token Encodings
1805.08237
Table 7: F1 score for the character, word and meta models. The standard deviation of 10 random restarts of each model is shown in the last three columns. The differences in means are all statistically significant at p<0.001 (paired t-test).
['[BOLD] dev. set [BOLD] lang.', 'num. exp.', 'mean char', 'mean word', 'mean meta', 'stdev char', 'stdev word', 'stdev meta']
[['el', '10', '96.43', '95.36', '[BOLD] 97.01', '0.13', '0.11', '0.09'], ['grc', '10', '88.28', '73.52', '[BOLD] 88.85', '0.21', '0.29', '0.22'], ['la_ittb', '10', '91.45', '87.98', '[BOLD] 91.94', '0.14', '0.30', '0.05'], ['ru', '10', '95.98', '93.50', '[BOLD] 96.61', '0.06', '0.17', '0.07'], ['tr', '10', '93.77', '90.48', '[BOLD] 94.67', '0.11', '0.33', '0.14']]
The results show that the combined model has significantly higher accuracy than either the character or the word model individually.
Don’t Forget the Long Tail! A Comprehensive Analysis of Morphological Generalization in Bilingual Lexicon Induction
1909.02855
Table 4: The results on the standard BLI task and BLI controlled for lexeme for the original Ruder et al. (2018)’s model (✗) and the same model trained with a morphological constraint (✓) (discussed in §4.6).
['[EMPTY]', 'Normal In vocab', 'Normal In vocab', 'Normal +OOVs', 'Normal +OOVs', 'Lexeme In vocab', 'Lexeme In vocab', 'Lexeme +OOVs', 'Lexeme +OOVs', 'Dictionary Sizes In vocab', 'Dictionary Sizes +OOVs']
[['Constraint', '✗', '✓', '✗', '✓', '✗', '✓', '✗', '✓', 'In vocab', '+OOVs'], ['Ukrainian–Russian', '[BOLD] 68.4', '61.1', '[BOLD] 63.7', '56.1', '[BOLD] 89.9', '89.1', '[BOLD] 88.6', '87.6', '786', '933'], ['Russian–Slovak', '[BOLD] 25.7', '21.1', '[BOLD] 20.9', '17.0', '[BOLD] 79.3', '76.8', '[BOLD] 76.0', '74.2', '1610', '2150'], ['Polish–Czech', '42.0', '[BOLD] 44.4', '34.8', '[BOLD] 36.7', '80.6', '[BOLD] 81.1', '75.3', '[BOLD] 75.9', '4043', '5332'], ['Russian–Polish', '39.8', '[BOLD] 41.2', '34.8', '[BOLD] 36.1', '80.8', '[BOLD] 82.6', '77.7', '[BOLD] 80.2', '9183', '11697'], ['Catalan–Portuguese', '62.8', '[BOLD] 64.2', '41.1', '[BOLD] 42.4', '83.1', '[BOLD] 84.3', '57.7', '[BOLD] 59.0', '5418', '10759'], ['French–Spanish', '47.8', '[BOLD] 50.2', '26.7', '[BOLD] 28.9', '78.0', '[BOLD] 81.4', '47.9', '[BOLD] 52.2', '9770', '21087'], ['Portuguese–Spanish', '60.2', '[BOLD] 61.1', '36.8', '[BOLD] 37.6', '84.7', '[BOLD] 85.4', '57.1', '[BOLD] 58.2', '9275', '22638'], ['Italian–Spanish', '42.7', '[BOLD] 43.8', '21.1', '[BOLD] 22.1', '76.4', '[BOLD] 77.6', '47.6', '[BOLD] 49.6', '11685', '30686'], ['Polish–Spanish', '[BOLD] 36.1', '32.1', '[BOLD] 28.0', '25.0', '[BOLD] 78.1', '77.7', '[BOLD] 68.6', '68.4', '8964', '12759'], ['Spanish–Polish', '28.1', '[BOLD] 30.9', '21.0', '[BOLD] 23.2', '81.2', '[BOLD] 82.0', '64.2', '[BOLD] 65.8', '4270', '6095']]
We report the accuracy on the in-vocabulary pairs as well as on all the pairs in the dictionary, including OOVs. As expected, compared to standard BLI this task is much easier for the models; the performance is generally high. For Slavic languages the numbers remain high even in the open-vocabulary setup, which suggests that the models can generalize morphologically. On the other hand, for Romance languages we observe a visible drop in performance. We hypothesize that this difference is due to the large quantities of verbs in Romance dictionaries; in both Slavic and Romance languages verbs have substantial paradigms, often of more than 60 forms, which makes identifying the correct form more difficult. In contrast, most words in our Slavic dictionaries are nouns and adjectives with much smaller paradigms. As expected, the BLI results on unrelated languages are generally, but not uniformly, worse than those on related language pairs. The accuracy for Spanish–Polish is particularly low, at 28% (for in-vocabulary pairs). We see large variation in performance across morphosyntactic categories and across more and less frequent lexemes, similar to that observed for related language pairs. In particular, we observe that 2;imp;pl;v, the category that is difficult for Polish–Czech BLI, is also among the most challenging for Polish–Spanish. However, one of the highest performing categories for Polish–Czech, 3;masc;pl;pst;v, yields much worse accuracy for Polish–Spanish. In our final experiment we demonstrate that improving morphological generalization has the potential to improve BLI results. We show that enforcing a simple, hard morphological constraint at training time can lead to performance improvements at test time, both on the standard BLI task and on the lexeme-controlled BLI. We adapt the self-learning models of Artetxe et al. so that at each iteration they can align two words only if they share the same morphosyntactic category. Note that this limits the training data to word forms present in UniMorph, as those are the only ones for which we have a gold tag. We take this as evidence that properly modelling morphology will have a role to play in BLI.
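A simplified sketch (not the adapted self-learning code) of the hard morphological constraint: a source word may only be aligned to its nearest target word among those carrying the same UniMorph tag.

```python
import numpy as np

def constrained_induction(src_vecs, tgt_vecs, src_tags, tgt_tags):
    """src_vecs: (n, d); tgt_vecs: (m, d); *_tags: morphosyntactic tags per word.
    Assumes embedding rows are already unit-normalised."""
    sim = src_vecs @ tgt_vecs.T
    pairs = []
    for i, tag in enumerate(src_tags):
        # Mask out every target word whose tag does not match the source tag.
        mask = np.array([t == tag for t in tgt_tags])
        if not mask.any():
            continue                                   # no same-tag candidate
        scores = np.where(mask, sim[i], -np.inf)
        pairs.append((i, int(scores.argmax())))
    return pairs
```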
Don’t Forget the Long Tail! A Comprehensive Analysis of Morphological Generalization in Bilingual Lexicon Induction
1909.02855
Table 2: The sizes of our morphologically complete dictionaries for Slavic and Romance language families. We present the sizes for 20 base dictionaries. We further split those to obtain 40 train, development and test dictionaries—one for each mapping direction to ensure the correct source language lemma separation.
['[BOLD] Slavic', 'Czech', 'Russian', 'Slovak', 'Ukrainian', '[BOLD] Romance', 'Spanish', 'Catalan', 'Portuguese', 'Italian']
[['Polish', '53,353', '128,638', '14,517', '12,361', 'French', '686,139', '381,825', '486,575', '705,800'], ['Czech', '-', '65,123', '10,817', '8,194', 'Spanish', '-', '343,780', '476,543', '619,174'], ['Russian', '[EMPTY]', '-', '128,638', '10,554', 'Catalan', '[EMPTY]', '-', '261,016', '351,609'], ['Slovak', '[EMPTY]', '[EMPTY]', '-', '3,434', 'Portuguese', '[EMPTY]', '[EMPTY]', '-', '468,945']]
For each language pair (L1, L2) we first generated lemma translation pairs by mapping all L1 lemmata to all L2 lemmata for each synset that appeared in both the L1 and L2 WordNets. We then filtered out the pairs that contained lemmata not present in UniMorph and generated inflected entries from the remaining pairs: one entry for each tag that appears in the UniMorph paradigms of both lemmata. The sizes of the dictionaries vary across language pairs, and so does the POS distribution. In particular, while Slavic dictionaries are dominated by nouns and adjectives, verbs constitute the majority of pairs in Romance dictionaries. In our split, the train dictionary contains 60% of all lemmata, while the development and test dictionaries each contain 20% of the lemmata.
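A hedged sketch of this dictionary construction; the input structures (synset-to-lemmata maps per language and lemma-to-{tag: form} UniMorph paradigms) are hypothetical representations, not the authors' data format.

```python
def build_dictionary(synsets_l1, synsets_l2, paradigm_l1, paradigm_l2):
    # synsets_l*: {synset_id: [lemmata]}; paradigm_l*: {lemma: {tag: inflected form}}.
    entries = set()
    for synset, lemmas1 in synsets_l1.items():
        for lemma1 in lemmas1:
            for lemma2 in synsets_l2.get(synset, []):
                # Keep only lemma pairs covered by UniMorph.
                if lemma1 not in paradigm_l1 or lemma2 not in paradigm_l2:
                    continue
                # One inflected entry per tag shared by both paradigms.
                shared = set(paradigm_l1[lemma1]) & set(paradigm_l2[lemma2])
                for tag in shared:
                    entries.add((paradigm_l1[lemma1][tag],
                                 paradigm_l2[lemma2][tag], tag))
    return entries
```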
DEAN: Learning Dual Emotion for Fake News Detection on Social Media
1903.01728
Table 3: Performance Comparison of Fake News Detection on Two Datasets.
['Dataset', 'Methods', 'Accuracy', 'Precision', 'Recall', 'F1-Score']
[['Weibo', 'DTC', '0.756', '0.754', '0.758', '0.756'], ['Weibo', 'ML-GRU', '0.799', '0.810', '0.790', '0.800'], ['Weibo', 'Basic-GRU', '0.835', '0.830', '0.850', '0.840'], ['Weibo', 'CSI', '0.835', '0.735', '[BOLD] 0.996', '0.858'], ['Weibo', 'HSA-BLSTM', '0.843', '[BOLD] 0.860', '0.810', '0.834'], ['Weibo', 'SAME', '0.776', '0.770', '0.780', '0.775'], ['[EMPTY]', '[BOLD] DEAN', '[BOLD] 0.872', '[BOLD] 0.860', '0.890', '[BOLD] 0.874'], ['Twitter', 'DTC', '0.613', '0.608', '0.570', '0.588'], ['Twitter', 'ML-GRU', '0.684', '0.663', '0.740', '0.692'], ['Twitter', 'Basic-GRU', '0.695', '0.674', '0.721', '0.697'], ['Twitter', 'CSI', '0.696', '0.706', '0.649', '0.671'], ['Twitter', 'HSA-BLSTM', '0.718', '[BOLD] 0.731', '0.663', '0.695'], ['Twitter', 'SAME', '0.667', '0.613', '0.849', '0.712'], ['[EMPTY]', '[BOLD] DEAN', '[BOLD] 0.751', '0.698', '[BOLD] 0.860', '[BOLD] 0.771']]
In particular, the DEAN model achieves an overall accuracy of 87.2% on the Weibo dataset and 75.1% on the Twitter dataset, outperforming all baseline models on both datasets. This strong performance demonstrates that incorporating emotion through embedding representations and gated fusion can effectively improve fake news detection. Moreover, compared to the emotion-based model SAME, DEAN performs considerably better in all aspects. On the one hand, this is because SAME is mainly designed to deal with heterogeneous multi-modal data; on the other hand, the results demonstrate that utilizing only user comments' emotion, as SAME does, is limited, and that DEAN is more effective because it exploits the dual emotion of publisher and users jointly. Our method shows its strength on fake news detection in these experiments, and its F1-score is also 3.4% higher than that of the second-best model. On the Twitter dataset, the improvement is more obvious, boosting accuracy from 61.3% for feature-based models to 75.1%; meanwhile, DEAN outperforms the second-best model by nearly 6% in F1-score. These observations demonstrate the importance of incorporating emotion information into models. Comparing across datasets, performance on Weibo is considerably better than on Twitter.
DEAN: Learning Dual Emotion for Fake News Detection on Social Media
1903.01728
Table 1: 19 Hand-crafted Emotion Features of News Content.
['Description', 'Amount']
[['Fraction of emotion positive and negative words', '2'], ['Fraction of negative adverbs', '1'], ['Fraction of adverbs of degree', '1'], ['Emotion score', '1'], ['Fraction of pronoun first, second and third', '3'], ['Fraction of punctuation ?, !, ?!, multi ? and multi !', '5'], ['Fraction of emoticons', '1'], ['Fraction of [ITALIC] anger, [ITALIC] doubt, [ITALIC] happiness, [ITALIC] sadness and [ITALIC] none emoticons', '5']]
Hand-crafted news emotion features: The overall emotion information of the news content is also important, as it helps measure how much signal from the emotion part should be absorbed for each word. For example, news content that expresses intense emotions can further strengthen the importance of the emotion part in each word of the content. The news emotion features of p_j are denoted s^e_j.
DEAN: Learning Dual Emotion for Fake News Detection on Social Media
1903.01728
Table 4: Analysis of the Dual Emotion Modeling.
['Module', 'Methods', 'Weibo Dataset Accuracy', 'Weibo Dataset F1-Score', 'Twitter Dataset Accuracy', 'Twitter Dataset F1-Score']
[['Content ( [ITALIC] Publisher Emotion)', 'WE', '0.790', '0.801', '0.678', '0.627'], ['Content ( [ITALIC] Publisher Emotion)', 'EE', '0.700', '0.719', '0.639', '0.615'], ['Content ( [ITALIC] Publisher Emotion)', 'WEE(c)', '0.813', '0.810', '0.690', '0.709'], ['Content ( [ITALIC] Publisher Emotion)', 'WEE(att)', '0.799', '0.793', '0.701', '0.675'], ['[EMPTY]', 'WEE(gn)', '[BOLD] 0.851', '[BOLD] 0.854', '[BOLD] 0.725', '[BOLD] 0.735'], ['Comment ( [ITALIC] Social Emotion)', 'WE', '0.667', '0.550', '0.667', '0.634'], ['Comment ( [ITALIC] Social Emotion)', 'EE', '0.619', '0.553', '0.655', '0.667'], ['Comment ( [ITALIC] Social Emotion)', 'WEE(c)', '0.669', '0.560', '0.689', '0.693'], ['[EMPTY]', 'WEE(gc)', '[BOLD] 0.671', '[BOLD] 0.563', '[BOLD] 0.713', '[BOLD] 0.705'], ['Content + Comment ( [ITALIC] Dual Emotion)', '(WE+WE)(c)', '0.835', '0.840', '0.695', '0.697'], ['Content + Comment ( [ITALIC] Dual Emotion)', '(WEE(gn)+WE)(c)', '0.863', '0.860', '0.736', '0.754'], ['Content + Comment ( [ITALIC] Dual Emotion)', '(WEE(gn)+WEE(gc))(c)', '0.866', '0.870', '0.746', '[BOLD] 0.776'], ['[EMPTY]', '(WEE(gn)+WEE(gc))(gm)', '[BOLD] 0.872', '[BOLD] 0.874', '[BOLD] 0.751', '0.771']]
From the results, we can make the following observations: 1) in the content module, the overall performance rises when emotion embeddings are used; on the Twitter dataset in particular, adding emotion information increases the F1-score by nearly 6%; 2) publisher emotion plays a more important role than social emotion on both datasets, which possibly results from the sparsity of the comment data, limiting the effectiveness of social emotion; 3) compared to using only semantic information, incorporating emotion from either one side or both sides improves the performance of the whole framework, which demonstrates the importance of dual emotion for fake news detection.
Toward Making the Most of Context in Neural Machine Translation
2002.07982
Table 5: Accuracy (\%) of discourse phenomena. * different data and system conditions, only for reference.
['[BOLD] Model', 'deixis', 'lex.c.', 'ell.infl.', 'ell.VP']
[['SentNmt', '50.0', '45.9', '52.2', '24.2'], ['Ours', '61.3', '46.1', '61.0', '35.6'], ['voita2018context\xa0[voita2018context]*', '81.6', '58.1', '72.2', '80.0']]
We also want to examine whether the proposed model actually learns to utilize document context to resolve discourse inconsistencies that context-agnostic models cannot handle. We use the contrastive test sets for the evaluation of discourse phenomena in English-Russian by Voita et al. [voita2018context]. There are four test sets in the suite, covering deixis, lexicon consistency, ellipsis (inflection), and ellipsis (verb phrase). Each test set contains groups of contrastive examples consisting of a positive translation with the correct discourse phenomenon and negative translations with incorrect phenomena. The goal is to determine whether a model is more likely to generate the correct translation than the incorrect variations. Our model is better at resolving discourse inconsistencies than the context-agnostic baseline. Voita et al. [voita2018context] use a context-agnostic baseline, trained on 4\times larger data, to generate first-pass drafts and then perform post-processing; this is not directly comparable, but it could easily be combined with our model to achieve better results.
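A minimal sketch of contrastive evaluation; the score function is an assumed stand-in for the model's log-probability of a translation given the source and context, and the example format is hypothetical.

```python
def contrastive_accuracy(examples, score):
    """examples: list of (source, context, positive, negatives);
    score(source, context, translation) -> model log-probability."""
    correct = 0
    for src, ctx, pos, negs in examples:
        pos_score = score(src, ctx, pos)
        # The example counts as correct only if the positive translation
        # scores strictly higher than every contrastive variant.
        if all(pos_score > score(src, ctx, neg) for neg in negs):
            correct += 1
    return correct / len(examples)
```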
Toward Making the Most of Context in Neural Machine Translation
2002.07982
Table 1: Experiment results of our model in comparison with several baselines, including increments of the number of parameters over Transformer baseline (\Delta|{\bm{\theta}}|), training/testing speeds (v_{\rm{train}}/v_{\rm{test}}, some of them are derived from maruf2019selective [maruf2019selective]), and translation results of the testsets in BLEU score.
['[BOLD] Model', '\\Delta|{\\bm{\\theta}}|', 'v_{\\rm{train}}', 'v_{\\rm{test}}', 'Zh-En TED', 'En-De TED', 'En-De News', 'En-De Europarl', 'En-De avg.']
[['SentNmt\xa0[Vaswani2017Attention]', '0.0m', '1.0\\times', '1.0\\times', '17.0', '23.10', '22.40', '29.40', '24.96'], ['DocT\xa0[zhang2018improving]', '9.5m', '0.65\\times', '0.98\\times', '[EMPTY]', '24.00', '23.08', '29.32', '25.46'], ['HAN [miculicich2018document]', '4.8m', '0.32\\times', '0.89\\times', '17.9', '24.58', '[BOLD] 25.03', '28.60', '26.07'], ['SAN [maruf2019selective]', '4.2m', '0.51\\times', '0.86\\times', '[EMPTY]', '24.42', '24.84', '29.75', '26.33'], ['QCN\xa0[yang2019enhancing]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[BOLD] 25.19', '22.37', '29.82', '25.79'], ['Final', '4.7m', '0.22\\times', '1.08\\times', '[BOLD] 19.1', '25.10', '24.91', '[BOLD] 30.40', '[BOLD] 26.80']]
We compare against several context-aware baselines, including SAN [maruf2019selective] and the Query-guided Capsule Network [yang2019enhancing, QCN]. Among them, our model achieves new state-of-the-art results on TED Zh-En and Europarl, showing the benefit of exploiting the whole document context. Though our model is not the best on the TED En-De and News tasks, it is still comparable with QCN and HAN and achieves the best average performance on the English-German benchmarks, exceeding the best previous model by at least 0.47 BLEU. We suggest this is probably because we did not apply the two-stage training scheme used by Miculicich et al. [miculicich2018document] or the regularizations introduced by Yang et al. [yang2019enhancing]. In addition, while training speed is sacrificed, the parameter increment and decoding speed remain manageable.
Toward Making the Most of Context in Neural Machine Translation
2002.07982
Table 3: Ablation study on modeling context on TED Zh-En development set. ”Doc” means using a entire document as a sequence for input or output. BLEU{}_{\rm doc} indicates the document-level BLEU score calculated on the concatenation of all output sentences.
['[BOLD] Model', 'BLEU (BLEU{}_{\\rm doc})']
[['SentNmt\xa0[Vaswani2017Attention]', '11.4 (21.0)'], ['DocNmt (documents as input/output)', 'n/a (17.0)'], ['[ITALIC] Modeling source context', '[ITALIC] Modeling source context'], ['Doc2Sent', '6.8'], ['+ reset word positions for each sentence', '10.0'], ['+ segment embedding', '10.5'], ['+ segment-level relative attention', '12.2'], ['+ context fusion gate', '12.4'], ['[ITALIC] Modeling target context', '[ITALIC] Modeling target context'], ['Transformer-XL decoder [Sent2Doc]', '12.4'], ['Final', '12.9 (24.4)']]
First of all, using the entire document directly as input and output cannot even produce a document translation with the same number of sentences as the source document, and it is much worse than the sentence-level baseline and our model in terms of document-level BLEU. For source context modeling, simply casting the whole source document as one input sequence (Doc2Sent) does not work. Resetting word positions and introducing segment embeddings for each sentence alleviate this problem, which verifies one of our motivations that we should focus more on the local sentence. Moreover, the gains from the segment-level relative attention and the gated context fusion mechanism demonstrate that retrieving and integrating the source global context is useful for document translation. As for the target context, employing a Transformer-XL decoder to exploit the target-side global history also leads to better document translation. This somewhat contrasts with [zhang2018improving], which claims that using target context leads to error propagation. In the end, by jointly modeling both source and target contexts, our final model obtains the best performance.
UR-FUNNY: A Multimodal Language Dataset for Understanding Humor
1904.06618
Table 4: Binary accuracy for different variants of C-MFN and training scenarios outlined in Section 5. The best performance is achieved using all three modalities of text (T), vision (V) and acoustic (A).
['Modality', 'T', 'A+V', 'T+A', 'T+V', 'T+A+V']
[['C-MFN (P)', '62.85', '53.3', '63.28', '63.22', '64.47'], ['C-MFN (C)', '57.96', '50.23', '57.78', '57.99', '58.45'], ['C-MFN', '64.44', '57.99', '64.47', '64.22', '65.23']]
The results demonstrate that both context and punchline information are important, as C-MFN outperforms both the C-MFN (P) and C-MFN (C) models. The punchline is the most important component for detecting humor, since the performance of C-MFN (P) is significantly higher than that of C-MFN (C). The UR-FUNNY dataset therefore presents new challenges to the field of NLP, specifically to research on humor detection and multimodal language analysis.
UR-FUNNY: A Multimodal Language Dataset for Understanding Humor
1904.06618
Table 1: Comparison between UR-FUNNY and notable humor detection datasets in the NLP community. Here, ‘pos’, ’neg’ , ‘mod’ and ‘spk’ denote positive, negative, modalities and speaker respectively.
['Dataset', '#Pos', '#Neg', 'Mod', 'type', '#spk']
[['16000 One-Liners', '16000', '16000', '{ [ITALIC] l}', 'joke', '-'], ['Pun of the Day', '2423', '2423', '{ [ITALIC] l}', 'pun', '-'], ['PTT Jokes', '1425', '2551', '{ [ITALIC] l}', 'political', '-'], ['Ted Laughter', '4726', '4726', '{ [ITALIC] l}', 'speech', '1192'], ['Big Bang Theory', '18691', '24981', '{ [ITALIC] l,a}', 'tv show', '<50'], ['[BOLD] UR-Funny', '8257', '8257', '{ [ITALIC] l,a,v}', 'speech', '1741']]
Humor analysis: Humor analysis has been among the active areas of research in both natural language processing and affective computing; prior datasets include “16000 One-Liners”, “Pun of the Day” (Yang et al.), “PTT Jokes”, “Ted Laughter” and “Big Bang Theory”. These datasets have studied humor from different perspectives. For example, “16000 One-Liners” and “Pun of the Day” focus on joke detection (a joke vs. not-joke binary task), while “Ted Laughter” focuses on punchline detection (whether or not a punchline triggers laughter). Similar to “Ted Laughter”, UR-FUNNY focuses on punchline detection. Furthermore, the punchline is accompanied by context sentences to properly model the build-up of humor. Unlike previous datasets, where negative samples were drawn from a different domain, UR-FUNNY uses a challenging negative sampling scheme in which samples are drawn from the same videos. Furthermore, UR-FUNNY is the only humor detection dataset that incorporates all three modalities of text, vision and audio.
Restricted Recurrent Neural Tensor Networks: Exploiting Word Frequency and Compositionality
1704.00774
Table 1: Comparison of validation and test set perplexity for r-RNTNs with f mapping (K=100 for PTB, K=376 for text8) versus s-RNNs and m-RNN. r-RNTNs with the same H as corresponding s-RNNs significantly increase model capacity and performance with no computational cost. The RNTN was not run on text8 due to the number of parameters required.
['Method', '[ITALIC] H', 'PTB # Params', 'PTB Test PPL', 'text8 # Params', 'text8 Test PPL', 'Method', '[ITALIC] H', 'PTB # Params', 'PTB Test PPL']
[['s-RNN', '100', '2M', '146.7', '7.6M', '236.4', 'GRU', '244', '9.6M', '92.2'], ['r-RNTN [ITALIC] f', '100', '3M', '131.2', '11.4M', '190.1', 'GRU', '650', '15.5M', '90.3'], ['RNTN', '100', '103M', '128.8', '388M', '-', 'r-GRU [ITALIC] f', '244', '15.5M', '[BOLD] 87.5'], ['m-RNN', '100', '3M', '164.2', '11.4M', '895.0', 'LSTM', '254', '10M', '88.8'], ['s-RNN', '150', '3M', '133.7', '11.4M', '207.9', 'LSTM', '650', '16.4M', '[BOLD] 84.6'], ['r-RNTN [ITALIC] f', '150', '5.3M', '[BOLD] 126.4', '19.8M', '[BOLD] 171.7', 'r-LSTM [ITALIC] f', '254', '16.4M', '87.1']]
It is remarkable that even with K as small as 100, the r-RNTN approaches the performance of the RNTN with a small fraction of the parameters. This reinforces our hypothesis that the complex transformations afforded by distinct matrices are needed for frequent words, but not so much for infrequent words, which can be well represented by a shared matrix and a distinct vector embedding. It appears that heuristically allocating increased model capacity, as done by the f-based r-RNTN, is a better way to increase performance than simply increasing the hidden layer size, which also incurs a computational penalty. Although m-RNNs have been successfully employed in character-level language models with small vocabularies, they are seldom used in word-level models.
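A hedged sketch of a frequency-based mapping f in this spirit: the K most frequent words each receive their own recurrence matrix index, while all remaining words share a single matrix (and keep distinct input embeddings). The exact bucketing and indexing conventions are assumptions, not the paper's definition.

```python
from collections import Counter

def build_f_mapping(corpus_tokens, K):
    # Rank words by corpus frequency and give the top K their own matrix index.
    freq = Counter(corpus_tokens)
    top_k = [w for w, _ in freq.most_common(K)]
    f = {w: i + 1 for i, w in enumerate(top_k)}       # indices 1..K: dedicated matrices
    return lambda w: f.get(w, 0)                      # index 0: shared matrix

f = build_f_mapping("the cat sat on the mat the end".split(), K=2)
print(f("the"), f("cat"), f("zebra"))                 # e.g. 1 2 0
```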
Cyclical Annealing Schedule:A Simple Approach to Mitigating KL Vanishing
1903.10145
Table 5: Comparison on dialog response generation. BLEU (B) scores 1-4 are used for evaluation. Monotonic (M) and Cyclical (C) schedules are tested on two models.
['[BOLD] Model [BOLD] Schedule', 'CVAE [BOLD] M', 'CVAE [BOLD] C', 'CVAE+BoW [BOLD] M', 'CVAE+BoW [BOLD] C']
[['B1 prec', '0.326', '[BOLD] 0.423', '0.384', '[BOLD] 0.397'], ['B1 recall', '0.214', '[BOLD] 0.391', '0.376', '[BOLD] 0.387'], ['B2 prec', '0.278', '[BOLD] 0.354', '0.320', '[BOLD] 0.331'], ['B2 recall', '0.180', '[BOLD] 0.327', '0.312', '[BOLD] 0.323'], ['B3 prec', '0.237', '[BOLD] 0.299', '0.269', '[BOLD] 0.279'], ['B3 recall', '0.153', '[BOLD] 0.278', '0.265', '[BOLD] 0.275'], ['B4 prec', '0.185', '[BOLD] 0.234', '0.211', '[BOLD] 0.219'], ['B4 recall', '0.122', '[BOLD] 0.220', '0.210', '[BOLD] 0.219']]
Results: The cyclical schedule outperforms the monotonic schedule in both settings. At similar ELBO values, the cyclical schedule provides lower reconstruction errors, higher KL values, and higher BLEU values than the monotonic schedule. Interestingly, the monotonic schedule tends to overfit, while the cyclical schedule does not, particularly on reconstruction errors. This suggests that the cyclical schedule learns better latent codes for VAEs, thus preventing overfitting.
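For illustration, a small sketch of a cyclical KL-annealing schedule with a monotonic counterpart for contrast: the KL weight beta ramps linearly from 0 to 1 during the first part of each cycle and then stays at 1. The proportion of 0.5 and the number of cycles are illustrative choices, not claims about the paper's exact settings.

```python
def cyclical_beta(step, total_steps, n_cycles=4, ratio=0.5):
    # Position within the current cycle, in [0, 1).
    period = total_steps / n_cycles
    tau = (step % period) / period
    # Linear ramp during the first `ratio` of the cycle, then clamp at 1.
    return min(1.0, tau / ratio)

def monotonic_beta(step, warmup_steps):
    # Standard monotonic warm-up: ramp once, then stay at 1.
    return min(1.0, step / warmup_steps)

print([round(cyclical_beta(s, 100), 2) for s in range(0, 100, 10)])
```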
Cyclical Annealing Schedule:A Simple Approach to Mitigating KL Vanishing
1903.10145
Table 3: Comparison on dialog response generation. Reconstruction perplexity (Rec-P) and BLEU (B) scores are used for evaluation.
['[BOLD] Model [BOLD] Schedule', 'CVAE [BOLD] M', 'CVAE [BOLD] C', 'CVAE+BoW [BOLD] M', 'CVAE+BoW [BOLD] C']
[['Rec-P ↓', '36.16', '[BOLD] 29.77', '18.44', '[BOLD] 16.74'], ['KL Loss ↑', '0.265', '[BOLD] 4.104', '14.06', '[BOLD] 15.55'], ['B4 prec', '0.185', '[BOLD] 0.234', '0.211', '[BOLD] 0.219'], ['B4 recall', '0.122', '[BOLD] 0.220', '0.210', '[BOLD] 0.219'], ['A-bow prec', '0.957', '[BOLD] 0.961', '0.958', '[BOLD] 0.961'], ['A-bow recall', '0.911', '[BOLD] 0.941', '0.938', '[BOLD] 0.940'], ['E-bow prec', '0.867', '0.833', '0.830', '0.828'], ['E-bow recall', '0.784', '[BOLD] 0.808', '0.808', '0.805']]
(i) Smoothed sentence-level BLEU (Chen and Cherry): BLEU is a popular metric that measures the geometric mean of modified n-gram precision with a length penalty. We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to a 0-1 scale. (ii) Cosine distance of bag-of-word embeddings (Liu et al.): we use GloVe embeddings and denote the averaging method as A-bow and the extreme method as E-bow. The score is normalized to [0,1], and higher values indicate more plausible responses. The BoW loss indeed reduces the KL vanishing issue, as indicated by the increased KL and decreased reconstruction perplexity. When applying the proposed cyclical schedule to CVAE, we also see a reduced KL vanishing issue. Interestingly, it also yields the highest BLEU scores. This suggests that the cyclical schedule can generate dialog responses of higher fidelity with lower cost, as the auxiliary BoW loss is not necessary.
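A hedged sketch of the two similarity measures mentioned above, using NLTK's smoothed sentence-level BLEU and an average bag-of-words embedding with cosine similarity. The choice of smoothing method is illustrative, and `emb` is a hypothetical word-to-vector lookup (e.g. loaded GloVe vectors) assumed to cover the tokens.

```python
import numpy as np
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def smoothed_bleu(reference, hypothesis):
    # Sentence-level BLEU with smoothing, following Chen and Cherry.
    smooth = SmoothingFunction().method7
    return sentence_bleu([reference.split()], hypothesis.split(),
                         smoothing_function=smooth)

def avg_bow_cosine(reference, hypothesis, emb):
    # Average the word vectors of each sentence, then take cosine similarity.
    r = np.mean([emb[w] for w in reference.split() if w in emb], axis=0)
    h = np.mean([emb[w] for w in hypothesis.split() if w in emb], axis=0)
    return float(r @ h / (np.linalg.norm(r) * np.linalg.norm(h)))
```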
A Text Classification Framework for Simple and Effective Early Depression Detection Over Social Media Streams
1905.08772
Table 4: Results on the test set using all subject’s history as a single document, i.e. timeless classification.
['[EMPTY]', '[ITALIC] F1', '[ITALIC] π', '[ITALIC] ρ']
[['SS3', '[BOLD] 0.61', '[BOLD] 0.63', '0.60'], ['LOGREG', '0.59', '0.56', '0.63'], ['SVM', '0.55', '0.5', '0.62'], ['MNB', '0.39', '0.25', '[BOLD] 0.96'], ['KNN', '0.54', '0.5', '0.58']]
It is interesting to note that we also performed classification of subjects on the test set using all of a subject's writings as if they were a single document (i.e., classical timeless classification); the results are shown above. SS3 obtained the highest values for the F1 (0.61) and Precision (0.63) measures, possibly due to the flexibility given by its three hyper-parameters to discover important and discriminative terms. These results provide strong evidence that SS3 also achieves competitive performance when it is trained and tested to optimize standard (non-temporal) evaluation measures. Note that the best configuration of MNB obtained after the model selection stage, aiming at overcoming the unbalanced dataset problem, tends to classify all subjects as depressed, which is why MNB has a recall (ρ) close to 1 but very poor precision (0.25).
GLOSS: Generative Latent Optimization of Sentence Representations
1907.06385
Table 3: Effect of dimensionlity: additional results for the test performance of GLOSS-BoW and GLOSS-Pos models with different dimensionality of the latent vectors on unsupervised STS-(12-16, B) and supervised tasks. (∗) Following SentEval, STS-13 does not include the SMT dataset due to licensing issues. Best results for each task are in bold.
['Model', 'Config. #Tok', 'Config. Dim', 'Unsupervised STS-() tasks 12', 'Unsupervised STS-() tasks 13*', 'Unsupervised STS-() tasks 14', 'Unsupervised STS-() tasks 15', 'Unsupervised STS-() tasks 16', 'Unsupervised STS-() tasks B', 'Unsupervised STS-() tasks Avg', 'Supervised tasks MR', 'Supervised tasks CR', 'Supervised tasks SUBJ', 'Supervised tasks MPQA', 'Supervised tasks TREC', 'Supervised tasks Avg']
[['GLOSS-BoW', '27M', '100', '54.8', '51.8', '68.4', '71.2', '[BOLD] 71.8', '[BOLD] 72.4', '65.1', '67.4', '72.0', '86.4', '79.5', '71.0', '75.3'], ['GLOSS-BoW', '27M', '300', '[BOLD] 55.9', '55.6', '[BOLD] 69.2', '73.4', '71.2', '72.1', '[BOLD] 66.2', '69.5', '74.7', '88.6', '82.3', '78.0', '78.6'], ['GLOSS-BoW', '27M', '700', '54.9', '[BOLD] 55.8', '68.8', '[BOLD] 73.7', '71.0', '71.4', '65.9', '72.4', '76.7', '90.2', '83.7', '[BOLD] 82.2', '81.0'], ['GLOSS-POS', '27M', '100', '54.6', '54.8', '68.3', '71.7', '71.4', '69.7', '65.1', '68.8', '73.9', '87.0', '83.3', '74.8', '77.6'], ['GLOSS-POS', '27M', '300', '54.2', '52.7', '68.1', '73.4', '70.5', '69.0', '64.7', '71.8', '75.5', '89.3', '84.7', '80.2', '80.3'], ['GLOSS-POS', '27M', '700', '53.6', '53.3', '67.8', '73.3', '70.1', '68.1', '64.4', '72.7', '77.4', '89.9', '85.4', '81.4', '81.4'], ['GLOSS-POS', '27M', '1K', '53.0', '52.9', '67.5', '72.0', '69.8', '68.0', '63.9', '[BOLD] 73.4', '[BOLD] 78.1', '[BOLD] 91.0', '[BOLD] 86.3', '82.1', '[BOLD] 82.2']]
Test performance on unsupervised STS-(12-16, B) and supervised tasks. (α) indicates results computed by us. (β) PV-DBOW results are taken from Pagliardini et al. (†) The unsupervised results for Skip-thought are taken from Arora et al. (∗) Following SentEval, STS-13 does not include the SMT dataset due to licensing issues. The STS-B scores of other baselines are from the official webpage, except for InferSent, which are from Wieting and Gimpel; all other results are from the respective publications. We observe that GLOSS-BoW is better than GLOSS-POS on unsupervised tasks. Generally, increasing dimensionality does not improve accuracy on unsupervised tasks, which is in line with Hill et al.
GLOSS: Generative Latent Optimization of Sentence Representations
1907.06385
Table 1: Test performance on unsupervised STS-(12-16, B) and supervised tasks. (α) indicate results computed by us. (β) PV-DBOW results are taken from Pagliardini et al. (2017). (†) The unsupervised results for Skip-thought are taken from Arora et al. (2017) and the supervised ones from Pagliardini et al. (2017). (∗) Following SentEval, STS-13 does not include the SMT dataset due to licensing issues. The STS-B scores of other baselines are from the official webpage, except for InferSent which are from Wieting and Gimpel (2018). All other results are from the respective publications. See Table 3 in supplementary material for results with higher dimensionality.
['Model', 'Config. #Tok', 'Config. Dim', 'Unsupervised STS-() tasks 12', 'Unsupervised STS-() tasks 13*', 'Unsupervised STS-() tasks 14', 'Unsupervised STS-() tasks 15', 'Unsupervised STS-() tasks 16', 'Unsupervised STS-() tasks B', 'Unsupervised STS-() tasks Avg', 'Supervised tasks MR', 'Supervised tasks CR', 'Supervised tasks SUBJ', 'Supervised tasks MPQA', 'Supervised tasks TREC', 'Supervised tasks Avg']
[['[ITALIC] Unsupervised methods trained on unordered corpus', '[ITALIC] Unsupervised methods trained on unordered corpus', '[ITALIC] Unsupervised methods trained on unordered corpus', '[ITALIC] Unsupervised methods trained on unordered corpus', '[ITALIC] Unsupervised methods trained on unordered corpus', '[ITALIC] Unsupervised methods trained on unordered corpus', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Glove-BoW [ITALIC] α', '840B', '300', '52.2', '49.6', '54.6', '56.3', '51.4', '41.5', '50.9', '77.3', '78.3', '91.2', '87.9', '83.0', '83.5'], ['SIF', '840B', '300', '56.2', '63.8', '68.5', '71.7', '—', '72.0', '—', '—', '—', '—', '—', '—', '—'], ['uSIF', '840B', '300', '[BOLD] 64.9', '[BOLD] 71.8', '[BOLD] 74.4', '[BOLD] 76.1', '—', '71.5', '—', '—', '—', '—', '—', '—', '—'], ['uSIF [ITALIC] α', '27M', '100', '57.3', '60.8', '66.1', '68.0', '64.0', '61.9', '63.0', '—', '—', '—', '—', '—', '—'], ['PV-DBOW [ITALIC] β', '0.9B', '300', '—', '—', '41.7', '—', '—', '64.9', '—', '60.2', '66.9', '76.3', '70.7', '59.4', '66.7'], ['Sent2vec', '0.9B', '700', '55.6', '57.1', '68.4', '74.1', '69.1', '71.7', '66.0', '75.1', '80.2', '90.6', '86.3', '83.8', '83.2'], ['GLOSS-BoW', '27M', '100', '54.8', '51.8', '68.4', '71.2', '[BOLD] 71.8', '[BOLD] 72.4', '65.1', '67.4', '72.0', '86.4', '79.5', '71.0', '75.3'], ['GLOSS-BoW', '27M', '300', '55.9', '55.6', '69.2', '73.4', '71.2', '72.1', '66.2', '69.5', '74.7', '88.6', '82.3', '78.0', '78.6'], ['GLOSS-POS', '27M', '1K', '53.0', '52.9', '67.5', '72.0', '69.8', '68.0', '63.9', '73.4', '78.1', '91.0', '86.3', '82.1', '82.2'], ['[ITALIC] Unsupervised methods trained on ordered corpus', '[ITALIC] Unsupervised methods trained on ordered corpus', '[ITALIC] Unsupervised methods trained on ordered corpus', '[ITALIC] Unsupervised methods trained on ordered corpus', '[ITALIC] Unsupervised methods trained on ordered corpus', '[ITALIC] Unsupervised methods trained on ordered corpus', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Skip-thought†', '0.9B', '2.4K', '30.8', '25.0', '31.4', '31.0', '—', '—', '—', '76.5', '80.1', '[BOLD] 93.6', '87.1', '[BOLD] 92.2', '85.9'], ['[ITALIC] Supervised methods trained on labeled corpus', '[ITALIC] Supervised methods trained on labeled corpus', '[ITALIC] Supervised methods trained on labeled corpus', '[ITALIC] Supervised methods trained on labeled corpus', '[ITALIC] Supervised methods trained on labeled corpus', '[ITALIC] Supervised methods trained on labeled corpus', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['InferSent (AllNLI)', '26M', '4.1K', '59.2', '58.9', '69.6', '71.3', '71.5', '70.6', '[BOLD] 66.9', '[BOLD] 81.1', '[BOLD] 86.3', '92.4', '[BOLD] 90.2', '88.2', '[BOLD] 87.6']]
GLOSS-BoW performs particularly well on STS-Benchmark, where it outperforms all methods. GLOSS-BoW is competitive with Sent2Vec, even though the latter was trained on 30 times more training data. In fact, it also matches the performance of InferSent, which requires labeled training data. This shows that GLOSS is very data efficient, which makes it attractive for low-resource languages. Amongst unsupervised methods, GLOSS-POS is slightly behind Sent2Vec, which was trained on much more data. Both GLOSS-POS and GLOSS-BoW outperform paragraph vectors (PV-DBOW). Skip-thought and InferSent use much larger vectors, and they require corpora with ordered sentences. We expect that increasing the dimensionality of z even further would also help our models; however, we show that strong performance can be achieved with reasonable representation sizes.
Part-of-Speech Tagging for Historical English
1603.03144
Table 3: Accuracy results for temporal adaptation in the PPCMBE and the PPCEME of historical English. Percentage error reduction is shown for the best-performing method, Fema-attribute.
['Task', 'baseline SVM', 'baseline MEMM (Stanford)', 'SCL', 'Brown', 'word2vec', 'Fema single embedding', 'Fema attribute embeddings (error reduction)']
[['[ITALIC] Modern British English (training from 1840-1914)', '[ITALIC] Modern British English (training from 1840-1914)', '[ITALIC] Modern British English (training from 1840-1914)', '[ITALIC] Modern British English (training from 1840-1914)', '[ITALIC] Modern British English (training from 1840-1914)', '[ITALIC] Modern British English (training from 1840-1914)', '[ITALIC] Modern British English (training from 1840-1914)', '[EMPTY]'], ['→ 1770-1839', '96.30', '96.57', '96.42', '96.45', '96.44', '96.80', '[BOLD] 96.84 (15%)'], ['→ 1700-1769', '94.57', '94.83', '95.07', '95.15', '94.85', '95.65', '[BOLD] 95.75 (22%)'], ['average', '95.43', '95.70', '95.74', '95.80', '95.64', '96.23', '[BOLD] 96.30 (19%)'], ['[ITALIC] Early Modern English (training from 1640-1710)', '[ITALIC] Early Modern English (training from 1640-1710)', '[ITALIC] Early Modern English (training from 1640-1710)', '[ITALIC] Early Modern English (training from 1640-1710)', '[ITALIC] Early Modern English (training from 1640-1710)', '[ITALIC] Early Modern English (training from 1640-1710)', '[ITALIC] Early Modern English (training from 1640-1710)', '[EMPTY]'], ['→ 1570-1639', '93.62', '93.98', '94.23', '94.36', '94.18', '95.01', '[BOLD] 95.20 (25%)'], ['→ 1500-1569', '87.59', '87.47', '89.39', '89.73', '89.30', '91.40', '[BOLD] 91.63 (33%)'], ['average', '90.61', '90.73', '91.81', '92.05', '91.74', '93.20', '[BOLD] 93.41 (30%)']]
English spelling had become mostly uniform and stable by around 1700. Of the two baseline systems, the MEMM performs slightly better than the SVM, showing a small benefit from structured prediction. Among the domain adaptation algorithms, Fema clearly outperforms SCL, Brown clustering, and word2vec, with average accuracy gains of about 0.5% and 1.5% on the PPCMBE and PPCEME test sets, respectively. The metadata attribute information boosts performance by a small but consistent margin of 0.1-0.2% on average.
Part-of-Speech Tagging for Historical English
1603.03144
Table 4: Accuracy results for adapting from the PTB to the PPCMBE and the PPCEME of historical English. ∗Error reduction for the normalized PPCEME is computed against the unnormalized SVM accuracy, showing total error reduction.
['Target', 'Normalized', 'baseline SVM', 'baseline MEMM (Stanford)', 'SCL', 'Brown', 'word2vec', 'Fema single embedding', 'Fema attribute embeddings (error reduction)']
[['ppcmbe', 'No', '81.12', '81.35', '81.66', '81.65', '81.75', '82.34', '[BOLD] 82.46 (7%)'], ['ppceme', 'No', '74.15', '74.34', '75.89', '76.04', '75.85', '77.77', '[BOLD] 77.92 (15%)'], ['ppceme', 'Yes', '76.73', '76.87', '77.61', '77.65', '77.76', '78.85', '[BOLD] 79.05 (19%∗)']]
Nonetheless, domain adaptation can help: Fema improves performance by 1.3% on the PPCMBE data, and by 3.8% on the unnormalized PPCEME data. Spelling normalization also helps, improving the baseline systems by more than 2.5%. The combination of spelling normalization and domain adaptation gives an overall improvement in accuracy from 74.2% to 79.1%. The relative error reduction is lower than in the temporal adaptation setting: only 19% at best, versus 30% error reduction in temporal adaptation. This is because there are now at least two sources of error — language change and tagset mismatch — and unsupervised domain adaptation cannot address mismatches in the tag set.
Part-of-Speech Tagging for Historical English
1603.03144
Table 5: Tagging accuracies of adaptation of our baseline SVM tagger from the PTB to the PPCEME in ablation experiments.
['Feature set', 'IV', 'OOV', 'All']
[['All features', '81.68', '48.96', '74.15'], ['– word context', '79.69', '38.62', '70.23'], ['– prefix', '81.61', '46.11', '73.43'], ['– suffix', '81.36', '38.13', '71.40'], ['– affix', '81.22', '34.40', '70.44'], ['– orthographic', '81.68', '48.92', '74.14']]
Word context features are important for obtaining good accuracies on both IV and OOV tokens. Affix features, particularly suffix features, are crucial for the OOV tokens. The orthographic features are shown to be nearly irrelevant, as long as affix features are present. Overall, the high percentage of OOV tokens can be a major source of errors, as the tagging accuracy on OOV tokens is below 50% in our best baseline system. Note that these results are for a classification-based tagger; while the Viterbi-based MEMM tagger performs only marginally better overall (∼0.2% improvement), it is possible that its error distribution might be different due to the advantages of structured prediction.
Part-of-Speech Tagging for Historical English
1603.03144
Table 7: Tagging accuracies of domain adaptation models from the PTB to the PPCEME.
['System', 'IV', 'OOV', 'All']
[['SVM', '81.68', '48.96', '74.15'], ['SCL', '82.01', '55.45', '75.89'], ['Brown', '81.81', '56.76', '76.04'], ['word2vec', '81.79', '56.00', '75.85'], ['Fema-single', '82.30', '62.63', '77.77'], ['Fema-attribute', '82.34', '63.16', '77.92']]
Compared against the baseline tagger, Fema-attribute achieves an absolute improvement of 14% in accuracy on OOV tokens. SCL performs slightly better than Brown clustering and word2vec on IV tokens, but worse on OOV tokens. By incorporating metadata attributes, Fema-attribute performs better than Fema-single on OOV tokens, though the accuracies on IV tokens are similar. Interestingly, the venerable method of Brown clustering (slightly) outperforms both word2vec and SCL.
A language score based output selection methodfor multilingual speech recognition
2005.00851
Table 3: WER results
['[EMPTY]', '1st decoding with LM0', 'Rescoring with LM2', 'Rescoring with LM1', 'Proposed method']
[['Test-clean', '18.8', '-', '11.50', '11.50'], ['Test-other', '33.35', '-', '22.59', '22.59'], ['Reading-test', '3.84', '2.42', '-', '2.42'], ['Conversation-test', '20.54', '19.00', '-', '19.2'], ['YouTube-test', '24.55', '21.57', '-', '21.83'], ['VLSP2018', '10.12', '8.16', '-', '8.16']]
One-pass decoding with the proposed method: to verify that our method can automatically select the best result without having to identify the input language, we ran two experiments. The first experiment simulated the case where the input language is known in advance. We first decoded all six testing sets with AM-2 and LM0 to obtain lattices, then rescored the English sets with LM1 and the Vietnamese sets with LM2. In the second experiment, we combined all six testing sets into one set to simulate a decoder that does not know the input language in advance; it could be English or Vietnamese at random. Lattices for this combined set were likewise obtained with AM-2 and LM0 in the first decoding step, but at the rescoring step both LM1 and LM2 were used to produce best paths, and the proposed method was applied to select the final outputs. Finally, we separated the final outputs by utterance ID back into the six corresponding testing sets to measure WER for each one individually. Comparing the results of these two experiments shows that our method achieves similar performance without a language identifier for the input signals. Results were slightly worse on the Conversation-test and YouTube-test sets. We found that these sets contain some very short utterances of only one or two words; in these cases the language scores given by LM1 and LM2 can be similar, since the chosen output sentences are likely meaningful and made up of homonyms shared between the two languages.
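The selection step described above can be sketched roughly as follows: after rescoring the first-pass lattice with both monolingual LMs, the hypothesis whose own language model gives the better score is kept. This is only a schematic reconstruction; the exact scoring and the length normalization shown here are assumptions, not the authors' implementation.

```python
def select_output(hyp_en, hyp_vi, lm1_score, lm2_score):
    """hyp_en / hyp_vi: best paths after rescoring with LM1 (English) and LM2 (Vietnamese).
    lm1_score / lm2_score: language-model scores (e.g., log-probabilities) of each
    hypothesis under its own LM. Length normalization is an assumption intended to
    help with the very short utterances mentioned above."""
    score_en = lm1_score / max(len(hyp_en.split()), 1)
    score_vi = lm2_score / max(len(hyp_vi.split()), 1)
    return hyp_en if score_en > score_vi else hyp_vi
```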
A language score based output selection methodfor multilingual speech recognition
2005.00851
Table 1: Audio datasets
['[BOLD] Set', '[BOLD] Language', '[BOLD] Corpus', '[BOLD] Hours', '[BOLD] Speakers', '[BOLD] Sentences', '[BOLD] Foreign words', '[BOLD] Style']
[['Training', 'English', 'Librispeech-360hr', '363.6', '921', '104,014', '[EMPTY]', 'Reading'], ['Training', 'English', 'Mozilla CommonVoice', '780', '31,858', '644,120', '-', 'Reading'], ['Training', 'English', 'TED-LIUM', '452', '2,351', '268,263', '-', 'Spontaneous'], ['Training', 'Vietnamese', 'VinBDI-set', '2,500', '18,000', '3,666,892', '-', 'Reading'], ['Testing', 'English', 'Test-clean', '5.4', '87', '2,620', '-', 'Reading'], ['Testing', 'English', 'Test-other', '5.3', '90', '2,939', '-', 'Reading'], ['Testing', 'Vietnamese', 'Reading-test', '9.9', '23', '5,358', '-', 'Reading'], ['Testing', 'Vietnamese', 'Conversation-test', '10.8', '1,892', '11,533', '-', 'Spontaneous'], ['Testing', 'Vietnamese', 'YouTube-test', '9.9', 'unknown', '5,432', '24,8%', 'Spontaneous'], ['Testing', 'Vietnamese', 'VLSP2018', '2.1', 'unknown', '796', '-', 'Reading']]
Four testing sets were used to evaluate Vietnamese performance: Reading-test, Conversation-test, YouTube-test, and VLSP2018. For English evaluation, the testing sets were Test-clean and Test-other [librispeech]. Except for YouTube-test and VLSP2018, these sets were randomly selected from the same raw data used to construct the training, evaluation, and test sets. The YouTube-test was randomly collected from YouTube; it includes teenage conversations and technical talks with a lot of noise such as music and non-human sounds. This was the most challenging test set, since it contains not only noisy speech but also many foreign words. Foreign words make up to 23% of this set's vocabulary (939 unique words), and most of them are common English words and proper names. The accuracy of foreign-word recognition is evaluated only for this set because the ratio in the others is not significant. VLSP2018 was developed by research teams involved in Vietnamese language and speech processing [vlsp2018]. All audio files were saved in wave format with a sample rate of 16 kHz and 16-bit analog/digital conversion precision.
A language score based output selection methodfor multilingual speech recognition
2005.00851
Table 2: Language model evaluation
['[EMPTY]', '[BOLD] Multilinguage LM0', '[BOLD] English LM1', '[BOLD] Vietnamese LM2']
[['[BOLD] Test-clean', '569.3', '187.1', '-'], ['[BOLD] Test-other', '522.2', '174.3', '-'], ['[BOLD] Reading-test', '136.6', '-', '87.2'], ['[BOLD] Conversation-test', '95.2', '-', '62.7'], ['[BOLD] YouTube-test', '199.5', '-', '111.4'], ['[BOLD] VLSP2018', '75.7', '-', '47.5']]
After building LM1 and LM2 as above, the multilingual model LM0 was created by linearly interpolating LM1 and LM2 with α=0.5. LM0 was then pruned by removing all n-grams with a probability of less than 2E-8, to reduce its size and speed up the first decoding step; this reduced LM0 from 800MB (unpruned) to 30MB. The perplexities of LM0, LM1, and LM2 on the test sets are reported in Table 2. All of these models were built using the SRILM toolkit [tedlium].
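Conceptually, building LM0 amounts to a linear interpolation of the two monolingual n-gram models followed by a probability-threshold cut-off. The dictionary-based sketch below illustrates this idea; it is a simplification for exposition, not the SRILM implementation actually used.

```python
def interpolate_and_prune(lm1, lm2, alpha=0.5, threshold=2e-8):
    """lm1, lm2: dicts mapping an n-gram tuple to its probability.
    Returns the interpolated model with low-probability n-grams removed."""
    mixed = {}
    for ngram in set(lm1) | set(lm2):
        p = alpha * lm1.get(ngram, 0.0) + (1.0 - alpha) * lm2.get(ngram, 0.0)
        if p >= threshold:      # pruning step that shrinks the model (800MB -> 30MB above)
            mixed[ngram] = p
    return mixed
```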
Identifying Semantic Divergences in Parallel Text without Annotations
1803.11112
Table 2: Intrinsic evaluation on crowdsourced semantic equivalence vs. divergence testsets. We report overall F-score, as well as precision (P), recall (R) and F-score (F) for the equivalent (+) and divergent (-) classes separately. Semantic similarity yields better results across the board, with larger improvements on the divergent class.
['[BOLD] Divergence Detection [BOLD] Approach', '[BOLD] OpenSubtitles +P', '[BOLD] OpenSubtitles +R', '[BOLD] OpenSubtitles +F', '[BOLD] OpenSubtitles -P', '[BOLD] OpenSubtitles -R', '[BOLD] OpenSubtitles -F', '[BOLD] OpenSubtitles Overall F', '[BOLD] Common Crawl +P', '[BOLD] Common Crawl +R', '[BOLD] Common Crawl +F', '[BOLD] Common Crawl -P', '[BOLD] Common Crawl -R', '[BOLD] Common Crawl -F', '[BOLD] Common Crawl Overall F']
[['Sentence Embeddings', '65', '60', '62', '56', '61', '58', '60', '78', '58', '66', '52', '[BOLD] 74', '61', '64'], ['MT Scores (1 epoch)', '67', '53', '59', '54', '68', '60', '60', '54', '65', '59', '17', '11', '14', '42'], ['Non-entailment', '58', '78', '66', '53', '30', '38', '54', '73', '49', '58', '48', '72', '57', '58'], ['Non-parallel', '70', '83', '76', '61', '42', '50', '66', '70', '83', '76', '61', '42', '49', '67'], ['Semantic Dissimilarity', '[BOLD] 76', '[BOLD] 80', '[BOLD] 78', '[BOLD] 75', '[BOLD] 70', '[BOLD] 72', '[BOLD] 77', '[BOLD] 82', '[BOLD] 88', '[BOLD] 85', '[BOLD] 78', '69', '[BOLD] 73', '[BOLD] 80']]
The breakdown per class shows that both equivalent and divergent examples are better detected. The improvement is larger for divergent examples, with F-score gains of about 10 points for the divergent class compared to the next-best scores.
Data Diversification: An Elegant Strategy For Neural Machine Translation
1911.01986
Table 13: The average value of E[1N∑NipMi(yij|yi<j)] for the teacher models and the diversified models, together with the corresponding test BLEU scores.
['[EMPTY]', '[BOLD] En-De', '[BOLD] De-En', '[BOLD] En-Fr', '[BOLD] Fr-En']
[['[BOLD] Teacher models [ITALIC] Mi', '[BOLD] Teacher models [ITALIC] Mi', '[BOLD] Teacher models [ITALIC] Mi', '[BOLD] Teacher models [ITALIC] Mi', '[BOLD] Teacher models [ITALIC] Mi'], ['E[1 [ITALIC] N∑ [ITALIC] NipMi( [ITALIC] yij| [ITALIC] yi< [ITALIC] j)]', '0.76', '0.78', '0.76', '0.79'], ['E[max [ITALIC] ykj1 [ITALIC] N∑ [ITALIC] NipMi( [ITALIC] ykj| [ITALIC] y< [ITALIC] j)]', '0.75', '0.76', '0.74', '0.77'], ['Test BLEU', '28.6', '34.7', '44.0', '43.3'], ['[BOLD] Diversified Model ^ [ITALIC] M', '[BOLD] Diversified Model ^ [ITALIC] M', '[BOLD] Diversified Model ^ [ITALIC] M', '[BOLD] Diversified Model ^ [ITALIC] M', '[BOLD] Diversified Model ^ [ITALIC] M'], ['E[1 [ITALIC] N∑ [ITALIC] Nip^ [ITALIC] M( [ITALIC] yij| [ITALIC] yi< [ITALIC] j)]', '0.74', '0.74', '0.73', '0.75'], ['Test BLEU', '30.6', '37.0', '45.5', '45.0'], ['[BOLD] Overfitted Diversified Model ^ [ITALIC] M', '[BOLD] Overfitted Diversified Model ^ [ITALIC] M', '[BOLD] Overfitted Diversified Model ^ [ITALIC] M', '[BOLD] Overfitted Diversified Model ^ [ITALIC] M', '[BOLD] Overfitted Diversified Model ^ [ITALIC] M'], ['E[1 [ITALIC] N∑ [ITALIC] Nip^ [ITALIC] M( [ITALIC] yij| [ITALIC] yi< [ITALIC] j)]', '0.82', '0.86', '0.84', '0.89'], ['Test BLEU', '28.6', '34.8', '43.8', '43.3']]
Through experiments we discuss later, we observe that our method achieves a high performance gain under the following conditions: both sides of the equation are high, which can be realized when the teacher models are well trained on the parallel data. First, when the constituent models Mi are well trained (but not overfitted), the confidence of the models is high. This results in a high value of the expectation E[maxykj1N∑NipMi(ykj|y<j)].
Data Diversification: An Elegant Strategy For Neural Machine Translation
1911.01986
Table 4: Performances on low-resource translations. As done by flores, the from-English pairs are measured in tokenized BLEU, while to-English are measured in detokenized SacreBLEU.
['[BOLD] Method', '[BOLD] En-Ne', '[BOLD] Ne-En', '[BOLD] En-Si', '[BOLD] Si-En']
[['flores', '4.3', '7.6', '1.0', '6.7'], ['Data Diversification', '[BOLD] 5.7', '[BOLD] 8.9', '[BOLD] 2.2', '[BOLD] 8.2']]
Specifically, the method achieves 5.7, 8.9, 2.2, and 8.2 BLEU for En-Ne, Ne-En, En-Si and Si-En tasks, respectively. In absolute terms, these are 1.4, 1.3, 2.2 and 1.5 BLEU improvements over the baseline model (flores). Without any monolingual data involved, our method establishes a new state of the art in all four low-resource tasks.
Data Diversification: An Elegant Strategy For Neural Machine Translation
1911.01986
Table 9: BLEU scores for models with and without back-translation (BT) on the IWSLT’14 English-German (En-De), German-English (De-En) and WMT’14 En-De tasks. Column |D| shows the total data used in back-translation compared to the original parallel data.
['[BOLD] Task', '[BOLD] No back-translation [BOLD] Baseline', '[BOLD] No back-translation [BOLD] Ours', '[BOLD] With back-translation | [ITALIC] D|', '[BOLD] With back-translation [BOLD] Baseline', '[BOLD] With back-translation [BOLD] Ours']
[['IWSLT’14 En-De', '28.6', '30.6', '29×', '30.0', '[BOLD] 31.8'], ['IWSLT’14 De-En', '34.7', '37.0', '29×', '37.1', '[BOLD] 38.5'], ['WMT’14 En-De', '29.3', '30.7', '2.4×', '30.8', '[BOLD] 31.8']]
Our method is also complementary to back-translation (BT) (backtranslate_sennrich-etal-2016-improving). To demonstrate this, we conducted experiments on the IWSLT’14 En-De and De-En tasks with extra monolingual data extracted from the WMT’14 En-De corpus. In addition, we also compare our method against the back-translation baseline in the WMT’14 En-De task with extra monolingual data from News Crawl 2009. We use the big Transformer as the final model in all our back-translation experiments. Further details of these experiments are provided in the Appendix. However, using our data diversification strategy with such monolingual data boosts the performance further with additional +1.0 BLEU over the back-translation baselines.
Data Diversification: An Elegant Strategy For Neural Machine Translation
1911.01986
Table 10: WMT’14 English-German (En-De) and English-French (En-Fr) diversity performances in BLEU and Pairwise-BLEU scores. Lower Pairwise-BLEU means more diversity, higher BLEU means better quality.
['[BOLD] Method', '[BOLD] Pairwise-BLEU [BOLD] En-De', '[BOLD] Pairwise-BLEU [BOLD] En-Fr', '[BOLD] BLEU [BOLD] En-De', '[BOLD] BLEU [BOLD] En-Fr']
[['Sampling', '24.1', '32.0', '37.8', '46.5'], ['Beam', '73.0', '77.1', '69.9', '79.8'], ['Div-beam', '53.7', '64.9', '60.0', '72.5'], ['hMup', '50.2', '64.0', '63.8', '74.6'], ['Human', '35.5', '46.5', '69.6', '76.9'], ['Ours', '57.1', '70.1', '69.5', '77.0']]
When we simply train multiple models with different seeds, their translations of the training set yield only 14% and 22% duplicates for En-De and En-Fr, respectively. These results may be surprising, as one might expect more duplicates. To evaluate the diversity of the teacher models used in data diversification, we compare them on the BLEU/Pairwise-BLEU benchmark proposed by mixture_model_nmt_shen2019. Specifically, we use our forward models trained on WMT’14 English-German and English-French and measure BLEU and Pairwise-BLEU scores on the provided test set. As can be seen in the En-De experiments, our method is less diverse than the mixture-of-experts method (hMup), with 57.1 versus 50.2 Pairwise-BLEU. However, our method's translations are of better quality (69.5 BLEU), which is very close to human performance. The same conclusion can be drawn from the WMT’14 English-French experiments.
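For reference, Pairwise-BLEU over a set of systems can be sketched as below: each system's outputs are scored against every other system's outputs as pseudo-references, and the scores are averaged (lower means more diverse). Using sacrebleu's corpus BLEU is an illustrative stand-in, not the benchmark's exact evaluation script.

```python
import itertools
import sacrebleu


def pairwise_bleu(hypotheses_per_model):
    """hypotheses_per_model: list of lists, one list of translations per model,
    all aligned on the same source sentences."""
    scores = []
    for sys_a, sys_b in itertools.permutations(hypotheses_per_model, 2):
        # Score system A's outputs treating system B's outputs as references.
        scores.append(sacrebleu.corpus_bleu(sys_a, [sys_b]).score)
    return sum(scores) / len(scores)
```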
Data Diversification: An Elegant Strategy For Neural Machine Translation
1911.01986
Table 11: Improvements of data diversification under conditions with- and without- dropout in the IWSLT’14 English-German and German-English.
['[BOLD] Task', '[BOLD] Baseline', '[BOLD] Ours', '[BOLD] Gain']
[['[BOLD] Dropout=0.3', '[BOLD] Dropout=0.3', '[BOLD] Dropout=0.3', '[BOLD] Dropout=0.3'], ['En-De', '28.6', '30.1', '+1.5 (5%)'], ['De-En', '34.7', '36.5', '+1.8 (5%)'], ['[BOLD] Dropout=0', '[BOLD] Dropout=0', '[BOLD] Dropout=0', '[BOLD] Dropout=0'], ['En-De', '25.7', '27.5', '+1.8 (6%)'], ['De-En', '30.7', '32.5', '+1.8 (5%)']]
First, given that parameter initialization affects diversity, it is logical to assume that dropout will magnify the diversification effects. However, our empirical results did not support this. We ran experiments to test whether non-zero dropout magnifies the improvements of our method over the baseline. We trained the single-model baseline and our data diversification's final model with dropout=0.3 and dropout=0 on the IWSLT’14 English-German and German-English tasks, using factor k=1 in these experiments. However, the gains made by our data diversification with dropout are not notably higher than in the non-dropout counterpart. This suggests that dropout may not contribute to the diversity of the synthetic data.
Data Diversification: An Elegant Strategy For Neural Machine Translation
1911.01986
Table 12: Improvements of data diversification under conditions maximum likelihood and beam search in the IWSLT’14 English-German and German-English.
['[BOLD] Task', '[BOLD] Baseline', '[BOLD] Ours [BOLD] Beam=1', '[BOLD] Ours [BOLD] Beam=5']
[['En-De', '28.6', '30.3', '30.4'], ['De-En', '34.7', '36.6', '36.8']]
We hypothesized that beam search would generate more diverse synthetic translations of the original dataset, thus increasing diversity and improving generalization. We tested this hypothesis by using greedy decoding (beam size=1) to generate the synthetic data and comparing its performance against the beam search (beam size=5) counterpart, again using IWSLT’14 English-German and German-English as a testbed. Note that when testing the final model, we used the same beam search (beam size=5) procedure for both cases.
Do Not Have Enough Data? Deep Learning to the Rescue!
1911.03118
Table 6: Accuracy of LAMBADA with or without label vs. unlabeled data for ATIS dataset with 5 samples per class. Significant improvement for BERT and SVM classifiers (*McNemar, p−value<0.01).
['Classifier', 'Base.', 'Unlab. Data', 'Unlab. GPT', 'LAMBADA']
[['BERT', '53.3', '54.5', '73.2', '[BOLD] 75.7*'], ['SVM', '35.6', '23.5', '47.2', '[BOLD] 56.5*'], ['LSTM', '29.0', '[BOLD] 40.1*', '23.2', '33.7']]
Our augmentation framework does not require additional unlabeled data. As such, it can be applied when unlabeled data is unavailable or costly. To test the expected LAMBADA performance in such a scenario, we compared it to a semi-supervised approach [ruder2018strong] that uses unlabeled data. To create an unlabeled dataset, we randomly selected samples from the original dataset while ignoring their labels. Next, following a simple weak labeling approach, we classified the samples with one of the classifiers after training it on the labeled dataset. We compared LAMBADA’s classification results with the results we obtained from this classifier. Surprisingly, for most classifiers, LAMBADA achieves better accuracy compared to a simple weak labeling approach. Clearly, the generated dataset contributes more to improving the accuracy of the classifier than the unlabeled samples taken from the original dataset. We further assessed the importance of the ”generated” labels by removing them from LAMBADA’s synthesized dataset. In future work, we plan to use various data balancing approaches on the unlabeled dataset to assess the importance of the second factor above.
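The weak-labeling baseline described above follows a simple train-predict-retrain loop; a minimal sketch with a TF-IDF plus linear SVM classifier is given below. The classifier choice is purely illustrative here; the comparison in the table applies the same scheme with BERT, SVM, and LSTM classifiers.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC


def weak_label_and_retrain(labeled_texts, labels, unlabeled_texts):
    """Train on the small labeled set, weakly label the unlabeled samples,
    then retrain on the union of gold and weak labels."""
    clf = make_pipeline(TfidfVectorizer(), LinearSVC())
    clf.fit(labeled_texts, labels)                       # train on labeled data only
    weak_labels = clf.predict(unlabeled_texts)           # weakly label the rest
    clf.fit(list(labeled_texts) + list(unlabeled_texts),
            list(labels) + list(weak_labels))            # retrain on the union
    return clf
```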
Do Not Have Enough Data? Deep Learning to the Rescue!
1911.03118
Table 5: Accuracy of LAMBADA vs. other generative approaches over all datasets and classifiers. LAMBADA is statistically (* McNemar, p−value<0.01) superior to all models on each classifier and each dataset (on par to EDA with SVM on TREC).
['Dataset', '[EMPTY]', 'BERT', 'SVM', 'LSTM']
[['ATIS', 'Baseline', '53.3', '35.6', '29.0'], ['ATIS', 'EDA', '62.8', '35.7', '27.3'], ['ATIS', 'CVAE', '60.6', '27.6', '14.9'], ['ATIS', 'CBERT', '51.4', '34.8', '23.2'], ['ATIS', 'LAMBADA', '[BOLD] 75.7*', '[BOLD] 56.5*', '[BOLD] 33.7*'], ['TREC', 'Baseline', '60.3', '42.7', '17.7'], ['TREC', 'EDA', '62.6', '[BOLD] 44.8*', '23.1'], ['TREC', 'CVAE', '61.1', '40.9', '25.4*'], ['TREC', 'CBERT', '61.4', '43.8', '24.2'], ['TREC', 'LAMBADA', '[BOLD] 64.3*', '43.9*', '[BOLD] 25.8 *'], ['WVA', 'Baseline', '67.2', '60.2', '26.0'], ['WVA', 'EDA', '67.0', '60.7', '28.2'], ['WVA', 'CVAE', '65.4', '54.8', '22.9'], ['WVA', 'CBERT', '67.4', '60.7', '28.4'], ['WVA', 'LAMBADA', '[BOLD] 68.6*', '[BOLD] 62.9*', '[BOLD] 32.0*']]
Comparison of generative models: we compared our approach to other leading text generation approaches. On the TREC dataset, the results with the BERT classifier are significantly better than those of all other methods; with the SVM classifier, our method is on par with EDA.
Research Paper
1802.05934
Table 2. Hyper-parameters which were used in experiments for News20, BBC & BBC-Sports datasets
['[BOLD] Hyper-parameter', 'News20', 'BBC', 'BBC-Sports']
[['Batch size', '256', '32', '16'], ['Learning rate', '0.01', '0.01', '0.01'], ['Word vector dim', '300', '300', '300'], ['Latent vector dim ( [ITALIC] m)', '50', '50', '50'], ['# Nearest neighbours ( [ITALIC] k)', '5', '5', '5'], ['Scaling factor ( [ITALIC] λ)', '10−4', '10−4', '10−4'], ['# Epochs per fold', '30', '20', '20']]
All experiments were carried out on a Dell Precision Tower 7910 server with a Quadro M5000 GPU with 8 GB of memory. The word embeddings were randomly initialized and trained along with the model. The algorithm was evaluated using a 10-fold cross-validation scheme. The learning rate is decayed over the training epochs: it is decreased to 0.3 times its previous value after every 10 epochs.
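The stepwise learning-rate decay described above (multiply by 0.3 every 10 epochs) corresponds to a standard step scheduler; a minimal PyTorch sketch is shown below, with a placeholder model and optimizer rather than the paper's actual architecture.

```python
import torch

model = torch.nn.Linear(300, 50)                                   # placeholder module
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)           # lr from Table 2
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.3)

for epoch in range(30):                                            # epochs per fold (News20)
    # ... forward/backward passes and optimizer.step() for this epoch ...
    scheduler.step()  # after every 10 epochs the lr becomes 0.3x its previous value
```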
Research Paper
1802.05934
Table 1. Dataset Specifications
['Dataset', 'Train Size', 'Test Size', '# Classes']
[['News20', '18000', '2000', '20'], ['BBC', '2000', '225', '5'], ['BBC Sports', '660', '77', '5']]
For our experiments, we have chosen three popular publicly available news classification datasets. The News20 dataset is partitioned almost evenly across 20 different classes: comp.graphics, comp.os.ms-windows.misc, comp.sys.ibm.pc.hardware, comp.sys.mac.hardware, comp.windows.x, rec.autos, rec.motorcycles, rec.sport.baseball, rec.sport.hockey, sci.crypt, sci.electronics, sci.med, sci.space, misc.forsale, talk.politics.misc, talk.politics.guns, talk.politics.mideast, talk.religion.misc, alt.atheism, and soc.religion.christian. The BBC Sports dataset is divided into 5 major classes: athletics, cricket, football, rugby, and tennis. The datasets are chosen such that all of them share common domain knowledge and have a small number of training examples, so that the improvement observed using instance infusion is significant.
Research Paper
1802.05934
Table 3. Classification accuracies and F1-Scores for news arcticle classifications for different source and target domains. The first row corresponds to the baseline performance trained on only the target dataset. The following two rows shows the performance of instance-infusion method with and without the usage of penalty function. In all the three cases, our approach outperforms the baselines by a significant margin.
['[BOLD] METHOD', 'Target Source', 'News20 BBC', 'News20 BBC', 'BBC News20', 'BBC News20', 'BBC Sports BBC', 'BBC Sports BBC']
[['[BOLD] METHOD', '[EMPTY]', 'Accuracy', 'F1-Score', 'Accuracy', 'F1-Score', 'Accuracy', 'F1-Score'], ['Bi-LSTM (Target Only)', '[EMPTY]', '65.17', '0.6328', '91.33', '0.9122', '84.22', '0.8395'], ['Instance-Infused Bi-LSTM', '[EMPTY]', '76.44', '0.7586', '95.35', '0.9531', '88.78', '0.8855'], ['Instance-Infused Bi-LSTM (with penalty function)', '[EMPTY]', '[BOLD] 78.29', '[BOLD] 0.7773', '[BOLD] 96.09', '[BOLD] 0.9619', '[BOLD] 91.56', '[BOLD] 0.9100']]
The source and target datasets are chosen such that the source dataset can provide relevant information. 20Newsgroups contains news articles from all categories, so a good source choice is BBC, which also covers similarly broad categories; for the same reason, BBC has 20Newsgroups as its source dataset. Since BBC Sports focuses on sports articles, BBC is chosen as its source dataset, as the articles share a common domain (both come from the same news outlet, BBC). For the model to function properly, the final layer of the instance-infused model is replaced, while the rest of the network is inherited from the pre-trained target-only model. For 20Newsgroups the improvement over the baseline model is 12%, while the BBC and BBC Sports datasets show improvements of around 5%. As mentioned earlier, our approach is independent of the sentence encoder being used; any other model could replace the bi-LSTM. Because the proposed approach is independent of the source encoding procedure and the source instance embeddings are kept constant during training, we can incorporate source instances from multiple datasets simultaneously. In the subsequent experiments, we explore varying setups to demonstrate the robustness and efficacy of our proposed model.
Research Paper
1802.05934
Table 5. Test Accuracy for proposed model using instances from multiple source datasets with 50% target dataset
['Dataset', 'Single Source Accuracy', 'Single Source F1-Score', 'Multiple Sources Accuracy', 'Multiple Sources F1-Score']
[['News20', '61.72', '0.6133', '67.32', '0.6650'], ['BBC', '91.01', '0.9108', '91.41', '0.9120'], ['BBC Sports', '81.72', '0.7990', '82.81', '0.8027']]
Target dataset reduction with multiple sources. In this setup, only 50% of the target dataset is utilized, and we study the influence of infusing instances from multiple source datasets. The results improve as more source datasets are used in the infusion process. This can be leveraged to improve performance on very lean datasets by deploying large datasets as sources. In the multiple-source setup, for a given target dataset the other two datasets are used as sources.
Research Paper
1802.05934
Table 6. Comparison of results using other learning schemes on News20, BBC and BBC Sports datasets. The proposed model using a deep learning model as a baseline achieves competitive performance for all the three datasets.
['Model', 'News20 Accuracy', 'News20 F1-Score', 'BBC Accuracy', 'BBC F1-Score', 'BBC Sports Accuracy', 'BBC Sports F1-Score']
[['kNN-ngrams', '35.25', '0.3566', '74.61', '0.7376', '94.59', '0.9487'], ['Multinomial NB-bigram', '[BOLD] 79.21', '[BOLD] 0.7841', '95.96', '0.9575', '95.95', '0.9560'], ['SVM-bigram', '75.04', '0.7474', '94.83', '0.9456', '93.92', '0.9393'], ['SVM-ngrams', '78.60', '0.7789', '95.06', '0.9484', '95.95', '0.9594'], ['Random Forests-bigram', '69.01', '0.6906', '87.19', '0.8652', '85.81', '0.8604'], ['Random Forests-ngrams', '78.36', '0.7697', '94.83', '0.9478', '94.59', '0.9487'], ['Random Forests- tf-idf', '78.6', '0.7709', '95.51', '0.9547', '[BOLD] 96.62', '[BOLD] 0.9660'], ['Bi-LSTM', '65.17', '0.6328', '91.33', '0.9122', '84.22', '0.8395'], ['Instance-Infused Bi-LSTM', '78.29', '0.7773', '[BOLD] 96.09', '[BOLD] 0.9619', '91.56', '0.9100']]
Comparative study. The literature involving these datasets mainly focuses on non-deep-learning approaches, so we compare our results with some popular conventional learning techniques. For the k-NN-ngram experiments, the number of nearest neighbours k was set to 5. On the mentioned datasets, conventional models outperform our baseline Bi-LSTM model; however, with instance infusion the deep learning model achieves competitive performance across all datasets. Moreover, with instance infusion the simple bi-LSTM model approaches the classical models in performance on the News20 and BBC Sports datasets, whereas on the BBC dataset the proposed instance-infused bi-LSTM beats all the mentioned models.
AMR Parsing as Sequence-to-Graph Transduction
1905.08704
Table 1: Hyper-parameter settings
['[BOLD] GloVe.840B.300d embeddings dim', '[BOLD] GloVe.840B.300d embeddings 300']
[['[BOLD] BERT embeddings', '[BOLD] BERT embeddings'], ['source', 'BERT-Large-cased'], ['dim', '1024'], ['[BOLD] POS tag embeddings', '[BOLD] POS tag embeddings'], ['dim', '100'], ['[BOLD] Anonymization indicator embeddings', '[BOLD] Anonymization indicator embeddings'], ['dim', '50'], ['[BOLD] Index embeddings', '[BOLD] Index embeddings'], ['dim', '50'], ['[BOLD] CharCNN', '[BOLD] CharCNN'], ['num_filters', '100'], ['ngram_filter_sizes', '[3]'], ['[BOLD] Encoder', '[BOLD] Encoder'], ['hidden_size', '512'], ['num_layers', '2'], ['[BOLD] Decoder', '[BOLD] Decoder'], ['hidden_size', '1024'], ['num_layers', '2'], ['[BOLD] Deep biaffine classifier', '[BOLD] Deep biaffine classifier'], ['edge_hidden_size', '256'], ['label_hidden_size', '128'], ['[BOLD] Optimizer', '[BOLD] Optimizer'], ['type', 'ADAM'], ['learning_rate', '0.001'], ['max_grad_norm', '5.0'], ['[BOLD] Coverage loss weight [ITALIC] λ', '1.0'], ['[BOLD] Beam size', '5'], ['[BOLD] Vocabulary', '[BOLD] Vocabulary'], ['encoder_vocab_size (AMR 2.0)', '18000'], ['decoder_vocab_size (AMR 2.0)', '12200'], ['encoder_vocab_size (AMR 1.0)', '9200'], ['decoder_vocab_size (AMR 1.0)', '7300'], ['[BOLD] Batch size', '64']]
Both encoder and decoder embedding layers have GloVe and POS tag embeddings as well as CharCNN, but their parameters are not tied. We apply dropout (dropout_rate = 0.33) to the outputs of each module.
AMR Parsing as Sequence-to-Graph Transduction
1905.08704
Table 3: Fine-grained F1 scores on the AMR 2.0 test set. vN’17 is van Noord and Bos (2017b); L’18 is Lyu and Titov (2018); N’19 is Naseem et al. (2019).
['Metric', 'vN’18', 'L’18', 'N’19', 'Ours']
[['Smatch', '71.0', '74.4', '75.5', '[BOLD] 76.3±0.1'], ['Unlabeled', '74', '77', '[BOLD] 80', '79.0±0.1'], ['No WSD', '72', '76', '76', '[BOLD] 76.8±0.1'], ['Reentrancies', '52', '52', '56', '[BOLD] 60.0±0.1'], ['Concepts', '82', '[BOLD] 86', '[BOLD] 86', '84.8±0.1'], ['Named Ent.', '79', '[BOLD] 86', '83', '77.9±0.2'], ['Wikification', '65', '76', '80', '[BOLD] 85.8±0.3'], ['Negation', '62', '58', '67', '[BOLD] 75.2±0.2'], ['SRL', '66', '70', '[BOLD] 72', '69.7±0.2']]
We assess the quality of each subtask using the AMR-evaluation tools of Damonte et al. We see a notable increase on reentrancies, which we attribute to target-side copy (based on our ablation studies in the next section). Significant increases are also shown on wikification and negation, indicating the benefits of using the DBpedia Spotlight API and negation detection rules in post-processing.
AMR Parsing as Sequence-to-Graph Transduction
1905.08704
Table 4: Ablation studies on components of our model. (Scores are sorted by the delta from the full model.)
['Ablation', 'AMR 1.0', 'AMR 2.0']
[['Full model', '70.2', '76.3'], ['no source-side copy', '62.7', '70.9'], ['no target-side copy', '66.2', '71.6'], ['no coverage loss', '68.5', '74.5'], ['no BERT embeddings', '68.8', '74.6'], ['no index embeddings', '68.5', '75.5'], ['no anonym. indicator embed.', '68.9', '75.6'], ['no beam search', '69.2', '75.3'], ['no POS tag embeddings', '69.2', '75.7'], ['no CharCNN features', '70.0', '75.8'], ['only edge prediction', '88.4', '90.9']]
Ablation study: removing target-side copy also leads to a large drop. Specifically, the reentrancies subtask score drops to 38.4% when target-side copy is disabled. The coverage loss is useful for discouraging unnecessary repetitive nodes. In addition, our model benefits from input features such as language representations from BERT, index embeddings, POS tags, anonymization indicators, and character-level features from CharCNN. Beam search, commonly used in machine translation, is also helpful in our model. We provide side-by-side examples in the Appendix to further illustrate the contribution of each component; these are largely intuitive, with the exception of BERT embeddings, where the exact contribution of the component (qualitatively, before/after ablation) stands out less. Future work might consider a probing analysis with manually constructed examples, in the spirit of Linzen et al. and Conneau et al.
AMR Parsing as Sequence-to-Graph Transduction
1905.08704
Table 5: Smatch scores of full models trained and tested based on different node linearization strategies.
['Node Linearization', 'AMR 1.0', 'AMR 2.0']
[['Pre-order + Alphanum', '70.2', '76.3'], ['Pre-order + Alignment', '61.9', '68.3'], ['Pure Alignment', '64.3', '71.3']]
Alignments are created using the tool by Pourdamghani et al. Clearly, our linearization strategy leads to much better results than the two alternates. We also tried other traversal strategies such as combining in-order traversal with alphanumerical sorting or alignment-based sorting, but did not get scores even comparable to the two alternates.
Modeling Word Emotion in Historical Language: Quantity Beats Supposed Stability in Seed Word Selection
1806.08115
Table 4: Results of the synchronic evaluation in Pearson’s r averaged over all three VAD dimensions. The best system for each seed lexicon and those with statistically non-significant differences (p ≥ 0.05) are in bold.
['Induction Method', 'Seed Selection', 'SVDPPMI', 'SGNS']
[['kNN', 'full', '[BOLD] 0.548', '0.487'], ['ParaSimNum', 'full', '[BOLD] 0.557', '0.489'], ['RandomWalkNum', 'full', '[BOLD] 0.544', '0.436'], ['kNN', 'limited', '0.181', '0.166'], ['ParaSimNum', 'limited', '0.249', '0.191'], ['RandomWalkNum', 'limited', '[BOLD] 0.330', '0.181']]
SGNS embeddings are worse than SVDPPMI embeddings for both the full and the limited seed lexicons. SVDPPMI embeddings seem better suited for induction based on the full seed set, leading to the highest observed correlation with ParaSimNum; however, results with the other induction algorithms are not significantly different. For the limited seed set, RandomWalkNum performs best, consistent with claims by Hamilton et al. However, all results with the limited seed set are far (and significantly) worse than those with the full seed lexicon. For English, using the full seed lexicons, we achieve performance figures around r=.35. In contrast, using the limited seed lexicon, we find that performance is markedly weaker in each of our six conditions compared to the full seed lexicon. This observation directly opposes the claims of Hamilton et al.
Modeling Word Emotion in Historical Language: Quantity Beats Supposed Stability in Seed Word Selection
1806.08115
Table 2: Inter-annotator agreement for our English (goldEN) and German (goldDE) gold standard, as well as the lexicon by Warriner13 for comparision; Averaged standard deviation of ratings for each VAD dimension and mean over all dimensions.
['[EMPTY]', 'Valence', 'Arousal', 'Dominance', 'Mean']
[['goldEN', '1.20', '1.08', '1.41', '1.23'], ['goldDE', '1.72', '1.56', '2.31', '1.86'], ['Warriner', '1.68', '2.30', '2.16', '2.05']]
We measure inter-annotator agreement (IAA) by calculating the standard deviation (SD) of ratings for each word and dimension and averaging these values, first within each dimension and then across the dimension-level aggregates, yielding an error-based score (the lower, the better). In comparison with the lexicon by Warriner13, our gold standard displays higher rating consistency. Averaged over all three VAD dimensions, our lexicon displays an IAA of 1.23 and 1.86 for English and German, respectively, compared to 2.05 as reported by Warriner13. This suggests that experts show higher consensus, even when judging word emotions for a historical language period, than crowdworkers do for contemporary language. An alternative explanation might be differences in word material, i.e., our random sample of frequent words.
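A minimal sketch of this error-based agreement score, assuming the ratings are arranged as a (words x annotators x dimensions) array, is shown below; the array layout is an assumption for illustration.

```python
import numpy as np


def iaa_sd(ratings):
    """ratings: array of shape (n_words, n_annotators, 3) for the V, A, D dimensions."""
    per_word_sd = ratings.std(axis=1)      # SD across annotators, per word and dimension
    per_dim = per_word_sd.mean(axis=0)     # average over words -> one score per dimension
    return per_dim, per_dim.mean()         # dimension-wise scores and the overall mean
```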
Neural Aspect and Opinion Term Extraction with Mined Rules as Weak Supervision
1907.03750
Table 6: Aspect and opinion term extraction performance.
['Approach', 'SE14-R Aspect', 'SE14-R Opinion', 'SE14-L Aspect', 'SE14-L Opinion', 'SE15-R Aspect', 'SE15-R Opinion']
[['BiLSTM-CRF + word2vec', '84.06', '84.59', '73.47', '75.41', '66.17', '68.16'], ['BERT fine-tuning', '84.36', '85.50', '75.67', '79.75', '65.84', '74.21'], ['BERT feature-based', '85.14', '85.74', '76.81', '81.41', '66.84', '73.92'], ['RINANTE+BERT', '[BOLD] 85.51', '[BOLD] 86.82', '[BOLD] 79.93', '[BOLD] 82.09', '[BOLD] 68.50', '[BOLD] 74.54']]
We can see that using BERT yields better performance than using word2vec. RINANTE is still able to further improve the performance when contextual embeddings obtained with BERT are used.