Columns: paper (string, 0–839 chars), paper_id (string, 1–12 chars), table_caption (string, 3–2.35k chars), table_column_names (string, 13–1.76k chars), table_content_values (string, 2–11.9k chars), text (string, 69–2.82k chars)
Phoneme Classification in High-Dimensional Linear Feature Domains
1312.6849
TABLE III: Existing error rates obtained in other studies for a range of classification methods on the TIMIT core test set. Results in this paper are most comparable to the GMM baselines.
['[BOLD] Method', '[BOLD] Error [%]']
[['HMM (Minimum Classification Error)\xa0', '31.4'], ['GMM baseline\xa0', '26.3'], ['GMM baseline\xa0', '24.1'], ['GMM baseline\xa0', '23.4'], ['[BOLD] GMM ( [ITALIC] f-average + sector sum) PLP+Δ+ΔΔ', '[BOLD] 18.5'], ['SVM, 5th order polynomial kernel\xa0', '22.4'], ['Large Margin GMM (LMGMM)\xa0', '21.1'], ['Regularized least squares\xa0', '20.9'], ['Hidden conditional random fields\xa0', '20.8'], ['Hierarchical LMGMM H(2,4)\xa0', '18.7'], ['Optimum-transformed HMM with context (THMM) ', '17.8'], ['Committee hierarchical LMGMM H(2,4)\xa0', '16.7']]
We see that the best results for acoustic waveform classifiers are achieved around 9 frames, and around 11 frames for PLP without deltas. The PLP+Δ+ΔΔ features are less sensitive to the number of frames, with little difference in error from 1 to 13 frames. We can now also assess quantitatively the performance benefit of including the deltas. If we compare the best result obtained for PLP without deltas (22.4% using 11 frames) with the best for PLP+Δ+ΔΔ (21.8% with 7 frames), the performance gap of 0.6% is much smaller than if we were to compare error rates where both classifiers used the same number of frames. Clearly it is not surprising that fewer PLP+Δ+ΔΔ frames are required for the same level of performance, since the deltas are a direct function of the neighbouring PLP frames. It is still worth noting that in terms of the ultimate performance on this classification task the error rates with and without deltas are similar. The error rates obtained using the f-average over the five best values of f are 32.1%, 21.4% and 18.5% for acoustic waveforms, PLP and PLP+Δ+ΔΔ respectively. Given the encouraging results from these experiments on a small set of phonemes, we progressed to a more realistic task and extended the classification problem to include all phonemes from the TIMIT database. All of the entries show the error for isolated phoneme classification except for the optimum-transformed HMM (THMM). The inclusion of context for the HMM classifiers reduces the error rate from 31.4% to 17.8%. This dramatic reduction suggests that if the other classifiers were also developed to directly incorporate contextual information, significant improvements could be expected.
SenGen: Sentence Generating Neural Variational Topic Model
1708.00308
Table 1: Perplexity comparison of various models on two different datasets. All models are configured to use 25 topics. Lower is better.
['Model', '20 Newsgroups', 'CNN/Daily Mail']
[['LDA (Blei et\xa0al., 2003 )', '1247', '776'], ['NVDM (Miao et\xa0al., 2015 )', '757', '435'], ['NVLDA (Srivastava & Sutton, 2017 )', '1213', '592'], ['ProdLDA (Srivastava & Sutton, 2017 )', '1695', '735'], ['SenGen (Our Model)', '2354', '671']]
We compute the perplexity of the test dataset using the trained SenGen model as follows: \mathrm{Perplexity} = \frac{1}{N}\sum_{d=1}^{N}\exp\!\left(-\frac{\log P(w_d\mid\beta)}{N_d}\right) (9), where the log probability is computed using the lower-bound estimate in Eq. In the above equation, N is the number of test documents, and N_d is the number of words in document d. On 20 Newsgroups, we suspect the main reason for our model's high perplexity is the difference in preprocessing between our work and that of others – we noticed that there are many non-dictionary terms in our vocabulary that originated from email signatures and headers. Another potential reason is that the model may be overfitting the training set due to the extremely large number of parameters.
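To make Eq. (9) concrete, here is a minimal Python sketch of the perplexity computation, assuming the per-document log probabilities (e.g., the variational lower-bound values) and document lengths are already available; the function name and inputs are illustrative, not the authors' code.

```python
import math

def corpus_perplexity(doc_log_probs, doc_lengths):
    """Perplexity = (1/N) * sum_d exp(-log P(w_d | beta) / N_d).

    doc_log_probs: list of log P(w_d | beta) values, e.g. the lower-bound
                   estimate for each test document d.
    doc_lengths:   list of N_d, the number of words in each document d.
    """
    assert len(doc_log_probs) == len(doc_lengths)
    n_docs = len(doc_log_probs)
    per_doc = [math.exp(-lp / nd) for lp, nd in zip(doc_log_probs, doc_lengths)]
    return sum(per_doc) / n_docs

# toy usage: two documents with 100 and 50 tokens
print(corpus_perplexity([-650.0, -300.0], [100, 50]))
```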
Syntax-Enhanced Neural Machine Translation with Syntax-Aware Word Representations
1905.02878
Table 3: The influence of fine-tuning parser parameters in the SAWR system.
['Parser', 'MT03', 'MT04', 'MT05', 'MT06', 'Average']
[['no Tune', '[BOLD] 38.42', '[BOLD] 40.60', '[BOLD] 38.27', '[BOLD] 38.04', '[BOLD] 38.83'], ['Tune', '37.33', '39.45', '36.93', '37.03', '37.69']]
As an interesting attempt, we simultaneously fine-tune the parameters of both the parser and the Seq2Seq NMT model during training. We can see that fine-tuning significantly decreases the average BLEU score, by 38.83−37.69=1.14 points. This may be because fine-tuning disrupts the representation ability of the parser and makes its function overlap more with other network components. This further demonstrates that pretrained syntax-aware word representations are helpful for NMT.
Syntax-Enhanced Neural Machine Translation with Syntax-Aware Word Representations
1905.02878
Table 4: Ensemble performances, where the Hybrid model denotes SAWR + Tree-RNN + Tree-Linearization.
['System', 'MT03', 'MT04', 'MT05', 'MT06', 'Average/Δ']
[['Baseline×3', '40.90', '43.25', '40.64', '40.16', '41.24'], ['[BOLD] SAWR×3', '41.94', '44.59', '41.91', '41.97', '42.60/+1.36'], ['Tree-RNN×3', '42.03', '44.15', '41.50', '41.41', '42.27/+1.03'], ['Tree-Linearization×3', '41.74', '44.23', '41.32', '41.44', '42.18/+0.94'], ['[BOLD] Hybrid', '[BOLD] 42.72', '[BOLD] 45.14', '[BOLD] 42.38', '[BOLD] 42.15', '[BOLD] 43.10/+1.86']]
First, we can see that ensembling is an effective technique for improving translation performance. More importantly, the results show that the heterogeneous ensemble achieves an average BLEU improvement of 43.10−41.24=1.86 points, better than the gains achieved by all three homogeneous ensembles, indicating that the three approaches are mutually complementary in representing dependency syntax and that the resulting models of the three approaches are highly diverse.
Syntax-Enhanced Neural Machine Translation with Syntax-Aware Word Representations
1905.02878
Table 5: Final results based on the transformer. Only the SAWR results are significantly better (p<0.05).
['System', 'MT03', 'MT04', 'MT05', 'MT06', 'Average/Δ']
[['Transformer', '40.45', '42.76', '40.09', '39.67', '40.74'], ['[BOLD] SAWR', '[BOLD] 41.63', '[BOLD] 43.60', '[BOLD] 41.68', '[BOLD] 40.21', '[BOLD] 41.78/+1.04'], ['Tree-RNN', '41.24', '43.38', '41.04', '40.02', '41.42/+0.68'], ['Tree-Linearization', '41.12', '43.02', '41.04', '39.86', '41.26/+0.52']]
As shown, the transformer results are indeed much better than the RNN-based baseline, with an average BLEU increase of 40.74−37.09=3.65. In addition, we can see that syntax information still gives positive gains on top of the transformer. The SAWR approach also outperforms the baseline system significantly. In particular, we find that our SAWR approach is much more effective than the Tree-RNN and Tree-Linearization approaches. The results further demonstrate the effectiveness of SAWRs in syntax integration for NMT.
Handling Syntactic Divergence in Low-resource Machine Translation
1909.00040
Table 3: BLEU of our approach (Reorder) with different amount of parallel sentences of ja-en and ug-en translation. Baselines are supervised learning (sup), supervised learning with back translation (back) and data augmentation with translated original English sentences (No-Reorder).
['Model', '3k NMT', '3k SMT', '6k NMT', '6k SMT', '10k NMT', '10k SMT', '20k NMT', '20k SMT', '400k NMT', '400k SMT', 'ug NMT', 'ug SMT']
[['sup', '2.17', '6.36', '7.86', '8.70', '11.67', '10.68', '15.98', '12.11', '26.56', '18.62', '0.58', '1.46'], ['back', '2.27', '8.46', '5.40', '10.61', '13.50', '12.05', '16.05', '13.68', '–', '–', '0.42', '1.37'], ['No-Reorder', '6.46', '3.08', '9.73', '5.24', '12.57', '6.72', '15.56', '8.96', '–', '–', '3.24', '1.67'], ['Reorder', '[BOLD] 9.94', '6.23', '[BOLD] 12.42', '8.14', '[BOLD] 14.98', '9.22', '[BOLD] 17.58', '11.21', '–', '–', '[BOLD] 4.17', '1.07']]
We present the full results in Tab. For SMT, reordering has much better performance than no-reorder, but still lags behind the supervised counterpart.
Graph Neural News Recommendation with Long-term and Short-term Interest Modeling
1910.14025
Table 2: Comparison of Different Models
['Model', 'Adressa-1week AUC(%)', 'Adressa-1week F1(%)', 'Adressa-10week AUC(%)', 'Adressa-10week F1(%)']
[['DMF', '55.66', '56.46', '53.20', '54.15'], ['DeepWide', '68.25', '69.32', '73.28', '69.52'], ['DeepFM', '69.09', '61.48', '74.04', '65.82'], ['DKN', '75.57', '76.11', '74.32', '72.29'], ['DAN', '75.93', '74.01', '76.76', '71.65'], ['GNewsRec', '[BOLD] 81.16', '[BOLD] 82.85', '[BOLD] 78.62', '[BOLD] 81.01']]
We attribute the significant superiority of our model to its three advantages: (1) Our model constructs a heterogeneous user-news-topic graph and learns better user and news embeddings with high-order information encoded by GNN. (2) Our model considers not only the long-term user interest but also the short-term interest. (3) The topic information incorporated in the heterogeneous graph can help better reflect a user’s interest and alleviate the sparsity issue of user-item interactions. The news items with few user clicks can still aggregate neighboring information through the topics.
Graph Neural News Recommendation with Long-term and Short-term Interest Modeling
1910.14025
Table 4: Impact of different GNN layers of GNewsRec.
['Model', 'Adressa-1week AUC(%)', 'Adressa-1week F1(%)', 'Adressa-10week AUC(%)', 'Adressa-10week F1(%)']
[['GNewsRec-1 layer', '75.24', '72.17', '76.17', '71.92'], ['GNewsRec-2 layers', '[BOLD] 81.16', '[BOLD] 82.85', '[BOLD] 78.62', '[BOLD] 81.01'], ['GNewsRec-3 layers', '78.94', '80.36', '77.92', '80.11']]
We vary the number of GNN layers from 1 to 3, and the 2-layer model performs best. This is because a 1-layer GNN cannot capture the higher-order relationships between users and news, while a 3-layer GNN may bring massive noise into the model. Thus, we choose a 2-layer GNN in our model GNewsRec.
How Document Pre-processing affects Keyphrase Extraction Performance
1610.07809
Table 2: Maximum recall and average number of keyphrase candidates for each model.
['[BOLD] Model', '[BOLD] Lvl 1 Max. recall', '[BOLD] Lvl 1 Candidates', '[BOLD] Lvl 2 Max. recall', '[BOLD] Lvl 2 Candidates', '[BOLD] Lvl 3 Max. recall', '[BOLD] Lvl 3 Candidates']
[['TF×IDF', '80.2%', '7\u2009837', '78.2%', '6\u2009958', '67.8%', '2\u2009270'], ['Kea', '80.2%', '3\u2009026', '78.2%', '2\u2009502', '67.8%', '912'], ['TopicRank', '70.9%', '742', '69.2%', '627', '57.8%', '241'], ['KP-Miner', '64.0%', '724', '61.8%', '599', '48.7%', '212'], ['WINGNUS', '75.2%', '1\u2009355', '73.0%', '1\u2009007', '63.0%', '403']]
Each model uses a distinct keyphrase candidate selection method that provides a trade-off between the highest attainable recall and the size of the candidate set. Syntax-based selection heuristics, as used by TopicRank and WINGNUS, are better suited to pruning candidates that are unlikely to be keyphrases. As for KP-Miner, removing infrequent candidates may seem rather blunt, but it turns out to be a simple yet effective pruning method when dealing with long documents. For details on how candidate selection methods affect keyphrase extraction, please refer to [Wang:2014:PAU:2649459.2649475].
Meta Multi-Task Learning for Sequence Modeling
1802.08969
Table 5: Accuracy rates of our models on three tasks for sequence tagging.† means evaluated by F1 score(%), ‡ means evaluated by accuracy(%). ⧫ is the model implemented in [Huang, Xu, and Yu2015] .
['[EMPTY]', 'CoNLL2000†', 'CoNLL2003†', 'WSJ‡']
[['Single Task Model:', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['LSTM+CRF⧫', '93.67', '89.91', '97.25'], ['Meta-LSTM+CRF', '93.71', '[BOLD] 90.08', '[BOLD] 97.30'], ['collobert2011natural\xa0(collobert2011natural)', '94.32', '89.59', '97.29'], ['Multi-Task Model:', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['LSTM-SSP-MTL+CRF', '94.32', '90.38', '97.23'], ['Meta-LSTM-MTL+CRF', '[BOLD] 95.11', '[BOLD] 90.72', '[BOLD] 97.45']]
As shown, our proposed Meta-LSTM performs better than the competitor models in both the single-task and multi-task learning settings.
Meta Multi-Task Learning for Sequence Modeling
1802.08969
Table 3: Accuracies of our models on 16 datasets against typical baselines. The numbers in brackets represent the improvements relative to the average performance (Avg.) of three single task baselines. ∗is from [Liu, Qiu, and Huang2017]
['[BOLD] Task', '[BOLD] Single Task LSTM', '[BOLD] Single Task HyperLSTM', '[BOLD] Single Task MetaLSTM', '[BOLD] Single Task Avg.', '[BOLD] Multiple Tasks ASP-MTL∗', '[BOLD] Multiple Tasks PSP-MTL', '[BOLD] Multiple Tasks SSP-MTL', '[BOLD] Multiple Tasks Meta-MTL(ours)', '[BOLD] Transfer Meta-MTL(ours)']
[['Books', '79.5', '78.3', '83.0', '80.2', '87.0', '84.3', '85.3', '87.5', '86.3'], ['Electronics', '80.5', '80.7', '82.3', '81.2', '89.0', '85.7', '87.5', '89.5', '86.0'], ['DVD', '81.7', '80.3', '82.3', '81.4', '87.4', '83.0', '86.5', '88.0', '86.5'], ['Kitchen', '78.0', '80.0', '83.3', '80.4', '87.2', '84.5', '86.5', '91.3', '86.3'], ['Apparel', '83.2', '85.8', '86.5', '85.2', '88.7', '83.7', '86.0', '87.0', '86.0'], ['Camera', '85.2', '88.3', '88.3', '87.2', '91.3', '86.5', '87.5', '89.7', '87.0'], ['Health', '84.5', '84.0', '86.3', '84.9', '88.1', '86.5', '87.5', '90.3', '88.7'], ['Music', '76.7', '78.5', '80.0', '78.4', '82.6', '81.3', '85.7', '86.3', '85.7'], ['Toys', '83.2', '83.7', '84.3', '83.7', '88.8', '83.5', '87.0', '88.5', '85.3'], ['Video', '81.5', '83.7', '84.3', '83.1', '85.5', '83.3', '85.5', '88.3', '85.5'], ['Baby', '84.7', '85.5', '84.0', '84.7', '89.8', '86.5', '87.0', '88.0', '86.0'], ['Magazines', '89.2', '91.3', '92.3', '90.9', '92.4', '88.3', '88.0', '91.0', '90.3'], ['Software', '84.7', '86.5', '88.3', '86.5', '87.3', '84.0', '86.0', '88.5', '86.5'], ['Sports', '81.7', '82.0', '82.5', '82.1', '86.7', '82.0', '85.0', '86.7', '85.7'], ['IMDB', '81.7', '77.0', '83.5', '80.7', '85.8', '82.0', '84.5', '88.0', '87.3'], ['MR', '72.7', '73.0', '74.3', '73.3', '77.3', '74.5', '75.7', '77.0', '75.5'], ['AVG', '81.8', '82.4', '84.0', '82.8', '87.2(+4.4)', '83.7(+0.9)', '85.7(+2.9)', '[BOLD] 87.9(+5.1)', '85.9(+3.1)'], ['Parameters', '120 [ITALIC] K', '321 [ITALIC] K', '134 [ITALIC] K', '[EMPTY]', '5490 [ITALIC] k', '2056 [ITALIC] K', '1411 [ITALIC] K', '1339 [ITALIC] K', '1339 [ITALIC] K']]
With the help of meta knowledge, we observe an average improvement of 3.1% over the average accuracy of the single-task models, even better than other competitor multi-task models. This observation indicates that we can store the meta knowledge in a meta network, which is quite useful for a new task.
Neutralizing Gender Bias in Word Embedding with Latent Disentanglement and Counterfactual Generation
2004.03133
Table 2: WEAT hypothesis test results for five popular gender-biased word categories. The best performing model is indicated as boldface. The second-best model is indicated as underline. The absolute value of the effect size denotes the degree of bias, and the p-value denotes the statistical significance of the results.
['Embeddings', 'B1 : career vs family p-value', 'B1 : career vs family Effect size', 'B2 : maths vs arts p-value', 'B2 : maths vs arts Effect size', 'B3 : science vs arts p-value', 'B3 : science vs arts Effect size', 'B4 : intelligence vs appearance p-value', 'B4 : intelligence vs appearance Effect size', 'B5 : strength vs weakness p-value', 'B5 : strength vs weakness Effect size']
[['GloVe', '0.000', '1.605', '0.276', '0.494', '0.014', '1.260', '0.009', '0.706', '0.067', '0.640'], ['Hard-GloVe', '0.100', '0.842', '0.090', '-1.043', '0.003', '-0.747', '[BOLD] 0.693', '[BOLD] -0.121', '0.255', '0.400'], ['GN-GloVe', '0.000', '1.635', '0.726', '-0.169', '0.081', '1.007', '0.037', '0.595', '0.083', '0.620'], ['ATT-GloVe', '0.612', '0.255', '0.007', '-0.519', '0.000', '0.843', '0.129', '0.440', '0.211', '0.455'], ['CPT-GloVe', '0.004', '1.334', '0.058', '1.029', '0.000', '1.417', '0.001', '0.906', '0.654', '-0.172'], ['AE-GloVe', '0.000', '1.569', '0.019', '0.967', '0.024', '1.267', '0.007', '0.729', '0.027', '0.763'], ['AE-GN', '0.001', '1.581', '0.716', '0.317', '0.139', '0.639', '0.006', '0.770', '0.028', '0.585'], ['GP-GloVe', '0.000', '1.567', '0.019', '0.966', '0.027', '1.253', '0.006', '0.733', '0.028', '0.758'], ['GP-GN', '0.000', '1.599', '[BOLD] 0.932', '[BOLD] 0.109', '0.251', '0.591', '0.004', '0.791', '0.098', '0.610'], ['CF-GloVe', '[BOLD] 0.874', '[BOLD] -0.089', '0.669', '-0.125', '[BOLD] 0.360', '[BOLD] 0.480', '0.678', '-0.124', '[BOLD] 0.970', '[BOLD] 0.013']]
To quantify the degree of gender bias, we apply the Word Embedding Association Test (WEAT) of Caliskan et al. WEAT measures the effect size and the hypothesis test statistics based on the gender-definition words and well-known gender-stereotypical word sets, such as strength vs. weakness. While all baseline models record worse performance than GloVe on at least one of the categories, our model consistently shows better performance than GloVe.
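For reference, a minimal numpy sketch of the WEAT effect size as described by Caliskan et al., assuming X and Y are the two target word sets (e.g., career vs. family terms), A and B are the male and female attribute word sets, and emb maps words to vectors; the reported p-values would additionally require a permutation test, which is omitted here.

```python
import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B, emb):
    # s(w, A, B) = mean_a cos(w, a) - mean_b cos(w, b)
    return (np.mean([cos(emb[w], emb[a]) for a in A])
            - np.mean([cos(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    # d = [mean_x s(x,A,B) - mean_y s(y,A,B)] / std_{w in X∪Y} s(w,A,B)
    s_X = [association(x, A, B, emb) for x in X]
    s_Y = [association(y, A, B, emb) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)
```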
Neutralizing Gender Bias in Word Embedding with Latent Disentanglement and Counterfactual Generation
2004.03133
Table 1: Percentage of predictions for each category on gender relational analogy task. We can expect a high percentage for Definition and low percentages for Stereotype and None for well-debiased word embeddings. † and ∗ denote the statistically significant differences comparing with Hard-GloVe and Glove, respectively. The best performing model is indicated as boldface.
['Embeddings', 'Sembias Definition ↑', 'Sembias Stereotype ↓', 'Sembias None ↓', 'Sembias subset Definition ↑', 'Sembias subset Stereotype ↓', 'Sembias subset None ↓']
[['GloVe', '80.22', '10.91', '8.86', '57.5', '20.0', '22.5'], ['Hard-Glove', '87.95∗', '8.41', '3.64∗', '50.0', '32.5', '17.5'], ['GN-GloVe', '97.73†∗', '1.36†∗', '0.91†∗', '75.0†', '15.0', '10.0'], ['ATT-GloVe', '80.22', '10.68', '9.09', '60.0', '17.5', '22.5'], ['CPT-GloVe', '73.63', '5.68', '20.68', '45.0', '12.5', '42.5'], ['AE-GloVe', '84.09', '7.95', '7.95', '65.0', '15.0', '20.0'], ['AE-GN', '98.18†∗', '1.14†∗', '0.68†∗', '80.0†∗', '12.5†', '7.5'], ['GP-GloVe', '84.09', '8.18', '7.73', '65.0†∗', '15.0', '20.0'], ['GP-GN', '98.41†∗', '1.14†∗', '0.45†∗', '82.5†∗', '12.5†', '5.0∗'], ['CF-GloVe', '[BOLD] 100.00†∗', '[BOLD] 0.00†∗', '[BOLD] 0.00†∗', '[BOLD] 100.0†∗', '[BOLD] 0.0†∗', '[BOLD] 0.0†∗']]
4.3.1 Sembias Analogy Test. We perform the gender relational analogy test with the Sembias dataset of Zhao et al. and Jurgens et al. The dataset contains 440 instances, and each instance consists of four pairs of words: 1) a gender-definition word pair (Definition), 2) a gender-stereotype word pair (Stereotype), and 3,4) two none-type word pairs (None). A tested model chooses the word pair (a, b) whose difference vector, \vec{a}-\vec{b}, has the highest cosine similarity with \vec{he}-\vec{she} as its classification of the gender-definition word pair. Following the past practice of Zhao et al., we also evaluate on a Sembias subset whose gender-definition word pairs are not used for training. Our model correctly selects all the gender-definition word pairs, which demonstrates the maintenance of the gender latent information for those words. Also, our model selects neither gender-stereotype words nor none-type words, so the difference vector \vec{a}-\vec{b} has a minimal linear correlation with those words after applying our debiasing method.
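The selection rule used in this test can be sketched as follows, assuming emb is a word-to-vector lookup; this is an illustrative reimplementation, not the authors' evaluation script.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def predict_definition_pair(instance, emb):
    """Pick the word pair (a, b) whose difference vector a - b is most
    similar to the gender direction he - she.

    instance: list of four (a, b) word pairs (Definition, Stereotype, None, None)
    emb:      dict mapping a word to its embedding vector (numpy array)
    """
    gender_dir = emb["he"] - emb["she"]
    scores = [cosine(emb[a] - emb[b], gender_dir) for a, b in instance]
    return instance[int(np.argmax(scores))]
```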
Higher-order Coreference Resolution with Coarse-to-fine Inference
1804.05392
Table 1: Results on the test set on the English CoNLL-2012 shared task. The average F1 of MUC, B3, and CEAFϕ4is the main evaluation metric. We show only non-ensembled models for fair comparison.
['[EMPTY]', 'MUC Prec.', 'MUC Rec.', 'MUC F1', 'B3 Prec.', 'B3 Rec.', 'B3 F1', 'CEAF [ITALIC] ϕ4 Prec.', 'CEAF [ITALIC] ϕ4 Rec.', 'CEAF [ITALIC] ϕ4 F1', 'Avg. F1']
[['martschat:2015', '76.7', '68.1', '72.2', '66.1', '54.2', '59.6', '59.5', '52.3', '55.7', '62.5'], ['clark:2015', '76.1', '69.4', '72.6', '65.6', '56.0', '60.4', '59.4', '53.0', '56.0', '63.0'], ['wiseman:2015', '76.2', '69.3', '72.6', '66.2', '55.8', '60.5', '59.4', '54.9', '57.1', '63.4'], ['wiseman:2016', '77.5', '69.8', '73.4', '66.8', '57.0', '61.5', '62.1', '53.9', '57.7', '64.2'], ['clark:2016a', '79.9', '69.3', '74.2', '71.0', '56.5', '63.0', '63.8', '54.3', '58.7', '65.3'], ['clark:2016b', '79.2', '70.4', '74.6', '69.9', '58.0', '63.4', '63.5', '55.5', '59.2', '65.7'], ['e2e-coref', '78.4', '73.4', '75.8', '68.6', '61.8', '65.0', '62.7', '59.0', '60.8', '67.2'], ['+ ELMo\xa0Peters et\xa0al. ( 2018 )', '80.1', '77.2', '78.6', '69.8', '66.5', '68.1', '66.4', '62.9', '64.6', '70.4'], ['+ hyperparameter tuning', '80.7', '78.8', '79.8', '71.7', '68.7', '70.2', '67.2', '66.8', '67.0', '72.3'], ['+ coarse-to-fine inference', '80.4', '[BOLD] 79.9', '80.1', '71.0', '[BOLD] 70.0', '70.5', '67.5', '[BOLD] 67.2', '67.3', '72.6'], ['+ second-order inference', '[BOLD] 81.4', '79.5', '[BOLD] 80.4', '[BOLD] 72.2', '69.5', '[BOLD] 70.8', '[BOLD] 68.2', '67.1', '[BOLD] 67.6', '[BOLD] 73.0']]
We include the performance of systems proposed in the past 3 years for reference. The baseline relative to our contributions is the span-ranking model from e2e-coref augmented with both ELMo and hyperparameter tuning, which achieves 72.3 F1. Our full approach achieves 73.0 F1, setting a new state of the art for coreference resolution.
Query-based Attention CNN for Text Similarity Map
1709.05036
Table 2: Experiment result
['Models', 'dev set', 'test set']
[['One stage QACNN', '66.8', '-'], ['QACNN(no attention)', '69.6', '-'], ['QACNN(only word-level attention)', '72.5', '-'], ['QACNN(only sentence-level attention)', '75.1', '-'], ['QACNN(single)', '77.6', '75.84'], ['[BOLD] QACNN(ensemble)', '[BOLD] 79.0', '[BOLD] 79.99']]
In this experiment, we focused on the difference between the one-stage QACNN and the two-stage QACNN. For the one-stage QACNN, we did not split the passage into sentences. That is, the passage-query similarity map PQ and the passage-choice similarity map PC are 2D rather than 3D. We convolved them directly at the word level and output the passage feature without a second stage involved. The result shows that the modified one-stage QACNN reaches 66.8% accuracy on the validation set, which is ten percent lower than 78.1%, the original QACNN accuracy on the validation set. However, this modified model suffers from a deficiency of query information. Therefore, we concatenated the final output representations of PQ and PC together before the prediction layer. The result is still almost ten percent lower than the original one. Finally, instead of removing sentence-level attention, we removed word-level attention from the QACNN layer. We can see that QACNN with only word-level attention performs better than QACNN without attention; QACNN with only sentence-level attention performs better than QACNN with only word-level attention; and the original QACNN, which contains both word-level and sentence-level attention, does the best job among all. Thus, both word-level and sentence-level attention contribute to the performance of QACNN, with sentence-level attention appearing to play the more important role.
Look at the First Sentence:Position Bias in Question Answering
2004.14602
Table 3: Position bias in different positions. Each model is trained on a biased SQuAD dataset (SQuADktrain) and evaluated on SQuADdev.
['[EMPTY]', '[BOLD] SQuADdev EM', '[BOLD] SQuADdev F1', '[BOLD] SQuADdev EM', '[BOLD] SQuADdev F1', '[BOLD] SQuADdev EM', '[BOLD] SQuADdev F1', '[BOLD] SQuADdev EM', '[BOLD] SQuADdev F1']
[['[BOLD] SQuAD [ITALIC] ktrain', '[ITALIC] k=2', '[ITALIC] k=2', '[ITALIC] k=3', '[ITALIC] k=3', '[ITALIC] k=4', '[ITALIC] k=4', '[ITALIC] k=5,6,...', '[ITALIC] k=5,6,...'], ['[BOLD] SQuAD [ITALIC] ktrain', '(20,593 samples)', '(20,593 samples)', '(15,567 samples)', '(15,567 samples)', '(10,379 samples)', '(10,379 samples)', '(12,610 samples)', '(12,610 samples)'], ['[BOLD] BERT', '34.52', '41.39', '46.15', '54.08', '51.24', '59.98', '57.11', '66.17'], ['+Bias Product', '43.89', '51.35', '68.50', '67.27', '53.68', '62.83', '58.77', '67.56'], ['+Learned-Mixin', '[BOLD] 71.12', '[BOLD] 79.92', '[BOLD] 69.72', '[BOLD] 78.46', '[BOLD] 63.91', '[BOLD] 73.17', '[BOLD] 63.09', '[BOLD] 72.30'], ['[BOLD] BiDAF', '18.43', '25.74', '12.26', '19.04', '9.96', '16.50', '12.34', '19.65'], ['+Bias Product', '14.24', '21.21', '17.38', '26.25', '8.67', '15.13', '14.21', '22.20'], ['+Learned-Mixin', '[BOLD] 45.92', '[BOLD] 57.29', '[BOLD] 41.64', '[BOLD] 52.68', '[BOLD] 32.77', '[BOLD] 42.39', '[BOLD] 27.22', '[BOLD] 36.95'], ['[BOLD] XLNet', '47.55', '55.01', '46.67', '54.56', '50.49', '58.74', '58.29', '66.67'], ['+Bias Product', '52.19', '60.17', '55.73', '63.99', '54.82', '63.32', '59.24', '67.80'], ['+Learned-Mixin', '[BOLD] 60.57', '[BOLD] 72.04', '[BOLD] 61.25', '[BOLD] 71.62', '[BOLD] 61.69', '[BOLD] 71.01', '[BOLD] 61.89', '[BOLD] 71.06']]
Due to the blurred sentence boundaries, position bias is less problematic when k is large. We observe a similar trend in BERT and XLNet while a huge performance drop is observed in BiDAF even with a large k.
Look at the First Sentence:Position Bias in Question Answering
2004.14602
Table 1: Performance of QA models trained on the biased SQuAD dataset (SQuADk=1train), and tested on SQuADdev. Δ denotes the difference in F1 score with SQuADtrain. See Section 2.1 for more details.
['Training Data', '[BOLD] BiDAF EM', '[BOLD] BiDAF F1', '[BOLD] BiDAF Δ', '[BOLD] BERT EM', '[BOLD] BERT F1', '[BOLD] BERT Δ', '[BOLD] XLNet EM', '[BOLD] XLNet F1', '[BOLD] XLNet Δ']
[['[BOLD] SQuADtrain', '66.51', '76.46', '[EMPTY]', '79.54', '87.51', '[EMPTY]', '80.69', '89.24', '[EMPTY]'], ['[BOLD] SQuADtrain\xa0(Sampled)', '58.76', '70.52', '-5.94', '73.64', '84.99', '-2.52', '80.07', '88.32', '-0.92'], ['[BOLD] SQuAD [ITALIC] k=1train', '21.44', '27.92', '-48.54', '29.10', '35.24', '-52.27', '38.59', '45.27', '-43.97'], ['[BOLD] SQuAD [ITALIC] k=1train\xa0+ First Sentence', '53.16', '63.21', '-13.25', '72.11', '80.46', '-7.05', '74.85', '82.84', '-6.40'], ['[BOLD] SQuAD [ITALIC] k=1train\xa0+ Sentence Shuffle', '54.40', '65.20', '-11.26', '73.64', '82.30', '-5.21', '77.83', '86.18', '-3.06']]
The performance of the recurrent model (BiDAF) and the self-attentive models (BERT, XLNet) drops significantly compared to models trained on SQuADtrain or SQuADtrain (Sampled). On average, the F1 score drops by 48.26 points across the three models, which shows the position bias of existing QA models. The relative position encodings in XLNet mitigate position bias to some extent, but its performance still degrades significantly.
EliXa: A modular and flexible ABSA platform
1702.01944
Table 2: Results obtained on the slot2 evaluation on restaurant data.
['[BOLD] System (type)', '[BOLD] Precision', '[BOLD] Recall', '[BOLD] F1 score']
[['Baseline', '55.42', '43.4', '48.68'], ['EliXa (u)', '68.93', '71.22', '[BOLD] 70.05'], ['NLANGP (u)', '70.53', '64.02', '67.12'], ['EliXa (c)', '67.23', '66.61', '66.91'], ['IHS-RD-Belarus (c)', '67.58', '59.23', '63.13']]
The implementation of the clustering features looks up the cluster class of the incoming token in one or more of the clustering lexicons induced following the three methods listed above. If found, we add the class as a feature. The Brown clusters only apply to the token-related features, which are duplicated. We chose the best combination of features using 5-fold cross-validation, obtaining a 73.03 F1 score with local features only (i.e., constrained mode) and 77.12 when adding the word clustering features (i.e., unconstrained mode). These two configurations were used to process the test set in this task.
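A minimal sketch of how such a clustering-lexicon lookup can be implemented, assuming each induced lexicon is a token-to-class dictionary; the lexicon names and feature strings are hypothetical, not EliXa's actual implementation.

```python
def cluster_features(token, lexicons):
    """Look up the token in each clustering lexicon (e.g. Brown, Clark,
    word2vec classes) and emit one feature per lexicon where it is found.

    lexicons: dict mapping a lexicon name to {token: cluster_class}
    """
    feats = []
    for name, lexicon in lexicons.items():
        cls = lexicon.get(token.lower())
        if cls is not None:
            feats.append(f"{name}Cluster={cls}")
    return feats

# toy usage with made-up cluster classes
lexicons = {"brown": {"pizza": "0110"}, "clark": {"pizza": "37"}}
print(cluster_features("Pizza", lexicons))  # ['brownCluster=0110', 'clarkCluster=37']
```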
Contextualization of Morphological Inflection
1905.01420
Table 2: Accuracy of the models for various prediction settings. tag refers to tag prediction accuracy, and form to form prediction accuracy. Our model is joint; gold denotes form prediction conditioned on gold target morphological tags; the other columns are baseline methods.
['Language', 'tag joint', 'form gold', 'form joint', 'form direct', 'form SM', 'form CPH']
[['Bulgarian', '81.55', '91.89', '78.81', '71.5', '77.10', '76.94'], ['English', '89.58', '95.57', '90.41', '86.75', '86.53', '86.71'], ['Basque', '66.63', '82.19', '61.05', '59.74', '61.20', '60.23'], ['Finnish', '65.99', '86.53', '59.34', '51.21', '56.61', '56.40'], ['Gaelic', '68.33', '84.50', '69.53', '64.51', '68.88', '66.89'], ['Hindi', '85.33', '88.29', '81.43', '85.39', '86.81', '87.50'], ['Italian', '92.28', '85.13', '80.39', '85.22', '88.74', '90.46'], ['Latin', '82.57', '89.69', '75.68', '71.36', '74.22', '74.89'], ['Polish', '71.94', '96.14', '74.83', '61.77', '72.40', '70.23'], ['Swedish', '81.86', '96.02', '82.47', '75.35', '78.40', '80.85']]
Below we highlight two main lessons from our error analysis that apply to a wider range of generation tasks, e.g., machine translation and dialog systems. The results on Italian and Hindi are also interesting: both languages have high morphological tagging accuracy but comparatively low final form accuracy, even compared to the direct model.
User Evaluation of a Multi-dimensional Statistical Dialogue System
1909.02965
Table 3: Test results on simulated data (same error rates as in training): task success rate (SuccRate), average dialogue length (AvgLen), average reward (AvgRew).
['[BOLD] system', '[BOLD] SuccRate', '[BOLD] AvgLen', '[BOLD] AvgRew']
[['[ITALIC] one-dim', '97.8%', '14.69', '66.36'], ['[ITALIC] multi-dim', '97.6%', '15.68', '64.97'], ['[ITALIC] trans-fixed', '96.8%', '15.08', '65.23'], ['[ITALIC] trans-adapt', '97.4%', '16.41', '64.20']]
To get a better picture of what we might expect during the human evaluation, we first ran evaluations with simulated data. As we hypothesised, the scores are very similar, the one-dim system only slightly outperforming the multi-dimensional systems.
Riposte! A Large Corpus of Counter-Arguments
1910.03246
Table 3: The average Jaccard’s similarity scores between CAs for a single argument for each fallacy type.
['[BOLD] Criteria Score', '0.61', '0.17', '0.35', '0.36', '[BOLD] Total 0.24']
[]
How similar are the CAs across annotators? One design decision when building Riposte! was that with more annotators, we could collect a wide variety of diverse CAs for a single argument regardless of the fallacy type. We first calculate the similarity of the CAs across annotators for a single argument. We then calculate the average Jaccard similarity score over all combinations of CAs per unique argument and average over all arguments.
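A minimal sketch of this averaging procedure, assuming token-level Jaccard similarity between counter-arguments; the tokenization and function names are illustrative rather than the authors' exact setup.

```python
from itertools import combinations

def jaccard(a, b):
    """Token-level Jaccard similarity between two counter-argument strings."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def avg_pairwise_jaccard(counter_args_per_argument):
    """counter_args_per_argument: list of lists, one list of CA strings per
    unique argument. The mean pairwise Jaccard is computed within each
    argument, then averaged over all arguments."""
    per_arg = []
    for cas in counter_args_per_argument:
        pairs = list(combinations(cas, 2))
        if pairs:
            per_arg.append(sum(jaccard(x, y) for x, y in pairs) / len(pairs))
    return sum(per_arg) / len(per_arg) if per_arg else 0.0
```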
Riposte! A Large Corpus of Counter-Arguments
1910.03246
Table 4: BLEU scores of our baselines using gold fallacy type for topic (T), premise (P), and claim (C).
['[BOLD] Baseline', '[BOLD] T', '[BOLD] C', '[BOLD] P', '[BOLD] T+P+C', '[BOLD] T+C', '[BOLD] T+P', '[BOLD] P+C']
[['[BOLD] SO', '3.98', '6.37', '15.59', '13.56', '10.69', '13.76', '18.16'], ['[BOLD] seq2seq-i', '12.28', '12.31', '5.96', '14.54', '12.63', '13.37', '16.57'], ['[BOLD] seq2seq-o', '1.31', '1.05', '1.49', '4.78', '1.60', '1.53', '5.53']]
Our SO results indicate that workers mainly used the premise and claim when creating CAs. We observe that seq2seq-o's performance is low, indicating that a simple model is not sufficient when unknown topics are introduced.
Video Question Answering via Attribute-Augmented Attention Network Learning
1707.06355
Table 2. Experimental results on both open-ended and multiple-choice video question answering tasks.
['Method', 'Open-ended VQA task question type What', 'Open-ended VQA task question type Who', 'Open-ended VQA task question type Other', 'Open-ended VQA task question type Total accuracy', 'Multiple-choice VQA task question type What', 'Multiple-choice VQA task question type Who', 'Multiple-choice VQA task question type Other', 'Multiple-choice VQA task question type Total accuracy']
[['VQA+', '0.2097', '0.2486', '0.7010', '0.386', '0.5998', '0.3071', '0.8144', '0.574'], ['SAN+', '0.168', '0.224', '0.722', '0.371', '0.582', '0.288', '0.804', '0.558'], ['r-ANL(− [ITALIC] a)', '0.164', '0.231', '0.784', '0.393', '0.550', '0.288', '0.825', '0.554'], ['r-ANL(1)', '0.179', '0.235', '0.701', '0.372', '0.582', '0.261', '0.825', '0.556'], ['r-ANL(2)', '0.158', '0.249', '0.794', '0.400', '0.603', '0.285', '0.825', '0.571'], ['r-ANL(3)', '[BOLD] 0.216', '[BOLD] 0.294', '[BOLD] 0.804', '[BOLD] 0.438', '[BOLD] 0.633', '[BOLD] 0.364', '[BOLD] 0.845', '[BOLD] 0.614']]
The hyperparameters and parameters which achieve the best performance on the validation set are chosen to conduct the testing evaluation. We report the average value of all the methods on three evaluation criteria.
Embed2Detect: Temporally Clustered Embedded Words for Event Detection in Social Media
2006.05908
Table 3: Evaluation results with different preprocessing techniques
['[BOLD] Data set [BOLD] Method', 'MUNLIV [BOLD] Recall', 'MUNLIV [BOLD] Precision', 'MUNLIV [BOLD] F1', 'BrexitVote [BOLD] Recall', 'BrexitVote [BOLD] Precision', 'BrexitVote [BOLD] F1']
[['all tokens', '0.826', '0.463', '0.594', '1.000', '0.800', '0.889'], ['without punctuation', '0.913', '0.457', '0.609', '1.000', '0.727', '0.842'], ['without punctuation and stop-words', '0.696', '0.552', '0.615', '1.000', '0.800', '0.889']]
Even though preprocessing improves the performance measures, these results show that good measures can also be obtained without preprocessing. This ability is helpful in situations where direct preprocessing mechanisms cannot be integrated, such as stop-word removal for a less commonly used language or for a data set composed of more than one language. Since both data sets used in this research are mainly written in English, we used the tokens without punctuation and stop-words for the following experiments.
NetSpam: a Network-based Spam Detection Framework for Reviews in Online Social Media
1703.03609
TABLE V: Weights of all features (using unsupervised approach); features are ranked based on their overall average weights.
['Dataset - Weights', 'DEV', 'NR', 'ETF', 'BST', 'RES', 'PP1', 'ACS', 'MCS']
[['Main', '0.0029', '0.0550', '0.0484', '0.0445', '0.0379', '0.0329', '0.0321', '0.0314'], ['Review-based', '0.0626', '0.0510', '0.0477', '0.0376', '0.0355', '0.0346', '0.0349', '0.0340'], ['Item-based', '0.0638', '0.0510', '0.0501', '0.0395', '0.0388', '0.0383', '0.0374', '0.0366'], ['User-based', '0.0630', '0.0514', '0.0494', '0.0380', '0.0373', '0.0377', '0.0367', '0.0367'], ['Amazon', '0.1102', '0.0897', '0.0746', '0.0689', '0.0675', '0.0624', '0.0342', '0.0297']]
IV-C3 Unsupervised Method. One of the achievements of this study is that, even without using a training set, we can still find the set of features that yields the best performance, as explained in Sec. As shown in Fig. , the correlation is significant (p-value=0.0208), and for SPeaglePlus this value reaches 0.90 (p=0.0021). As another example, for the user-based dataset there is a correlation equal to 0.93 (p=0.0006) for NetSpam, while for SPeagle this value is equal to 0.89 (p=0.0024). This observation indicates that NetSpam can prioritize features for both frameworks. For all of them, DEV is the most weighted feature, followed by NR, ETF and BST.
NetSpam: a Network-based Spam Detection Framework for Reviews in Online Social Media
1703.03609
TABLE IV: Weights of all features (with 5% data as train set); features are ranked based on their overall average weights.
['Dataset - Weights', 'DEV', 'NR', 'ETF', 'BST', 'RES', 'PP1', 'ACS', 'MCS']
[['Main', '0.0029', '0.0032', '0.0015', '0.0029', '0.0010', '0.0011', '0.0003', '0.0002'], ['Review-based', '0.0023', '0.0017', '0.0017', '0.0015', '0.0010', '0.0009', '0.0004', '0.0003'], ['Item-based', '0.0010', '0.0012', '0.0009', '0.0009', '0.0010', '0.0010', '0.0004', '0.0003'], ['User-based', '0.0017', '0.0014', '0.0014', '0.0010', '0.0010', '0.0009', '0.0005', '0.0004']]
Feature Weight Importance: The combination of these features can be a good hint for obtaining better performance. The results on the Main dataset show that all four behavioral features are ranked first in the final overall weights. In addition, as shown for the Review-based as well as the other two datasets, DEV is the most weighted feature. The same holds for our second most weighted feature, NR. From the third feature onward, the order of the features differs. The third feature is the same (ETF) for both the User-based and Review-based datasets, while for the other dataset, Item-based, PP1 is ranked third.
Multimodal Sentiment Analysis: Addressing Key Issues and Setting up the Baselines
1803.07427
TABLE II: Accuracy reported for speaker-exclusive (Sp-Ex) and speaker-inclusive (Sp-In) split for Concatenation-Based Fusion. IEMOCAP: 10-fold speaker-exclusive average. MOUD: 5-fold speaker-exclusive average. MOSI: 5-fold speaker-exclusive average. Legend: A stands for Audio, V for Video, T for Text.
['Modality Combination', 'IEMOCAP Sp-In', 'IEMOCAP Sp-Ex', 'MOUD Sp-In', 'MOUD Sp-Ex', 'MOSI Sp-In', 'MOSI Sp-Ex']
[['A', '66.20', '51.52', '–', '53.70', '64.00', '57.14'], ['V', '60.30', '41.79', '–', '47.68', '62.11', '58.46'], ['T', '67.90', '65.13', '–', '48.40', '78.00', '75.16'], ['T + A', '78.20', '70.79', '–', '57.10', '76.60', '75.72'], ['T + V', '76.30', '68.55', '–', '49.22', '78.80', '75.06'], ['A + V', '73.90', '52.15', '–', '62.88', '66.65', '62.4'], ['T + A + V', '[BOLD] 81.70', '[BOLD] 71.59', '–', '[BOLD] 67.90', '[BOLD] 78.80', '[BOLD] 76.66']]
IEMOCAP: As this dataset contains 10 speakers, we performed a 10-fold speaker-exclusive test, where in each round exactly one of the speakers was included in the test set and excluded from the training set. The same SVM model was used as before, and accuracy was used as the performance metric. MOUD: This dataset contains videos of about 80 people reviewing various products in Spanish. Each utterance in the video has been labeled as positive, negative, or neutral. In our experiments, we consider only samples with positive and negative sentiment labels. The speakers were partitioned into 5 groups and a 5-fold person-exclusive experiment was performed, where in every fold one of the five groups was in the test set. MOSI: The MOSI dataset is rich in sentimental expressions, where 93 people review various products in English. The videos are segmented into clips, and each clip is assigned a sentiment score between −3 and +3 by five annotators. We took the average of these labels as the sentiment polarity and naturally considered two classes (positive and negative). Like MOUD, speakers were divided into five groups and a 5-fold person-exclusive experiment was run. For each fold, on average 75 people were in the training set and the remaining in the test set. The training set was further partitioned and shuffled into an 80%–20% split to generate train and validation sets for parameter tuning.
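A speaker-exclusive (person-exclusive) cross-validation of this kind can be sketched with scikit-learn's GroupKFold, assuming X and y are numpy feature and label arrays and speaker_ids gives the speaker of each utterance; the linear-kernel SVM here is an assumption, not necessarily the exact configuration used in the paper.

```python
from sklearn.model_selection import GroupKFold
from sklearn.svm import SVC

def speaker_exclusive_cv(X, y, speaker_ids, n_splits=5):
    """Cross-validation where every utterance of a given speaker falls
    entirely in either the train or the test fold (speaker-exclusive)."""
    accs = []
    for train_idx, test_idx in GroupKFold(n_splits=n_splits).split(X, y, groups=speaker_ids):
        clf = SVC(kernel="linear").fit(X[train_idx], y[train_idx])
        accs.append(clf.score(X[test_idx], y[test_idx]))
    return sum(accs) / len(accs)  # average accuracy over folds
```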
Multimodal Sentiment Analysis: Addressing Key Issues and Setting up the Baselines
1803.07427
TABLE I: Person-Independent Train/Test split details of each dataset (≈ 70/30 % split). Note: X→Y represents train: X and test: Y; Validation sets are extracted from the shuffled train sets using 80/20 % train/val ratio.
['Dataset', 'Train [ITALIC] utterance', 'Train [ITALIC] video', 'Test [ITALIC] utterance', 'Test [ITALIC] video']
[['IEMOCAP', '4290', '120', '1208', '31'], ['MOSI', '1447', '62', '752', '31'], ['MOUD', '322', '59', '115', '20'], ['MOSI → MOUD', '2199', '93', '437', '79']]
They collected 80 product review and recommendation videos from YouTube. Each video was segmented into its utterances (498 in total) and each of these was categorized by a sentiment label (positive, negative, or neutral). On average, each video has 6 utterances and each utterance is 5 seconds long. In our experiment, we did not consider neutral labels, which led to a final dataset of 448 utterances; we dropped the neutral label to maintain consistency with previous work. In a similar fashion, Zadeh et al. collected the MOSI dataset. The videos address a large array of topics, such as movies, books, and products. In the experiment to address the generalizability issue, we trained a model on MOSI and tested it on MOUD.
Multimodal Sentiment Analysis: Addressing Key Issues and Setting up the Baselines
1803.07427
TABLE IV: Accuracy reported for speaker-exclusive classification. IEMOCAP: 10-fold speaker-exclusive average. MOUD: 5-fold speaker-exclusive average. MOSI: 5-fold speaker-exclusive average. Legend: A represents Audio, V represents Video, T represents Text.
['Modality Combination', 'IEMOCAP SVM', 'IEMOCAP bc-LSTM', 'MOUD SVM', 'MOUD bc-LSTM', 'MOSI SVM', 'MOSI bc-LSTM']
[['A', '52.9', '[BOLD] 57.1', '51.5', '[BOLD] 59.9', '58.5', '[BOLD] 60.3'], ['V', '47.0', '[BOLD] 53.2', '46.3', '[BOLD] 48.5', '53.1', '[BOLD] 55.8'], ['T', '65.5', '[BOLD] 73.6', '49.5', '[BOLD] 52.1', '75.5', '[BOLD] 78.1'], ['T + A', '70.1', '[BOLD] 75.4', '53.1', '[BOLD] 60.4', '75.8', '[BOLD] 80.2'], ['T + V', '68.5', '[BOLD] 75.6', '50.2', '[BOLD] 52.2', '76.7', '79.3'], ['A + V', '67.6', '[BOLD] 68.9', '62.8', '[BOLD] 65.3', '58.6', '62.1'], ['T + A + V', '72.5', '[BOLD] 76.1', '66.1', '[BOLD] 68.1', '77.9', '80.3']]
We evaluated SVM and bc-LSTM fusion on the MOSI, MOUD, and IEMOCAP datasets. From the results, it is apparent that considering context in the classification process substantially boosts performance.
Direct Network Transfer: Transfer Learning of Sentence Embeddings for Semantic Similarity
1804.07835
Table 1: Count of valid submissions to and the numbering of each SemEval STS task from 2012 to 2017.
['[BOLD] Year', '[BOLD] Task no.', '[BOLD] Submissions']
[['2012', '#6', '89'], ['2013', '#6', '90'], ['2015', '#2', '74'], ['2016', '#1', '124'], ['2017', '#1', '85']]
Semantic similarity, or relating short texts in a semantic space – be they phrases, sentences, or short paragraphs – is a task that requires systems to determine the degree of equivalence between the underlying semantics of two texts. Although relatively easy for humans, this task remains one of the most difficult natural language understanding problems, receiving significant interest from the research community. For instance, from 2012 to 2017, the International Workshop on Semantic Evaluation (SemEval) held a shared task called Semantic Textual Similarity (STS) Agirre et al.
SMHD: A Large-Scale Resource for Exploring Online Language Usage for Multiple Mental Health Conditions
1806.05258
Table 1: Comparison between the number of self-reported diagnosed users per condition in the dataset of coppersmith2015adhd and ours (smhd).
['[BOLD] Condition', 'Twitter, (Copper- smith et al, 2015)', 'Reddit, smhd (ours)']
[['ADHD', '102', '10,098'], ['Anxiety', '216', '8,783'], ['Autism', '[EMPTY]', '2,911'], ['Bipolar', '188', '6,434'], ['Borderline', '101', '[EMPTY]'], ['Depression', '393', '14,139'], ['Eating', '238', '598'], ['OCD', '100', '2,336'], ['PTSD', '403', '2,894'], ['Schizophrenia', '172', '1,331'], ['Seasonal Affective', '100', '[EMPTY]']]
Our work has the following significant distinctions compared to existing social media datasets related to mental health. Most prior datasets are drawn from Twitter, where posts are subject to tight length constraints; this makes the Twitter language use rather different from real-life discussions. Instead, we use data from Reddit, an interactive discussion-centric forum without any length constraints. Our dataset contains up to two orders of magnitude more diagnosed individuals for each condition than the Twitter dataset by Coppersmith et al. (2015), making it suitable for exploring more recent data-driven learning methods. We choose our control users in a systematic way that makes classification experiments on the dataset realistic. We normalize language usage between the users: by removing specific mental health signals and discussions, we focus on patterns of language in normal (general) discussions. While our dataset creation method is close to that of Yates et al. (2017), we extend theirs by investigating multiple high-precision matching patterns to identify self-reported diagnoses for a range of conditions. Part of our patterns are obtained through synonym discovery. Considering relevant synonyms from reliable sources increases the variety of the diagnosed users and linguistic nuances. We also explore nine common mental health conditions, while Yates et al. (2017) focus only on depression. We explore classification methods for identifying mental health conditions through social media language and provide detailed analysis that helps us understand the differences in language usage between conditions, and between diagnosed users and controls. Our contributions are as follows: (i) We investigate the creation of high-precision matching patterns to identify self-reported diagnoses of nine different mental health conditions. (ii) We introduce a large-scale dataset of nine mental health conditions that significantly extends existing datasets, and we make our data publicly available. Our dataset includes users who might suffer from more than one condition, thus allowing language study of interacting mental conditions. (iii) We investigate language characteristics of each mental health group. (iv) We explore classification methods for detecting users with various mental health conditions.
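A minimal sketch of the kind of high-precision self-reported diagnosis matching described above; the pattern, condition list, and synonyms below are hypothetical illustrations, not the actual SMHD patterns or discovered synonym lists.

```python
import re

# Hypothetical condition names and synonyms for illustration only.
CONDITIONS = {
    "depression": ["depression", "major depressive disorder"],
    "adhd": ["adhd", "attention deficit hyperactivity disorder"],
    "ocd": ["ocd", "obsessive compulsive disorder"],
}

def self_reported_diagnoses(post):
    """Return the conditions matched by a simple high-precision
    'I was/am/have been diagnosed with X' pattern."""
    found = set()
    text = post.lower()
    for cond, names in CONDITIONS.items():
        alternatives = "|".join(re.escape(n) for n in names)
        pattern = r"\bi (?:was|am|have been|'ve been) diagnosed with (?:%s)\b" % alternatives
        if re.search(pattern, text):
            found.add(cond)
    return found

print(self_reported_diagnoses("Last year I was diagnosed with ADHD."))  # {'adhd'}
```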
Message Passing Attention Networks for Document Understanding
1908.06267
Table 2: Classification accuracies. Best performance per column in bold, *best MPAD variant. OOM: >16GB RAM.
['[BOLD] Model', 'Reut.', 'BBC', 'Pol.', 'Subj.', 'MPQA', 'IMDB', 'TREC', 'SST-1', 'SST-2', 'Yelp’13']
[['doc2vec [le2014distributed]', '95.34', '98.64', '67.30', '88.27', '82.57', '[BOLD] 92.5', '70.80', '48.7', '87.8', '57.7'], ['CNN [kim2014convolutional]', '97.21', '98.37', '[BOLD] 81.5', '93.4', '89.5', '90.28', '93.6', '48.0', '87.2', '64.89'], ['DAN [iyyer2015deep]', '94.79', '94.30', '80.3', '92.44', '88.91', '89.4', '89.60', '47.7', '86.3', '61.55'], ['Tree-LSTM [tai2015improved]', '-', '-', '-', '-', '-', '-', '-', '51.0', '88.0', '-'], ['DRNN [irsoy2014deep]', '-', '-', '-', '-', '-', '-', '-', '49.8', '86.6', '-'], ['LSTMN [cheng2016long]', '-', '-', '-', '-', '-', '-', '-', '47.9', '87.0', '-'], ['C-LSTM [zhou2015c]', '-', '-', '-', '-', '-', '-', '94.6', '49.2', '87.8', '-'], ['SPGK [nikolentzos2017shortest]', '96.39', '94.97', '77.89', '91.48', '85.78', 'OOM', '90.69', 'OOM', 'OOM', 'OOM'], ['WMD [kusner2015word]', '96.5', '98.71', '66.42', '86.04', '83.95', 'OOM', '73.40', 'OOM', 'OOM', 'OOM'], ['DiSAN [shen2018disan]', '97.35', '96.05', '80.38', '[BOLD] 94.2', '[BOLD] 90.1', '83.25', '94.2', '[BOLD] 51.72', '86.76', '60.51'], ['LSTM-GRNN [tang2015document]', '96.16', '95.52', '79.98', '92.38', '89.08', '89.98', '89.40', '48.09', '86.38', '65.1'], ['HN-ATT [yang2016hierarchical]', '97.25', '96.73', '80.78', '92.92', '89.08', '90.06', '90.80', '49.00', '86.71', '[BOLD] 68.2'], ['MPAD', '97.07', '98.37', '80.24', '93.46*', '90.02', '91.30', '[BOLD] 95.60*', '49.09', '87.80', '66.16'], ['MPAD-sentence-att', '96.89', '99.32', '80.44', '93.02', '[BOLD] 90.12*', '91.70', '[BOLD] 95.60*', '49.95*', '[BOLD] 88.30*', '66.47'], ['MPAD-clique', '[BOLD] 97.57*', '[BOLD] 99.72*', '81.17*', '92.82', '89.96', '91.87*', '95.20', '48.86', '87.91', '66.60'], ['MPAD-path', '97.44', '99.59', '80.46', '93.31', '89.81', '91.84', '93.80', '49.68', '87.75', '66.80*']]
For the baselines, we provide the scores reported in the original papers. Furthermore, we have evaluated some of the baselines on the rest of our benchmark datasets, and we also report these scores. MPAD reaches best performance on 5 out of 10 datasets, and is a close second elsewhere. Moreover, the 5 datasets on which MPAD ranks first differ widely in training set size, number of categories, and prediction task (topic, sentiment, etc.), which indicates that MPAD can perform well in different settings.
Message Passing Attention Networks for Document Understanding
1908.06267
Table 3: Ablation results. The n in nMP refers to the number of message passing iterations. *vanilla model (MPAD in Table 2).
['[BOLD] MPAD variant', 'Reut.', 'Pol.', 'IMDB']
[['MPAD 1MP', '96.57', '79.91', '90.57'], ['MPAD 2MP*', '97.07', '80.24', '[BOLD] 91.30'], ['MPAD 3MP', '97.07', '80.20', '91.24'], ['MPAD 4MP', '[BOLD] 97.48', '80.52', '[BOLD] 91.30'], ['MPAD 2MP undirected', '97.35', '80.05', '90.97'], ['MPAD 2MP no master node', '96.66', '79.15', '91.09'], ['MPAD 2MP no renormalization', '96.02', '79.84', '91.16'], ['MPAD 2MP neighbors-only', '97.12', '79.22', '89.50'], ['MPAD 2MP no master node skip connection', '96.93', '[BOLD] 80.62', '91.12']]
To understand the impact of some hyperparameters on performance, we conducted additional experiments on the Reuters, Polarity, and IMDB datasets with the non-hierarchical version of MPAD. Number of MP iterations. First, we varied the number of message passing iterations from 1 to 4. Performance improves from one to two iterations but changes little beyond that. We attribute this robustness to the fact that we read out at each iteration from 1 to T (see Eq. ). Indeed, in initial experiments involving readout at t=T only, setting T≥2 always decreased performance, despite the GRU-based updates (Eq. ). These results are consistent with those of [yao2019graph] and [kipf2016semi], who both read out only at t=T. We hypothesize that node features at T≥2 are too diffuse to be entirely relied upon during readout. More precisely, at t=0 node representations capture information about words, at t=1 about their 1-hop neighborhood (bigrams), at t=2 about compositions of bigrams, and so on. Node features thus quickly become general and diffuse. In such cases, considering also the lower-level, more precise features of the earlier iterations when reading out may be necessary. No renormalization. That is, the neighborhood aggregation in Eq. is not divided by node degree, so a sum rather than a mean is used. Unlike the mean, which captures distributions, the sum captures structural information [xu2018powerful]. Removing renormalization slightly degrades performance. We hypothesize that this is because statistical word co-occurrence networks tend to have similar structural properties, regardless of the topic, polarity, sentiment, etc. of the corresponding documents.
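A minimal numpy sketch of the message passing scheme discussed above: T iterations of neighborhood aggregation over the word co-occurrence graph, with a readout collected after every iteration and concatenated so that lower-level features remain available; renormalize=True divides the aggregation by node degree (mean) while False keeps the raw sum. The mean-over-nodes readout and plain linear update here stand in for MPAD's attention-based readout, master node, and GRU update, so this is an illustration of the idea rather than the authors' model.

```python
import numpy as np

def message_passing_readout(A, H, Ws, renormalize=True):
    """A: (n, n) adjacency matrix of the word co-occurrence graph.
    H: (n, d) initial node (word) features.
    Ws: list of T weight matrices, one per message passing iteration.
    Returns the concatenation of a mean-over-nodes readout taken after
    every iteration t = 1..T."""
    A_hat = A + np.eye(A.shape[0])                        # add self-loops
    if renormalize:
        A_hat = A_hat / A_hat.sum(axis=1, keepdims=True)  # mean (renormalized) aggregation
    readouts = []
    for W in Ws:                                          # t = 1..T
        H = np.tanh(A_hat @ H @ W)                        # aggregate neighbours, transform
        readouts.append(H.mean(axis=0))                   # read out at this iteration
    return np.concatenate(readouts)                       # document representation

# toy usage: 4-node graph, 8-dim features, T = 2 iterations
rng = np.random.default_rng(0)
A = (rng.random((4, 4)) > 0.5).astype(float)
out = message_passing_readout(A, rng.normal(size=(4, 8)),
                              [rng.normal(size=(8, 8)) for _ in range(2)])
print(out.shape)  # (16,)
```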
Incorporating Uncertain Segmentation Information into Chinese NER for Social Media Text
2004.06384
Table 3: The results of different models on the MSRA dataset. × indicates that the model uses the BERT.
['Model', 'P', 'R', 'F']
[['Chen et\xa0al. ( 2006 )', '91.22', '81.71', '86.20'], ['Dong et\xa0al. ( 2016 )', '91.28', '90.62', '90.95'], ['Zhang and Yang ( 2018 )', '93.57', '92.79', '93.18'], ['Zhu and Wang ( 2019 )', '93.53', '92.42', '92.97'], ['Ding et\xa0al. ( 2019 )', '94.60', '94.20', '94.40'], ['Zhao et\xa0al. ( 2019 )×', '95.46', '95.09', '95.28'], ['Gong et\xa0al. ( 2019 )×', '95.26', '[BOLD] 95.57', '95.42'], ['Johnson et\xa0al. ( 2020 )', '93.71', '92.29', '92.99'], ['UIcwsNN', '89.87', '90.54', '90.20'], ['UIcwsNN + BERT×', '[BOLD] 96.31', '[BOLD] 94.98', '[BOLD] 95.64']]
Our model UIcwsNN specializes in learning word-level representations but rarely considers characteristics at other levels, such as long-distance temporal semantics. Therefore, it only achieves competitive performance on formal text. However, our model UIcwsNN+BERT achieves new state-of-the-art performance.
Incorporating Uncertain Segmentation Information into Chinese NER for Social Media Text
2004.06384
Table 2: The F values of existing models on the WeiboNER dataset. ∗ indicates that the model utilizes external lexicons. ∘ indicates that the model adopts joint learning. The previous models do not use the BERT, so we show the results of our model without BERT.
['Models', 'NAM', 'NOM', 'Overall']
[['Peng and Dredze ( 2015 )∘', '51.96', '61.05', '56.05'], ['Peng and Dredze ( 2016 )∘', '55.28', '62.97', '58.99'], ['He and Sun ( 2017a )', '50.60', '59.32', '54.82'], ['He and Sun ( 2017b )', '54.50', '62.17', '58.23'], ['Zhang and Yang ( 2018 )∗', '53.04', '62.25', '58.79'], ['Cao et\xa0al. ( 2018 )∘', '54.34', '57.35', '58.70'], ['Zhu and Wang ( 2019 )', '55.38', '62.98', '59.31'], ['Liu et\xa0al. ( 2019 )∗', '52.55', '[BOLD] 67.41', '59.84'], ['Ding et\xa0al. ( 2019 )∗', '-', '-', '59.50'], ['Gui et\xa0al. ( 2019 )∗', '55.34', '64.98', '60.21'], ['Johnson et\xa0al. ( 2020 )', '55.70', '62.80', '59.50'], ['BiLSTM+CRF', '53.95', '62.63', '57.69'], ['CNNs+CRF', '55.07', '62.97', '59.22'], ['Our model (UIcwsNN)', '[BOLD] 57.58', '[BOLD] 65.97', '[BOLD] 62.07']]
Our model UIcwsNN significantly outperforms the other models and achieves new state-of-the-art performance. The overall score of our model is generally more than 2% higher than the scores of the other models. Many methods use a lexicon instead of CWS to provide extractors with external word-level information, but choosing the appropriate words based on sentence context remains a challenge for them. Besides, the approaches that jointly train the NER and CWS tasks do not achieve the desired results, because segmentation noise inevitably affects their effectiveness. Our model avoids this problem.
A Hierarchical Approach for Generating Descriptive Image Paragraphs
1611.06607
Table 2: Main results for generating paragraphs. Our Region-Hierarchical method is compared with six baseline models and human performance along six language metrics.
['[EMPTY]', 'METEOR', 'CIDEr', 'BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4']
[['Sentence-Concat', '12.05', '6.82', '31.11', '15.10', '7.56', '3.98'], ['Template', '14.31', '12.15', '37.47', '21.02', '12.30', '7.38'], ['DenseCap-Concat', '12.66', '12.51', '33.18', '16.92', '8.54', '4.54'], ['Image-Flat ()', '12.82', '11.06', '34.04', '19.95', '12.20', '7.71'], ['Regions-Flat-Scratch', '13.54', '11.14', '37.30', '21.70', '13.07', '8.07'], ['Regions-Flat-Pretrained', '14.23', '12.13', '38.32', '22.90', '14.17', '[BOLD] 8.97'], ['Regions-Hierarchical (ours)', '[BOLD] 15.95', '[BOLD] 13.52', '[BOLD] 41.90', '[BOLD] 24.11', '[BOLD] 14.23', '8.69'], ['Human', '19.22', '28.55', '42.88', '25.68', '15.55', '9.66']]
We present our main results at generating paragraphs in Tab. The Sentence-Concat method performs poorly, achieving the lowest scores across all metrics. Its lackluster performance provides further evidence of the stark differences between single-sentence captioning and paragraph generation. Surprisingly, the hard-coded template-based approach performs reasonably well, particularly on CIDEr, METEOR, and BLEU-1, where it is competitive with some of the neural approaches. However, its relatively poor performance on BLEU-3 and BLEU-4 highlights the limitation of reasoning about regions in isolation – it is unable to produce much text relating regions to one another, and further suffers from a lack of “connective tissue” that transforms paragraphs from a series of disconnected thoughts into a coherent whole. DenseCap-Concat scores worse than Template on all metrics except CIDEr, illustrating the necessity of Template’s caption parsing and recombination. To make these metrics more interpretable, we performed a human evaluation by collecting an additional paragraph for 500 randomly chosen images, with results in the last row of Tab. As expected, humans produce superior descriptions to any automatic method, performing better on all language metrics considered.
Unsupervised Text Style Transfer using Language Models as Discriminators
1805.11749
Table 1: Decipherment results measured in BLEU. Copy is directly measuring y against x. LM + adv denotes we use negative samples to train the language model.∗We run the code open-sourced by the authors to get the results.
['Model', '20%', '40%', '60%', '80%', '100%']
[['Copy', '64.3', '39.1', '14.4', '2.5', '0'], ['Shen et\xa0al. ( 2017 )∗', '86.6', '77.1', '70.1', '61.2', '[BOLD] 50.8'], ['Our results:', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['LM', '89.0', '[BOLD] 80.0', '[BOLD] 74.1', '62.9', '49.3'], ['LM + adv', '[BOLD] 89.1', '79.6', '71.8', '[BOLD] 63.8', '44.2']]
We can see that using adversarial training sometimes improves the results. However, we found empirically that using negative samples makes the training very unstable and the model diverges easily. This is the main reason why we did not get consistently better results by incorporating adversarial training.
Unsupervised Text Style Transfer using Language Models as Discriminators
1805.11749
Table 2: Results for sentiment modification. X=negative,Y=positive. PPLx denotes the perplexity of sentences transferred from positive sentences evaluated by a language model trained with negative sentences and vice versa.
['Model', 'Accu', 'BLEU', 'PPL [BOLD] X', 'PPL [BOLD] Y']
[['Shen et\xa0al. ( 2017 )', '79.5', '12.4', '50.4', '52.7'], ['Hu et\xa0al. ( 2017a )', '87.7', '[BOLD] 65.6', '115.6', '239.8'], ['Our results:', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['LM', '83.3', '38.6', '[BOLD] 30.3', '[BOLD] 42.1'], ['LM + Classifier', '[BOLD] 91.2', '57.8', '47.0', '60.9']]
Results: We report the results in Table. As a baseline, the original corpus has perplexity of 35.8 and 38.8 for the negative and positive sentences respectively. Our LM model attains the lowest perplexity of all models, closest to these baseline values, which demonstrates the effectiveness of using the LM as the discriminator. Hu et al. (2017a) has the highest accuracy and BLEU score among the three models, while its perplexity is very high. This is not surprising: the classifier only modifies the features of the sentences that are related to the sentiment, and there is no mechanism to ensure that the modified sentence is fluent. Hence the corresponding perplexity is very high.
Unsupervised Text Style Transfer using Language Models as Discriminators
1805.11749
Table 3: Results for sentiment modification based on the 500 human annotated sentences as ground truth from (Li et al., 2018).
['Model', 'ACCU', 'BLEU', 'PPL [BOLD] X', 'PPL [BOLD] Y']
[['Shen et\xa0al. ( 2017 )', '76.2', '6.8', '49.4', '45.6'], ['Fu et\xa0al. ( 2017 ):', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['StyleEmbedding', '9.2', '16.65', '97.51', '142.6'], ['MultiDecoder', '50.9', '11.24', '111.1', '119.1'], ['Li et\xa0al. ( 2018 ):', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Delete', '87.2', '11.5', '75.2', '68.7'], ['Template', '86.7', '18.0', '192.5', '148.4'], ['Retrieval', '[BOLD] 95.1', '1.3', '[BOLD] 31.5', '[BOLD] 37.0'], ['DeleteAndRetrieval', '90.9', '12.6', '104.6', '43.8'], ['Our results:', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['LM', '85.4', '13.4', '32.8', '40.5'], ['LM + Classifier', '90.0', '[BOLD] 22.3', '48.4', '61.6']]
Their method is feature based and consists of the following steps: (Delete) first, they use the statistics of word frequency to delete the attribute words such as “good, bad” from original sentences, (Retrieve) then they retrieve the most similar sentences from the other corpus based on nearest neighbor search, (Generate) the attribute words from retrieved sentences are combined with the content words of original sentences to generate transferred sentences. The authors provide 500 human annotated sentences as the ground truth of transferred sentences so we measure the BLEU score against those sentences. We can see our model has similar accuracy compared with DeleteAndRetrieve, but has much better BLEU scores and slightly better perplexity.
Unsupervised Text Style Transfer using Language Models as Discriminators
1805.11749
Table 4: Related language translation results measured in BLEU. The results for sr vs bs are measured in BLEU1 while cn vs tw is measured in BLEU.
['Model', 'sr–bs', 'bs–sr', 'cn–tw', 'tw–cn']
[['Copy', '0', '0', '32.3', '32.3'], ['Shen et\xa0al. ( 2017 )', '29.1', '30.3', '60.1', '60.7'], ['Our results:', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['LM', '[BOLD] 31.0', '[BOLD] 31.7', '[BOLD] 81.6', '[BOLD] 85.5']]
Results: The results are shown in Table. For sr–bs and bs–sr, since the vocabularies of the two languages do not overlap at all, the task is very challenging. We report the BLEU1 metric since BLEU4 is close to 0. The case of cn–tw and tw–cn is much easier: simple copying already achieves a reasonable score of 32.3.
Emotional Voice Conversion using multitask learning with Text-to-speech
1911.06149
Table 1: WER comparison
['[EMPTY]', 'VC', 'VCTTS-V', 'VCTTS-T', 'TTS']
[['WER', '54.54', '38.50', '31.98', '30.39']]
Word error rate (WER) was computed to measure how our proposed model improves the linguistic consistency of the converted speech. In practice, morphemes were used instead of words since morphemes are considered the recognition units of Korean speech [KwonP03, LeeC04, BangKK18]. The Google Cloud Speech-to-Text API transcribed the converted speech, and the transcripts were divided into sequences of morphemes by the Komoran morphological analyzer in KoNLPy [ParkC14_konlpy]. The results show that VCTTS-V outperforms VC, and that the WER of VCTTS-T is worse than that of TTS.
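For reference, a minimal sketch of a morpheme-level WER computation is given below; it assumes transcripts have already been segmented into morphemes (e.g., by an analyzer such as Komoran) and uses a generic Levenshtein distance, so it illustrates the metric rather than reproducing the paper's evaluation script. The example morpheme sequences are hypothetical.

```python
def wer(reference, hypothesis):
    """Word (here: morpheme) error rate via Levenshtein distance.

    reference, hypothesis: lists of tokens (morphemes).
    Returns (#substitutions + #deletions + #insertions) / len(reference).
    """
    n, m = len(reference), len(hypothesis)
    # dp[i][j] = edit distance between reference[:i] and hypothesis[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[n][m] / max(n, 1)

# Hypothetical morpheme sequences for illustration:
ref = ["나", "는", "학교", "에", "가", "ㄴ다"]
hyp = ["나", "는", "학교", "로", "가", "ㄴ다"]
print(f"WER = {wer(ref, hyp):.3f}")  # 1 substitution / 6 morphemes ≈ 0.167
```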
Dual Co-Matching Network for Multi-choice Reading Comprehension
1901.09381
Table 2: Experiment results on RACE test set. † means it is statistically significant to the models ablating either the bidirectional matching or gated mechanism. ∗ indicates ensemble model. DMNbase uses BERTbase as encoder and DMNlarge uses BERTlarge as encoder.
['[BOLD] Model', 'RACE-M', 'RACE-H', 'RACE']
[['DFN Xu et\xa0al. ( 2017 )', '51.5', '45.7', '47.4'], ['HAF Zhu et\xa0al. ( 2018 )', '45.0', '46.4', '46.0'], ['MRU Tay et\xa0al. ( 2018 )', '57.7', '47.4', '50.4'], ['HCM Wang et\xa0al. ( 2018 )', '55.8', '48.2', '50.4'], ['MMN Tang et\xa0al. ( 2019 )', '61.1', '52.2', '54.7'], ['GPT Radford ( 2018 )', '62.9', '57.4', '59.0'], ['RSM Sun et\xa0al. ( 2018 )', '69.2', '61.5', '63.8'], ['BERT [ITALIC] base', '71.1', '62.3', '65.0'], ['BERT [ITALIC] large', '76.6', '70.1', '72.0'], ['OCN Ran et\xa0al. ( 2019 )∗', '78.4', '71.5', '73.5'], ['Our Models', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['DMN [ITALIC] base', '72.3', '64.2', '66.5'], ['DMN [ITALIC] large', '77.6', '70.1', '72.3'], ['DMN∗ [ITALIC] large', '[BOLD] 79.5', '[BOLD] 71.8', '[BOLD] 74.1'], ['Human Performance', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Turkers', '85.1', '69.4', '73.3'], ['Ceiling', '95.4', '94.2', '94.5']]
Turkers is the performance of Amazon Turkers on a random subset of the RACE test set. Ceiling is the percentage of unambiguous questions in the test set. The comparison shows that our model is powerful: even the single model outperforms all baselines and achieves new state-of-the-art accuracy. Our ensemble model further improves the performance by 1.5%. In this work, we mainly focus on two core model improvements: (1) the bidirectional matching strategy and (2) the gated mechanism. We observe a 1.5% performance decrease when only unidirectional matching is used; in detail, we only build the answer-aware passage representation without considering the passage-aware answer representation when modeling the passage-answer sequence pair relationship (i.e., only use Sp as the matching representation without Sa in Eq.). The ablation experiments on the ROCStories dataset further support these observations.
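As an illustration of the bidirectional matching strategy and gated mechanism discussed above, a generic sketch is given below. It is not the exact DCMN formulation: the max-pooling step and the gate parameterization are assumptions made for the example.

```python
import torch

def bidirectional_gated_match(P, A, W_g, b_g):
    """Generic bidirectional matching between a passage and an answer option.

    P: (len_p, d) passage token encodings, A: (len_a, d) answer encodings.
    W_g: (2d, d) gate weights, b_g: (d,) gate bias -- illustrative parameters.
    """
    # Answer-aware passage representation (S_p) and passage-aware answer one (S_a).
    S_p = torch.softmax(P @ A.t(), dim=-1) @ A        # (len_p, d)
    S_a = torch.softmax(A @ P.t(), dim=-1) @ P        # (len_a, d)
    # Max-pool each side to a fixed-size vector (pooling choice is an assumption).
    s_p, s_a = S_p.max(dim=0).values, S_a.max(dim=0).values
    # Gated fusion of the two directions instead of plain concatenation.
    g = torch.sigmoid(torch.cat([s_p, s_a]) @ W_g + b_g)
    return g * s_p + (1.0 - g) * s_a                  # fused matching vector

d = 8
P, A = torch.randn(30, d), torch.randn(5, d)
W_g, b_g = torch.randn(2 * d, d), torch.zeros(d)
print(bidirectional_gated_match(P, A, W_g, b_g).shape)  # torch.Size([8])
```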
Multi-Perspective Relevance Matching with Hierarchical ConvNets for Social Media Search
1805.08159
Table 3: Main results on the TREC Microblog 2011–2014 datasets. Rows are numbered in the first column, where each represents a model or a contrastive condition. The last row shows the relative improvement against QL. The best numbers on each dataset are in bold. Superscripts and subscripts indicate the row indexes for which a metric difference is statistically significant at p<0.05. Only methods 1–3 and 12–13 are compared with all other methods in the significance tests.
['[BOLD] ID', '[BOLD] Model [BOLD] Metric', '[BOLD] 2011 [BOLD] MAP', '[BOLD] 2011 [BOLD] P30', '[BOLD] 2012 [BOLD] MAP', '[BOLD] 2012 [BOLD] P30', '[BOLD] 2013 [BOLD] MAP', '[BOLD] 2013 [BOLD] P30', '[BOLD] 2014 [BOLD] MAP', '[BOLD] 2014 [BOLD] P30']
[['[BOLD] Non-Neural Baselines', '[BOLD] Non-Neural Baselines', '[BOLD] Non-Neural Baselines', '[BOLD] Non-Neural Baselines', '[BOLD] Non-Neural Baselines', '[BOLD] Non-Neural Baselines', '[BOLD] Non-Neural Baselines', '[BOLD] Non-Neural Baselines', '[BOLD] Non-Neural Baselines', '[BOLD] Non-Neural Baselines'], ['1', 'QL', '0.3576', '0.4000', '0.2091', '0.3311', '0.2532', '0.4450', '0.3924', '0.6182'], ['2', 'RM3', '0.38241', '0.42111', '0.23421', '0.3452', '0.27661,2', '0.47331', '[BOLD] 0.44801,3', '0.6339'], ['3', 'L2R (all)', '0.38451', '0.4279', '0.22911', '0.3559', '0.2477', '0.4617', '0.3943', '0.6200'], ['[EMPTY]', '(text)', '0.3547', '0.4027', '0.2072', '0.3294', '0.2394', '0.4456', '0.3824', '0.6091'], ['[EMPTY]', '(text+URL)', '0.3816', '0.4272', '0.2317', '0.3667', '0.2489', '0.4506', '0.3974', '0.6206'], ['[EMPTY]', '(text+hashtag)', '0.3473', '0.4020', '0.2039', '0.3175', '0.2447', '0.4533', '0.3815', '0.5939'], ['[BOLD] Neural Baselines', '[BOLD] Neural Baselines', '[BOLD] Neural Baselines', '[BOLD] Neural Baselines', '[BOLD] Neural Baselines', '[BOLD] Neural Baselines', '[BOLD] Neural Baselines', '[BOLD] Neural Baselines', '[BOLD] Neural Baselines', '[BOLD] Neural Baselines'], ['4', 'DSSM\xa0huang2013learning', '0.1742', '0.2340', '0.1087', '0.1791', '0.1434', '0.2772', '0.2566', '0.4261'], ['5', 'C-DSSM\xa0shen2014learning', '0.0887', '0.1122', '0.0803', '0.1525', '0.0892', '0.1717', '0.1884', '0.2752'], ['6', 'DUET\xa0mitra2017learning', '0.1533', '0.2109', '0.1325', '0.2356', '0.1380', '0.2528', '0.2680', '0.4091'], ['7', 'DRMM\xa0guo2016deep', '0.2635', '0.3095', '0.1777', '0.3169', '0.2102', '0.4061', '0.3440', '0.5424'], ['8', 'K-NRM\xa0xiong2017end', '0.2519', '0.3034', '0.1607', '0.2966', '0.1750', '0.3178', '0.3472', '0.5388'], ['9', 'PACRR\xa0hui2017pacrr', '0.2856', '0.3435', '0.2053', '0.3232', '0.2627', '0.4872', '0.3667', '0.5642'], ['[BOLD] Neural Baselines with Interpolation', '[BOLD] Neural Baselines with Interpolation', '[BOLD] Neural Baselines with Interpolation', '[BOLD] Neural Baselines with Interpolation', '[BOLD] Neural Baselines with Interpolation', '[BOLD] Neural Baselines with Interpolation', '[BOLD] Neural Baselines with Interpolation', '[BOLD] Neural Baselines with Interpolation', '[BOLD] Neural Baselines with Interpolation', '[BOLD] Neural Baselines with Interpolation'], ['10', 'DUET+', '0.3576', '0.4000', '0.22431', '0.36441', '0.27791,3', '0.48781', '0.42191,3', '[BOLD] 0.64671'], ['11', 'DRMM+', '0.3477', '0.4034', '0.2213', '0.3537', '0.2639', '0.4772', '0.4042', '0.6139'], ['12', 'K-NRM+', '0.3576', '0.4000', '0.22771', '0.35201', '0.27211,3', '0.4756', '0.41371,3', '0.63581'], ['13', 'PACRR+', '0.3810', '0.42861', '0.23111', '0.35761', '0.28031,3', '0.49441', '0.41401,3', '0.63581'], ['[BOLD] Our Model', '[BOLD] Our Model', '[BOLD] Our Model', '[BOLD] Our Model', '[BOLD] Our Model', '[BOLD] Our Model', '[BOLD] Our Model', '[BOLD] Our Model', '[BOLD] Our Model', '[BOLD] Our Model'], ['14', 'MP-HCNN', '0.3832', '0.4075', '0.23371', '0.36891', '0.28181,3', '0.52221,3', '0.43041,3', '0.6297'], ['15', 'MP-HCNN+', '[BOLD] 0.40431,2,312', '[BOLD] 0.4293112', '[BOLD] 0.24601,312,13', '[BOLD] 0.37911,2,312,13', '[BOLD] 0.28961,312', '[BOLD] 0.52941,2,312,13', '0.44201,312,13', '0.6394'], ['[EMPTY]', '[EMPTY]', '(+13.1%)', '(+7.3%)', '(+17.6%)', '(+14.5%)', '(+14.3%)', '(+18.9%)', '(+12.6%)', '(+3.4%)']]
Rows are numbered in the first column, where each represents a model or a contrastive condition. We compare our model to three sets of baselines: non-neural, neural, and interpolation. Interpolation methods are denoted by a symbol “+” at the end of the original model name, such as DUET+. Superscripts and subscripts indicate the row indexes for which a metric difference is statistically significant at p<0.05. However, RM3 requires an additional round of retrieval to select terms for query expansion, and thus is substantially slower. LambdaMART achieves effectiveness on par with RM3 when using all the hand-crafted features. From its contrastive variant with only text-based features, we can see that the overlap-based features provide little gain over QL. Comparing the rows “(text+URL)” and “(text+hashtag)” to row “(text)”, adding URL-based features leads to a significant improvement over text-based features, while hashtag-based features seem to bring fewer benefits. This confirms our observation that URLs appear frequently in tweets and contain meaningful relevance signals. Turning our attention to the last three rows, we observe that removing the character representations of either URLs or documents leads to significant drops across all datasets, with larger drops when URLs are removed. This suggests that URLs provide more relevance signals than character-level document modeling. Taking away the entire character-level module causes slightly more effectiveness loss. To conclude, the word-level matching module contributes the most effectiveness, but the character-level matching module still provides complementary and significantly useful signals.
Label Embedding Network: Learning Label Representation for Soft Training of Deep Networks
1710.10393
Table 4: Results of Label Embedding for the IWSLT2015 machine translation task. The evaluation metric is BLEU score (higher is better).
['[BOLD] IWSLT2015', 'BLEU']
[['Stanford NMT (Luong & Manning, 2015 )', '23.3'], ['NMT (greedy) (Luong et\xa0al., 2017 )', '25.5'], ['NMT (beam=10) (Luong et\xa0al., 2017 )', '26.1'], ['Seq2seq-Attention (beam=10)', '25.7'], ['[BOLD] Seq2seq-Attention-LabelEmb (beam=10)', '[BOLD] 26.8 (+1.1)']]
Then, we show experimental results on the IWSLT2015 machine translation task. We measure the quality of the translation by BLEU, following common practice. The proposed method achieves a better BLEU score than the baseline, with an improvement of 1.1 points. To our knowledge, 26.8 is the highest BLEU achieved on this task, surpassing the previous best result of 26.1. From the experimental results, it is clear that the compressed label embedding can improve the results of the Seq-to-Seq model as well, and works for tasks where there is a massive number of labels.
Label Embedding Network: Learning Label Representation for Soft Training of Deep Networks
1710.10393
Table 3: Results of Label Embedding for the LCSTS text summarization task (W: Word model; C: Character model). The evaluation metric is ROUGE score (higher is better).
['[BOLD] LCSTS', 'ROUGE-1', 'ROUGE-2', 'ROUGE-L']
[['Seq2seq (W)\xa0(Hu et\xa0al., 2015 )', '17.7', '8.5', '15.8'], ['Seq2seq (C)\xa0(Hu et\xa0al., 2015 )', '21.5', '8.9', '18.6'], ['Seq2seq-Attention (W)\xa0(Hu et\xa0al., 2015 )', '26.8', '16.1', '24.1'], ['Seq2seq-Attention (C)\xa0(Hu et\xa0al., 2015 )', '29.9', '17.4', '27.2'], ['Seq2seq-Attention (C) (our implementation)', '30.1', '17.9', '27.2'], ['[BOLD] Seq2seq-Attention-LabelEmb (C) (our proposal)', '[BOLD] 31.7 (+1.6)', '[BOLD] 19.1 (+1.2)', '[BOLD] 29.1 (+1.9)']]
First, we show experimental results on the LCSTS text summarization task. The performance is measured by ROUGE-1, ROUGE-2, and ROUGE-L. As we can see, the proposed method performs much better compared to the baselines, with ROUGE-1 score of 31.7, ROUGE-2 score of 19.1, and ROUGE-L score of 29.1, improving by 1.6, 1.2, and 1.9, respectively. In fact, in terms of all of the three metrics, our implementation consistently beats the previous work, and the proposed method could further improve the results.
Resolving Event Coreference with Supervised Representation Learning and Clustering-Oriented Regularization
1805.10985
Table 4: Within-document test set results on ECB+. Note that Lemma is equivalent to Lemma-δ in the within-document setting. Cybulska and Vossen Cybulska and Vossen (2015) did not report the performance of their model in this setting.
['[BOLD] Model', 'R', 'MUC P', 'F', 'R', 'B3 P', 'F', 'CM F', 'R', 'CE P', 'F', 'R', 'BLANC P', 'F', 'CoNLL F']
[['[BOLD] Baselines', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Lemma- [ITALIC] δ', '41', '77', '53', '86', '97', '[BOLD] 92', '85', '92', '82', '87', '65', '86', '71', '77'], ['Unsupervised', '32', '36', '34', '85', '86', '85', '74', '80', '78', '79', '65', '55', '57', '66'], ['[BOLD] Model Variants', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['CCE', '44', '49', '46', '87', '89', '88', '79', '82', '80', '81', '67', '67', '67', '72'], ['CORE', '55', '32', '40', '89', '70', '78', '65', '64', '79', '71', '75', '54', '56', '63'], ['CORE+CCE', '43', '68', '53', '87', '95', '91', '84', '90', '82', '86', '67', '76', '70', '76'], ['CORE+CCE+Lemma', '57', '69', '[BOLD] 63', '90', '94', '[BOLD] 92', '[BOLD] 86', '90', '86', '[BOLD] 88', '73', '78', '[BOLD] 75', '[BOLD] 81']]
These results are obtained by cutting all links drawn across documents for the gold standard chains and the predicted chains. We observe that, across all models, scores on the mention- and entity-based measures are substantially higher than those on the link-based measures (e.g., MUC and BLANC). The usefulness of CORE+CCE+Lemma (which initializes the clustering with the lemma-δ predictions and then continues to cluster with CORE+CCE) is exemplified by the improvements or matches in every measure when compared to both Lemma-δ and CORE+CCE. The most vivid improvements are the 10-point gain in MUC over both models and the 4- and 5-point gains in BLANC, respectively; the higher recall indicates that CORE+CCE+Lemma confidently predicts coreference links that would otherwise have been false negatives.
Resolving Event Coreference with Supervised Representation Learning and Clustering-Oriented Regularization
1805.10985
Table 2: Model comparison based on validation set B3 accuracy with optimized τ cluster-similarity threshold. For CORE+CCE+Lemma (indicated as CORE+CCE+L) we tuned to δ=0.89; for Lemma-δ we tuned to δ=0.67.
['[BOLD] Model', '[ITALIC] λ1', '[ITALIC] λ2', 'B3', '[ITALIC] τ']
[['[BOLD] Baselines', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Unsupervised', '-', '-', '0.590', '0.657'], ['Lemma', '-', '-', '0.597', '-'], ['Lemma- [ITALIC] δ', '-', '-', '0.612', '-'], ['[BOLD] Model Variants', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['CORE+CCE+L', '2.0', '0.0', '[BOLD] 0.678', '0.843'], ['CORE+CCE', '2.0', '2.0', '0.663', '0.776'], ['[EMPTY]', '2.0', '1.0', '0.666', '0.773'], ['[EMPTY]', '2.0', '0.1', '0.665', '0.843'], ['[EMPTY]', '2.0', '0.0', '[BOLD] 0.669', '0.843'], ['[EMPTY]', '0.0', '2.0', '0.662', '0.710'], ['CORE', '2.0', '2.0', '0.631', '0.701'], ['[EMPTY]', '1.0', '1.0', '0.625', '0.689'], ['CCE', '-', '-', '0.644', '0.853']]
Interestingly, we observe that CORE+CCE performs slightly better with λ2=0; i.e., without repulsive regularization. This suggests that enforcing representation similarity is more important than enforcing division, although we cannot conclusively state that repulsive regularization would not be useful for other tasks. Nonetheless, for test set results we use the optimal hyperparameter configurations found during this validation-tuning step; e.g., for CORE+CCE we set λ1=2 and λ2=0.
Resolving Event Coreference with Supervised Representation Learning and Clustering-Oriented Regularization
1805.10985
Table 3: Combined within- and cross-document test set results on ECB+. Measures CM and CE stand for mention-based CEAF and entity-based CEAF, respectively.
['[BOLD] Model', 'R', 'MUC P', 'F', 'R', 'B3 P', 'F', 'CM F', 'R', 'CE P', 'F', 'R', 'BLANC P', 'F', 'CoNLL F']
[['[BOLD] Baselines', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Lemma', '66', '58', '62', '66', '58', '62', '51', '87', '39', '54', '64', '61', '63', '61'], ['Lemma- [ITALIC] δ', '55', '68', '61', '61', '80', '[BOLD] 69', '[BOLD] 59', '73', '60', '66', '62', '80', '[BOLD] 67', '66'], ['Unsupervised', '39', '63', '48', '55', '81', '66', '51', '72', '49', '58', '57', '58', '58', '57'], ['[BOLD] Previous Work', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['CV2015', '43', '77', '55', '58', '86', '[BOLD] 69', '58', '-', '-', '66', '60', '69', '63', '64'], ['[BOLD] Model Variants', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['CCE', '66', '63', '65', '69', '60', '64', '50', '59', '63', '61', '69', '56', '59', '63'], ['CORE', '58', '58', '58', '66', '58', '62', '44', '53', '53', '53', '66', '54', '56', '57'], ['CORE+CCE', '62', '70', '66', '67', '69', '68', '56', '73', '64', '68', '68', '59', '62', '67'], ['CORE+CCE+Lemma', '67', '71', '[BOLD] 69', '71', '67', '[BOLD] 69', '58', '71', '67', '[BOLD] 69', '72', '60', '64', '[BOLD] 69']]
We report results for combined within- and cross-document event coreference. Results for these models are obtained with the hyper-parameter settings that achieved optimal accuracy during validation-tuning.
End-to-End Speech-Translation with Knowledge Distillation:FBK@IWSLT2020
2006.02965
Table 2: Results on Librispeech with different K values, where K is the number of tokens considered for Word KD.
['Top K', 'BLEU']
[['4', '16.43'], ['8', '[BOLD] 16.50'], ['64', '16.37'], ['1024', '16.34']]
In this work, we follow Liu et al., so the teacher model is our MT model and the student is the ST model. Compared to Liu et al., we make the training more efficient by extracting only the top 8 tokens from the teacher distribution. In this way, we can precompute and store the MT output instead of computing it at each training iteration, since its size is reduced by three orders of magnitude. Moreover, this approach does not negatively affect the final score, as shown by Tan et al.
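A rough sketch of how a word-level KD loss can be computed from precomputed top-K teacher probabilities (K=8 here) is shown below; the tensor layout and the renormalization of the truncated teacher distribution are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def topk_word_kd_loss(student_logits, topk_ids, topk_probs):
    """Word-level knowledge distillation restricted to the teacher's top-K tokens.

    student_logits: (T, V) student output scores over the target vocabulary.
    topk_ids:       (T, K) indices of the teacher's K most probable tokens.
    topk_probs:     (T, K) the corresponding teacher probabilities (precomputed
                    and stored offline, so the MT teacher is not run during training).
    """
    # Renormalize the truncated teacher distribution so it sums to 1 (assumption).
    teacher = topk_probs / topk_probs.sum(dim=-1, keepdim=True)
    log_p_student = F.log_softmax(student_logits, dim=-1)
    # Cross-entropy against the teacher, evaluated only on the K stored tokens.
    picked = log_p_student.gather(dim=-1, index=topk_ids)  # (T, K)
    return -(teacher * picked).sum(dim=-1).mean()

T, V, K = 6, 1000, 8
loss = topk_word_kd_loss(torch.randn(T, V),
                         torch.randint(0, V, (T, K)),
                         torch.rand(T, K))
print(loss.item())
```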
End-to-End Speech-Translation with Knowledge Distillation:FBK@IWSLT2020
2006.02965
Table 1: Results on Librispeech with Word KD varying the number of layers.
['2D Self-Attention', 'Encoder', 'Decoder', 'BLEU']
[['2', '6', '6', '16.50'], ['0', '8', '6', '[BOLD] 16.90'], ['2', '9', '6', '17.08'], ['2', '9', '4', '17.06'], ['2', '12', '4', '[BOLD] 17.31']]
The ASR and ST models are a revisited version of the S-Transformer introduced by Di Gangi et al. Moreover, we noticed that adding more layers in the encoder improves the results, while removing a few layers from the decoder does not harm performance. Hence, the models used in this work process the input with two 2D CNNs, whose output is projected into the higher-dimensional space used by the Transformer encoder layers. The projected output is summed with positional embeddings before being fed to the Transformer encoder layers, which use a logarithmic distance penalty.
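A rough skeleton of such an encoder (two 2D CNNs, a linear projection into the Transformer dimension, and positional embeddings added before the Transformer encoder layers) is sketched below; layer sizes are illustrative assumptions and the logarithmic distance penalty on attention is omitted for brevity.

```python
import math
import torch
import torch.nn as nn

class SpeechEncoderSketch(nn.Module):
    """Illustrative encoder: 2D CNNs -> projection -> pos. embeddings -> Transformer."""
    def __init__(self, n_mels=40, d_model=256, n_layers=8, n_heads=4):
        super().__init__()
        self.convs = nn.Sequential(                      # two 2D CNNs, stride 2
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.proj = nn.Linear(16 * (n_mels // 4), d_model)  # to the model dimension
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=1024,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.d_model = d_model

    def positional_embedding(self, length, device):
        pos = torch.arange(length, device=device).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, self.d_model, 2, device=device).float()
                        * (-math.log(10000.0) / self.d_model))
        pe = torch.zeros(length, self.d_model, device=device)
        pe[:, 0::2], pe[:, 1::2] = torch.sin(pos * div), torch.cos(pos * div)
        return pe

    def forward(self, features):                  # features: (batch, time, n_mels)
        x = self.convs(features.unsqueeze(1))     # (batch, 16, time/4, n_mels/4)
        x = x.transpose(1, 2).flatten(2)          # (batch, time/4, 16 * n_mels/4)
        x = self.proj(x) + self.positional_embedding(x.size(1), x.device)
        return self.encoder(x)

out = SpeechEncoderSketch()(torch.randn(2, 200, 40))
print(out.shape)                                  # torch.Size([2, 50, 256])
```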
End-to-End Speech-Translation with Knowledge Distillation:FBK@IWSLT2020
2006.02965
Table 3: Case sensitive BLEU scores for our E2E ST models. Notes: Seq KD: Sequence KD; FT: finetuning on ground-truth datasets; TS: time stretch; Multi ENC: multi-domain model with sum of the language token to the encoder input; Multi DEC: multi-domain model with sum of the language token to the decoder input; DEC PT: pretraining of the decoder with that of an MT model; CTC: multitask training with CTC loss on the 8th encoder layer in addition to the target loss; FT w/o KD: finetuning on all data with label smoothed cross entropy; 5e-3: indicates the learning rate used; AVG 5: average 5 checkpoints around the best.
['Model', 'MuST-C sentence', 'MuST-C VAD', 'IWSLT 2015']
[['Seq KD+FT (w/o TS)', '25.80', '20.94', '17.18'], ['+ FT w/o KD', '27.55', '19.64', '16.93'], ['Multi ENC (w/o TS)', '25.79', '21.37', '19.07'], ['+ FT w/o KD', '27.24', '20.87', '19.08'], ['Multi ENC+DEC PT', '25.30', '20.80', '16.76'], ['+ FT w/o KD', '27.40', '21.90', '18.55'], ['Multi ENC+CTC', '[ITALIC] 27.06', '[ITALIC] 21.58', '[ITALIC] 20.23'], ['+ FT w/o KD (1)', '27.98', '22.51', '20.58'], ['Multi ENC+CTC (5e-3)', '25.44', '20.41', '16.36'], ['+ FT w/o KD', '[BOLD] 29.08', '[BOLD] 23.70', '20.83'], ['+ AVG 5 (2)', '28.82', '23.66', '[BOLD] 21.42'], ['Multi DEC+CTC (5e-3)', '26.10', '19.94', '17.92'], ['+ FT w/o KD', '28.22', '22.61', '18.31'], ['Ensemble (1) and (2)', '[BOLD] 29.18', '[BOLD] 23.77', '[BOLD] 21.83']]
First, we compare the two training schemes examined. [Seq KD+FT] has the same performance as the Multi-domain model with the language token summed to the input [Multi ENC] (or is even slightly better) on the MuST-C test set, but it is significantly worse on the two test sets segmented with VAD. This can be explained by the higher generalization capability of the Multi-domain model. Indeed, Sequence KD + Finetune seems to overfit the training data more; thus, on data coming from a different distribution, as VAD-segmented data are, its performance drops significantly. For this reason, all the following experiments use the Multi-domain training scheme.
Chinese Embedding via Stroke and Glyph Information:A Dual-channel View
1906.04287
Table 1: Performance on word similarity and word analogy task. The dimension of embeddings is set as 300. The evaluation metric is ρ for word similarity and accuracy percentage for word analogy.
['Model', 'Word Similarity', 'Word Similarity', 'Word Analogy 3CosAdd', 'Word Analogy 3CosAdd', 'Word Analogy 3CosAdd', 'Word Analogy 3CosMul', 'Word Analogy 3CosMul', 'Word Analogy 3CosMul']
[['Model', 'wordsim-240', 'wordsim-296', 'Capital', 'City', 'Family', 'Capital', 'City', 'Family'], ['Skipgram Mikolov et\xa0al. ( 2013 )', '0.5670', '0.6023', '[BOLD] 0.7592', '[BOLD] 0.8800', '0.3676', '[BOLD] 0.7637', '[BOLD] 0.8857', '0.3529'], ['CBOW Mikolov et\xa0al. ( 2013 )', '0.5248', '0.5736', '0.6499', '0.6171', '0.3750', '0.6219', '0.5486', '0.2904'], ['GloVe Pennington et\xa0al. ( 2014 )', '0.4981', '0.4019', '0.6219', '0.7714', '0.3167', '0.5805', '0.7257', '0.2375'], ['sisg Bojanowski et\xa0al. ( 2017 )', '0.5592', '0.5884', '0.4978', '0.7543', '0.2610', '0.5303', '0.7829', '0.2206'], ['CWE Chen et\xa0al. ( 2015 )', '0.5035', '0.4322', '0.1846', '0.1714', '0.1875', '0.1713', '0.1600', '0.1583'], ['GWE Su and Lee ( 2017 )', '0.5531', '0.5507', '0.5716', '0.6629', '0.2417', '0.5761', '0.6914', '0.2333'], ['JWE Yu et\xa0al. ( 2017 )', '0.4734', '0.5732', '0.1285', '0.3657', '0.2708', '0.1492', '0.3771', '0.2500'], ['cw2vec Cao et\xa0al. ( 2018 )', '0.5529', '0.5992', '0.5081', '0.7086', '0.2941', '0.5465', '0.7714', '0.2721'], ['DWE (ours)', '[BOLD] 0.6105', '[BOLD] 0.6137', '0.7120', '0.7486', '[BOLD] 0.6250', '0.6765', '0.7257', '[BOLD] 0.6140']]
We can observe that our DWE model achieves the best results on both the wordsim-240 and wordsim-296 datasets in the similarity task, as expected given the particularity of Chinese morphology, but it only improves the accuracy for the Family group in the analogy task.
Simple, Fast Semantic Parsing with a Tensor Kernel
1507.00639
Table 1: Results on the WebQuestions dataset, together with results reported in the literature.
['[EMPTY]', '[BOLD] Average F1 score']
[['Sempre ', '35.7'], ['ParaSempre ', '39.9'], ['Facebook ', '41.8'], ['DeepQA ', '45.3'], ['Tensor kernel with unigrams', '40.1']]
Our system achieves an average F1 score of 40.1%, compared to ParaSempre’s 39.9%. Our system runs faster however, due to the simpler method of generating features. Evaluating using ParaSempre on the development set took 22h31m; using the tensor kernel took 14h44m on a comparable machine.
Recovering Dropped Pronouns in Chinese Conversations via Modeling Their Referents
1906.02128
Table 1: Results in terms of precision, recall and F-score on 16 types of pronouns produced by the baseline systems and variants of our proposed NDPR model. For NRM∗ Zhang et al. (2016), we implement the proposed model as described in the paper.
['Model', 'Chinese SMS P(%)', 'Chinese SMS R(%)', 'Chinese SMS F', 'TC of OntoNotes P(%)', 'TC of OntoNotes R(%)', 'TC of OntoNotes F', 'BaiduZhidao P(%)', 'BaiduZhidao R(%)', 'BaiduZhidao F']
[['MEPR Yang et\xa0al. ( 2015 )', '37.27', '45.57', '38.76', '-', '-', '-', '-', '-', '-'], ['NRM∗ Zhang et\xa0al. ( 2016 )', '37.11', '44.07', '39.03', '23.12', '26.09', '22.80', '26.87', '49.44', '34.54'], ['BiGRU', '40.18', '45.32', '42.67', '25.64', '36.82', '30.93', '29.35', '42.38', '35.83'], ['NDPR-rand', '46.47', '43.23', '43.58', '28.98', '41.50', '33.38', '35.44', '43.82', '37.79'], ['NDPR-PC-BiGRU', '46.34', '46.21', '46.27', '36.69', '40.12', '38.33', '38.42', '48.01', '41.68'], ['NDPR-W', '46.78', '[BOLD] 46.61', '45.76', '38.67', '41.56', '39.64', '38.60', '[BOLD] 50.12', '[BOLD] 43.36'], ['NDPR-S', '46.99', '46.32', '44.89', '37.40', '40.32', '38.81', '39.32', '46.40', '41.53'], ['NDPR', '[BOLD] 49.39', '44.89', '[BOLD] 46.39', '[BOLD] 39.63', '[BOLD] 43.09', '[BOLD] 39.77', '[BOLD] 41.04', '46.55', '42.94']]
We can see that our proposed model and its variants outperform the baseline methods on all these datasets by different margins. Our best model, NDPR, outperforms MEPR by 7.63% in terms of F-score on the Chinese SMS dataset, and outperforms NRM by 16.97% and 8.40% on the OntoNotes and BaiduZhidao datasets respectively. Compared with the degenerate variant model BiGRU, our NDPR model also performs better on all three datasets, which demonstrates the effectiveness of the referent modeling mechanism composed of sentence-level and word-level attention. We attribute this to the fact that there are only concrete pronouns in this dataset: the combination of “我(I)”, “你(singular you)” and “它(it)” accounts for 94.47% of the overall dropped pronoun population, for which the referent can be easily determined by word-level attention. Moreover, the fewer conversation turns in this dataset mean there are few irrelevant referents that need to be filtered out by sentence-level attention.
Recovering Dropped Pronouns in Chinese Conversations via Modeling Their Referents
1906.02128
Table 2: F-scores of our proposed model NDPR and its two variants (NDPR-S, NDPR-W) for concrete and abstract pronouns on the Chinese SMS test set.
['Tag', 'NDPR-S', 'NDPR-W', 'NDPR']
[['他们(masculine they)', '17.05', '23.28', '[BOLD] 24.44'], ['她(she)', '32.35', '33.72', '[BOLD] 35.14'], ['previous utterance', '84.90', '86.08', '[BOLD] 87.55'], ['他(he)', '29.05', '31.20', '[BOLD] 34.92'], ['它(it)', '25.00', '26.67', '[BOLD] 26.95'], ['她们(feminine they)', '0', '0', '[BOLD] 40.00'], ['我(I)', '50.66', '50.90', '[BOLD] 52.98'], ['我们(we)', '31.49', '33.57', '[BOLD] 34.81'], ['你(singular you)', '42.88', '44.15', '[BOLD] 44.31'], ['pleonastic', '25.89', '22.29', '[BOLD] 28.46'], ['generic', '11.61', '11.08', '[BOLD] 16.83'], ['event', '6.15', '0', '[BOLD] 16.27'], ['existential', '34.17', '30.84', '[BOLD] 38.71'], ['你们(plural you)', '0', '0', '[BOLD] 5.41'], ['它们(inanimate they)', '16.00', '[BOLD] 19.15', '13.89']]
In this section, we dive a bit deeper and look at the impact of the attention mechanism on concrete and abstract pronouns, respectively. The best results among these three variants are in boldface, and the better results between NDPR-S and NDPR-W are underlined.
Probabilistic Semantic Retrieval for Surveillance Videos with Activity Graphs
1712.06204
TABLE I: Area-Under-Curve (AUC) of precision-recall curves on VIRAT dataset with human annotated bounding boxes for Bag-of-Words approach (BoW [6]), Manually Specified Graph Matching (MSGM [5]), and our proposed approach.
['Query', 'BoW ', 'MSGM ', 'Proposed']
[['Person dismount', '15.33', '78.26', '[BOLD] 83.93'], ['Person mount', '21.37', '70.61', '[BOLD] 83.94'], ['Object deposit', '26.39', '71.34', '[BOLD] 85.69'], ['Object take-out', '8.00', '72.70', '[BOLD] 80.07'], ['2 person deposit', '14.43', '65.09', '[BOLD] 74.16'], ['2 person take-out', '19.31', '80.00', '[BOLD] 90.00'], ['Group Meeting', '25.20', '82.35', '[BOLD] 88.24'], ['Average', '18.58', '74.34', '[BOLD] 83.72']]
On human annotated data, where we assume no uncertainty at the object level, we can see that both MSGM and the proposed method significantly outperform BoW. The queries all include some level of structural constraint between objects; for example, there is an underlying distance constraint for the people, car and object involved in object deposit. In a cluttered surveillance video where multiple activities occur at the same time, when an algorithm attempts to solve for a bipartite matching between people, cars and objects while ignoring the global spatial relationships between them, unrelated agents from different activities can be chosen, resulting in low detection accuracy for BoW. This shows that global structural relationships, rather than isolated object-level descriptors, are important.
Sub-band Knowledge Distillation Framework for Speech Enhancement
2005.14435
Table 3: Demonstrates the effectiveness of the sub-band knowledge distillation framework. C represents the number of memory cells per layer in LSTM. #Params represents the number of parameters of the model.
['Model', 'C', '#Params (M)', 'PESQ', 'STOI (%)']
[['Noisy', '-', '-', '1.971', '92.106'], ['F', '256', '2.52', '2.420', '93.412'], ['S1', '256', '2.21', '2.404', '93.133'], ['S2', '256', '2.21', '[BOLD] 2.471', '[BOLD] 93.751'], ['F', '512', '9.23', '2.511', '93.817'], ['S1', '512', '8.61', '2.497', '93.540'], ['S2', '512', '8.61', '[BOLD] 2.563', '[BOLD] 94.129']]
Since the input of the F model is the full-band features, the number of parameters of the F model is higher than that of the S1 and S2 models. Whether the number of memory cells is 256 or 512, we find similar conclusions. (1) The performance of the S1 model is slightly worse than that of the F model. This may be because the full-band input brings more context information, which helps the model capture the features on the spectrogram better. However, the performance gap is tiny: the F model is only 0.56 to 0.66% higher on PESQ and 0.29 to 0.30% higher on STOI. This may be because the S1 model can cover 40 frequency bands at a time, which is fairly sufficient to capture most of the local feature information. On the other hand, S1 has a 7% reduction in the parameter count compared to the F model. Considering that the speech enhancement model is usually deployed offline on hardware, this slight performance degradation is normally acceptable. (2) With the guidance of the elite teacher model, the S2 model achieves better results than S1, which shows that the supervision of the teacher models is effective. It is worth mentioning that this improvement does not increase the number of parameters or the computational cost of the model. We also note that although the S2 model likewise does not have complete full-band context information, its performance noticeably exceeds that of model F.
Sub-band Knowledge Distillation Framework for Speech Enhancement
2005.14435
Table 2: The mean square error (MSE, ℓ2 loss) of the teacher models and the student models (without the guidance of the teacher models) with the different number of memory cells per layer in LSTM for each sub-band on the test set. C is the number of memory cells per layer in LSTM.
['C', 'Student Model 0-40', 'Student Model 40-80 (10−4)', 'Student Model 80-120 (10−4)', 'Student Model 120-160 (10−4)', 'Teacher Model 0-40', 'Teacher Model 40-80 (10−4)', 'Teacher Model 80-120 (10−4)', 'Teacher Model 120-160 (10−4)']
[['256', '0.036', '10.027', '3.647', '2.801', '0.031', '9.230', '3.578', '2.204'], ['512', '0.026', '9.701', '3.594', '2.508', '0.019', '9.038', '3.494', '1.792'], ['1024', '0.025', '9.162', '3.295', '2.095', '[BOLD] 0.013', '[BOLD] 8.716', '[BOLD] 3.020', '[BOLD] 1.634']]
C is the number of memory cells. The left half of the table is the result area of the student models, and the right half is the result area of the teacher models. For example, in the result area of the student models, the value in the first column (0 to 40) of the first row (C is 256) shows the result of the student model with 256 memory cells and for the 0 to 40 sub-band. The input to the student model is one of the four sub-bands in the training sample (random reselection for each iteration). The training target of the student model is solely the corresponding sub-band of the clean speech. The value of 0.036 represents the ℓ2 loss between the enhanced test sample and the target. In the result area of the teacher models, the first column (0 to 40) of the first row (C is 256) lists the teacher model result with 64 memory cells and for the 0 to 40 sub-band. The training data is a sub-band with a frequency range of 0 to 40 in each sample, and the target is the corresponding sub-band of the clean speech. The value of 0.031 is the ℓ2 loss between the enhanced test sample and the target. 10−4 in the table means that the value in the table is multiplied by 10−4. In total, we trained 3 student models (3 different numbers of memory cells) and 12 teacher models (3 different numbers of memory cells × 4 sub-bands).
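A minimal sketch of the sub-band setup described above is given below: the magnitude spectrogram is cut into four 40-bin bands, and the student sees one randomly re-selected band per iteration, while each teacher is trained on a fixed band. Array shapes and variable names are illustrative assumptions.

```python
import numpy as np

SUB_BANDS = [(0, 40), (40, 80), (80, 120), (120, 160)]

def split_sub_bands(spectrogram):
    """spectrogram: (n_frames, n_freq_bins) with n_freq_bins >= 160.
    Returns the four 40-bin sub-bands (one per teacher model)."""
    return [spectrogram[:, lo:hi] for lo, hi in SUB_BANDS]

def sample_student_batch(noisy, clean):
    """Randomly re-select one sub-band per iteration for the student model."""
    band = np.random.randint(len(SUB_BANDS))
    lo, hi = SUB_BANDS[band]
    return noisy[:, lo:hi], clean[:, lo:hi], band

# Toy spectrograms (e.g., 161-bin STFT magnitudes), purely for illustration.
noisy = np.abs(np.random.randn(100, 161))
clean = np.abs(np.random.randn(100, 161))
x, y, band = sample_student_batch(noisy, clean)
print(band, x.shape, y.shape)   # e.g. 2 (100, 40) (100, 40)
```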
A Generative Model for Punctuation in Dependency Trees
1906.11298
Table 3: Results of the conditional perplexity experiment (Section 4), reported as perplexity per punctuation slot, where an unpunctuated sentence of n words has n+1 slots. Column “Attn.” is the BiGRU tagger with attention, and “CRF” stands for the BiLSTM-CRF tagger. “Attach” is the ablated version of our model where surface punctuation is directly attached to the nodes. Our full model “+NC” adds NoisyChannel to transduce the attached punctuation into surface punctuation. Dir is the learned direction (Section 2.2) of our full model’s noisy channel PFST: Left-to-right or Right-to-left. Our models are given oracle parse trees T. The best perplexity is boldfaced, along with all results that are not significantly worse (paired permutation test, p<0.05).
['[EMPTY]', 'Attn.', 'CRF', 'Attach', '+NC', 'Dir']
[['Arabic', '1.4676', '1.3016', '1.2230', '[BOLD] 1.1526', 'L'], ['Chinese', '1.6850', '1.4436', '1.1921', '[BOLD] 1.1464', 'L'], ['English', '1.5737', '[BOLD] 1.5247', '1.5636', '[BOLD] 1.4276', 'R'], ['Hindi', '1.1201', '1.1032', '1.0630', '[BOLD] 1.0598', 'L'], ['Spanish', '1.4397', '[BOLD] 1.3198', '[BOLD] 1.2364', '[BOLD] 1.2103', 'R']]
Also, in 4 of 5 languages, allowing a trained NoisyChannel (rather than the identity map) significantly improves the perplexity.
Enabling Cognitive Intelligence Queries in Relational Databases using Low-dimensional Word Embeddings
1603.07185
Table 3: Results from a CI query to find papers with related titles using proximityAvg(), proximityMax(), and proximityTop2Avg()
['[BOLD] Query Results using ProximityAvg()', 'Cos. Dist.']
[['Istvan_Cseri,Indexing XML Data Stored in a Relational Database.,VLDB_2004', '0.4048'], ['Rajeev_Rastogi,DataBlitz A High Performance Main-Memory Storage Manager.,VLDB_1998,', '0.3581'], ['Patricia_G._Selinger,Information Integration and XML in IBM’s DB2.,VLDB_2002', '0.3403'], ['Shinichi_Morishita,Relational-style XML query.,SIGMOD_Conference_2008,', '0.3316'], ['Roy_Goldman,DataGuides Enabling Query Formulation and Optimization in Semistructured Databases.,VLDB_1997', '0.3193'], ['[BOLD] Query Results using ProximityMax()', 'Cos. Dist.'], ['Jerome_Simeon,Implementing Xquery 1.0 The Galax Experience.,VLDB_2003', '1.0'], ['Hong_Su,Semantic Query Optimization in an Automata-Algebra Combined XQuery Engine over XML Streams.,VLDB_2004', '1.0'], ['Roy_Goldman,DataGuides Enabling Query Formulation and Optimization in Semistructured Databases.,VLDB_1997', '0.3677'], ['Bongki_Moon,FiST Scalable XML Document Filtering by Sequencing Twig Patterns.,VLDB_2005', '0.3661'], ['Istvan_Cseri,Indexing XML Data Stored in a Relational Database.,VLDB_2004', '0.35'], ['[BOLD] Query Results using ProximityTop2Avg()', 'Cos. Dist.'], ['Hong_Su,Semantic Query Optimization in an Automata-Algebra Combined XQuery Engine over XML Streams.,VLDB_2004', '0.6757'], ['Jerome_Simeon,Implementing Xquery 1.0 The Galax Experience.,VLDB_2003', '0.6530'], ['Roy_Goldman,DataGuides Enabling Query Formulation and Optimization in Semistructured Databases.,VLDB_1997', '0.3532'], ['Quanzhong_Li,Indexing and Querying XML Data for Regular Path Expressions.,VLDB_2001', '0.34'], ['Albrecht_Schmidt_0002,XMark A Benchmark for XML Data Management., VLDB_2002', '0.34']]
In this query, the title of the paper with number 471 is “Native Xquery processing in oracle XMLDB”. In this query, vectors for different tokens in a title are compared in three ways to identify similar titles: proximityAvg() selects titles whose average vectors are close, whereas proximityMax() uses the maximum closeness between individual tokens to select related titles. Finally, proximityTop2Avg() uses the average of the top two cosine distances of its two token-sequence arguments, thereby ’mixing’ maximum and average. proximityMax() chooses two papers with Xquery in their titles as this is the closest match (1.0), while proximityAvg() returns a broader set of papers (note that there is no XML or relational token in the query title). proximityTop2Avg() uses the two top tokens to determine similarity, and thus produces a different ordering. For example, the closest title chosen by proximityTop2Avg() contains both XQuery and XML tokens.
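The three proximity UDFs described above can be sketched in plain Python over a pretrained word-embedding table, as below; this only illustrates the comparison semantics (average-vector similarity, maximum pairwise similarity, and top-2-average pairwise similarity) and is not the database-resident implementation. The toy vectors and tokens are hypothetical.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def pairwise_cosines(tokens_a, tokens_b, vectors):
    """All cosine similarities between token vectors of two titles
    (tokens missing from the embedding table are skipped)."""
    return [cosine(vectors[a], vectors[b])
            for a in tokens_a if a in vectors
            for b in tokens_b if b in vectors]

def proximity_avg(tokens_a, tokens_b, vectors):
    """Similarity between the averaged token vectors of the two titles."""
    va = [vectors[t] for t in tokens_a if t in vectors]
    vb = [vectors[t] for t in tokens_b if t in vectors]
    if not va or not vb:
        return 0.0
    return cosine(np.mean(va, axis=0), np.mean(vb, axis=0))

def proximity_max(tokens_a, tokens_b, vectors):
    """Maximum closeness between any pair of individual tokens."""
    sims = pairwise_cosines(tokens_a, tokens_b, vectors)
    return max(sims) if sims else 0.0

def proximity_top2_avg(tokens_a, tokens_b, vectors):
    """Average of the top two pairwise similarities ('mixing' max and average)."""
    sims = sorted(pairwise_cosines(tokens_a, tokens_b, vectors), reverse=True)
    return sum(sims[:2]) / 2 if len(sims) >= 2 else proximity_max(tokens_a, tokens_b, vectors)

# Toy embedding table with hypothetical vectors:
vecs = {w: np.random.randn(50) for w in ["xquery", "xml", "relational", "indexing"]}
a = ["native", "xquery", "processing", "xmldb"]
b = ["indexing", "xml", "data", "relational", "database"]
print(proximity_avg(a, b, vecs), proximity_max(a, b, vecs), proximity_top2_avg(a, b, vecs))
```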
Bidirectional Recurrent Models for Offensive Tweet Classification
1903.08808
Table 1: Holdout-validation macro F1 scores and accuracy of all models for the three different tasks.
['[EMPTY]', '[BOLD] Task A [BOLD] F1', '[BOLD] Task A [BOLD] Acc', '[BOLD] Task B [BOLD] F1', '[BOLD] Task B [BOLD] Acc', '[BOLD] Task C [BOLD] F1', '[BOLD] Task C [BOLD] Acc']
[['[BOLD] biLSTM', '0.7382', '0.7801', '0.6115', '0.5753', '0.5001', '0.6966'], ['[BOLD] CNN-biLSTM', '0.6346', '0.6473', '0.5521', '0.5068', '0.4322', '0.6142'], ['[BOLD] biLSTM-CNN', '0.7170', '0.7562', '0.5496', '0.4903', '0.4738', '0.6863'], ['[BOLD] biGRU ⊕ biLSTM', '0.7285', '0.7544', '0.5963', '0.5722', '0.5052', '0.7012']]
For most tasks, the simplest biLSTM model outperformed the other architectures, with the more complex biGRU⊕biLSTM closely following. We learned that, at least for this task and data set, more complex models did not necessarily result in better performance. The biLSTM model’s macro F1 scores on the OffensEval private test set were 0.77 for Task A, 0.64 for Task B and 0.52 for Task C. In hindsight, however, the model submitted was not optimal as it was not re-trained on the entire dataset and therefore only used 80% of the training data. We speculate that re-training the model on 100% of the provided dataset would have yielded significantly better results.
Controllable Unsupervised Text Attribute Transfer via Editing Entangled Latent Representation
1905.12926
Table 4: Results for multi-aspect attribute transfer. The kappa coefficient of the three workers is 0.67 ∈ (0.61, 0.80), which means that the consistency is substantial.
['Aspects', 'Acc', 'Att', 'Con', 'Gra']
[['Appearance', '90.2%', '3.2', '3.5', '3.8'], ['Aroma', '89.3%', '3.4', '3.9', '3.7'], ['Palate', '91.2%', '3.1', '3.8', '3.7'], ['Taste', '88.2%', '3.4', '3.7', '3.6'], ['Overall', '87.3%', '3.6', '4.0', '3.8']]
We see that the achieved sentiment accuracy is high, which means that our model can perform sentiment transfer over multiple aspects at the same time. Considering the results of the human evaluation, our model shows good fluency and content preservation when performing sentiment transfer over multiple aspects. To the best of our knowledge, this is the first work investigating the aspect-based attribute transfer task.
GPT-too: A language-model-first approach for AMR-to-text generation
2005.09123
Table 1: Results on the LDC2017T10 development set using GPT-2 S(mall) and M(edium) with Rec(onstruction) loss (see §2) for different AMR representations (see §4).
['Model', 'Input', 'BLEU', 'chrF++']
[['GPT-2S\xa0Rec.', 'Only nodes AMR', '9.45', '41.59'], ['GPT-2S\xa0Rec.', 'Lin. AMR w/o edges.', '11.35', '43.25'], ['GPT-2S\xa0Rec.', 'Lin. AMR w/edges.', '20.14', '53.12'], ['GPT-2S\xa0Rec.', 'Penman AMR', '22.37', '53.92'], ['GPT-2M\xa0Rec.', 'Lin. AMR w/edges.', '22.86', '55.04'], ['GPT-2M\xa0Rec.', 'Penman AMR', '27.99', '61.26']]
Edge information, indicating relations between concepts, seems also to play a fundamental role since its absence strongly decreases performance in both DFS and PENMAN representations. Penman notation was chosen for the rest of the experiments.
GPT-too: A language-model-first approach for AMR-to-text generation
2005.09123
Table 2: Results on the LDC2017T10 development set. Rec(onstruction) uses the AMR reconstruction term (see §2) whereas Conditional does not.
['Approach', 'Decoding', 'BLEU', 'chrF++']
[['GPT-2M\xa0Conditional', 'Greedy', '25.73', '57.2'], ['GPT-2M\xa0Rec.', 'Greedy', '30.41', '61.36'], ['GPT-2M\xa0Rec.', 'BEAM', '31.8', '62.56'], ['GPT-2M\xa0Rec.', 'BEAM 10', '[BOLD] 32.32', '62.79'], ['GPT-2M\xa0Rec.', 'Sampling', '28.75', '61.19']]
The model trained using this additional term achieves 30.41 BLEU and 61.36 chrF++, as opposed to 25.73 BLEU and 57.2 chrF++ without the term. We therefore use a reconstruction term training in the rest of the experiments. With beam size 10, we obtain 32.32 BLEU and 62.79 chrF++. With nucleus sampling at a cumulative probability mass of 0.9, performance drops to 28.75 BLEU and 61.19 chrF++. Finally, cycle-consistency re-ranking of the beam search outputs improves performance (33.57 BLEU, 64.86 chrF++) over the one best output.
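For clarity, a generic implementation of nucleus (top-p) sampling at cumulative probability mass 0.9, the sampling variant compared against greedy and beam decoding above, is sketched below; it is not the authors' decoding code.

```python
import torch

def nucleus_sample(logits, p=0.9):
    """Sample the next token from the smallest set of tokens whose
    cumulative probability exceeds p (nucleus / top-p sampling)."""
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_ids = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # Keep tokens up to (and including) the one that pushes the mass past p.
    cutoff = int(torch.searchsorted(cumulative, torch.tensor(p)).item()) + 1
    kept_probs = sorted_probs[:cutoff] / sorted_probs[:cutoff].sum()
    choice = torch.multinomial(kept_probs, num_samples=1)
    return int(sorted_ids[choice].item())

vocab_logits = torch.randn(50257)   # e.g. a GPT-2-sized vocabulary
print(nucleus_sample(vocab_logits, p=0.9))
```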
On the Variance of the Adaptive Learning Rate and Beyond
1908.03265
Table 2: Performance on CIFAR10 (lr = 0.1).
['1-4 steps', '5-8 steps', '8+ steps', 'test acc', 'train loss', 'train error']
[['RAdam', 'RAdam', 'RAdam', '91.08', '0.021', '0.74'], ['Adam (w. divergent var.)', 'RAdam', 'RAdam', '89.98', '0.060', '2.12'], ['SGD', 'Adam (w. convergent var.)', 'RAdam', '90.29', '0.038', '1.23']]
As a byproduct of the mathematical derivation, RAdam degenerates to SGD with momentum in the first several updates. Intuitively, updates with divergent adaptive learning rate variance could be more damaging than the ones with converged variance, as divergent variance implies more instability. As a case study, we performed experiments on the CIFAR10 dataset. The optimizer fails to obtain an equally reliable model when changing the first 4 updates to Adam, yet the influence of switching is less deleterious when we change updates 5-8 instead. This result verifies our intuition and is in agreement with our theory: the first few updates could be more damaging than later updates. That said, we still want to emphasize that this part (downgrading to SGDM) is only a minor part of our algorithm design, whereas our main focus is on the mechanism of warmup and the derivation of the rectification term.
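The case study above (switching which optimizer handles the first few updates) can be reproduced with a simple per-step optimizer switch, sketched below; the model, data, and the use of torch.optim.RAdam (available in recent PyTorch) are assumptions for illustration.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                     # stand-in for the CIFAR-10 model
criterion = nn.CrossEntropyLoss()
warm_opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
main_opt = torch.optim.RAdam(model.parameters(), lr=0.1)   # lr = 0.1 as in the table
SWITCH_AFTER = 4                             # first 1-4 steps use the warm-up optimizer

for step in range(100):
    x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))  # toy batch
    optimizer = warm_opt if step < SWITCH_AFTER else main_opt
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```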
On the Variance of the Adaptive Learning Rate and Beyond
1908.03265
Table 1: BLEU score on Neural Machine Translation.
['Method', 'IWSLT’14 DE-EN', 'IWSLT’14 EN-DE', 'WMT’16 EN-DE']
[['Adam with warmup', '34.66±0.014', '28.56±0.067', '27.03'], ['RAdam', '34.76±0.003', '28.48±0.054', '27.27']]
With a consistent adaptive learning rate variance, our proposed method achieves similar performance to that of previous state-of-the-art warmup heuristics. It verifies our intuition that the problematic updates of Adam are indeed caused by the undesirably large variance in the early stage.
Hero: Hierarchical Encoder for Video+Language Omni-representation Pre-training
2005.00200
Table 2: Comparison between a flat BERT-like encoder (F-Trm), a Hierarchical Transfomrer (H-Trm) baseline and Hero using TVR and TVQA validation set as benchmarks. Results in the last two rows are obtained from pre-training the models with MLM + MNCE + FOM + VSM on TV Dataset. For simplicity, we report only video moment retrieval for TVR.
['Pre-training', 'Model', 'R@1', 'TVR R@10', 'R@100', 'TVQA Acc.']
[['No', 'F-Trm', '1.99', '7.76', '13.26', '31.80'], ['No', 'H-Trm', '2.97', '10.65', '18.68', '70.09'], ['No', 'Hero', '2.98', '10.65', '18.25', '70.65'], ['Yes', 'H-Trm', '3.12', '11.08', '18.42', '70.03'], ['Yes', 'Hero', '[BOLD] 4.44', '[BOLD] 14.69', '[BOLD] 22.82', '[BOLD] 72.75']]
(i) When no pre-training is applied, F-Trm is much worse than Hero on both tasks. H-Trm achieves comparable results to Hero on TVR, but worse on TVQA. Unlike F-Trm, H-Trm and Hero explicitly utilize the inherent temporal alignment between the two modalities of videos, which is uniquely important for video+language tasks. (ii) With pre-training, Hero shows significant improvement over H-Trm. Our hypothesis is that with the hierarchical design, Hero can capture cross-modal interactions between visual frames and their local textual context better than H-Trm. Such cross-modality joint understanding of visual and textual contexts is critical for video-based retrieval and QA tasks. (iii) Pre-training lifts Hero performance by a large margin, but is not very helpful for H-Trm. These results provide strong evidence that the cross-modal interactions and temporal alignments between visual frames and their local textual context learned by Hero are essential for these video+language tasks.
Hero: Hierarchical Encoder for Video+Language Omni-representation Pre-training
2005.00200
Table 1: Evaluation on pre-training tasks and datasets using TVR, TVQA, Howto100M-R and Howto100M-QA validation set as benchmarks. Dark and light grey colors highlight the top and second best results across all the tasks trained with TV Dataset. The best results are in bold. For simplicity, we only report video moment retrieval results for TVR and Howto100M-R.
['Pre-training Data', '[EMPTY]', 'Pre-training Tasks', 'TVR R@1', 'TVR R@10', 'TVR R@100', 'TVQA Acc.', 'Howto100M-R R@1', 'Howto100M-R R@10', 'Howto100M-R R@100', 'Howto100M-QA Acc.']
[['TV', '1', 'MLM', '2.92', '10.66', '17.52', '71.25', '2.06', '9.08', '14.45', '76.42'], ['TV', '2', 'MLM + MNCE', '3.13', '10.92', '17.52', '71.99', '2.15', '9.27', '14.98', '76.95'], ['TV', '3', 'MLM + MNCE + FOM', '3.09', '10.27', '17.43', '72.54', '2.36', '9.85', '15.97', '77.12'], ['TV', '4', 'MLM + MNCE + FOM + VSM', '[BOLD] 4.44', '[BOLD] 14.69', '[BOLD] 22.82', '72.75', '2.78', '10.41', '[BOLD] 18.77', '77.54'], ['TV', '5', 'MLM + MNCE + FOM + VSM + MFFR', '[BOLD] 4.44', '14.29', '22.37', '72.75', '2.73', '10.12', '18.05', '77.54'], ['TV & Howto100M', '6', 'MLM + MNCE + FOM + VSM', '4.34', '13.97', '21.78', '[BOLD] 74.24', '[BOLD] 2.98', '[BOLD] 11.16', '17.55', '[BOLD] 77.75']]
To evaluate the effectiveness of each pre-training task, we conduct ablation experiments through pre-training on TV dataset only. When MLM, MNCE and FOM are jointly trained (L3), there is a large performance gain in accuracy on TVQA and significant improvement on the two Howto100M downstream tasks. Comparable results are achieved on TVR. This indicates that FOM, which models sequential characteristics of video frames, can effectively benefit downstream tasks that rely on temporal reasoning (such as QA tasks).
Hero: Hierarchical Encoder for Video+Language Omni-representation Pre-training
2005.00200
Table 3: Results on four downstream tasks: TVR, Howto100M-R, TVQA and Howto100M-QA, compared with task-specific state-of-the-art method: XML for TVR and STAGE for TVQA. Only video moment retrieval results are reported for TVR and Howto100M-R.
['Method', 'TVR R@1', 'TVR R@10', 'TVR R@100', 'Howto100M-R R@1', 'Howto100M-R R@10', 'Howto100M-R R@100', 'TVQA Acc.', 'Howto100M-QA Acc.']
[['XML\xa0(Lei et al., 2020 )', '2.70', '8.93', '15.34', '2.06', '8.96', '13.27', '-', '-'], ['STAGE\xa0(Lei et al., 2019 )', '-', '-', '-', '-', '-', '-', '70.50', '-'], ['Hero w/o pre-training', '2.98', '10.65', '18.42', '2.17', '9.38', '15.65', '70.65', '76.89'], ['Hero w/ pre-training', '[BOLD] 4.34', '[BOLD] 13.97', '[BOLD] 21.78', '[BOLD] 2.98', '[BOLD] 11.16', '[BOLD] 17.55', '[BOLD] 74.24', '[BOLD] 77.75']]
First, we compare with XML. Results show that our model consistently outperforms XML on both TVR and Howto100M-R, with or without pre-training.
Differentially Private Distributed Learning for Language Modeling Tasks
1712.07473
Table 7: Results of the Lilliefors test
['Experiment', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10']
[['ˆ [ITALIC] α', '15.8', '20.9', '15.1', '16.6', '16.5', '17.6', '14.9', '19.2', '15.6', '15.2'], ['ˆ [ITALIC] C', '3.25', '5.64', '2.02', '2.48', '2.70', '4.19', '1.47', '3.31', '1.65', '1.83'], ['KS statistic', '0.49', '0.91', '0.48', '0.62', '0.83', '0.59', '[BOLD] 1.39', '0.41', '0.93', '0.51'], ['Experiment', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20'], ['ˆ [ITALIC] α', '16.5', '14.4', '19.5', '18.2', '16.2', '17.2', '17.3', '14.8', '17.1', '20.5'], ['ˆ [ITALIC] C', '3.00', '1.53', '3.67', '2.20', '3.42', '2.66', '1.68', '2.18', '2.87', '4.60'], ['KS statistic', '0.76', '0.89', '0.66', '0.94', '0.67', '0.85', '0.73', '0.97', '0.65', '0.94']]
The critical value for the Lilliefors test at the 5% significance level is 1.08. In 19 cases out of 20 the Lilliefors test fails to reject the null hypothesis. Exact values of the KS statistics and of Hill's estimators ˆα and ˆC are given in the table. The statistic converges to a distribution with smaller critical values at the same significance levels because we overfit on the sample data when the estimator ¯r is plugged in. We chose a 5% significance level, for which the critical value is 1.08. In 19 cases out of 20 the Lilliefors test failed to reject the null hypothesis at the 5% significance level. From this, it is easy to derive that (ε, δ)-differential privacy is provided by the values of ε and δ that satisfy
Differentially Private Distributed Learning for Language Modeling Tasks
1712.07473
Table 1: Random rehearsal vs learning without forgetting. For LwF mode λ is a coefficient of the ground truth probability distribution in the loss function (1)-(2). For random rehearsal mode λ is a portion of user training data in on-device training.
['Method', 'Standard English dataset (Wikipedia) PPL', 'Standard English dataset (Wikipedia) KSS, %', 'User dataset (Twitter) PPL', 'User dataset (Twitter) KSS, %', 'Av. PPL']
[['Initial server model', '100.1', '67.9', '336.0', '49.7', '192.6'], ['Random rehearsal, [ITALIC] λ=1/4', '121.3', '66.3', '127.9', '56.9', '124.8'], ['Random rehearsal, [ITALIC] λ=1/2', '131.1', '65.9', '109.7', '58.3', '[BOLD] 119.1'], ['Random rehearsal, [ITALIC] λ=3/4', '149.0', '64.8', '99.7', '59.0', '119.9'], ['Learning without forgetting, [ITALIC] λ=1/4', '128.4', '66.0', '162.8', '54.9', '146.0'], ['Learning without forgetting, [ITALIC] λ=1/2', '147.0', '64.9', '121.7', '57.5', '132.7'], ['Learning without forgetting, [ITALIC] λ=3/4', '186.5', '63.1', '101.1', '59.2', '133.9'], ['On-device re-training, [ITALIC] λ=1', '265.1', '60.2', '93.4', '59.7', '150.8']]
We see that the performance gap between the standard English and the user test sets can be considerably reduced at the cost of performance degradation on the first dataset. The best average perplexity is reached with the random rehearsal method and λ=0.5. We believe that the comparably inferior performance of the LwF method can be explained by the fact that the soft labels used by LwF give a poor approximation of the true word distribution of general English, so adding a small portion of true data gives better results in terms of knowledge preservation.
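A minimal sketch of the two on-device update schemes compared above is given below: random rehearsal composes a training batch with a portion λ of user samples and the rest from stored general-English samples, while learning without forgetting weights the ground-truth loss by λ and a soft-label loss from the frozen server model by 1−λ. The exact batch composition and loss details are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def random_rehearsal_batch(user_batch, rehearsal_batch, lam):
    """Compose a batch with a portion `lam` of user data, the rest drawn
    from stored general-English (rehearsal) samples."""
    n_user = int(lam * len(user_batch))
    return user_batch[:n_user] + rehearsal_batch[: len(user_batch) - n_user]

def lwf_loss(student_logits, targets, teacher_logits, lam):
    """Learning without forgetting: weight the ground-truth cross-entropy by `lam`
    and the cross-entropy against the frozen server model's soft labels by 1 - lam."""
    ce = F.cross_entropy(student_logits, targets)
    soft = -(F.softmax(teacher_logits, dim=-1)
             * F.log_softmax(student_logits, dim=-1)).sum(dim=-1).mean()
    return lam * ce + (1.0 - lam) * soft

# Toy usage:
batch = random_rehearsal_batch(["u1", "u2", "u3", "u4"], ["g1", "g2", "g3", "g4"], lam=0.5)
print(batch)                                    # ['u1', 'u2', 'g1', 'g2']
logits, teacher = torch.randn(16, 5000), torch.randn(16, 5000)
targets = torch.randint(0, 5000, (16,))
print(lwf_loss(logits, targets, teacher, lam=0.5).item())
```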
Differentially Private Distributed Learning for Language Modeling Tasks
1712.07473
Table 2: Averaging vs transfer learning for server-side model update.
['Method', 'Standard English dataset (Wikipedia) PPL', 'Standard English dataset (Wikipedia) KSS, %', 'User dataset (Twitter) PPL', 'User dataset (Twitter) KSS, %', 'Av. PPL']
[['Initial server model', '100.1', '67.9', '336.0', '49.7', '192.6'], ['TL on generated data (1-cycle)', '109.2', '67.2', '259.7', '50.8', '174.4'], ['TL on generated data (5-cycles)', '112.3', '67.0', '246.0', '51.2', '171.6'], ['TL on real data', '108.7', '67.2', '261.2', '50.7', '174.6'], ['Model averaging (1 round)', '102.8', '67.7', '233.8', '51.9', '[BOLD] 160.3'], ['Model averaging (300 rounds)', '105.5', '67.3', '109.3', '58.4', '[BOLD] 107.5']]
We saw no significant differences between transfer learning on real and on generated data. The difference between transfer learning and averaging is more noticeable but still not large. At the same time, model averaging is much more computationally efficient, since transfer learning requires calculating labels from each of the teacher models. After 300 rounds of model updates with 3000 nodes (10 nodes per round) we ended up with an 8.7 absolute gain in KSS on the user data test with only a 0.6 absolute KSS drop on the standard English data test.
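Server-side model averaging, as compared with transfer learning above, can be sketched as a uniform element-wise average of the node models' parameters (a federated-averaging-style round); uniform weighting and the toy models are assumptions.

```python
import copy
import torch
import torch.nn as nn

def average_models(node_models):
    """Return a new model whose parameters are the uniform average of the
    parameters of the on-device (node) models collected in one round.
    Note: non-float buffers (e.g. BatchNorm counters) would need special handling."""
    averaged = copy.deepcopy(node_models[0])
    avg_state = averaged.state_dict()
    for name in avg_state:
        avg_state[name] = torch.stack(
            [m.state_dict()[name] for m in node_models]).mean(dim=0)
    averaged.load_state_dict(avg_state)
    return averaged

# Toy example: 10 nodes per round, each with a locally re-trained copy.
nodes = [nn.Linear(8, 4) for _ in range(10)]
server_model = average_models(nodes)
print(sum(p.numel() for p in server_model.parameters()))   # 36
```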
Multi-Task Learning for Sequence Tagging: An Empirical Study
1808.04151
Table 6: F1 scores for Multi-Dec. We compare All with All-but-one settings (All - ⟨task⟩). We test on each task in the columns. Beneficial settings are in green. Harmful setting are in red.
['[EMPTY]', 'upos', 'xpos', 'chunk', 'ner', 'mwe', 'sem', 'semtr', 'supsense', 'com', 'frame', 'hyp', '#↑', '#↓']
[['All', '95.04', '94.31', '93.44', '86.38', '61.43', '71.53', '74.26', '68.1', '74.54', '59.71', '51.41', '[EMPTY]', '[EMPTY]'], ['All - upos', '[EMPTY]', '94.03', '93.59', '86.03', '61.28', '70.87', '73.54', '68.27', '74.42', '58.47', '51.13', '0', '0'], ['All - xpos', '94.57 ↓', '[EMPTY]', '93.57', '86.04', '61.91', '71.12', '74.03', '67.99', '74.36', '60.16', '51.65', '0', '1'], ['All - chunk', '94.84 ↓', '94.46', '[EMPTY]', '86.05', '61.01', '71.07', '73.97', '68.26', '74.2', '60.01', '50.27', '0', '1'], ['All - ner', '94.81 ↓', '94.3', '93.59', '[EMPTY]', '62.69', '70.82', '73.51 ↓', '68.16', '74.08', '59.17', '50.86', '0', '2'], ['All - mwe', '94.93 ↓', '94.45', '93.71', '86.21', '[EMPTY]', '71.01', '73.61 ↓', '68.18', '74.7', '59.23', '50.83', '0', '2'], ['All - sem', '94.82', '94.34', '93.63', '85.81', '61.17', '[EMPTY]', '71.97 ↓', '67.36', '74.31', '58.73', '50.93', '0', '1'], ['All - semtr', '94.83', '94.35', '93.58', '86.11', '63.04', '69.72 ↓', '[EMPTY]', '68.17', '74.2', '59.49', '51.27', '0', '1'], ['All - supsense', '94.97', '94.54', '93.67', '86.43', '60.51', '71.22', '73.86 ↓', '[EMPTY]', '74.24', '59.23', '50.86', '0', '1'], ['All - com', '95.19 ↑', '94.69 ↑', '93.67', '86.6', '61.95', '72.38 ↑', '74.75 ↑', '68.67', '[EMPTY]', '62.37 ↑', '50.28', '5', '0'], ['All - frame', '95.15', '94.57', '93.7', '85.9', '62.62', '71.48', '74.24', '68.47', '75.03', '[EMPTY]', '50.89', '0', '0'], ['All - hyp', '94.93', '94.53', '93.78 ↑', '86.31', '62.04', '71.22', '74.02', '68.46', '74.62', '59.69', '[EMPTY]', '1', '0'], ['#↑', '1', '1', '1', '0', '0', '1', '1', '0', '0', '1', '0', '[EMPTY]', '[EMPTY]'], ['#↓', '4', '0', '0', '0', '0', '1', '4', '0', '0', '0', '0', '[EMPTY]', '[EMPTY]']]
How much does one particular task contribute to the performance of All MTL? To investigate this, we remove one task at a time and train the rest jointly. We find that upos, sem and semtr are in general sensitive to a task being removed from All MTL. Moreover, at least one task significantly contributes to the success of All MTL at some point; if we remove it, the performance will drop. On the other hand, com generally negatively affects the performance of All MTL as removing it often leads to performance improvement.
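The all-but-one protocol behind Table 6 can be summarised in a few lines; this is an illustrative sketch only, with hypothetical train/evaluate callables rather than the authors' training code.

```python
def all_but_one(tasks, train_fn, evaluate_fn):
    """Train on all tasks jointly, then retrain leaving out each task in turn."""
    results = {"All": evaluate_fn(train_fn(tasks))}
    for held_out in tasks:
        remaining = [t for t in tasks if t != held_out]
        results[f"All - {held_out}"] = evaluate_fn(train_fn(remaining))
    return results
```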
Neural Machine Translation Decoding with Terminology Constraints
1805.03750
Table 3: Bleu scores and speed ratios relative to unconstrained Lnmt for production system with up to c constraints per sentence (newstest2017). A: secondary attention, B, C: allow 1 or 2 extra tokens, respectively (Section 2.3). Dict (v2∗) refers to decoding with attentions but without A, B or C.
['[ITALIC] eng-ger-wmt17', '[BOLD] Bleu [ITALIC] c=2', 'speed ratio [ITALIC] c=2', '[BOLD] Bleu [ITALIC] c=3', 'speed ratio [ITALIC] c=3', '[BOLD] Bleu [ITALIC] c=4', 'speed ratio [ITALIC] c=4']
[['Lnmt', '26.7', '1.00', '26.7', '1.00', '26.7', '1.00'], ['+ dict (v1)', '28.2', '0.20', '28.4', '0.14', '28.5', '0.11'], ['+ dict (v2∗)', '27.8', '0.69', '28.0', '0.66', '28.1', '0.59'], ['+ A', '28.0', '0.65', '28.2', '0.61', '28.2', '0.54'], ['+ B', '28.4', '0.27', '28.6', '0.24', '28.7', '0.21'], ['+ C', '28.5', '0.21', '28.6', '0.19', '28.7', '0.17']]
Rows two and three of Table 3 confirm that the reduced computational complexity of our approach yields faster decoding speeds than the approach of Anderson et al. while incurring a small decrease in Bleu. Moreover, it compares favourably for larger numbers of constraints per sentence: v2∗ is 3.5x faster than v1 for c=2 and more than 5x faster for c=4. Relaxing the restrictions of decoding with attentions improves the Bleu scores but increases runtime. However, the slowest v2 configuration is still faster than v1. The optimal trade-off between quality and speed is likely to differ for each language pair.
S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension
1706.04815
Table 6: The performance on MS-MARCO development set of end-to-end methods.
['[BOLD] Method', '[BOLD] ROUGE-L']
[['S2S (Question)', '8.9'], ['S2S (Question + All Passages)', '28.75'], ['S2S (Question + Selected Passage)', '37.70'], ['Matching + S2S', '6.28']]
The authors of MS-MARCO publish a baseline that trains a sequence-to-sequence model on the question and answer, which only achieves 8.9 in terms of ROUGE-L. Adding all passages to the sequence-to-sequence model clearly improves the result to 28.75. Then we use only the question and the selected passage to generate the answer. The only difference from our synthesis model is that we add the position features to the basic sequence-to-sequence model. The result is still worse than our synthesis model by a large margin, which shows that the matching between question and passage is very important for generating the answer. Next, we build an end-to-end framework combining matching and generation. The above results show the effectiveness of our model, which solves this task in two steps. In the future, we hope reinforcement learning can help connect evidence extraction and answer synthesis.
S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension
1706.04815
Table 2: The performance on the MS-MARCO test set. *Using the ensemble result of extraction models as the input of the synthesis model.
['[BOLD] Method', '[BOLD] ROUGE-L', '[BOLD] BLEU-1']
[['FastQAExt', '33.67', '33.93'], ['Prediction', '37.33', '40.72'], ['ReasoNet', '38.81', '39.86'], ['R-Net', '42.89', '42.22'], ['S-Net (Extraction)', '41.45', '44.08'], ['S-Net (Extraction, Ensemble)', '42.92', '44.97'], ['S-Net', '45.23', '43.78'], ['[BOLD] S-Net*', '[BOLD] 46.65', '[BOLD] 44.78'], ['Human Performance', '47', '46']]
Our extraction model achieves 41.45 and 44.08 in terms of ROUGE-L and BLEU-1, respectively. We sum the probability at each position of each single model to decide the ensemble result. Finally we select 13 models for the ensemble, which achieves 42.92 and 44.97 in terms of ROUGE-L and BLEU-1, respectively, the state-of-the-art results among extraction models. Then we test our synthesis model on the extracted evidence. Our synthesis model achieves improvements of 3.78% and 3.73% in terms of ROUGE-L over the single and ensemble models, respectively. Our best result reaches 46.65 in terms of ROUGE-L and 44.78 in terms of BLEU-1, which outperforms all existing methods by a large margin and is very close to human performance. Moreover, we observe that our method only achieves a significant improvement in terms of ROUGE-L compared with our baseline. The reason is that our synthesis model works better when the answer is short, which has almost no effect on BLEU since BLEU is normalized over all questions.
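A hedged sketch of the ensembling rule described above ("we sum the probability at each position of each single model"): per-position start and end probabilities are summed across models and the highest-scoring span is returned. The span-scoring rule shown here (adding the summed start and end scores) is an assumption for illustration, not the authors' exact code.

```python
import numpy as np


def ensemble_best_span(start_probs_per_model, end_probs_per_model, max_len=30):
    """start/end_probs_per_model: lists of 1-D probability arrays, one per model."""
    start = np.sum(start_probs_per_model, axis=0)  # summed start probabilities
    end = np.sum(end_probs_per_model, axis=0)      # summed end probabilities
    best, best_score = (0, 0), -np.inf
    for i in range(len(start)):
        for j in range(i, min(i + max_len, len(end))):
            score = start[i] + end[j]  # combine summed scores over positions
            if score > best_score:
                best, best_score = (i, j), score
    return best  # (start index, end index) of the ensemble answer span
```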
S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension
1706.04815
Table 3: The performance on the MS-MARCO development set in terms of ROUGE-L. *Using the ensemble result of extraction models as the input of the synthesis model. +Wang & Jiang (2016b) report their Prediction with 37.3.
['[BOLD] Method', '[BOLD] Extraction', '[BOLD] Extraction [BOLD] +Synthesis']
[['FastQAExt', '33.7', '-'], ['BiDAF', '34.89', '38.73'], ['Prediction', '37.54+', '41.55'], ['S-Net (w/o Passage Ranking)', '39.62', '43.26'], ['S-Net', '42.23', '45.95'], ['[BOLD] S-Net*', '[BOLD] 44.11', '[BOLD] 47.76']]
Since answers on the test set are not published, we analyze our model on the development set. For the evidence extraction part, our proposed multi-task learning framework achieves 42.23 and 44.11 in terms of ROUGE-L for the single and ensemble models, respectively. For the answer synthesis, the single and ensemble models improve by 3.72% and 3.65%, respectively, in terms of ROUGE-L. We observe a consistent improvement when applying our answer synthesis model to other answer span prediction models, such as BiDAF and Prediction.
S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension
1706.04815
Table 4: Results of passage ranking. -w/o Passage Ranking: the model that only has the evidence extraction part, without the passage ranking part. -Passage Ranking then Extraction: the model that selects the passage first and then applies the extraction model only on the selected passage.
['[BOLD] Method', '[BOLD] P@1', '[BOLD] ROUGE-L']
[['Extraction w/o Passage Ranking', '34.6', '56.7'], ['Passage Ranking then Extraction', '28.3', '52.9'], ['S-Net (Extraction)', '[BOLD] 38.9', '[BOLD] 59.4']]
We analyze the result of incorporating passage ranking as an additional task. For passage selection, our multi-task model achieves an accuracy of 38.9, which outperforms the pure answer prediction model by 4.3. Moreover, jointly learning the answer prediction and passage ranking parts is better than solving this task in two separate steps because the answer span provides more information and stronger supervision, which benefits the passage ranking part. The ROUGE-L is calculated on the best answer span in the selected passage, which shows that our multi-task learning framework has more potential for producing a better answer.
S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension
1706.04815
Table 5: The performance on questions at different levels of necessity of synthesis, in terms of ROUGE-L on the MS-MARCO development set.
['[BOLD] Category', '[BOLD] Extraction', '[BOLD] Extraction [BOLD] +Synthesis']
[['max = 1.0 (63.95%)', '50.74', '49.59'], ['0.8≤max<1.0 (20.06%)', '40.95', '41.16'], ['0.6≤max<0.8 (5.78%)', '31.21', '33.21'], ['0.4≤max<0.6 (1.54%)', '21.97', '22.44'], ['0.2≤max<0.4 (0.29%)', '13.47', '13.49'], ['max<0.2 (8.38%)', '0.01', '49.18']]
For questions whose answers can be matched exactly in the passage, our answer synthesis model performs slightly worse because the sequence-to-sequence model deviates somewhat when copying the extracted evidence. In the other categories, our synthesis model achieves some improvement. For questions whose answers can almost be found in the passage (ROUGE-L≥0.8), our model achieves a 0.2 improvement even though the room for improvement is limited. For questions whose upper bound on extraction performance is between 0.6 and 0.8, our model achieves a large improvement of 2.0. Some questions in the last category (ROUGE-L<0.2) are polar questions whose answers are “yes” or “no”. Although the answer is not in the passage or the question, our synthesis model can easily solve this problem and determine the correct answer from the extracted evidence, which leads to the large improvement in this category. However, for these questions, the answers are too short to influence the final BLEU score because BLEU is normalized over all questions. Moreover, the score decreases due to the brevity penalty. Due to this limitation of BLEU, we only report results in terms of ROUGE-L in our analysis.
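For reference, a small illustrative helper for reproducing the buckets in Table 5, assuming the "max" upper bound (the best ROUGE-L achievable by pure extraction) has already been computed per question; the bucket edges follow the table.

```python
def bucket_by_max_rouge(max_rouge_l):
    """Assign a question to an analysis bucket from its extraction upper bound."""
    if max_rouge_l >= 1.0:
        return "max = 1.0"
    for low, high in [(0.8, 1.0), (0.6, 0.8), (0.4, 0.6), (0.2, 0.4)]:
        if low <= max_rouge_l < high:
            return f"{low} <= max < {high}"
    return "max < 0.2"
```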
Copy this sentence.
1905.09856
Table 1: Best results for each model. BLEU score, seconds per epoch and number of epochs it took to converge to the best BLEU score.
['Model', 'BLEU', 'Sec/epoch', 'Epochs to converge', 'Num. of parameters']
[['LSTM', '0.03', '50', '>500', '30,019,166'], ['GRU', '0.74', '93', '16', '57,397,854'], ['CNN', '0.8302', '[BOLD] 30', '50', '49,320,286'], ['Transformer', '[BOLD] 0.8392', '66', '[BOLD] 6', '79,127,134']]
Note that while the CNN achieves a BLEU score similar to the Transformer's, it converges much more slowly, taking over 50 epochs. The Transformer is the clear winner, while not taking much longer to run than the other models.
Neural Cross-Lingual Named Entity Recognition with Minimal Resources
1808.09861
Table 2: Comparison of different ways of using bilingual word embeddings, within our method (NER F1).
['Model', 'Spanish', 'Dutch', 'German']
[['Common space', '65.40±1.22', '66.15±1.62', '43.73±0.94'], ['Replace', '68.21±1.22', '69.37±1.33', '48.59±1.21'], ['Translation', '[BOLD] 69.21±0.95', '[BOLD] 69.39±1.21', '[BOLD] 53.94±0.66']]
The “common space” variant performs the worst by a large margin, confirming our hypothesis that discrepancy between the two embedding spaces harms the model’s ability to generalize. From the comparison between the “replace” and “translation” variants, we observe that having access to the target language’s character sequences helps performance, especially for German, perhaps due in part to its capitalization patterns, which differ from those of English. In this case, we have to lower-case all words in the character inputs in order to prevent the model from overfitting the English capitalization pattern.
Neural Cross-Lingual Named Entity Recognition with Minimal Resources
1808.09861
Table 1: NER F1 scores. ∗Approaches that use more resources than ours (“Wikipedia” means Wikipedia is used not as a monolingual corpus, but to provide external knowledge). †Approaches that use multiple languages for transfer. “Only Eng. data” is the model used in Mayhew et al. (2017) trained on their data translated from English without using Wikipedia and other languages. The “data from Mayhew et al. (2017)” is the same data translated from only English they used. “Id.c.” indicates using identical character strings between the two languages as the seed dictionary. “Adv.” indicates using adversarial training and mutual nearest neighbors to induce a seed dictionary. Our supervised results are obtained using models trained on annotated corpus from CoNLL.
['[EMPTY]', 'Model', 'Spanish', 'Dutch', 'German', 'Extra Resources']
[['∗', 'Täckström et al. ( 2012 )', '59.30', '58.40', '40.40', 'parallel corpus'], ['∗', 'Nothman et al. ( 2013 )', '61.0', '64.00', '55.80', 'Wikipedia'], ['∗', 'Tsai et al. ( 2016 )', '60.55', '61.60', '48.10', 'Wikipedia'], ['∗', 'Ni et al. ( 2017 )', '65.10', '65.40', '58.50', 'Wikipedia, parallel corpus, 5K dict.'], ['∗†', 'Mayhew et al. ( 2017 )', '65.95', '66.50', '[BOLD] 59.11', 'Wikipedia, 1M dict.'], ['∗', 'Mayhew et al. ( 2017 ) (only Eng. data)', '51.82', '53.94', '50.96', '1M dict.'], ['[ITALIC] Our methods:', '[ITALIC] Our methods:', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['[EMPTY]', 'BWET (id.c.)', '71.14±0.60', '70.24±1.18', '57.03±0.25', '–'], ['[EMPTY]', 'BWET (id.c.) + self-att.', '[BOLD] 72.37±0.65', '70.40±1.16', '[BOLD] 57.76±0.12', '–'], ['[EMPTY]', 'BWET (adv.)', '70.54±0.85', '70.13±1.04', '55.71±0.47', '–'], ['[EMPTY]', 'BWET (adv.) + self-att.', '71.03±0.44', '[BOLD] 71.25±0.79', '56.90±0.76', '–'], ['[EMPTY]', 'BWET', '71.33±1.26', '69.39±0.53', '56.95±1.20', '10K dict.'], ['[EMPTY]', 'BWET + self-att.', '71.67±0.86', '70.90±1.09', '57.43±0.95', '10K dict.'], ['∗', 'BWET on data from Mayhew et al. ( 2017 )', '66.53±1.12', '69.24±0.66', '55.39±0.98', '1M dict.'], ['∗', 'BWET + self-att. on data from Mayhew et al. ( 2017 )', '66.90±0.65', '69.31±0.49', '55.98±0.65', '1M dict.'], ['∗', 'Our supervised results', '86.26±0.40', '86.40±0.17', '78.16±0.45', 'annotated corpus']]
Here “BWET” (bilingual word embedding translation) denotes using the hierarchical neural CRF model trained on data translated from English. As can be seen from the table, our methods outperform previous state-of-the-art results on Spanish and Dutch by a large margin and perform competitively on German even without using any parallel resources. We achieve similar results using different seed dictionaries, and produce the best results when adding the self-attention mechanism to our model.
Narrative Variations in a Virtual Storyteller
1708.08585
Figure 9: Means (M) and standard deviation (SD) for engagement and interest for original sentences and all variations in Perceptions of Voice and POV Experiment
['Engagement', 'Orig', '1st-out', '1st-neutr', '1st-shy', 'sch', '3rd-neutr']
[['M', '3.98', '3.27', '3.00', '2.73', '1.95', '1.93'], ['SD', '1.07', '1.39', '1.19', '1.25', '1.07', '1.06'], ['Interest', 'Orig', '1st-out', '1st-neutr', '1st-shy', 'sch', '3rd-neutr'], ['Mean', '3.91', '3.02', '3.02', '2.81', '1.90', '1.87'], ['SD', '0.99', '1.21', '1.37', '1.27', '1.05', '1.01']]
As Figure 9 shows, we find a clear ranking for engagement: the original sentence is scored highest, followed by first outgoing, first neutral, first shy, sch, and third neutral.
What does a Car-ssette tape tell?
1905.13448
Table 4: Results of the baseline model trained on 3 datasets.
['[EMPTY]', 'BLEU4 Model', 'BLEU4 Human', 'BERT Model', 'BERT Human']
[['Hospital', '0.127', '0.127', '0.937', '0.942'], ['Car', '0.220', '0.266', '0.919', '0.935'], ['Joint', '0.157', '0.185', '0.925', '0.954']]
In order to verify our model’s generalization capability, we trained the baseline model on all three datasets. Firstly, our model generalizes to other datasets, in particular the cross-scene dataset: both the BLEU4 and the BERT similarity scores on the Joint dataset are relatively good, meaning that the baseline model can distinguish different scenes. Because every audio clip in the hospital scene only had 3 annotations, our model showed a preference towards the car scene: it mistakenly generated hospital-related captions for only 12 car inputs, but car-related captions for 308 hospital inputs. Secondly, the current model’s ability to generate more unique sentences was verified. Thirdly, both BLEU4 and the BERT similarity score showed that although scene-specific and non-repetitive sentences can now be generated, a large discrepancy between human and machine outputs still exists. More research is needed to investigate the specific drawbacks and eventually make such captioning tasks feasible.
What does a Car-ssette tape tell?
1905.13448
Table 2: Token Distribution in DataSet
['Rank', 'Token', 'Train %', 'Dev %']
[['1', 'is/are 在', '6.01', '6.01'], ['2', 'driving 行驶', '5.37', '5.55'], ['3', 'automobile 汽车', '5.01', '5.11'], ['4', '‘s 的', '4.01', '4.58'], ['5', 'driver 司机', '3.35', '3.45'], ['mean # of tokens', 'mean # of tokens', '14.21', '14.03']]
The car dataset is split into a training set and a development set, which encompass 3241 and 361 audio clips, respectively. High sentence diversity is observed in both sets: only 6.7% of the transcriptions in the training set and 1.9% in the development set are repeated. From the distribution of the top 5 tokens in Table 2, it can be seen that the training and development splits exhibit similar token distributions.
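A minimal sketch of how the token statistics in Table 2 can be computed from tokenised transcriptions; this is illustrative and not the dataset's release tooling.

```python
from collections import Counter


def token_stats(tokenised_captions, top_k=5):
    """Return the top-k tokens with their percentage of all tokens, and the mean caption length."""
    counts = Counter(tok for caption in tokenised_captions for tok in caption)
    total = sum(counts.values())
    top = [(tok, 100.0 * c / total) for tok, c in counts.most_common(top_k)]
    mean_len = total / len(tokenised_captions)  # mean number of tokens per caption
    return top, mean_len
```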
Siamese CBOW: Optimizing Word Embeddingsfor Sentence Representations
1606.04640
Table 3: Time spent per method on all 20 SemEval datasets, 17,608 sentence pairs, and the average time spent on a single sentence pair (time in seconds unless indicated otherwise).
['[EMPTY]', '20 sets', '1 pair']
[['Siamese CBOW (300d)', '00,007.7', '0.0004'], ['word2vec (300d)', '00,007.0', '0.0004'], ['skip-thought (1200d)', '98,804.0', '5.6']]
This considerable difference in the number of arithmetic operations is also observed in practice. We run tests on a single CPU, using identical code for extracting sentences from the evaluation sets for every method. The sentence pairs are presented to the models one by one. We disregard the time it takes to load the models. Speedups could of course be gained for all methods by presenting the sentences to the models in batches, by computing sentence representations in parallel, and by running the code on a GPU. However, as we are interested in the differences between the systems, we run the simplest and most straightforward scenario. The difference between word2vec and Siamese CBOW is due to different implementations of the word lookup.
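A hedged sketch of the timing protocol described above: sentence pairs are fed one at a time on a single CPU and model-loading time is excluded; encode_fn is a placeholder for any of the three models.

```python
import time


def time_per_pair(encode_fn, sentence_pairs):
    """Return total seconds and seconds per pair for one model."""
    start = time.perf_counter()
    for s1, s2 in sentence_pairs:
        encode_fn(s1)  # sentence representations computed one at a time
        encode_fn(s2)
    elapsed = time.perf_counter() - start
    return elapsed, elapsed / len(sentence_pairs)
```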
A Manually Annotated Chinese Corpus forNon-task-oriented Dialogue Systems
1805.05542
Table 4: Comparison results (%). Higher scores indicate better results. Cut@N: responses with rating ≥N are considered positive, and negative otherwise. A larger cut indicates a stricter standard. The best result in each column is marked in bold.
['[BOLD] Category', '[BOLD] Models', '[BOLD] Cut@3 P@1', '[BOLD] Cut@3 MAP', '[BOLD] Cut@3 MRR', '[BOLD] Cut@4 P@1', '[BOLD] Cut@4 MAP', '[BOLD] Cut@4 MRR', '[BOLD] Cut@5 P@1', '[BOLD] Cut@5 MAP', '[BOLD] Cut@5 MRR']
[['[BOLD] Unsupervised', 'Cosine sim', '84.8', '91.1', '91.8', '67.5', '81.3', '82.0', '40.0', '64.9', '65.1'], ['[BOLD] Unsupervised', 'BM25', '86.1', '[BOLD] 91.4', '[BOLD] 92.4', '70.6', '82.7', '83.7', '53.8', '72.3', '73.4'], ['[BOLD] Supervised', 'SVMRank', '[BOLD] 86.2', '91.1', '92.2', '73.0', '[BOLD] 84.0', '85.0', '64.3', '78.6', '79.8'], ['[BOLD] Supervised', 'GBDT', '85.9', '91.0', '92.0', '71.4', '82.9', '83.8', '55.7', '74.4', '74.5'], ['[BOLD] Supervised', 'BiLSTM', '85.4', '90.7', '91.8', '[BOLD] 73.8', '[BOLD] 84.0', '[BOLD] 85.3', '68.1', '81.0', '82.2'], ['[BOLD] Supervised', 'CNN', '85.5', '90.6', '91.8', '72.5', '83.4', '84.6', '[BOLD] 70.5', '[BOLD] 81.2', '[BOLD] 83.0']]
We follow the question answering paradigm and separate responses into “positive” and “negative” when evaluating the ranked responses for a prompt. In doing so, we set rating thresholds at 3, 4 and 5, where responses with a gold-standard rating ≥N are considered positive instances and the rest negative instances. Therefore, a larger N indicates a stricter standard. In particular, we remove all responses of a prompt from the evaluation if none of them is considered positive under a specific cut, because for such a prompt all models would score 0 no matter how its responses are ranked.
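A minimal sketch of the Cut@N protocol, assuming each prompt's responses are already ordered by model score and paired with their gold ratings; P@1 and MRR are shown, and MAP follows the same pattern. Function and variable names are illustrative.

```python
def evaluate_cut_at_n(ranked_ratings_per_prompt, n):
    """ranked_ratings_per_prompt: per prompt, gold ratings ordered by model score."""
    p_at_1, mrr, kept = 0.0, 0.0, 0
    for ratings in ranked_ratings_per_prompt:
        labels = [r >= n for r in ratings]
        if not any(labels):          # drop prompts with no positive response under this cut
            continue
        kept += 1
        p_at_1 += float(labels[0])   # is the top-ranked response positive?
        rank = labels.index(True) + 1
        mrr += 1.0 / rank            # reciprocal rank of the first positive response
    return p_at_1 / kept, mrr / kept
```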
Multi-stage Pretraining for Abstractive Summarization
1909.10599
Table 4: Label coverage of our BERT-based content selection model. We also show the performance of a content selection oracle which always does perfect content selection. True positives here are groundtruth word pieces that are selected by the content selector. The oracle achieves perfect precision because the labels are used to select inputs in the first place. The oracle does not achieve perfect recall because not all input word pieces are present in the groundtruth.
['[EMPTY]', '[EMPTY]', 'Precision', 'Recall', 'F1']
[['CNN/DM dev', 'Oracle', '100.00', '80.77', '89.01'], ['CNN/DM dev', 'Model', '56.11', '46.47', '49.95'], ['CNN/DM test', 'Oracle', '100.00', '80.49', '88.85'], ['CNN/DM test', 'Model', '55.11', '46.58', '49.58']]
Label coverage is an important metric for understanding the performance of content selectors in the context of copy mechanisms. If the content selector has a false negative on a label word, then that word cannot be copied, which hurts performance. Similarly, if the content selector produces false positives, the usefulness of the content selection degrades. Our content selection model appears sub-par compared to the oracle, with slightly less than half of the label word pieces present in the content selector’s outputs, and slightly more than half of the content selector’s outputs present in the groundtruth labels.
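A minimal sketch of the label-coverage computation, treating the selector's output and the groundtruth labels as multisets of word pieces; the exact matching procedure in the paper may differ.

```python
from collections import Counter


def coverage_prf(selected_pieces, label_pieces):
    """Precision/recall/F1 of selected word pieces against groundtruth label word pieces."""
    overlap = sum((Counter(selected_pieces) & Counter(label_pieces)).values())
    precision = overlap / max(len(selected_pieces), 1)
    recall = overlap / max(len(label_pieces), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f1
```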
General Evaluation for Instruction Conditioned Navigation using Dynamic Time Warping
1907.05446
Table 2: Evaluation metrics as percentages on R2R and R4R Validation Unseen sets for agents with different reward functions. In all metrics, higher means better.
['Agent', '[BOLD] R2R SR', '[BOLD] R2R SPL', '[BOLD] R2R SED', '[BOLD] R2R CLS', '[BOLD] R2R nDTW', '[BOLD] R2R SDTW', '[BOLD] R4R SR', '[BOLD] R4R SPL', '[BOLD] R4R SED', '[BOLD] R4R CLS', '[BOLD] R4R nDTW', '[BOLD] R4R SDTW']
[['random', '5.1', '3.3', '5.8', '29.0', '27.9', '3.6', '13.7', '2.2', '16.5', '22.3', '18.5', '4.1'], ['goal-oriented', '43.7', '38.4', '31.9', '53.5', '54.4', '36.1', '[BOLD] 28.7', '15.0', '[BOLD] 9.6', '33.4', '26.9', '11.4'], ['fidelity-oriented', '[BOLD] 44.4', '[BOLD] 41.4', '[BOLD] 33.9', '[BOLD] 57.5', '[BOLD] 58.3', '[BOLD] 38.3', '28.5', '[BOLD] 21.4', '9.4', '[BOLD] 35.4', '[BOLD] 30.4', '[BOLD] 12.6']]
Compared to a goal-oriented reward strategy, using nDTW as the reward signal results not only in better performance on the nDTW and SDTW metrics but also in better performance on prior metrics such as CLS and SPL. On R4R, nDTW differentiates the goal- and fidelity-oriented agents better than CLS does. SED scores random paths more highly than those of trained agents, and neither SR nor SED differentiates between goal and fidelity orientation. SPL appears to do so (15.0 vs 21.4), but this is only because the fidelity-oriented agent produces paths whose lengths are more similar to the reference paths, not because of its fidelity to them. As such, SDTW provides the clearest signal for indicating both success and fidelity.
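For concreteness, a hedged sketch of nDTW as used both as a metric and as a reward signal: a standard dynamic-time-warping cost between the predicted and reference paths, normalised by the reference length and a distance threshold d_th, and mapped through an exponential so that 1.0 means perfect fidelity. The distance function and threshold here are placeholders, not the benchmark's exact configuration.

```python
import numpy as np


def dtw(pred, ref, dist):
    """Classic O(len(pred) * len(ref)) dynamic time warping cost."""
    n, m = len(pred), len(ref)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(pred[i - 1], ref[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]


def ndtw(pred, ref, dist, d_th):
    # Normalised by reference length and threshold so that 1.0 means a perfect match.
    return np.exp(-dtw(pred, ref, dist) / (len(ref) * d_th))
```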
General Evaluation for Instruction Conditioned Navigation using Dynamic Time Warping
1907.05446
Table 1: Binomial tests on how different metrics compare in correlation with human judgments. The sign test uses n=sum of positives and negatives; k=number of positives; p=0.5.
['[EMPTY]', '[BOLD] UC ( [ITALIC] nDTW vs) PL', '[BOLD] UC ( [ITALIC] nDTW vs) NE', '[BOLD] UC ( [ITALIC] nDTW vs) ONE', '[BOLD] UC ( [ITALIC] nDTW vs) CLS', '[BOLD] UC ( [ITALIC] nDTW vs) AD', '[BOLD] UC ( [ITALIC] nDTW vs) MD', '[BOLD] SC ( [ITALIC] SDTW vs) SR', '[BOLD] SC ( [ITALIC] SDTW vs) OSR', '[BOLD] SC ( [ITALIC] SDTW vs) SPL', '[BOLD] SC ( [ITALIC] SDTW vs) SED']
[['+/-', '242/17', '254/9', '255/9', '162/46', '254/12', '253/12', '219/16', '220/14', '219/17', '213/26'], ['sign test', '4.1e-52', '2.0e-63', '1.0e-63', '2.4e-16', '6.9e-60', '6.9e-60', '9.6e-47', '8.8e-49', '6.7e-46', '1.1e-37']]
To analyze the collected annotations, we first assign nDTW in the UC study (similarly, SDTW in the SC study) a positive or negative sign depending on whether it has a higher or lower correlation than the competing metric for a given human ranking of query paths with respect to a reference path, and then compare across all reference paths (discarding ties) using a sign test. Both nDTW and SDTW correlate substantially better with human orderings than the competing metrics in their respective categories.
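An illustrative implementation of the sign test from Table 1, using only the standard library; the one-sided tail shown here is an assumption about the exact variant used.

```python
from math import comb


def sign_test_p(k, n, p=0.5):
    """One-sided tail: probability of observing at least k positives out of n by chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))


# e.g. sign_test_p(242, 242 + 17)  # the nDTW-vs-PL column above
```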
Quick and (not so) Dirty: Unsupervised Selection of Justification Sentences for Multi-hop Question Answering
1911.07176
Table 5: Ablation study, removing different components of ROCC. The scores are reported on the ARC test set and the MultiRC dev set. R⋆ denotes the best approach that relies just on the R score. The hyperparameter k in R⋆ was tuned on the development partition of the respective dataset.
['#', 'Ablations', 'ARC', 'MultiRC EM0', 'MultiRC Justification']
[['[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', 'F1'], ['0', 'Full AutoROCC', '[BOLD] 56.09', '[BOLD] 25.29', '[BOLD] 56.44'], ['1', '– IDF', '54.11', '24.65', '54.19'], ['2', '– [ITALIC] C( [ITALIC] A)', '54.90', '21.82', '52.93'], ['3', '– [ITALIC] C( [ITALIC] Q)', '54.66', '23.61', '52.09'], ['4', '– O', '55.88', '24.03', '55.97'], ['5', 'R⋆', '53.90', '23.40', '44.81']]
Row 0 reports the score of the full AutoROCC model. In all cases, we found small drops in both answer performance and justification scores across both datasets, with the removal of either C(A) or C(Q) having the largest impact.
Revisiting Unsupervised Relation Extraction
2005.00087
Table 2: Study of EType+ in combination with different features. The results are average across three runs on the development set.
['[BOLD] Model', '[BOLD] Model', '[BOLD] B3', '[BOLD] V', '[BOLD] ARI']
[['EType+', '[EMPTY]', '42.5', '40.1', '29.2'], ['[EMPTY]', '+Entity', '40.5', '39.9', '28.6'], ['[EMPTY]', '+BOW', '37.7', '38.0', '20.5'], ['[EMPTY]', '+DepPath', '41.4', '39.4', '26.7'], ['[EMPTY]', '+POS', '41.6', '40.4', '27.8'], ['[EMPTY]', '+Trigger', '41.7', '41.3', '29.0'], ['[EMPTY]', '+PCNN', '40.8', '39.6', '27.1']]
How does the model perform when combining entity types with other features? Our experiments using only entity types surprisingly outperform the previous state-of-the-art methods, including feature-engineering and deep learning models. However, we know that context information is crucial for distinguishing the relation between two entities, and many RE studies have proposed integrating context information to improve RE performance. The list of features includes: (i) Entity: the textual surface forms of the two entities, (ii) BOW: the bag of words between the two entities, (iii) DepPath: the words on the dependency path between the two entities, (iv) POS: the part-of-speech tag sequence between the two entities, and (v) Trigger: DepPath without stop words. In general, naively combining entity types with other features could not improve the model performance. Additionally, the BOW feature had a negative effect on RE performance. This indicates that the bag of words between two entities often includes uninformative and redundant words, i.e., noise, that is difficult to eliminate using simple neural architectures. While (i)-(v) are widely used hand-crafted features for RE, we also incorporated a neural context encoder, PCNN, which combines Simon’s PCNN encoder with the entity masking and position-aware attention proposed in Zhang et al. However, the performance when combining PCNN is also lower than with entity types only.
Revisiting Unsupervised Relation Extraction
2005.00087
Table 1: Average results (%) across three runs of different models (except the EType) on NYT-FB and TACRED. c indicates the number of clusters in each method. ⋄ indicates our implementation of the corresponding model. We note that all methods were trained on NYT-FB and evaluated on the test set of both NYT-FB and TACRED.
['[BOLD] Model', '[BOLD] Model', '[BOLD] B3 NYT-FB', '[BOLD] V NYT-FB', '[BOLD] ARI NYT-FB']
[['RelLDA', '[ITALIC] c=10', '29.1', '30.0', '13.3'], ['RelLDA1', '[ITALIC] c=10', '36.9', '34.7', '24.2'], ['March ( [ITALIC] Ls+ [ITALIC] Ld)', '[ITALIC] c=10', '37.5', '38.7', '27.6'], ['Simon', '[ITALIC] c=10', '39.4', '38.3', '[BOLD] 33.8'], ['EType+', '[ITALIC] c=10', '[BOLD] 41.9', '40.6', '30.7'], ['March⋄ ( [ITALIC] Ls+ [ITALIC] Ld)', '[EMPTY]', '36.9', '37.4', '28.1'], ['EType', '[ITALIC] c=16', '41.7', '[BOLD] 42.1', '30.7'], ['EType+', '[ITALIC] c=16', '41.5', '41.3', '30.5'], ['RelLDA1', '[ITALIC] c=100', '29.6', '-', '-'], ['March', '[ITALIC] c=100', '35.8', '-', '-'], ['TACRED', 'TACRED', 'TACRED', 'TACRED', 'TACRED'], ['March⋄ ( [ITALIC] Ls+ [ITALIC] Ld)', '[ITALIC] c=10', '31.0', '43.8', '22.6'], ['Simon⋄', '[ITALIC] c=10', '15.7', '17.1', '6.1'], ['EType+', '[ITALIC] c=10', '43.3', '59.7', '25.7'], ['March⋄ ( [ITALIC] Ls+ [ITALIC] Ld)', '[ITALIC] c=16', '34.6', '47.6', '23.2'], ['EType', '[ITALIC] c=16', '[BOLD] 48.3', '[BOLD] 64.4', '[BOLD] 29.1'], ['EType+', '[ITALIC] c=16', '46.1', '62.0', '27.4'], ['March⋄', '[ITALIC] c=100', '33.13', '43.63', '20.21']]
Our contributions are as follows: (i) We perform experiments on both an automatically labelled and a manually labelled dataset, namely NYT-FB and TACRED, respectively. We show that two methods using only entity types can outperform the state-of-the-art models, including both feature-engineering and deep learning approaches. These surprising results raise questions about the current state of unsupervised relation extraction. (ii) For model design, we show that the link predictor provides a good signal for training a URE model (Fig. 1).
Improving Tweet Representations using Temporal and User Context
1612.06062
Table 1: User profile attribute classification - F1 Score
['[BOLD] Algorithm', '[BOLD] Spouse', '[BOLD] Education', '[BOLD] Job']
[['Paragraph2Vec\xa0', '0.3435', '0.9259', '0.5465'], ['Simple Distance model (SD)', '0.3704', '0.9068', '0.5872'], ['HDV\xa0', '0.4526', '0.8901', '0.521'], ['Ours (User = 0)', '[BOLD] 0.5416', '0.9098', '0.5935'], ['Ours (User = 1)', '0.4082', '[BOLD] 0.9274', '[BOLD] 0.6067']]
HDV’s assumption of giving equal attention to the temporal context also results in lower accuracy compared with our models. The SD model outperforms HDV on two tasks, which substantiates our claim against HDV’s naïve assumption for social media. Our model with the user vector outperforms the baselines for Education and Job attribute classification, showing the need to consider user characteristics when modelling a user’s tweets. The poor results on the Spouse task suggest that this dataset has too many topic shifts and that the user vector turned out to be less accurate. We observe that in some cases HDV outperforms the SD model, mainly due to the inability of the SD model to utilize context information from farther tweets that are relevant to the target tweet. Our models are 19.66%, 2.27% and 2.22% better than the baselines for the spouse, education and job attributes, respectively.
The Lifted Matrix-Space Model for Semantic Composition
1711.03602
Table 6: Syntactic category classification accuracies (%) on SNLI development set, classified using the tags introduced in Bowman et al. (2015).
['[BOLD] Model', '[BOLD] 3-way [BOLD] Train', '[BOLD] 3-way [BOLD] Test', '[BOLD] 19-way [BOLD] Train', '[BOLD] 19-way [BOLD] Test']
[['300D BOW', '86.4', '85.6', '82.7', '82.1'], ['700D TreeLSTM', '93.2', '91.2', '90.0', '86.6'], ['576D LMS-LSTM', '97.3', '[BOLD] 96.3', '94.0', '[BOLD] 92.1']]
As a baseline, we train a bag-of-words (BOW) model which produces the hidden state of a given phrase by summing the GloVe embeddings of the words of the phrase. We train and test on the hidden states produced by BOW as well. The hidden state representations produced by LMS-LSTM yield the best results on both 3-way and 19-way classification tasks. Comparing LMS-LSTM and TreeLSTM representations, we see a 5.1% gain on the 3-way classification and a 5.5% gain on the 19-way classification.
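A minimal sketch of the 300D BOW baseline: the phrase representation is simply the sum of the GloVe vectors of its words. The glove lookup is assumed to be a plain dict from word to a 300-dimensional numpy array.

```python
import numpy as np


def bow_representation(phrase_tokens, glove, dim=300):
    """Sum the GloVe embeddings of the phrase's words; unknown words contribute zero."""
    vec = np.zeros(dim)
    for tok in phrase_tokens:
        vec += glove.get(tok, np.zeros(dim))
    return vec
```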
ShanghaiTech at MRP 2019: Sequence-to-Graph Transduction with Second-Order Edge Inference for Cross-Framework Meaning Representation Parsing
2004.03849
Table 7: Comparing Labeled F1 scores of models with different types of embedding combinations on the development set of the gold DM dataset. Baseline represents the parser of wang-etal-2019-second. Base represents the pre-trained BERT-Base uncased model and Large represents the pre-trained BERT-Large uncased model. fixed and tuned represents whether to fine-tune the BERT model. BERT in the last block represents the last embedding combination (Large-fixed + Glove + Lemma + Char) in the first block. First represents first subtoken pooling, Avg represents average pooling over subtokens. dep-tree represents adding dependency information into embeddings. For each case, we report the highest Labeled F1 score on the development set in our experiments.
['[EMPTY]', 'LF1']
[['Baseline', '93.41'], ['Base-fixed', '94.17'], ['Base-tuned', '94.22'], ['Base-fixed + Glove', '94.45'], ['Base-tuned + Glove', '94.48'], ['Large-fixed + Glove', '94.62'], ['Large-tuned + Glove', '94.64'], ['Large-fixed + Glove + Lemma', '95.10'], ['Large-fixed + Glove + Lemma + Char', '95.22'], ['ELMo + Large-fixed + Glove + Lemma', '94.78'], ['ELMo + Glove + Lemma + Char', '95.06'], ['BERT-First', '95.22'], ['BERT-Avg', '95.28'], ['BERT-Avg + dep-tree', '95.30']]
We use BERT (devlin-etal-2019-bert) embeddings in our model. We compared the performance on DM in the original SDP dataset with different subtoken pooling methods, and we also explored whether combining other embeddings, such as the pre-trained word embedding Glove (pennington2014glove) and the contextual embedding ELMo (peters-etal-2018-deep), would further improve the performance. We found that Glove, lemma and character embeddings are helpful for DM, and that fine-tuning on the training set slightly improves the performance. ELMo embeddings are also helpful but cannot outperform BERT embeddings. However, the performance dropped when ELMo and BERT embeddings were combined. We speculate that the drop is caused by a conflict between the two types of contextual information. For subtoken pooling, we compared using the first subtoken and the average over subtokens as the token embedding, and found that average pooling is slightly better than first pooling. For syntactic information, we encode each head word and dependency label as embeddings and concatenate them with the other embeddings. The result shows that syntactic information as embeddings is not very helpful for this task. We will try other methods of utilizing syntactic information in future work.
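A small sketch of the two subtoken pooling strategies compared above (BERT-First vs BERT-Avg), given the list of word-piece vectors that a single token was split into; this is illustrative, not the parser's code.

```python
import numpy as np


def pool_subtokens(subtoken_vectors, mode="avg"):
    """Reduce a token's word-piece vectors to one vector per token."""
    if mode == "first":
        return subtoken_vectors[0]            # first word piece only (BERT-First)
    return np.mean(subtoken_vectors, axis=0)  # average over word pieces (BERT-Avg)
```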
ShanghaiTech at MRP 2019: Sequence-to-Graph Transduction with Second-Order Edge Inference for Cross-Framework Meaning Representation Parsing
2004.03849
Table 1: Comparison of cross-framework F1 scores achieved by our system and best scores of other teams for each metric. all represents the F1 score over the full test set for each framework. lpps represents a 100-sentence sample from the little prince containing graphs over all the frameworks.
['[EMPTY]', 'DM', 'PSD', 'EDS', 'UCCA', 'AMR']
[['Ours-all', '94.88', '89.49', '86.90', '-', '63.59'], ['Best-all', '[BOLD] 95.50', '[BOLD] 91.28', '[BOLD] 94.47', '[BOLD] 81.67', '[BOLD] 73.38'], ['Ours-lpps', '94.28', '85.22', '87.49', '-', '66.82'], ['Best-lpps', '[BOLD] 94.96', '[BOLD] 88.46', '[BOLD] 92.82', '[BOLD] 82.61', '[BOLD] 73.11']]
Due to an unexpected bug in UCCA anchor prediction, we failed to submit our UCCA prediction. Our results are still competitive with those of the other teams, and we obtain 3rd place for the DM framework in the official metrics. Our system performs well on the DM framework, with an F1 score only 0.4 percent below the best score on DM. Note that our system does not learn to predict node labels for DM and PSD and simply uses lemmas from the companion data as node labels. We find that, compared to the gold lemmas from the original SDP dataset, the lemmas from the companion data have only 71.4% accuracy. We believe this is the main reason for the F1 gap between our system and the best one on DM and PSD. For the PSD, EDS and AMR graphs, our system ranks 6th, 5th and 7th among 13 teams.
ShanghaiTech at MRP 2019: Sequence-to-Graph Transduction with Second-Order Edge Inference for Cross-Framework Meaning Representation Parsing
2004.03849
Table 8: F1 score averaged over the labeled F1 score and the frame F1 score on the development sets of DM and PSD. basic represents our model with embeddings described in 3.1 except lemma and named entity embeddings.
['[EMPTY]', 'DM', 'PSD']
[['basic', '96.01', '90.80'], ['+lemma', '96.09', '90.79'], ['+ner', '96.07', '90.80'], ['+lemma & ner', '[BOLD] 96.16', '[BOLD] 90.88']]
We found that one of the differences concerns the lemma annotations of entities: for example, the lemmas of “Pierre Vinken” are “Pierre” and “Vinken” in the companion data, while they are the named-entity-like tags “Pierre” and “_generic_proper_ne” in the original SDP dataset. Based on this observation, we experimented with the influence of named entity tags on parsing performance. We used the Illinois Named Entity Tagger (RatinovRo09) with a white list to predict named entity tags and compared the performance on the development sets of DM and PSD. We tuned the hyperparameters for all the embedding conditions in the table and found that adding lemma or named entity embeddings alone results in a slight improvement on DM but does not help on PSD. With both lemma and named entity embeddings, there is a further improvement on both DM and PSD, which shows that named entity tags are helpful for semantic dependency parsing. As a result, we apply named entity information when parsing the other frameworks.
Cross-Lingual Cross-Platform Rumor Verification Pivoting on Multimedia Content
1808.04911
Table 4: Top six features correlated with fake news.
['[BOLD] Feature', '[BOLD] PCC']
[['unrelated variance', '0.306'], ['distance variance', '0.286'], ['agree variance', '0.280'], ['discuss mean', '-0.231'], ['unrelated mean', '0.210'], ['containsExclamationMark', '0.192']]
To further explore the quality of the cross-lingual cross-platform features, we calculated the Pearson correlation coefficient (PCC) between each feature and the tweet’s label (fake or real). We evaluated both the TFG features and the features used by the baseline models. A positive value indicates that the feature positively correlates with fake news. We can see that four out of the top six features are cross-lingual cross-platform features. The variance of the unrelated probability (unrelated variance) has the highest score, which further validates our design intuition that tweets might convey false information when their agreement differs across the other webpages that shared similar multimedia content. The second feature, “distance variance”, is also highly correlated with fake news. This result supports our hypothesis that if there is large information dissimilarity across different platforms, there is a high probability that fake information is involved. The only baseline feature (56 features in total) among the top six is whether a tweet contains an exclamation mark (containsExclamationMark).
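A minimal sketch of the feature-screening step described above: the Pearson correlation between one feature column and the binary fake/real label, computed with numpy; the column extraction and label encoding (1 = fake) are assumptions.

```python
import numpy as np


def feature_label_pcc(feature_values, labels):
    """Pearson correlation between one feature column and binary labels (1 = fake)."""
    x = np.asarray(feature_values, dtype=float)
    y = np.asarray(labels, dtype=float)
    return np.corrcoef(x, y)[0, 1]
```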
Leveraging Deep Graph-Based Text Representation for Sentiment Polarity Applications
1902.10247
Table 6: Comparison of different CNN models on the sampled data with graph embeddings
['Method', 'Negative class (%) precision', 'Negative class (%) recall', 'Negative class (%) F1', 'Positive class (%) precision', 'Positive class (%) recall', 'Positive class (%) F1', 'Overall (%) accuracy', 'Overall (%) F1']
[['[BOLD] Sampled data', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['CNN-rand', '51.79', '60.42', '55.77', '56.82', '48.08', '52.08', '54.00', '52.08'], ['CNN-static', '64.29', '52.94', '58.06', '58.62', '69.39', '63.55', '61.00', '63.55'], ['CNN-non-static', '54.55', '62.50', '58.25', '60.00', '51.92', '55.67', '57.00', '55.67'], ['CNN-multichannel', '52.17', '51.06', '51.61', '57.41', '58.49', '57.94', '55.00', '57.94']]
We evaluate the above-described models on data sampled from all available datasets (250 negative and 250 positive documents, split into train and test with an 80-20 ratio) to determine which model is the best choice to couple with graph embeddings. The results reveal that the CNN-static model is close to, and in some respects better than, CNN-non-static. Moreover, this indicates that the feature set extracted from the graphs is rich enough and does not require further optimization or fine-tuning.
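A hedged PyTorch sketch of the CNN-static vs CNN-non-static distinction discussed above: both start from pretrained vectors (here, the graph-embedding features stacked into an embedding table), and "static" simply freezes them during training; CNN-rand would instead initialise the table randomly. The function name and parameters are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn


def make_embedding(pretrained_weights: torch.Tensor, static: bool = True) -> nn.Embedding:
    # freeze=True reproduces the "static" setting; freeze=False is "non-static".
    return nn.Embedding.from_pretrained(pretrained_weights, freeze=static)
```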