Dataset columns:
paper: string (lengths 0–839)
paper_id: string (lengths 1–12)
table_caption: string (lengths 3–2.35k)
table_column_names: large_string (lengths 13–1.76k)
table_content_values: large_string (lengths 2–11.9k)
text: large_string (lengths 69–2.82k)
Asking Clarifying Questions in Open-Domain Information-Seeking Conversations
1907.06554
Table 2. Performance of question retrieval model. The superscript * denotes statistically significant differences compared to all the baselines (p<0.001).
['[BOLD] Method', 'MAP', 'Recall@10', 'Recall@20', 'Recall@30']
[['QL', '0.6714', '0.5917', '0.6946', '0.7076'], ['BM25', '0.6715', '0.5938', '0.6848', '0.7076'], ['RM3', '0.6858', '0.5970', '0.7091', '0.7244'], ['LambdaMART', '0.7218', '0.6220', '0.7234', '0.7336'], ['RankNet', '0.7304', '0.6233', '0.7314', '0.7500'], ['BERT-LeaQuR', '[BOLD] 0.8349*', '[BOLD] 0.6775*', '[BOLD] 0.8310*', '[BOLD] 0.8630*']]
Question retrieval. As we see, BERT-LeaQuR is able to outperform all baselines. It is worth noting that the model’s performance improves as the number of retrieved questions increases. This indicates that BERT-LeaQuR is able to capture the relevance of queries and questions even when they lack common terms. In fact, we see that all term-matching retrieval models, such as BM25, are significantly outperformed in terms of all evaluation metrics.
Asking Clarifying Questions in Open-Domain Information-Seeking Conversations
1907.06554
Table 3. Performance comparison with baselines. WorstQuestion and BestQuestion respectively determine the lower and upper bounds. The superscript * denotes statistically significant differences compared to all the baselines (p<0.001).
['[BOLD] Method', '[BOLD] Qulac-T Dataset MRR', '[BOLD] Qulac-T Dataset P@1', '[BOLD] Qulac-T Dataset nDCG@1', '[BOLD] Qulac-T Dataset nDCG@5', '[BOLD] Qulac-T Dataset nDCG@20', '[BOLD] Qulac-F Dataset', '[BOLD] Qulac-F Dataset MRR', '[BOLD] Qulac-F Dataset P@1', '[BOLD] Qulac-F Dataset nDCG@1', '[BOLD] Qulac-F Dataset nDCG@5', '[BOLD] Qulac-F Dataset nDCG@20']
[['OriginalQuery', '0.2715', '0.1842', '0.1381', '0.1451', '0.1470', '[EMPTY]', '0.2715', '0.1842', '0.1381', '0.1451', '0.1470'], ['[ITALIC] σ-QPP', '0.3570', '0.2548', '0.1960', '0.1938', '0.1812', '[EMPTY]', '0.3570', '0.2548', '0.1960', '0.1938', '0.1812'], ['LambdaMART', '0.3558', '0.2537', '0.1945', '0.1940', '0.1796', '[EMPTY]', '0.3501', '0.2478', '0.1911', '0.1896', '0.1773'], ['RankNet', '0.3573', '0.2562', '0.1979', '0.1943', '0.1804', '[EMPTY]', '0.3568', '0.2559', '0.1986', '0.1944', '0.1809'], ['NeuQS', '[BOLD] 0.3625*', '[BOLD] 0.2664*', '[BOLD] 0.2064*', '[BOLD] 0.2013*', '[BOLD] 0.1862*', '[EMPTY]', '[BOLD] 0.3641*', '[BOLD] 0.2682*', '[BOLD] 0.2110*', '[BOLD] 0.2018*', '[BOLD] 0.1867*'], ['WorstQuestion', '0.2479', '0.1451', '0.1075', '0.1402', '0.1483', '[EMPTY]', '0.2479', '0.1451', '0.1075', '0.1402', '0.1483'], ['BestQuestion', '0.4673', '0.3815', '0.3031', '0.2410', '0.2077', '[EMPTY]', '0.4673', '0.3815', '0.3031', '0.2410', '0.2077']]
Oracle question selection: performance. Here we study the performance of an oracle model, i.e., a model that knows the answers to the questions in advance. The goal is to show to what extent clarifying questions can improve the performance of a retrieval system, and the results show the high potential gain of asking good clarifying questions in a conversational system. In particular, we examine the relative improvement of the system after asking only one question and observe that BestQuestion achieves over 100% relative improvement in terms of different evaluation metrics (MRR: 0.2820→0.5677, P@1: 0.1933→0.4986, nDCG@1: 0.1460→0.3988, nDCG@5: 0.1503→0.2793, nDCG@20: 0.1520→0.2265). It is worth mentioning that we observe the highest relative improvements in terms of nDCG@1 (173%) and P@1 (158%), exhibiting a high potential impact on voice-only conversational systems. Question selection. We see that all models outperform OriginalQuery, confirming that asking clarifying questions is crucial in a conversation and leads to a high performance gain. For instance, compared to OriginalQuery, a model as simple as σ-QPP achieves a 31% relative improvement in terms of MRR. Also, NeuQS consistently outperforms all the baselines in terms of all evaluation metrics on both data splits, and all the improvements are statistically significant. Moreover, NeuQS achieves a remarkable improvement in terms of both P@1 and nDCG@1. These two evaluation metrics are particularly important for voice-only conversational systems, where the system must return only one result to the user. The obtained improvements highlight the necessity and effectiveness of asking clarifying questions in a conversational search system, where they are perceived as a natural means of interaction with users.
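The relative improvements quoted above follow directly from the before/after values in the parentheses; a minimal sketch using only those reported numbers (not the underlying runs):

```python
# Recompute the relative improvements quoted in the paragraph above
# (values taken verbatim from the parentheses, not from Table 3).
def relative_improvement(before: float, after: float) -> float:
    """Relative improvement of `after` over `before`, in percent."""
    return (after - before) / before * 100

for metric, before, after in [("MRR", 0.2820, 0.5677),
                              ("P@1", 0.1933, 0.4986),
                              ("nDCG@1", 0.1460, 0.3988)]:
    print(f"{metric}: {relative_improvement(before, after):.0f}%")
# MRR: 101%, P@1: 158%, nDCG@1: 173%
```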
Entity Commonsense Representationfor Neural Abstractive Summarization
1806.05504
Table 4: Human evaluations on the Gigaword dataset. Bold-faced values are the best while red-colored values are the worst among the values in the evaluation metric.
['Model', '1st', '2nd', '3rd', '4th', 'mean']
[['gold', '0.27', '0.34', '0.21', '0.18', '2.38'], ['base', '0.14', '0.15', '0.28', '0.43', '3.00'], ['base+E2Trnn', '0.12', '0.24', '0.39', '0.25', '2.77'], ['base+E2Tcnn', '[BOLD] 0.47', '0.27', '0.12', '0.14', '[BOLD] 1.93']]
Automatic evaluation on the Gigaword dataset shows that the CNN and RNN variants of base+E2T have similar performance. To break the tie between the two models, we also conduct a human evaluation on the Gigaword dataset. We instruct two annotators to read the input sentence and rank the competing summaries from first to last according to their relevance and fluency: (a) the original summary gold, and the outputs of models (b) base, (c) base+E2Tcnn, and (d) base+E2Trnn. We then compute (i) the proportion of every ranking of each model and (ii) the mean rank of each model. The model with the best mean rank is base+E2Tcnn, followed by gold, then by base+E2Trnn and base, respectively. ANOVA and post-hoc Tukey tests show that the CNN variant is significantly (p<0.01) better than the RNN variant and the base model. The RNN variant does not perform as well as the CNN variant, contrary to the automatic ROUGE evaluation above. Interestingly, the CNN variant produces summaries that are better (though not significantly so) than the gold summaries. We posit that this is because the article title, which serves as the gold summary, does not always correspond to a summary of the first sentence.
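As a sanity check, the mean ranks in Table 4 follow directly from the ranking proportions; a minimal sketch (rounding in the reported proportions can cause small deviations for some rows):

```python
# Mean rank = sum over ranks of (rank * proportion of samples receiving that rank).
def mean_rank(proportions):
    return sum(rank * p for rank, p in enumerate(proportions, start=1))

print(f"{mean_rank([0.14, 0.15, 0.28, 0.43]):.2f}")  # base        -> 3.00
print(f"{mean_rank([0.47, 0.27, 0.12, 0.14]):.2f}")  # base+E2Tcnn -> 1.93
```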
Entity Commonsense Representationfor Neural Abstractive Summarization
1806.05504
Table 6: Examples with highest/lowest disambiguation gate d values of two example entities (United States and gold). The tagged part of text is marked bold and preceded with at sign (@).
['Text', '[ITALIC] d']
[['Linked entity: https://en.wikipedia.org/wiki/United_States', 'Linked entity: https://en.wikipedia.org/wiki/United_States'], ['[BOLD] E1.1: andy roddick got the better of dmitry tursunov in straight sets on friday , assuring the @ [BOLD] united states a #-# lead over defending champions russia in the #### davis cup final .', '0.719'], ['[BOLD] E1.2: sir alex ferguson revealed friday that david beckham ’s move to the @ [BOLD] united states had not surprised him because he knew the midfielder would not return to england if he could not come back to manchester united .', '0.086'], ['Linked entity: https://en.wikipedia.org/wiki/Gold', 'Linked entity: https://en.wikipedia.org/wiki/Gold'], ['[BOLD] E2.1: following is the medal standing at the ##th olympic winter games -lrb- tabulated under team , @ [BOLD] gold , silver and bronze -rrb- : UNK', '0.862'], ['[BOLD] E2.2: @ [BOLD] gold opened lower here on monday at ###.##-### .## us dollars an ounce , against friday ’s closing rate of ###.##-### .## .', '0.130']]
We show the effectiveness of the selective disambiguation gate d in selecting which entities to disambiguate or not. In the first example, sentence E1.1 contains the entity “United States” linked to the country entity of the same name; however, the correct linked entity should be “United States Davis Cup team”, and the mention is therefore given a high d value. On the other hand, sentence E1.2 is linked correctly to the country “United States” and is thus given a low d value. The second example provides a similar scenario: sentence E2.1 is linked to the entity “Gold” but should be linked to the entity “Gold medal”, while sentence E2.2 is linked correctly to the chemical element. Hence, the former case receives a high d value while the latter receives a low d value.
Contextualized Sparse Representations forReal-Time Open-Domain Question Answering
1911.02896
Table 1: Results on two open-domain QA datasets. See Appendix A for how s/Q is computed.
['[BOLD] Model', 'C.TREC EM', 'SQuAD-Open EM', 'SQuAD-Open F1', 's/Q']
[['[ITALIC] Models with Dedicated Search Engines', '[ITALIC] Models with Dedicated Search Engines', '[ITALIC] Models with Dedicated Search Engines', '[ITALIC] Models with Dedicated Search Engines', '[ITALIC] Models with Dedicated Search Engines'], ['DrQA', '25.4*', '29.8**', '-', '35'], ['R3', '28.4*', '29.1', '37.5', '-'], ['Paragraph Ranker', '[BOLD] 35.4*', '30.2', '-', '161'], ['Multi-Step-Reasoner', '-', '31.9', '39.2', '-'], ['BERTserini', '-', '38.6', '46.1', '115'], ['Multi-passage BERT', '-', '[BOLD] 53.0', '[BOLD] 60.9', '84'], ['[ITALIC] End-to-End Models', '[ITALIC] End-to-End Models', '[ITALIC] End-to-End Models', '[ITALIC] End-to-End Models', '[ITALIC] End-to-End Models'], ['ORQA', '30.1', '20.2', '-', '8.0'], ['DenSPI', '31.6†', '36.2', '44.4', '0.71'], ['DenSPI + Sparc\xa0(Ours)', '[BOLD] 35.7†', '[BOLD] 40.7', '[BOLD] 49.0', '0.78']]
On both datasets, our model with contextualized sparse representations (DenSPI + Sparc) largely improves the performance of the phrase-indexing baseline model (DenSPI) by more than 4%. Also, our method runs significantly faster than other models that need to run heavy QA models during inference. On CuratedTREC, which is constructed from real user queries, our model achieves state-of-the-art performance at the time of submission. Even though our model is only trained on SQuAD (i.e., zero-shot), it outperforms all other models, which are either distant- or semi-supervised, with at least 45x faster inference.
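A rough back-of-the-envelope check of the speed claim, using only the s/Q column of Table 1 (this compares against DrQA, the fastest of the pipeline systems with a reported s/Q; the others report even higher values):

```python
# Speed-up of DenSPI + Sparc over DrQA from the reported seconds-per-query.
drqa_s_per_q = 35.0      # DrQA (Table 1)
ours_s_per_q = 0.78      # DenSPI + Sparc (Table 1)
print(f"{drqa_s_per_q / ours_s_per_q:.1f}x faster")  # ~44.9x
```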
Contextualized Sparse Representations forReal-Time Open-Domain Question Answering
1911.02896
Table 3: Results on the SQuAD development set. LSTM+SA+ELMo is a query-agnostic baseline from Seo et al. (2018).
['[EMPTY]', '[BOLD] Model', 'EM', 'F1']
[['Original', 'DrQA\xa0(Chen et\xa0al., 2017 )', '69.5', '78.8'], ['Original', 'BERT\xa0(Devlin et\xa0al., 2019 )', '84.1', '90.9'], ['Query-Agnostic', 'LSTM + SA + ELMo', '52.7', '62.7'], ['Query-Agnostic', 'DenSPI', '73.6', '81.7'], ['Query-Agnostic', 'DenSPI +\xa0Sparc', '76.4', '84.8']]
While BERT-large, which jointly encodes a passage and a question, still performs better than our model, we have closed the gap to 6.1 F1 in the query-agnostic setting.
Contextualized Sparse Representations forReal-Time Open-Domain Question Answering
1911.02896
Table 5: Exact match scores of Sparc in different search strategies. SFS: Sparse First Search. DFS: Dense First Search. Hybrid: Combination of SFS + DFS. Exact match scores are reported.
['[BOLD] Model', 'SQuAD-Open DenSPI', 'SQuAD-Open + Sparc', 'CuratedTREC DenSPI', 'CuratedTREC + Sparc']
[['SFS', '33.3', '36.9 (+3.6)', '28.8', '30.0 (+1.2)'], ['DFS', '28.5', '34.4 (+5.9)', '29.5', '34.3 (+4.8)'], ['Hybrid', '36.2', '40.7 (+4.5)', '31.6', '35.7 (+4.1)']]
Note that on CuratedTREC, where the questions more closely resemble real user queries, DFS outperforms SFS, showing the effectiveness of dense search when it is not known in advance which documents to read.
Neural Machine Translation Leveraging Phrase-based Models in a Hybrid Search
1708.03271
Table 5: Effect of the threshold parameters on the hybrid approach on the e-commerce English→Russian task.
['[ITALIC] τfocus', '[ITALIC] τcov', 'Item descr. BLEU', 'Item descr. TER', 'Product descr. BLEU', 'Product descr. TER']
[['[EMPTY]', '[EMPTY]', '[%]', '[%]', '[%]', '[%]'], ['0.3', '0.7', '27.4', '55.4', '30.8', '50.5'], ['0.3', '1.0', '27.2', '55.4', '30.3', '50.3'], ['0.3', '∞', '27.5', '55.4', '30.4', '50.9']]
Tuning. Again, we retune the system for each choice. Setting the coverage threshold to 1.0 or even disabling the coverage check (by setting τcov=∞) has little effect on the translation scores on this task. This can be explained by the fact that translation from English to Russian is mostly monotonic. We also tried varying the focus threshold τfocus between 0.0 and 0.3 but did not notice any significant effect on this task.
Neural Machine Translation Leveraging Phrase-based Models in a Hybrid Search
1708.03271
Table 3: Translation results of the hybrid approach on the e-commerce English→Russian task with different SMT model combinations. The first row shows results with all models enabled. In the following rows, we either remove or limit exactly one model compared to the full system.
['System description', 'Item descriptions BLEU [%]', 'Item descriptions TER [%]', 'Product descriptions BLEU [%]', 'Product descriptions TER [%]']
[['Full hybrid approach', '27.4', '55.4', '30.8', '50.5'], ['Without LM', '26.5', '55.9', '29.2', '51.0'], ['Without source word coverage feature', '26.7', '56.1', '29.4', '51.2'], ['Without phrase scores', '27.2', '55.9', '30.6', '50.6'], ['Maximal source phrase length 1', '26.7', '56.4', '29.1', '51.6'], ['Minimal source phrase length 2', '27.0', '55.9', '30.0', '51.1']]
To analyze the improvements of the hybrid system, we perform experiments in which we either disable or limit some of the SMT models. Without the language model, the hybrid approach has almost no improvements over the NMT baseline. This indicates that the language model is crucial in selecting appropriate phrase candidates. Similarly, when we disable the source word coverage feature, the translation quality is degraded, suggesting that this feature helps choose between phrase hypotheses and word hypotheses during the search. Next, we do not use phrase-level scores. Here, we observe only a small degradation of translation quality. Finally, we limit the source length of phrases used in the search, allowing only one-word source phrases in one experiment and only source phrases with two or more words in another experiment. In both cases, the translation quality decreases. Thus, both one-word phrases and longer phrases are necessary to obtain the best results.
Neural Machine Translation Leveraging Phrase-based Models in a Hybrid Search
1708.03271
Table 4: Effect of the beam size (word beam size Nw + phrase beam size Np) for the hybrid approach on the e-commerce English→Russian task.
['Beam size [ITALIC] Np', 'Beam size [ITALIC] Nw', 'Item descr. BLEU', 'Item descr. TER', 'Product descr. BLEU', 'Product descr. TER']
[['[EMPTY]', '[EMPTY]', '[%]', '[%]', '[%]', '[%]'], ['116', '12', '26.7', '55.9', '29.8', '51.1'], ['96', '32', '27.4', '55.4', '30.8', '50.5'], ['64', '64', '26.8', '55.6', '30.1', '50.7'], ['32', '32', '27.1', '55.8', '30.7', '50.5']]
Tuning the beam size. Next, we study the effect of different beam sizes on translation quality. Note that we retune the system for each choice. With a total beam size of 128, we get the best results by using a phrase beam size of 96 and a word beam size of 32. When we use a phrase beam size of 116 or 64 instead, the translation quality worsens. In another experiment, we decrease the total beam size to 64. The translation quality degrades only slightly, which means that we can still expect MT quality improvements with the hybrid search even if we optimize the system for speed. To further test this, we reduce the beam sizes to Nw=12 and Np=4 after tuning with Nw=32 and Np=96. We get BLEU scores of 27.1% on item descriptions and 30.1% on product descriptions, losing 0.3% and 0.7% BLEU, respectively, compared to the full beam size.
Adaptive Name Entity Recognition under Highly Unbalanced Data
2003.10296
Table 4: Comparing our methods to baselines based on Bi-LSTM-CRF and BERT-FeedForward, using the F1 metric. Bold values indicate the best result. The first version of Our Method is modified from Bi-LSTM-CRF and the other is based on BERT-FeedForward.
['[BOLD] Method', '[BOLD] Bi-LSTM-CRF', '[BOLD] BERT - FF', '[BOLD] Double Bi-LSTM-CRF', '[BOLD] Double BERT - FF', '[BOLD] Our Method (Bi-LSTM-CRF)', '[BOLD] Our Method (BERT - FF)']
[['[BOLD] Tim', '0.85', '[BOLD] 0.88', '0.84', '[BOLD] 0.88', '0.84', '[BOLD] 0.88'], ['[BOLD] Per', '0.69', '[BOLD] 0.78', '0.67', '[BOLD] 0.78', '0.67', '[BOLD] 0.78'], ['[BOLD] Geo', '0.84', '[BOLD] 0.88', '0.84', '0.87', '0.84', '0.87'], ['[BOLD] Org', '0.64', '0.70', '0.64', '[BOLD] 0.72', '0.64', '[BOLD] 0.72'], ['[BOLD] Gpe', '0.95', '[BOLD] 0.96', '0.95', '[BOLD] 0.96', '0.95', '[BOLD] 0.96'], ['[BOLD] Nat', '0.26', '0.42', '0.20', '0.12', '[BOLD] 0.43', '0.42'], ['[BOLD] Art', '0.00', '0.17', '0.01', '0.05', '0.10', '[BOLD] 0.34'], ['[BOLD] Eve', '0.17', '0.30', '0.00', '0.00', '0.24', '[BOLD] 0.37'], ['[BOLD] All classes', '0.80', '[BOLD] 0.84', '0.75', '0.83', '0.77', '[BOLD] 0.84'], ['[BOLD] Weighted Average', '0.79', '[BOLD] 0.84', '0.79', '[BOLD] 0.84', '0.79', '[BOLD] 0.84'], ['[BOLD] Macro Average', '0.55', '0.64', '0.52', '0.55', '0.59', '[BOLD] 0.67']]
The probability of a tag sequence y is given by p(y|X) = exp(S(X, y)) / Σ_{y′} exp(S(X, y′)) (4), where the sum runs over all possible tag sequences y′ for the input X. The training objective is the log-likelihood of this distribution: ln p(y|X) = S(X, y) − ln Σ_{y′} exp(S(X, y′)) (5). This log-likelihood is maximized during training, and the final tag sequence y* is obtained by y* = argmax_{y′} S(X, y′) (6). It has been shown that combining CRFs with Bi-LSTMs for modelling the tagging task generally improves tagging accuracy by effectively incorporating dependencies across the output labels. In order to mitigate the class imbalance issue, we tried relabeling the samples and training two separate models. For the first model, we mask the Weak classes to “Other” and train it to predict tags for the Strong classes and “Other”. Similarly, for the second model, the Strong classes are masked to “Other” and a new model is trained on this relabeled data. It is evident that this relabeling method does not improve the results, which motivates us to follow a novel method of classification using RNN-CNN followed by NER prediction. We also compared the derived results to four other baselines, including the original Bi-LSTM-CRF, BERT-FeedForward, and the Double version of each architecture. In the Double version of each architecture, we train one model for the Weak classes and one model for the Strong classes, and finally merge the two label sets to form the final prediction using a simple merging schedule. Sentence-type classification is not used in this version. In the result table, the bold value denotes the best value across all methods. For each entity category, we compute the F1 score, and we also report the global F1, weighted-average F1, and macro-average F1 over all classes.
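A small illustrative sketch of Eqs. (4)–(6), not the paper's implementation: it brute-forces all tag sequences of a toy linear-chain scoring function S(X, y) (emission plus transition scores, both made-up numbers) to compute the normalizer, the sequence probability, and the argmax decoding.

```python
import itertools
import math

# Toy scores: emissions[t][tag] and transitions[prev_tag][curr_tag] (made-up values).
emissions = [[2.0, 0.5],
             [0.3, 1.5],
             [1.0, 1.0]]
transitions = [[0.5, -0.2],
               [-0.3, 0.8]]

def score(y):
    """S(X, y): sum of emission scores plus transition scores along the sequence."""
    s = sum(emissions[t][tag] for t, tag in enumerate(y))
    s += sum(transitions[prev][curr] for prev, curr in zip(y, y[1:]))
    return s

# Enumerate every possible tag sequence y' (feasible only at toy sizes).
all_sequences = list(itertools.product(range(2), repeat=len(emissions)))
log_z = math.log(sum(math.exp(score(y)) for y in all_sequences))  # ln Σ exp(S(X, y'))

y = (0, 1, 0)
log_p = score(y) - log_z                # Eq. (5): ln p(y|X)
p = math.exp(log_p)                     # Eq. (4): p(y|X)
y_star = max(all_sequences, key=score)  # Eq. (6): argmax decoding
print(p, y_star)
```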
Adaptive Name Entity Recognition under Highly Unbalanced Data
2003.10296
Table 1: Label statistics
['[BOLD] Label', '[BOLD] Count', '[BOLD] Label', '[BOLD] Count']
[['O', '88791', 'Geo', '37644'], ['Tim', '20333', 'Org', '20143'], ['Per', '16990', 'Gpe', '15869'], ['Art', '402', 'Eve', '308'], ['Nat', '201', '[EMPTY]', '[EMPTY]']]
Depending on the specific application, the set of entity types might vary. However, the label set most commonly used in academic research contains four labels: LOC (location), ORG (organization), PER (person) and MISC (miscellaneous). In this project, the given corpus was annotated with 8 types of entities: Art (artifact), Eve (event), Nat (natural phenomena), Geo (geographical entity), Gpe (geopolitical entity), Tim (time indicator), Org (organization), Per (person). We implemented the Bi-LSTM-CRF architecture using the PyTorch framework, and we also experimented with one traditional CRF model (Stanford NLP) and one SOTA transformer-based neural network model (BERT). The best result among the above models was a 0.85 F1 score on the test set. However, these models all suffered from imperfect (wrong) annotations and a highly imbalanced dataset. The label counts, reported over the whole train/test/val corpus, show that the distribution of labels is highly skewed. Specifically, the total number of samples in the five Strong-class groups (“Geo”, “Time”, “Org”, “Per” and “Gpe”) is approximately 50 times larger than the total number of samples in the three Weak-class groups (“Art”, “Eve” and “Nat”). This characteristic poses a considerable difficulty for almost every model, as most of the learned features are biased toward the Strong classes. Therefore, in a later part of this report, we also describe an approach to address this problem.
Adaptive Name Entity Recognition under Highly Unbalanced Data
2003.10296
Table 3: Accuracy of baseline methods using full training data with weighted loss function.
['[BOLD] Method', '[BOLD] 0-Class', '[BOLD] 1-Class']
[['LSTM - Attention', '98.7', '60.1'], ['Self - Attention', '99.1', '54.7'], ['[BOLD] RNN - CNN', '[BOLD] 99.6', '[BOLD] 75.0']]
To overcome this issue, we introduce a modified binary loss function that exploits the whole data during training: L_wce = −p0·t0·log(s0) − p1·t1·log(s1) (7), where (t0, s0) and (t1, s1) denote the ground truth and predicted scores for classes 0 and 1, respectively, and p0, p1 denote the occurrence probabilities of classes 0 and 1 in the training data; in our experiment, p0 ≈ 0.02 and p1 ≈ 0.98. It is particularly interesting that RNN-CNN improves the accuracy on both the 0 and 1 classes by a large margin compared with the two remaining methods, reaching 75% on the “1” class and 99.6% on the “0” class. This improvement can be explained as the effect of setting a small weight for class 0 and a large weight for class 1: it forces the model to damp the loss when the “0” class is predicted correctly and to amplify the error for incorrect assignments on the “1” class.
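A minimal sketch of the weighted binary cross-entropy in Eq. (7), assuming t = (t0, t1) is a one-hot ground truth, s = (s0, s1) are the predicted class scores, and p0, p1 are the per-class weights quoted in the paragraph; this is an illustration, not the authors' code.

```python
import numpy as np

def weighted_bce(t, s, p0=0.02, p1=0.98, eps=1e-12):
    """L_wce = -p0 * t0 * log(s0) - p1 * t1 * log(s1), as in Eq. (7)."""
    t0, t1 = t
    s0, s1 = s
    return -(p0 * t0 * np.log(s0 + eps) + p1 * t1 * np.log(s1 + eps))

# Misclassifying a rare "1"-class sentence is penalized far more heavily
# than misclassifying a "0"-class sentence with the same confidence.
print(weighted_bce(t=(0, 1), s=(0.7, 0.3)))  # ~1.18
print(weighted_bce(t=(1, 0), s=(0.3, 0.7)))  # ~0.024
```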
DAM: Deliberation, Abandon and Memory Networks for Generating Detailed and Non-repetitive Responses in Visual Dialogue
2007.03310
Table 3: Human evaluation of 100 sample responses on VisDial v1.0 validation set. M1: percentage of responses that pass the Turing Test. M2: percentage of responses that are evaluated better or equal to human responses. Repetition: percentage of responses that have meaningless repeated words. Richness: percentage of responses that contain detailed content to answer the question.
['Model', 'M1 ↑', 'M2 ↑', 'Repetition↓', 'Richness ↑']
[['RSL(DualVD-G): RSL only', '0.60', '0.47', '0.20', '0.03'], ['WDL: WDL only', '0.69', '0.54', '0.07', '[BOLD] 0.15'], ['[BOLD] DualVD-DAM', '[BOLD] 0.75', '[BOLD] 0.61', '[BOLD] 0.01', '0.13']]
Distinct from previous works, we add Repetition and Richness metrics, and for all metrics we record the score when at least two annotators agree. After incorporating RSL and the Memory Unit with WDL, Repetition further reduces by 0.06 while M1 and M2 improve by 0.06 and 0.07, respectively, which demonstrates the complementary advantages of these two levels of information. We also notice that Richness decreases slightly, mainly because the information from RSL concentrates attention on global rather than detailed information.
DAM: Deliberation, Abandon and Memory Networks for Generating Detailed and Non-repetitive Responses in Visual Dialogue
2007.03310
Table 1: Result comparison on validation set of VisDial v1.0.
['Model', 'MRR', 'R@1', 'R@5', 'R@10', 'Mean', 'NDCG']
[['HCIAE-G ', '49.07', '39.72', '58.23', '64.73', '18.43', '59.70'], ['CoAtt-G ', '49.64', '40.09', '59.37', '65.92', '17.86', '59.24'], ['Primary-G ', '49.01', '38.54', '59.82', '66.94', '16.69', '-'], ['ReDAN-G ', '49.60', '39.95', '59.32', '65.97', '17.79', '59.41'], ['DMRM ', '50.16', '40.15', '60.02', '67.21', '[BOLD] 15.19', '-'], ['LF-G ', '44.67', '34.84', '53.64', '59.69', '21.11', '52.23'], ['MN-G ', '45.51', '35.40', '54.91', '61.20', '20.24', '51.86'], ['DualVD-G ', '49.78', '39.96', '59.96', '66.62', '17.49', '60.08'], ['[BOLD] LF-DAM (ours)', '45.08', '35.01', '54.48', '60.57', '20.83', '52.68'], ['[BOLD] MN-DAM (ours)', '46.16', '35.87', '55.99', '62.45', '19.57', '52.82'], ['[BOLD] DualVD-DAM (ours)', '[BOLD] 50.51', '[BOLD] 40.53', '[BOLD] 60.84', '[BOLD] 67.94', '16.65', '[BOLD] 60.93']]
, re-trained by us). ReDAN-G and DMRM adopt complex multi-step reasoning, while HCIAE-G, CoAtt-G and Primary-G are attention-based models. For fairness, we only compare the original generative ability without re-ranking, and we simply replace the decoders in the baseline models with our proposed DAM. Compared with the baseline models, our models outperform them on all the metrics, which indicates the complementary advantages between DAM and existing encoders in visual dialogue. Though DualVD-G performs worse than DMRM on Mean, DualVD-DAM outperforms DMRM on all the other metrics without the multi-step reasoning that constitutes DMRM’s advantage over our models.
DAM: Deliberation, Abandon and Memory Networks for Generating Detailed and Non-repetitive Responses in Visual Dialogue
2007.03310
Table 2: Ablation study of each unit on VisDial v1.0 validation set.
['Base Model', 'Model', 'MRR', 'R@1', 'R@5', 'R@10', 'Mean', 'NDCG']
[['LF-DAM', '2LSTM', '44.43', '34.53', '53.55', '59.48', '21.38', '51.99'], ['LF-DAM', '2L-M', '44.77', '34.85', '54.06', '60.03', '21.13', '52.04'], ['LF-DAM', '2L-DM', '45.06', '34.90', '54.24', '60.39', '20.87', '52.58'], ['LF-DAM', '2L-DAM', '[BOLD] 45.08', '[BOLD] 35.01', '[BOLD] 54.48', '[BOLD] 60.57', '[BOLD] 20.83', '[BOLD] 52.68'], ['MN-DAM', '2LSTM', '45.58', '35.27', '55.38', '61.54', '19.96', '52.38'], ['MN-DAM', '2L-M', '45.67', '35.29', '55.57', '61.97', '19.91', '52.11'], ['MN-DAM', '2L-DM', '45.77', '35.53', '55.40', '62.05', '19.95', '52.51'], ['MN-DAM', '2L-DAM', '[BOLD] 46.16', '[BOLD] 35.87', '[BOLD] 55.99', '[BOLD] 62.45', '[BOLD] 19.57', '[BOLD] 52.82'], ['DualVD-DAM', '2LSTM', '49.72', '40.04', '59.52', '66.41', '17.62', '59.79'], ['DualVD-DAM', '2L-M', '50.09', '40.38', '59.94', '66.77', '17.31', '59.85'], ['DualVD-DAM', '2L-DM', '50.20', '40.33', '60.22', '67.48', '17.15', '59.72'], ['DualVD-DAM', '2L-DAM', '[BOLD] 50.51', '[BOLD] 40.53', '[BOLD] 60.84', '[BOLD] 67.94', '[BOLD] 16.65', '[BOLD] 60.93']]
We consider the following ablation models to illustrate the effectiveness of each unit of our model: 1) 2L-DAM: our full model, which adaptively selects related information for decoding; 2) 2L-DM: the full model without the Abandon Unit; 3) 2L-M: 2L-DM without the Deliberation Unit; 4) 2LSTM: 2L-M without the Memory Unit. A similar trend exists for LF-DAM and MN-DAM, which indicates the effectiveness of each unit in DAM. Due to space limitations and the similar observations, we only show ablation studies on DualVD-DAM in the following experiments.
DAM: Deliberation, Abandon and Memory Networks for Generating Detailed and Non-repetitive Responses in Visual Dialogue
2007.03310
Table 4: Ablation study of Deliberation Unit on VisDial v1.0.
['Model', 'MRR', 'R@1', 'R@5', 'R@10', 'Mean', 'NDCG']
[['I-S', '50.01', '40.25', '59.78', '66.76', '17.67', '59.09'], ['I-V', '50.03', '40.30', '59.34', '66.90', '17.34', '58.93'], ['I-SV', '50.13', '40.34', '60.09', '67.06', '17.34', '59.51'], ['H', '50.19', '40.36', '60.09', '66.96', '17.27', '59.92'], ['[BOLD] DualVD-DAM', '[BOLD] 50.51', '[BOLD] 40.53', '[BOLD] 60.84', '[BOLD] 67.94', '[BOLD] 16.65', '[BOLD] 60.93']]
The relatively higher results of the H model indicate that the history information plays a more important role in the decoder. By jointly incorporating all the structure-aware information from the encoder, DualVD-DAM achieves the best performance on all the metrics. This proves the advantage of DAM in fully utilizing the information from the elaborate encoder, and suggests that existing generation models can be enhanced by adaptively coupling their encoders with DAM.
Learning Word Representations with Hierarchical Sparse Coding
1406.2035
Table 1: Summary of results. We report Spearman’s correlation coefficient for the word similarity task and accuracies (%) for other tasks. Higher values are better (higher correlation coefficient or higher accuracy). The last two methods (columns) are new to this paper, and our proposed method is in the last column.
['[ITALIC] M', 'Task', 'PCA', 'RNN', 'NCE', 'CBOW', 'SG', 'SC', 'forest']
[['52', 'Word similarity', '0.39', '0.26', '0.48', '0.43', '0.49', '0.49', '[BOLD] 0.52'], ['52', 'Syntactic analogies', '18.88', '10.77', '24.83', '23.80', '[BOLD] 26.69', '11.84', '24.38'], ['52', 'Semantic analogies', '8.39', '2.84', '[BOLD] 25.29', '8.45', '19.49', '4.50', '9.86'], ['52', 'Sentence completion', '27.69', '21.31', '[BOLD] 30.18', '25.60', '26.89', '25.10', '28.88'], ['52', 'Sentiment analysis', '74.46', '64.85', '70.84', '68.48', '71.99', '75.51', '[BOLD] 75.83'], ['520', 'Word similarity', '0.50', '0.31', '0.59', '0.53', '0.58', '0.58', '[BOLD] 0.66'], ['520', 'Syntactic analogies', '40.67', '22.39', '33.49', '52.20', '[BOLD] 54.64', '22.02', '48.00'], ['520', 'Semantic analogies', '28.82', '5.37', '[BOLD] 62.76', '12.58', '39.15', '15.46', '41.33'], ['520', 'Sentence completion', '30.58', '23.11', '33.07', '26.69', '26.00', '28.59', '[BOLD] 35.86'], ['520', 'Sentiment analysis', '81.70', '72.97', '78.60', '77.38', '79.46', '78.20', '[BOLD] 81.90']]
In the similarity ranking and sentiment analysis tasks, our method performed the best with both low- and high-dimensional embeddings. In the sentence completion challenge, our method performed best in the high-dimensional case and second-best in the low-dimensional case. Importantly, forest outperforms PCA and unstructured sparse coding (SC) on every task. We take this collection of results as support for the idea that a coarse-to-fine organization of the latent dimensions of word representations captures the relationships between words’ meanings better than an unstructured organization.
Learning Word Representations with Hierarchical Sparse Coding
1406.2035
Table 2: Results on the syntactic and semantic analogies tasks with a bigger corpus (M=260).
['Task', 'CBOW', 'SG', 'forest']
[['Syntactic', '61.37', '63.61', '[BOLD] 65.11'], ['Semantic', '23.13', '[BOLD] 54.41', '52.07']]
We hypothesize that this is because performing well on these tasks requires training on a bigger corpus. We combine our WMT-2011 corpus with other news corpora and Wikipedia to obtain a corpus of 6.8 billion words. The size of the vocabulary of this corpus is 401,150. (M=52 does not perform as well, and M=520 is computationally expensive). All models benefit significantly from a bigger corpus, and the performance levels are now comparable with previous work. On the syntactic analogies task, forest is the best model. On the semantic analogies task, SG outperformed forest, and they both are better than CBOW.
The Right Tool for the Job: Matching Model and Instance Complexities
2004.07453
Table 2: Fine-tuning times (in minutes) of our model compared to the most accurate baseline: the standard BERT-large model with a single output layer.
['[BOLD] Dataset', '[BOLD] Training Time Ours', '[BOLD] Training Time Standard']
[['AG', '052', '053'], ['IMDB', '056', '057'], ['SST', '004', '004'], ['SNLI', '289', '300'], ['MNLI', '852', '835']]
Fine-tuning BERT-large with our approach has a similar cost to fine-tuning the standard BERT-large model, with a single output layer. Our model is not slower to fine-tune in four out of five cases, and is even slightly faster in three of them.
The Right Tool for the Job: Matching Model and Instance Complexities
2004.07453
Table 3: Spearman’s ρ correlation between confidence levels for our most efficient classifier and two measures of difficulty: document length and consistency. Confidence is correlated reasonably with consistency across all datasets. For all datasets except AG, confidence is (loosely) negatively correlated with document length. For the AG topic classification dataset, confidence is (loosely) positively correlated. Results for the other layers show a similar trend.
['[BOLD] Dataset', '[BOLD] Length', '[BOLD] Consistency']
[['AG', '–0.13', '0.37'], ['IMDB', '–0.17', '0.47'], ['SST', '–0.19', '0.36'], ['SNLI', '–0.08', '0.44'], ['MNLI', '–0.13', '0.39']]
Moreover, as expected, across four out of five datasets, the (weak) correlation between confidence and length is negative; our model is somewhat more confident in its predictions on shorter documents. The fifth dataset (AG) shows the opposite trend: confidence is positively correlated with length. This discrepancy might be explained by the nature of the tasks we consider. For instance, IMDB and SST are sentiment analysis datasets, where longer texts might include conflicting evidence and thus be harder to classify. In contrast, AG is a news topic detection dataset, where a conflict between topics is uncommon, and longer documents provide more opportunities to find the topic. Our next criterion for “difficulty” is the consistency of model predictions. Prior work (Toneva et al.; Sakaguchi et al.) has studied instances that models treat consistently, which can be thought of as “easy” or memorable examples. Inspired by these works, we define the criterion of consistency: whether all classifiers in our model agree on the prediction of a given instance, regardless of whether it is correct or not. Table 3 reports the Spearman’s ρ correlation between the confidence of the most efficient classifier and this measure of consistency. Our analysis reveals a medium correlation between confidence and consistency across all datasets (0.37≤ρ≤0.47), which indicates that the measure of confidence generally agrees with the measure of consistency.
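A hypothetical sketch of the consistency analysis described above, with made-up predictions and confidences (not the paper's data): an instance is consistent if every classifier in the model predicts the same label, and this indicator is correlated with the confidence of the most efficient classifier.

```python
from scipy.stats import spearmanr

# Rows: exit classifiers (layers); columns: instances. Made-up predictions.
layer_predictions = [[1, 0, 1, 1, 0],
                     [1, 0, 1, 0, 0],
                     [1, 1, 1, 0, 0],
                     [1, 1, 1, 0, 0]]
# Confidence of the most efficient (first) classifier on each instance (made-up).
confidences = [0.95, 0.55, 0.90, 0.50, 0.80]

# Consistency: 1 if all classifiers agree on the instance, else 0.
consistency = [float(len({preds[i] for preds in layer_predictions}) == 1)
               for i in range(len(confidences))]

rho, _ = spearmanr(confidences, consistency)
print(rho)
```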
The Right Tool for the Job: Matching Model and Instance Complexities
2004.07453
Table 4: Spearman’s ρ correlation between confidence levels for our classifiers (of different layers) on the validation sets of SNLI and MNLI, and two measures of difficulty: hypothesis-only classifier predictions (Hyp.-Only) and inter-annotator consensus (IAC).
['[BOLD] Layer', '[BOLD] SNLI Hyp.-Only', '[BOLD] SNLI IAC', '[BOLD] MNLI Hyp.-Only', '[BOLD] MNLI IAC']
[['0', '0.39', '0.14', '0.37', '0.08'], ['4', '0.31', '0.25', '0.35', '0.21'], ['12', '0.31', '0.31', '0.32', '0.27'], ['23', '0.28', '0.32', '0.30', '0.32']]
Gururangan et al. showed that some NLI instances can be classified correctly from the hypothesis alone. They argued that such instances are “easier” for machines, compared to those which require access to the full input, which they considered “harder.” Similarly to the consistency results, we see that the confidence of our most efficient classifier is reasonably correlated with the predictions of the hypothesis-only classifier. As expected, as we move to larger, more accurate classifiers, which presumably are able to make successful predictions on harder instances, this correlation decreases. Both NLI datasets include labels from five different annotators. We treat the inter-annotator consensus (IAC) as another measure of difficulty: the higher the consensus, the easier the instance. We compute IAC for each example as the fraction of annotators who agreed on the majority label, hence this number ranges from 0.6 to 1.0 for five annotators. The correlation with our most efficient classifier is rather weak, only 0.08 (MNLI) and 0.14 (SNLI). Surprisingly, as we move to larger models, the correlation increases, up to 0.32 for the most accurate classifiers. This indicates that the two measures perhaps capture a different notion of difficulty.
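A small sketch of the IAC measure as defined above (fraction of annotators agreeing with the majority label); the label strings are only illustrative.

```python
from collections import Counter

def inter_annotator_consensus(labels):
    """Fraction of annotators who agree with the majority label."""
    majority_count = Counter(labels).most_common(1)[0][1]
    return majority_count / len(labels)

print(inter_annotator_consensus(["entailment"] * 5))                           # 1.0
print(inter_annotator_consensus(
    ["entailment", "entailment", "entailment", "neutral", "contradiction"]))   # 0.6
```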
Exploring Explainable Selection to Control Abstract Generation
2004.11779
Table 7: QA-based and criteria-based human evaluation. ∗ mark indicates the improvements from the baselines to the ESCA-BERT are statistically significant using a paired t-test (p<0.05). Gold summary was not included in QA evaluation. ♭ means unnecessary value.
['Models', 'QA', 'Criteria Infor.', 'Criteria Nov.', 'Criteria Rel.']
[['PG+Cov.', '26.0∗', '-0.28 ∗', '-0.43 ∗', '-0.05 ∗'], ['Bottom-Up', '31.3∗', '-0.07∗', '0.02 ∗', '-0.08 ∗'], ['Inconsistency-Loss', '29.8∗', '-0.10∗', '-0.12∗', '-0.15∗'], ['ESCA-BERT', '39.2', '0.15', '0.14', '0.15'], ['Gold', '♭', '0.3', '0.4', '0.13'], ['Bottom-Up', '♭', '-0.23', '-0.07', '-0.15'], ['ESCA-BERT', '♭', '0.10', '0.03', '0.05'], ['ESCA( [ITALIC] ϵn=0.45)', '♭', '0.05', '0.10', '0.02'], ['ESCA( [ITALIC] ϵr=0.5)', '♭', '0.07', '-0.02', '0.07']]
In the first block of criteria ranking, 5 systems were ranked simultaneously. The gold summary set the upper bound except for relevance. This is unsurprising: since the gold summaries of CNN/DailyMail are mostly top sentences in the articles, their relevance cannot be guaranteed. We also found that ESCA-BERT produced the most preferred summaries among the systems. In the second block, ESCA with novelty control and with relevance control was evaluated together with Bottom-Up and the original ESCA-BERT. The rankings vary slightly, but they clearly show that ESCA under novelty or relevance control achieves the highest rank on the corresponding criterion.
Exploring Explainable Selection to Control Abstract Generation
2004.11779
Table 6: Controllability: ROUGE Recall scores on the CNN/DailyMail dataset regarding different explainable aspects under different thresholds, such as novelty ϵn and relevance ϵr.
['[BOLD] Models ESCA', '[BOLD] R-1', '[BOLD] R-2', '[BOLD] R-L']
[['[ITALIC] ϵr=0 [ITALIC] ϵn=0 [ITALIC] ϵs=0', '46.49', '20.64', '42.57'], ['Novelty [ITALIC] ϵn=0.45', '47.31', '20.90', '43.53'], ['Relevance [ITALIC] ϵr=0.5', '46.97', '20.82', '42.99']]
We evaluated the controlled performance by ROUGE recall because it shows whether additional relevant summaries can be included under the deterministic controls. Additionally, there is always a trade-off between controllability and summary performance since the gold summary was automatically constructed with bias and not based on these controllability properties.
When and Why are Pre-trained Word Embeddings Useful for Neural Machine Translation?
1804.06323
Table 5: Effect of pre-training on multilingual translation into English. bi is a bilingual system trained on only the eval source language and all others are multi-lingual systems trained on two similar source languages.
['[BOLD] Train', '[BOLD] Eval', 'bi', 'std', 'pre', 'align']
[['Gl + Pt', 'Gl', '2.2', '17.5', '20.8', '[BOLD] 22.4'], ['Az + Tr', 'Az', '1.3', '5.4', '5.9', '[BOLD] 7.5'], ['Be + Ru', 'Be', '1.6', '[BOLD] 10.0', '7.9', '9.6']]
When applying pre-trained embeddings, the gains in each translation pair are roughly in order of their similarity, with Gl/Pt showing the largest gains and Be/Ru showing a small decrease. In addition, it is interesting to note that, as opposed to the previous section, aligning the word embeddings increases the BLEU scores for all three tasks. These increases are intuitive, as a single encoder is used for both source languages, and the encoder would have to learn a significantly more complicated transform of the input if the word embeddings for the languages were in semantically separate spaces. Pre-training and alignment ensure that the word embeddings of the two source languages are put into similar vector spaces, allowing the model to learn in a similar fashion as it would if training on a single language.
When and Why are Pre-trained Word Embeddings Useful for Neural Machine Translation?
1804.06323
Table 1: Number of sentences for each language pair.
['[BOLD] Dataset', 'train', 'dev', 'test']
[['Gl → En', '10,017', '682', '1,007'], ['Pt → En', '51,785', '1,193', '1,803'], ['Az → En', '5,946', '671', '903'], ['Tr → En', '182,450', '4,045', '5,029'], ['Be → En', '4,509', '248', '664'], ['Ru → En', '208,106', '4,805', '5,476']]
In order to perform experiments in a controlled, multilingual setting, we created a parallel corpus from TED talks transcripts. We selected three pairs of related languages: Galician (Gl) and Portuguese (Pt), Azerbaijani (Az) and Turkish (Tr), and Belarusian (Be) and Russian (Ru). The languages in each pair are similar in vocabulary, grammar and sentence structure (Matthews). They also represent different language families – Gl/Pt are Romance; Az/Tr are Turkic; Be/Ru are Slavic – allowing for comparison across languages with different characteristics. Tokenization was done using the Moses tokenizer.
When and Why are Pre-trained Word Embeddings Useful for Neural Machine Translation?
1804.06323
Table 2: Effect of pre-training on BLEU score over six languages. The systems use either random initialization (std) or pre-training (pre) on both the source and target sides.
['[BOLD] Src → Trg', 'src: std / trg: std', 'src: pre / trg: std', 'src: std / trg: pre', 'src: pre / trg: pre']
[['Gl → En', '2.2', '[BOLD] 13.2', '2.8', '12.8'], ['Pt → En', '26.2', '[BOLD] 30.3', '26.1', '[BOLD] 30.8'], ['Az → En', '1.3', '[BOLD] 2.0', '1.6', '[BOLD] 2.0'], ['Tr → En', '14.9', '17.6', '14.7', '[BOLD] 17.9'], ['Be → En', '1.6', '2.5', '1.3', '[BOLD] 3.0'], ['Ru → En', '18.5', '[BOLD] 21.2', '18.7', '[BOLD] 21.1']]
Comparing the second and third columns, we can see the increase is much more significant with pre-trained source language embeddings. This indicates that the majority of the gain from pre-trained word embeddings results from a better encoding of the source sentence.
When and Why are Pre-trained Word Embeddings Useful for Neural Machine Translation?
1804.06323
(a) Pairwise comparison between two bilingual models
['bi:std', 'bi:std', 'bi:pre', 'bi:pre']
[[') so', '2/0', 'about', '0/53'], ['( laughter ) i', '2/0', 'people', '0/49'], [') i', '2/0', 'or', '0/43'], ['laughter ) i', '2/0', 'these', '0/39'], [') and', '2/0', 'with', '0/38'], ['they were', '1/0', 'because', '0/37'], ['have to', '5/2', 'like', '0/36'], ['a new', '1/0', 'could', '0/35'], ['to do ,', '1/0', 'all', '0/34'], ['‘‘ and then', '1/0', 'two', '0/32']]
We additionally performed pairwise comparisons between the top 10 n-grams that each system (from the Gl → En task) is better at generating, to further understand what kinds of words pre-training is particularly helpful for. The improvements in the systems without pre-trained embeddings, on the other hand, were not very consistent and largely focused on high-frequency words.
When and Why are Pre-trained Word Embeddings Useful for Neural Machine Translation?
1804.06323
(b) Pairwise comparison between two multilingual models
['multi:std', 'multi:std', 'multi:pre+align', 'multi:pre+align']
[['here', '6/0', 'on the', '0/14'], ['again ,', '4/0', 'like', '1/20'], ['several', '4/0', 'should', '0/9'], ['you ’re going', '4/0', 'court', '0/9'], ['’ve', '4/0', 'judge', '0/7'], ['we ’ve', '4/0', 'testosterone', '0/6'], ['you ’re going to', '4/0', 'patents', '0/6'], ['people ,', '4/0', 'patent', '0/6'], ['what are', '3/0', 'test', '0/6'], ['the room', '3/0', 'with', '1/12']]
We additionally performed pairwise comparisons between the top 10 n-grams that each system (from the Gl → En task) is better at generating, to further understand what kinds of words pre-training is particularly helpful for. The improvements in the systems without pre-trained embeddings, on the other hand, were not very consistent and largely focused on high-frequency words.
Automatic Generation of High Quality CCGbanks for Parser Domain Adaptation
1906.01834
Table 4: Results on question sentences (§5.3). All of baseline C&C, EasySRL and depccg parsers are retrained on Questions data.
['Method', 'P', 'R', 'F1']
[['C&C', '-', '-', '86.8'], ['EasySRL', '88.2', '87.9', '88.0'], ['depccg', '90.42', '[BOLD] 90.15', '[BOLD] 90.29'], ['+ ELMo', '[BOLD] 90.55', '89.86', '90.21'], ['+ Proposed', '90.27', '89.97', '90.12']]
Contrary to our expectation, the plain depccg retrained on the Questions data performs the best, with neither ELMo nor the proposed method having any effect. We hypothesize that, since the evaluation set contains sentences with similar constructions, the contributions of the latter two methods are less observable on top of the Questions data. Inspection of the output trees reveals that this is actually the case: the majority of differences among the parser configurations are irrelevant to question constructions, suggesting that the models capture the syntax of questions in the data well.
Automatic Generation of High Quality CCGbanks for Parser Domain Adaptation
1906.01834
Table 1: The performance of baseline CCG parsers and the proposed converter on WSJ23, where UF1 and LF1 represents unlabeled and labeled F1, respectively.
['Method', 'UF1', 'LF1']
[['depccg', '94.0', '88.8'], ['+ ELMo', '94.98', '90.51'], ['Converter', '96.48', '92.68']]
In short, the method of depccg is equivalent to omitting the dependence on a dependency tree z from P(y|x, z) of our converter model, and running an A*-parsing-based decoder on p_tag|dep calculated on h_1, ..., h_N = Ω(e_x1, ..., e_xN), as in our method. In this work, on top of that, we include as a baseline a setting where the affix vectors are replaced by contextualized word representations (ELMo; Peters et al.). This can be regarded as evaluating the upper bound of the conversion quality, since the evaluated data comes from the same domain as the converter’s training data. Our converter shows much higher scores than the current best-performing depccg combined with ELMo (1.5% and 2.17% up in unlabeled/labeled F1 scores), suggesting that, using the proposed converter, we can obtain CCGbanks of high quality.
Automatic Generation of High Quality CCGbanks for Parser Domain Adaptation
1906.01834
Table 3: Results on the biomedical domain dataset (§5.3). P and R represent precision and recall, respectively. The scores of C&C and EasySRL fine-tuned on the GENIA1000 is included for comparison (excerpted from Lewis et al. (2016)).
['Method', 'P', 'R', 'F1']
[['C&C', '77.8', '71.4', '74.5'], ['EasySRL', '81.8', '82.6', '82.2'], ['depccg', '83.11', '82.63', '82.87'], ['+ ELMo', '85.87', '85.34', '85.61'], ['+ GENIA1000', '85.45', '84.49', '84.97'], ['+ Proposed', '[BOLD] 86.90', '[BOLD] 86.14', '[BOLD] 86.52']]
The plain depccg already achieves higher scores than these methods, and improves further when combined with ELMo (an improvement of 2.73 points in F1). Fine-tuning the parser on GENIA1000 yields mixed results, with slightly lower scores, presumably because the automatically annotated Head First dependencies are not accurate. Finally, by fine-tuning on the Genia CCGbank, we observe another improvement, resulting in the highest F1 score of 86.52.
Automatic Generation of High Quality CCGbanks for Parser Domain Adaptation
1906.01834
Table 7: Results on speech conversation texts (§5.4), on the whole test set and the manually annotated subset.
['Method', '[BOLD] Whole P', '[BOLD] Whole R', '[BOLD] Whole F1', '[BOLD] Subset UF1', '[BOLD] Subset LF1']
[['depccg', '74.73', '73.91', '74.32', '90.68', '82.46'], ['+ ELMo', '75.76', '76.62', '76.19', '93.23', '86.46'], ['+ Proposed', '[BOLD] 78.03', '[BOLD] 77.06', '[BOLD] 77.54', '[BOLD] 95.63', '[BOLD] 92.65']]
Though the overall scores are relatively lower, the result suggests that the proposed method is effective in this domain on the whole. By directly evaluating the parser’s performance in terms of predicate–argument relations (the Subset columns), we observe that it actually recovers most of the dependencies, with the fine-tuned depccg achieving as high as a 95.63% unlabeled F1 score.
Automatic Generation of High Quality CCGbanks for Parser Domain Adaptation
1906.01834
Table 8: Results on math problems (§5.5).
['Method', 'UF1', 'LF1']
[['depccg', '88.49', '66.15'], ['+ ELMo', '89.32', '70.74'], ['+ Proposed', '[BOLD] 95.83', '[BOLD] 80.53']]
Remarkably, we observe a huge additive performance improvement. While, in terms of labeled F1, ELMo contributes about 4 points on top of the plain depccg, adding the new training set (converted from dependency trees) improves it by more than 10 points. Examining the resulting trees, we observe that the huge gain primarily involves expressions unique to math. However, after fine-tuning, the parser successfully produces the correct “If S1 and S2, S3” structure, recognizing that the equal sign is a predicate.
The Dialogue Dodecathlon: Open-Domain Knowledge and Image Grounded Conversational Agents
1911.03768
Table 7: Test performance for various metrics on the dodecaDialogue tasks comparing our multi-task and multi-task + fine-tuned methods to existing approaches (cited). Dashes mean metric was not provided. ∗ was reported on validation only. Score is defined on a per-task basis in the metric column.
['[EMPTY]', 'Existing Approaches (independent) Approach', 'Existing Approaches (independent) PPL', 'Existing Approaches (independent) Score', 'Existing Approaches (independent) (Metric)', 'MT + FT PPL', 'MT + FT Score', 'All Tasks MT PPL', 'All Tasks MT Score']
[['ConvAI2', 'lewis2019bart', '*11.9', '*20.7', 'F1', '11.1', '21.6', '[BOLD] 10.8', '[BOLD] 21.7'], ['DailyDialog', '(mixreview)', '11.1', '-', 'F1', '[BOLD] 10.4', '[BOLD] 18.2', '12.0', '16.2'], ['Wiz. of Wikipedia', '(dinan2018wizard)', '23.1', '35.5', 'F1', '[BOLD] 8.3', '[BOLD] 38.4', '8.4', '[BOLD] 38.4'], ['Empathetic Dialog', '(rashkin2019empathy)', '21.2', '6.27', 'Avg-BLEU', '[BOLD] 11.4', '8.1', '11.5', '[BOLD] 8.4'], ['Cornell Movie', '(mixreview)', '27.5', '-', 'F1', '[BOLD] 20.2', '[BOLD] 12.4', '22.2', '11.9'], ['LIGHT', 'urbanek2019learning', '∗27.1', '∗13.9', 'F1', '[BOLD] 18.9', '[BOLD] 16.2', '19.3', '16.1'], ['ELI5', 'lewis2019bart', '24.2', '20.4', 'Avg-ROUGE', '[BOLD] 21.0', '[BOLD] 22.6', '24.9', '20.7'], ['Ubuntu', '(DBLP:journals/corr/LuanJO16)', '46.8', '-', 'F1', '[BOLD] 17.1', '12.7', '23.1', '12.1'], ['Twitter', '[EMPTY]', '-', '-', 'F1', '30.7', '9.9', '38.2', '9.8'], ['pushshift.io Reddit', '[EMPTY]', '-', '-', 'F1', '25.6', '13.6', '27.8', '13.5'], ['Image Chat', 'shuster2018imagechat', '-', '27.4', 'ROUGE-L (1 [ITALIC] st turn)', '[BOLD] 18.8', '[BOLD] 43.8', '22.3', '39.7'], ['IGC', '(igc)', '-', '1.57', 'BLEU (responses)', '11.9', '[BOLD] 9.9', '12.0', '8.2']]
Here, for the multi-task model, we fine-tuned the decoding hyperparameters per task. Across all metrics, we generally find a similar story as before when comparing fine-tuning with multi-tasking: multi-tasking is successful, but the challenge is still to do better.
The Dialogue Dodecathlon: Open-Domain Knowledge and Image Grounded Conversational Agents
1911.03768
Table 3: Transfer performance of various multi-task models (validation perplexity).
['Model', 'ConvAI2', 'Wiz. of Wikipedia', 'EmpatheticDialog']
[['Reddit', '18.3', '15.3', '14.4'], ['Reddit+ConvAI2', '[BOLD] 11.4', '14.2', '14.7'], ['Reddit+Wiz. of Wikipedia', '16.3', '[BOLD] 8.7', '14.0'], ['Reddit+Empathetic Dialog', '17.9', '15.3', '11.3'], ['Multi-Tasking All 4 Tasks', '11.6', '[BOLD] 8.7', '[BOLD] 11.2']]
We first perform a preliminary study on a subset of the tasks: pushshift.io Reddit, ConvAI2, Wizard of Wikipedia and Empathetic Dialogues, and report the transfer ability of training on some of them and testing on all of them (using the validation set), reporting perplexity. The results show that training on pushshift.io Reddit alone, a huge dataset, is effective at transfer to other tasks, but never as effective as fine-tuning on the task itself. Moreover, fine-tuning on most of the smaller tasks actually provides improvements over pushshift.io Reddit training alone at transfer, likely because the three tasks selected are more similar to each other than to pushshift.io Reddit. Finally, training on all four tasks is the most effective strategy averaged over all tasks compared to any other single model, although this does not beat switching between different fine-tuned models on a per-task basis. The performance is impressive, with some tasks yielding lower perplexity than BERT pre-training + single-task fine-tuning. However, it still lags significantly behind fine-tuning applied after pushshift.io Reddit pre-training.
The Dialogue Dodecathlon: Open-Domain Knowledge and Image Grounded Conversational Agents
1911.03768
Table 14: Human evaluations on Wizard of Wikipedia (unseen) test set, comparing various decoding schemes for our Image+Seq2Seq model trained on all tasks MT, as well as comparisons with human outputs, using ACUTE-Eval. All scores are statistically significant (binomial test, p
['Lose Percentage', '[EMPTY]', 'Win Percentage dinan2018wizard', 'Win Percentage Image+Seq2Seq', 'Win Percentage Image+Seq2Seq', 'Win Percentage Human']
[['Lose Percentage', '[EMPTY]', '[EMPTY]', 'Nucleus', 'Beam', '[EMPTY]'], ['Lose Percentage', 'dinan2018wizard', '-', '62.3', '64.1', '75.8'], ['Lose Percentage', 'Image+Seq2Seq Nucleus', '37.7', '-', '-', '72.8'], ['Lose Percentage', 'Image+Seq2Seq Beam', '35.9', '-', '-', '60.5'], ['[EMPTY]', 'Human', '24.2', '27.2', '39.5', '-']]
As with our experiments regarding automatic metrics, we additionally explored nucleus sampling, with parameter p=0.7, and compared to both the baseline models as well as human outputs. The findings are similar to the pairwise ACUTE-Eval results in the main paper.
Cross-lingual Entity Alignment viaJoint Attribute-Preserving Embedding
1708.05045
(a)
['DBP15KZH-EN', 'DBP15KZH-EN', 'ZH→EN [ITALIC] Hits@1', 'ZH→EN [ITALIC] Hits@10', 'ZH→EN [ITALIC] Hits@50', 'ZH→EN [ITALIC] Mean', 'EN→ZH [ITALIC] Hits@1', 'EN→ZH [ITALIC] Hits@10', 'EN→ZH [ITALIC] Hits@50', 'EN→ZH [ITALIC] Mean']
[['JE', 'JE', '21.27', '42.77', '56.74', '766', '19.52', '39.36', '53.25', '841'], ['MTransE', 'MTransE', '30.83', '61.41', '79.12', '154', '24.78', '52.42', '70.45', '208'], ['JAPE', 'SE w/o neg.', '38.34', '68.86', '84.07', '103', '31.66', '59.37', '76.33', '147'], ['JAPE', 'SE', '39.78', '72.35', '87.12', '84', '32.29', '62.79', '80.55', '109'], ['JAPE', 'SE+AE', '[BOLD] 41.18', '[BOLD] 74.46', '[BOLD] 88.90', '[BOLD] 64', '[BOLD] 40.15', '[BOLD] 71.05', '[BOLD] 86.18', '[BOLD] 73']]
We used a certain proportion of the gold standard alignments as the seed alignment and left the remainder as testing data, i.e., the latent aligned entities to discover. The variation of Hits@k with different proportions will be shown shortly. For relationships and attributes, we simply extracted the property pairs with exactly the same labels, which account for only a small portion of the seed alignment. Besides, JE does not place a mandatory constraint on the length of vectors; instead, it only minimizes ∥v∥₂² − 1 to restrain the vector length, which brings an adverse effect. For MTransE, it models the structures of the KBs in different vector spaces, and information loss happens when learning the translation between the vector spaces. We found that involving negative triples in structure embedding reduces the random distribution of entities, and involving attribute embedding as a constraint further refines the distribution of entities. These two improvements demonstrate that a more systematic distribution of entities benefits the cross-lingual entity alignment task.
Cross-lingual Entity Alignment viaJoint Attribute-Preserving Embedding
1708.05045
(b)
['DBP15KJA-EN', 'DBP15KJA-EN', 'JA→EN [ITALIC] Hits@1', 'JA→EN [ITALIC] Hits@10', 'JA→EN [ITALIC] Hits@50', 'JA→EN [ITALIC] Mean', 'EN→JA [ITALIC] Hits@1', 'EN→JA [ITALIC] Hits@10', 'EN→JA [ITALIC] Hits@50', 'EN→JA [ITALIC] Mean']
[['JE', 'JE', '18.92', '39.97', '54.24', '832', '17.80', '38.44', '52.48', '864'], ['MTransE', 'MTransE', '27.86', '57.45', '75.94', '159', '23.72', '49.92', '67.93', '220'], ['JAPE', 'SE w/o neg.', '33.10', '63.90', '80.80', '114', '29.71', '56.28', '73.84', '156'], ['JAPE', 'SE', '34.27', '66.39', '83.61', '104', '31.40', '60.80', '78.51', '127'], ['JAPE', 'SE+AE', '[BOLD] 36.25', '[BOLD] 68.50', '[BOLD] 85.35', '[BOLD] 99', '[BOLD] 38.37', '[BOLD] 67.27', '[BOLD] 82.65', '[BOLD] 113']]
We used a certain proportion of the gold standard alignments as the seed alignment and left the remainder as testing data, i.e., the latent aligned entities to discover. The variation of Hits@k with different proportions will be shown shortly. For relationships and attributes, we simply extracted the property pairs with exactly the same labels, which account for only a small portion of the seed alignment. Besides, JE does not place a mandatory constraint on the length of vectors; instead, it only minimizes ∥v∥₂² − 1 to restrain the vector length, which brings an adverse effect. For MTransE, it models the structures of the KBs in different vector spaces, and information loss happens when learning the translation between the vector spaces. We found that involving negative triples in structure embedding reduces the random distribution of entities, and involving attribute embedding as a constraint further refines the distribution of entities. These two improvements demonstrate that a more systematic distribution of entities benefits the cross-lingual entity alignment task.
Cross-lingual Entity Alignment viaJoint Attribute-Preserving Embedding
1708.05045
(c)
['DBP15KFR-EN', 'DBP15KFR-EN', 'FR→EN [ITALIC] Hits@1', 'FR→EN [ITALIC] Hits@10', 'FR→EN [ITALIC] Hits@50', 'FR→EN [ITALIC] Mean', 'EN→FR [ITALIC] Hits@1', 'EN→FR [ITALIC] Hits@10', 'EN→FR [ITALIC] Hits@50', 'EN→FR [ITALIC] Mean']
[['JE', 'JE', '15.38', '38.84', '56.50', '574', '14.61', '37.25', '54.01', '628'], ['MTransE', 'MTransE', '24.41', '55.55', '74.41', '139', '21.26', '50.60', '69.93', '156'], ['JAPE', 'SE w/o neg.', '29.55', '62.18', '79.36', '123', '25.40', '56.55', '74.96', '133'], ['JAPE', 'SE', '29.63', '64.55', '81.90', '95', '26.55', '60.30', '78.71', '107'], ['JAPE', 'SE+AE', '[BOLD] 32.39', '[BOLD] 66.68', '[BOLD] 83.19', '[BOLD] 92', '[BOLD] 32.97', '[BOLD] 65.91', '[BOLD] 82.38', '[BOLD] 97']]
We used a certain proportion of the gold standards as seed alignment and left the remainder as test data, i.e., the latent aligned entities to discover. The variation of Hits@k with different proportions will be shown shortly. For relationships and attributes, we simply extracted the property pairs with exactly the same labels, which account for only a small portion of the seed alignment. Besides, JE does not place a mandatory constraint on the length of vectors; instead, it only minimizes ∥v∥₂²−1 to restrain vector length, which brings an adverse effect. MTransE models the structures of the KBs in different vector spaces, and information loss occurs when learning the translation between those spaces. We found that involving negative triples in structure embedding reduces the random distribution of entities, and involving attribute embedding as a constraint further refines the distribution of entities. These two improvements demonstrate that a systematic distribution of entities benefits the cross-lingual entity alignment task.
Cross-lingual Entity Alignment via Joint Attribute-Preserving Embedding
1708.05045
(a)
['DBP100K', 'ZH→EN', 'EN→ZH']
[['JE', '16.95', '16.63'], ['MTransE', '34.31', '29.18'], ['JAPE', '[BOLD] 41.75', '[BOLD] 40.13']]
We built three larger datasets (DBP100K) by choosing 100 thousand ILLs between English and each of Chinese, Japanese, and French, in the same way as DBP15K. The threshold on relationship triples used to select ILLs was set to 2. Each dataset contains several hundred thousand entities and several million triples. We set d=100, β=0.1 and keep the other parameters the same as for DBP15K. For JE, training takes 2,000 epochs as reported in its paper. Due to lack of space, only Hits@10 is reported. We found that similar results and conclusions hold for DBP100K as for DBP15K, which indicates the scalability and stability of JAPE.
Cross-lingual Entity Alignment via Joint Attribute-Preserving Embedding
1708.05045
(b)
['JA→EN', 'EN→JA']
[['21.17', '20.98'], ['33.93', '27.22'], ['[BOLD] 42.00', '[BOLD] 39.30']]
We built three larger datasets (DBP100K) by choosing 100 thousand ILLs between English and each of Chinese, Japanese, and French, in the same way as DBP15K. The threshold on relationship triples used to select ILLs was set to 2. Each dataset contains several hundred thousand entities and several million triples. We set d=100, β=0.1 and keep the other parameters the same as for DBP15K. For JE, training takes 2,000 epochs as reported in its paper. Due to lack of space, only Hits@10 is reported. We found that similar results and conclusions hold for DBP100K as for DBP15K, which indicates the scalability and stability of JAPE.
Cross-lingual Entity Alignment via Joint Attribute-Preserving Embedding
1708.05045
(c)
['FR→EN', 'EN→FR']
[['22.98', '22.63'], ['44.84', '39.19'], ['[BOLD] 53.64', '[BOLD] 50.51']]
We built three larger datasets (DBP100K) by choosing 100 thousand ILLs between English and each of Chinese, Japanese, and French, in the same way as DBP15K. The threshold on relationship triples used to select ILLs was set to 2. Each dataset contains several hundred thousand entities and several million triples. We set d=100, β=0.1 and keep the other parameters the same as for DBP15K. For JE, training takes 2,000 epochs as reported in its paper. Due to lack of space, only Hits@10 is reported. We found that similar results and conclusions hold for DBP100K as for DBP15K, which indicates the scalability and stability of JAPE.
Graph-Community Detection for Cross-Document Topic Segment Relationship Identification
1606.04081
Table 9: Summary of the results in the 4S corpus. In bold are the highest scores obtained.
['[EMPTY]', '[ITALIC] [BOLD] A [BOLD] R [BOLD] I', '[ITALIC] [BOLD] F [BOLD] 1', '[ITALIC] [BOLD] A [BOLD] c [BOLD] c']
[['Spectral Clustering', '[BOLD] 0.27', '[BOLD] 0.41', '0.47'], ['Topic Models', '0.17', '0.30', '0.46'], ['Louvain', '0.26', '0.40', '[BOLD] 0.48']]
After having carried out all experiments with the 4S corpus, this section provides a discussion of the obtained results. Contrary to the AVL corpus, results were close between two techniques: Spectral Clustering and Louvain. The former obtained a better overall performance, since it had the highest scores in two of the metrics.
Graph-Community Detection for Cross-Document Topic Segment Relationship Identification
1606.04081
Table 3: AVL corpus used in the experiments.
['[BOLD] Topic', '[BOLD] Segments', '[BOLD] #Words', '[BOLD] #Vocab']
[['[BOLD] BST', '7', '1822', '284'], ['[BOLD] Tree Height', '5', '1800', '338'], ['[BOLD] Tree Rotation', '13', '3762', '538'], ['[BOLD] Tree Balance', '13', '3670', '483'], ['[BOLD] AVL Rebalance', '11', '8142', '700']]
The segmented AVL corpus was then annotated with equivalence relationships. This annotation was done in the same spirit as for the 4S corpus, using a user-centered approach. The difficulty in this task lies in the presence of multiple topics in a single segment. The annotation process consisted of going through each document individually. For the first document, the segments were tagged with the topic they discussed. On subsequent documents, it was assessed whether a segment should be assigned an existing tag or whether a new tag should be created. After the annotation process, a total of 15 different relationships were found in the set of 86 segments. The experiments only used the topics that appeared in the majority of the documents (49 segments).
Graph-Community Detection for Cross-Document Topic Segment Relationship Identification
1606.04081
Table 4: Results of the clustering algorithms in the AVL corpus. Best results are in bold.
['[BOLD] Clustering Algorithm', '[ITALIC] [BOLD] A [BOLD] R [BOLD] I', '[ITALIC] [BOLD] F [BOLD] 1', '[ITALIC] [BOLD] A [BOLD] c [BOLD] c']
[['k-means', '0.011', '0.34', '0.31'], ['Agglomerative', '0.11', '0.32', '[BOLD] 0.52'], ['DBSCAN', '0.03', '0.26', '0.27'], ['Mean Shift', '0.009', '0.35', '0.29'], ['Spectral', '[BOLD] 0.15', '[BOLD] 0.36', '0.41'], ['NMF', '0.046', '0.22', '0.45']]
Results show that most of the techniques obtained low scores on all evaluation metrics. The only exceptions were Agglomerative and Spectral clustering. It should be noted that the evaluation metrics do not all agree on the best technique: Spectral clustering had higher scores in ARI and F1, whereas Agglomerative clustering had the best Acc score. There is also no strict correlation between the evaluation metrics; for example, Mean Shift obtained the lowest ARI but would be considered one of the best-performing techniques in F1. In this context, we set Spectral clustering as the baseline, since it was the best on two of the evaluation metrics and still performed well on the third.
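For reference, the ARI and pairwise F1 reported above can be computed along the following lines; the toy gold/predicted labelings are made up, ARI comes from scikit-learn, and the pairwise-F1 definition (a pair of segments counts as positive when both carry the same label) is one common convention that may differ in detail from the paper's scoring.

```python
from itertools import combinations
from sklearn.metrics import adjusted_rand_score

def pairwise_f1(gold, pred):
    """A segment pair counts as positive when both segments carry the same label."""
    tp = fp = fn = 0
    for i, j in combinations(range(len(gold)), 2):
        same_gold, same_pred = gold[i] == gold[j], pred[i] == pred[j]
        tp += same_gold and same_pred
        fp += (not same_gold) and same_pred
        fn += same_gold and (not same_pred)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = [0, 0, 1, 1, 2, 2]   # made-up gold relationship labels per segment
pred = [1, 1, 0, 0, 2, 0]   # made-up cluster assignments
print(adjusted_rand_score(gold, pred), pairwise_f1(gold, pred))
```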
Graph-Community Detection for Cross-Document Topic Segment Relationship Identification
1606.04081
Table 5: Best results obtained with graph-community detection algorithms. In bold are the best results obtained.
['[BOLD] Algorithm', '[BOLD] Weighting', '[BOLD] Scoring Function', '[ITALIC] [BOLD] A [BOLD] R [BOLD] I', '[ITALIC] [BOLD] F [BOLD] 1', '[ITALIC] [BOLD] A [BOLD] c [BOLD] c']
[['LP', '-', '[ITALIC] scorec', '0.0', '0.35', '0.27'], ['CNM', '-', '[ITALIC] scoretfif', '0.05', '0.31', '0.42'], ['Louvain', 'Best [ITALIC] tf-idf', '[ITALIC] scorec', '0.12', '0.31', '0.42'], ['Walktraps', 'Count + Avg [ITALIC] tf-idf', '[ITALIC] scorec', '[BOLD] 0.19', '[BOLD] 0.39', '[BOLD] 0.48'], ['Leading Eigenvector', 'Count + Best [ITALIC] tf-idf', '[ITALIC] scoretfif', '0.05', '0.35', '0.38'], ['Bigclam', '-', '[ITALIC] scoreseg', '0.06', '0.1', '0.25']]
It should be noted that some of the techniques are not sensitive to edge weight. Another aspect to consider is that there are words in the documents that should not be taken into account, since they are either too common or too rare to be representative of a subtopic in a document. In this experiment we also adopted this approach by setting for each segment a cutoff at the top-100 words with the highest tf-idf score. It is possible to observe that the Walktraps algorithm performed better in all evaluation metrics.
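A rough sketch of the weighted co-occurrence-graph pipeline with Walktrap community detection (via python-igraph) is given below; the toy segments, the fully connected graph, the shared-term "Count" weighting, and the small epsilon that keeps edge weights strictly positive are illustrative choices rather than the paper's exact construction.

```python
# pip install python-igraph scikit-learn
import igraph as ig
from sklearn.feature_extraction.text import TfidfVectorizer

segments = [
    "binary search tree insert node left right child",
    "tree rotation left rotation right rotation balance",
    "avl tree rebalance rotation height update",
    "tree height depth leaf node count",
]

# tf-idf representation of each segment (a top-k cutoff could be applied here)
X = TfidfVectorizer().fit_transform(segments).toarray()

g = ig.Graph.Full(len(segments))
weights = []
for e in g.es:
    i, j = e.tuple
    shared = int(((X[i] > 0) & (X[j] > 0)).sum())  # "Count": number of shared terms
    weights.append(shared + 1e-3)                  # keep edge weights strictly positive
g.es["weight"] = weights

clusters = g.community_walktrap(weights="weight").as_clustering()
print(clusters.membership)   # community id per segment
```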
Graph-Community Detection for Cross-Document Topic Segment Relationship Identification
1606.04081
Table 6: Summary of the results in the document relationship identification task.
['[BOLD] Algorithm', '[ITALIC] [BOLD] A [BOLD] R [BOLD] I', '[ITALIC] [BOLD] F [BOLD] 1', '[ITALIC] [BOLD] A [BOLD] c [BOLD] c']
[['Spectral Clustering', '0.15', '0.36', '0.41'], ['Topic Models', '0.10', '0.31', '0.44'], ['Walktraps', '[BOLD] 0.30', '[BOLD] 0.46', '[BOLD] 0.52']]
The Walktraps approach provided the best performance in the document relationship identification task, obtaining better results in all evaluation metrics used. The most important conclusion from these results is that graph-community detection-based techniques are more suitable for the relationship identification task. The difference in the evaluation metrics is substantial, with a value twice as high in ARI, and a 10% increase in F1 and Acc. The problem with clustering techniques is that they rely on a similarity space, which is not appropriate for the nature of this task. Given the results obtained, we claim that the document relationship identification task should be done by exploring network properties of a co-occurrence graph using a graph-community detection approach.
Graph-Community Detection for Cross-Document Topic Segment Relationship Identification
1606.04081
Table 7: Results of the clustering algorithms in the 4S corpus. Best results are in bold.
['[BOLD] Clustering Algorithm', '[ITALIC] [BOLD] A [BOLD] R [BOLD] I', '[ITALIC] [BOLD] F [BOLD] 1', '[ITALIC] [BOLD] A [BOLD] c [BOLD] c']
[['k-means', '0.04', '0.33', '0.37'], ['Agglomerative', '0.16', '0.38', '0.47'], ['DBSCAN', '0.08', '0.32', '0.38'], ['Mean Shift', '0.03', '0.32', '0.40'], ['Spectral', '[BOLD] 0.27', '[BOLD] 0.41', '[BOLD] 0.47'], ['NMF', '0.21', '0.34', '0.43']]
Spectral clustering again obtained the best performance. The heat map analyses of the 4S corpus similarity graph led to conclusions similar to those for the AVL corpus: many segments are wrongly perceived as similar by the metrics.
Graph-Community Detection for Cross-Document Topic Segment Relationship Identification
1606.04081
Table 8: Best results obtained in the 4S corpus. In bold are the highest scores obtained.
['[BOLD] Algorithm', '[BOLD] Weighting', '[BOLD] Scoring Function', '[ITALIC] [BOLD] A [BOLD] R [BOLD] I', '[ITALIC] [BOLD] F [BOLD] 1', '[ITALIC] [BOLD] A [BOLD] c [BOLD] c']
[['LP', '-', '[ITALIC] scoretfif', '0.06', '0.17', '0.26'], ['CNM', '-', '[ITALIC] scoreseg', '0.06', '0.25', '0.35'], ['Louvain', 'Count', '[ITALIC] scoreseg', '[BOLD] 0.21', '[BOLD] 0.36', '[BOLD] 0.42'], ['Walktraps', 'Count + Best tf-idf', '[ITALIC] scoreseg', '0.15', '0.30', '0.40'], ['Bigclam', '-', '[ITALIC] scoreseg', '0.05', '0.09', '0.19']]
In these experiments the Louvain technique performed better on all metrics. It should be noted that the best weighting scheme and scoring function were Count and Best tf-idf. These two combinations differ from one another in that one just uses raw counts whereas the other takes into account an importance value of the words. The fact that different scoring functions worked better in different scenarios shows that the corpora have particular characteristics and, thus, different ways of achieving a better model of the data are necessary.
Character-based NMT with Transformer
1911.04997
5,000
['train \\test', 'clean', 'all', 'delete', 'insert', 'replace', 'switch', 'avg']
[['clean', '[BOLD] 27.4', '17.6', '18.0', '17.5', '17.2', '17.4', '19.2'], ['all', '25.9', '[BOLD] 25.4', '24.7', '26.0', '24.9', '25.5', '[BOLD] 25.4'], ['delete', '26.1', '22.1', '[BOLD] 25.5', '20.1', '19.8', '22.9', '22.8'], ['insert', '26.2', '23.7', '21.9', '[BOLD] 26.6', '23.4', '22.6', '24.1'], ['replace', '26.0', '24.1', '22.1', '25.5', '[BOLD] 25.4', '22.5', '24.3'], ['switch', '26.3', '22.4', '22.6', '20.9', '19.9', '[BOLD] 26.2', '23.1']]
Adding noise helps. Training on a similar type of noisy data improves performance for all vocabularies. By training on similar kinds of noise in the training data, we are able to robustify BPE models to the same level as character-level models without sacrificing too much performance on the clean test set. Effect on clean data. However, in the case of BPE 30 000, training on noisy data significantly boosted performance (e.g., an improvement of 6 BLEU for DE-EN and 1.7 for EN-DE when training with delete and testing on clean data). We hypothesize that the increased diversity of tokens during training (due to the presence of noise) acts as a regularizer, boosting performance on the test set.
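The delete/insert/replace/switch perturbations discussed above can be simulated with a few lines of Python; the 10% default noise rate, the lowercase alphabet, and the per-word sampling scheme are assumptions for illustration, not the paper's exact noise model.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def perturb(word, kind, rng):
    """Apply one synthetic character-level error of the given kind to a word."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word))
    if kind == "delete":
        return word[:i] + word[i + 1:]
    if kind == "insert":
        return word[:i] + rng.choice(ALPHABET) + word[i:]
    if kind == "replace":
        return word[:i] + rng.choice(ALPHABET) + word[i + 1:]
    if kind == "switch":
        j = min(i + 1, len(word) - 1)
        chars = list(word)
        chars[i], chars[j] = chars[j], chars[i]
        return "".join(chars)
    return word

def add_noise(sentence, kind, prob=0.1, seed=0):
    rng = random.Random(seed)
    return " ".join(perturb(w, kind, rng) if rng.random() < prob else w
                    for w in sentence.split())

print(add_noise("the quick brown fox jumps over the lazy dog", "switch", prob=0.5))
```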
Character-based NMT with Transformer
1911.04997
Table 2: Similarity metrics between test sets and training sets.
['Dataset', '# Sents', 'DE-EN % Unseen', 'DE-EN PPL', 'EN-DE % Unseen', 'EN-DE PPL', 'DE-EN (high) % Unseen', 'DE-EN (high) PPL']
[['IWSLT14', '6\u2009750', '4.4', '583', '2.2', '282', '2.0', '740'], ['WMT-IT', '2\u2009000', '14.4', '2\u2009540', '13.2', '2\u2009322', '5.8', '996'], ['WMT-Bio.', '321', '20.0', '5\u2009540', '12.1', '3\u2009035', '9.2', '3\u2009404'], ['newstest 2016', '2\u2009999', '12.7', '2,712', '9.0', '1,659', '4.5', '1\u2009703'], ['Europarl', '3\u2009000', '9.0', '1,765', '4.4', '771', '0', '10'], ['commoncrawl', '3\u2009000', '17.8', '5\u2009024', '12.6', '2\u2009711', '0', '9'], ['avg', '3\u2009011', '13.0', '3\u2009022', '8.9', '1,797', '3.6', '1\u2009144']]
“% Unseen” is the percentage of words in the test set that are not present in the training corpus. “PPL” is the perplexity of the test set under a language model trained on the training data. DE-EN low resource. Character-level models are better on all out-of-domain datasets except Europarl. This suggests that character-level models outperform BPE when evaluated on data sufficiently different from the training domain in this low-resource setting. DE-EN high resource. Character-level models are now only better when testing on the WMT-Biomedical test set. For all other test sets, BPE 30 000 leads to the best BLEU scores. EN-DE low resource. We see similar performance for in- and out-of-domain data; BPE models that do well on the in-domain test set are still better on out-of-domain test sets. A possible explanation is the lower proportion of unseen words compared to German, with words appearing more frequently in the training corpus.
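Computing the "% Unseen" statistic is straightforward, as in the sketch below with made-up German sentences; the perplexity column additionally requires a language model trained on the training data, which is omitted here.

```python
def unseen_word_rate(train_sentences, test_sentences):
    """Percentage of test-set tokens whose word type never occurs in training."""
    train_vocab = {w for s in train_sentences for w in s.split()}
    test_tokens = [w for s in test_sentences for w in s.split()]
    unseen = sum(w not in train_vocab for w in test_tokens)
    return 100.0 * unseen / len(test_tokens)

train = ["wir fahren morgen nach berlin", "das wetter ist heute gut"]
test = ["das klinische protokoll ist gut"]   # out-of-domain-ish sentence
print(f"% unseen: {unseen_word_rate(train, test):.1f}")
```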
Character-based NMT with Transformer
1911.04997
Table 3: Results for the low resource setting. “PreN + T” refers to an architecture with layer normalization before each sub-layer and transparent attention. Best results for each layer depth are shown in bold.
['enc', 'Char PreN', 'Char PreN+T', '5\u2009000 PreN', '5\u2009000 PreN+T']
[['6', '33.4', '33.2', '[BOLD] 34.6', '[BOLD] 34.6'], ['12', '33.8', '34.5', '[BOLD] 34.8', '[BOLD] 34.8'], ['16', '33.5', '34.5', '[BOLD] 35.2', '34.9'], ['20', '34.4', '34.7', '[BOLD] 35.3', '34.9'], ['24', '34.1', '34.7', '35.1', '[BOLD] 35.4'], ['28', '34.4', '34.3', '[BOLD] 35', '[BOLD] 35'], ['32', '34.1', '34.5', '34.7', '[BOLD] 35.2']]
While adding transparent attention is beneficial at almost all depths for the character-level models, it gives mixed results for the BPE 5 000 model. Hence, by training deeper models we are able to marginally improve performance for both vocabularies. For character-level models we improve by 1 BLEU point (from 33.7 to 34.7). For BPE 5 000, the gain is a more modest 0.4 BLEU points (from 35 to 35.4). We are able to narrow but not close the gap between character and BPE.
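As a sketch of the transparent-attention idea referenced by "PreN+T", the PyTorch module below learns a softmax-weighted combination of all encoder-layer outputs, one combination per decoder layer; the tensor shapes, the inclusion of the embedding layer in the mix, and the class name TransparentAttention are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class TransparentAttention(nn.Module):
    """Learned softmax-weighted combination of all encoder-layer outputs,
    producing one combined encoder representation per decoder layer."""
    def __init__(self, n_enc_layers, n_dec_layers):
        super().__init__()
        # one logit per (decoder layer, encoder layer incl. embeddings)
        self.logits = nn.Parameter(torch.zeros(n_dec_layers, n_enc_layers + 1))

    def forward(self, enc_states):
        # enc_states: list of (batch, src_len, d_model) tensors
        stacked = torch.stack(enc_states, dim=0)       # (L+1, B, S, D)
        weights = torch.softmax(self.logits, dim=-1)   # (dec_layers, L+1)
        return torch.einsum("dl,lbsd->dbsd", weights, stacked)

enc = [torch.randn(2, 7, 16) for _ in range(13)]  # embeddings + 12 encoder layers
combined = TransparentAttention(n_enc_layers=12, n_dec_layers=6)(enc)
print(combined.shape)  # torch.Size([6, 2, 7, 16])
```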
Character-based NMT with Transformer
1911.04997
Table 4: Results for the high resource setting.
['enc', 'Char PreN', 'Char PreN + T', '5\u2009000 PreN', '5\u2009000 PreN+T', '30\u2009000 PreN', '30\u2009000 PreN+T']
[['6', '36.3', '36.5', '36.2', '36.4', '[BOLD] 37.2', '36.9'], ['12', '36.8', '[BOLD] 37.5', '37.1', '36.9', '37.4', '[BOLD] 37.5'], ['18', '37.3', '37.4', '37.6', '37.7', '[BOLD] 37.9', '37.8'], ['24', '37.7', '37.7', '37.6', '37.6', '[BOLD] 37.8', '[BOLD] 37.8'], ['32', '37.2', '37.4', '37.9', '[BOLD] 38', '[BOLD] 38', '37.9']]
We no longer train models with post-layer normalization and restrict ourselves to pre-layer normalization and transparent attention. Here, we also experiment with the BPE 30 000 vocabulary. Again we see an improvement of 1-2 BLEU points with increasing depth when going beyond 6 encoder layers. Transparent attention seems to help consistently for character-level models but barely does anything for the two BPE models. Further, with increased depth, BPE 5 000 and 30 000 perform similarly, in contrast to shallow models where there is a 1 BLEU difference. However, character-level models are still slightly worse than the BPE models, with a maximum score of 37.7 compared with 38 for the BPE models.
Character-based NMT with Transformer
1911.04997
Character
['train \\test', 'clean', 'all', 'delete', 'insert', 'replace', 'switch', 'avg']
[['clean', '[BOLD] 34.1', '27.6', '27.9', '27.7', '25.4', '30.6', '28.9'], ['all', '32.7', '[BOLD] 33.1', '30.4', '32.1', '28.1', '32.4', '[BOLD] 31.5'], ['delete', '33.0', '28.1', '[BOLD] 30.7', '25.9', '24.6', '30.6', '28.8'], ['insert', '32.7', '28.8', '25.5', '[BOLD] 32.4', '25.7', '30.2', '29.2'], ['replace', '32.9', '29.7', '28.2', '30.3', '[BOLD] 31.1', '30.2', '30.4'], ['switch', '33.1', '28.4', '26.8', '28.0', '24.8', '[BOLD] 32.9', '29.0']]
Adding noise helps. Training on a similar type of noisy data improves performance for all vocabularies. By training on similar kinds of noise in the training data, we are able to robustify BPE models to the same level as character-level models without sacrificing too much performance on the clean test set. Effect on clean data. However, in the case of BPE 30 000, training on noisy data significantly boosted performance (e.g., an improvement of 6 BLEU for DE-EN and 1.7 for EN-DE when training with delete and testing on clean data). We hypothesize that the increased diversity of tokens during training (due to the presence of noise) acts as a regularizer, boosting performance on the test set.
Character-based NMT with Transformer
1911.04997
5,000
['train \\test', 'clean', 'all', 'delete', 'insert', 'replace', 'switch', 'avg']
[['clean', '[BOLD] 35.0', '23.5', '24.9', '23.5', '22.7', '25.0', '25.8'], ['all', '34.4', '[BOLD] 32.9', '32.0', '33.4', '31.6', '33.6', '[BOLD] 33.0'], ['delete', '34.3', '29.6', '[BOLD] 32.7', '27.3', '25.5', '31.1', '30.1'], ['insert', '34.4', '30.9', '29.1', '[BOLD] 33.8', '30.4', '30.8', '31.6'], ['replace', '34.0', '30.7', '29.2', '32.1', '[BOLD] 31.9', '29.9', '31.3'], ['switch', '34.1', '29.3', '29.2', '28.0', '25.2', '[BOLD] 33.7', '29.9']]
Adding noise helps. Training on a similar type of noisy data improves performance for all vocabularies. By training on similar kinds of noise in the training data, we are able to robustify BPE models to the same level as character-level models without sacrificing too much performance on the clean test set. Effect on clean data. However, in the case of BPE 30 000, training on noisy data significantly boosted performance (e.g., an improvement of 6 BLEU for DE-EN and 1.7 for EN-DE when training with delete and testing on clean data). We hypothesize that the increased diversity of tokens during training (due to the presence of noise) acts as a regularizer, boosting performance on the test set.
Character-based NMT with Transformer
1911.04997
30,000
['train \\test', 'clean', 'all', 'delete', 'insert', 'replace', 'switch', 'avg']
[['clean', '28.2', '17.3', '18.2', '17.0', '17.9', '19.0', '19.6'], ['all', '34.0', '[BOLD] 32.0', '31.5', '32.3', '30.2', '32.9', '[BOLD] 32.2'], ['delete', '[BOLD] 34.2', '28.1', '[BOLD] 32.5', '25.5', '24.2', '29.3', '29.0'], ['insert', '[BOLD] 34.2', '30.4', '28.8', '[BOLD] 32.9', '29.6', '29.7', '30.9'], ['replace', '33.8', '30.4', '28.8', '31.5', '[BOLD] 30.9', '29.6', '30.8'], ['switch', '[BOLD] 34.2', '28.0', '28.2', '25.1', '23.2', '[BOLD] 33.7', '28.7']]
Adding noise helps. Training on a similar type of noisy data improves performance for all vocabularies. By training on similar kinds of noise in the training data, we are able to robustify BPE models to the same level as character-level models without sacrificing too much performance on the clean test set. Effect on clean data. However, in the case of BPE 30 000, training on noisy data significantly boosted performance (e.g., an improvement of 6 BLEU for DE-EN and 1.7 for EN-DE when training with delete and testing on clean data). We hypothesize that the increased diversity of tokens during training (due to the presence of noise) acts as a regularizer, boosting performance on the test set.
Character-based NMT with Transformer
1911.04997
Character
['train \\test', 'clean', 'all', 'delete', 'insert', 'replace', 'switch', 'avg']
[['clean', '[BOLD] 26.9', '21.2', '21.3', '21.4', '19.3', '22.5', '22.1'], ['all', '25.7', '[BOLD] 25.3', '24.1', '26.2', '24.2', '25.5', '[BOLD] 25.2'], ['delete', '25.3', '22.0', '[BOLD] 24.9', '20.2', '19.8', '23.2', '22.6'], ['insert', '25.9', '22.8', '20.7', '[BOLD] 26.5', '21.1', '22.7', '23.3'], ['replace', '26.0', '23.5', '21.9', '24.3', '[BOLD] 25.3', '22.8', '24.0'], ['switch', '25.6', '22.1', '21.9', '21.5', '19.5', '[BOLD] 25.9', '22.8']]
Adding noise helps. Training on a similar type of noisy data improves performance for all vocabularies. By training on similar kinds of noise in the training data, we are able to robustify BPE models to the same level as character-level models without sacrificing too much performance on the clean test set. Effect on clean data. However, in the case of BPE 30 000, training on noisy data significantly boosted performance (e.g., an improvement of 6 BLEU for DE-EN and 1.7 for EN-DE when training with delete and testing on clean data). We hypothesize that the increased diversity of tokens during training (due to the presence of noise) acts as a regularizer, boosting performance on the test set.
Character-based NMT with Transformer
1911.04997
30,000
['train \\test', 'clean', 'all', 'delete', 'insert', 'replace', 'switch', 'avg']
[['clean', '24.5', '14.9', '15.4', '14.3', '14.5', '14.8', '16.4'], ['all', '26.2', '[BOLD] 25.2', '25.1', '25.7', '24.6', '25.7', '[BOLD] 25.4'], ['delete', '26.2', '21.6', '[BOLD] 25.6', '19.7', '19.3', '22.1', '22.4'], ['insert', '26.3', '23.8', '21.9', '[BOLD] 26.2', '23.7', '22.4', '24.1'], ['replace', '26.2', '23.8', '22.1', '24.9', '[BOLD] 25.2', '22.4', '24.1'], ['switch', '[BOLD] 26.6', '21.7', '21.5', '19.5', '19.0', '[BOLD] 26.4', '22.5']]
Adding noise helps. Training on a similar type of noisy data improves performance for all vocabularies. By training on similar kinds of noise in the training data, we are able to robustify BPE models to the same level as character-level models without sacrificing too much performance on the clean test set. Effect on clean data. However, in the case of BPE 30 000, training on noisy data significantly boosted performance (e.g., an improvement of 6 BLEU for DE-EN and 1.7 for EN-DE when training with delete and testing on clean data). We hypothesize that the increased diversity of tokens during training (due to the presence of noise) acts as a regularizer, boosting performance on the test set.
Two-pass Discourse Segmentation with Pairing and Global Features
1407.8215
Table 5: The result of discourse parsing using different segmentation. The performance is evaluated on intra-sentential, multi-sentential, and text level separately, using the unlabeled and labeled F-score.
['Level', 'Segmentation', 'Span', 'Nuc', 'Rel']
[['Intra', 'Joty et al.', '78.7', '70.8', '60.8'], ['Intra', 'Ours', '[BOLD] 85.1', '[BOLD] 77.5', '[BOLD] 66.8'], ['Intra', 'Manual', '96.3', '87.4', '75.1'], ['Multi', 'Joty et al.', '[BOLD] 71.1', '49.0', '33.2'], ['Multi', 'Ours', '[BOLD] 71.1', '[BOLD] 49.6', '[BOLD] 33.7'], ['Multi', 'Manual', '72.6', '50.3', '34.7'], ['Text', 'Joty et al.', '75.4', '61.7', '49.1'], ['Text', 'Ours', '[BOLD] 78.7', '[BOLD] 64.8', '[BOLD] 51.8'], ['Text', 'Manual', '85.7', '71.0', '58.2']]
As can be seen, on the intra-sentential level, the influence of segmentation is significant. Evaluated on Span, Nuclearity, and Relation, using our own segmentation results in a 10% difference in F-score (p
Two-pass Discourse Segmentation with Pairing and Global Features
1407.8215
Table 2: Characteristics of the training and the test set in RST-DT.
['[EMPTY]', '[BOLD] Training', '[BOLD] Test']
[['# of documents', '347', '38'], ['# of sentences', '7,455', '992'], ['# of EDUs', '18,765', '2,346'], ['# of in-sentence boundaries', '11,310', '1,354']]
We first study how our proposed two-pass discourse segmenter based on pairing features performs against existing segmentation models. In this experiment, we train our linear-chain CRF models on the RST Discourse Treebank (RST-DT) By convention, the corpus is partitioned into a training set of 347 documents and a test set of 38 documents.
Two-pass Discourse Segmentation with Pairing and Global Features
1407.8215
Table 6: The effect of removing pairing features (−p), removing global features (−g), and removing both (−pg), in comparison with the full model in the first section, across different segmentation frameworks. CRF stands for our standard two-pass segmentation models based on linear-chain CRFs, while LR and SVM stand for two different classifiers in the framework of independent binary classification.
['[BOLD] Model', '[BOLD] Precision', '[BOLD] Recall', '[ITALIC] F1 [BOLD] score']
[['CRF', '92.8', '92.3', '92.6'], ['LR', '[BOLD] 92.9', '92.2', '92.5'], ['SVM', '92.6', '[BOLD] 92.8', '[BOLD] 92.7'], ['CRF− [ITALIC] p', '91.3', '91.1', '91.2'], ['LR− [ITALIC] p', '91.0', '90.5', '90.7'], ['SVM− [ITALIC] p', '90.4', '92.5', '91.4'], ['CRF− [ITALIC] g', '92.5', '91.0', '91.7'], ['LR− [ITALIC] g', '91.7', '91.0', '91.3'], ['SVM− [ITALIC] g', '84.7', '94.7', '89.4'], ['CRF− [ITALIC] pg', '87.0', '82.5', '84.7'], ['LR− [ITALIC] pg', '86.9', '83.0', '84.9'], ['SVM− [ITALIC] pg', '70.5', '94.6', '80.8']]
The first section lists the performance of our full model in different segmentation frameworks. As can be seen, our full models perform similarly across frameworks: the absolute difference in F1 is less than 0.2% and insignificant. This is consistent with Hernault et al. (2010)'s finding that, when a large number of contextual features are incorporated, binary classifiers such as SVM can achieve performance competitive with CRFs. The second section lists the performance of our models with no pairing features (−p). For all three resulting models, CRF−p, LR−p, and SVM−p, performance is significantly poorer (p
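The precision, recall, and F1 numbers above are computed over predicted intra-sentential EDU boundaries; a minimal sketch, assuming boundaries are represented as token indices, is:

```python
def boundary_prf(gold_boundaries, pred_boundaries):
    """Precision/recall/F1 over intra-sentential EDU boundary positions."""
    gold, pred = set(gold_boundaries), set(pred_boundaries)
    tp = len(gold & pred)
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(gold) if gold else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# hypothetical boundaries (token indices) for one sentence
print(boundary_prf(gold_boundaries=[3, 9, 14], pred_boundaries=[3, 9, 12]))
```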
Distilling Task-Specific Knowledge from BERT intoSimple Neural Networks
1903.12136
Table 1: Test results on different datasets. The BiLSTM results reported by other papers are drawn from Zhou et al. (2016),† Wang et al. (2017),‡ and Williams et al. (2017).∗ All of our test results are obtained from the GLUE benchmark website.
['[BOLD] #', 'Model', 'SST-2', 'QQP', 'MNLI-m', 'MNLI-mm']
[['[BOLD] #', 'Model', 'Acc', 'F1/Acc', 'Acc', 'Acc'], ['1', 'BERTLARGE\xa0Devlin et\xa0al. ( 2018 )', '94.9', '72.1/89.3', '86.7', '85.9'], ['2', 'BERTBASE\xa0Devlin et\xa0al. ( 2018 )', '93.5', '71.2/89.2', '84.6', '83.4'], ['3', 'OpenAI GPT\xa0Radford et\xa0al. ( 2018 )', '91.3', '70.3/88.5', '82.1', '81.4'], ['4', 'BERT ELMo baseline\xa0Devlin et\xa0al. ( 2018 )', '90.4', '64.8/84.7', '76.4', '76.1'], ['5', 'GLUE ELMo baseline\xa0Wang et\xa0al. ( 2018 )', '90.4', '63.1/84.3', '74.1', '74.5'], ['6', 'Distilled BiLSTMSOFT', '[BOLD] 90.7', '[BOLD] 68.2/88.1', '[BOLD] 73.0', '[BOLD] 72.6'], ['7', 'BiLSTM (our implementation)', '86.7', '63.7/86.2', '68.7', '68.3'], ['8', 'BiLSTM (reported by GLUE)', '85.9', '61.4/81.7', '70.3', '70.8'], ['9', 'BiLSTM (reported by other papers)', '87.6†', '– /82.6‡', '66.9*', '66.9*']]
For QQP, we report both F1 and accuracy, since the dataset is slightly unbalanced. Following GLUE, we report the average score of each model on the datasets.
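A minimal sketch of the soft-target distillation objective behind "Distilled BiLSTM_SOFT" is shown below, blending cross-entropy on the gold labels with an MSE penalty toward the teacher's logits; the 0.5 mixing weight and the random logits are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5):
    """Blend hard-label cross-entropy with an MSE penalty on the teacher's logits."""
    ce = F.cross_entropy(student_logits, labels)
    mse = F.mse_loss(student_logits, teacher_logits)
    return alpha * ce + (1.0 - alpha) * mse

# toy batch: 4 examples, 2 classes (e.g. SST-2 sentiment)
student = torch.randn(4, 2, requires_grad=True)
teacher = torch.randn(4, 2)          # would come from a fine-tuned BERT in practice
labels = torch.tensor([0, 1, 1, 0])
print(distillation_loss(student, teacher, labels))
```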
Spoken dialect identification in Twitter using a multi-filter architecture
2006.03564
Table 3: First filter performance on Leipzig + SwissCrawl corpora
['Language', 'Precision', 'Recall', 'F1-score']
[['Afrikaans', '0.9982', '0.9981', '0.9982'], ['German', '0.9976', '0.9949', '0.9962'], ['English', '0.9994', '0.9992', '0.9993'], ['Swiss-German', '0.9974', '0.9994', '0.9984'], ['GSW-like', '0.9968', '0.9950', '0.9959'], ['Luxembourgian', '0.9994', '0.9989', '0.9992'], ['Dutch', '0.9956', '0.9965', '0.9960'], ['Other', '0.9983', '0.9989', '0.9986']]
We first evaluate our BERT filter on the test set of the first filter (Leipzig corpora + SwissCrawl). The filter has an F1-score of 99.8% on the GSW test set. However, when this model is applied to Twitter data, we expect a decrease in performance because the messages are short and informal.
A Capsule Network-based Model for Learning Node Embeddings
1911.04822
Table 5: Accuracy results on the Cora validation sets w.r.t. each data split and each value m>1 of routing iterations for the transductive and inductive settings. Regarding Algorithm 1 when m>1, “Ours” denotes our update rule (b_i ← û_(i)v ⋅ e_v), while “Sab.” denotes the update rule (b_i ← b_i + û_(i)v ⋅ e_v) originally used by Sabour et al. (2017).
['Split', '[BOLD] Transductive [ITALIC] m=3', '[BOLD] Transductive [ITALIC] m=3', '[BOLD] Transductive [ITALIC] m=5', '[BOLD] Transductive [ITALIC] m=5', '[BOLD] Transductive [ITALIC] m=7', '[BOLD] Transductive [ITALIC] m=7', '[BOLD] Inductive [ITALIC] m=3', '[BOLD] Inductive [ITALIC] m=3', '[BOLD] Inductive [ITALIC] m=5', '[BOLD] Inductive [ITALIC] m=5', '[BOLD] Inductive [ITALIC] m=7', '[BOLD] Inductive [ITALIC] m=7']
[['Split', 'Ours', 'Sab.', 'Ours', 'Sab.', 'Ours', 'Sab.', 'Ours', 'Sab.', 'Ours', 'Sab.', 'Ours', 'Sab.'], ['1st', '80.1', '80.1', '80.2', '79.6', '79.7', '79.3', '70.2', '70.3', '70.2', '69.2', '70.6', '68.3'], ['2nd', '79.4', '79.6', '79.7', '78.9', '79.7', '78.6', '66.0', '65.9', '65.7', '64.4', '65.6', '64.3'], ['3rd', '78.5', '78.5', '78.6', '78.6', '78.5', '78.4', '68.2', '67.6', '68.3', '68.4', '69.2', '67.6'], ['4th', '81.3', '80.8', '81.1', '80.1', '81.1', '79.3', '66.5', '66.3', '66.5', '65.4', '66.4', '65.9'], ['5th', '81.9', '81.6', '81.7', '81.5', '81.7', '80.9', '69.4', '68.7', '69.9', '68.5', '69.5', '68.1'], ['6th', '78.6', '79.0', '78.8', '78.7', '78.7', '78.0', '66.7', '67.1', '66.7', '66.2', '67.5', '65.3'], ['7th', '80.1', '80.2', '80.5', '80.0', '79.9', '79.4', '70.4', '70.1', '70.4', '69.9', '70.4', '68.8'], ['8th', '81.8', '82.1', '82.1', '81.5', '82.3', '81.2', '69.6', '69.0', '68.7', '67.8', '69.7', '67.5'], ['9th', '79.3', '79.4', '79.7', '78.1', '78.6', '77.8', '71.2', '70.8', '71.5', '71.7', '72.2', '70.1'], ['10th', '78.8', '79.3', '79.7', '78.9', '79.4', '78.7', '70.3', '69.7', '69.5', '68.8', '69.9', '68.3'], ['Overall', '79.98', '[BOLD] 80.06', '[BOLD] 80.21', '79.59', '[BOLD] 79.96', '79.16', '[BOLD] 68.85', '68.55', '[BOLD] 68.74', '68.03', '[BOLD] 69.10', '67.42']]
Therefore, we propose to use the new update rule (b_i ← û_(i)v ⋅ e_v), as it generally yields higher performance for each setup, w.r.t. each data split and each value m>1 of routing iterations.
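The two routing update rules compared in the table can be contrasted in a small NumPy sketch for a single output capsule; the simplified normalisation in place of the squash function and the toy prediction vectors are assumptions, so this is a sketch of the rules rather than the Caps2NE implementation.

```python
import numpy as np

def routing(u_hat, iterations=3, accumulate=False):
    """Dynamic routing to a single output capsule.
    u_hat: (n_inputs, dim) prediction vectors from the input capsules.
    accumulate=True  -> b_i <- b_i + u_hat_i . e_v   (Sabour et al.)
    accumulate=False -> b_i <- u_hat_i . e_v         (overwrite-style rule)"""
    b = np.zeros(u_hat.shape[0])
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum()           # coupling coefficients (softmax)
        s = (c[:, None] * u_hat).sum(axis=0)      # weighted sum of predictions
        e = s / (np.linalg.norm(s) + 1e-9)        # simplified stand-in for squash
        agreement = u_hat @ e                     # u_hat_i . e_v for every i
        b = b + agreement if accumulate else agreement
    return e

u_hat = np.random.default_rng(0).normal(size=(5, 8))
print(routing(u_hat, accumulate=False)[:3])
print(routing(u_hat, accumulate=True)[:3])
```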
A Capsule Network-based Model for Learning Node Embeddings
1911.04822
Table 2: Multi-label classification results on PPI, POS and BlogCatalog. Baseline results are from García-Durán and Niepert (2017).
['[BOLD] Method (Micro-F1)', '[BOLD] POS [ITALIC] γ=10%', '[BOLD] POS [ITALIC] γ=50%', '[BOLD] POS [ITALIC] γ=90%', '[BOLD] PPI [ITALIC] γ=10%', '[BOLD] PPI [ITALIC] γ=50%', '[BOLD] PPI [ITALIC] γ=90%', 'BlogCatalog [ITALIC] γ=10%', 'BlogCatalog [ITALIC] γ=50%', 'BlogCatalog [ITALIC] γ=90%']
[['DeepWalk', '45.02', '49.10', '49.33', '17.14', '[BOLD] 23.52', '25.02', '34.48', '38.11', '38.34'], ['LINE', '45.22', '[BOLD] 51.64', '52.28', '16.55', '23.01', '[BOLD] 25.28', '34.83', '38.99', '38.77'], ['Node2Vec', '44.66', '48.73', '49.73', '17.00', '23.31', '24.75', '[BOLD] 35.54', '39.31', '40.03'], ['EP-B', '[BOLD] 46.97', '49.52', '50.05', '17.82', '23.30', '24.74', '35.05', '[BOLD] 39.44', '40.41'], ['Our Caps2NE', '46.01', '50.93', '[BOLD] 53.92', '[BOLD] 18.52', '23.15', '25.08', '34.31', '38.35', '[BOLD] 40.79'], ['[BOLD] Method', '[BOLD] POS', '[BOLD] POS', '[BOLD] POS', '[BOLD] PPI', '[BOLD] PPI', '[BOLD] PPI', 'BlogCatalog', 'BlogCatalog', 'BlogCatalog'], ['(Macro-F1)', '[ITALIC] γ=10%', '[ITALIC] γ=50%', '[ITALIC] γ=90%', '[ITALIC] γ=10%', '[ITALIC] γ=50%', '[ITALIC] γ=90%', '[ITALIC] γ=10%', '[ITALIC] γ=50%', '[ITALIC] γ=90%'], ['DeepWalk', '8.20', '10.84', '12.23', '13.01', '18.73', '20.01', '18.16', '22.65', '22.86'], ['LINE', '8.49', '12.43', '12.40', '12,79', '18.06', '[BOLD] 20.59', '18.13', '22.56', '23.00'], ['Node2Vec', '8.32', '11.07', '12.11', '13.32', '18.57', '19.66', '[BOLD] 19.08', '23.97', '24.82'], ['EP-B', '8.85', '10.45', '12.17', '13.80', '18.96', '20.36', '[BOLD] 19.08', '[BOLD] 25.11', '25.97'], ['Our Caps2NE', '[BOLD] 9.71', '[BOLD] 13.16', '[BOLD] 14.11', '[BOLD] 15.20', '[BOLD] 19.63', '20.27', '18.40', '24.80', '[BOLD] 26.63']]
PPI, POS and BlogCatalog. The table reports the Micro-F1 and Macro-F1 scores on the test sets in the transductive setting. In particular, on POS, Caps2NE produces a new state-of-the-art Macro-F1 score for each of the three fraction values γ, the highest Micro-F1 score when γ=90% and the second-highest Micro-F1 scores when γ∈{10%,50%}. Caps2NE obtains new highest F1 scores on PPI and BlogCatalog when γ=10% and γ=90%, respectively. On PPI, Caps2NE also achieves the highest Macro-F1 score when γ=50% and the second-highest Micro-F1 score when γ=90%. On BlogCatalog, Caps2NE also achieves the second-highest Macro-F1 scores when γ∈{10%,50%}.
A Capsule Network-based Model for Learning Node Embeddings
1911.04822
Table 3: Accuracies on the Cora, Citeseer and Pubmed test sets in the transductive and inductive settings. “Un-sup.” denotes unsupervised graph embedding models, where the best score is in bold while the second best score is in underline. “Sup.” denotes supervised graph embedding models that additionally use node labels when training the models.
['[BOLD] Transductive Un-sup.', '[BOLD] Transductive BoW', '[BOLD] Cora 58.63', '[BOLD] Citeseer 58.07', '[BOLD] Pubmed 70.49']
[['Un-sup.', 'DeepWalk', '71.11', '47.60', '73.49'], ['Un-sup.', 'DeepWalk+BoW', '76.15', '61.87', '77.82'], ['Un-sup.', 'EP-B', '78.05', '71.01', '[BOLD] 79.56'], ['Un-sup.', 'Our [BOLD] Caps2NE', '[BOLD] 80.53', '[BOLD] 71.34', '78.45'], ['Sup.', 'GAT', '81.72', '70.80', '79.56'], ['Sup.', 'GCN', '79.59', '69.21', '77.32'], ['Sup.', 'Planetoid', '71.90', '58.58', '74.49'], ['[BOLD] Inductive', '[BOLD] Inductive', '[BOLD] Cora', '[BOLD] Citeseer', '[BOLD] Pubmed'], ['Un-sup.', 'DeepWalk+BoW', '68.35', '59.47', '74.87'], ['Un-sup.', 'EP-B', '73.09', '68.61', '[BOLD] 79.94'], ['Un-sup.', 'Our [BOLD] Caps2NE', '[BOLD] 76.54', '[BOLD] 69.84', '78.98'], ['Sup.', 'GAT', '69.37', '59.55', '71.29'], ['Sup.', 'GCN', '67.76', '63.40', '73.47'], ['Sup.', 'Planetoid', '64.80', '61.97', '75.73']]
Cora, Citeseer and Pubmed. BoW is evaluated by directly using the pre-computed bag-of-words feature vectors for learning the classifier, thus its performance is the same for both settings. DeepWalk+BoW concatenates the learned embedding of a node from DeepWalk with the pre-computed BoW feature vector of the node. Inductive setting: As previously discussed in the last paragraph in the “The proposed Caps2NE” section, we re-emphasize that our unsupervised Caps2NE model notably outperforms the supervised models GCN and GAT for this inductive setting. In particular, Caps2NE achieves 4+% absolute higher accuracies than both GCN and GAT on the three datasets, clearly showing the effectiveness of Caps2NE to infer embeddings for unseen nodes.
Human language reveals a universal positivity bias
1406.3855
Table S4: Pearson correlation coefficients for translation-stable words for all language pairs. All p-values are <10−118. These values are included in Fig. 2 and reproduced here to facilitate comparison.
['[EMPTY]', 'Spanish', 'Portuguese', 'English', 'Indonesian', 'French', 'German', 'Arabic', 'Russian', 'Korean', 'Chinese']
[['Spanish', '1.00', '0.89', '0.87', '0.82', '0.86', '0.82', '0.83', '0.73', '0.79', '0.79'], ['Portuguese', '0.89', '1.00', '0.87', '0.82', '0.84', '0.81', '0.84', '0.84', '0.79', '0.76'], ['English', '0.87', '0.87', '1.00', '0.88', '0.86', '0.82', '0.86', '0.87', '0.82', '0.81'], ['Indonesian', '0.82', '0.82', '0.88', '1.00', '0.79', '0.77', '0.83', '0.85', '0.79', '0.77'], ['French', '0.86', '0.84', '0.86', '0.79', '1.00', '0.84', '0.77', '0.84', '0.79', '0.76'], ['German', '0.82', '0.81', '0.82', '0.77', '0.84', '1.00', '0.76', '0.80', '0.73', '0.74'], ['Arabic', '0.83', '0.84', '0.86', '0.83', '0.77', '0.76', '1.00', '0.83', '0.79', '0.80'], ['Russian', '0.73', '0.84', '0.87', '0.85', '0.84', '0.80', '0.83', '1.00', '0.80', '0.82'], ['Korean', '0.79', '0.79', '0.82', '0.79', '0.79', '0.73', '0.79', '0.80', '1.00', '0.81'], ['Chinese', '0.79', '0.76', '0.81', '0.77', '0.76', '0.74', '0.80', '0.82', '0.81', '1.00']]
Human language—our great social technology—reflects that which it describes through the stories it allows to be told, and us, the tellers of those stories. In 1969, Boucher and Osgood framed the Pollyanna Hypothesis: a hypothetical, universal positivity bias in human communication (Boucher and Osgood). From a selection of small-scale, cross-cultural studies, they marshaled evidence that positive words are likely more prevalent, more meaningful, more diversely used, and more readily learned. However, in being far from an exhaustive, data-driven analysis of language—the approach we take here—their findings could only be regarded as suggestive. To deeply explore the positivity of human language, we constructed 24 corpora spread across 10 languages (see Supplementary Online Material). Our global coverage of linguistically and culturally diverse languages includes English, Spanish, French, German, Brazilian Portuguese, Korean, Chinese (Simplified), Russian, Indonesian, and Arabic. The sources of our corpora are similarly broad. Our work here greatly expands upon our earlier study of English alone, where we found strong evidence for a usage-invariant positivity bias (Kloumann et al.). We address the social nature of language in two important ways: (1) we focus on the words people most commonly use, and (2) we measure how those same words are received by individuals. We take word usage frequency as the primary organizing measure of a word’s importance. Such a data-driven approach is crucial both for understanding the structure of language and for creating linguistic instruments for principled measurements (Dodds et al.). By contrast, earlier studies focusing on meaning and emotion have used ‘expert’ generated word lists, and these fail to statistically match frequency distributions of natural language (Osgood et al.; Stone et al.), confounding attempts to make claims about language in general. For each of our corpora we selected between 5,000 and 10,000 of the most frequently used words, choosing the exact numbers so that we obtained approximately 10,000 words for each language (see also Supplementary Online Material). Words were rated on a happy-sad semantic differential (Osgood et al.). Participants were restricted to certain regions or countries (for example, Portuguese was rated by residents of Brazil). Overall, we collected 50 ratings per word for a total of around 5,000,000 individual human assessments, and we provide all data sets as part of the Supplementary Online Material.
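The per-language-pair values in the table are plain Pearson and Spearman coefficients over the happiness scores of translation-stable words; a minimal SciPy sketch with made-up scores:

```python
from scipy.stats import pearsonr, spearmanr

# made-up happiness scores for the same translation-stable words in two languages
english = [7.2, 6.8, 3.1, 5.5, 8.0, 2.4]
spanish = [7.0, 6.5, 3.4, 5.9, 7.8, 2.9]

r_p, p_p = pearsonr(english, spanish)
rho, p_s = spearmanr(english, spanish)
print(f"Pearson r = {r_p:.2f} (p = {p_p:.3g}); Spearman rho = {rho:.2f} (p = {p_s:.3g})")
```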
Human language reveals a universal positivity bias
1406.3855
Table S5: Spearman correlation coefficients for translation-stable words. All p-values are <10−82.
['[EMPTY]', 'Spanish', 'Portuguese', 'English', 'Indonesian', 'French', 'German', 'Arabic', 'Russian', 'Korean', 'Chinese']
[['Spanish', '1.00', '0.85', '0.83', '0.77', '0.81', '0.77', '0.75', '0.74', '0.74', '0.68'], ['Portuguese', '0.85', '1.00', '0.83', '0.77', '0.78', '0.77', '0.77', '0.81', '0.75', '0.66'], ['English', '0.83', '0.83', '1.00', '0.82', '0.80', '0.78', '0.78', '0.81', '0.75', '0.70'], ['Indonesian', '0.77', '0.77', '0.82', '1.00', '0.72', '0.72', '0.76', '0.77', '0.71', '0.71'], ['French', '0.81', '0.78', '0.80', '0.72', '1.00', '0.80', '0.67', '0.79', '0.71', '0.64'], ['German', '0.77', '0.77', '0.78', '0.72', '0.80', '1.00', '0.69', '0.76', '0.64', '0.62'], ['Arabic', '0.75', '0.77', '0.78', '0.76', '0.67', '0.69', '1.00', '0.74', '0.69', '0.68'], ['Russian', '0.74', '0.81', '0.81', '0.77', '0.79', '0.76', '0.74', '1.00', '0.70', '0.66'], ['Korean', '0.74', '0.75', '0.75', '0.71', '0.71', '0.64', '0.69', '0.70', '1.00', '0.71'], ['Chinese', '0.68', '0.66', '0.70', '0.71', '0.64', '0.62', '0.68', '0.66', '0.71', '1.00']]
Human language—our great social technology—reflects that which it describes through the stories it allows to be told, and us, the tellers of those stories. In 1969, Boucher and Osgood framed the Pollyanna Hypothesis: a hypothetical, universal positivity bias in human communication (Boucher and Osgood). From a selection of small-scale, cross-cultural studies, they marshaled evidence that positive words are likely more prevalent, more meaningful, more diversely used, and more readily learned. However, in being far from an exhaustive, data-driven analysis of language—the approach we take here—their findings could only be regarded as suggestive. To deeply explore the positivity of human language, we constructed 24 corpora spread across 10 languages (see Supplementary Online Material). Our global coverage of linguistically and culturally diverse languages includes English, Spanish, French, German, Brazilian Portuguese, Korean, Chinese (Simplified), Russian, Indonesian, and Arabic. The sources of our corpora are similarly broad. Our work here greatly expands upon our earlier study of English alone, where we found strong evidence for a usage-invariant positivity bias (Kloumann et al.). We address the social nature of language in two important ways: (1) we focus on the words people most commonly use, and (2) we measure how those same words are received by individuals. We take word usage frequency as the primary organizing measure of a word’s importance. Such a data-driven approach is crucial both for understanding the structure of language and for creating linguistic instruments for principled measurements (Dodds et al.). By contrast, earlier studies focusing on meaning and emotion have used ‘expert’ generated word lists, and these fail to statistically match frequency distributions of natural language (Osgood et al.; Stone et al.), confounding attempts to make claims about language in general. For each of our corpora we selected between 5,000 and 10,000 of the most frequently used words, choosing the exact numbers so that we obtained approximately 10,000 words for each language (see also Supplementary Online Material). Words were rated on a happy-sad semantic differential (Osgood et al.). Participants were restricted to certain regions or countries (for example, Portuguese was rated by residents of Brazil). Overall, we collected 50 ratings per word for a total of around 5,000,000 individual human assessments, and we provide all data sets as part of the Supplementary Online Material.
Human language reveals a universal positivity bias
1406.3855
Table S6: Pearson correlation coefficients and p-values, Spearman correlation coefficients and p-values, and linear fit coefficients for average word happiness h_avg as a function of word usage frequency rank r. We use the fit h_avg = αr + β for the most common 5,000 words in each corpus, determining α and β via ordinary least squares, and order languages by the median of their average word happiness scores (descending). We note that stemming of words may affect these estimates.
['Language: Corpus Spanish: Google Web Crawl', '[ITALIC] ρp -0.114', '[ITALIC] p-value 3.38×10−22', '[ITALIC] ρs -0.090', '[ITALIC] p-value 1.85×10−14', '[ITALIC] α -5.55×10−5', '[ITALIC] β 6.10']
[['Spanish: Google Books', '-0.040', '1.51×10−3', '-0.016', '1.90×10−1', '-2.28×10−5', '5.90'], ['Spanish: Twitter', '-0.048', '1.14×10−4', '-0.032', '1.10×10−2', '-3.10×10−5', '5.94'], ['Portuguese: Google Web Crawl', '-0.085', '6.33×10−13', '-0.060', '3.23×10−7', '-3.98×10−5', '5.96'], ['Portuguese: Twitter', '-0.041', '5.98×10−4', '-0.030', '1.15×10−2', '-2.40×10−5', '5.73'], ['English: Google Books', '-0.042', '3.03×10−3', '-0.013', '3.50×10−1', '-3.04×10−5', '5.62'], ['English: New York Times', '-0.056', '6.93×10−5', '-0.044', '1.99×10−3', '-4.17×10−5', '5.61'], ['German: Google Web Crawl', '-0.096', '1.11×10−15', '-0.082', '6.75×10−12', '-3.67×10−5', '5.65'], ['French: Google Web Crawl', '-0.105', '9.20×10−19', '-0.080', '1.99×10−11', '-4.50×10−5', '5.68'], ['English: Twitter', '-0.097', '6.56×10−12', '-0.103', '2.37×10−13', '-7.78×10−5', '5.67'], ['Indonesian: Movie subtitles', '-0.039', '1.48×10−3', '-0.063', '2.45×10−7', '-2.04×10−5', '5.45'], ['German: Twitter', '-0.054', '1.47×10−5', '-0.036', '4.02×10−3', '-2.51×10−5', '5.58'], ['Russian: Twitter', '-0.052', '2.38×10−5', '-0.028', '2.42×10−2', '-2.55×10−5', '5.52'], ['French: Google Books', '-0.043', '6.80×10−4', '-0.030', '1.71×10−2', '-2.31×10−5', '5.49'], ['German: Google Books', '-0.003', '8.12×10−1', '+0.014', '2.74×10−1', '-1.38×10−6', '5.45'], ['French: Twitter', '-0.049', '6.08×10−5', '-0.023', '6.31×10−2', '-2.54×10−5', '5.54'], ['Russian: Movie and TV subtitles', '-0.029', '2.36×10−2', '-0.033', '9.17×10−3', '-1.57×10−5', '5.43'], ['Arabic: Movie and TV subtitles', '-0.045', '7.10×10−6', '-0.029', '4.19×10−3', '-1.66×10−5', '5.44'], ['Indonesian: Twitter', '-0.051', '2.14×10−5', '-0.018', '1.24×10−1', '-2.50×10−5', '5.46'], ['Korean: Twitter', '-0.032', '8.29×10−3', '-0.016', '1.91×10−1', '-1.24×10−5', '5.38'], ['Russian: Google Books', '+0.030', '2.09×10−2', '+0.070', '5.08×10−8', '+1.20×10−5', '5.35'], ['English: Music Lyrics', '-0.073', '2.53×10−7', '-0.081', '1.05×10−8', '-6.12×10−5', '5.45'], ['Korean: Movie subtitles', '-0.187', '8.22×10−44', '-0.180', '2.01×10−40', '-9.66×10−5', '5.41'], ['Chinese: Google Books', '-0.067', '1.48×10−11', '-0.050', '5.01×10−7', '-1.72×10−5', '5.21']]
Human language—our great social technology—reflects that which it describes through the stories it allows to be told, and us, the tellers of those stories. In 1969, Boucher and Osgood framed the Pollyanna Hypothesis: a hypothetical, universal positivity bias in human communication (Boucher and Osgood). From a selection of small-scale, cross-cultural studies, they marshaled evidence that positive words are likely more prevalent, more meaningful, more diversely used, and more readily learned. However, in being far from an exhaustive, data-driven analysis of language—the approach we take here—their findings could only be regarded as suggestive. To deeply explore the positivity of human language, we constructed 24 corpora spread across 10 languages (see Supplementary Online Material). Our global coverage of linguistically and culturally diverse languages includes English, Spanish, French, German, Brazilian Portuguese, Korean, Chinese (Simplified), Russian, Indonesian, and Arabic. The sources of our corpora are similarly broad. Our work here greatly expands upon our earlier study of English alone, where we found strong evidence for a usage-invariant positivity bias (Kloumann et al.). We address the social nature of language in two important ways: (1) we focus on the words people most commonly use, and (2) we measure how those same words are received by individuals. We take word usage frequency as the primary organizing measure of a word’s importance. Such a data-driven approach is crucial both for understanding the structure of language and for creating linguistic instruments for principled measurements (Dodds et al.). By contrast, earlier studies focusing on meaning and emotion have used ‘expert’ generated word lists, and these fail to statistically match frequency distributions of natural language (Osgood et al.; Stone et al.), confounding attempts to make claims about language in general. For each of our corpora we selected between 5,000 and 10,000 of the most frequently used words, choosing the exact numbers so that we obtained approximately 10,000 words for each language (see also Supplementary Online Material). Words were rated on a happy-sad semantic differential (Osgood et al.). Participants were restricted to certain regions or countries (for example, Portuguese was rated by residents of Brazil). Overall, we collected 50 ratings per word for a total of around 5,000,000 individual human assessments, and we provide all data sets as part of the Supplementary Online Material.
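The α and β columns in Table S6 come from an ordinary least squares fit of average happiness against frequency rank, h_avg = αr + β; a sketch with synthetic scores (the slope, intercept, and noise level below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
rank = np.arange(1, 5001)                                    # usage frequency rank
h_avg = 5.6 - 4e-5 * rank + rng.normal(0.0, 0.4, rank.size)  # synthetic happiness scores

# ordinary least squares fit  h_avg = alpha * r + beta
alpha, beta = np.polyfit(rank, h_avg, deg=1)
print(f"alpha = {alpha:.2e}, beta = {beta:.2f}")
```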
Human language reveals a universal positivity bias
1406.3855
Table S7: Pearson correlation coefficients and p-values, Spearman correlation coefficients and p-values, and linear fit coefficients for the standard deviation of word happiness h_std as a function of word usage frequency rank r. We use the fit h_std = αr + β for the most common 5,000 words in each corpus, determining α and β via ordinary least squares, and order corpora according to their emotional variance (descending).
['Language: Corpus Portuguese: Twitter', '[ITALIC] ρp +0.090', '[ITALIC] p-value 2.55×10−14', '[ITALIC] ρs +0.095', '[ITALIC] p-value 1.28×10−15', '[ITALIC] α 1.19×10−5', '[ITALIC] β 1.29']
[['Spanish: Twitter', '+0.097', '8.45×10−15', '+0.104', '5.92×10−17', '1.47×10−5', '1.26'], ['English: Music Lyrics', '+0.129', '4.87×10−20', '+0.134', '1.63×10−21', '2.76×10−5', '1.33'], ['English: Twitter', '+0.007', '6.26×10−1', '+0.012', '4.11×10−1', '1.47×10−6', '1.35'], ['English: New York Times', '+0.050', '4.56×10−4', '+0.044', '1.91×10−3', '9.34×10−6', '1.32'], ['Arabic: Movie and TV subtitles', '+0.101', '7.13×10−24', '+0.101', '3.41×10−24', '9.41×10−6', '1.01'], ['English: Google Books', '+0.180', '1.68×10−37', '+0.176', '4.96×10−36', '3.36×10−5', '1.27'], ['Spanish: Google Books', '+0.066', '1.23×10−7', '+0.062', '6.53×10−7', '9.17×10−6', '1.26'], ['Indonesian: Movie subtitles', '+0.026', '3.43×10−2', '+0.027', '2.81×10−2', '2.87×10−6', '1.12'], ['Russian: Movie and TV subtitles', '+0.083', '7.60×10−11', '+0.075', '3.28×10−9', '1.06×10−5', '0.89'], ['French: Twitter', '+0.072', '4.77×10−9', '+0.076', '8.94×10−10', '1.07×10−5', '1.05'], ['Indonesian: Twitter', '+0.072', '1.17×10−9', '+0.072', '1.73×10−9', '8.16×10−6', '1.12'], ['French: Google Books', '+0.090', '1.02×10−12', '+0.085', '1.67×10−11', '1.25×10−5', '1.02'], ['Russian: Twitter', '+0.055', '6.83×10−6', '+0.053', '1.67×10−5', '7.39×10−6', '0.91'], ['Spanish: Google Web Crawl', '+0.119', '4.45×10−24', '+0.106', '2.60×10−19', '1.45×10−5', '1.23'], ['Portuguese: Google Web Crawl', '+0.093', '4.06×10−15', '+0.083', '2.91×10−12', '1.07×10−5', '1.26'], ['German: Twitter', '+0.051', '4.45×10−5', '+0.050', '5.15×10−5', '7.39×10−6', '1.15'], ['French: Google Web Crawl', '+0.104', '2.12×10−18', '+0.088', '9.64×10−14', '1.27×10−5', '1.01'], ['Korean: Movie subtitles', '+0.171', '1.39×10−36', '+0.185', '8.85×10−43', '2.58×10−5', '0.88'], ['German: Google Books', '+0.157', '6.06×10−35', '+0.162', '4.96×10−37', '2.17×10−5', '1.03'], ['Korean: Twitter', '+0.056', '4.07×10−6', '+0.062', '4.25×10−7', '6.98×10−6', '0.93'], ['German: Google Web Crawl', '+0.099', '2.05×10−16', '+0.085', '1.18×10−12', '1.20×10−5', '1.07'], ['Chinese: Google Books', '+0.099', '3.07×10−23', '+0.097', '3.81×10−22', '8.70×10−6', '1.16'], ['Russian: Google Books', '+0.187', '5.15×10−48', '+0.177', '2.24×10−43', '2.28×10−5', '0.81']]
Human language—our great social technology—reflects that which it describes through the stories it allows to be told, and us, the tellers of those stories. In 1969, Boucher and Osgood framed the Pollyanna Hypothesis: a hypothetical, universal positivity bias in human communication (Boucher and Osgood). From a selection of small-scale, cross-cultural studies, they marshaled evidence that positive words are likely more prevalent, more meaningful, more diversely used, and more readily learned. However, in being far from an exhaustive, data-driven analysis of language—the approach we take here—their findings could only be regarded as suggestive. To deeply explore the positivity of human language, we constructed 24 corpora spread across 10 languages (see Supplementary Online Material). Our global coverage of linguistically and culturally diverse languages includes English, Spanish, French, German, Brazilian Portuguese, Korean, Chinese (Simplified), Russian, Indonesian, and Arabic. The sources of our corpora are similarly broad. Our work here greatly expands upon our earlier study of English alone, where we found strong evidence for a usage-invariant positivity bias (Kloumann et al.). We address the social nature of language in two important ways: (1) we focus on the words people most commonly use, and (2) we measure how those same words are received by individuals. We take word usage frequency as the primary organizing measure of a word’s importance. Such a data-driven approach is crucial both for understanding the structure of language and for creating linguistic instruments for principled measurements (Dodds et al.). By contrast, earlier studies focusing on meaning and emotion have used ‘expert’ generated word lists, and these fail to statistically match frequency distributions of natural language (Osgood et al.; Stone et al.), confounding attempts to make claims about language in general. For each of our corpora we selected between 5,000 and 10,000 of the most frequently used words, choosing the exact numbers so that we obtained approximately 10,000 words for each language (see also Supplementary Online Material). Words were rated on a happy-sad semantic differential (Osgood et al.). Participants were restricted to certain regions or countries (for example, Portuguese was rated by residents of Brazil). Overall, we collected 50 ratings per word for a total of around 5,000,000 individual human assessments, and we provide all data sets as part of the Supplementary Online Material.
Investigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking
2003.05473
Table 2: Investigating the types of strong precision errors of BERT+Entity trained in Setting I on CoNLL’03/AIDA (testa) on 100 randomly sampled strong precision errors from the validation dataset.
['Reason for error', '#']
[['no prediction', '57'], ['different than gold annotation', '[EMPTY]'], ['no obvious reason', '13'], ['semantic close', '4'], ['lexical overlap', '5'], ['nested entity', '5'], ['gold annotation wrong', '12'], ['span error', '3'], ['unclear', '1'], ['[EMPTY]', '100']]
The largest source of errors was that BERT+Entity predicted Nil instead of an entity. We hypothesized that most of these no-prediction errors occur because the entities have only a low frequency in the training data, i.e., they could be addressed by increasing the model size and improving the training time. Another source of error we observed was that the context size was too small due to the fragment size. A surprisingly positive result from the error analysis was that in only 3% of cases was the error caused by a wrong span. Motivated by these observations we devised the follow-up experiment Setting II (see
Investigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking
2003.05473
Table 1: Comparing entity linking results on CoNLL’03/AIDA. strong F1 and weak F1 denote InKB F1 scores. ED is Precision@1 for InKB. Kolitsas et al. (2018) also study a neural model; however, they only model MD and ED. The independent baseline shows how their model performs when they use mentions detected by Stanford NLP. In Frozen-BERT+Entity, BERT is not trained and only the entity classifier on top is trained.
['[EMPTY]', '[EMPTY]', 'AIDA/testa strong F1', 'AIDA/testa weak F1', 'AIDA/testa ED', 'AIDA/testb strong F1', 'AIDA/testb weak F1', 'AIDA/testb ED']
[['Kolitsas et al. ( 2018 ) indep. baseline', 'Kolitsas et al. ( 2018 ) indep. baseline', '80.3', '80.5', '-', '74.6', '75.0', '-'], ['Kolitsas et al. ( 2018 )', 'Kolitsas et al. ( 2018 )', '89.4', '89.8', '93.7', '82.4', '82.8', '87.3'], ['BERT', '[EMPTY]', '63.3', '66.6', '67.6', '49.6', '52.4', '52.8'], ['Setting I', 'Frozen-BERT+Entity', '76.8', '79.6', '80.6', '64.7', '68.0', '68.6'], ['[EMPTY]', 'BERT+Entity', '82.8', '84.4', '86.6', '74.8', '76.5', '78.8'], ['Setting II', 'Frozen-BERT+Entity', '76.5', '80.1', '79.6', '67.8', '71.9', '67.8'], ['[EMPTY]', 'BERT+Entity', '86.0', '87.3', '92.3', '79.3', '81.1', '87.9']]
We compare our results to the most recent results by Kolitsas et al. They also provide a baseline in which they show how their classifier performs when MD and ED are performed independently, i.e. by linking mentions detected by Stanford NLP.
Structured Pruning of Large Language Models
1910.04732
(a) SRU
['[BOLD] Parameters 35M', '[BOLD] Parameters (100%)', '[BOLD] FLOP 1.24', '[BOLD] AGP (unstructured) -', '[BOLD] AGP (structured) -', '[BOLD] Dense Model -']
[['11M', '(30%)', '[BOLD] 1.25', '1.28', '1.33', '1.36'], ['7.6M', '(20%)', '[BOLD] 1.27', '1.30', '1.36', '1.40'], ['5.9M', '(15%)', '[BOLD] 1.29', '1.34', '1.39', '1.43'], ['4.2M', '(10%)', '[BOLD] 1.33', '1.39', '1.46', '1.48']]
The results conform to our expectations and to the results reported in previous work – pruning a large model is consistently better than training a small dense model from scratch. Furthermore, FLOP exceeds the performance of the unstructured AGP method at all sparsity levels tested. For instance, we achieve a loss of 0.01 bits-per-character (BPC) (less than 1% relative performance) using 30% of the parameters, while the AGP baseline has a loss of 0.04 BPC. Again, both pruning methods significantly outperform training small dense models from scratch. Our method achieves results on par with the unstructured pruning baseline, being marginally worse at 90% sparsity but slightly better at 80% sparsity.
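As a quick sanity check on the quoted relative loss, the arithmetic below compares against the 35M-parameter dense model's 1.24 BPC; it is only an illustration of the computation, not additional results.

```python
# Relative BPC degradation at 30% of the parameters, taking the
# 35M-parameter dense model (1.24 BPC) as the reference.
dense_bpc = 1.24
flop_bpc = 1.25   # FLOP, 11M parameters (30%)
agp_bpc = 1.28    # unstructured AGP at the same budget

print(f"FLOP: +{flop_bpc - dense_bpc:.2f} BPC "
      f"({100 * (flop_bpc - dense_bpc) / dense_bpc:.1f}% relative)")
print(f"AGP:  +{agp_bpc - dense_bpc:.2f} BPC")
```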
Structured Pruning of Large Language Models
1910.04732
(a) SRU
['[BOLD] Parameters 41M', '[BOLD] Parameters (100%)', '[BOLD] FLOP 1.10', '[BOLD] AGP (unstructured) -', '[BOLD] Dense Model -']
[['8.4M', '(20%)', '[BOLD] 1.16', '1.17', '1.24'], ['5.3M', '(10%)', '1.19', '[BOLD] 1.17', '1.36']]
The results conform to our expectations and to the results reported in previous work – pruning a large model is consistently better than training a small dense model from scratch. Furthermore, FLOP exceeds the performance of the unstructured AGP method at all sparsity levels tested. For instance, we achieve a loss of 0.01 bits-per-character (BPC) (less than 1% relative performance) using 30% of the parameters, while the AGP baseline has a loss of 0.04 BPC. Again, both pruning methods significantly outperform training small dense models from scratch. Our method achieves results on par with the unstructured pruning baseline, being marginally worse at 90% sparsity but slightly better at 80% sparsity.
Structured Pruning of Large Language Models
1910.04732
Table 4: Compression on downstream fine-tuning
['[BOLD] Parameters 125M', '[BOLD] Parameters (100%)', '[BOLD] SST2 92.43', '[BOLD] MRPC 90.9', '[BOLD] STS-B 90.22', '[BOLD] QNLI 89.77', '[BOLD] Average [BOLD] 90.83']
[['80M', '(65%)', '92.09', '88.61', '88.18', '89.05', '[BOLD] 89.48']]
We are able to conserve nearly 99% of the performance while reducing the number of parameters by 35%. Our target sparsity level is limited by the fact that the embedding layers consist of a significant portion of the remaining parameters. We believe that higher levels of sparsity could be obtained by also factorizing the embedding layer, similar to Lan et al.
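The embedding factorization alluded to here could look roughly like the sketch below, in the spirit of ALBERT-style factorized embedding parameterization; the class name, rank and dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch.nn as nn

class FactorizedEmbedding(nn.Module):
    """Low-rank factorization of a V x H embedding matrix into V x r and r x H,
    cutting embedding parameters from V*H to V*r + r*H when r << H."""
    def __init__(self, vocab_size: int, hidden_size: int, rank: int):
        super().__init__()
        self.lookup = nn.Embedding(vocab_size, rank)              # V x r
        self.project = nn.Linear(rank, hidden_size, bias=False)   # r x H

    def forward(self, token_ids):
        return self.project(self.lookup(token_ids))

# Illustrative sizes (not taken from the paper):
V, H, r = 50_000, 768, 128
emb = FactorizedEmbedding(V, H, r)
print(f"{V * H:,} -> {V * r + r * H:,} embedding parameters")
```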
Progressively Pretrained Dense Corpus Index forOpen-Domain Question Answering
2005.00038
Table 3: Ablation studies on different pretraining strategies. The retrieval modules (Recall@k) are tested on WebQuestions. ⋆Our reimplementation.
['Method', 'R@5', 'R@10', 'R@20']
[['ProQA (90k)', '[BOLD] 46.9', '[BOLD] 56.7', '[BOLD] 64.4'], ['ORQA⋆ (90k)', '21.4', '29.6', '38.8'], ['ProQA (no clustering, 90k)', '36.9', '47.7', '57.0'], ['ProQA (no clustering; 70k)', '39.0', '47.0', '56.2'], ['ProQA (no clustering; 50k)', '35.7', '44.1', '52.9']]
To validate the sample efficiency of our method, we replicate the inverse-cloze pretraining approach from ORQA using the same amount of resources as we used while training our model, i.e., the same batch size and number of updates (90k). We also study the effect of the progressive training paradigm by pretraining the model with the same generated data but without the clustering-based sampling. We test the retrieval performance on WebQuestions before any finetuning. We use Recall@k as the evaluation metric, which measures how often the answer paragraphs appear in the top-k retrieval. The weaker results of the non-clustering version of our method validate the strong effect of the clustering-based progressive training algorithm, which brings 7-10% improvements on different metrics. We also report the results of earlier checkpoints of the non-clustering version. We can see that with the limited batch size, the improvements diminish as training goes on. This suggests the importance of introducing more challenging negative examples into the batch. Comparing the non-clustering version of our method and ORQA, we see that using the generated data results in much better retrieval performance (more than 15% improvements on all metrics).
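A minimal sketch of the Recall@k metric as described (a question counts as a hit if any answer-bearing paragraph appears in its top-k retrieval); the function and the toy ids are illustrative, not the authors' evaluation code.

```python
def recall_at_k(retrieved_ids, answer_ids, k):
    """Fraction of questions for which at least one answer paragraph
    appears among the top-k retrieved paragraphs.

    retrieved_ids: list of ranked paragraph-id lists, one per question.
    answer_ids:    list of sets of gold answer-paragraph ids, one per question.
    """
    hits = sum(1 for ranked, gold in zip(retrieved_ids, answer_ids)
               if gold & set(ranked[:k]))
    return hits / len(retrieved_ids)

# Toy usage with hypothetical ids:
ranked = [["p3", "p7", "p1"], ["p2", "p9", "p4"]]
gold = [{"p1"}, {"p8"}]
print(recall_at_k(ranked, gold, k=2))  # 0.0 -> neither gold id is in the top-2
print(recall_at_k(ranked, gold, k=3))  # 0.5 -> first question hit at rank 3
```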
Progressively Pretrained Dense Corpus Index forOpen-Domain Question Answering
2005.00038
Table 2: Resource comparison with SOTA models. EM scores are measured on NaturalQuestions-Open. batch size and updates all refer to the dense index pretraining. Note that REALM uses ORQA to initialize its parameters and we only report the numbers after ORQA initialization. ”80*8” indicates that we use a batch size of 80 and accumulate the gradients every 8 batches.
['Method', 'EM', 'model size', 'batch size', '# updates']
[['ORQA', '33.3', '330M', '4096', '100K'], ['T5', '34.5', '11318M', '-', '-'], ['REALM', '40.4', '330M', '512', '200K'], ['[BOLD] ProQA', '33.4', '330M', '80*8', '90K']]
It is worth noting that stronger open-domain QA performance has been achieved with a much larger pretrained model, i.e., T5 (Raffel et al.; Roberts et al.), or with a better-designed pretraining paradigm combined with more updates, i.e., REALM (Guu et al.). As T5 simply converts the QA problem into a sequence-to-sequence problem (decoding answers after encoding questions), it does not pretrain the corpus index. The disadvantage of this method is its inefficiency at inference time, as the model is orders of magnitude larger than the others. In contrast, REALM uses the same number of parameters as our approach and achieves significant improvements. However, it relies on ORQA initialization and further pretraining updates, and is thus still computationally expensive at pretraining time. As our method directly improves the ORQA pretraining, we believe our method is complementary to the REALM approach.
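The "80*8" configuration refers to gradient accumulation; a minimal PyTorch-style sketch is given below, assuming a generic model that returns an object with a .loss attribute (a HuggingFace-like interface used here only for illustration, not the authors' code).

```python
def train_with_accumulation(model, optimizer, batches, accum_steps=8):
    """Accumulate gradients over `accum_steps` micro-batches (e.g. 80 examples each)
    before a single optimizer update, giving an effective batch of 80 * 8 = 640."""
    optimizer.zero_grad()
    for i, (inputs, labels) in enumerate(batches):
        loss = model(inputs, labels=labels).loss / accum_steps  # average over micro-batches
        loss.backward()
        if (i + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```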
ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning
2002.04326
Table 6: Average accuracy of each model using four different random seeds with only answer options as input, and the number of their common correct predictions.
['[BOLD] Model', '[BOLD] Val', '[BOLD] Test', '[BOLD] Number']
[['Chance', '25.0', '25.0', '3.9'], ['GPT', '45.8', '42.2', '238'], ['GPT-2', '46.8', '42.6', '245'], ['BERT \\textsc [ITALIC] BASE', '47.2', '43.2', '234'], ['XLNet \\textsc [ITALIC] BASE', '47.5', '43.2', '225'], ['RoBERTa \\textsc [ITALIC] BASE', '48.8', '41.7', '200'], ['Union', '–', '–', '440']]
\( C_{\mathrm{EASY}} = (C^{\mathrm{GPT}}_{\mathrm{seed1}} \cap C^{\mathrm{GPT}}_{\mathrm{seed2}} \cap C^{\mathrm{GPT}}_{\mathrm{seed3}} \cap C^{\mathrm{GPT}}_{\mathrm{seed4}}) \cup (C^{\mathrm{GPT\text{-}2}}_{\mathrm{seed1}} \cap C^{\mathrm{GPT\text{-}2}}_{\mathrm{seed2}} \cap C^{\mathrm{GPT\text{-}2}}_{\mathrm{seed3}} \cap C^{\mathrm{GPT\text{-}2}}_{\mathrm{seed4}}) \cup (C^{\mathrm{BERT}}_{\mathrm{seed1}} \cap C^{\mathrm{BERT}}_{\mathrm{seed2}} \cap C^{\mathrm{BERT}}_{\mathrm{seed3}} \cap C^{\mathrm{BERT}}_{\mathrm{seed4}}) \cup (C^{\mathrm{XLNet}}_{\mathrm{seed1}} \cap C^{\mathrm{XLNet}}_{\mathrm{seed2}} \cap C^{\mathrm{XLNet}}_{\mathrm{seed3}} \cap C^{\mathrm{XLNet}}_{\mathrm{seed4}}) \cup (C^{\mathrm{RoBERTa}}_{\mathrm{seed1}} \cap C^{\mathrm{RoBERTa}}_{\mathrm{seed2}} \cap C^{\mathrm{RoBERTa}}_{\mathrm{seed3}} \cap C^{\mathrm{RoBERTa}}_{\mathrm{seed4}}) \) (1), and \( C_{\mathrm{HARD}} = C_{\mathrm{TEST}} \setminus C_{\mathrm{EASY}} \), where \( C^{\mathrm{BERT}}_{\mathrm{seed1}} \) denotes the set of data points which are predicted correctly by BERT\textsc{BASE} with seed 1, and similarly for the rest. Finally, we get 440 data points from the testing set \( C_{\mathrm{TEST}} \), and we denote this subset as the EASY set \( C_{\mathrm{EASY}} \) and the rest as the HARD set \( C_{\mathrm{HARD}} \).
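A small sketch of how the EASY/HARD split can be computed with ordinary set operations; the model names and ids below are toy placeholders, not the actual predictions.

```python
def split_easy_hard(correct_sets, test_ids):
    """correct_sets maps model name -> {seed -> set of correctly answered ids}.
    EASY = union over models of the ids that every seed of that model got right;
    HARD = everything else in the test set."""
    easy = set()
    for per_seed in correct_sets.values():
        easy |= set.intersection(*per_seed.values())  # robust across this model's seeds
    hard = set(test_ids) - easy
    return easy, hard

# Toy usage with hypothetical ids:
correct = {
    "GPT":     {1: {0, 1, 2}, 2: {0, 2, 3}},
    "RoBERTa": {1: {4, 5},    2: {5, 6}},
}
easy, hard = split_easy_hard(correct, test_ids=range(8))
print(sorted(easy), sorted(hard))  # [0, 2, 5] [1, 3, 4, 6, 7]
```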
ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning
2002.04326
Table 7: Accuracy (%) of models and human performance. The column Input means whether to input context (C), question (Q) and answer options (A). The RACE column represents whether to first use RACE to fine-tune before training on ReClor.
['[BOLD] Model', '[BOLD] Input', '[BOLD] RACE', '[BOLD] Val', '[BOLD] Test', '[BOLD] Test-E', '[BOLD] Test-H']
[['Chance', '(C, Q, A)', '[EMPTY]', '25.0', '25.0', '25.0', '25.0'], ['fastText', '(C, Q, A)', '[EMPTY]', '32.0', '30.8', '40.2', '23.4'], ['Bi-LSTM', '(C, Q, A)', '[EMPTY]', '27.8', '27.0', '26.4', '27.5'], ['GPT', '(C, Q, A)', '[EMPTY]', '47.6', '45.4', '73.0', '23.8'], ['GPT-2', '(C, Q, A)', '[EMPTY]', '52.6', '47.2', '73.0', '27.0'], ['BERT \\textsc [ITALIC] BASE', '(C, Q, A)', '[EMPTY]', '54.6', '47.3', '71.6', '28.2'], ['BERT \\textsc [ITALIC] BASE', '(C, Q, A)', '✓', '55.2', '49.5', '68.9', '34.3'], ['BERT \\textsc [ITALIC] LARGE', '(A)', '[EMPTY]', '46.4', '42.4', '69.3', '21.3'], ['BERT \\textsc [ITALIC] LARGE', '(Q, A)', '[EMPTY]', '48.8', '43.4', '72.7', '20.4'], ['BERT \\textsc [ITALIC] LARGE', '(C, Q, A)', '[EMPTY]', '53.8', '49.8', '72.0', '32.3'], ['BERT \\textsc [ITALIC] LARGE', '(C, Q, A)', '✓', '55.6', '54.5', '73.9', '39.3'], ['XLNet \\textsc [ITALIC] BASE', '(C, Q, A)', '[EMPTY]', '55.8', '50.4', '75.2', '30.9'], ['XLNet \\textsc [ITALIC] BASE', '(C, Q, A)', '✓', '62.0', '55.5', '76.1', '39.3'], ['XLNet \\textsc [ITALIC] LARGE', '(A)', '[EMPTY]', '45.0', '42.9', '66.1', '24.6'], ['XLNet \\textsc [ITALIC] LARGE', '(Q, A)', '[EMPTY]', '47.8', '43.4', '68.6', '23.6'], ['XLNet \\textsc [ITALIC] LARGE', '(C, Q, A)', '[EMPTY]', '62.0', '56.0', '75.7', '40.5'], ['XLNet \\textsc [ITALIC] LARGE', '(C, Q, A)', '✓', '70.8', '62.4', '77.7', '50.4'], ['RoBERTa \\textsc [ITALIC] BASE', '(C, Q, A)', '[EMPTY]', '55.0', '48.5', '71.1', '30.7'], ['RoBERTa \\textsc [ITALIC] BASE', '(C, Q, A)', '✓', '56.8', '53.0', '72.5', '37.7'], ['RoBERTa \\textsc [ITALIC] LARGE', '(A)', '[EMPTY]', '48.8', '43.2', '69.5', '22.5'], ['RoBERTa \\textsc [ITALIC] LARGE', '(Q, A)', '[EMPTY]', '49.8', '45.8', '72.0', '25.2'], ['RoBERTa \\textsc [ITALIC] LARGE', '(C, Q, A)', '[EMPTY]', '62.6', '55.6', '75.5', '40.0'], ['RoBERTa \\textsc [ITALIC] LARGE', '(C, Q, A)', '✓', '68.0', '65.1', '78.9', '54.3'], ['Graduate Students', '(C, Q, A)', '[EMPTY]', '–', '63.0', '57.1', '67.2'], ['Ceiling Performance', '(C, Q, A)', '[EMPTY]', '–', '100', '100', '100']]
This dataset is built on questions designed for students who apply for admission to graduate schools; thus, we randomly choose 100 samples from the testing set and divide them into ten tests, which are distributed to ten different graduate students at a university. We take the average of their scores and present it as the baseline of graduate students. The data of ReClor are carefully chosen and modified from only high-quality questions from standardized graduate entrance exams. We set the ceiling performance to 100% since ambiguous questions are not included in the dataset.
Shielding Google’s language toxicity model against adversarial attacks
1801.01828
Figure 4: Effectiveness in disarming the obfuscation attack. These radial histograms show the proportion of obfuscated comments correctly scored with at least the same toxicity of the original comment, out of 1000 variants (red area), as obtained by GP model (green sector) and TP+GP method (amber sector). (a) On the obfuscation-50 dataset. (b) On the obfuscation-99 dataset.
['(a)', '(b)']
[]
In order to further compare the effectiveness of the proposed method in disarming the obfuscation attack, we illustrate the results in Fig. 4. It can be seen that on the obfuscation-50 dataset the TP+GP method (amber area) is able to correctly score a large proportion of comments across all topics. This pattern is replicated on the obfuscation-99 dataset, although the proportions decline slightly due to the higher corruption level, which yields more acid attacks.
Robust Text-to-SQL Generation with Execution-Guided Decoding
1807.03100
Table 1: Test and Dev accuracy (%) of the models on WikiSQL data, where Accsyn refers to syntactical accuracy and Accex refers to execution accuracy. “+ EG (k)” indicates that model outputs are generated using the execution-guided strategy with beam size k.
['Model', 'Dev Accsyn', 'Dev Accex', 'Test Accsyn', 'Test Accex']
[['Pointer-SQL (chenglong)', '61.8', '72.5', '62.3', '71.9'], ['Pointer-SQL + EG (3)', '66.6', '77.3', '66.7', '76.9'], ['Pointer-SQL + EG (5)', '[BOLD] 67.5', '[BOLD] 78.4', '[BOLD] 67.9', '[BOLD] 78.3'], ['Coarse2Fine (coarse2fine)', '72.9', '79.2', '71.7', '78.4'], ['Coarse2Fine + EG (3)', '75.6', '83.4', '74.8', '83.0'], ['Coarse2Fine + EG (5)', '[BOLD] 76.0', '[BOLD] 84.0', '[BOLD] 75.4', '[BOLD] 83.8']]
We report both the syntactical accuracy Accsyn corresponding to the ratio of predictions that are exactly the ground truth SQL query, as well as the execution accuracy Accex corresponding to the ratio of predictions that return the same result as the ground truth when executed. Note that the execution accuracy is higher than syntactical accuracy as syntactically different programs can generate the same results (e.g., programs differing only in predicate order). In execution-guided decoding, we report two model variants, one using a beam size of 3 and the other a beam size of 5.
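A hedged sketch of how the two metrics can be computed, given some `execute(query)` callable (e.g. backed by sqlite3); this is an illustration of the definitions above, not the authors' evaluation script.

```python
def sql_accuracies(predictions, golds, execute):
    """Syntactic accuracy: exact match with the gold SQL string.
    Execution accuracy: both queries return the same result when run
    through the user-supplied `execute(query)` callable."""
    syn, ex = 0, 0
    for pred, gold in zip(predictions, golds):
        if pred.strip() == gold.strip():
            syn += 1
        try:
            if execute(pred) == execute(gold):
                ex += 1
        except Exception:  # malformed predictions count as execution failures
            pass
    n = len(golds)
    return syn / n, ex / n
```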
Neural Language Correction with Character-Based Attention
1603.09727
Table 6: CoNLL development set recall for 5 most frequent error categories with and without training on data with synthesized article/determiner and noun number errors. Wci denotes wrong collocation/idiom errors.
['Type', 'Count', '[ITALIC] R no aug', '[ITALIC] R aug']
[['[BOLD] ArtOrDet', '717', '20.08', '29.14'], ['Wci', '434', '2.30', '1.61'], ['[BOLD] Nn', '400', '31.50', '51.00'], ['Preposition', '315', '13.01', '7.93'], ['Word form', '223', '26.90', '19.73']]
Effects of Data Augmentation: We obtain promising improvements using data augmentation, boosting the F0.5-score on the development set from 31.55 to 34.81. The same phenomenon has been observed by Rozovskaya et al. (2012). Interestingly, the recall of other error types (see Ng et al. (2014) for descriptions) decreases. We surmise this is because the additional training data contains only ArtOrDet and Nn errors, and hence the network is encouraged to simply copy the output when those error types are not present. We hope synthesizing data with a variety of other error types may fix this issue and improve performance.
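For reference, the F0.5-score quoted here is the precision-weighted F-measure; a minimal sketch (with made-up precision/recall values) is shown below.

```python
def f_beta(precision, recall, beta=0.5):
    """F_0.5 weighs precision more heavily than recall, which suits grammatical
    error correction, where spurious edits are costlier than missed ones."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(round(f_beta(0.40, 0.25), 4))  # 0.3571 -- illustrative numbers, not from the paper
```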
Neural Language Correction with Character-Based Attention
1603.09727
Table 1: Performance on Lang-8 test set. Adding the language model results in a negligible increase in performance, illustrating the difficulty of the user-generated forum setting.
['Method', 'Test BLEU']
[['No edits', '59.54'], ['Spell check', '58.91'], ['RNN', '61.63'], ['RNN + LM', '[BOLD] 61.70']]
Note that since there may be multiple ways to correct an error and some errors are left uncorrected, the baseline of using uncorrected sentences is more difficult to improve upon than it may initially appear. We suspect that, due to proper nouns, acronyms, and inconsistent capitalization conventions in Lang-8, spell checking actually decreased BLEU slightly. To the best of our knowledge, no other work has reported results on this challenging task.
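A minimal sketch of the "No edits" baseline computation using NLTK's corpus BLEU; the whitespace tokenisation and the toy sentences are illustrative assumptions, not the paper's setup.

```python
from nltk.translate.bleu_score import corpus_bleu

def no_edit_baseline_bleu(sources, references):
    """BLEU of simply copying the (possibly erroneous) source sentences,
    i.e. the 'No edits' baseline. Naive whitespace tokenisation."""
    hyps = [src.split() for src in sources]
    refs = [[ref.split() for ref in refs_for_sent] for refs_for_sent in references]
    return corpus_bleu(refs, hyps)

# Toy usage with hypothetical sentences:
srcs = ["He go to school yesterday .", "I like apples ."]
refs = [["He went to school yesterday ."], ["I like apples ."]]
print(round(no_edit_baseline_bleu(srcs, refs), 3))
```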
DynaBERT: Dynamic BERT with Adaptive Width and Depth1footnote 11footnote 1Work in progress.
2004.04037
Table 5: Comparison of results on development set between using conventional distillation and inplace distillation. For DynaBERTW, the average accuracy over four width multipliers are reported. For DynaBERT, the average accuracy over four width multipliers and three depth multipliers are reported. The higher accuracy in each group is highlighted.
['[EMPTY]', 'Distillation type', 'SST-2', 'CoLA', 'MRPC', 'RTE']
[['DynaBERTW', 'Conventional', '92.7', '[BOLD] 55.9', '86.1', '69.5'], ['DynaBERTW', 'Inplace', '92.6', '[BOLD] 55.9', '[BOLD] 87.0', '[BOLD] 70.0'], ['DynaBERT', 'Conventional', '[BOLD] 92.7', '[BOLD] 54.8', '83.2', '[BOLD] 69.5'], ['DynaBERT', 'Inplace', '92.5', '54.5', '[BOLD] 84.3', '69.0']]
As can be seen, inplace distillation yields higher average accuracy on MRPC and RTE when training DynaBERTW. However, in the training of DynaBERT, the model initialized with inplace distillation can lead to even worse performance than the one initialized with conventional distillation.
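As a rough illustration of what inplace distillation could look like: the narrower sub-networks are trained against the soft predictions of the widest sub-network within the same step. The sketch assumes a model exposing a `width_mult` argument; it is not the DynaBERT code.

```python
import torch.nn.functional as F

def inplace_distillation_loss(model, x, y, width_mults=(0.25, 0.5, 0.75, 1.0), T=1.0):
    """The widest sub-network learns from the labels; its detached soft
    predictions supervise the narrower sub-networks in the same pass."""
    full_logits = model(x, width_mult=max(width_mults))   # assumed interface
    loss = F.cross_entropy(full_logits, y)
    teacher = F.softmax(full_logits.detach() / T, dim=-1)
    for w in width_mults[:-1]:
        student_logits = model(x, width_mult=w)
        loss = loss + T * T * F.kl_div(
            F.log_softmax(student_logits / T, dim=-1), teacher, reduction="batchmean")
    return loss
```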
DynaBERT: Dynamic BERT with Adaptive Width and Depth1footnote 11footnote 1Work in progress.
2004.04037
Table 2: Results on test set of our proposed DynaBERT and DynaRoBERTa. Note that the evaluation metric for QQP and MRPC here is “F1”.
['[EMPTY]', 'MNLI-m', 'MNLI-mm', 'QQP', 'QNLI', 'SST-2', 'CoLA', 'STS-B', 'MRPC', 'RTE']
[['BERTBASE', '84.6', '83.6', '71.9', '90.7', '93.4', '51.5', '85.2', '87.5', '69.6'], ['DynaBERT ( [ITALIC] mw, [ITALIC] md=1,1)', '84.5', '84.1', '72.1', '91.3', '93.0', '54.9', '84.4', '87.9', '69.9'], ['RoBERTaBASE', '86.0', '85.4', '70.9', '92.5', '94.6', '50.5', '88.1', '90.0', '73.0'], ['DynaRoBERTa ( [ITALIC] mw, [ITALIC] md=1,1)', '86.9', '86.7', '71.9', '92.5', '94.7', '54.1', '88.4', '90.8', '73.7']]
Again, the proposed DynaBERT achieves accuracy comparable to BERTBASE at the same size. Interestingly, the proposed DynaRoBERTa outperforms RoBERTaBASE on seven out of eight tasks. A possible reason is that allowing adaptive width and depth increases the training difficulty and acts as a form of regularization, which contributes positively to the performance.
DynaBERT: Dynamic BERT with Adaptive Width and Depth1footnote 11footnote 1Work in progress.
2004.04037
Table 4: Ablation study in the training of DynaBERT. Results on the development set are reported. The highest average accuracy over four width multipliers for each depth multiplier is highlighted.
['[EMPTY]', '( [ITALIC] mw, [ITALIC] md)', 'SST-2 1.0x', 'SST-2 0.75x', 'SST-2 0.5x', 'CoLA 1.0x', 'CoLA 0.75x', 'CoLA 0.5x', 'MRPC 1.0x', 'MRPC 0.75x', 'MRPC 0.5x', 'RTE 1.0x', 'RTE 0.75x', 'RTE 0.5x']
[['[EMPTY]', '1.0x', '92.0', '91.6', '90.9', '58.5', '57.7', '42.9', '85.3', '83.8', '78.4', '67.9', '66.8', '66.4'], ['Vanilla DynaBERT', '0.75x', '92.3', '91.6', '91.1', '57.9', '56.4', '42.4', '86.0', '83.1', '78.7', '69.0', '66.8', '63.9'], ['[EMPTY]', '0.5x', '91.9', '91.9', '90.6', '55.9', '53.3', '40.6', '86.0', '83.1', '79.7', '68.2', '65.0', '63.9'], ['[EMPTY]', '0.25x', '91.6', '91.3', '89.0', '52.0', '50.0', '27.6', '83.1', '80.4', '77.5', '65.3', '63.5', '60.3'], ['[EMPTY]', 'avg.', '92.0', '91.6', '90.4', '56.1', '54.4', '38.4', '85.1', '82.6', '78.6', '67.6', '65.5', '63.6'], ['[EMPTY]', '( [ITALIC] mw, [ITALIC] md)', '1.0x', '0.75x', '0.5x', '1.0x', '0.75x', '0.5x', '1.0x', '0.75x', '0.5x', '1.0x', '0.75x', '0.5x'], ['[EMPTY]', '1.0x', '92.9', '93.3', '92.7', '57.1', '56.7', '52.6', '86.3', '85.8', '85.0', '72.2', '70.4', '66.1'], ['+ Distillation and', '0.75x', '93.1', '93.1', '92.1', '57.7', '55.4', '51.9', '86.5', '85.5', '84.1', '72.6', '72.2', '64.6'], ['Data augmentation', '0.5x', '92.9', '92.1', '91.3', '54.1', '53.7', '47.5', '84.8', '84.1', '83.1', '72.9', '72.6', '66.1'], ['[EMPTY]', '0.25x', '92.5', '91.7', '91.6', '50.7', '51.0', '44.6', '83.8', '83.8', '81.4', '67.5', '67.9', '62.5'], ['[EMPTY]', 'avg.', '92.9', '92.6', '91.9', '54.9', '54.2', '49.2', '[BOLD] 85.4', '[BOLD] 84.8', '[BOLD] 83.4', '[BOLD] 71.3', '70.8', '64.8'], ['[EMPTY]', '( [ITALIC] mw, [ITALIC] md)', '1.0x', '0.75x', '0.5x', '1.0x', '0.75x', '0.5x', '1.0x', '0.75x', '0.5x', '1.0x', '0.75x', '0.5x'], ['[EMPTY]', '1.0x', '93.2', '93.3', '92.7', '59.7', '59.1', '54.6', '84.1', '83.6', '82.6', '72.2', '71.8', '66.1'], ['+ Fine-tuning', '0.75x', '93.0', '93.1', '92.8', '60.8', '59.6', '53.2', '84.8', '83.6', '82.8', '71.8', '73.3', '65.7'], ['[EMPTY]', '0.5x', '93.3', '92.7', '91.6', '58.4', '56.8', '48.5', '83.6', '83.3', '82.6', '72.2', '72.2', '67.9'], ['[EMPTY]', '0.25x', '92.8', '92.0', '92.0', '50.9', '51.6', '43.7', '82.6', '83.6', '81.1', '68.6', '68.6', '63.2'], ['[EMPTY]', 'avg.', '[BOLD] 93.1', '[BOLD] 92.8', '[BOLD] 92.3', '[BOLD] 57.5', '[BOLD] 56.8', '[BOLD] 50.0', '83.8', '83.5', '82.3', '71.2', '[BOLD] 71.5', '[BOLD] 65.7']]
We evaluate the effect of knowledge distillation, data augmentation and final fine-tuning in the training of DynaBERT on four GLUE datasets. The DynaBERT trained without knowledge distillation, data augmentation and final fine-tuning is called “vanilla DynaBERT”. Additional fine-tuning further improves the average accuracy for all three depth multipliers on SST-2 and CoLA, and for two of the three on RTE, but harms the performance on MRPC. Empirically, we choose the model with the higher average accuracy between the variants with and without fine-tuning.
An Unsupervised Approach for Mapping between Vector Spaces
1711.05680
Table 8: Inter-Language comparison of results. “Increase on SG” denotes the difference between the mentioned systems and the scores of SkipGram embeddings trained on the target languages - fr-SG, de-SG and ur-SG respectively. We are not using ConceptNet-T for this comparison because any equivalent embedding is not available for Hindi.
['[BOLD] System', '[BOLD] Language Pair', '[BOLD] Increase on SG']
[['hi-SG-T', 'Hindi-Urdu', '15.58'], ['en-SG-T', 'English-French', '6.14'], ['en-SG-T', 'English-German', '2.34']]
This may be because Hindi and Urdu are much more similar to each other than English is to either French or German.
MDE: Multiple Distance Embeddings for Link Prediction in Knowledge Graphs
1905.10702
Table 6: Results of MDE after 100 iterations when removing one of the terms. Best results are in bold.
['Removed Term', 'MR', '[BOLD] WN18RR MRR', 'Hit@10', 'MR', '[BOLD] WN18 MRR', 'Hit@10']
[['S1', '3983', '0.417', '[BOLD] 0.501', '[BOLD] 113', '0.838', '[BOLD] 0.946'], ['S2', '[BOLD] 3727', '0.358', '0.490', '131', '0.823', '0.943'], ['S3', '3960', '0.427', '0.499', '161', '0.850', '0.943'], ['S4', '3921', '0.366', '0.478', '163', '0.705', '0.929'], ['[ITALIC] None', '3985', '[BOLD] 0.428', '[BOLD] 0.501', '151', '[BOLD] 0.844', '[BOLD] 0.946']]
The evaluations on WN18RR and WN18 show that the removal of S4 has the most negative effect on the performance of MDE. The removal of S1, which was one of the best-performing terms in the previous experiment, has the least effect. Nevertheless, S1 improves the MRR of MDE. Also, when we remove S2, the MRR and Hit@10 are negatively influenced, indicating that there exist cases where S2 performs better than the other terms, although in the individual tests it performed the worst among all terms.
MDE: Multiple Distance Embeddings for Link Prediction in Knowledge Graphs
1905.10702
Table 5: Results of each individual term in MDE on WN18RR and FB15k-237. Best results are in bold.
['Individual Term', 'MR', '[BOLD] WN18RR MRR', 'Hit@10', 'MR', '[BOLD] FB15k-237 MRR', 'Hit@10']
[['S1', '3137', '0.184', '0.447', '187', '0.260', '0.454'], ['S2', '8063', '0.283', '0.376', '439', '0.204', '0.342'], ['S3', '3153', '0.183', '0.449', '[BOLD] 186', '0.258', '0.455'], ['S4', '[BOLD] 2245', '[BOLD] 0.323', '[BOLD] 0.467', '220', '[BOLD] 0.273', '[BOLD] 0.462']]
We can see that S4 outperforms the other terms, while S1 and S3 perform very similarly on these two datasets. Among the four terms, S2 performs the worst, since most of the relations in the test datasets follow an antisymmetric pattern and S2 is not efficient in modeling them.
Actionable and Political Text Classification using Word Embeddings and LSTM
1607.02501
Table 1: LSTM Language Models for Actionability. (V: Vocabulary Size)
['[BOLD] Language', '[BOLD] Training Samples', '[BOLD] Test Samples', '[BOLD] Training Accuracy ( [ITALIC] V=100k)', '[BOLD] Test Accuracy ( [ITALIC] V=100k)', '[BOLD] Training Accuracy ( [ITALIC] V=20k)', '[BOLD] Test Accuracy ( [ITALIC] V=20k)']
[['af', '174,119', '43,530', '0.8810', '0.8525', '0.8665', '0.8547'], ['ar', '380,038', '95,010', '0.8266', '0.7940', '0.8086', '0.7950'], ['cs', '31,407', '7,852', '0.9266', '0.8988', '0.9163', '0.9087'], ['da', '134,623', '33,656', '0.9098', '0.8844', '0.8974', '0.8905'], ['de', '223,438', '55,860', '0.8714', '0.8316', '0.8526', '0.8352'], ['en', '6,841,344', '1,710,337', '0.8540', '0.8508', '0.8490', '0.8475'], ['es', '720,783', '180,196', '0.8563', '0.8406', '0.8461', '0.8402'], ['et', '113,635', '28,409', '0.9140', '0.8874', '0.9050', '0.8958'], ['fa', '21,991', '5,498', '0.8308', '0.7641', '0.8160', '0.7643'], ['fi', '93,844', '23,462', '0.9247', '0.8951', '0.9103', '0.9013'], ['fr', '397,749', '99,438', '0.8349', '0.8112', '0.8197', '0.8088'], ['hr', '48,728', '12,182', '0.9166', '0.8747', '0.8976', '0.8771'], ['hu', '35,719', '8,930', '0.8251', '0.8507', '0.8814', '0.8550'], ['id', '403,060', '100,766', '0.8826', '0.8624', '0.8699', '0.8616'], ['it', '219,898', '54,975', '0.8915', '0.8736', '0.8818', '0.8734'], ['lt', '22,257', '5,565', '0.8655', '0.8762', '0.8803', '0.8652'], ['nl', '286,350', '71,588', '0.8530', '0.8249', '0.8361', '0.8240'], ['no', '140,932', '35,234', '0.9056', '0.8798', '0.8944', '0.8843'], ['pl', '79,386', '19,847', '0.9069', '0.8668', '0.8901', '0.8787'], ['pt', '465,925', '116,482', '0.8545', '0.8357', '0.8441', '0.8363'], ['ro', '81,920', '20,481', '0.9097', '0.8726', '0.8899', '0.8798'], ['sk', '73,808', '18,452', '0.9296', '0.9000', '0.9093', '0.9098'], ['sl', '76,784', '19,196', '0.9120', '0.8658', '0.8878', '0.8728'], ['so', '146,428', '36,608', '0.8911', '0.8591', '0.8773', '0.8656'], ['sq', '31,683', '7,921', '0.8991', '0.8794', '0.8846', '0.8799'], ['sv', '114,395', '28,599', '0.9276', '0.9013', '0.9151', '0.9069'], ['sw', '51,396', '12,849', '0.9029', '0.8434', '0.8778', '0.8594'], ['th', '52,859', '13,215', '0.8898', '0.7601', '0.8088', '0.7997'], ['tl', '356,072', '89,018', '0.8487', '0.8253', '0.8374', '0.8261'], ['tr', '123,217', '30,805', '0.8798', '0.8291', '0.8570', '0.8331'], ['vi', '58,769', '14,693', '0.8769', '0.8542', '0.8694', '0.8541'], ['Combined', '1,417,723', '354,431', '0.9031', '0.8597', '0.8802', '0.8681']]
We examine actionability across languages in our experiments. A pair of training and test datasets is therefore created for each language, as well as one dataset that includes all languages. Some languages have sparse data and a smaller number of samples, while others are larger. Our dataset sizes vary from 27k messages for the language with the least data (Farsi) to 8.5 million messages for the language with the most (English). A combined language dataset is also prepared, which contains 1.7 million sampled messages across all languages. Languages such as Czech (cs), Finnish (fi), Slovak (sk) and Swedish (sv) show the highest accuracies of over 90%. The lowest accuracies are for languages such as Arabic (ar), Farsi (fa) and Thai (th), where the accuracies are slightly below 80%, though in some cases, like fa and th, the low accuracy may be attributed to sparsity of data. As expected, certain languages are more predictable in terms of actionability than others, and we observe a variation of 15% between the least and most accurate models. In our experiments we consider two vocabulary sizes for each language: a smaller vocabulary of 20,000 words and a larger one of 100,000 words. The larger vocabulary adds significant overhead in training time, since more parameters need to be learned by the model.
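To illustrate why the 100k vocabulary is costlier than the 20k one, the snippet below counts input-embedding parameters for an assumed embedding dimension of 128 (the actual dimension is not given in this excerpt).

```python
def embedding_params(vocab_size, embedding_dim=128):
    """Parameters in the input embedding alone; the jump from a 20k to a
    100k vocabulary drives most of the extra training cost.
    128 is an assumed embedding dimension used only for illustration."""
    return vocab_size * embedding_dim

for v in (20_000, 100_000):
    print(f"V={v:>7,}: {embedding_params(v):,} embedding parameters")
```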
Actionable and Political Text Classification using Word Embeddings and LSTM
1607.02501
Table 2: Traditional vs LSTM Model Performance
['[BOLD] Language', '[BOLD] Traditional Model Accuracy', '[BOLD] LSTM Model Accuracy']
[['[BOLD] en', '0.74', '0.85'], ['[BOLD] es', '0.76', '0.84'], ['[BOLD] fr', '0.76', '0.81'], ['[BOLD] pt', '0.71', '0.84'], ['[BOLD] tl', '0.79', '0.83'], ['[BOLD] id', '0.74', '0.86'], ['[BOLD] af', '0.75', '0.85'], ['[BOLD] it', '0.72', '0.87'], ['[BOLD] nl', '0.72', '0.82'], ['[BOLD] ar', '0.81', '0.80']]
Furthermore, the improvement with the LSTM model is substantial, the largest being 15% for Italian (it). This clearly demonstrates the superiority of neural networks for text classification problems such as actionability.
Actionable and Political Text Classification using Word Embeddings and LSTM
1607.02501
Table 3: Non-Actionable vs Actionable Classification Examples
['[BOLD] Message', '[BOLD] Prediction Score', '[BOLD] Classification']
[['@Verizon #CEO confirms interest in possible @Yahoo acquisition', '0.02', 'NON-ACTIONABLE'], ['@PGE4Me’s new program allows customers to go 100% #solar without having to install rooftop solar panels!', '0.14', 'NON-ACTIONABLE'], ['Just another reason to despise @comcast and @xfinity Report: Comcast wants to limit Netflix binges', '0.15', 'NON-ACTIONABLE'], ['We’re switching up the Official 1D @Spotify Playlist, which classic tracks do you want to see added?', '0.32', 'NON-ACTIONABLE'], ['I freaking HATE the changes @Yahoo has done to both the home page and my freaking mail!!!', '0.35', 'NON-ACTIONABLE'], ['Thanks for the 6 hour outage in my area @comcast ill def be calling and having money taken off my monthly bill.', '0.59', 'ACTIONABLE'], ['@PGE4Me what’s up with your online portal? Taking forever to login and then everything says unavailable', '0.73', 'ACTIONABLE'], ['@comcastcares @XFINITY Help! My wife just bought an episode of a show that we already watched (for free). Can I reverse the charge?', '0.82', 'ACTIONABLE'], ['@Yahoo I accidentally deleted my email, how do I get it back?', '0.85', 'ACTIONABLE'], ['@SpotifyCares hi someone has hacked my spotify and it won’t let me listen to music because they are listening from another device help :(', '0.96', 'ACTIONABLE']]
A useful side effect of the predictions made with the model is that, along with the final classification, the prediction score for a given message is indicative of how strongly actionable the message is. We find that messages that simply convey some news are classified as the least actionable, and those which state an explicit issue with a call to action are classified as highly actionable. Broad complaints or general questions are classified as non-actionable or weakly actionable, which is a highly desirable property when sifting through such messages.
IMPROVED LARGE-MARGIN SOFTMAX LOSS FOR SPEAKER DIARISATION
1911.03970
Table 2: SERs (%) for models using all parameters in GLM-Softmax. The Overlapping Speech Model uses the approach of modifying the GLM-Softmax parameters for overlapping speech samples.
['Model', 'Combined v1', 'Combined v2', 'Overlapping Speech Model']
[['Model', '( [ITALIC] m1, [ITALIC] m2, [ITALIC] m3)=(1.05,0.08,0.02)', '( [ITALIC] m1, [ITALIC] m2, [ITALIC] m3)=(0.94,0.20,0.00)', '( [ITALIC] m1, [ITALIC] m2, [ITALIC] m3)=(1.045,0.04,0.05)'], ['Dev', '13.4', '13.5', '13.2'], ['Eval', '12.4', '12.6', '10.9']]
The combined approach shows further improvements relative to the single-parameter models. Combined v1 gives a further 3.0% relative SER reduction over m1=1.10 (and 24.6% relative over the baseline), with Combined v2 performing competitively. The notable property of v2 is that m1<1 makes the training criterion harder as the model becomes more accurate, and m2=0.20 ensures cos(θt)⩾ψ(θt). For the Overlapping Speech Model (using the approach of changing the parameters to (m1,m2,m3)=(1,0,0) for overlapping speech samples), since the AMI dataset includes a significant amount of overlapping speech, the best parameters to use are notably different from those of the conventional approach (which uses just one set of parameters). Overall, the reduction in SER relative to the baseline is 17.0% and 40.4% for dev and eval respectively, giving an overall reduction of 29.5%.
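The exact margin function ψ is not reproduced in this excerpt; a common combined-margin form consistent with the three parameters is ψ(θ)=cos(m1·θ+m2)−m3, which reduces to plain softmax at (1,0,0), the setting used here for overlapping-speech samples. The sketch below uses that assumed form purely for illustration.

```python
import math

def psi(theta, m1=1.0, m2=0.0, m3=0.0):
    """Assumed combined angular/cosine margin: psi(theta) = cos(m1*theta + m2) - m3.
    With (1, 0, 0) this reduces to the ordinary softmax target cos(theta)."""
    return math.cos(m1 * theta + m2) - m3

theta = math.radians(30)
print(round(math.cos(theta), 3))               # plain softmax target: 0.866
print(round(psi(theta, 1.05, 0.08, 0.02), 3))  # Combined v1 parameters
print(round(psi(theta, 0.94, 0.20, 0.00), 3))  # Combined v2: cos(theta) >= psi(theta)
```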
K-BERT: Enabling Language Representation with Knowledge Graph
1909.07606
Table 3: Results of various models on specific-domain tasks (%).
['[BOLD] Models\\Datasets', '[BOLD] Finance_Q&A [ITALIC] P.', '[BOLD] Finance_Q&A [ITALIC] R.', '[BOLD] Finance_Q&A [ITALIC] F1', '[BOLD] Law_Q&A [ITALIC] P.', '[BOLD] Law_Q&A [ITALIC] R.', '[BOLD] Law_Q&A [ITALIC] F1', '[BOLD] Finance_NER [ITALIC] P.', '[BOLD] Finance_NER [ITALIC] R.', '[BOLD] Finance_NER [ITALIC] F1', '[BOLD] Medicine_NER [ITALIC] P.', '[BOLD] Medicine_NER [ITALIC] R.', '[BOLD] Medicine_NER [ITALIC] F1']
[['Pre-trained on WikiZh by Google.', 'Pre-trained on WikiZh by Google.', 'Pre-trained on WikiZh by Google.', 'Pre-trained on WikiZh by Google.', 'Pre-trained on WikiZh by Google.', 'Pre-trained on WikiZh by Google.', 'Pre-trained on WikiZh by Google.', 'Pre-trained on WikiZh by Google.', 'Pre-trained on WikiZh by Google.', 'Pre-trained on WikiZh by Google.', 'Pre-trained on WikiZh by Google.', 'Pre-trained on WikiZh by Google.', 'Pre-trained on WikiZh by Google.'], ['Google BERT', '81.9', '86.0', '83.9', '83.1', '90.1', '86.4', '84.8', '87.4', '86.1', '91.9', '93.1', '92.5'], ['K-BERT (HowNet)', '83.3', '84.4', '83.9', '83.7', '91.2', '87.3', '86.3', '89.0', '[BOLD] 87.6', '93.2', '93.3', '93.3'], ['K-BERT (CN-DBpedia)', '81.5', '88.6', '[BOLD] 84.9', '82.1', '93.8', '[BOLD] 87.5', '86.1', '88.7', '87.4', '93.9', '93.8', '93.8'], ['K-BERT (MedicalKG)', '-', '-', '-', '-', '-', '-', '-', '-', '-', '94.0', '94.4', '[BOLD] 94.2'], ['Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.'], ['Our BERT', '82.1', '86.5', '84.2', '83.2', '91.7', '87.2', '84.9', '87.4', '86.1', '91.8', '93.5', '92.7'], ['K-BERT (HowNet)', '82.8', '85.8', '84.3', '83.0', '92.4', '87.5', '86.3', '88.5', '87.3', '93.5', '93.8', '93.7'], ['K-BERT (CN-DBpedia)', '81.9', '87.1', '[BOLD] 84.4', '83.1', '92.6', '[BOLD] 87.6', '86.3', '88.6', '[BOLD] 87.4', '93.9', '94.3', '94.1'], ['K-BERT (MedicalKG)', '-', '-', '-', '-', '-', '-', '-', '-', '-', '94.1', '94.3', '[BOLD] 94.2']]
Similarly, the specific-domain datasets are split into three parts: train, dev, and test, which are used to fine-tune, select and test the model, respectively. Compared with BERT, K-BERT achieves a significant performance improvement on domain tasks. In terms of F1, K-BERT with CN-DBpedia improves the performance on all tasks by 1∼2%. This gain comes from the domain knowledge in the KG. From these results, we can conclude that KGs, especially domain KGs, are very helpful for domain-specific tasks.
K-BERT: Enabling Language Representation with Knowledge Graph
1909.07606
Table 1: Results of various models on sentence classification tasks on open-domain tasks (Acc. %)
['[BOLD] Models\\Datasets', '[BOLD] Book_review [ITALIC] Dev', '[BOLD] Book_review [ITALIC] Test', '[BOLD] Chnsenticorp [ITALIC] Dev', '[BOLD] Chnsenticorp [ITALIC] Test', '[BOLD] Shopping [ITALIC] Dev', '[BOLD] Shopping [ITALIC] Test', '[BOLD] Weibo [ITALIC] Dev', '[BOLD] Weibo [ITALIC] Test', '[BOLD] XNLI [ITALIC] Dev', '[BOLD] XNLI [ITALIC] Test', '[BOLD] LCQMC [ITALIC] Dev', '[BOLD] LCQMC [ITALIC] Test']
[['Pre-trainied on WikiZh by Google.', 'Pre-trainied on WikiZh by Google.', 'Pre-trainied on WikiZh by Google.', 'Pre-trainied on WikiZh by Google.', 'Pre-trainied on WikiZh by Google.', 'Pre-trainied on WikiZh by Google.', 'Pre-trainied on WikiZh by Google.', 'Pre-trainied on WikiZh by Google.', 'Pre-trainied on WikiZh by Google.', 'Pre-trainied on WikiZh by Google.', 'Pre-trainied on WikiZh by Google.', 'Pre-trainied on WikiZh by Google.', 'Pre-trainied on WikiZh by Google.'], ['Google BERT', '88.3', '[BOLD] 87.5', '93.3', '94.3', '96.7', '96.3', '98.2', '98.3', '76.0', '75.4', '88.4', '86.2'], ['K-BERT (HowNet)', '88.6', '87.2', '[BOLD] 94.6', '[BOLD] 95.6', '[BOLD] 97.1', '[BOLD] 97.0', '98.3', '98.3', '[BOLD] 76.8', '[BOLD] 76.1', '[BOLD] 88.9', '86.9'], ['K-BERT (CN-DBpedia)', '[BOLD] 88.6', '87.3', '93.9', '95.3', '96.6', '96.5', '98.3', '98.3', '76.5', '76.0', '88.6', '[BOLD] 87.0'], ['Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.'], ['Our BERT', '[BOLD] 88.6', '87.9', '94.8', '95.7', '96.9', '[BOLD] 97.1', '98.2', '98.2', '77.0', '76.3', '89.0', '86.7'], ['K-BERT (HowNet)', '88.5', '87.4', '[BOLD] 95.4', '95.6', '96.9', '96.9', '98.3', '[BOLD] 98.4', '[BOLD] 77.2', '[BOLD] 77.0', '[BOLD] 89.2', '[BOLD] 87.1'], ['K-BERT (CN-DBpedia)', '88.8', '87.9', '95.0', '[BOLD] 95.8', '[BOLD] 97.1', '97.0', '98.3', '98.3', '76.2', '75.9', '89.0', '86.9']]
Each of the above datasets is divided into three parts: train, dev, and test. We use the train part to fine-tune the model and then evaluate its performance on the dev and test parts. The language KG (HowNet) performs better than the encyclopedic KG on semantic similarity tasks (i.e., XNLI and LCQMC), while for Q&A and NER tasks (i.e., NLPCC-DBQA and MSRA-NER) the encyclopedic KG (CN-DBpedia) is more suitable than the language KG. Therefore, it is important to choose the right KG based on the type of task.
K-BERT: Enabling Language Representation with Knowledge Graph
1909.07606
Table 2: Results of various models on NLPCC-DBQA (MRR %) and MSRA-NER (F1 %).
['[BOLD] Models\\Datasets', '[BOLD] NLPCC-DBQA [ITALIC] Dev', '[BOLD] NLPCC-DBQA [ITALIC] Test', '[BOLD] MSRA-NER [ITALIC] Dev', '[BOLD] MSRA-NER [ITALIC] Test']
[['Pre-trained on WikiZh by Google.', 'Pre-trained on WikiZh by Google.', 'Pre-trained on WikiZh by Google.', 'Pre-trained on WikiZh by Google.', 'Pre-trained on WikiZh by Google.'], ['Google BERT', '93.4', '93.3', '94.5', '93.6'], ['K-BERT (HowNet)', '93.2', '93.1', '95.8', '94.5'], ['K-BERT (CN-DBpedia)', '[BOLD] 94.5', '[BOLD] 94.3', '[BOLD] 96.6', '[BOLD] 95.7'], ['Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.', 'Pre-trained on WikiZh and WebtextZh by us.'], ['Our BERT', '93.3', '93.6', '95.7', '94.6'], ['K-BERT (HowNet)', '93.2', '93.1', '96.3', '95.6'], ['K-BERT (CN-DBpedia)', '[BOLD] 93.6', '[BOLD] 94.2', '[BOLD] 96.4', '[BOLD] 95.6']]
Each of the above datasets is divided into three parts: train, dev, and test. We use the train part to fine-tune the model and then evaluate its performance on the dev and test parts. The language KG (HowNet) performs better than the encyclopedic KG on semantic similarity tasks (i.e., XNLI and LCQMC), while for Q&A and NER tasks (i.e., NLPCC-DBQA and MSRA-NER) the encyclopedic KG (CN-DBpedia) is more suitable than the language KG. Therefore, it is important to choose the right KG based on the type of task. In addition, it can be observed that the use of an additional corpus (WebtextZh) also brings a performance boost, but not as significant as that from the KG.
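MRR, the metric used for NLPCC-DBQA, can be computed as in the short sketch below (reciprocal rank of the first correct candidate, averaged over questions); the toy candidates are illustrative.

```python
def mean_reciprocal_rank(ranked_candidates, gold_sets):
    """MRR over questions: reciprocal rank of the first correct candidate
    (contributes 0 if no candidate is correct)."""
    total = 0.0
    for ranked, gold in zip(ranked_candidates, gold_sets):
        for rank, cand in enumerate(ranked, start=1):
            if cand in gold:
                total += 1.0 / rank
                break
    return total / len(ranked_candidates)

# Toy usage: first question answered at rank 2, second at rank 1.
print(mean_reciprocal_rank([["a", "b"], ["c", "d"]], [{"b"}, {"c"}]))  # 0.75
```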
A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications
1804.09635
Table 4: Mean ± standard deviation of various measurements on reviews in the ACL 2017 and ICLR 2017 sections of PeerRead. Note that ACL aspects were written by the reviewers themselves, while ICLR aspects were predicted by our annotators based on the review.
['[BOLD] Measurement', '[BOLD] ACL’17', '[BOLD] ICLR’17']
[['Review length (words)', '531±323', '346±213'], ['Appropriateness', '4.9±0.4', '2.6±1.3'], ['Meaningful comparison', '3.5±0.8', '2.9±1.1'], ['Substance', '3.6±0.8', '3.0±0.9'], ['Originality', '3.9±0.9', '3.3±1.1'], ['Clarity', '3.9±0.9', '4.2±1.0'], ['Impact', '3.2±0.5', '3.4±1.0'], ['Overall recommendation', '3.3±0.9', '3.3±1.4']]
Most of the mean scores are similar in both sections, with a few notable exceptions. The comments in ACL 2017 reviews tend to be about 50% longer than those in the ICLR 2017 reviews. Since review length is often thought of as a measure of review quality, this raises interesting questions about the quality of reviews at ICLR vs. ACL conferences. We note, however, that ACL 2017 reviews were explicitly opted in, while the ICLR 2017 reviews include all official reviews, which is likely to result in a positive bias in the quality of the ACL reviews included in this study.
A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications
1804.09635
Table 3: Mean review scores for each presentation format (oral vs. poster). Raw scores range between 1–5. For reference, the last column shows the sample standard deviation based on all reviews.
['[BOLD] Presentation format', '[BOLD] Oral', '[BOLD] Poster', 'Δ', '[BOLD] stdev']
[['Recommendation', '3.83', '2.92', '0.90', '0.89'], ['Substance', '3.91', '3.29', '0.62', '0.84'], ['Clarity', '4.19', '3.72', '0.47', '0.90'], ['Meaningful comparison', '3.60', '3.36', '0.24', '0.82'], ['Impact', '3.27', '3.09', '0.18', '0.54'], ['Originality', '3.91', '3.88', '0.02', '0.87'], ['Soundness/Correctness', '3.93', '4.18', '-0.25', '0.91']]
Notably, the average ‘overall recommendation’ score in reviews recommending an oral presentation is 0.9 higher than in reviews recommending a poster presentation, suggesting that reviewers tend to recommend oral presentation for submissions which are holistically stronger.
A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications
1804.09635
Table 5: Test accuracies (%) for acceptance classification. Our best model outperforms the majority classifiers in all cases.
['[EMPTY]', '[BOLD] ICLR', '[BOLD] cs.cl', '[BOLD] cs.lg', '[BOLD] cs.ai']
[['Majority', '57.6', '68.9', '67.9', '92.1'], ['Ours (Δ)', '65.3 +7.7', '75.7 +6.8', '70.7 +2.8', '92.6 +0.5']]
Our best model outperforms the majority classifier in all cases, with up to 22% error reduction. Since our models lack the sophistication to assess the quality of the work discussed in the given paper, this might indicate that some of the features we define are correlated with strong papers, or bias reviewers’ judgments.
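The "up to 22% error reduction" figure follows from the accuracies in the table; the snippet below reproduces the computation (relative reduction of classification error over the majority baseline).

```python
def relative_error_reduction(baseline_acc, model_acc):
    """Relative reduction in classification error over the majority baseline (in %)."""
    baseline_err = 100.0 - baseline_acc
    model_err = 100.0 - model_acc
    return 100.0 * (baseline_err - model_err) / baseline_err

for name, base, ours in [("ICLR", 57.6, 65.3), ("cs.cl", 68.9, 75.7),
                         ("cs.lg", 67.9, 70.7), ("cs.ai", 92.1, 92.6)]:
    print(f"{name}: {relative_error_reduction(base, ours):.1f}% error reduction")
```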