Dataset schema: paper (string, 0–839 chars), paper_id (string, 1–12 chars), table_caption (string, 3–2.35k chars), table_column_names (large string, 13–1.76k chars), table_content_values (large string, 2–11.9k chars), text (large string, 69–2.82k chars).
Sparse Sequence-to-Sequence Models
1905.05702
Table 1: Average per-language accuracy on the test set (CoNLL–SIGMORPHON 2018 task 1) averaged or ensembled over three runs.
['[ITALIC] α output', '[ITALIC] α attention', 'high (avg.)', 'high (ens.)', 'medium (avg.)', 'medium (ens.)']
[['1', '1', '93.15', '94.20', '82.55', '85.68'], ['[EMPTY]', '1.5', '92.32', '93.50', '83.20', '85.63'], ['[EMPTY]', '2', '90.98', '92.60', '83.13', '85.65'], ['1.5', '1', '94.36', '94.96', '84.88', '86.38'], ['[EMPTY]', '1.5', '94.44', '95.00', '84.93', '86.55'], ['[EMPTY]', '2', '94.05', '94.74', '84.93', '86.59'], ['2', '1', '[BOLD] 94.59', '[BOLD] 95.10', '84.95', '86.41'], ['[EMPTY]', '1.5', '94.47', '95.01', '[BOLD] 85.03', '[BOLD] 86.61'], ['[EMPTY]', '2', '94.32', '94.89', '84.96', '86.47'], ['UZH ( 2018 )', 'UZH ( 2018 )', 'UZH ( 2018 )', '96.00', '[EMPTY]', '86.64']]
We report the official metric of the shared task, word accuracy averaged across languages. In addition to the average results of three individual model runs, we use an ensemble of those models, where we decode by averaging the raw probabilities at each time step. Our best sparse loss models beat the softmax baseline by nearly a full percentage point with ensembling, and by up to two and a half points in the medium setting without ensembling. The choice of attention has a smaller impact. In contrast to the top-ranked UZH (2018) system, our only departure from standard seq2seq training is the drop-in replacement of softmax by entmax.
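The ensembling rule described here, averaging the raw per-step output probabilities of several runs before picking the next token, can be sketched in a few lines. The model interface below (a `step` method returning a vocabulary distribution) is hypothetical and only illustrates the averaging step, not the authors' actual seq2seq implementation.

```python
import numpy as np

def ensemble_greedy_decode(models, src, max_len=50, eos_id=2):
    """Greedy decoding that averages the raw per-step probabilities of several models.

    `models` is a list of objects with a hypothetical `step(src, prefix) -> np.ndarray`
    method returning a probability distribution over the output vocabulary.
    """
    prefix = []
    for _ in range(max_len):
        # Average the probability distributions (not the logits) at this time step.
        probs = np.mean([m.step(src, prefix) for m in models], axis=0)
        next_id = int(np.argmax(probs))
        prefix.append(next_id)
        if next_id == eos_id:
            break
    return prefix
```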
Few-Shot Dialogue Generation Without Annotated Data: A Transfer Learning Approach
1908.05854
Table 1: Evaluation results. Marked with asterisks are individual results higher than the ZSDG baseline which are achieved with the minimum amount of training data, and in bold is the model consistently outperforming ZSDG in all domains and metrics with minimum data.
['[BOLD] ModelDomain', '[BOLD] Navigation BLEU, %', '[BOLD] Navigation Entity F1, %', '[BOLD] Weather BLEU, %', '[BOLD] Weather Entity F1, %', '[BOLD] Schedule BLEU, %', '[BOLD] Schedule Entity F1, %']
[['ZSDG', '5.9', '14.0', '8.1', '31', '7.9', '36.9'], ['NLU_ZSDG', '6.1±2.2', '12.7±3.3', '5.0±1.6', '16.8±6.7', '6.0±1.7', '26.5±5.4'], ['NLU_ZSDG+LAED', '7.9±1', '12.3±2.9', '8.7±0.6', '21.5±6.2', '8.3±1', '20.7±4.8'], ['FSDG@1%', '6.0±1.8', '9.8±4.8', '6.9±1.1', '22.2±10.7', '5.5±0.8', '25.6±8.2'], ['FSDG@3%', '7.9±0.7', '11.8±4.4', '9.6±1.8', '39.8±7', '8.2±1.1', '34.8±4.4'], ['FSDG@5%', '8.3±1.3', '15.3±6.3', '11.5±1.6', '38.0±10.5', '9.7±1.4', '37.6±8.0'], ['FSDG@10%', '9.8±0.8', '19.2±3.2', '12.9±2.4', '40.4±11.0', '12.0±1.0', '38.2±4.2'], ['FSDG+VAE@1%', '3.6±2.6', '9.3±4.1', '6.8±1.3', '23.2±10.1', '4.6±1.6', '28.9±7.3'], ['FSDG+VAE@3%', '6.9±1.9', '15.6±5.8', '9.5±2.6', '32.2±11.8', '6.6±1.7', '34.8±7.7'], ['FSDG+VAE@5%', '7.8±1.9', '12.7±4.2', '10.1±2.1', '40.3±10.4', '8.2±1.7', '34.2±8.7'], ['FSDG+VAE@10%', '9.0±2.0', '18.0±5.8', '12.9±2.2', '40.1±7.6', '11.6±1.5', '39.9±6.9'], ['FSDG+LAED@1%', '7.1±0.8⋆', '10.1±4.5', '10.6±2.1⋆', '31.4±8.1⋆', '7.4±1.2', '29.1±6.6'], ['FSDG+LAED@3%', '9.2±0.8', '14.5±4.8⋆', '13.1±1.7', '40.8±6.1', '9.2±1.2⋆', '32.7±6.1'], ['[BOLD] FSDG+LAED@5%', '10.3±1.2', '15.6±4.5', '14.5±2.2', '40.9±8.6', '11.8±1.9', '37.6±6.1∗'], ['FSDG+LAED@10%', '12.3±0.9', '17.3±4.5', '17.6±1.9', '47.5±6.0', '15.2±1.6', '38.7±8.4']]
Our objective here is maximum accuracy with minimum training data required, and it can be seen that few-shot models with LAED representation are the best performing models for this objective. While the improvements can already be seen with simple FSDG, the use of LAED representation helps to significantly reduce the amount of in-domain training data needed: in most cases, the state-of-the-art results are attained with as little as 3% of in-domain data. At 5%, we see that FSDG+LAED consistently improves upon all other models in every domain, either by increasing the mean accuracy or by decreasing the variation. In contrast, the ZSDG setup used approximately 150 annotated training utterances for each of the 3 domains, totalling about 450 annotated utterances. Although in our few-shot approach we use full in-domain dialogues, we end up having a comparable amount of target-domain training data, with the crucial difference that none of those has to be annotated for our approach. Therefore, the method we introduced attains state-of-the-art in both accuracy and data-efficiency.
A Hierarchical Model for Data-to-Text Generation
1912.10011
Table 1: Evaluation on the RotoWire testset using relation generation (RG) count (#) and precision (P%), content selection (CS) precision (P%) and recall (R%), content ordering (CO), and BLEU. -: number of parameters unavailable.
['[EMPTY]', 'BLEU', 'RG P%', 'RG #', 'CS P%', 'CS R%', 'CS F1', 'CO', 'Nb Params']
[['Gold descriptions', '100', '96.11', '17.31', '100', '100', '100', '100', '[EMPTY]'], ['Wiseman', '14.5', '75.62', '[BOLD] 16.83', '32.80', '39.93', '36.2', '15.62', '45M'], ['Li', '16.19', '84.86', '19.31', '30.81', '38.79', '34.34', '16.34', '-'], ['Pudupully-plan', '16.5', '87.47', '34.28', '34.18', '51.22', '41', '18.58', '35M'], ['Puduppully-updt', '16.2', '[BOLD] 92.69', '30.11', '38.64', '48.51', '43.01', '[BOLD] 20.17', '23M'], ['Flat', '16.7.2', '76.621', '18.54.6', '31.67.7', '42.91', '36.42.4', '14.64.3', '14M'], ['Hierarchical-kv', '17.3', '89.041', '21.46.9', '38.571.2', '51.50.9', '44.19.7', '18.70.7', '14M'], ['Hierarchical-k', '[BOLD] 17.5.3', '89.461.4', '21.171.4', '[BOLD] 39.471.4', '[BOLD] 51.641', '[BOLD] 44.7.6', '18.90.7', '14M']]
For each proposed variant of our architecture, we report the mean score over ten runs, as well as the standard deviation in subscript. We also report the result of the oracle (metrics on the gold descriptions). Please note that gold descriptions trivially obtain 100% on all metrics except RG, as they are all based on comparison with themselves. RG scores are different, as the IE system is imperfect and fails to extract accurate entities 4% of the time. RG-# is an absolute count.
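As a reading aid for the subscript notation in the table (mean over ten runs with the standard deviation in subscript), here is a minimal sketch of how such entries could be produced; the exact formatting convention is my assumption, not the authors' code.

```python
import statistics

def mean_std_entry(scores, prec=2):
    """Format per-run scores as 'mean' with the std-dev as a trailing subscript."""
    mean = statistics.mean(scores)
    std = statistics.pstdev(scores)
    return f"{mean:.{prec}f}_{{{std:.1f}}}"   # e.g. '17.50_{0.2}' in LaTeX-style subscript

print(mean_std_entry([17.2, 17.8, 17.5, 17.4, 17.6]))
```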
MOSS: End-to-End Dialog System Framework with Modular Supervision
1909.05528
Table 1: Performance comparison on CamRest676 among the baselines, MOSS-all, and several variants of MOSS.
['[BOLD] Model', '[BOLD] Mat', '[BOLD] Succ.F1', '[BOLD] BLEU']
[['KVRN', '[EMPTY]', '[EMPTY]', '0.134'], ['NDM', '0.904', '0.832', '0.212'], ['LIDM', '0.912', '0.840', '0.246'], ['TSCP', '0.927', '0.854', '0.253'], ['MOSS w/o DPL', '0.932', '0.856', '0.251'], ['MOSS w/o NLU', '0.932', '0.857', '0.255'], ['MOSS-all × 60%', '0.947', '0.857', '0.202'], ['MOSS × (60%all + 40%raw )', '0.947', '0.859', '0.221'], ['MOSS-all', '[BOLD] 0.951', '[BOLD] 0.860', '[BOLD] 0.259']]
The first key takeaway is that the more supervision the model has, the better its performance: (i) KVRN < (ii) NDM ≈ (iii) LIDM < (iv) TSCP < (v) MOSS w/o DPL ≈ (vi) MOSS w/o NLU < (ix) MOSS-all. We note that this performance ranking matches the ranking of how much supervision each system receives: (i) KVRN only incorporates supervision from one dialog module (i.e., natural language generation); (ii, iii, iv) NDM, LIDM, and TSCP incorporate supervision from two dialog modules (i.e., dialog state tracking and natural language generation); (v, vi) MOSS without dialog policy learning (MOSS w/o DPL) and MOSS without natural language understanding (MOSS w/o NLU) incorporate supervision from three dialog modules; (ix) MOSS-all incorporates supervision from all four modules and outperforms all models on all three metrics. Another takeaway is that models with access to more detailed supervision need fewer dialogs to reach good performance. As for language generation quality (BLEU), MOSS-all with 60% training data performs worse; we suspect this is partially because it has seen fewer dialogs and thus has a weaker natural language generation module. We also observe a large improvement on the BLEU score, so we compare our model against TSCP and some variants of MOSS on LaptopNetwork. We augment the belief span Bt originally introduced in Lei et al. (2018) by concatenating the user act, the old Bt, and the system act in TSCP, as sketched below. This augmentation makes sure that TSCP has access to the same annotations as MOSS; otherwise TSCP could hardly generate reasonable responses.
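The belief-span augmentation used to give TSCP the same annotations as MOSS (concatenating the user act, the previous belief span, and the system act) can be sketched as below; the field values and separator are hypothetical placeholders.

```python
def augment_belief_span(user_act, old_belief_span, system_act, sep=";"):
    """Concatenate user act, previous belief span B_t, and system act into one string,
    so a TSCP-style model sees the same annotations as MOSS."""
    return sep.join([user_act, old_belief_span, system_act])

# Example with hypothetical annotations:
print(augment_belief_span("inform(food=italian)", "food=italian;area=centre", "request(price)"))
```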
An Automated Text Categorization Framework based on Hyperparameter Optimization
1704.01975
Table 2: Authorship Attribution Data sets.
['Dataset', '[BOLD] macro-F1 Cummins\xa0', '[BOLD] macro-F1 Escalante ', '[BOLD] macro-F1 [ITALIC] μTC']
[['CCA', '0.0182', '0.7032', '[BOLD] 0.7633'], ['NFL', '[BOLD] 0.7654', '0.7637', '0.7422'], ['Business', '0.7548', '0.7808', '[BOLD] 0.8199'], ['Poetry', '0.4489', '0.7003', '[BOLD] 0.7135'], ['Travel', '0.6758', '0.7392', '[BOLD] 0.8621'], ['Cricket', '0.9170', '0.8810', '[BOLD] 0.9665']]
The first task analyzed is authorship attribution. The pre-processing stage of μTC's input is all-terms; the others use the stemmed stage. The best performing classifiers are created by μTC, except for NFL, where the alternatives perform better. In the case of Business, Escalante et al. obtain the best accuracy. Please notice that NFL and Business are among the smaller datasets we tested; the lower performance of μTC may be caused by the small number of exemplars, while the alternative schemes take advantage of the few samples to compute better weights. In this benchmark, μTC produces the best result in all average cases. In a fine-grained comparison, only Meina surpasses μTC on gender identification for English. This is also true for larger datasets like those depicted in the figures, which show that even at k=2 μTC achieves almost its optimal actual performance, even though the predicted performance is most of the time better for larger k values. On the other hand, the binary partition method is prone to overfit, especially on small datasets and small 1−β values (i.e., small test sets).
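Since the paper reports both macro-F1 and accuracy for the same systems, a short sketch of how the two scores differ (macro-F1 averages per-class F1, accuracy ignores class balance) may help; scikit-learn is assumed to be available and the labels are toy placeholders.

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = ["A", "A", "A", "B", "B", "C"]   # toy authorship labels
y_pred = ["A", "A", "B", "B", "B", "A"]

print("accuracy :", accuracy_score(y_true, y_pred))
print("macro-F1 :", f1_score(y_true, y_pred, average="macro"))
```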
An Automated Text Categorization Framework based on Hyperparameter Optimization
1704.01975
Table 2: Authorship Attribution Data sets.
['Dataset', '[BOLD] Accuracy Cummins\xa0', '[BOLD] Accuracy Escalante ', '[BOLD] Accuracy [ITALIC] μTC']
[['CCA', '0.1000', '0.7372', '[BOLD] 0.7660'], ['NFL', '0.7778', '[BOLD] 0.8376', '0.7555'], ['Business', '0.7556', '[BOLD] 0.8358', '0.8222'], ['Poetry', '0.5636', '0.7405', '[BOLD] 0.7272'], ['Travel', '0.6833', '0.7845', '[BOLD] 0.8667'], ['Cricket', '0.9167', '0.9206', '[BOLD] 0.9667']]
The first task analyzed is authorship attribution. The pre-processing stage of μTC's input is all-terms; the others use the stemmed stage. The best performing classifiers are created by μTC, except for NFL, where the alternatives perform better. In the case of Business, Escalante et al. obtain the best accuracy. Please notice that NFL and Business are among the smaller datasets we tested; the lower performance of μTC may be caused by the small number of exemplars, while the alternative schemes take advantage of the few samples to compute better weights. In this benchmark, μTC produces the best result in all average cases. In a fine-grained comparison, only Meina surpasses μTC on gender identification for English. This is also true for larger datasets like those depicted in the figures, which show that even at k=2 μTC achieves almost its optimal actual performance, even though the predicted performance is most of the time better for larger k values. On the other hand, the binary partition method is prone to overfit, especially on small datasets and small 1−β values (i.e., small test sets).
An Automated Text Categorization Framework based on Hyperparameter Optimization
1704.01975
Table 4: Author profiling: PAN2017 benchmark [21], all methods were scored with the official gold-standard. All scores are based on the accuracy computation over the specified subset of items.
['Method', 'Task', 'Arabic', 'English', 'Spanish', 'Portuguese', 'Avg.']
[['[EMPTY]', 'Gender', '0.7569', '0.7938', '0.7975', '0.8038', '0.7880'], ['[ITALIC] μTC', 'Variety', '0.7894', '0.8388', '0.9364', '0.9750', '0.8849'], ['[EMPTY]', 'Joint', '0.6081', '0.6704', '0.7518', '0.7850', '0.7038'], ['[EMPTY]', 'Gender', '0.8006', '0.8233', '0.8321', '0.8450', '[BOLD] 0.8253'], ['Basile et al. ', 'Variety', '0.8313', '0.8988', '0.9621', '0.9813', '[BOLD] 0.9184'], ['[EMPTY]', 'Joint', '0.6831', '0.7429', '0.8036', '0.8288', '[BOLD] 0.7646'], ['[EMPTY]', 'Gender', '0.8031', '0.8071', '0.8193', '0.8600', '0.8224'], ['Martinc et al. ', 'Variety', '0.8288', '0.8688', '0.9525', '0.9838', '0.9085'], ['[EMPTY]', 'Joint', '0.6825', '0.7042', '0.7850', '0.8463', '0.7545'], ['[EMPTY]', 'Gender', '0.7838', '0.8054', '0.7957', '0.8538', '0.8097'], ['Tellez et al. ', 'Variety', '0.8275', '0.9004', '0.9554', '0.9850', '0.9171'], ['[EMPTY]', 'Joint', '0.6713', '0.7267', '0.7621', '0.8425', '0.7507']]
Please note the separate result reported for Tellez et al. in the table. The plain μTC, as described in this manuscript, achieves accuracies of 0.7880 and 0.8849, respectively, for gender and variety identification. The joint prediction of both classes achieves an accuracy of 0.7038.
An Automated Text Categorization Framework based on Hyperparameter Optimization
1704.01975
Table 7: spam classification
['Data set', '[BOLD] macro-F1 Androutsopoulos ', '[BOLD] macro-F1 Sakkis ', '[BOLD] macro-F1 Cheng ', '[BOLD] macro-F1 [ITALIC] μTC']
[['Ling-Spam', '-', '0.8957', '0.9870', '[BOLD] 0.9979'], ['PUA', '0.8897', '-', '-', '[BOLD] 0.9478'], ['PU1', '0.9149', '-', '[BOLD] 0.983', '0.9664'], ['PU2', '0.6794', '-', '-', '[BOLD] 0.9044'], ['PU3', '0.9265', '-', '[BOLD] 0.977', '0.9701']]
Here, it can be seen that the best results in the macro-F1 measure were obtained with our approach, μTC; nevertheless, the best results in the accuracy score were achieved by Androutsopoulos et al.
Impact of ASR on Alzheimer’s Disease Detection:All Errors are Equal, but Deletions are More Equal than Others
1904.01684
Table 1: Rates of ASR errors on DB and FP datasets.
['[BOLD] Dataset', '[BOLD] Group', '[BOLD] Del (%)', '[BOLD] Ins (%)', '[BOLD] Sub (%)']
[['DB', 'HC', '52.55', '4.15', '43.30'], ['DB', 'AD', '42.99', '5.70', '51.31'], ['FP', 'HC', '34.16', '11.72', '54.12'], ['FP', 'AD', '29.70', '12.14', '58.16']]
Rates of ASR errors for healthy and impaired speakers on the DB and FP datasets are shown in Tab. 1. The majority of errors arise from deletions and substitutions for both datasets and both groups.
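The deletion/insertion/substitution rates in the table come from aligning the ASR hypothesis with the reference transcript. A minimal Levenshtein-style alignment that counts the three error types (a sketch, not the authors' exact tooling) is shown below.

```python
def asr_error_counts(ref, hyp):
    """Count deletions, insertions, and substitutions between reference and hypothesis
    word lists using standard edit-distance dynamic programming."""
    n, m = len(ref), len(hyp)
    # dp[i][j] = (cost, dels, ins, subs) for ref[:i] vs hyp[:j]
    dp = [[(0, 0, 0, 0)] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = (i, i, 0, 0)
    for j in range(1, m + 1):
        dp[0][j] = (j, 0, j, 0)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if ref[i - 1] == hyp[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
                continue
            dele = (dp[i - 1][j][0] + 1, dp[i - 1][j][1] + 1, dp[i - 1][j][2], dp[i - 1][j][3])
            ins = (dp[i][j - 1][0] + 1, dp[i][j - 1][1], dp[i][j - 1][2] + 1, dp[i][j - 1][3])
            sub = (dp[i - 1][j - 1][0] + 1, dp[i - 1][j - 1][1], dp[i - 1][j - 1][2], dp[i - 1][j - 1][3] + 1)
            dp[i][j] = min(dele, ins, sub)
    _, d, i_, s = dp[n][m]
    return {"del": d, "ins": i_, "sub": s}

print(asr_error_counts("the boy saw the dog".split(), "the saw a dog".split()))
```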
Evaluating Syntactic Properties of Seq2seq Output with a Broad Coverage HPSG: A Case Study on Machine Translation
1809.02035
Table 1: The distribution of root node conditions for the reference and NMT translations on the 200K analysis sentence pairs. Root node conditions are taken from the recorded best derivation. The best derivation is chosen by the maximum entropy model included in the ERG.
['[BOLD] Source', '[BOLD] Strict [BOLD] Full', '[BOLD] Strict [BOLD] Frag', '[BOLD] Informal [BOLD] Full', '[BOLD] Informal [BOLD] Frag', '[BOLD] Unpar- [BOLD] seable']
[['Ref', '64.7', '2.4', '31.5', '1.4', '0.0'], ['NMT', '60.5', '3.0', '28.1', '1.6', '6.8'], ['Δ', '-4.2', '+0.6', '-3.4', '+0.2', '+6.8']]
Root node conditions are used by the ERG to denote whether the parser had to relax punctuation and capitalization rules, with “strict” and “informal”, and whether the derivation is of a full sentence or a fragment, with “full” and “frag”. Fragments can be isolated noun, verb, or prepositional phrases. Both full sentence root node conditions saw a decrease in usage, with the strict full root condition having the largest drop out of all conditions. Both fragments have a small increase in usage.
Text mining policy: Classifying forest and landscape restoration policy agenda with neural information retrieval
1908.02425
Table 4. Accuracy, precision, recall, and F1-score for policy agenda classification across 31 policy documents in Malawi (n=12), Kenya (n=12), and Rwanda (n=7).
['Agenda', 'Accuracy', 'Precision', 'Recall', 'F1']
[['Maintaining # trees', '0.90', '1.00', '0.90', '0.97'], ['Increasing # trees', '0.71', '0.75', '0.78', '0.77'], ['Economic benefits', '0.84', '0.89', '0.84', '0.86'], ['Health benefits', '0.74', '0.88', '0.77', '0.82'], ['Benefit sharing', '0.88', '0.92', '0.55', '0.75'], ['Land ownership', '0.87', '0.83', '0.89', '0.85'], ['Land use rights', '0.65', '0.66', '0.78', '0.73'], ['Local participation', '0.90', '0.92', '0.96', '0.94'], ['Silviculture', '0.87', '0.89', '0.82', '0.87'], ['Agroforestry', '0.87', '0.94', '0.78', '0.90'], ['Soil erosion', '0.87', '0.95', '0.86', '0.89'], ['Forest protection', '0.68', '0.48', '0.60', '0.54'], ['Buffer zone', '0.94', '0.87', '1.00', '0.92'], ['Timber product mgmt.', '0.85', '0.80', '0.90', '0.86'], ['[BOLD] Average', '[BOLD] 0.83', '[BOLD] 0.82', '[BOLD] 0.84', '[BOLD] 0.83']]
The methodology performed best on topics with a very narrow scope and little overlap with other agenda, such as the establishment of buffer zones, and worst on topics that are broad in nature, subjective, and have the potential to overlap with other agenda, such as land use rights and forest protection. Overall, we report similar metrics across the three countries, indicating that this approach is able to generalize to new contexts. Specifically, the F1-scores for Kenya, Malawi, and Rwanda were 0.85, 0.80, and 0.82, respectively.
Past, Present, Future: A Computational Investigation of the Typology of Tense in 1000 Languages
1704.08914
Table 2: MRR results for step 4. See text for details.
['language', 'past', 'present', 'future', 'all']
[['Arabic', '1.00', '0.39', '0.77', '0.72'], ['Chinese', '0.00', '0.00', '0.87', '0.29'], ['English', '1.00', '1.00', '1.00', '1.00'], ['French', '1.00', '1.00', '1.00', '1.00'], ['German', '1.00', '1.00', '1.00', '1.00'], ['Italian', '1.00', '1.00', '1.00', '1.00'], ['Persian', '0.77', '1.00', '1.00', '0.92'], ['Polish', '1.00', '1.00', '0.58', '0.86'], ['Russian', '0.90', '0.50', '0.62', '0.67'], ['Spanish', '1.00', '1.00', '1.00', '1.00'], ['all', '0.88', '0.79', '0.88', '0.85']]
We make three contributions. (i) Our basic hypotheses are H1 and H2. (H1) For an important linguistic feature, there exist a few languages that mark it overtly and easily recognizably. (H2) It is possible to project overt markers to overt and non-overt markers in other languages. Based on these two hypotheses we design SuperPivot, a new method for analyzing highly parallel corpora, and show that it performs well for the crosslingual analysis of the linguistic phenomenon of tense. (ii) Given a superparallel corpus, SuperPivot can be used for the analysis of any low-resource language represented in that corpus. (iii) We extend Michael Cysouw's pioneering work on typological analysis using parallel corpora by overcoming several limiting factors. The most important is that Cysouw's method is only applicable if markers of the relevant linguistic feature are recognizable on the surface in all languages. In contrast, we only assume that markers of the relevant linguistic feature are recognizable on the surface in a small number of languages. The rank for a particular ranking of n-grams is the rank of the first n-gram that is highly correlated with the relevant tense; e.g., character subsequences of the name "Paulus" are evaluated as incorrect, while the subsequence "-ed" in English is evaluated as correct for past. MRR is averaged over all n-gram sizes, 2≤n≤6. Chinese has consistent tense marking only for future, so results are poor. Russian and Polish perform poorly because their central grammatical category is aspect, not tense. The poor performance on Arabic is due to the limits of character n-gram features for a "templatic" language.
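A minimal sketch of the MRR computation described here: for each n-gram size, take the reciprocal rank of the first n-gram in the ranking that is judged correct for the tense, then average over the n-gram sizes (2 ≤ n ≤ 6). The ranked lists and the correctness judgment below are toy placeholders.

```python
def reciprocal_rank(ranked_ngrams, is_correct):
    """Reciprocal rank of the first n-gram judged correct for the tense; 0 if none is."""
    for rank, ngram in enumerate(ranked_ngrams, start=1):
        if is_correct(ngram):
            return 1.0 / rank
    return 0.0

def mrr_over_ngram_sizes(rankings_by_n, is_correct):
    """Average the reciprocal rank over the available n-gram sizes."""
    rrs = [reciprocal_rank(rankings_by_n[n], is_correct) for n in sorted(rankings_by_n)]
    return sum(rrs) / len(rrs)

# Toy example: '-ed' is a correct past-tense marker, substrings of 'Paulus' are not.
rankings = {2: ["au", "ed"], 3: ["aul", "ulu"]}
print(mrr_over_ngram_sizes(rankings, lambda g: g == "ed"))   # (1/2 + 0) / 2 = 0.25
```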
Speaker Recognition with Random Digit Strings Using Uncertainty Normalized HMM-based i-vectors
1907.06111
TABLE III: Combinations of Uncertainty Normalization, Regularized LDA, S-Norm and PLDA
['Model', 'Version', '[BOLD] Male EER [%]', '[BOLD] Male NDCFminold', '[BOLD] Male NDCFminnew', '[BOLD] Female EER [%]', '[BOLD] Female NDCFminold', '[BOLD] Female NDCFminnew']
[['Proposed', 'Uncert. Norm, Reg. LDA, S-Norm', '[BOLD] 1.52', '[BOLD] 0.093', '[BOLD] 0.517', '[BOLD] 1.77', '[BOLD] 0.094', '[BOLD] 0.424'], ['Proposed', 'Uncert. Norm, Reg. LDA', '2.04', '0.113', '0.533', '2.57', '0.133', '0.515'], ['Proposed', 'S-Norm', '2.15', '0.113', '0.546', '3.12', '0.143', '0.561'], ['Proposed', 'Uncert. Norm, S-Norm', '1.68', '0.093', '0.550', '1.89', '0.102', '0.440'], ['Proposed', 'Uncert. Norm, Reg. LDA, PLDA', '2.37', '0.118', '0.491', '2.63', '0.176', '0.516']]
First of all, we observe that the contribution of Regularized LDA is rather minor compared to uncertainty normalization. This result is rather surprising; it shows that state-of-the-art performance can be attained even without explicit channel modelling, i.e. without the need to collect multiple training recordings per speaker coming from different channels, sessions, or handsets.
Speaker Recognition with Random Digit Strings Using Uncertainty Normalized HMM-based i-vectors
1907.06111
TABLE V: Comparison between different number of HMM states.
['Number of HMM states', '[BOLD] Male EER [%]', '[BOLD] Male NDCFminold', '[BOLD] Male NDCFminnew', '[BOLD] Female EER [%]', '[BOLD] Female NDCFminold', '[BOLD] Female NDCFminnew']
[['4', '1.59', '0.096', '0.519', '1.82', '0.097', '0.431'], ['8', '1.52', '0.093', '0.517', '1.77', '0.094', '0.424'], ['16', '1.50', '0.089', '0.516', '1.76', '0.089', '0.422'], ['32', '1.56', '0.094', '0.520', '1.78', '0.092', '0.428']]
As we observe, the performance is rather insensitive to Sd, being slightly better for Sd=16. However, we choose Sd=8 for the rest of the experiments, since the differences are minor and the algorithm becomes less computationally and memory demanding.
Speaker Recognition with Random Digit Strings Using Uncertainty Normalized HMM-based i-vectors
1907.06111
TABLE VI: Comparison between LDA with and without length normalization.
['Method', '[BOLD] Male EER [%]', '[BOLD] Male NDCFminold', '[BOLD] Male NDCFminnew', '[BOLD] Female EER [%]', '[BOLD] Female NDCFminold', '[BOLD] Female NDCFminnew']
[['LDA with length normalization', '1.52', '0.093', '0.517', '1.77', '0.094', '0.424'], ['LDA without length normalization', '1.68', '0.112', '0.521', '1.98', '0.105', '0.431']]
It is generally agreed that applying length normalization before LDA improves its performance. In order to re-examine this positive effect, we perform an experiment comparing length normalization followed by LDA against LDA without length normalization. The system with MFCC features and uncertainty normalization is used as the single system in this experiment. The results show that although cosine similarity scoring applies length normalization implicitly, applying length normalization before LDA and after uncertainty compensation is beneficial. As discussed above, length normalization makes the vectors more normally distributed, which is in line with the Gaussian assumptions of LDA.
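A minimal sketch of the comparison described here, assuming scikit-learn: i-vectors are length-normalized (scaled to unit L2 norm) before fitting LDA, versus fitting LDA on the raw vectors. The data below are random stand-ins for i-vectors and speaker labels.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def length_normalize(X):
    """Scale each vector to unit L2 norm, making the data more Gaussian-like."""
    return X / np.linalg.norm(X, axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))          # toy "i-vectors"
y = rng.integers(0, 10, size=200)       # toy speaker labels

lda_with_norm = LinearDiscriminantAnalysis().fit(length_normalize(X), y)
lda_without_norm = LinearDiscriminantAnalysis().fit(X, y)
```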
Neural Emoji Recommendation in Dialogue Systems
1612.04609
Table 3: Evaluation results of P@1 on different emojis
['Emoji', 'Definition', 'S-LSTM', 'H-LSTM']
[['[EMPTY]', '[ITALIC] tears of joy', '16.5', '21.6'], ['[EMPTY]', '[ITALIC] thinking', '21.6', '22.7'], ['[EMPTY]', '[ITALIC] laugh', '17.5', '24.1'], ['[EMPTY]', '[ITALIC] nervous', '23.2', '27.1'], ['[EMPTY]', '[ITALIC] shy', '23.5', '28.5'], ['[EMPTY]', '[ITALIC] delicious', '33.1', '32.7'], ['[EMPTY]', '[ITALIC] cry', '35.6', '38.9'], ['[EMPTY]', '[ITALIC] astonished', '46.6', '47.4'], ['[EMPTY]', '[ITALIC] angry', '49.3', '51.0'], ['[EMPTY]', '[ITALIC] heart', '60.3', '62.2']]
For further comparisons, we report the P@1 results on different emoji categories. The evaluation results show different performance across emojis, which implies that some emojis are indeed more confusing and harder to predict than others. (2) Emojis such as heart and angry are relatively easy to predict, since these emojis are more straightforward and are usually used in constrained contexts. (3) On the contrary, predicting emojis such as tears of joy and thinking is more challenging, as these emojis are more ambiguous and complicated. For instance, the emoji tears of joy is usually used to express a compound feeling mixing slight sadness, helplessness, and embarrassment. Such compound emotions could be smoothly replaced by other emojis like cry or nervous depending on the context. (4) H-LSTM has advantages over S-LSTM on almost every emoji category, especially on the more complicated emojis. This confirms that the information in multi-turn dialogue indeed helps in understanding the emotions of more complicated dialogues.
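P@1 per emoji, as reported in the table, is simply the fraction of test instances of that emoji for which it is ranked first; a small sketch under that assumption, with toy labels:

```python
from collections import defaultdict

def p_at_1_per_emoji(gold, top1_predictions):
    """For each gold emoji, compute the fraction of its instances where it is ranked first."""
    correct, total = defaultdict(int), defaultdict(int)
    for g, p in zip(gold, top1_predictions):
        total[g] += 1
        correct[g] += int(g == p)
    return {e: correct[e] / total[e] for e in total}

print(p_at_1_per_emoji(["heart", "heart", "cry"], ["heart", "angry", "cry"]))
```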
Unsupervised Recurrent Neural Network Grammars
1904.03746
Table 1: Language modeling perplexity (PPL) and grammar induction F1 scores on English (PTB) and Chinese (CTB) for the different models. We separate results for those that do not make use of annotated data (top) versus those that do (mid). Note that our PTB setup from Dyer et al. (2016) differs considerably from the usual language modeling setup Mikolov et al. (2010) since we model each sentence independently and use a much larger vocabulary (see section 3.1).
['Model', 'PTB PPL', 'PTB [ITALIC] F1', 'CTB PPL', 'CTB [ITALIC] F1']
[['RNNLM', '93.2', '–', '201.3', '–'], ['PRPN (default)', '126.2', '32.9', '290.9', '32.9'], ['PRPN (tuned)', '96.7', '41.2', '216.0', '36.1'], ['Left Branching Trees', '100.9', '10.3', '223.6', '12.4'], ['Right Branching Trees', '93.3', '34.8', '203.5', '20.6'], ['Random Trees', '113.2', '17.0', '209.1', '17.4'], ['URNNG', '90.6', '40.7', '195.7', '29.1'], ['RNNG', '88.7', '68.1', '193.1', '52.3'], ['RNNG → URNNG', '85.9', '67.7', '181.1', '51.9'], ['Oracle Binary Trees', '–', '82.5', '–', '88.6']]
As a language model URNNG outperforms an RNNLM and is competitive with the supervised RNNG. The left branching baseline performs poorly, implying that the strong performance of URNNG/RNNG is not simply due to the additional depth afforded by the tree LSTM composition function (a left branching tree, which always performs reduce when possible, is the "deepest" model). The right branching baseline is essentially equivalent to an RNNLM and hence performs similarly. We found PRPN with default hyperparameters (which obtains a perplexity of 62.0 in the PTB setup from Mikolov et al.) to not perform well, but tuning hyperparameters improves performance. The supervised RNNG performs well as a language model, despite being trained on the joint (rather than marginal) likelihood objective. This indicates that explicit modeling of syntax helps generalization even with richly-parameterized neural models. Encouraged by these observations, we also experiment with a hybrid approach where we train a supervised RNNG first and continue fine-tuning the model (including the inference network) on the URNNG objective. This approach results in nontrivial perplexity improvements, and suggests that it is potentially possible to improve language models with supervision on parsed data. Note that we induce latent trees directly from words on the full dataset. For RNNG/URNNG we obtain the highest scoring tree from qϕ(z|x) through the Viterbi inside (i.e. CKY) algorithm. We calculate unlabeled F1 using evalb, which ignores punctuation and discards trivial spans (width-one and sentence spans). Since we compare F1 against the original, non-binarized trees (per convention), the F1 scores of models using oracle binarized trees constitute upper bounds.
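A minimal sketch of unlabeled bracketing F1 over spans, discarding width-one and whole-sentence spans as described; this is a simplified stand-in for evalb and ignores its punctuation handling.

```python
def nontrivial_spans(spans, sent_len):
    """Keep only spans wider than one word and not covering the whole sentence."""
    return {(i, j) for (i, j) in spans if j - i > 1 and not (i == 0 and j == sent_len)}

def unlabeled_f1(gold_spans, pred_spans, sent_len):
    gold = nontrivial_spans(gold_spans, sent_len)
    pred = nontrivial_spans(pred_spans, sent_len)
    if not gold or not pred:
        return 0.0
    tp = len(gold & pred)
    if tp == 0:
        return 0.0
    prec, rec = tp / len(pred), tp / len(gold)
    return 2 * prec * rec / (prec + rec)

gold = {(0, 5), (0, 2), (2, 5), (3, 5)}
pred = {(0, 5), (1, 3), (3, 5)}
print(unlabeled_f1(gold, pred, sent_len=5))   # 0.4
```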
Unsupervised Recurrent Neural Network Grammars
1904.03746
Table 2: (Top) Comparison of this work as a language model against prior works on sentence-level PTB with preprocessing from Dyer et al. (2016). Note that previous versions of RNNG differ from ours in terms of parameterization and model size. (Bottom) Results on a subset (1M sentences) of the one billion word corpus. PRPN† is the model from Shen et al. (2018), whose hyperparameters were tuned by us. RNNG‡ is trained on predicted parse trees from the self-attentive parser from Kitaev and Klein (2018).
['PTB', 'PPL']
[['KN 5-gram Dyer et\xa0al. ( 2016 )', '169.3'], ['RNNLM Dyer et\xa0al. ( 2016 )', '113.4'], ['Original RNNG Dyer et\xa0al. ( 2016 )', '102.4'], ['Stack-only RNNG Kuncoro et\xa0al. ( 2017 )', '101.2'], ['Gated-Attention RNNG Kuncoro et\xa0al. ( 2017 )', '100.9'], ['Generative Dep. Parser Buys and Blunsom ( 2015 )', '138.6'], ['RNNLM Buys and Blunsom ( 2018 )', '100.7'], ['Sup. Syntactic NLM Buys and Blunsom ( 2018 )', '107.6'], ['Unsup. Syntactic NLM Buys and Blunsom ( 2018 )', '125.2'], ['PRPN† Shen et\xa0al. ( 2018 )', '96.7'], ['This work:', '[EMPTY]'], ['RNNLM', '93.2'], ['URNNG', '90.6'], ['RNNG', '88.7'], ['RNNG → URNNG', '85.9'], ['1M Sentences', 'PPL'], ['PRPN† Shen et\xa0al. ( 2018 )', '77.7'], ['RNNLM', '77.4'], ['URNNG', '71.8'], ['RNNG‡', '72.9'], ['RNNG‡ → URNNG', '72.0']]
We find that a standard language model (RNNLM) is better at modeling short sentences, but underperforms models that explicitly take into account structure (RNNG/URNNG) when the sentence length is greater than 10. On this larger dataset URNNG still improves upon the RNNLM. We also trained an RNNG (and RNNG → URNNG) on this dataset by parsing the training set with the self-attentive parser from Kitaev and Klein These models improve upon the RNNLM but not the URNNG, potentially highlighting the limitations of using predicted trees for supervising RNNGs.
Unsupervised Recurrent Neural Network Grammars
1904.03746
Table 5: Metrics related to the generative model/inference network for RNNG/URNNG. For the supervised RNNG we take the "inference network" to be the discriminative parser trained alongside the generative model (see section 3.3). Recon. PPL is the reconstruction perplexity based on Eqϕ(z|x)[logpθ(x|z)], and KL is the Kullback-Leibler divergence. Prior entropy is the entropy of the conditional prior pθ(z|x).
['[EMPTY]', 'PTB RNNG', 'PTB URNNG', 'CTB RNNG', 'CTB URNNG']
[['PPL', '88.7', '90.6', '193.1', '195.7'], ['Recon. PPL', '74.6', '73.4', '183.4', '151.9'], ['KL', '7.10', '6.13', '11.11', '8.91'], ['Prior Entropy', '7.65', '9.61', '9.48', '15.13'], ['Post. Entropy', '1.56', '2.28', '6.23', '5.75'], ['Unif. Entropy', '26.07', '26.07', '30.17', '30.17']]
The "reconstruction" perplexity based on Eqϕ(z|x)[log pθ(x|z)] is much lower than the regular perplexity, and further, the Kullback-Leibler divergence between the conditional prior and the variational posterior, given by Eqϕ(z|x)[log qϕ(z|x) − log pθ(z|x)], is far from zero (see the KL row of the table).
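Written out, the KL term referenced above is just the standard definition, restated here for readability:

```latex
\mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p_\theta(z \mid x)\big)
  = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log \frac{q_\phi(z \mid x)}{p_\theta(z \mid x)}\right]
```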
SciDTB: Discourse Dependency TreeBank for Scientific Abstracts
1806.03653
Table 5: Performance of baseline parsers.
['[EMPTY]', '[BOLD] Dev set [BOLD] UAS', '[BOLD] Dev set [BOLD] LAS', '[BOLD] Test set [BOLD] UAS', '[BOLD] Test set [BOLD] LAS']
[['Vanilla transition', '[BOLD] 0.730', '0.557', '[BOLD] 0.702', '0.535'], ['Two-stage transition', '[BOLD] 0.730', '[BOLD] 0.577', '[BOLD] 0.702', '[BOLD] 0.545'], ['Graph-based', '0.607', '0.455', '0.576', '0.425'], ['Human', '0.806', '0.627', '0.802', '0.622']]
We also measure parsing accuracy with UAS and LAS; the human agreement is presented for comparison. With the addition of tree structural features in relation type prediction, the two-stage dependency parser achieves better LAS than the vanilla system on both the development and test sets. Compared with the graph-based model, the two transition-based baselines achieve higher accuracy in terms of both UAS and LAS. Using more effective training strategies such as MIRA may improve graph-based models. We can also see that human performance is still much higher than that of the three parsers, meaning there is large room for improvement in future work.
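UAS and LAS as used here can be computed in a few lines: UAS is the fraction of units whose predicted head is correct, while LAS additionally requires the relation label to match. A minimal sketch, assuming flat lists of (head, label) pairs:

```python
def uas_las(gold, pred):
    """gold/pred: lists of (head_index, relation_label) per dependency unit."""
    assert len(gold) == len(pred)
    head_ok = sum(g[0] == p[0] for g, p in zip(gold, pred))
    both_ok = sum(g == p for g, p in zip(gold, pred))
    n = len(gold)
    return head_ok / n, both_ok / n

gold = [(0, "ROOT"), (1, "elab"), (1, "attr")]
pred = [(0, "ROOT"), (1, "elab"), (2, "attr")]
print(uas_las(gold, pred))   # (0.666..., 0.666...)
```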
Argument Strength is in the Eye of the Beholder: Audience Effects in Persuasion
1708.09085
Table 6: Means and σ for belief change for neutral and entrenched participants presented with mono, fact, or emot argument types. Neutrals show more belief change, and all argument types significantly affect beliefs
['[EMPTY]', '[BOLD] N', '[BOLD] Mean change', '[ITALIC] σ [BOLD] change']
[['mono entrenched', '1826', '0.50', '1.09'], ['mono neutral', '1359', '0.62', '0.71'], ['fact entrenched', '258', '0.27', '0.79'], ['fact neutral', '202', '0.39', '0.55'], ['emot entrenched', '213', '0.35', '0.87'], ['emot neutral', '187', '0.37', '0.54'], ['ALL entrenched', '2951', '0.43', '1.00'], ['ALL neutral', '2234', '0.51', '0.65']]
Our first question is whether our method changed participants' beliefs. Belief change occurred for all argument types, and the change was statistically significant as measured by paired t-tests (t(5184) = 38.31, p < 0.0001). This confirms our hypothesis that social media can be mined for persuasive materials. In addition, all three types of arguments independently led to significant changes in belief. We defined people as having more entrenched initial beliefs if their response to the initial stance question was within 0.5 points of the two ends of the scale, i.e. (1.0-1.5) or (4.5-5.0), indicating an extreme initial view. We wanted to test whether the engaging, socially interesting, dialogic materials of emot and fact might promote more belief change than balanced curated monologic summaries. We tested the differences between argument types, finding a main effect for argument type (F(2,5179) = 31.59, p < 0.0001), with Tukey post-hoc tests showing mono led to more belief change than both emot and fact (both p < 0.0001), but no differences between emot and fact overall across all subjects. Finally, there was no interaction between Initial Belief and Argument Type (F(2,5179) = 1.25, p > 0.05): so although neutrals show more belief change overall, this susceptibility does not vary by argument type.
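The core statistical check described here (belief change significantly different from zero) corresponds to a paired t-test on pre/post stance ratings. A minimal sketch with SciPy; the ratings below are made-up numbers, not the study data.

```python
from scipy import stats

pre = [2.0, 1.5, 4.5, 3.0, 1.0, 5.0]    # toy initial stance ratings (1-5 scale)
post = [2.5, 2.0, 4.0, 3.5, 1.5, 4.5]   # toy ratings after reading the arguments

t, p = stats.ttest_rel(pre, post)
print(f"t = {t:.2f}, p = {p:.4f}")
```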
Content based Weighted Consensus Summarization
1802.00946
Table 1: System performance comparison
['System', 'DUC 2003 R-1', 'DUC 2003 R-2', 'DUC 2003 R-4', 'DUC 2004 R-1', 'DUC 2004 R-2', 'DUC 2004 R-4']
[['LexRank', '0.357', '0.081', '0.009', '0.354', '0.075', '0.009'], ['TexRank', '0.353', '0.072', '0.010', '0.356', '0.078', '0.010'], ['Centroid', '0.330', '0.067', '0.008', '0.332', '0.059', '0.005'], ['FreqSum', '0.349', '0.080', '0.010', '0.347', '0.082', '0.010'], ['TsSum', '0.344', '0.750', '0.008', '0.352', '0.074', '0.009'], ['Greedy-KL', '0.339', '0.074', '0.005', '0.342', '0.072', '0.010'], ['Borda', '0.351', '0.080', '0.0140', '0.360', '0.0079', '0.015'], ['WCS', '0.375', '0.088', '0.0150', '0.382', '0.093', '0.0180'], ['C-WCS', '0.390', '[BOLD] 0.109†', '0.0198', '[BOLD] 0.409†', '[BOLD] 0.110', '[BOLD] 0.0212'], ['Oracle', '[BOLD] 0.394', '0.104', '[BOLD] 0.0205†', '0.397', '0.107', '0.0211'], ['Submodular', '0.392', '0.102', '0.0186', '0.400', '[BOLD] 0.110', '0.0198'], ['DPP', '0.388', '0.104', '0.0154', '0.394', '0.105', '0.0202']]
The DUC 2003 and DUC 2004 datasets were used for evaluating the experiments. We report ROUGE-1, ROUGE-2 and ROUGE-4 recall. We use three baseline aggregation techniques against which the proposed method is compared. Besides Borda Count and WCS, we also compare the results with the choose-best Oracle technique. In the case of the Oracle method, we assume that the performance of each candidate system, in terms of ROUGE score, is known to us. For each document, we directly select the summary generated by the system that scored highest for that particular document and call it the meta-summary. This is a very strong baseline: the average ROUGE-1 score of this meta-system on the DUC 2003 dataset was 0.394, compared to a maximum ROUGE-1 of 0.357 for the best performing LexRank system.
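A minimal sketch of the choose-best Oracle baseline described here: for each document, pick the candidate system whose summary scores highest against the reference. The scoring function is abstracted; the toy unigram-recall scorer below is only a stand-in for a real ROUGE implementation.

```python
def oracle_meta_summary(candidate_summaries, reference, score_fn):
    """candidate_summaries: dict system_name -> summary text.
    score_fn: callable (summary, reference) -> score (e.g. ROUGE-1 recall)."""
    best_system = max(candidate_summaries,
                      key=lambda s: score_fn(candidate_summaries[s], reference))
    return best_system, candidate_summaries[best_system]

def toy_rouge_1(summary, reference):
    """Unigram recall of the reference against the summary (stand-in for ROUGE-1)."""
    ref_tokens = reference.lower().split()
    overlap = sum(tok in summary.lower().split() for tok in ref_tokens)
    return overlap / len(ref_tokens)

systems = {"LexRank": "storms hit the coast", "FreqSum": "the coast was hit by severe storms"}
print(oracle_meta_summary(systems, "severe storms hit the coast", toy_rouge_1))
```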
A Unified Tagging Solution:Bidirectional LSTM Recurrent Neural Network with Word Embedding
1511.00215
Table 5: Comparison different word embeddings.
['[BOLD] WE', '[BOLD] Dim', '[BOLD] Vocab#', '[BOLD] Train Corpus (Toks #)', '[BOLD] POS (Acc)', '[BOLD] CHUNK (F1)', '[BOLD] NER']
[['[BOLD] ', '80', '82K', 'Broadcast news (400M)', '96.97', '92.53', '84.69'], ['[BOLD] ', '50', '130K', 'RCV1+Wiki (221M+631M)', '97.02', '93.76', '89.34'], ['[BOLD] ', '300', '3M', 'Google news (10B)', '96.85', '92.45', '85.80'], ['[BOLD] ', '100', '1193K', 'Twitter (27B)', '97.02', '93.01', '87.33'], ['[BOLD] BLSTMWE(10m)', '100', '100K', 'US news (10M)', '96.61', '91.91', '84.66'], ['[BOLD] BLSTMWE(100m)', '100', '100K', 'US news (100M)', '97.10', '93.86', '86.47'], ['[BOLD] BLSTMWE(all)', '100', '100K', 'US news (536M)', '[BOLD] 97.26', '94.44', '88.38'], ['[BOLD] BLSTMWE(all) + ', '100', '113K', 'US news (536M)', '[BOLD] 97.26', '[BOLD] 94.59', '[BOLD] 89.64'], ['[BOLD] RANDOM', '100', '100K', '[EMPTY]', '96.61', '91.71', '82.52']]
RANDOM is the word embedding set composed of random values, which serves as the baseline. BLSTMWE(10m), BLSTMWE(100m) and BLSTMWE(all) are word embeddings trained by BLSTM-RNN on, respectively, the first 10 million words, the first 100 million words, and all 536 million words of the North American news corpus. While BLSTMWE(10m) does not bring an obvious improvement, BLSTMWE(100m) and BLSTMWE(all) significantly improve the performance. This shows that BLSTM-RNN can benefit from word embeddings trained by our approach, and that a larger training corpus indeed leads to better performance. This suggests that the result may be further improved by using an even bigger unlabeled dataset. In our experiment, BLSTMWE(all) can be trained in about one day (23 hrs) on an NVIDIA Tesla M2090 GPU. The training time increases linearly with the training corpus size.
A Unified Tagging Solution:Bidirectional LSTM Recurrent Neural Network with Word Embedding
1511.00215
Table 4: Comparison of systems with one and two BLSTM hidden layers.
['[BOLD] Sys', '[BOLD] POS(Acc)', '[BOLD] CHUNK(F1)', '[BOLD] NER(F1)']
[['B', '96.60', '91.91', '82.52'], ['BB', '96.63', '91.76', '82.66']]
Besides, we also evaluate a deep structure which uses multiple BLSTM layers. Deep BLSTMs have been reported to achieve significantly better performance than single-layer BLSTMs in various applications such as speech synthesis. The size of all hidden layers is set to 100.
Cross-domain Semantic Parsing via Paraphrasing
1704.05974
Table 3: Main experiment results. We combine the proposed paraphrase model with different word embedding initializations. I: in-domain, X: cross-domain, EN: per-example normalization, FS: per-feature standardization, ES: per-example standardization.
['Method', 'Calendar', 'Blocks', 'Housing', 'Restaurants', 'Publications', 'Recipes', 'Social', 'Basketball', 'Avg.']
[['[BOLD] Previous Methods', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Wang et al.\xa0Wang et\xa0al. ( 2015 )', '74.4', '41.9', '54.0', '75.9', '59.0', '70.8', '48.2', '46.3', '58.8'], ['Xiao et al.\xa0Xiao et\xa0al. ( 2016 )', '75.0', '55.6', '61.9', '80.1', '75.8', '–', '80.0', '80.5', '72.7'], ['Jia and Liang\xa0Jia and Liang ( 2016 )', '78.0', '58.1', '71.4', '76.2', '76.4', '79.6', '81.4', '85.2', '75.8'], ['Herzig and Berant\xa0Herzig and Berant ( 2017 )', '82.1', '[BOLD] 62.7', '78.3', '82.2', '[BOLD] 80.7', '82.9', '81.7', '86.2', '79.6'], ['[BOLD] Our Methods', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Random + I', '75.6', '60.2', '67.2', '77.7', '77.6', '80.1', '80.7', '86.5', '75.7'], ['Random + X', '79.2', '54.9', '74.1', '76.2', '78.5', '82.4', '82.5', '86.7', '76.9'], ['word2vec + I', '67.9', '59.4', '52.4', '75.0', '64.0', '73.2', '77.0', '87.5', '69.5'], ['word2vec + X', '78.0', '54.4', '63.0', '81.3', '74.5', '83.3', '81.5', '83.1', '74.9'], ['word2vec + EN + I', '63.1', '56.1', '60.3', '75.3', '65.2', '69.0', '76.4', '81.8', '68.4'], ['word2vec + EN + X', '78.0', '52.6', '63.5', '74.7', '65.2', '80.6', '79.9', '80.8', '71.2'], ['word2vec + FS + I', '78.6', '62.2', '67.7', '78.6', '75.8', '85.7', '81.3', '86.7', '77.1'], ['word2vec + FS + X', '[BOLD] 82.7', '59.4', '75.1', '80.4', '78.9', '85.2', '81.8', '87.2', '78.9'], ['word2vec + ES + I', '79.8', '60.2', '71.4', '81.6', '78.9', '84.7', '82.9', '86.2', '78.2'], ['word2vec + ES + X', '82.1', '62.2', '[BOLD] 78.8', '[BOLD] 83.7', '80.1', '[BOLD] 86.1', '[BOLD] 83.1', '[BOLD] 88.2', '[BOLD] 80.6']]
5.3.1 Comparison with Previous Methods. With our main novelties, cross-domain training and word embedding standardization, our full model outperforms the previous best model and achieves the best accuracy on 6 out of the 8 domains. Next we examine the two novelties separately.
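The FS and ES variants in the table refer to per-feature standardization and per-example standardization of the pre-trained word embeddings. Below is a minimal NumPy rendering of the two operations (my sketch, not the authors' code); the embedding matrix is a random stand-in.

```python
import numpy as np

def per_feature_standardize(E):
    """Standardize each embedding dimension (column) to zero mean and unit variance."""
    return (E - E.mean(axis=0)) / (E.std(axis=0) + 1e-8)

def per_example_standardize(E):
    """Standardize each word vector (row) to zero mean and unit variance."""
    return (E - E.mean(axis=1, keepdims=True)) / (E.std(axis=1, keepdims=True) + 1e-8)

E = np.random.randn(10000, 300)   # toy |V| x d embedding matrix
print(per_feature_standardize(E).std(axis=0)[:3])   # ~1 per dimension
print(per_example_standardize(E).std(axis=1)[:3])   # ~1 per word vector
```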
Grammatical Error Correction with Neural Reinforcement Learning
1707.00299
Table 4: Human (TrueSkill) and GLEU evaluation of system outputs on the development and test set.
['Models', 'dev set Human', 'dev set GLEU', 'test set Human', 'test set GLEU']
[['Original', '-1.072', '38.21', '-0.760', '40.54'], ['AMU', '-0.405', '41.74', '-0.168', '44.85'], ['CAMB14', '-0.160', '42.81', '-0.225', '46.04'], ['NUS', '-0.131', '46.27', '-0.249', '50.13'], ['CAMB16', '-0.117', '47.20', '-0.164', '52.05'], ['MLE', '-0.052', '48.24', '-0.110', '52.75'], ['NRL', '0.169', '[BOLD] 49.82', '0.111', '[BOLD] 53.98'], ['Reference', '1.769', '55.26', '1.565', '62.37']]
In both the dev and test sets, NRL outperforms MLE and the other baselines in both the human and automatic evaluations. Human evaluation and GLEU scores correlate highly, corroborating the reliability of GLEU. With respect to inter-annotator agreement, Spearman's rank correlation between Turkers is 55.6 for the dev set and 49.2 for the test set. The correlations are sufficiently high to show agreement between Turkers, considering the low chance level (i.e., ranking five randomly selected systems consistently between two Turkers).
Explainable Rumor Detection using Inter and Intra-feature Attention Networks
2007.11057
Table 5: Attention values for content latent features (USE embedding), handcrafted content features and user features for three records numbered 857, 870 and 94. Label 0 corresponds to a false rumor and Label 1 corresponds to a true rumor. In all cases, the model identified the status of the tweet correctly.
['[BOLD] Record Number', '857', '870', '94']
[['[BOLD] Label', '0', '1', '1'], ['[BOLD] Latent Features', '0.26', '0.41', '0.308'], ['[BOLD] Handcrafted Features', '0.363', '0.38', '0.447'], ['[BOLD] User Features', '0.377', '0.21', '0.245']]
Finally we show how to interpret the relative importance between the feature classes themselves (user features, handcrafted features and latent features) using an example. For record number 94, (identified correctly as the true rumor) the model gave 30.8% of the weight to latent features, 44.7% weight to handcrafted content features and 24.5% weight to the user features. This level of explanations can give the user a better idea of what features of the tweet or news item triggered the model to arrive at its conclusion.
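The per-record weights in the table are attention values over the three feature classes that sum to one. A minimal sketch of how such inter-feature attention could be computed is given below; the scoring vector is hypothetical, whereas the real model learns its attention parameters end to end.

```python
import numpy as np

def inter_feature_attention(latent_vec, handcrafted_vec, user_vec, score_w):
    """Softmax attention weights over the three feature-class representations.
    `score_w` is a hypothetical scoring vector standing in for learned parameters."""
    feats = [latent_vec, handcrafted_vec, user_vec]
    scores = np.array([f @ score_w for f in feats])
    weights = np.exp(scores - scores.max())
    return weights / weights.sum()    # three weights summing to 1

rng = np.random.default_rng(0)
print(inter_feature_attention(rng.normal(size=8), rng.normal(size=8),
                              rng.normal(size=8), rng.normal(size=8)))
```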
REST: A Thread Embedding Approach for Identifying and Classifying User-specified Information in Security Forums
2001.02660
Table 11: Classification: the performance of the five different methods in classifying threads in 10-fold cross validation.
['Datasets', 'Metrics', 'BOW', 'NMF', 'SWEM', 'FastText', 'LEAM', 'BERT', '[BOLD] REST']
[['OffensComm.', 'Accuracy', '75.33±0.1', '74.31±0.1', '75.55±0.21', '74.64±0.15', '74.88±0.22', '[BOLD] 78.58± 0.08', '77.1±0.18'], ['OffensComm.', 'F1 Score', '[EMPTY]', '[EMPTY]', '74.15±0.23', '72.5±0.15', '72.91±0.18', '[BOLD] 78.47±0.01', '75.10±0.14'], ['HackThisSite', 'Accuracy', '65.3±0.41', '69.46±0.12', '73.27±0.10', '69.92±0.08', '74.6±0.04', '68.99±0.4', '[BOLD] 76.8± 0.1'], ['HackThisSite', 'F1 Score', '[EMPTY]', '70.23±0.13', '71.89±0.14', '65.81±0.4', '71.41±0.09', '63.61±0.41', '[BOLD] 74.47±0.24'], ['EthicalHackers', 'Accuracy', '59.74± 0.21', '58.3± 0.15', '61.3± 0.17', '59.73± 0.21', '61.80 ±0.13', '54.91± 0.32', '[BOLD] 63.3± 0.09'], ['EthicalHackers', 'F1 Score', '[EMPTY]', '57.83±0.16', '59.6±0.23', '59.5±0.13', '60.9±0.17', '51.78±0.15', '[BOLD] 61.7±0.21']]
REST compared to the state of the art: our approach compares favourably against the competition. REST outperforms the other baseline methods by at least 1.4 percentage points in accuracy and 0.7 percentage points in F1 score, with the exception of BERT. First, using BERT "right out of the box" did not give good results; we therefore fine-tuned BERT for this domain. Even so, BERT performs poorly on two sites, HackThisSite and EthicalHackers, while it performs well on OffensiveCommunity. We attribute this to the limited training data in terms of text size and also to the nature of the language users use in such forums. For example, we found that the titles of two misclassified threads contained typos and used unconventional slang and writing structure: "Hw 2 gt st4rtd with r3v3r53 3ngin33ring 4 n00bs!!", "metaXploit 3xplained fa b3ginners!!!". We intend to investigate BERT and how it can be tuned further in future work.
REST: A Thread Embedding Approach for Identifying and Classifying User-specified Information in Security Forums
2001.02660
Table 8: Identification Precision: the precision of the identified thread of interest with the similarity-based method.
['[EMPTY]', 'OffensComm.', 'HackThisSite', 'EthicHack', 'Avg.']
[['Precision', '98.2', '97.5', '97.0', '97.5']]
Estimating precision. To evaluate precision, we want to identify what percentage of the retrieved threads are relevant. To this end, we resort to manual evaluation. We labeled 300 threads from each dataset retrieved with 50% of the keywords and asked our annotators to identify whether they are relevant. We find that, on average, more than 97.5% of the threads identified with the similarity-based method are relevant, with an inter-annotator agreement (Fleiss' kappa) of 0.952.
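Fleiss' kappa, as reported here, can be computed from an items × categories count matrix. Below is a compact, self-contained sketch of the standard formula; the counts are toy numbers, not the actual annotation data.

```python
import numpy as np

def fleiss_kappa(counts):
    """counts: (n_items, n_categories) matrix of how many annotators chose each category."""
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]                       # assumes equal raters per item
    p_j = counts.sum(axis=0) / (n_items * n_raters)        # category proportions
    P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 3 annotators labelling 4 threads as relevant / not relevant.
print(fleiss_kappa([[3, 0], [3, 0], [2, 1], [0, 3]]))
```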
Gender Bias in Coreference Resolution:Evaluation and Debiasing Methods
1804.06876
Table 2: F1 on OntoNotes and WinoBias development set. WinoBias results are split between Type-1 and Type-2 and in pro/anti-stereotypical conditions. * indicates the difference between pro/anti stereotypical conditions is significant (p
['Method', 'Anon.', 'Resour.', 'Aug.', 'OntoNotes', 'T1-p', 'T1-a', 'Avg', '∣ Diff ∣', 'T2-p', 'T2-a', 'Avg', '∣ Diff ∣']
[['E2E', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[BOLD] 67.7', '[BOLD] 76.0', '49.4', '62.7', '26.6*', '[BOLD] 88.7', '75.2', '82.0', '13.5*'], ['E2E', '[EMPTY]', '[EMPTY]', '[EMPTY]', '66.4', '73.5', '51.2', '62.6', '21.3*', '86.3', '70.3', '78.3', '16.1*'], ['E2E', '[EMPTY]', '[EMPTY]', '[EMPTY]', '66.5', '67.2', '59.3', '63.2', '7.9*', '81.4', '82.3', '81.9', '0.9'], ['E2E', '[EMPTY]', '[EMPTY]', '[EMPTY]', '66.2', '65.1', '59.2', '62.2', '5.9*', '86.5', '[BOLD] 83.7', '[BOLD] 85.1', '2.8*'], ['E2E', '[EMPTY]', '[EMPTY]', '[EMPTY]', '66.3', '63.9', '[BOLD] 62.8', '[BOLD] 63.4', '[BOLD] 1.1', '81.3', '83.4', '82.4', '[BOLD] 2.1'], ['Feature', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[BOLD] 61.7', '[BOLD] 66.7', '56.0', '61.4', '10.6*', '[BOLD] 73.0', '57.4', '65.2', '15.7*'], ['Feature', '[EMPTY]', '[EMPTY]', '[EMPTY]', '61.3', '65.9', '56.8', '61.3', '9.1*', '72.0', '58.5', '65.3', '13.5*'], ['Feature', '[EMPTY]', '[EMPTY]', '[EMPTY]', '61.2', '61.8', '[BOLD] 62.0', '[BOLD] 61.9', '[BOLD] 0.2', '67.1', '63.5', '65.3', '3.6'], ['Feature', '[EMPTY]', '[EMPTY]', '[EMPTY]', '61.0', '65.0', '57.3', '61.2', '7.7*', '72.8', '63.2', '68.0', '9.6*'], ['Feature', '[EMPTY]', '[EMPTY]', '[EMPTY]', '61.0', '62.3', '60.4', '61.4', '1.9*', '71.1', '[BOLD] 68.6', '[BOLD] 69.9', '[BOLD] 2.5'], ['Rule', '[EMPTY]', '[EMPTY]', '[EMPTY]', '57.0', '76.7', '37.5', '57.1', '39.2*', '50.5', '29.2', '39.9', '21.3*']]
WinoBias reveals gender bias. Systems were evaluated on both types of sentences in WinoBias (T1 and T2), separately in pro-stereotyped and anti-stereotyped conditions (T1-p vs. T1-a, T2-p vs. T2-a). E2E and Feature were retrained in each condition using default hyper-parameters, while Rule was not debiased because it is untrainable. We evaluate using the coreference scorer v8.01 (Pradhan et al.).
Question Answering as an Automatic Evaluation Metric for News Article Summarization
1906.00318
Table 2: Correlation matrix of ROUGE and APES.
['[EMPTY]', 'R-1', 'R-2', 'R-L', 'R-SU', 'APES']
[['R-1', '1.00', '0.83', '0.92', '0.94', '0.66'], ['R-2', '[EMPTY]', '1.00', '0.82', '0.90', '0.61'], ['R-L', '[EMPTY]', '[EMPTY]', '1.00', '0.89', '0.66'], ['R-SU', '[EMPTY]', '[EMPTY]', '[EMPTY]', '1.00', '0.67'], ['APES', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '1.00']]
While the ROUGE baselines were beaten only by a very small number of suggested metrics in the original AESOP task, we find that APES shows better correlation than the popular R-1, R-2 and R-L, and than the strong R-SU. Although showing statistical significance for our hypothesis is difficult because of the small dataset size, we claim APES provides additional value compared to ROUGE: the ROUGE variants are strongly correlated with one another (0.82-0.94 in the table), whereas APES is not correlated with the ROUGE metrics to the same extent (around 0.6). This suggests that APES offers additional information about the text in a manner that ROUGE does not. For this reason, we believe APES complements ROUGE.
Question Answering as an Automatic Evaluation Metric for News Article Summarization
1906.00318
Table 1: Pearson Correlation of ROUGE and APES against Pyramid and Responsiveness on summary level. Statistically significant differences are marked with *.
['[EMPTY]', 'ROUGE-1', 'ROUGE-2', 'ROUGE-L', 'ROUGE-SU', 'APES']
[['Pyramid', '0.590', '0.468*', '0.599', '0.563*', '[BOLD] 0.608'], ['Responsiveness', '0.540', '0.518*', '0.537', '0.541', '[BOLD] 0.576']]
We follow the work of Louis and Nenkova and compare input-level APES scores with the manual Pyramid and Responsiveness scores provided in the AESOP task. At the input level, correlation is computed for each summary against its manual score. In contrast, the system level reports the average score of a summarization system over the entire dataset.
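The distinction between input-level and system-level correlation can be made concrete in a few lines; the score matrices below are toy placeholders, and SciPy's Pearson correlation stands in for whichever correlation is reported.

```python
import numpy as np
from scipy.stats import pearsonr

# Toy data: rows are summarization systems, columns are input documents.
apes = np.array([[0.3, 0.5, 0.4], [0.6, 0.7, 0.5], [0.2, 0.4, 0.3]])
pyramid = np.array([[0.35, 0.55, 0.45], [0.65, 0.6, 0.5], [0.25, 0.35, 0.4]])

# Input level: correlate the per-summary scores directly.
input_level, _ = pearsonr(apes.ravel(), pyramid.ravel())

# System level: correlate the per-system averages over the whole dataset.
system_level, _ = pearsonr(apes.mean(axis=1), pyramid.mean(axis=1))

print(input_level, system_level)
```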
Efficient Neural Architecture Search via Parameter Sharing
1802.03268
Table 2: Classification errors of ENAS and baselines on CIFAR-10. In this table, the first block presents DenseNet, one of the state-of-the-art architectures designed by human experts. The second block presents approaches that design the entire network. The last block presents techniques that design modular cells which are combined to build the final network.
['[BOLD] Method', '[BOLD] GPUs', '[BOLD] Times', '[BOLD] Params', '[BOLD] Error']
[['[BOLD] Method', '[BOLD] GPUs', '(days)', '(million)', '(%)'], ['DenseNet-BC\xa0(Huang et\xa0al., 2016 )', '−', '−', '25.6', '3.46'], ['DenseNet + Shake-Shake\xa0(Gastaldi, 2016 )', '−', '−', '26.2', '2.86'], ['DenseNet + CutOut\xa0(DeVries & Taylor, 2017 )', '−', '−', '26.2', '[BOLD] 2.56'], ['Budgeted Super Nets\xa0(Veniat & Denoyer, 2017 )', '−', '−', '−', '9.21'], ['ConvFabrics\xa0(Saxena & Verbeek, 2016 )', '−', '−', '21.2', '7.43'], ['Macro NAS + Q-Learning\xa0(Baker et\xa0al., 2017a )', '10', '8-10', '11.2', '6.92'], ['Net Transformation\xa0(Cai et\xa0al., 2018 )', '5', '2', '19.7', '5.70'], ['FractalNet\xa0(Larsson et\xa0al., 2017 )', '−', '−', '38.6', '4.60'], ['SMASH\xa0(Brock et\xa0al., 2018 )', '1', '1.5', '16.0', '4.03'], ['NAS\xa0(Zoph & Le, 2017 )', '800', '21-28', '7.1', '4.47'], ['NAS + more filters\xa0(Zoph & Le, 2017 )', '800', '21-28', '37.4', '[BOLD] 3.65'], ['ENAS\xa0+ macro search space', '1', '0.32', '21.3', '4.23'], ['ENAS\xa0+ macro search space + more channels', '1', '0.32', '38.0', '[BOLD] 3.87'], ['Hierarchical NAS\xa0(Liu et\xa0al., 2018 )', '200', '1.5', '61.3', '3.63'], ['Micro NAS + Q-Learning\xa0(Zhong et\xa0al., 2018 )', '32', '3', '−', '3.60'], ['Progressive NAS\xa0(Liu et\xa0al., 2017 )', '100', '1.5', '3.2', '3.63'], ['NASNet-A\xa0(Zoph et\xa0al., 2018 )', '450', '3-4', '3.3', '3.41'], ['NASNet-A + CutOut\xa0(Zoph et\xa0al., 2018 )', '450', '3-4', '3.3', '[BOLD] 2.65'], ['ENAS\xa0+ micro search space', '1', '0.45', '4.6', '3.54'], ['ENAS\xa0+ micro search space + CutOut', '1', '0.45', '4.6', '[BOLD] 2.89']]
Results. If we keep the architecture but increase the number of filters in the network's highest layer to 512, the test error decreases to 3.87%, which is not far from NAS's best model, whose test error is 3.65%. Impressively, ENAS takes about 7 hours to find this architecture, reducing the number of GPU-hours by more than 50,000x compared to NAS. There is a growing interest in improving the efficiency of NAS.
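The "more than 50,000x" figure follows from simple arithmetic on the numbers in the table (800 GPUs for 21-28 days for NAS versus roughly 7 hours on a single GPU for ENAS); a quick back-of-the-envelope check:

```python
nas_gpu_hours = 800 * 21 * 24          # lower-bound estimate from the table: 403,200 GPU-hours
enas_gpu_hours = 1 * 7                 # about 7 hours on a single GPU
print(nas_gpu_hours / enas_gpu_hours)  # ~57,600x, consistent with "more than 50,000x"
```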
Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification
1805.02220
Table 3: Performance of our method and competing models on the MS-MARCO test set
['Model', 'ROUGE-L', 'BLEU-1']
[['FastQA_Ext Weissenborn et\xa0al. ( 2017 )', '33.67', '33.93'], ['Prediction Wang and Jiang ( 2016 )', '37.33', '40.72'], ['ReasoNet Shen et\xa0al. ( 2017 )', '38.81', '39.86'], ['R-Net Wang et\xa0al. ( 2017c )', '42.89', '42.22'], ['S-Net Tan et\xa0al. ( 2017 )', '45.23', '43.78'], ['Our Model', '[BOLD] 46.15', '[BOLD] 44.47'], ['S-Net (Ensemble)', '46.65', '44.78'], ['Our Model (Ensemble)', '[BOLD] 46.66', '[BOLD] 45.41'], ['Human', '47', '46']]
As we can see, for both metrics our single model outperforms all the other competing models by an evident margin, which is extremely hard considering the near-human performance. If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model of S-Net, especially in terms of BLEU-1.
Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification
1805.02220
Table 4: Performance on the DuReader test set
['Model', 'BLEU-4', 'ROUGE-L']
[['Match-LSTM', '31.8', '39.0'], ['BiDAF', '31.9', '39.2'], ['PR + BiDAF', '37.55', '41.81'], ['Our Model', '[BOLD] 40.97', '[BOLD] 44.18'], ['Human', '56.1', '57.4']]
The BiDAF and Match-LSTM models are provided as two baseline systems (He et al.). We can see that the paragraph ranking step boosts the BiDAF baseline significantly. Finally, we implement our system based on this new strategy, and our system (single model) achieves a further improvement by a large margin.
Contextual Lensing of Universal Sentence Representations
2002.08866
Table 5: F1 scores for parallel data mining on the BUCC 2018 training and test sets. The first four language columns are training while the last four are testing. Middle group are unsupervised mining. For training data we report oracle scores with the corresponding mean and variance of thresholds (τ). For test data, we report results using the best threshold found on the training sets. Best results overall are bolded, best results per group are underlined.
['Model', 'DE (train)', 'FR (train)', 'RU (train)', 'ZH (train)', 'DE (test)', 'FR (test)', 'RU (test)', 'ZH (test)', 'Mean( [ITALIC] τ)', 'Std( [ITALIC] τ)']
[['Schwenk ( 2018 )', '76.1', '74.9', '73.3', '71.6', '76.9', '75.8', '73.8', '71.6', '[EMPTY]', '[EMPTY]'], ['Azpeitia et al. ( 2018 )', '84.27', '80.63', '80.89', '76.45', '85.52', '81.47', '81.30', '77.45', '[EMPTY]', '[EMPTY]'], ['LASER (Artetxe and Schwenk, 2019 )', '[BOLD] 95.43', '[BOLD] 92.40', '[BOLD] 92.29', '[BOLD] 91.20', '[BOLD] 96.19', '[BOLD] 93.91', '[BOLD] 93.30', '[BOLD] 92.27', '[EMPTY]', '[EMPTY]'], ['mBERT BOW', '54.47', '50.89', '39.42', '39.57', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '1.11', '0.012'], ['CL(mBERT; NLI)', '59.00', '59.46', '47.11', '41.12', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '1.13', '0.012'], ['CL(mBERT; 100M)', '88.24', '85.11', '86.18', '82.15', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '1.103', '0.004'], ['sCL(mBERT; 100M)†', '89.98', '86.79', '87.15', '84.91', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '1.130', '0.007'], ['CL(mBERT; 100M)†', '89.84', '87.14', '88.00', '86.05', '90.24', '88.54', '89.25', '86.70', '1.132', '0.011']]
First, observe the mean and variance of the threshold parameter τ: the standard deviation is very small across models. This holds even for the unsupervised models, meaning one could tune the threshold on high-resource pairs and use the same model for mining language pairs without seeing any parallel data. Again, we see strong performance from our Simple encoder, which only learns a single weight matrix. In this experiment, we notice a more substantial performance difference between our best results and LASER. We made a single submission of our best model to the shared task of Zweigenbaum et al.
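Mining with a threshold τ, as evaluated here, amounts to scoring candidate sentence pairs and keeping those above τ, then computing F1 against the gold pairs. A simplified cosine-similarity sketch follows; the actual systems use stronger scoring (e.g. margin-based criteria), so this is only illustrative.

```python
import numpy as np

def mine_pairs(src_vecs, tgt_vecs, tau):
    """Return (i, j) index pairs whose cosine similarity exceeds the threshold tau."""
    src = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    tgt = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    sims = src @ tgt.T
    return {(int(i), int(j)) for i, j in zip(*np.where(sims > tau))}

def pair_f1(mined, gold):
    """F1 of the mined pairs against the gold parallel pairs."""
    tp = len(mined & gold)
    if tp == 0 or not mined or not gold:
        return 0.0
    prec, rec = tp / len(mined), tp / len(gold)
    return 2 * prec * rec / (prec + rec)
```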
Contextual Lensing of Universal Sentence Representations
2002.08866
Table 2: Comparison of USVs on 9 downstream tasks. The first seven tasks are evaluated by training a logistic regression classifier directly on top of the sentence vectors. Performance is accuracy. The last two tasks report Spearman correlation of unsupervised textual similarity. Δ indicates the mean improvement over all tasks with respect to BERT BOW. Best results per column are bolded. Results that are not ours are obtained from Reimers and Gurevych (2019).
['Model', 'MR', 'CR', 'SUBJ', 'MPQA', 'SST', 'TREC', 'MRPC', 'STSb', 'SICK', 'Δ']
[['BERT CLS (Reimers and Gurevych, 2019 )', '78.68', '84.85', '94.21', '88.23', '84.13', '91.4', '71.13', '16.50', '42.63', '-5.28'], ['Glove BOW (Conneau et al., 2017a )', '77.25', '78.30', '91.17', '87.85', '80.18', '83.0', '72.87', '58.02', '53.76', '-1.88'], ['BERT BOW (Reimers and Gurevych, 2019 )', '78.66', '86.25', '94.37', '88.66', '84.40', '92.8', '69.45', '46.35', '58.40', '[EMPTY]'], ['InferSent (Conneau et al., 2017a )', '81.57', '86.54', '92.50', '[BOLD] 90.38', '84.18', '88.2', '75.77', '68.03', '65.65', '3.72'], ['USE (Cer et al., 2018 )', '80.09', '85.19', '93.98', '86.70', '86.38', '[BOLD] 93.2', '70.14', '74.92', '76.69', '5.33'], ['SBERT (Reimers and Gurevych, 2019 )', '[BOLD] 84.88', '[BOLD] 90.07', '[BOLD] 94.52', '90.33', '[BOLD] 90.66', '87.4', '75.94', '[BOLD] 79.23', '73.75', '[BOLD] 7.50'], ['sCL(BERT-Large; NLI)', '83.40', '89.32', '93.49', '89.16', '89.35', '91.8', '[BOLD] 76.75', '73.75', '72.08', '6.64']]
Here we make two observations. First, on aggregate our results outperform existing non-BERT universal sentence encoders while only learning a single weight matrix on top of the contextualized embeddings. Second, while SBERT outperforms on aggregate, its Δ is small relative to a basic BERT BOW baseline. This demonstrates that the gains from fine-tuning all of BERT for these tasks are minimal, and much of the performance improvement over a naive BERT BOW baseline can be achieved by adapting fixed contextualized embeddings. A similar pattern holds for RoBERTa (Liu et al.): while fine-tuning RoBERTa on static tasks substantially improves over BERT, the gains are minimal for downstream sentence evaluations. Taken together, these results suggest that the fine-tuning paradigm that is so crucial for static evaluations may not directly carry over to learning USVs.
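As a rough illustration of learning only a single weight matrix on top of frozen contextualized embeddings, the sketch below mean-pools token vectors and applies one linear map; dimensions and names are assumptions for illustration, not the paper's exact implementation.

    import torch
    import torch.nn as nn

    class LinearLens(nn.Module):
        """Mean-pool frozen contextual token embeddings, then apply one learned matrix."""
        def __init__(self, dim_in=768, dim_out=768):
            super().__init__()
            self.proj = nn.Linear(dim_in, dim_out, bias=False)   # the single weight matrix

        def forward(self, token_embs, mask):
            # token_embs: (batch, seq, dim_in); mask: (batch, seq), 1 for real tokens.
            mask = mask.unsqueeze(-1).float()
            pooled = (token_embs * mask).sum(1) / mask.sum(1).clamp(min=1e-9)  # BOW average
            return self.proj(pooled)                              # universal sentence vector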
Ensemble-Based Deep Reinforcement Learning for Chatbots
1908.10422
Table 5: Automatic evaluation of chatbots on test data
['Agent/Metric', 'Dialogue Reward', 'F1 Score', 'Recall@1']
[['Upper Bound', '7.7800', '1.0000', '1.0000'], ['Lower Bound', '-7.0600', '0.0796', '0.0461'], ['Ensemble', '[BOLD] -2.8882', '[BOLD] 0.4606', '[BOLD] 0.3168'], ['Single Agent', '-6.4800', '0.1399', '0.0832'], ['Seq2Seq', '-5.7000', '0.2081', '0.1316']]
As expected, the Upper Bound agent achieved the best scores and the Lower Bound agent the lowest scores. The difference in performance between the Ensemble agent and the Seq2Seq agent is significant at p=0.0332 for the Fluency metric and at p<0.01 for the other metrics (Engagingness and Consistency), based on a two-tailed Wilcoxon Signed Rank Test.
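For reference, a two-tailed Wilcoxon signed-rank test over paired per-dialogue ratings can be run with SciPy as shown below; the rating lists here are invented placeholders, not the study's data.

    from scipy.stats import wilcoxon

    ensemble_scores = [4, 5, 3, 4, 4, 5, 3, 4, 5, 4]   # hypothetical paired human ratings
    seq2seq_scores  = [3, 4, 3, 3, 4, 4, 2, 3, 4, 3]

    stat, p_value = wilcoxon(ensemble_scores, seq2seq_scores, alternative="two-sided")
    print(f"W={stat:.1f}, p={p_value:.4f}")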
Multi-task Recurrent Model for Speech and Speaker Recognition
1603.09643
Table 2: SRE baseline results.
['System', 'EER% Cosine', 'EER% LDA', 'EER% PLDA']
[['i-vector (200)', '2.89', '1.03', '0.57'], ['r-vector (256)', '1.84', '1.34', '3.18']]
It can be observed that the i-vector system generally outperforms the r-vector system. Particularly, the discriminative methods (LDA and PLDA) offer much more significant improvement for the i-vector system than for the r-vector system. For this reason, we only consider the simple cosine kernel when scoring r-vectors in the following experiments.
Sentence-Level BERT and Multi-Task Learning of Age and Gender in Social Media
1911.00637
Table 1: Distribution of age and gender classes in our data splits
['[BOLD] Data split', '[BOLD] Under 25 [BOLD] Female', '[BOLD] Under 25 [BOLD] Male', '[BOLD] 25 until 34 [BOLD] Female', '[BOLD] 25 until 34 [BOLD] Male', '[BOLD] 35 and up [BOLD] Female', '[BOLD] 35 and up [BOLD] Male', '[BOLD] #tweets']
[['[BOLD] TRAIN', '215,950', '213,249', '207,184', '248,769', '174,511', '226,132', '1,285,795'], ['[BOLD] DEV', '27,076', '26,551', '25,750', '31,111', '21,942', '28,294', '160,724'], ['[BOLD] TEST', '26,878', '26,422', '25,905', '31,211', '21,991', '28,318', '160,725'], ['[BOLD] ALL', '269,904', '266,222', '258,839', '311,091', '218,444', '282,744', '1,607,244']]
We make use of Arap-Tweet zaghouani2018arap, which we will refer to as Arab-Tweet. Arab-Tweet is a dataset of tweets covering 11 Arabic regions from 17 different countries. For each region, data from 100 Twitter users were crawled. Users needed to have posted at least 2,000 tweets and were selected based on an initial list of seed words characteristic of each region. The seed list included words such as برشة /barsha/ ‘many’ for Tunisian Arabic and وايد /wayed/ ‘many’ for Gulf Arabic. zaghouani2018arap employed human annotators to verify that users do belong to each respective region. Annotators also assigned gender labels (male, female) and age group labels (under-25, 25-to-34, above-35) at the user level, which are in turn propagated to the tweet level. Tweets with fewer than 3 words and re-tweets were removed. Refer to zaghouani2018arap for details about how annotation was carried out. We note that zaghouani2018arap do not report classification models exploiting the data. We shuffle the dataset and split it into 80% training (TRAIN), 10% development (DEV), and 10% test (TEST). For pre-processing, we reduce 2 or more consecutive repetitions of the same character to only 2 and remove diacritics. We have two baselines: (1) the majority class in our TRAIN set (maj-base), and (2) a small unidirectional GRU (small-GRU) with a single 500-unit hidden layer. We train the small GRU with the same batch size (8) and dropout (0.5) as our bigger BiGRUs. We report our results in accuracy.
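A minimal sketch of the pre-processing and data split described above; the regular expressions and split helper are our own illustration, and the diacritic range is an assumption covering the standard Arabic harakat.

    import random
    import re

    DIACRITICS = re.compile(r"[\u064B-\u0652]")   # assumed range of Arabic diacritics
    REPEATS = re.compile(r"(.)\1{2,}")            # 3 or more repetitions of a character

    def preprocess(tweet):
        tweet = DIACRITICS.sub("", tweet)         # remove diacritics
        return REPEATS.sub(r"\1\1", tweet)        # keep at most 2 consecutive repetitions

    def split_80_10_10(tweets, seed=42):
        random.Random(seed).shuffle(tweets)
        n = len(tweets)
        return tweets[:int(0.8 * n)], tweets[int(0.8 * n):int(0.9 * n)], tweets[int(0.9 * n):]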
Lexical-Morphological Modeling forLegal Text Analysis
1609.00799
Table 6: Competition results for phases 1 (IR) and 3 (IR + TE) respectively. First three ranked.
['[BOLD] Rank', '[BOLD] ID', '[BOLD] Prec.', '[BOLD] Recall', '[BOLD] F-m']
[['1', 'UA1', '0.633', '0.490', '0.552'], ['[BOLD] 2', '[BOLD] JAIST1', '[BOLD] 0.566', '[BOLD] 0.460', '[BOLD] 0.508'], ['3', 'ALV2015', '0.342', '0.529', '0.415']]
The method presented in this paper achieved significant results in COLIEE, being ranked 2nd in phase one (IR) and 3rd in phase three (combined IR + TE). It was not well ranked in phase two (TE).
The Flores Evaluation Datasets for Low-Resource Machine Translation: Nepali–English and Sinhala–English
1902.01382
Table 4: Weakly supervised experiments: Adding noisy parallel data from filtered Paracrawl improves translation quality in some conditions. “Parallel” refers to the data described in Table 2.
['[BOLD] Corpora', '[BOLD] BLEU ne–en', '[BOLD] BLEU si–en']
[['Parallel', '7.6', '7.2'], ['Unfiltered Paracrawl', '0.43', '0.44'], ['Paracrawl Random', '0.14', '0.36'], ['Paracrawl Clean', '5.9', '7.73'], ['Parallel + Paracrawl Clean', '9.60', '10.86']]
Without any filtering or with random filtering, the BLEU score is close to 0. Adding Paracrawl Clean to the initial parallel data improves performance by +2.0 and +3.7 BLEU points for Nepali–English and Sinhala–English, respectively.
The Flores Evaluation Datasets for Low-Resource Machine Translation: Nepali–English and Sinhala–English
1902.01382
Table 1: Number of unique sentences (uniq) and total number of sentence pairs (tot) per Flores test set grouped by their original languages.
['[BOLD] orig lang', '[ITALIC] dev uniq', '[ITALIC] dev tot', '[ITALIC] devtest uniq', '[ITALIC] devtest tot', '[ITALIC] test uniq', '[ITALIC] test tot']
[['[BOLD] Nepali–English', '[BOLD] Nepali–English', '[BOLD] Nepali–English', '[BOLD] Nepali–English', '[BOLD] Nepali–English', '[BOLD] Nepali–English', '[BOLD] Nepali–English'], ['English', '693', '1,181', '800', '1,393', '850', '1,462'], ['Nepali', '825', '1,378', '800', '1,442', '850', '1,462'], ['[EMPTY]', '[BOLD] 1,518', '[BOLD] 2,559', '[BOLD] 1,600', '[BOLD] 2,835', '[BOLD] 1,700', '[BOLD] 2,924'], ['[BOLD] Sinhala–English', '[BOLD] Sinhala–English', '[BOLD] Sinhala–English', '[BOLD] Sinhala–English', '[BOLD] Sinhala–English', '[BOLD] Sinhala–English', '[BOLD] Sinhala–English'], ['English', '1,123', '1,913', '800', '1,395', '850', '1,465'], ['Sinhala', '565', '985', '800', '1,371', '850', '1,440'], ['[EMPTY]', '[BOLD] 1,688', '[BOLD] 2,898', '[BOLD] 1600', '[BOLD] 2,766', '[BOLD] 1700', '[BOLD] 2,905']]
For Sinhala–English, the test set is composed of 850 sentences originally in English and 850 originally in Sinhala. We have approximately 1.7 translations per sentence. This yielded 1,465 sentence pairs originally in English and 1,440 originally in Sinhala, for a total of 2,905 sentence pairs. Similarly, for Nepali–English, the test set is composed of 850 sentences originally in English and 850 originally in Nepali. This yielded 1,462 sentence pairs originally in English and 1,462 originally in Nepali, for a total of 2,924 sentence pairs.
The Flores Evaluation Datasets for Low-Resource Machine Translation: Nepali–English and Sinhala–English
1902.01382
Table 3: BLEU scores of NMT using various learning settings on devtest (see §3). We report detokenized SacreBLEU Post (2018) for {Ne,Si}→En and tokenized BLEU for En→{Ne,Si}.
['[EMPTY]', '[BOLD] Supervised', '[BOLD] Supervised +mult.', '[BOLD] Unsupervised', '[BOLD] Unsupervised + mult.', '[BOLD] Semi-supervised it. 1', '[BOLD] Semi-supervised it. 2', '[BOLD] Semi-supervised it 1. + mult.', '[BOLD] Semi-supervised it 2. + mult.', '[BOLD] Weakly supervised']
[['[BOLD] English–Nepali', '4.27', '6.86', '0.11', '8.34', '6.83', '6.84', '8.76', '8.77', '5.82'], ['[BOLD] Nepali–English', '7.57', '14.18', '0.47', '18.78', '12.725', '15.061', '19.78', '21.45', '9.6'], ['[BOLD] English–Sinhala', '1.23', '-', '0.08', '-', '5.22', '6.46', '-', '-', '3.10'], ['[BOLD] Sinhala–English', '7.159', '-', '0.08', '-', '12.046', '15.105', '-', '-', '10.86']]
In the supervised setting, PBSMT performed considerably worse than NMT, achieving BLEU scores of 2.5, 4.4, 1.6 and 5.0 on English–Nepali, Nepali–English, English–Sinhala and Sinhala–English, respectively. There are several observations we can make.
Taking a Stance on Fake News: Towards Automatic Disinformation Assessment via Deep Bidirectional Transformer Language Models for Stance Detection
1911.11951
Table 2: Performance of various methods on the FNC-I benchmark. The first and second groups are methods introduced during and after the challenge period, respectively. Best results are in bold.
['Method', '[ITALIC] Accw', 'Acc']
[['Riedel [ITALIC] et al. ucl', '81.72', '88.46'], ['Hanselowski [ITALIC] et al. athenes', '81.97', '89.48'], ['Baird [ITALIC] et al. talos', '82.02', '89.08'], ['Bhatt [ITALIC] et al. Bhatt:2018:CNS:3184558.3191577', '83.08', '89.29'], ['Borges [ITALIC] et al. Borges2019', '83.38', '89.21'], ['Zhang [ITALIC] et al. 2018 Zhang:2018:RMN:3184558.3186919', '86.66', '92.00'], ['Wang [ITALIC] et al. Wang:2018:RDD:3184558.3188723', '86.72', '82.91'], ['Zhang [ITALIC] et al. 2019 Zhang:2019:SIH:3308558.3313724', '88.15', '93.50'], ['Proposed Method', '[BOLD] 90.01', '[BOLD] 93.71']]
Table 2 presents the results of our proposed method, the top three methods in the original Fake News Challenge, and the best-performing methods since the challenge's conclusion on the FNC-I benchmark. A confusion matrix for our method is presented in the Appendix. To the best of our knowledge, our method achieves state-of-the-art results in weighted accuracy and standard accuracy on the dataset. Notably, since the conclusion of the Fake News Challenge in 2017, the weighted-accuracy error rate has decreased by 8%, signifying improved performance of NLP models and innovations in the domain of stance detection, as well as a continued interest in combating the spread of disinformation.
Taking a Stance on Fake News: Towards Automatic Disinformation Assessment via Deep Bidirectional Transformer Language Models for Stance Detection
1911.11951
Table 3: Effect of claim-article pair sequence length of FNC-I test set on classification accuracy of RoBERTa model, with a maximum sequence length of 512.
['Number of Tokens in Example', 'Acc', 'Number of Examples']
[['<129', '92.05', '2904'], ['129-256', '93.90', '3606'], ['257-384', '95.07', '6328'], ['385-512', '[BOLD] 95.11', '4763'], ['>512', '92.23', '7812'], ['All', '93.71', '25413']]
The model has a maximum sequence length of 512 tokens, so any examples longer than this are trimmed. We find that the model performs best for examples that utilize the full capacity of the input sequence (385 to 512 tokens). Very short sequences (<129 tokens) provide the least amount of information to the model, and the model performs poorly on them. Long sequences (>512 tokens) have some of their context removed from the input, and the model also performs relatively poorly on these examples.
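The trimming behaviour can be reproduced with a standard tokenizer call such as the one below; the checkpoint name and example strings are placeholders, not the exact setup used in the paper.

    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("roberta-base")
    claim = "Example claim headline"
    article = "Example article body sentence. " * 400        # well over 512 tokens
    enc = tok(claim, article, truncation=True, max_length=512)
    print(len(enc["input_ids"]))                              # capped at 512; extra context is trimmed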
Taking a Stance on Fake News: Towards Automatic Disinformation Assessment via Deep Bidirectional Transformer Language Models for Stance Detection
1911.11951
Table 4: Effect of maximum sequence length of RoBERTa model on weighted accuracy and classification accuracy.
['Maximum Number of Tokens', '[ITALIC] Accw', 'Acc']
[['128', '89.52', '93.46'], ['256', '89.54', '93.48'], ['512', '[BOLD] 90.01', '[BOLD] 93.71']]
We find an increase in accuracy with a longer maximum sequence length, as more context is provided to the model. We cannot increase the length of the input sequence beyond 512 tokens without training the RoBERTa model from scratch, which is not feasible for us.
An Annotated Corpus of Reference Resolution for Interpreting Common Grounding
1911.07588
Table 6: Results of the reference resolution task grouped by the number of referents in the gold annotation (along with the average count of such markables in the test set).
['# Referents', '% Accuracy', '% Exact Match', 'Count']
[['0', '95.91±1.38', '83.53±4.65', '0148.5'], ['1', '89.34±0.17', '36.86±1.32', '2782.5'], ['2', '78.14±1.07', '20.59±1.90', '0587.9'], ['3', '70.64±1.02', '13.63±2.06', '0283.3'], ['4', '69.12±2.69', '10.16±3.47', '0081.0'], ['5', '73.57±2.94', '17.56±5.88', '0033.0'], ['6', '78.69±4.45', '13.18±7.31', '0043.0'], ['7', '74.60±7.49', '50.38±11.40', '0022.3']]
To demonstrate the advantages of our approach for interpreting and analyzing dialogue systems, we give a more detailed analysis of the TSEL-REF-DIAL model, which performed well on all three tasks. In terms of the exact match rate, we found that the model performs very well on 0 and 7 referents: this is because most of them can be recognized at the superficial level, such as “none of them”, “all of mine” or “I don’t have that”. However, the model struggles in all other cases: the results are especially poor for markables with more than 1 referent. This shows that the model still lacks the ability to precisely track multiple referents, which can be expressed in complex, pragmatic ways (such as groupings).
Focused Meeting Summarization via Unsupervised Relation Extraction
1606.07849
Table 4: ROUGE-1 (R-1), ROUGE-2 (R-2) and ROUGE-SU4 (R-SU4) scores for summaries produced by the baselines, GRE [Hachey2009]’s best results, the supervised methods, our method and an upperbound — all with perfect/true DRDA clusterings.
['[EMPTY]', '[BOLD] True Clusterings [BOLD] R-1', '[BOLD] True Clusterings [BOLD] R-1', '[BOLD] True Clusterings [BOLD] R-1', '[BOLD] True Clusterings [BOLD] R-2', '[BOLD] True Clusterings [BOLD] R-SU4']
[['[EMPTY]', 'PREC', 'REC', 'F1', 'F1', 'F1'], ['[BOLD] Baselines', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Longest DA', '34.06', '31.28', '32.61', '12.03', '13.58'], ['Prototype DA', '40.72', '28.21', '33.32', '12.18', '13.46'], ['[BOLD] GRE', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['5 topics', '38.51', '30.66', '34.13', '11.44', '13.54'], ['10 topics', '39.39', '31.01', '34.69', '11.28', '13.42'], ['15 topics', '38.00', '29.83', '33.41', '11.40', '12.80'], ['20 topics', '37.24', '30.13', '33.30', '10.89', '12.95'], ['[BOLD] Supervised Methods', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['CRF', '53.95', '26.57', '35.61', '11.52', '14.07'], ['SVM', '42.30', '41.49', '40.87', '12.91', '16.29'], ['[BOLD] Our Method', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['5 Relations', '39.33', '35.12', '37.10', '12.05', '14.29'], ['10 Relations', '37.94', '37.03', '[BOLD] 37.47', '[BOLD] 12.20', '[BOLD] 14.59'], ['15 Relations', '37.36', '37.43', '37.39', '11.47', '14.00'], ['20 Relations', '37.27', '[ITALIC] [BOLD] 37.64', '37.45', '11.40', '13.90'], ['[BOLD] Upperbound', '100.00', '[ITALIC] [BOLD] 45.05', '62.12', '33.27', '34.89']]
Note that for the GRE-based approach, we only list their best results for utterance-level summarization. If the salient relation instances identified by GRE were used as the summaries, the ROUGE results would be significantly lower. When measured by ROUGE-2, our method still has better or comparable performance than other unsupervised methods. Moreover, our system achieves F scores in between those of the supervised learning methods, performing better than the CRF in both recall and F score. The recall score for the upperbound in ROUGE-1, on the other hand, indicates that there is still a wide gap between the extractive summaries and human-written abstracts: without additional lexical information (e.g., semantic class information, ontologies) or a real language generation component, recall appears to be a bottleneck for extractive summarization methods that select content only from decision-related dialogue acts (DRDAs).
To Swap or Not to Swap? Exploiting Dependency Word Pairs for Reordering in Statistical Machine Translation
1608.01084
Table 1: The results of our reordering approach using sparse dependency swap (DS) features, in BLEU scores (%) compared to the baseline (Base), on which features are added (∗: significant at p<0.05; ∗∗: significant at p<0.01). We also show the results of prior reordering methods, i.e., dependency distortion penalty (+DDP) [Cherry2008], sparse dependency path features (+Path) [Chang et al.2009], the combination of both (+DDP+Path), as well as sparse reordering orientation features (+SHR) [Cherry2013]. For all systems involving DDP and other features, comparison is also made to the system with only DDP. (†: significant at p<0.05; ††: significant at p<0.01). Note: All systems involving DS always incorporate DDP.
['[BOLD] Dataset', '[BOLD] Base', '[BOLD] [BOLD] +DDP', '[BOLD] [BOLD] +Path', '[BOLD] +DDP+Path', '[BOLD] [BOLD] +SHR', '[BOLD] Ours [BOLD] +DDP+DS', '[BOLD] Ours [BOLD] +DDP+Path+DS']
[['Devset', '40.04', '39.55', '40.51', '40.32', '40.92', '41.39', '41.61'], ['NIST02', '39.19', '39.06', '39.39', '39.81∗∗††', '40.08∗∗', '40.48∗∗††', '40.55∗∗††'], ['NIST03', '39.44', '40.09∗∗', '40.17∗∗', '39.95∗∗', '39.81', '40.88∗∗††', '40.73∗∗†'], ['NIST04', '40.26', '40.16', '40.62∗∗', '40.63∗∗††', '40.39', '40.97∗∗††', '41.04∗∗††'], ['NIST05', '39.65', '39.66', '39.94∗', '40.02∗†', '39.86', '41.26∗∗††', '40.98∗∗††'], ['NIST06', '38.70', '38.42', '38.25∗∗', '38.64', '38.74', '39.15∗†', '39.54∗∗††'], ['NIST08', '30.11', '30.91∗∗', '30.03', '30.88∗∗', '30.56∗', '31.12∗∗', '31.76∗∗††'], ['Average', '37.89', '38.05', '38.07∗∗', '38.32∗∗††', '38.24∗∗', '38.98∗∗††', '[BOLD] 39.10∗∗††']]
The distortion limit of all the systems is set to 14, which yields the best result on the development set for the baseline system. As shown in the table, the system with our DS features and DDP on top of the baseline is able to improve over the baseline system without and with DDP, by +1.09 and +0.93 BLEU points respectively. The individual contribution of the other dependency-based features (Path), without or with DDP, is inferior to our DS features. Nevertheless, coupling our DS features with Path features yields the best result (+1.21 and +1.05 BLEU points over the baseline without and with DDP).
Knowledge Graph Alignment Network with Gated Multi-hop Neighborhood Aggregation
1911.08936
Table 3: Results on DBP15K w.r.t. k values
['Methods', 'DBPZH-EN H@1', 'DBPZH-EN H@10', 'DBPZH-EN MRR', 'DBPJA-EN H@1', 'DBPJA-EN H@10', 'DBPJA-EN MRR', 'DBPFR-EN H@1', 'DBPFR-EN H@10', 'DBPFR-EN MRR']
[['GCN', '0.487', '0.790', '0.559', '0.507', '0.805', '0.618', '0.508', '0.808', '0.628'], ['AliNet', '[BOLD] 0.539', '[BOLD] 0.826', '[BOLD] 0.628', '[BOLD] 0.549', '[BOLD] 0.831', '[BOLD] 0.645', '[BOLD] 0.552', '[BOLD] 0.852', '[BOLD] 0.657'], ['AliNet ( [ITALIC] k=3)', '0.461', '0.786', '0.571', '0.484', '0.802', '0.590', '0.450', '0.813', '0.575'], ['AliNet ( [ITALIC] k=4)', '0.386', '0.721', '0.501', '0.407', '0.706', '0.516', '0.373', '0.745', '0.499']]
AliNet with 2 layers achieves the best performance on all three metrics. We observe that when AliNet has more layers, its performance declines. Although more layers allow AliNet to indirectly capture more distant neighborhood information by layer-to-layer propagation, such distant neighbors introduce considerable noise and lead to more non-isomorphic neighborhood structures. We can see that considering the two-hop neighborhood leads to the best results. This is similarly attributed to the aforementioned reasons regarding aggregation of multi-hop neighbors. This is further verified by an analysis of DBP15K. For example, in DBPZH-EN, each Chinese entity has 6.6 one-hop neighbors on average and this number for each English entity is 8.6. However, between their one-hop neighbors, there are only 4.5 pairs of counterpart entities, leaving 2.1 Chinese one-hop neighbors and 4.1 English ones unaligned. If two-hop neighbors are considered, the numbers of unaligned one-hop neighbors are reduced to 0.5 for Chinese and 0.9 for English, respectively. These numbers have little room to be reduced further by introducing more distant neighbors. This suggests that aggregating two-hop neighborhood information is enough.
Learning Word Embeddings with Domain Awareness
1906.03249
Table 2: NER results on GENIA.
['Embeddings', 'P', 'R', 'F1']
[['CBOW S', '[BOLD] 78.69', '70.75', '74.51'], ['SG S', '76.79', '72.99', '74.84'], ['CBOW T', '76.14', '71.18', '73.57'], ['SG T', '76.89', '71.08', '73.87'], ['CBOW S + T', '76.24', '72.61', '74.38'], ['SG S + T', '75.64', '72.91', '74.25'], ['CBOW S ⊕ T', '75.00', '72.97', '73.97'], ['SG S ⊕ T', '75.47', '72.97', '74.20'], ['CBOW Avg(S,T)', '74.81', '71.62', '73.18'], ['SG Avg(S,T)', '74.91', '72.13', '73.49'], ['CRE', '77.44', '70.47', '73.79'], ['Faruqui et al. faruqui2015-NAACL-HLT', '72.86', '69.15', '70.96'], ['Kiela et al. kiela2015-EMNLP', '73.86', '70.15', '71.96'], ['CTCB', '76.52', '73.03', '74.73'], ['CTSG', '75.94', '72.31', '74.08'], ['SG-DI', '77.04', '[BOLD] 74.18', '[BOLD] 75.59'], ['CBOW-DA', '76.76', '73.30', '75.51']]
Overall, the embeddings trained in the target domain show worse performance than those trained in the source domain; all the aggregation baselines also perform worse than the embeddings trained on the source domain. Such findings indicate that, with limited data, the quality of embeddings trained on the target domain is not guaranteed. Interestingly, we find that the retrofitting methods obtain the worst results, which shows the limitation of such methods in the cold-start scenario with limited data in the target domain. Compared to other embeddings, both our models significantly improve the recall and F1 score. Upon investigation, we notice that the model with our embeddings extracts more entities without greatly reducing precision, while all other embeddings, on the contrary, suffer from a precision drop when introducing target-domain information. Considering that entities are usually domain-specific terms, high recall is thus an indicator of how well the model exploits target-domain knowledge.
Learning Word Embeddings with Domain Awareness
1906.03249
Table 1: Text classification results on ATIS and IMDB.
['Embeddings', 'ATIS', 'IMDB']
[['CBOW S', '96.19', '90.22'], ['SG S', '96.30', '90.16'], ['CBOW T', '95.74', '91.10'], ['SG T', '95.30', '90.67'], ['CBOW S + T', '96.19', '90.32'], ['SG S + T', '96.30', '90.28'], ['CBOW S ⊕ T', '96.09', '90.55'], ['SG S ⊕ T', '95.86', '90.87'], ['CBOW Avg(S,T)', '95.86', '90.38'], ['SG Avg(S,T)', '95.52', '90.45'], ['CRE', '95.86', '91.22'], ['Faruqui et al. faruqui2015-NAACL-HLT', '94.95', '89.18'], ['Kiela et al. kiela2015-EMNLP', '95.41', '90.45'], ['CTCB', '97.42', '91.09'], ['CTSG', '97.20', '90.84'], ['SG-DI', '97.64', '91.89'], ['CBOW-DA', '[BOLD] 97.75', '[BOLD] 92.34']]
Overall, in-domain word embeddings tend to outperform out-of-domain ones when the in-domain data is relatively large, and the reverse holds when the in-domain data is limited. The aggregation methods, i.e., S⊕T and Avg(S,T), normally perform in between the embeddings from the source and the target domain, while the embeddings trained on the concatenated corpus (S+T) tend to perform similarly to the ones trained in the source domain because the Wiki corpus dominates the in-domain corpora. Against all aggregation methods and baselines from other studies, our SG-DI and CBOW-DA consistently outperform by a large margin. In particular, no matter whether there is enough in-domain data, SG-DI and CBOW-DA effectively integrate domain knowledge into the embeddings and thus yield better performance in text classification. When there is more in-domain data, CBOW-DA demonstrates a larger margin over the other baselines, as well as over SG-DI. Notably, CBOW-DA achieves a score of 97.75 on ATIS, which is, to the best of our knowledge, the state-of-the-art result reported on this dataset. Comparing SG-DI and CBOW-DA, the difference in their performance may be because the attention mechanism works more smoothly than the hard indicator.
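For clarity, the two vector-level aggregation baselines mentioned above (S⊕T and Avg(S,T)) amount to the following, assuming both a source-domain and a target-domain vector exist for a word; the helper is purely illustrative.

    import numpy as np

    def aggregate(src_vec, tgt_vec, mode):
        """Combine a word's source-domain and target-domain embeddings."""
        if mode == "concat":    # S⊕T: vector concatenation, doubles the dimensionality
            return np.concatenate([src_vec, tgt_vec])
        if mode == "average":   # Avg(S,T): element-wise mean, keeps the dimensionality
            return (src_vec + tgt_vec) / 2.0
        raise ValueError(f"unknown mode: {mode}")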
Learning Word Embeddings with Domain Awareness
1906.03249
Table 3: POS tagging results on Twitter.
['Embeddings', 'P', 'R', 'F1']
[['CBOW S', '85.13', '85.29', '85.21'], ['SG S', '84.96', '84.86', '84.91'], ['CBOW T', '79.51', '79.78', '79.64'], ['SG T', '79.55', '80.18', '79.86'], ['CBOW S + T', '85.14', '85.29', '85.21'], ['SG S + T', '85.00', '84.89', '84.94'], ['CBOW S ⊕ T', '84.24', '84.62', '84.43'], ['SG S ⊕ T', '84.02', '84.40', '84.21'], ['CBOW Avg(S,T)', '84.38', '84.72', '84.55'], ['SG Avg(S,T)', '84.12', '84.58', '84.35'], ['CRE', '81.42', '81.42', '81.42'], ['Faruqui et al. faruqui2015-NAACL-HLT', '81.12', '81.52', '81.32'], ['Kiela et al. kiela2015-EMNLP', '82.04', '82.34', '82.19'], ['CTCB', '85.25', '85.08', '85.16'], ['CTSG', '85.23', '84.85', '85.04'], ['SG-DI', '[BOLD] 85.44', '85.42', '85.43'], ['CBOW-DA', '85.41', '[BOLD] 85.93', '[BOLD] 85.67']]
Owing to the limited target-domain data, the overall trend of the results from the source, target and aggregated embeddings is similar to the NER task, while our SG-DI and CBOW-DA again show improvement over all baselines. Such results illustrate the effectiveness of our models in utilizing and combining source- and target-domain information. Especially when target-domain data is limited, such word-pair information is more robust than other signals in depicting word-word relations. Moreover, since our SG-DI and CBOW-DA embeddings are trained on the source-domain corpus, they do not suffer from the data sparsity issue that affects target-domain embeddings and the retrofitting methods.
DuReader: a Chinese Machine Reading Comprehension Dataset from Real-world Applications
1711.05073
Table 8: Performance on various question types. Current MRC models achieve impressive improvements compared with the selected paragraph baseline. However, there is a large gap between these models and human.
['[BOLD] Question type', '[BOLD] Description BLEU-4%', '[BOLD] Description Rouge-L%', '[BOLD] Entity BLEU-4%', '[BOLD] Entity Rouge-L%', '[BOLD] YesNo BLEU-4%', '[BOLD] YesNo Rouge-L%']
[['[BOLD] Match-LSTM', '32.8', '40.0', '29.5', '38.5', '5.9', '7.2'], ['[BOLD] BiDAF', '32.6', '39.7', '29.8', '38.4', '5.5', '7.5'], ['[BOLD] Human', '58.1', '58.0', '44.6', '52.0', '56.2', '57.4']]
We can see that both the models and humans achieve relatively good performance on description questions, while YesNo questions seem to be the hardest to model. We consider that description questions are usually answered with long text on the same topic, which is favored by BLEU and Rouge. However, the answers to YesNo questions are relatively short, and could be a simple Yes or No in some cases.
DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension
1804.07927
Table 1: Comparison between various RC datasets
['[BOLD] Metrics for Comparative Analysis', '[BOLD] Movie QA', '[BOLD] NarrativeQA over plot-summaries', '[BOLD] Self-RC', '[BOLD] Paraph-raseRC']
[['Avg. word distance', '20.67', '24.94', '13.4', '45.3'], ['Avg. sentence distance', '1.67', '1.95', '1.34', '2.7'], ['Number of sentences for inferencing', '2.3', '1.95', '1.51', '2.47'], ['% of instances where both Query & Answer entities were found in passage', '67.96', '59.4', '58.79', '12.25'], ['% of instances where Only Query entities were found in passage', '59.61', '61.77', '63.39', '47.05'], ['% Length of the Longest Common sequence of non-stop words in Query (w.r.t Query Length) and Plot', '25', '26.26', '38', '21']]
We use NER and noun phrase/verb phrase extraction over the entire dataset to identify key entities in the question, plot and answer, which are in turn used to compute the metrics mentioned in the table. The metrics “Avg word distance” and “Avg sentence distance” indicate the average distance (in terms of words/sentences) between the occurrence of the question entities and the closest occurrence of the answer entities in the passage. “Number of sentences for inferencing” is indicative of the minimum number of sentences required to cover all the question and answer entities. It is evident that tackling ParaphraseRC is much harder than the others on account of (i) the larger distance between the query and answer, (ii) the low word overlap between query and passage, and (iii) the higher number of sentences required to infer an answer.
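A sketch of how the “Avg word distance” metric could be computed for one example, assuming entity mentions have already been located in the tokenized passage; the names here are illustrative, not the authors' code.

    def avg_word_distance(passage_tokens, question_entities, answer_entities):
        """Average distance from each question-entity token to the closest answer-entity token."""
        q_pos = [i for i, tok in enumerate(passage_tokens) if tok in question_entities]
        a_pos = [i for i, tok in enumerate(passage_tokens) if tok in answer_entities]
        if not q_pos or not a_pos:
            return None   # question or answer entities not found in the passage
        dists = [min(abs(q - a) for a in a_pos) for q in q_pos]
        return sum(dists) / len(dists)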
Context is Key: Grammatical Error Detection with Contextual Word Representations
1906.06593
Table 3: Error detection F0.5 of different embedding integration strategies (‘input’ vs. ‘output’) per model on all datasets.
['Flair', 'Input', 'Shared Task Dev 36.45', 'CoNLL Test 1 25.79', 'CoNLL Test 2 34.35', 'FCE test 49.97', 'JFLEG test 54.08']
[['[EMPTY]', 'Output', '33.47', '24.52', '33.18', '48.50', '52.10'], ['ELMo', 'Input', '42.96', '29.14', '40.15', '52.81', '58.54'], ['[EMPTY]', 'Output', '37.33', '27.33', '38.10', '52.99', '54.86'], ['BERT base', 'Input', '[BOLD] 48.50', '35.70', '46.29', '[BOLD] 57.28', '[BOLD] 61.98'], ['[EMPTY]', 'Output', '46.33', '37.04', '46.50', '55.32', '60.97'], ['BERT large', 'Input', '47.75', '36.94', '45.80', '56.96', '61.52'], ['[EMPTY]', 'Output', '46.72', '[BOLD] 39.07', '[BOLD] 46.96', '55.10', '60.56']]
We observe that, although performance varies across datasets and models, integration by concatenation to the word embeddings yields the best results across the majority of datasets for all models (BERT: 3/5 datasets; ELMo: 4/5 datasets; Flair: 5/5 datasets). The lower integration point allows the model to learn more levels of task-specific transformations on top of the contextual representations, leading to an overall better performance.
Massively Multilingual Transfer for NER
1902.00193
Table 2: The performance of RaRe and BEA in terms of phrase-based F1 on CoNLL NER datasets compared with state-of-the-art benchmark methods. Resource requirements are indicated with superscripts, p: parallel corpus, w: Wikipedia, d: dictionary, l: 100 NER annotation, 0: no extra resources.
['lang.', 'de', 'es', 'nl', 'en']
[['Täckström et\xa0al. ( 2012 ) [ITALIC] p', '40.40', '59.30', '58.40', '—'], ['Nothman et\xa0al. ( 2013 ) [ITALIC] w', '55.80', '61.00', '64.00', '61.30'], ['Tsai et\xa0al. ( 2016 ) [ITALIC] w', '48.12', '60.55', '61.60', '—'], ['Ni et\xa0al. ( 2017 ) [ITALIC] w, [ITALIC] p, [ITALIC] d', '58.50', '65.10', '65.40', '—'], ['Mayhew et\xa0al. ( 2017 ) [ITALIC] w, [ITALIC] d', '59.11', '65.95', '66.50', '—'], ['Xie et\xa0al. ( 2018 )0', '57.76', '72.37', '70.40', '—'], ['our work', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['MVtok,\xa00', '57.38', '66.41', '71.01', '62.14'], ['MVent,\xa00', '57.67', '69.03', '70.34', '64.64'], ['BEAtok,\xa00uns', '58.18', '64.72', '70.14', '61.24'], ['BEAent,\xa00uns', '57.76', '63.37', '70.30', '64.81'], ['RaRe 0uns', '59.14', '71.75', '67.59', '67.46'], ['RaRe [ITALIC] l', '63.99', '72.49', '72.45', '70.04'], ['HSup', '79.11', '85.69', '87.11', '89.50']]
CoNLL Dataset. Finally, we apply our model to the CoNLL-02/03 datasets to benchmark our technique against related work. This corpus is much less rich than Wikiann used above, as it includes only four languages (en, de, nl, es); furthermore, the languages are closely related and share the same script. Note that there are only 3 source models and none of them is clearly bad, so BEA estimates that they are similarly reliable, which results in little difference in performance between BEA and MV.
Massively Multilingual Transfer for NER
1902.00193
Table 3: The effect of the choice of monolingual word embeddings (Common Crawl and Wikipedia), and their cross-lingual mapping on NER direct transfer. Word translation accuracy, and direct transfer NER F1 are averaged over 40 languages.
['Unsup', 'crawl', 'Transl. Acc. 34', 'Dir.Transf. F1 26']
[['Unsup', 'wiki', '24', '21'], ['IdentChar', 'crawl', '43', '37'], ['IdentChar', 'wiki', '[BOLD] 53', '[BOLD] 44'], ['Sup', 'crawl', '50', '39'], ['Sup', 'wiki', '[BOLD] 54', '[BOLD] 45']]
We experimented with Wiki and CommonCrawl monolingual embeddings from fastText Bojanowski et al. Each of the 41 languages is mapped to English embedding space using three methods from MUSE: 1) supervised with bilingual dictionaries; 2) seeding using identical character sequences; and 3) unsupervised training using adversarial learning Lample et al. The cross-lingual mappings are evaluated by precision at k=1. CommonCrawl doesn’t perform well in bilingual induction despite having larger text corpora, and underperforms in direct transfer NER. It is also evident that using identical character strings instead of a bilingual dictionary as the seed for learning a supervised bilingual mapping barely affects the performance. This finding also applies to few-shot learning over larger ensembles: running RaRe over 40 source languages achieves an average F1 of 77.9 when using embeddings trained with a dictionary, versus 76.9 using string identity instead. Experiments with unsupervised mappings performed substantially worse than supervised methods, and so we didn’t explore these further.
Diving Deep into Clickbaits: Who Use Them to What Extents in Which Topics with What Effects?
1703.09400
TABLE II: Performance of the methods on the Headlines2Media Corpus dataset
['Method Without Pre-trained Vectors', 'Method *Chakroborty et al.\xa0', 'Precision 0.95', 'Recall 0.90', 'F-measure 0.93', 'Accuracy 0.93', 'Cohen’s [ITALIC] κ', 'ROC-AUC 0.97']
[['Without Pre-trained Vectors', 'Skip-Gram [ITALIC] sw', '0.976', '0.975', '0.975', '0.976', '0.952', '0.976'], ['With Pre-trained Vectors', '*Anand et al.\xa0', '0.984', '0.978', '0.982', '0.982', '[EMPTY]', '0.998'], ['With Pre-trained Vectors', 'Skip-Gram [ITALIC] sw+ Google_word2vec', '0.977', '0.977', '0.977', '0.976', '0.951', '0.976'], ['With Pre-trained Vectors', 'Skip-Gram [ITALIC] sw+ (Headline)', '0.981', '0.981', '0.981', '0.981', '0.962', '0.981'], ['With Pre-trained Vectors', 'Skip-Gram [ITALIC] sw+ (Headline + Message)', '0.982', '0.982', '0.982', '0.982', '0.964', '0.982'], ['[EMPTY]', 'Skip-Gram [ITALIC] sw+ (Headline + Body + Message)', '[BOLD] 0.983', '[BOLD] 0.983', '[BOLD] 0.983', '[BOLD] 0.983', '[BOLD] 0.965', '[BOLD] 0.983']]
We use the Headlines2Media Corpus dataset to evaluate our classification model. We perform 10-fold cross-validation to evaluate various methods with respect to accuracy, precision, recall, F-measure, area under the ROC curve (ROC-AUC) and Cohen’s κ. To avoid randomness effects, we perform each experiment 5 times and present the average. There are in total seven methods. We categorize them based on the use of pre-trained vectors. Note that we report the performance of Chakroborty et al. and Anand et al. We list Anand et al. with the methods that use pre-trained vectors. Each word embedding has 300 dimensions. Moreover, the training and test sets of the earlier dataset are not available, so we could not compare our methods with them using the same test bed.
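The evaluation protocol (10-fold cross-validation repeated 5 times, averaged) can be approximated with scikit-learn as below; the synthetic data and the logistic-regression stand-in are assumptions for illustration only, not the paper's classifier.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)  # placeholder data
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, scoring="f1", cv=cv)
    print(f"mean F1 over 5 x 10-fold runs: {scores.mean():.3f}")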
Effective Sentence Scoring Method using Bidirectional Language Modelfor Speech Recognition
1905.06655
Table 2: WERs for unidirectional and bidirectional SANLMs interpolated with the baseline model on LibriSpeech
['Model', '| [ITALIC] V|', 'dev clean', 'dev other', 'test clean', 'test other']
[['baseline', '[EMPTY]', '7.17', '19.79', '7.26', '20.37'], ['+ uniSANLM', '10k', '6.09', '17.50', '6.08', '18.33'], ['+ uniSANLM', '20k', '6.05', '17.48', '6.11', '18.25'], ['+ uniSANLM', '40k', '6.08', '17.32', '6.11', '18.13'], ['+ biSANLM', '10k', '5.65', '16.85', '5.69', '17.59'], ['+ biSANLM', '20k', '5.57', '16.71', '5.68', '[BOLD] 17.37'], ['+ biSANLM', '40k', '[BOLD] 5.52', '[BOLD] 16.61', '[BOLD] 5.65', '17.44']]
The WER results show that the biSANLM with our approach is consistently and significantly better than the uniSANLM regardless of the test set and the vocabulary size.
When and Why is Unsupervised Neural Machine Translation Useless?
2004.10581
Table 3: Unsupervised NMT performance where source and target training data are from different domains. The data size on both sides is the same (20M sentences).
['Domain ( [BOLD] en)', 'Domain ( [BOLD] de/ [BOLD] ru)', 'Bleu [%] [BOLD] de-en', 'Bleu [%] [BOLD] en-de', 'Bleu [%] [BOLD] ru-en', 'Bleu [%] [BOLD] en-ru']
[['Newswire', 'Newswire', '23.3', '19.9', '11.9', '9.3'], ['Newswire', 'Politics', '11.5', '12.2', '2.3', '2.5'], ['Newswire', 'Random', '18.4', '16.4', '6.9', '6.1']]
The newswire domain here corresponds to News Crawl. The results show that domain matching is critical for unsupervised NMT. For instance, although German and English are very similar languages, the domain mismatch causes German↔English performance to deteriorate by up to 11.8 Bleu points.
EESEN: End-to-End Speech Recognition using Deep RNN Models and WFST-based Decoding
1507.08240
Table 2: Comparisons of decoding speed between the phoneme-based Eesen system and the hybrid HMM/DNN system. “RTF” refers to the real-time factor in decoding. “Graph Size” means the size of the decoding graph in terms of megabytes.
['Model', 'RTF', 'Graph Size']
[['Eesen RNN', '0.64', '263'], ['Hybrid HMM/DNN', '2.06', '480']]
A major advantage of Eesen compared with the hybrid approach is its decoding speed. The acceleration comes from the drastic reduction in the number of states, i.e., from thousands of senones to tens of phonemes/characters. From the real-time factors, we observe that decoding in Eesen is 3.2× faster than that of the HMM/DNN (2.06 vs. 0.64). Also, the decoding graph (TLG) in Eesen is significantly smaller than the graph (HCLG) used by the HMM/DNN, which saves disk space for storing the graphs.
KdConv: A Chinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation
2004.04100
Table 6: Manual evaluation. The best results (t-test, p-value < 0.005) are in bold. Between two generative models, the significantly better results are italic underlined (t-test, p-value < 0.005) or underlined (t-test, p-value < 0.05). κ is the Fleiss’ kappa value. “+ know” means the models enhanced by knowledge information.
['[BOLD] Model', '[BOLD] Fluency', '[BOLD] Coherence']
[['[BOLD] Film\xa0∖\xa0 [ITALIC] κ', '0.50', '0.61'], ['[BOLD] HRED', '1.64', '1.19'], ['[BOLD] HRED + know', '[ITALIC] 1.78', '[ITALIC] 1.28'], ['[BOLD] BERT + know', '[BOLD] 2.00', '[BOLD] 1.79'], ['[BOLD] Music\xa0∖\xa0 [ITALIC] κ', '0.37', '0.57'], ['[BOLD] HRED', '1.90', '1.30'], ['[BOLD] HRED + know', '1.86', '1.36'], ['[BOLD] BERT + know', '[BOLD] 2.00', '[BOLD] 1.80'], ['[BOLD] Travel\xa0∖\xa0 [ITALIC] κ', '0.55', '0.74'], ['[BOLD] HRED', '1.77', '1.10'], ['[BOLD] HRED + know', '1.78', '[ITALIC] 1.31'], ['[BOLD] BERT + know', '[BOLD] 2.00', '[BOLD] 1.76']]
As can be seen, knowledge-aware BERT outperforms the other models significantly on both metrics in all three domains, which agrees with the results of the automatic evaluation. Its Fluency is 2.00 because the retrieved responses are all human-written sentences. The Fluency scores of both generation-based models are close to 2.00 (in the music domain, the Fluency of HRED is 1.90), showing that the generated responses are fluent and grammatical. The Coherence scores of both HRED and knowledge-aware HRED are higher than 1.00 but still have a huge gap to 2.00, indicating that the generated responses are relevant to the context but not coherent with the knowledge information in most cases. After incorporating the knowledge information into HRED, the Coherence score is improved significantly in all three domains, as the knowledge information is expressed more often in the generated responses.
How Can We Know What Language Models Know?
1911.12543
Table 14: Micro-averaged accuracy (%) before and after LM-aware prompt fine-tuning.
['[BOLD] Prompts', '[BOLD] Top1', '[BOLD] Top3', '[BOLD] Top5', '[BOLD] Opti.', '[BOLD] Oracle']
[['before', '31.9', '34.5', '33.8', '38.1', '47.9'], ['after', '30.2', '32.5', '34.7', '37.5', '50.8']]
After fine-tuning, the oracle performance increased significantly, while the ensemble performances (both rank-based and optimization-based) dropped slightly. This indicates that LM-aware fine-tuning has the potential to discover better prompts, but some portion of the refined prompts may have over-fit to the training set upon which they were optimized.
How Can We Know What Language Models Know?
1911.12543
Table 7: Ablation study of middle-word and dependency-based prompts on BERT-base.
['[BOLD] Prompts', '[BOLD] Top1', '[BOLD] Top3', '[BOLD] Top5', '[BOLD] Opti.', '[BOLD] Oracle']
[['[BOLD] Mid', '30.7', '32.7', '31.2', '36.9', '45.1'], ['[BOLD] Mid+Dep', '31.4', '34.2', '34.7', '38.9', '50.7']]
Middle-word vs. dependency-based: the improvements confirm our intuition that words belonging to the dependency path but not in the middle of the subject and object are also indicative of the relation.
How Can We Know What Language Models Know?
1911.12543
Table 8: Micro-averaged accuracy (%) of various LMs
['[BOLD] Model', '[BOLD] Man', '[BOLD] Mine', '[BOLD] Mine [BOLD] +Man', '[BOLD] Mine [BOLD] +Para', '[BOLD] Man [BOLD] +Para']
[['BERT', '31.1', '38.9', '39.6', '36.2', '37.3'], ['ERNIE', '32.1', '42.3', '43.8', '40.1', '41.1'], ['KnowBert', '26.2', '34.1', '34.6', '31.9', '32.1']]
In Table 8, we compare BERT with ERNIE and KnowBert, which are enhanced with external knowledge by explicitly incorporating entity embeddings. ERNIE outperforms BERT by 1 point even with the manually defined prompts, but our prompt generation methods further emphasize the difference between the two models, with the highest accuracy numbers differing by 4.2 points using the Mine+Man method. This indicates that if LMs are queried effectively, the differences between highly performant models may become clearer. KnowBert underperforms BERT on LAMA, which is the opposite of the observation made in Peters et al. This is probably because multi-token subjects/objects are used to evaluate KnowBert in Peters et al.
How Can We Know What Language Models Know?
1911.12543
Table 10: Micro-averaged accuracy (%) on Google-RE.
['[BOLD] Model', '[BOLD] Man', '[BOLD] Mine', '[BOLD] Mine [BOLD] +Man', '[BOLD] Mine [BOLD] +Para', '[BOLD] Man [BOLD] +Para']
[['BERT-base', '9.8', '10.0', '10.4', '9.6', '10.0'], ['BERT-large', '10.5', '10.6', '11.3', '10.4', '10.7']]
Again, ensembling diverse prompts improves accuracies for both the BERT-base and BERT-large models. The gains are somewhat smaller than those on the T-REx subset, which might be caused by the fact that there are only 3 relations and one of them (predicting the birth-date of a person) is particularly hard to the extent that only one prompt yields non-zero accuracy.
Sequence-to-Sequence Learning for Task-oriented Dialogue with Dialogue State Representation
1806.04441
Table 3: Ablation experiment on navigation domain. -copy refers to a framework without copying. -RL refers to a framework without RL loss.
['[BOLD] Model', '[BOLD] BLEU', '[BOLD] Macro F1', '[BOLD] Micro F1']
[['our model', '[BOLD] 13.7', '[BOLD] 62.0', '[BOLD] 56.9'], ['-copying', '9.6', '35.2', '41.3'], ['-RL', '9.3', '38.2', '46.0']]
In this section, we perform several ablation experiments to evaluate the different components of our framework on the navigation domain. The results demonstrate that each component of our model contributes to the final performance.
Sequence-to-Sequence Learning for Task-oriented Dialogue with Dialogue State Representation
1806.04441
Table 2: Automatic evaluation on test data. Best results are shown in bold. Generally, our framework outperforms other models in most automatic evaluation metrics.
['[BOLD] Model', '[BOLD] Navigation [BOLD] BLEU', '[BOLD] Navigation [BOLD] Macro F1', '[BOLD] Navigation [BOLD] Micro F1', '[BOLD] Weather [BOLD] BLEU', '[BOLD] Weather [BOLD] Macro F1', '[BOLD] Weather [BOLD] Micro F1']
[['Seq2Seq with Attention', '8.3', '15.6', '17.5', '[BOLD] 19.6', '56.0', '53.5'], ['Copy Net', '8.7', '20.8', '23.7', '17.5', '52.4', '53.1'], ['KV Net', '8.7', '24.9', '29.5', '12.4', '37.7', '39.4'], ['our model', '[BOLD] 13.7', '[BOLD] 62.0', '[BOLD] 56.9', '14.9', '[BOLD] 58.5', '[BOLD] 56.3']]
The results show that our model outperforms the other models on most automatic evaluation metrics. In the navigation domain, compared to KV Net, we achieve a 5.0 improvement in BLEU score, a 37.1 improvement in Macro F1 and a 27.4 improvement in Micro F1. Compared to Copy Net, we achieve a 5.0 improvement in BLEU score, a 41.2 improvement in Macro F1 and a 33.2 improvement in Micro F1. The results in navigation show our model’s capability to generate more natural and accurate responses than the Seq2Seq baseline models. In the weather domain, our model generates more accurate responses than the baseline models as well. The BLEU score is a little lower than Copy Net and Seq2Seq with attention. This is because the forms of responses are relatively limited in the weather domain. Besides, the entities in the inputs are highly likely to be mentioned in the responses, such as “location”. These two reasons indicate that the simpler models can capture this pattern more easily. The result that Seq2Seq with Attention performs better than Copy Net and KV Net also confirms this.
Sequence-to-Sequence Learning for Task-oriented Dialogue with Dialogue State Representation
1806.04441
Table 4: Human evaluation of responses based on random selected previous dialogue history in test dataset. The agreement scores indicate the percentage of responses to which all three human experts give exactly the same scores.
['[BOLD] Model', '[BOLD] Correct', '[BOLD] Fluent', '[BOLD] Humanlike']
[['Copy Net', '3.52', '4.47', '4.17'], ['KV Net', '3.61', '4.50', '4.20'], ['our model', '[BOLD] 4.21', '[BOLD] 4.65', '[BOLD] 4.38'], ['agreement', '41.0', '55.0', '43.0']]
In this section, we provide a human evaluation of our framework and the other baseline models. We randomly generated 200 responses, each based on a distinct dialogue history from the navigation test data. We hired three human experts to evaluate the quality of the responses along three dimensions: correctness, fluency, and humanlikeness. Each expert judged every dimension on a scale from 1 to 5, where each judgment indicates a relative score compared to the standard response from the test data. The results show that our framework outperforms the other baseline models on all metrics. The most significant improvement is in correctness, indicating that our model generates more of the accurate information that the users want to know.
Linguistic Geometries for Unsupervised Dimensionality Reduction
1003.0628
Table 2: Three evaluation measures (i), (ii), and (iii) (see the beginning of the section for description) for convex combinations (8) using different values of α. The first four rows represent methods A, B, C, and D. The bottom row represents a convex combination whose coefficients were obtained by searching for the minimizer of measure (iii). Interestingly the minimizer also performs well on measure (i) and more impressively on the labeled measure (iii).
['( [ITALIC] α1, [ITALIC] α2, [ITALIC] α3, [ITALIC] α4)', '(i)', '(ii)', '(iii) (k=5)']
[['(1,0,0,0)', '0.5756', '-3.9334', '0.7666'], ['(0,1,0,0)', '0.5645', '-4.6966', '0.7765'], ['(0,0,1,0)', '0.5155', '-5.0154', '0.8146'], ['(0,0,0,1)', '0.6035', '-3.1154', '0.8245'], ['(0.3,0.4,0.1,0.2)', '[BOLD] 0.4735', '[BOLD] -5.1154', '[BOLD] 0.8976']]
We also examined convex combinations α1HA + α2HB + α3HC + α4HD (Eq. 8) with ∑αi = 1 and αi ≥ 0. The beginning of the section provides more information on these measures. The first four rows correspond to the “pure” methods A, B, C, and D. The bottom row corresponds to a convex combination found by minimizing the unsupervised evaluation measure (ii). Note that this convex combination also outperforms A, B, C, and D on measure (i) and, more impressively, on measure (iii), which is a supervised measure that uses labeled data (the search for the optimal combination was done based on (ii), which does not require labeled data). We conclude that combining heterogeneous domain knowledge may improve the quality of dimensionality reduction for visualization, and that the search for an improved convex combination may be accomplished without the use of labeled data.
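A brute-force way to find such a combination, sketched under our own assumptions (grid step size, a black-box measure function standing in for measure (ii)), is a grid search over the simplex:

    import itertools
    import numpy as np

    def search_convex_combination(H_list, measure, step=0.1):
        """Grid-search alpha over the simplex to minimize an unsupervised measure such as (ii)."""
        best_score, best_alpha = float("inf"), None
        grid = np.arange(0.0, 1.0 + 1e-9, step)
        for a1, a2, a3 in itertools.product(grid, repeat=3):
            a4 = 1.0 - a1 - a2 - a3
            if a4 < -1e-9:
                continue                                       # outside the simplex
            alpha = np.array([a1, a2, a3, max(a4, 0.0)])
            H = sum(a * H_i for a, H_i in zip(alpha, H_list))  # combined geometry H
            score = measure(H)                                 # requires no labeled data
            if score < best_score:
                best_score, best_alpha = score, alpha
        return best_alpha, best_score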
SCALABLE MULTILINGUAL FRONTEND FOR TTS
2004.04934
Table 4: Testing Accuracy – single model, unspliced and spliced
['Locale', 'Combined BLEU', 'Combined chrF3', 'Combined, Spliced BLEU', 'Combined, Spliced chrF3']
[['de-DE', '92.01', '0.9484', '94.82', '0.9782'], ['en-US', '92.94', '0.9428', '96.84', '0.9822'], ['es-ES', '91.51', '0.9246', '99.54', '0.9969'], ['nl-NL', '94.42', '0.9509', '97.41', '0.9826'], ['ru-RU', '94.46', '0.9558', '98.48', '0.9919'], ['sv-SE', '97.39', '0.9789', '98.41', '0.9891']]
Generally the accuracy was lower, but still very reasonable for most synthesis cases. This table also illustrates the significant performance boost achieved by using the splicing technique.
SCALABLE MULTILINGUAL FRONTEND FOR TTS
2004.04934
Table 3: Testing Accuracy – dual model
['Locale', 'Normalization BLEU', 'Normalization chrF3', 'Pronunciation BLEU', 'Pronunciation chrF3']
[['en-US', '99.69', '0.9991', '97.09', '0.9926'], ['es-ES', '99.79', '0.9990', '99.88', '0.9996'], ['it-IT', '99.80', '0.9994', '99.71', '0.9991'], ['pt-PT', '99.85', '0.9993', '99.68', '0.9992'], ['fr-FR', '99.70', '0.9991', '99.52', '0.9985'], ['sv-SE', '99.10', '0.9934', '99.34', '0.9970'], ['nl-NL', '98.13', '0.9855', '98.62', '0.9925'], ['en-AU', '99.60', '0.9870', '98.91', '0.9882'], ['de-DE', '99.80', '0.9877', '95.87', '0.9895'], ['ru-RU', '99.00', '0.9942', '99.10', '0.9964'], ['da-DK', '97.07', '0.9915', '97.94', '0.9894'], ['en-IN', '99.15', '0.9969', '99.51', '0.9974'], ['nb-NO', '93.93', '0.9808', '96.22', '0.9853'], ['en-ZA', '98.20', '0.9855', '98.02', '0.9865'], ['en-IE', '97.72', '0.9810', '97.65', '0.9833'], ['tr-TR', '94.20', '0.9763', '98.14', '0.9853'], ['en-GB', '83.66', '0.9005', '99.56', '0.9975'], ['pt-BR', '79.10', '0.6585', '95.86', '0.9673']]
Generally the accuracy was lower, but still reasonable for most synthesis cases. For longer sentences the test outputs are created by splicing multiple shorter outputs.
Detecting Adverse Drug Reactions from Twitter through Domain-Specific Preprocessing and BERT Ensembling
2005.06634
Table 5: Average Prediction for BERTLARGE, BioBert, ClinicalBert and Max Ensemble Results with and without Preprocessor
['[ITALIC] No preprocessor', '[BOLD] BERT', '[BOLD] BioBERT', '[BOLD] ClinicalBERT', '[BOLD] Max Ensemble']
[['[ITALIC] F1-score', '0.6446', '0.5915', '0.5809', '0.6378'], ['Precision', '0.6695', '0.6200', '0.6180', '0.5663'], ['Recall', '0.6214', '0.5655', '0.5479', '0.7300'], ['[ITALIC] Preprocessor', '[BOLD] BERT PP', '[BOLD] BioBERT PP', '[BOLD] ClinicalBERT PP', '[BOLD] Max Ensemble PP'], ['[ITALIC] F1-score', '0.6475', '0.6153', '0.6212', '0.6681'], ['Precision', '0.6907', '0.6577', '0.6360', '0.5900'], ['Recall', '0.6097', '0.5793', '0.6076', '0.7700']]
However, the difference in performance lessened considerably when the preprocessor was applied, with ClinicalBERT in particular observing improved predictions, with F1-score increasing from 0.58 to 0.62 and recall increasing from 0.55 to 0.61. Despite the poorer performance of BioBERT and ClinicalBERT, ensemble methods demonstrate that these alternative representations capture additional information which BERTLARGE does not.
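We read the “Max Ensemble” as taking, per tweet, the highest positive-class probability across the three models; the sketch below reflects that reading, and the 0.5 decision threshold is our assumption.

    import numpy as np

    def max_ensemble(model_probs, threshold=0.5):
        """model_probs: list of arrays of positive-class probabilities, one array per model."""
        probs = np.stack(model_probs)        # (n_models, n_examples)
        ensembled = probs.max(axis=0)        # flag a tweet if any model is confident
        return (ensembled >= threshold).astype(int)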
Detecting Adverse Drug Reactions from Twitter through Domain-Specific Preprocessing and BERT Ensembling
2005.06634
Table 3: Average Prediction on Test set for Baseline BERT
['[EMPTY]', '[BOLD] 1', '[BOLD] 2', '[BOLD] 3', '[BOLD] 4', '[BOLD] 5', '[BOLD] Avg', '[BOLD] Chen1']
[['[ITALIC] F1', '0.59', '0.63', '0.62', '0.63', '0.62', '[BOLD] 0.618', '0.618'], ['P', '0.64', '0.66', '0.64', '0.66', '0.67', '[BOLD] 0.654', '0.646'], ['R', '0.55', '0.61', '0.61', '0.60', '0.57', '[BOLD] 0.587', '0.593']]
Our baseline model uses the SMM4H winning team’s model parameters but without their corpus for retraining, reported as “BERT_noRetrained” in Chen et al. Some runs produced zero scores, so we ran the model multiple times and only included the results of the first five models with non-zero scores as our baseline. Notably, the F1-score is nearly identical to that reported by Chen et al. Due to this convergence issue, we decreased our learning rate from 5e-5 to 2e-5 for our subsequent models.
Syntactic Scaffolds for Semantic Structures
1808.10485
Table 1: Frame SRL results on the test set of FrameNet 1.5., using gold frames. Ensembles are denoted by †.
['[BOLD] Model', '[BOLD] Prec.', '[BOLD] Rec.', '[ITALIC] F1']
[['Kshirsagar et\xa0al. ( 2015 )', '66.0', '60.4', '63.1'], ['Yang and Mitchell ( 2017 ) (Rel)', '71.8', '57.7', '64.0'], ['Yang and Mitchell ( 2017 ) (Seq)', '63.4', '66.4', '64.9'], ['†Yang and Mitchell ( 2017 ) (All)', '70.2', '60.2', '65.5'], ['Semi-CRF baseline', '67.8', '66.2', '67.0'], ['+ constituent identity', '68.1', '67.4', '67.7'], ['+ nonterminal and parent', '68.8', '68.2', '68.5'], ['+ nonterminal', '69.4', '68.0', '68.7'], ['+ common nonterminals', '69.2', '69.0', '[BOLD] 69.1']]
We follow the official evaluation from the SemEval shared task for frame-semantic parsing Baker et al. Our semi-CRF baseline outperforms all prior work, without any syntax. This highlights the benefits of modeling spans and of global normalization. Contemporaneously with this work, Peng et al. also report results on this task. We evaluated their output for argument identification only; our semi-CRF baseline model exceeds their performance by 1 F1, and our common-nonterminal scaffold by 3.1 F1.
Humor in Collective Discourse: Unsupervised Funniness Detection in the New Yorker Cartoon Caption Contest
1506.08126
Table 1: Comparison between the methods. Score s4 corresponds to pairs for which the seven judges agreed more significantly (a difference of 4+ votes). Score s3 requires a difference of 3+ votes. Score s includes all pairs (about 850 per method, minus a small number of errors). The best methods (CU2R, CU3, OR2, and CU2) are in bold.
['[BOLD] Category', '[BOLD] Code', '[BOLD] Method', '[ITALIC] n4', '[ITALIC] s4', '[ITALIC] n3', '[ITALIC] s3', '[ITALIC] n', '[ITALIC] s']
[['Centrality', 'OR1R', 'least similar to centroid', '308', '-2.73', '453', '-2.14', '846', '-1.26'], ['[EMPTY]', '[BOLD] OR2', 'highest lexrank', '302', '[BOLD] 1.39', '457', '[BOLD] 1.11', '846', '[BOLD] 0.59'], ['[EMPTY]', 'OR2R', 'smallest lexrank', '317', '-0.61', '450', '-0.58', '846', '-0.29'], ['[EMPTY]', 'OR3R', 'small cluster', '468', '-4.40', '581', '-3.94', '848', '-2.85'], ['[EMPTY]', 'OR4', 'tfidf', '474', '-4.93', '596', '-4.36', '850', '-3.24'], ['New Yorker', '[BOLD] NY1', 'official winner', '314', '[BOLD] 3.57', '466', '[BOLD] 2.96', '847', '[BOLD] 1.78'], ['[EMPTY]', '[BOLD] NY2', 'official runner up', '330', '[BOLD] 3.24', '463', '[BOLD] 2.60', '845', '[BOLD] 1.54'], ['[EMPTY]', '[BOLD] NY3', 'official third place', '276', '[BOLD] 2.29', '435', '[BOLD] 1.57', '842', '[BOLD] 0.89'], ['General', 'GE1', 'syntactically complex', '268', '-0.10', '406', '-0.14', '846', '-0.70'], ['[EMPTY]', 'GE2', 'concrete', '259', '-0.33', '427', '-0.41', '844', '-0.26'], ['[EMPTY]', 'GE3R', 'well formatted', '296', '0.81', '446', '0.61', '846', '0.31'], ['Content', 'CU1', 'freebase', '290', '0.26', '424', '0.17', '840', '0.07'], ['[EMPTY]', '[BOLD] CU2', 'positive sentiment', '268', '[BOLD] 1.21', '396', '[BOLD] 0.83', '836', '[BOLD] 0.46'], ['[EMPTY]', '[BOLD] CU2R', 'negative sentiment', '298', '[BOLD] 1.69', '445', '[BOLD] 1.30', '826', '[BOLD] 0.70'], ['[EMPTY]', '[BOLD] CU3', 'people', '276', '[BOLD] 1.45', '409', '[BOLD] 1.24', '834', '[BOLD] 0.68'], ['Control', 'CO2', 'antijoke', '259', '0.27', '394', '-0.04', '822', '-0.09']]
Each evaluation (ni, si pair) corresponds to the number of votes in favor of the given method minus the number of votes against. So the first set corresponds to pairs in which, out of seven judges, there was a difference of at least 4 votes in favor of one or the other caption. This level of significant agreement happened in 5,594/15,154 cases (36.9% of the time). A difference of at least 3 votes happened in 8,131/15,154 pairs (53.6%). The third evaluation corresponds to all pairwise comparisons, including ties. ni refers to the number of times the above constraint for i is met and score si is calculated by averaging the number of votes in favor minus the number of votes against for each ni. The probability that a random process will generate a difference of at least 4 votes (excluding ties) is 12.5%.
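As a sanity check on the 12.5% figure, assume each of the seven judges independently picks one of the two captions uniformly at random; a margin of at least 4 votes then means a 6-1 or 7-0 split, so

    \[
    P\big(|v_A - v_B| \ge 4\big)
      = \frac{2\left[\binom{7}{0} + \binom{7}{1}\right]}{2^{7}}
      = \frac{2(1+7)}{128}
      = \frac{16}{128}
      = 0.125 .
    \]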
Variational Question-Answer Pair Generation for Machine Reading Comprehension
2004.03238
Table 3: Results for answer extraction on the test set. For all the metrics, higher is better.
['[EMPTY]', 'Relevance Precision', 'Relevance Precision', 'Relevance Recall', 'Relevance Recall', 'Diversity Dist']
[['[EMPTY]', 'Prop.', 'Exact', 'Prop.', 'Exact', 'Dist'], ['NER', '34.44', '19.61', '64.60', '45.39', '30.0k'], ['BiLSTM-CRF w/ char w/ NER (Du18)', '45.96', '33.90', '41.05', '28.37', '-'], ['VQAG', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['{\\rm C=0}', '[BOLD] 58.39', '[BOLD] 47.15', '21.82', '16.38', '3.1k'], ['{\\rm C=5}', '30.16', '13.41', '[BOLD] 83.13', '[BOLD] 60.88', '71.2k'], ['{\\rm C=20}', '21.95', '5.75', '72.26', '42.15', '[BOLD] 103.3k'], ['{\\rm C=100}', '23.32', '7.48', '71.74', '39.70', '84.6k']]
Our model with the condition {\rm C=5} performed the best in terms of the recall scores, while surpassing NER in terms of diversity. From the viewpoint of diversity, {\rm C=20} is the best setting. However, high Dist scores do not occur together with high recall scores; this observation shows the trade-off between diversity and quality. In this task, we show that our model with {\rm C=5} can cover most of the human-created answers and also extract more diverse answers than the baselines. However, when {\rm C=0}, the Dist score is fairly low. This points to the posterior collapse issue, even though the precision scores are the best.
Variational Question-Answer Pair Generation for Machine Reading Comprehension
2004.03238
Table 2: QA pair modeling capacity measured on the test set. NLL: negative log likelihood (-\log p(q,a|c)). {\rm NLL}_{a}=-\log p(a|c), {\rm NLL}_{q}=-\log p(q|a,c). D_{{\rm KL}_{z}} and D_{{\rm KL}_{y}} are Kullback–Leibler divergence between the approximate posterior and the prior of the latent variable z and y. The lower NLL is, the higher the probability is that the model assigns to the test set. NLL for our models are estimated with importance sampling using 300 samples.
['[EMPTY]', 'NLL', '{\\rm NLL}_{a}', '{\\rm NLL}_{q}', 'D_{{\\rm KL}_{z}}', 'D_{{\\rm KL}_{y}}']
[['Pipeline', '36.26', '3.99', '32.50', '-', '-'], ['VQAG', 'VQAG', 'VQAG', 'VQAG', '[EMPTY]', '[EMPTY]'], ['{\\rm C=0}', '[BOLD] 34.46', '4.46', '30.00', '0.027', '0.036'], ['{\\rm C=5}', '37.00', '5.15', '31.51', '4.862', '4.745'], ['{\\rm C=20}', '59.66', '14.38', '43.56', '17.821', '17.038'], ['{\\rm C=100}', '199.43', '81.01', '112.37', '92.342', '91.635']]
We also observe that this posterior collapse issue arises when implementing our model according to the original inequality (the unmodified lower bound). To mitigate this problem, inspired by Prokhorov et al. (2019), we use the modified \beta-VAE proposed by Burgess et al. (2018), which uses two hyperparameters to control the KL terms. Our modified variational lower bound is as follows: \log p_{\theta}(q,a|c)\geq\mathbb{E}_{z,y\sim q_{\phi}(z,y|q,a,c)}\big[\log p_{\theta}(q|y,a,c)+\log p_{\theta}(a|z,c)\big]-\beta|D_{\rm KL}(q_{\phi}(z|a,c)||p_{\theta}(z|c))-C|-\beta|D_{\rm KL}(q_{\phi}(y|q,c)||p_{\theta}(y|c))-C|, (3) where \beta>0 and C\geq 0. We use the same \beta and C for the two KL terms for simplicity. First, our models with {\rm C=0} are superior to the pipeline model, which means that introducing latent random variables aids QA pair modeling capacity. However, the KL terms converge to zero with {\rm C=0}; in the other tasks we show that our model with {\rm C=0} collapses into a deterministic model. The fact that {\rm NLL_{a}} is consistently lower than {\rm NLL_{q}} is due to the decomposition of the probabilities p(a|c)=p(c_{end}|c_{start},c)p(c_{start}|c) and p(q|a,c)=\prod_{i}p(q_{i}|q_{1:i-1},a,c), which is sensitive to the sequence length. We also observe that the hyperparameter C can control the KL values, showing the potential to avoid the posterior collapse issue in our case. When we set C>0, the KL values are greater than 0, which implies that the latent variables carry non-trivial information about questions and answers.
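A minimal PyTorch-style sketch of the modified objective in Eq. (3); it is a hypothetical illustration, not the authors' implementation, and assumes that log_p_q, log_p_a, kl_z, and kl_y have already been computed by the encoder/decoder networks:

```python
import torch

def modified_elbo(log_p_q: torch.Tensor,   # log p(q | y, a, c)
                  log_p_a: torch.Tensor,   # log p(a | z, c)
                  kl_z: torch.Tensor,      # D_KL(q(z|a,c) || p(z|c))
                  kl_y: torch.Tensor,      # D_KL(q(y|q,c) || p(y|c))
                  beta: float = 1.0,
                  C: float = 5.0) -> torch.Tensor:
    """Lower bound on log p(q, a | c) with the |KL - C| penalty of Burgess et al.
    Setting C > 0 keeps the KL terms away from zero, which is how posterior
    collapse is mitigated; C = 0 recovers the standard beta-VAE-style bound."""
    reconstruction = log_p_q + log_p_a
    kl_penalty = beta * (kl_z - C).abs() + beta * (kl_y - C).abs()
    return reconstruction - kl_penalty  # maximize this (minimize its negative)
```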
COCO-CN for Cross-Lingual Image Tagging, Captioning and Retrieval
1805.08661
TABLE IV: Automated evaluation of different models for image tagging. Cascading MLP learned from cross-lingual data is the best.
['[BOLD] Model', '[BOLD] Precision', '[BOLD] Recall', '[BOLD] F-measure']
[['Clarifai\xa0', '0.217', '0.261', '0.228'], ['MLP trained on COCO-MT', '0.432', '0.525', '0.456'], ['MLP trained on COCO-CN', '0.477', '0.576', '0.503'], ['Multi-task MLP', '0.482', '0.583', '0.508'], ['Cascading MLP', '[BOLD] 0.491', '[BOLD] 0.594', '[BOLD] 0.517']]
The proposed Cascading MLP tops the performance. Although the number of training images in COCO-MT is 6.6 times as large as in COCO-CN, the MLP trained on COCO-CN outperforms its COCO-MT counterpart by a clear margin. This result shows the importance of high-quality annotation for training. Some image tagging results are presented in Table VII. In the first row, the tag umbrella is not predicted by the MLP trained on COCO-CN, while in the second row, the MLP trained on COCO-MT incorrectly predicts the tag keyboard. Learning from the two complementary datasets, Cascading MLP makes better predictions in general.
COCO-CN for Cross-Lingual Image Tagging, Captioning and Retrieval
1805.08661
TABLE V: Human evaluation of different models for image tagging on COCO-CN test100. Cascading MLP again performs the best.
['[BOLD] Model', '[BOLD] Precision', '[BOLD] Recall', '[BOLD] F-measure']
[['Clarifai', '0.634', '0.358', '0.451'], ['MLP trained on COCO-MT', '0.778', '0.453', '0.563'], ['MLP trained on COCO-CN', '0.836', '0.488', '0.607'], ['Cascading MLP', '[BOLD] 0.858', '[BOLD] 0.501', '[BOLD] 0.623']]
Human evaluation. It is possible that the lower performance of Clarifai is caused by a discrepancy between the Chinese vocabulary of the online service and our ground-truth vocabulary. To resolve this uncertainty, we performed a user study as follows. For each of the 100 pre-specified images, we collected the top 5 tags predicted by each model. Eight subjects participated in the user study. Each image together with the collected tags was shown to two subjects, who independently rated each tag as relevant, irrelevant or unsure. To avoid bias, the tags were randomly shuffled in advance. Only tags rated as relevant by both subjects were preserved. As this ground truth is more complete, all the scores improve. Cascading MLP again performs the best. Moreover, the qualitative conclusion concerning which model performs better remains the same.
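A small sketch of the user-study scoring described above (hypothetical helpers, not the released evaluation script): tags rated relevant by both subjects form an expanded ground truth, against which each model's top-5 tags are scored.

```python
def agreed_relevant(tags, ratings_a, ratings_b):
    """Keep only tags that both subjects independently rated as relevant.
    ratings_a / ratings_b: dicts tag -> 'relevant' / 'irrelevant' / 'unsure'."""
    return {t for t in tags
            if ratings_a.get(t) == "relevant" and ratings_b.get(t) == "relevant"}

def precision_recall_f(predicted, ground_truth):
    """Score one model's top-5 tags for one image against the verified tags."""
    predicted, ground_truth = set(predicted), set(ground_truth)
    hits = len(predicted & ground_truth)
    precision = hits / len(predicted) if predicted else 0.0
    recall = hits / len(ground_truth) if ground_truth else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure
```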
COCO-CN for Cross-Lingual Image Tagging, Captioning and Retrieval
1805.08661
TABLE VIII: Automated evaluation of image captioning models trained on different datasets. The proposed cross-lingual transfer performs the best.
['[BOLD] Training', '[BOLD] BLEU-4', '[BOLD] METEOR', '[BOLD] ROUGE-L', '[BOLD] CIDEr']
[['Flickr8k-CN', '10.1', '14.9', '33.8', '22.9'], ['AIC-ICC', '7.4', '21.3', '34.2', '24.6'], ['COCO-MT', '30.2', '27.1', '50.0', '86.2'], ['COCO-CN', '31.7', '27.2', '52.0', '84.6'], ['COCO-Mixed', '29.8', '28.6', '50.3', '86.8'], ['Transfer learning ', '33.7', '28.2', '52.9', '89.2'], ['Artificial token ', '31.8', '26.9', '51.5', '85.4'], ['[ITALIC] Sequential Learning', '[BOLD] 36.7', '[BOLD] 29.5', '[BOLD] 55.0', '[BOLD] 98.4']]
Models trained on Flickr8k-CN and AIC-ICC have fairly low scores. The relatively limited size of Flickr8k-CN makes it insufficient for training a good captioning model. As for AIC-ICC, although it is the largest Chinese captioning dataset, it is strongly biased in that all of its images are about human beings. Consequently, the model trained on this dataset is unsuitable for describing general images. The model trained on COCO-CN is on par with the model trained on machine-translated sentences of the full MS-COCO set. As for COCO-Mixed, since the manually written sentences from COCO-CN are overwhelmed by the machine-translated sentences from COCO-MT during training, the benefit of COCO-CN appears to be marginal. By contrast, Sequential Learning is quite effective, performing best under all metrics.
Identifying Protein-Protein Interaction using Tree LSTM and Structured Attention
1808.03227
TABLE IV: Cross-corpus results (F-score in %). Rows correspond to training corpora and columns to testing. Models marked with † represents tLSTM and ‡ represents tLSTM+tAttn
['[EMPTY]', '[ITALIC] AIMed', '[ITALIC] BioInfer', '[ITALIC] IEPA', '[ITALIC] HPRD50', '[ITALIC] LLL']
[['AIMed †', '−', '47.0', '38.6', '41.5', '34.6'], ['AIMed ‡', '−', '45.0', '37.9', '39.1', '33.5'], ['BioInfer †', '50.8', '−', '40.8', '43.7', '35.0'], ['BioInfer ‡', '50.0', '−', '40.0', '45.5', '33.5']]
Rows correspond to the training corpora and columns to the test corpora. Performance degrades on all of the corpora because the training and test sets are not drawn from the same distribution, which violates the fundamental machine learning assumption that training and test data are identically distributed. Because BioInfer is larger, the models trained on it perform better than the models trained on AIMed. Another interesting aspect of our evaluation is that the models without attention perform better than the models with attention. The main reason is that our structured attention captures the syntactic dependencies in the sentences, and because the training and test sets come from two different distributions, the attention mechanism fails to capture these dependencies.
Automatic Disambiguation ofFrench Discourse Connectives
1704.05162
Table 6: Information Gain of Each Feature in the Disambiguation of Discourse Connectives
['[EMPTY]', '[BOLD] Feature', '[BOLD] French', '[BOLD] English']
[['[ITALIC] Lexical:', '[ITALIC] Conn', '0.352', '0.351'], ['[ITALIC] Syntactic:', '[ITALIC] SelfCat', '0.167', '0.468'], ['[EMPTY]', '[ITALIC] SelfCatLeftSibling', '0.108', '0.145'], ['[EMPTY]', '[ITALIC] SelfCatParent', '0.093', '0.292'], ['[EMPTY]', '[ITALIC] Pos', '0.045', '0.119'], ['[EMPTY]', '[ITALIC] SelfCatRightSibling', '0.032', '0.085']]
To evaluate the contribution of each feature, we ranked the features by their information gain for both languages. For example, the SelfCat feature has a significantly lower information gain than the Conn feature for the disambiguation of French discourse connectives, while this is not the case for English discourse connectives. This seems to indicate that English discourse connectives tend to appear in more restricted syntactic contexts than French discourse connectives.
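For reference, a minimal sketch (hypothetical, not the authors' code) of the information gain used to rank features, IG(Y; X) = H(Y) - H(Y | X), where Y is the connective's discourse/non-discourse label and X is a feature such as Conn or SelfCat:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG = H(Y) - H(Y | X) for one feature over the training instances."""
    n = len(labels)
    h_y = entropy(labels)
    h_y_given_x = 0.0
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        h_y_given_x += (len(subset) / n) * entropy(subset)
    return h_y - h_y_given_x
```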
Creative Procedural-Knowledge Extraction From Web Design Tutorials
1904.08587
Table 5: BLEU scores for the usage summarization task evaluated in the validation and testing sets. The best performance in either set is bolded. The naming schema of different algorithms: [number of layers]-layer-[whether or not the attention mechanism Bahdanau et al. (2014) is applied].
['dropout', 'validation 0', 'validation 0.2', 'validation 0.5', 'testing 0', 'testing 0.2', 'testing 0.5']
[['1-layer', '11.73', '12.29', '13.49', '10.33', '11.71', '12.17'], ['1-layer-att', '18.45', '19.18', '[BOLD] 21.53', '17.24', '17.84', '[BOLD] 19.70'], ['2-layer', '11.37', '11.97', '13.16', '10.03', '11.30', '12.56'], ['2-layer-att', '16.37', '16.23', '17.18', '14.83', '15.27', '16.85']]
A usage summarization module takes a raw sentence as input and generates a command usage summary. A natural model for this task is the sequence-to-sequence model (Sutskever et al.), in which an encoder builds a representation of the input sentence and a separate RNN-based decoder takes this representation as input and sequentially generates a list of words as the summary. We experiment with the Neural Machine Translation (NMT) model (Bahdanau et al.), varying (1) whether the attention mechanism is applied, (2) the dropout rate, and (3) the number of recurrent layers. To evaluate summarization performance, sentences that contain at least one action (43,582 out of 94,022 sentences satisfy this requirement) are randomly divided into a training set (23,582 sentences), a validation set (10,000 sentences), and a testing set (10,000 sentences); each sentence corresponds to one or more reference summaries. NMT models are trained on the training set (batch size: 128, iterations: 100,000) using the Adam optimizer (Kingma and Ba). Finally, the models' performance is reported as testing BLEU scores, as shown in Table 5.
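A rough sketch of the experiment grid implied by Table 5 and the training setup above; this is hypothetical scaffolding, not the paper's released code, and train_fn / bleu_fn are assumed wrappers around an attention-based encoder-decoder and a BLEU scorer:

```python
from itertools import product

# layers x attention x dropout, matching the naming schema in the table caption
configs = [
    {"layers": layers, "attention": attention, "dropout": dropout}
    for layers, attention, dropout in product([1, 2], [False, True], [0.0, 0.2, 0.5])
]

def run_experiments(train_set, valid_set, test_set, train_fn, bleu_fn):
    """train_fn(config, data) -> model; bleu_fn(model, data) -> BLEU score."""
    results = []
    for cfg in configs:
        model = train_fn(cfg, train_set)          # batch size 128, 100k iterations
        results.append((cfg, bleu_fn(model, valid_set), bleu_fn(model, test_set)))
    best = max(results, key=lambda r: r[1])       # select by validation BLEU
    return results, best
```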
Sound-Word2Vec: Learning Word Representations Grounded in Sounds
1703.01720
Table 1: Text-based sound retrieval (higher is better). We find that our sound-word2vec model outperforms all baselines.
['Embedding', 'Recall @1', 'Recall @10', 'Recall @50', 'Recall @100']
[['word2vec', '6.47±0.00', '14.25±0.05', '21.72±0.12', '26.03±0.22'], ['tag-word2vec', '6.95±0.02', '15.10±0.03', '22.43±0.09', '27.21±0.24'], ['sound-word2vec(r)', '6.49±0.00', '14.98±0.03', '21.96±0.11', '26.43±0.20'], ['(Lopopolo and van Miltenburg, 2015 )', '6.48±0.02', '15.09±0.05', '21.82±0.13', '26.89±0.23'], ['(Kiela and Clark, 2015 )', '6.52±0.01', '15.21±0.03', '21.92±0.08', '27.74±0.21'], ['sound-word2vec', '[BOLD] 7.11±0.02', '[BOLD] 15.88±0.04', '[BOLD] 23.14±0.09', '[BOLD] 28.67±0.17']]
Given a textual description of a sound as the query, we compare it with the tags associated with sounds in the database to retrieve the sound with the closest matching tags. Note that this is a purely textual task, albeit one that needs awareness of sound. In a sense, this task captures exactly what we want our model to be able to do – bridge the semantic gap between language and sound. We use the training split for this task. For retrieval, we represent sounds by averaging the learnt embeddings of the associated tags. We embed the caption provided for the sound (in the Freesound database) in the same manner and use it as the query. We then rank sounds by the cosine similarity between the tag and query representations. We evaluate using standard retrieval metrics – Recall@{1,10,50,100}. Note that the entire testing set (≈10k sounds) is present in the retrieval pool, so Recall@100 corresponds to obtaining the correct result in the top 1% of the search results, which is a relatively stringent evaluation criterion. Results are shown in Table 1. Among our approaches, tag-word2vec performs second best – this is intuitive, since the tag distributions implicitly capture auditory relatedness (a sound may have tags cat and meow), while word2vec and sound-word2vec(r) have the lowest performance.
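A minimal numpy sketch of the retrieval setup (hypothetical, not the authors' code): sounds are represented by the average embedding of their tags, the query by the average embedding of the caption words, and sounds are ranked by cosine similarity; recall@k then checks whether the true sound appears in the top k.

```python
import numpy as np

def average_embedding(words, emb):
    """emb: dict word -> vector; unknown words are skipped."""
    vecs = [emb[w] for w in words if w in emb]
    dim = next(iter(emb.values())).shape
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def recall_at_k(query_words, sound_tags, true_idx, emb, k):
    """sound_tags: list of tag lists, one per sound in the retrieval pool."""
    q = average_embedding(query_words, emb)
    S = np.stack([average_embedding(tags, emb) for tags in sound_tags])
    sims = S @ q / (np.linalg.norm(S, axis=1) * np.linalg.norm(q) + 1e-9)
    ranking = np.argsort(-sims)               # highest similarity first
    return true_idx in ranking[:k]
```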
Sound-Word2Vec: Learning Word Representations Grounded in Sounds
1703.01720
Table 2: Comparison to state of the art AMEN and ASLex datasets (Kiela and Clark, 2015) (higher is better). Our approach performs better than Kiela and Clark (2015).
['Embedding', 'Spearman Correlation [ITALIC] ρs AMEN', 'Spearman Correlation [ITALIC] ρs ASLex']
[['(Lopopolo and van Miltenburg, 2015 )', '0.410±0.09', '0.237±0.04'], ['(Kiela and Clark, 2015 )', '0.648±0.08', '0.366±0.11'], ['sound-word2vec', '[BOLD] 0.674±0.05', '[BOLD] 0.391±0.06']]
In addition to enlarging the vocabulary, the pre-training helps induce smoothness in the sound-word2vec embeddings – allowing us to transfer semantics learnt from sounds to words that were not present as tags in the Freesound database. Indeed, we find that word2vec pre-training helps improve performance.
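A short sketch (hypothetical, not the authors' evaluation script) of how word-relatedness benchmarks such as AMEN and ASLex are typically scored: Spearman correlation between human ratings and the cosine similarity of the learned embeddings.

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_relatedness(pairs, human_scores, emb):
    """pairs: list of (word1, word2); human_scores: gold relatedness ratings;
    emb: dict word -> vector. Pairs with out-of-vocabulary words are skipped."""
    model_scores, gold = [], []
    for (w1, w2), s in zip(pairs, human_scores):
        if w1 in emb and w2 in emb:
            v1, v2 = emb[w1], emb[w2]
            cos = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
            model_scores.append(cos)
            gold.append(s)
    rho, _ = spearmanr(model_scores, gold)
    return rho
```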
Improving text classification with vectors of reduced precisionThis research was supported in part by Faculty of Management and Social Communication of the Jagiellonian University and PLGrid Infrastructure.
1706.06363
TABLE V: Times of training and testing for the classifiers on the corpora without or with SVD with different number of components.
['[BOLD] Classifier', '[BOLD] Variant', '[BOLD] Training (seconds) webkb', '[BOLD] Training (seconds) r8', '[BOLD] Training (seconds) r52', '[BOLD] Training (seconds) ng20', '[BOLD] Training (seconds) cade', '[BOLD] Testing (seconds) webkb', '[BOLD] Testing (seconds) r8', '[BOLD] Testing (seconds) r52', '[BOLD] Testing (seconds) ng20', '[BOLD] Testing (seconds) cade']
[['KNN 1', 'no SVD', '1.44', '0.44', '0.48', '3.22', '24.86', '0.50', '0.50', '0.66', '3.56', '11.78'], ['KNN 1', 'SVD(100)', '10.48', '10.32', '10.23', '17.23', '55.77', '0.54', '0.38', '0.43', '1.54', '9.05'], ['KNN 1', 'SVD(500)', '51.12', '55.34', '55.38', '78.15', '204.10', '0.57', '0.48', '0.60', '2.23', '12.96'], ['KNN 1', 'SVD(1000)', '94.20', '101.02', '103.53', '146.47', '382.13', '0.67', '0.68', '0.88', '3.17', '18.45'], ['KNN 5', 'no SVD', '1.44', '0.44', '0.48', '3.22', '24.86', '0.52', '0.57', '0.76', '3.99', '13.27'], ['KNN 5', 'SVD(100)', '10.48', '10.32', '10.23', '17.23', '55.77', '0.59', '0.46', '0.54', '1.97', '10.83'], ['KNN 5', 'SVD(500)', '51.13', '55.34', '55.38', '78.15', '204.11', '0.65', '0.55', '0.70', '2.65', '14.80'], ['KNN 5', 'SVD(1000)', '94.22', '101.02', '103.53', '146.47', '382.13', '0.74', '0.74', '0.98', '3.54', '20.32'], ['Logistic Regression', 'no SVD', '1.56', '0.79', '3.03', '11.14', '45.39', '0.35', '0.11', '0.12', '0.79', '5.39'], ['Logistic Regression', 'SVD(100)', '10.57', '10.73', '13.18', '22.41', '64.44', '0.40', '0.14', '0.15', '0.83', '5.50'], ['Logistic Regression', 'SVD(500)', '51.69', '57.76', '72.39', '95.28', '237.10', '0.39', '0.16', '0.18', '1.00', '5.90'], ['Logistic Regression', 'SVD(1000)', '95.31', '105.60', '135.25', '177.72', '450.09', '0.43', '0.20', '0.24', '1.21', '6.40'], ['SVM', 'no SVD', '1.64', '0.72', '1.38', '6.78', '38.90', '0.35', '0.11', '0.12', '0.79', '5.39'], ['SVM', 'SVD(100)', '10.58', '10.49', '11.30', '18.61', '60.38', '0.40', '0.14', '0.15', '0.83', '5.50'], ['SVM', 'SVD(500)', '51.66', '56.33', '59.63', '84.52', '227.25', '0.39', '0.16', '0.18', '1.00', '5.90'], ['SVM', 'SVD(1000)', '95.37', '103.28', '112.41', '159.62', '437.49', '0.43', '0.20', '0.24', '1.21', '6.40']]
SVD is the most time-consuming phase of training compared to classification. However, it can reduce testing time. The testing time of the KNN classifiers is higher than that of the other classifiers because it is proportional to the number of documents. The time needed for the precision reduction itself is negligible.
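A compact sketch of the timing setup using scikit-learn (a hypothetical reconstruction, not the paper's code): TF-IDF vectors are optionally reduced with truncated SVD, and training/testing times are measured for the four classifiers in the table.

```python
import time
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

def time_classifier(clf, X_train, y_train, X_test, n_components=None):
    if n_components is not None:                 # e.g. 100, 500, or 1000
        svd = TruncatedSVD(n_components=n_components)
        X_train = svd.fit_transform(X_train)     # dominates training time
        X_test = svd.transform(X_test)
    t0 = time.time()
    clf.fit(X_train, y_train)
    train_time = time.time() - t0
    t0 = time.time()
    clf.predict(X_test)                          # KNN cost grows with corpus size
    test_time = time.time() - t0
    return train_time, test_time

classifiers = {
    "KNN 1": KNeighborsClassifier(n_neighbors=1),
    "KNN 5": KNeighborsClassifier(n_neighbors=5),
    "Logistic Regression": LogisticRegression(),
    "SVM": LinearSVC(),
}
```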
Multi-Field Structural Decomposition for Question Answering
1604.00938
Table 1: Results from our question-answering system on 8 types of questions in the bAbI tasks.
['Type', 'Lexical [ITALIC] λ=1', 'Lexical [ITALIC] λ=1', 'Lexical [ITALIC] λ is learned', 'Lexical [ITALIC] λ is learned', 'Lexical + Syntax [ITALIC] λ=1', 'Lexical + Syntax [ITALIC] λ=1', 'Lexical + Syntax [ITALIC] λ is learned', 'Lexical + Syntax [ITALIC] λ is learned', 'Lexical + Syntax + Semantics [ITALIC] λ=1', 'Lexical + Syntax + Semantics [ITALIC] λ=1', 'Lexical + Syntax + Semantics [ITALIC] λ is learned', 'Lexical + Syntax + Semantics [ITALIC] λ is learned']
[['Type', 'MAP', 'MRR', 'MAP', 'MRR', 'MAP', 'MRR', 'MAP', 'MRR', 'MAP', 'MRR', 'MAP', 'MRR'], ['1 (qa1)', '39.62', '61.73', '39.62', '61.73', '29.90', '48.05', '40.50', '61.47', '72.60', '85.07', '[BOLD] 100.0', '[BOLD] 100.0'], ['2 (qa4)', '62.90', '81.45', '62.90', '81.45', '64.00', '82.00', '64.00', '82.00', '55.70', '77.85', '[BOLD] 64.10', '[BOLD] 82.05'], ['3 (qa5)', '37.10', '54.00', '38.20', '54.70', '48.00', '62.15', '48.40', '62.25', '72.60', '82.65', '[BOLD] 94.20', '[BOLD] 96.33'], ['4 (qa6)', '64.00', '75.07', '64.00', '75.07', '65.80', '78.47', '66.10', '78.53', '78.20', '88.33', '[BOLD] 89.30', '[BOLD] 94.27'], ['5 (qa9)', '47.90', '63.50', '48.10', '63.62', '47.90', '63.67', '50.50', '65.47', '53.90', '67.88', '[BOLD] 94.40', '[BOLD] 96.72'], ['6 (qa10)', '47.80', '63.78', '47.90', '63.92', '49.20', '65.52', '50.20', '66.33', '57.60', '70.68', '[BOLD] 96.90', '[BOLD] 98.23'], ['7 (qa12)', '19.20', '38.68', '19.20', '38.68', '25.10', '40.83', '31.90', '49.82', '55.00', '70.60', '[BOLD] 99.60', '[BOLD] 99.80'], ['8 (qa20)', '37.10', '51.82', '37.10', '51.82', '31.40', '42.00', '35.70', '44.22', '31.20', '46.50', '[BOLD] 42.80', '[BOLD] 56.32'], ['Avg.', '44.45', '61.25', '44.63', '61.37', '45.16', '60.34', '48.41', '63.76', '59.60', '73.70', '[BOLD] 85.16', '[BOLD] 90.47']]
The MAP and MRR show a clear correlation with the number of active fields. For the majority of tasks, using only the lexical fields does not perform well. The fictional stories in this data often contain multiple occurrences of the same lexicons, and the lexical fields alone are not able to select the correct answer. The significantly lower accuracy on the last task is due to the fact that, although the answer is located within a single sentence, multiple passages must be considered for a single question in order to correctly locate the sentence containing the answer. Lexical fields coupled with only syntactic fields do not perform much better. This may be because the syntactic fields, which contain ordinary dependency labels, do not provide sufficient contextual information and thus do not generate enough features for statistical learning to capture the specific characteristics of the context. A significant improvement, however, is reached when the semantic fields are added, as they provide a deeper understanding of the context.
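For clarity, a small sketch (hypothetical, not the authors' code) of the two reported metrics: for each question, candidate sentences are ranked and scored by the ranks of the sentences that actually contain the answer.

```python
def mean_reciprocal_rank(rankings):
    """rankings: per question, a list of 0/1 relevance flags in ranked order."""
    rr = [1.0 / (flags.index(1) + 1) if 1 in flags else 0.0 for flags in rankings]
    return sum(rr) / len(rr)

def mean_average_precision(rankings):
    aps = []
    for flags in rankings:
        hits, precisions = 0, []
        for i, rel in enumerate(flags, start=1):
            if rel:
                hits += 1
                precisions.append(hits / i)   # precision at each relevant rank
        aps.append(sum(precisions) / hits if hits else 0.0)
    return sum(aps) / len(aps)
```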
Multi-label Dataless Text Classification with Topic Modeling
1711.01563
Table 2: Performance comparison on the two datasets. The best and the second best results by dataless classifiers are highlighted in boldface and underlined respectively. #F1: Macro-F1 score; #AUC: Macro-AUC score.
['Method', 'Ohsumed # [ITALIC] F1', 'Ohsumed # [ITALIC] AUC', 'Delicious # [ITALIC] F1', 'Delicious # [ITALIC] AUC']
[['SVM', '0.629', '0.921', '0.461', '0.846'], ['L-LDA', '0.520', '0.861', '0.401', '0.763'], ['MLTM', '0.463', '0.874', '0.286', '0.780'], ['SVM [ITALIC] s', '0.418', '0.789', '0.340', '0.754'], ['L-LDA [ITALIC] s', '0.411', '0.818', '0.321', '0.745'], ['MLTM [ITALIC] s', '0.278', '0.805', '0.296', '0.781'], ['ESA', '0.424', '0.851', '0.343', '0.775'], ['WMD', '0.264', '0.753', '0.268', '0.783'], ['DescLDA', '0.358', '0.781', '0.297', '0.743'], ['SMTM', '[BOLD] 0.480', '[BOLD] 0.872', '[BOLD] 0.370', '[BOLD] 0.793'], ['SMTM - sparsity', '0.437', '0.864', '0.346', '0.788'], ['SMTM - category promotion', '0.448', '0.866', '0.334', '0.786'], ['SMTM - word promotion', '0.450', '0.861', '0.362', '0.789'], ['SMTM + word embedding', '0.451', '0.845', '0.364', '0.783']]
We observe that SMTM significantly outperforms all other dataless methods in terms of both Macro-F1 and Macro-AUC on both datasets. Among the dataless baselines in this comparison, ESA delivers the best Macro-F1 scores on the two datasets, albeit at the cost of an expensive external knowledge base. We also find that our approach is much better than DescLDA. Note that DescLDA is also built upon probabilistic topic models but is designed for single-label classification. This suggests that our approach successfully discovers the underlying topical structure of multi-labeled documents, leading to better classification results. As the table shows, fully supervised classifiers still achieve the best overall scores; thus, supervised classifiers should be preferred when training data is large in volume and of high quality. Here, we are interested in conducting a deeper comparison of SMTM and supervised classifiers. Specifically, we discuss a few scenarios in which our dataless classifier is a more desirable choice than supervised classifiers.
A Probabilistic Formulation of Unsupervised Text Style Transfer
2002.03912
Table 4: Comparison of gradient approximation on the sentiment transfer task.
['[BOLD] Method', '[BOLD] train ELBO↑', '[BOLD] test ELBO↑', '[BOLD] Acc.', '[BOLD] BLEU [ITALIC] r', '[BOLD] BLEU [ITALIC] s', '[BOLD] PPLD1', '[BOLD] PPLD2']
[['Sample-based', '-3.51', '-3.79', '87.90', '13.34', '33.19', '24.55', '25.67'], ['Greedy', '-2.05', '-2.07', '87.90', '18.67', '48.38', '27.75', '35.61']]
Greedy vs. Sample-based Gradient Approximation. In our experiments, we use greedy decoding from the inference network to approximate the expectation required by the ELBO, which yields a biased estimator. The main purpose of this approach is to reduce the variance of the gradient estimator during training, especially in the early stages when the variance of sample-based approaches is quite high. As an ablation on the sentiment transfer task, we compare greedy and sample-based gradient approximations in terms of both train and test ELBO, as well as the task performance corresponding to the best test ELBO. After the model is fully trained, we find that the sample-based approximation has low variance: with a single sample, the standard deviation of the ELBO is less than 0.3 across 10 different test repetitions. All final reported ELBO values are computed with this sample-based approach, regardless of whether the greedy approximation was used during training. The reported ELBO values are the evidence lower bound per word.
A Probabilistic Formulation of Unsupervised Text Style Transfer
2002.03912
Table 5: Comparison of gradient propagation method on the sentiment transfer task.
['[BOLD] Method', '[BOLD] train ELBO↑', '[BOLD] test ELBO↑', '[BOLD] Acc.', '[BOLD] BLEU [ITALIC] r', '[BOLD] BLEU [ITALIC] s', '[BOLD] PPLD1', '[BOLD] PPLD2']
[['Gumbel Softmax', '-2.96', '-2.98', '81.30', '16.17', '40.47', '22.70', '23.88'], ['REINFORCE', '-6.07', '-6.48', '95.10', '4.08', '9.74', '6.31', '4.08'], ['Stop Gradient', '-2.05', '-2.07', '87.90', '18.67', '48.38', '27.75', '35.61']]
Despite being much simpler, the stop-gradient trick produces a superior ELBO compared to Gumbel-Softmax and REINFORCE. This result suggests that stopping gradients helps better optimize the likelihood objective under our probabilistic formulation than other optimization techniques that do propagate gradients, which is counter-intuitive. A likely explanation is that, as a gradient estimator, stop-gradient is clearly biased but has substantially reduced variance. Compared with techniques that offer reduced bias but extremely high variance when applied to our model class (which involves discrete sequences as latent variables), stop-gradient leads to better optimization of our objective because it achieves a better overall balance of bias and variance.
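A schematic PyTorch sketch of the gradient-propagation choices being compared (a hypothetical illustration under assumed tensor shapes, not the paper's implementation):

```python
import torch
import torch.nn.functional as F

def latent_embedding(logits: torch.Tensor, embed: torch.nn.Embedding, method: str):
    """logits: (batch, seq_len, vocab) scores over latent tokens from the
    inference network. Returns embeddings to feed into the decoder."""
    if method == "gumbel":
        # Straight-through Gumbel-Softmax: a (hard) one-hot times the embedding
        # matrix, so gradients flow back into the inference network.
        one_hot = F.gumbel_softmax(logits, tau=1.0, hard=True)
        return one_hot @ embed.weight
    if method == "stop_gradient":
        # Stop-gradient: embed the argmax tokens and detach, so no gradient is
        # propagated through the discrete latent sequence.
        tokens = logits.argmax(dim=-1)
        return embed(tokens).detach()
    raise ValueError(method)

# REINFORCE would instead sample tokens and add a log-prob * reward term to the
# loss rather than differentiating through the embeddings.
```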
Deep Text Mining of Instagram Data Without Strong Supervision
1909.10812
TABLE IV: The average performance from three training runs.
['[ITALIC] Model', '[ITALIC] Accuracy', '[ITALIC] Precision', '[ITALIC] Recall', '[ITALIC] F1']
[['CNN-DataProgramming', '0.797±0.01', '0.566±0.05', '0.678±0.04', '0.616±0.02'], ['CNN-MajorityVote', '0.739±0.02', '0.470±0.06', '0.686±0.05', '0.555±0.03'], ['SemCluster', '0.719', '0.541', '0.453', '0.493'], ['DomainExpert', '0.807', '0.704', '0.529', '0.604']]
The Data Programming Paradigm Versus Majority Vote. The data programming approach achieves the best F1 result, on par with the human benchmark, beating both SemCluster and CNN-MajorityVote. The human benchmark had higher precision but lower recall than the CNN models.
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
1502.05698
Table 3: Test accuracy (%) on our 20 Tasks for various methods (1000 training examples each). Our proposed extensions to MemNNs are in columns 5-9: with adaptive memory (AM), N-grams (NG), nonlinear matching function (NL), and combinations thereof. Bold numbers indicate tasks where our extensions achieve ≥95% accuracy but the original MemNN model of Weston et al. (2014) did not. The last two columns (10-11) give extra analysis of the MemNN AM + NG + NL method. Column 10 gives the amount of training data for each task needed to obtain ≥95% accuracy, or FAIL if this is not achievable with 1000 training examples. The final column gives the accuracy when training on all data at once, rather than separately.
['[EMPTY]', 'Weakly Supervised', 'Weakly Supervised', 'Uses External Resources', 'Strong Supervision (using supporting facts)', 'Strong Supervision (using supporting facts)', 'Strong Supervision (using supporting facts)', 'Strong Supervision (using supporting facts)', 'Strong Supervision (using supporting facts)', 'Strong Supervision (using supporting facts)', 'Strong Supervision (using supporting facts)']
[['TASK', 'N-gram Classifier', 'LSTM', 'Structured SVM (COREF+SRL features)', 'MemNN (Weston et al., 2014)', 'MemNN ADAPTIVE MEMORY', 'MemNN AM + N-GRAMS', 'MemNN AM + NONLINEAR', 'MemNN AM + NG + NL', '[BOLD] No. of ex. req. ≥ 95', 'MultiTask Training'], ['1 - Single Supporting Fact', '36', '50', '99', '100', '100', '100', '100', '100', '250 ex.', '100'], ['2 - Two Supporting Facts', '2', '20', '74', '100', '100', '100', '100', '100', '500 ex.', '100'], ['3 - Three Supporting Facts', '7', '20', '17', '20', '[BOLD] 100', '[BOLD] 99', '[BOLD] 100', '[BOLD] 100', '500 ex.', '[BOLD] 98'], ['4 - Two Arg. Relations', '50', '61', '98', '71', '69', '[BOLD] 100', '73', '[BOLD] 100', '500 ex.', '80'], ['5 - Three Arg. Relations', '20', '70', '83', '83', '83', '86', '86', '[BOLD] 98', '1000 ex.', '[BOLD] 99'], ['6 - Yes/No Questions', '49', '48', '99', '47', '52', '53', '[BOLD] 100', '[BOLD] 100', '500 ex.', '[BOLD] 100'], ['7 - Counting', '52', '49', '69', '68', '78', '86', '83', '85', 'FAIL', '86'], ['8 - Lists/Sets', '40', '45', '70', '77', '90', '88', '94', '91', 'FAIL', '93'], ['9 - Simple Negation', '62', '64', '100', '65', '71', '63', '[BOLD] 100', '[BOLD] 100', '500 ex.', '[BOLD] 100'], ['10 - Indefinite Knowledge', '45', '44', '99', '59', '57', '54', '[BOLD] 97', '[BOLD] 98', '1000 ex.', '[BOLD] 98'], ['11 - Basic Coreference', '29', '72', '100', '100', '100', '100', '100', '100', '250 ex.', '100'], ['12 - Conjunction', '9', '74', '96', '100', '100', '100', '100', '100', '250 ex.', '100'], ['13 - Compound Coref.', '26', '94', '99', '100', '100', '100', '100', '100', '250 ex.', '100'], ['14 - Time Reasoning', '19', '27', '99', '99', '100', '99', '100', '99', '500 ex.', '99'], ['15 - Basic Deduction', '20', '21', '96', '74', '73', '[BOLD] 100', '77', '[BOLD] 100', '100 ex.', '[BOLD] 100'], ['16 - Basic Induction', '43', '23', '24', '27', '[BOLD] 100', '[BOLD] 100', '[BOLD] 100', '[BOLD] 100', '100 ex.', '94'], ['17 - Positional Reasoning', '46', '51', '61', '54', '46', '49', '57', '65', 'FAIL', '72'], ['18 - Size Reasoning', '52', '52', '62', '57', '50', '74', '54', '[BOLD] 95', '1000 ex.', '93'], ['19 - Path Finding', '0', '8', '49', '0', '9', '3', '15', '36', 'FAIL', '19'], ['20 - Agent’s Motivations', '76', '91', '95', '100', '100', '100', '100', '100', '250 ex.', '100'], ['Mean Performance', '34', '49', '79', '75', '79', '83', '87', '93', '100', '92']]
Learning rates and other hyperparameters for all methods are chosen using the training set. We give results for each of the 20 tasks separately, as well as the mean performance and the number of failed tasks in the final two rows. The adaptive-memory approach gives a straightforward improvement on tasks 3 and 16 because they both require more than two supporting facts, and it also gives (small) improvements on tasks 8 and 19 because they require multi-word outputs (though these tasks remain difficult). We hence use the AM model in combination with all our other extensions in the subsequent experiments.
Conditional Self-Attention for Query-based Summarization
2002.07338
Table 1: Query-based summarization on Debatepedia (abstractive) and HotpotQA (extractive). Two CSA models are evaluated: (Mul) and (Add) refer to multiplicative and additive cross-attention used in CSA.
['Model', 'Debatepedia\xa0(Nema et al., 2017 )', 'Debatepedia\xa0(Nema et al., 2017 )', 'Debatepedia\xa0(Nema et al., 2017 )', 'HotpotQA\xa0(Yang et al., 2018 )', 'HotpotQA\xa0(Yang et al., 2018 )', 'HotpotQA\xa0(Yang et al., 2018 )']
[['Model', 'Rouge-1', 'Rouge-2', 'Rouge-L', 'Rouge-1', 'Rouge-2', 'Rouge-L'], ['Transformer\xa0Vaswani et al. ( 2017 )', '28.16', '17.48', '27.28', '35.45', '28.17', '30.31'], ['UT\xa0Dehghani et al. ( 2019 )', '36.21', '26.75', '35.53', '41.58', '32.28', '34.88'], ['SD2\xa0Nema et al. ( 2017 )', '41.26', '18.75', '40.43', '–', '–', '–'], ['CONCAT', '41.72', '33.62', '41.25', '28.23', '24.50', '24.76'], ['ADD', '41.10', '33.35', '40.72', '32.84', '28.01', '28.53'], ['CSA Transformer (Mul)', '41.70', '32.92', '41.29', '[BOLD] 59.57', '[BOLD] 49.89', '[BOLD] 48.34'], ['CSA Transformer (Add)', '[BOLD] 46.44', '[BOLD] 37.38', '[BOLD] 45.85', '47.00', '37.78', '39.52']]
Note that our models have much higher Rouge-2 scores than the baselines, which suggests that the summaries generated by CSA are more coherent. The learned attention scores emphasize not only lexical units such as ”coal-electricity” but also conjunctive adverbs such as ”therefore.” More example summaries can be found in the Appendix. Unlike on Debatepedia, the CSA module with multiplicative cross-attention performs better than the additive one on HotpotQA. This is because in each dataset we adopt the same hyper-parameters for both variants – in Debatepedia the hyper-parameters favor additive cross-attention, while in HotpotQA they favor the multiplicative one. The experimental results show that both variants can perform significantly better than the baselines. They also suggest that we cannot simply adopt the hyper-parameters from another task without fine-tuning – although the model can still be more compelling than the baselines if we do so.
From Characters to Words to in Between: Do We Capture Morphology?
1704.08352
Table 10: Average perplexities of words that occur after reduplicated words in the test set.
['Model', 'all', 'frequent', 'rare']
[['word', '101.71', '91.71', '156.98'], ['characters', '[BOLD] 99.21', '[BOLD] 91.35', '[BOLD] 137.42'], ['BPE', '117.2', '108.86', '156.81']]
In contrast with the overall results, the BPE bi-LSTM model has the worst perplexities while the character bi-LSTM has the best, suggesting that character-level models are more effective than BPE at modeling reduplication.
From Characters to Words to in Between: Do We Capture Morphology?
1704.08352
Table 7: Perplexity results on the Czech development data, varying the training data size. The perplexity using ~1M tokens of annotated data is 28.83.
['#tokens', 'word', 'char trigram', 'char']
[['#tokens', 'word', 'bi-LSTM', 'CNN'], ['1M', '39.69', '32.34', '35.15'], ['2M', '37.59', '36.44', '35.58'], ['3M', '36.71', '35.60', '35.75'], ['4M', '35.89', '32.68', '35.93'], ['5M', '35.20', '34.80', '37.02'], ['10M', '35.60', '35.82', '39.09']]
However, we can obtain much more unannotated data than annotated data, and we might guess that the character-level models would outperform those based on morphological analyses if trained on larger data. To test this, we ran experiments that varied the training data size for three representation models: word, character-trigram bi-LSTM, and character CNN. Since we want to see how much training data is needed to reach the perplexity obtained using annotated data, we use the same output vocabulary, derived from the original training set. While this makes it possible to compare perplexities across models, it is unfavorable to the models trained on larger data, which may focus on other words. This is a limitation of our experimental setup, but it still allows us to draw some tentative conclusions.
Czech Text Document Corpus v 2.0
1710.02365
Table 1: Corpus statistical information
['Unit name Document', 'Number 11,955', 'Unit name Word', 'Number 3,505,965']
[['Category', '60', 'Unique word', '150,899'], ['Cat. classif.', '37', 'Unique lemma', '82,986'], ['Noun', '894,951', 'Punct', '553,099'], ['Adjective', '369,172', 'Adposition', '340,785'], ['Verb', '287,253', 'Numeral', '265,430'], ['Pronoun', '258,988', 'Adverb', '144,791'], ['Coord. conj.', '100,611', 'Determiner', '84,681'], ['Pronoun', '74,340', 'Aux. verb', '70,810'], ['Subord. conj.', '41,487', 'Particle', '12,383'], ['Symbol', '2420', 'Interjection', '142'], ['Other', '4126', '[EMPTY]', '[EMPTY]']]
It shows, for instance, that lemmatization decreases the vocabulary size from 150,899 to 82,986, a reduction of 45%. Another interesting observation is the distribution of the POS tags in this corpus.
Windowing Models for Abstractive Summarization of Long Texts
2004.03324
Table 1: Results on the CNN/Dailymail test set: summaries of Ty=125 tokens; Stan trained with fixed-size input of Tx=400 tokens; SWM (d=1.2, k=0.8) & DWM trained on Tx=1160 tokens, with windows of Tw=400 tokens (stride ss=380).
['Model', 'R-1', 'R-2', 'R-L']
[['Lead-3', '39.89', '17.22', '36.08'], ['Stan', '37.85', '16.48', '34.95'], ['SWM', '37.11', '16.01', '34.37'], ['DWM', '36.02', '15.67', '33.28']]
Unsurprisingly, the simple Lead-3 baseline outperforms Stan and both our static and dynamic windowing models. This is because in CNN/Dailymail documents almost all of the summary-relevant content is found at the very beginning of the document. The ability to process all windows does not benefit SWM and DWM in this setting, as there is virtually no summary-relevant content in the later windows.
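A minimal sketch of the static windowing described in the caption (hypothetical, not the paper's code): a long input of Tx tokens is split into overlapping windows of Tw=400 tokens with stride ss=380, so consecutive windows share 20 tokens (Tx=1160 yields exactly three windows).

```python
def windows(tokens, window_size=400, stride=380):
    """Split a token sequence into overlapping fixed-size windows."""
    if len(tokens) <= window_size:
        return [tokens]
    out = [tokens[s:s + window_size]
           for s in range(0, len(tokens) - window_size + 1, stride)]
    if (len(tokens) - window_size) % stride:   # cover any trailing tokens
        out.append(tokens[-window_size:])
    return out
```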