paper | paper_id | table_caption | table_column_names | table_content_values | text |
---|---|---|---|---|---|
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis | 1911.12569 | TABLE V: Confusion matrix for sentiment analysis | ['Actual', 'Predicted NO', 'Predicted YES'] | [['NO', '665', '277'], ['YES', '235', '656']] | We perform quantitative error analysis for both sentiment and emotion for the M2 model. This may be due to the reason that this particular class is the most underrepresented in the training set. These three emotions have the least share of training instances, making the system less confident towards these emotions. |
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis | 1911.12569 | TABLE V: Confusion matrix for sentiment analysis | ['Actual', 'Predicted NO', 'Predicted YES'] | [['NO', '911', '160'], ['YES', '445', '317']] | We perform quantitative error analysis for both sentiment and emotion for the M2 model. This may be due to the reason that this particular class is the most underrepresented in the training set. These three emotions have the least share of training instances, making the system less confident towards these emotions. |
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis | 1911.12569 | TABLE V: Confusion matrix for sentiment analysis | ['Actual', 'Predicted NO', 'Predicted YES'] | [['NO', '886', '236'], ['YES', '291', '420']] | We perform quantitative error analysis for both sentiment and emotion for the M2 model. This may be due to the reason that this particular class is the most underrepresented in the training set. These three emotions have the least share of training instances, making the system less confident towards these emotions. |
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis | 1911.12569 | TABLE V: Confusion matrix for sentiment analysis | ['Actual', 'Predicted NO', 'Predicted YES'] | [['NO', '413', '405'], ['YES', '191', '824']] | We perform quantitative error analysis for both sentiment and emotion for the M2 model. This may be due to the reason that this particular class is the most underrepresented in the training set. These three emotions have the least share of training instances, making the system less confident towards these emotions. |
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis | 1911.12569 | TABLE V: Confusion matrix for sentiment analysis | ['Actual', 'Predicted NO', 'Predicted YES'] | [['NO', '1312', '30'], ['YES', '426', '65']] | We perform quantitative error analysis for both sentiment and emotion for the M2 model. This may be due to the reason that this particular class is the most underrepresented in the training set. These three emotions have the least share of training instances, making the system less confident towards these emotions. |
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis | 1911.12569 | TABLE V: Confusion matrix for sentiment analysis | ['Actual', 'Predicted NO', 'Predicted YES'] | [['NO', '1032', '150'], ['YES', '335', '316']] | We perform quantitative error analysis for both sentiment and emotion for the M2 model. This may be due to the reason that this particular class is the most underrepresented in the training set. These three emotions have the least share of training instances, making the system less confident towards these emotions. |
Learning Conceptual-Contextual Embeddings for Medical Text | 1908.06203 | Table 4: Readmission prediction performance | ['[BOLD] Model', '[BOLD] Acc', '[BOLD] Pre-0', '[BOLD] Pre-1', '[BOLD] Re-0', '[BOLD] Re-1', '[BOLD] A.R.', '[BOLD] A.P.'] | [['', '0.698', '0.916', '0.367', '0.687', '0.742', '0.791', '0.513'], ['LSTM', '0.840', '0.956', '0.366', '0.859', '0.704', '0.794', '0.600'], ['CC-LSTM', '0.848', '0.978', '0.321', '0.854', '0.786', '[BOLD] 0.804', '0.613']] | Lin et al. uses chart events, demographic information and diagnosis as input to a LSTM+CNN model. We only use written note text and none of the numerical and time-series information. The primary metric area under ROC clearly shows that the CC model produces a performance boost over the baseline and surpassed state-of-the-art results. |
Learning Conceptual-Contextual Embeddings for Medical Text | 1908.06203 | Table 3: Performance on language-inferable and non-language-inferable knowledge | ['[EMPTY]', '[BOLD] # of examples', '[BOLD] Hits@10 (%)'] | [['LI', '76', '77.6'], ['Non-LI', '24', '20.8'], ['Total', '100', '64.0']] | Break-down analysis To measure the effect of semantic generalizability on model performance, and to understand the performance gap between the CC model and TransE , we first sampled 100 examples from the testing set, and labeled them to two categories: Language-inferable (LI) and Non-language-inferable (Non-LI), following our previous definition. First we observe that 3/4 of the triplets contain knowledge that can be inferred from text. This shows that in medical knowledge graphs, a majority of structured knowledge can potentially be carried by text representations. Making use of concept names can be difference-making in medical knowledge embeddings. In the case of the CC model, on LI type examples it gets to 77 percent hit at top10, while for Non-LI type the performance is much lower and is on par with the TransE model. |
Learning Conceptual-Contextual Embeddings for Medical Text | 1908.06203 | Table 6: Post-discharge mortality prediction performance | ['[BOLD] Model', '[BOLD] 30-day', '[BOLD] 1-year'] | [['[BOLD] Model', '[BOLD] A.R.', '[BOLD] A.R.'], ['', '0.80', '0.77'], ['', '0.82', '0.81'], ['(retrospective)', '0.82', '0.81'], ['', '0.858', '0.853'], ['LSTM', '0.823', '0.820'], ['CC-LSTM', '0.839', '0.837']] | In this task we predict post ICU discharge mortality. Note that like in the previous task, results from other works are listed mainly for reference rather than direct comparison, for these models use different information from EHR as input. Although not matching with state-of-the-art, the performance gain of the CC model over an LSTM model is consistent. |
The Wisdom of MaSSeS:Majority, Subjectivity, and Semantic Similarity in the Evaluation of VQA | 1809.04344 | Table 2: Examples from the validation splits of VQA 1.0 (top) and VizWiz (bottom). For each example, we report the pattern of answers provided by annotators (unique answer: frequency), the prediction of the model, and the scores (note that acm, SeS, MaSSeS are computed using threshold 0.9). Answers that are grouped together by SeS are included in square brackets. | ['[BOLD] dataset', '[BOLD] n.', '[BOLD] answers', '[BOLD] prediction', '[BOLD] VQA3+', '[BOLD] acm', '[BOLD] Ma', '[BOLD] S', '[BOLD] SeS', '[BOLD] MaSSeS'] | [['[ITALIC] VQA 1.0', '1', '[yellow: 5, orange: 4, light orange: 1]', '[ITALIC] yellow', '1.0', '0.53', '1.0', '0.44', '1.0', '1.0'], ['[ITALIC] VQA 1.0', '2', '[refrigerator: 6, fridge: 4]', '[ITALIC] refrigerator', '1.0', '0.98', '1.0', '0.55', '1.0', '1.0'], ['[ITALIC] VQA 1.0', '3', '[tennis rackets: 4, tennis racket: 2, tennis racquet: 1], racket: 2, racquets: 1', '[ITALIC] tennis rackets', '1.0', '0.98', '1.0', '0.33', '0.67', '0.67'], ['[ITALIC] VQA 1.0', '4', '[hot dogs: 5, hot dog: 2, hot dogs and fries: 1, hot dog fries: 1, hot dog and onion rings: 1]', '[ITALIC] hot dog', '0.60', '0.70', '0.4', '0.44', '1.0', '1.0'], ['[ITALIC] VizWiz', '1', '[christmas tree: 6, tree: 1, chritmas tree shaped santaclauses: 1, christmas tree santas: 1], santas: 1', '[ITALIC] christmas tree', '1.0', '0.70', '1.0', '0.55', '0.89', '0.89'], ['[ITALIC] VizWiz', '2', 'white: 6, [green: 2, light green: 1, very light green: 1]', '[ITALIC] white', '1.0', '0.62', '1.0', '0.55', '0.55', '0.55'], ['[ITALIC] VizWiz', '3', '[ginger peach: 5, ginger peach tea: 2, ginger peach herbal tea: 1], unanswerable: 2', '[ITALIC] unanswerable', '0.60', '0.20', '0.4', '0.44', '0.77', '0.19'], ['[ITALIC] VizWiz', '4', '[beef: 5, beef flavored broth: 2, beef flavored: 1, beef flavor: 1, this beef flavor: 1]', '[ITALIC] unanswerable', '0.0', '0.0', '0.0', '0.44', '1.0', '0.0']] | Starting from VQA 1.0, we notice that examples 1 and 2 are considered as 100% correct by both VQA3 + and MaSSeS. The former metric assigns this score because ‘yellow’ and ‘refrigerator’ have frequency equal to or greater than 4. As for MaSSeS, this score is produced because (a) the two answers have max frequency, and (b) the SeS score assigned to the response pattern is the highest (i.e. 1.0) due to their semantic consistency. That is, all the answers are grouped together since their cosine similarity with the centroid is equal or greater than 0.9. Notably, acm produces a similar score in example 2, but very different (i.e., much lower) in example 1, though the words involved are semantically very similar (very similar colors). Moving to example 3, we observe that MaSSeS assigns a lower score (0.67) compared to VQA3+ (1.0) since SeS makes a fine-grained distinction between generic ‘rackets’ and specific ones (i.e., for ‘tennis’). This proves the validity and precision our semantic similarity component, especially in comparison with acm, whose high score does not account for such distinction (0.98). As for example 4, the score output by MaSSeS (1.0) turns out to be higher than both VQA3+ (0.6) and acm (0.7) due to the extremely high semantic consistency of the answers. |
The Wisdom of MaSSeS:Majority, Subjectivity, and Semantic Similarity in the Evaluation of VQA | 1809.04344 | Table 1: Results of VQA3+, WUPS-acm, WUPS–mcm, MaSSeS and its components on four VQA datasets. | ['[BOLD] dataset', '[BOLD] metric [BOLD] VQA3+', '[BOLD] metric [BOLD] WUPS', '[BOLD] metric [BOLD] WUPS', '[BOLD] metric [BOLD] MaSSeS', '[BOLD] metric [BOLD] MaSSeS', '[BOLD] metric [BOLD] MaSSeS', '[BOLD] metric [BOLD] MaSSeS', '[BOLD] metric [BOLD] MaSSeS', '[BOLD] metric [BOLD] MaSSeS', '[BOLD] metric [BOLD] MaSSeS'] | [['[EMPTY]', '[EMPTY]', 'acm0.9', 'mcm0.9', 'Ma', 'S', 'SeS0.7', 'SeS0.9', 'MaS', 'MaSSeS0.7', 'MaSSeS0.9'], ['[ITALIC] VQA 1.0', '0.542', '0.479', '0.642', '0.523', '0.731', '0.922', '0.786', '0.425', '0.567', '0.458'], ['[ITALIC] VQA 2.0', '0.516', '0.441', '0.634', '0.495', '0.705', '0.907', '0.760', '0.384', '0.545', '0.418'], ['[ITALIC] VQA-abstract', '0.602', '0.532', '0.685', '0.582', '0.780', '0.944', '0.818', '0.482', '0.618', '0.507'], ['[ITALIC] VizWiz', '0.448', '0.163', '0.441', '0.444', '0.460', '0.705', '0.541', '0.207', '0.292', '0.227']] | Note that columns VQA3 + , WUPS-acm, WUPS-mcm, Ma, MaS, and MaSSeS are accuracies, while S and SeS are reliability scores. As can be noted, accuracies obtained with both versions of MaSSeS are generally lower compared to those of VQA3+, with the drop being particularly accentuated for VizWiz. As can be seen, the scores produced by our metric (blue) are ‘distributed’ across the x-axis (from 0 to 1), while those produced by VQA3+ (red) are grouped into 5 ‘classes’. Moreover, we observe that our metric is much more reluctant to output score 1. (recall that if an element is not max it is not considered as 100% correct by Ma). This drop is further accentuated by multiplying Ma by either S (to obtain MaS) or SeS (to obtain MaSSeS). Since the values of these components cannot exceed 1, the resulting score will be lowered according to the degree of subjectivity of the dataset. Bearing this in mind, it is worth focusing on the scores of S and SeS in each dataset. As can be noticed, S in VQA is relatively high, with most of the answers being grouped in the rightmost bars (0.8 or more). In contrast, we observe an almost normal distribution of S in VizWiz, with very few answers being scored with high values. When injecting semantic information into subjectivity (SeS0.9), however, the distribution changes. Indeed, we observe much less cases scored with extremely low values and much many cases with high values. In numbers, this is reflected in an overall increase of 8 points from S (0.46) to SeS (0.54). A similar pattern is also observed in VQA 1.0 (+5 points). It is worth mentioning that using a lowest similarity threshold (0.7) makes the increase between S and SeS even bigger. This, in turn, makes the MaSSeS score significantly higher and comparable to VQA3+ in the three VQA-based datasets (not for VizWiz). |
Utilizing BERT Intermediate Layers for Aspect Based Sentiment Analysis and Natural Language Inference | 2002.04815 | Table 2: Accuracy and macro-F1 (%) for aspect based sentiment analysis on three popular datasets. | ['[BOLD] Domain [BOLD] Methods', '[BOLD] Laptop [BOLD] Acc.', '[BOLD] Laptop [BOLD] F1', '[BOLD] Restaurant [BOLD] Acc.', '[BOLD] Restaurant [BOLD] F1', '[BOLD] Twitter [BOLD] Acc.', '[BOLD] Twitter [BOLD] F1'] | [['BERT \\textsc [ITALIC] BASE', '74.66', '68.64', '81.92', '71.97', '72.46', '71.04'], ['[BOLD] BERT-LSTM', '[BOLD] 75.31', '[BOLD] 69.37', '82.21', '72.52', '73.06', '71.61'], ['[BOLD] BERT-Attention', '75.16', '68.76', '[BOLD] 82.38', '[BOLD] 73.22', '[BOLD] 73.35', '[BOLD] 71.88'], ['BERT-PT Xu et al. ( 2019 )', '76.27', '70.66', '84.53', '75.33', '-', '-'], ['[BOLD] BERT-PT-LSTM', '77.08', '71.65', '[BOLD] 85.29', '[BOLD] 76.88', '-', '-'], ['[BOLD] BERT-PT-Attention', '[BOLD] 77.68', '[BOLD] 72.57', '84.92', '75.89', '-', '-']] | Since BERT outperforms previous non-BERT-based studies on ABSA task by a large margin, we are not going to compare our models with non-BERT-based models. |
Utilizing BERT Intermediate Layers for Aspect Based Sentiment Analysis and Natural Language Inference | 2002.04815 | Table 3: Classification accuracy (%) for natural language inference on SNLI dataset. Results with “*” are obtained from the official SNLI leaderboard (https://nlp.stanford.edu/projects/snli/). | ['[BOLD] Model', 'Dev', 'Test'] | [['GPT Radford et al. ( 2018 )', '-', '89.9∗'], ['Kim et al. ( 2018 )', '-', '90.1∗'], ['BERT \\textsc [ITALIC] BASE', '90.94', '90.66'], ['[BOLD] BERT-Attention', '91.12', '90.70'], ['[BOLD] BERT-LSTM', '[BOLD] 91.18', '[BOLD] 90.79'], ['MT-DNN Liu et al. ( 2019 )', '91.35', '[BOLD] 91.00'], ['[BOLD] MT-DNN-Attention', '91.41', '90.95'], ['[BOLD] MT-DNN-LSTM', '[BOLD] 91.50', '90.91']] | From the results, BERT-Attention and BERT-LSTM perform better than vanilla BERT\textscBASE. Furthermore, MT-DNN-Attention and MT-DNN-LSTM outperform vanilla MT-DNN on Dev set, and are slightly inferior to vanilla MT-DNN on Test set. As a whole, our pooling strategies generally improve the vanilla BERT-based model, which draws the same conclusion as on ABSA. |
Extended Recommendation Framework: Generating the Text of a User Review as a Personalized Summary | 1412.5448 | Table 3: Test performance (classification error) as polarity classifiers. LL stands for LibLinear (SVM), μi, γu.γi, fA, fT are the recommender systems as in table 2. LL + fA and LL + fT are two hybrid opinion classification models combining the SVM classifier and fA and fT recommender systems. | ['Subsets', 'LL', '[ITALIC] μi', '[ITALIC] γu. [ITALIC] γi', '[ITALIC] fA', '[ITALIC] fT', 'LL + [ITALIC] fA', 'LL + [ITALIC] fT'] | [['RB_U50_I200', '5.35', '5.12', '6.01', '5.57', '5.57', '[BOLD] 3.79', '[BOLD] 3.79'], ['RB_U500_I2k', '7.18', '10.67', '9.73', '8.55', '8.55', '[BOLD] 6.52', '6.92'], ['RB_U5k_I20k', '8.44', '11.80', '10.04', '9.17', '9.17', '[BOLD] 8.33', '[BOLD] 8.35'], ['A_U200_I120', '[BOLD] 10.00', '15.83', '22.50', '20.00', '20.83', '[BOLD] 10.00', '[BOLD] 10.00'], ['A_U2k_I1k', '7.89', '15.25', '12.85', '12.62', '12.62', '[BOLD] 7.54', '[BOLD] 7.54'], ['A_U20k_I12k', '[BOLD] 6.34', '13.99', '12.79', '12.38', '12.37', '[BOLD] 6.29', '[BOLD] 6.29'], ['A_U210k_I120k', '[BOLD] 6.25', '14.04', '14.40', '13.32', '13.31', '[BOLD] 6.22', '[BOLD] 6.22']] | Because they give very poor performance, the bias recommendation models (μ and μu) are not presented here. The item bias μi, second column, gives a baseline, which is improved by the matrix factorization γu.γi, third column. Our hybrid models fA, fourth column, and fT, fifth column, have lower classification errors than all the other recommender systems. The first column, LL is the linear support vector machine (SVM) baseline. It has been learnt on the training set texts, and the regularization hyperparameter has been selected using the validation set. Our implementation relies on liblinear (LL) The resulting hybrid approaches, denoted LL + fA and LL + fT, exploit both text based decision (SVM) and user profile (fA and fT). |
Guessing What’s Plausible But Remembering What’s True: Accurate Neural Reasoning for Question-Answering | 2004.03658 | Table 2: Precision of retrieval for the set-follow operation with various count-min sketches. | ['[EMPTY]', 'k=1', 'k=10', 'k=100', 'k=1000'] | [['no sketch', '99.8', '89.8', '46.1', '23.6'], ['width=200', '99.9', '97.5', '65.5', '23.8'], ['width=2000', '100.0', '100.0', '100.0', '99.98']] | The results show that sketches are essential for obtaining high precision. This is true especially for larger sets, but even for k=10, substantial gains in precision are obtained by the addition of sketches. |
Guessing What’s Plausible But Remembering What’s True: Accurate Neural Reasoning for Question-Answering | 2004.03658 | Table 1: Precision, recall and F1 for model jointly trained on all four reasoning tasks. | ['Set', 'R', 'k=1 34.7', 'k=10 94.7', 'k=100 98.9', 'k=1000 99.8'] | [['Set', 'P', '100.0', '100.0', '100.0', '100.0'], ['Set', 'F1', '49.6', '96.5', '99.2', '99.9'], ['Intersect', 'R', '71.0', '91.7', '97.7', '99.7'], ['Intersect', 'P', '99.8', '99.4', '99.2', '98.9'], ['Intersect', 'F1', '74.7', '92.7', '98.8', '99.3'], ['Union', 'R', '9.6', '39.6', '59.7', '79.0'], ['Union', 'P', '100.0', '99.7', '99.2', '98.8'], ['Union', 'F1', '16.1', '47.0', '65.8', '84.3'], ['Follow', 'R', '10.3', '52.1', '81.7', '93.0'], ['Follow', 'P', '100.0', '100.0', '100.0', '100.0'], ['Follow', 'F1', '16.4', '59.2', '84.7', '94.9']] | We measured the precision, recall and F1 of the top k values in each computed set (averaged across the test cases). The sketches ensure that sets are high precision (for this sketch size, precision is always 99% or better), but the experiments verify that training can make the sets coherent, so relatively small k can be used while still obtaining good recall. |
Guessing What’s Plausible But Remembering What’s True: Accurate Neural Reasoning for Question-Answering | 2004.03658 | Table 3: Recall of set-follow operation with joint training vs. single task. | ['[EMPTY]', 'k=1', 'k=10', 'k=100', 'k=1000'] | [['recall (single task)', '9.7', '49.2', '78.1', '91.8'], ['recall (multi task)', '10.3', '52.1', '81.7', '93.0'], ['max recall', '10.3', '53.1', '84.3', '94.1']] | We also report here the maximum recall obtainable by retrieving k results. Joint training improves performance at all values of k, and brings performance close to the theoretical optimum. The benefits of joint training are most likely due to the fact that the requirement of coherency for embeddings is shared by all operations. |
Guessing What’s Plausible But Remembering What’s True: Accurate Neural Reasoning for Question-Answering | 2004.03658 | Table 5: Hits@1 of WebQuestionsSP and MetaQA (2-hop and 3-hop) datasets. (We re-run GRAFT-Net and PullNet on WebQuestionsSP with oracle entities.) | ['[EMPTY]', 'MetaQA2', 'MetaQA3', 'WebQSP'] | [['KV-Mem', '82.7', '48.9', '46.7'], ['ReifKB', '81.1', '72.3', '52.7'], ['GRAFT-Net', '94.8', '77.7', '70.3'], ['PullNet', '[BOLD] 99.9', '91.4', '69.7'], ['EmQL', '98.6', '[BOLD] 99.1', '[BOLD] 75.5'], ['EmQL (no-sketch)', '70.3', '60.9', '53.2'], ['EmQL (no-constr)', '–', '–', '65.2']] | 5.2.3 Results We achieved a new state-of-the-art on the MetaQA 3-hop and WebQuestionSP datasets, beating the previous state-of-the-art by a large margin (7.7% and 5.8% hits@1 absolute improvement). The results on the MetaQA 2-hop dataset are comparable to the previous state-of-the-art. We note that unlike PullNet, EmQL does not require any intermediate supervision—it learns only from the final answers. |
Guessing What’s Plausible But Remembering What’s True: Accurate Neural Reasoning for Question-Answering | 2004.03658 | Table 10: Ablated study on WebQuestionsSP | ['[EMPTY]', 'WebQuestionsSP'] | [['EmQL', '[BOLD] 75.5'], ['EmQL (no-sketch)', '53.2'], ['EmQL (no-constr)', '65.2'], ['EmQL (approx. MIPS)', '73.4'], ['EmQL (no-bert)', '74.2']] | We did two more experiments on the WebQuestionsSP dataset. Instead, we randomly initialize KB entity and relation embeddings, and train the set operations. The performance of EmQL (no-bert) on the downstream QA task is 1.3% lower than our full model. |
Modeling Voting for System Combination in Machine Translation | 2007.06943 | Table 5: Generalization ability evaluation. We report the results of single MT systems and system combination methods (i.e. Hier and Ours) on the Chinese-English task. N denotes the number of single MT systems. Note that both system combination methods were trained on the outputs of three single MT systems (i.e., N=3) and tested on various number of single systems (i.e., N=2,3,4,5). “†” and “††”: significantly better than the best system among inputs (p<0.05 and p<0.01). “‡” and “‡‡”: significantly better than “Hier” (p<0.05 and p<0.01). | ['N 2', 'Single MT Systems 47.44', 'Single MT Systems 46.18', 'Single MT Systems -', 'Single MT Systems -', 'Single MT Systems -', 'Training No', 'Test Yes', 'Hier 47.35', 'Ours [BOLD] 47.76{}^{{\\dagger}{\\ddagger}}', '\\Delta +0.41'] | [['3', '47.44', '46.18', '45.18', '-', '-', 'Yes', 'Yes', '48.26', '[BOLD] 49.00{}^{{\\dagger}{\\dagger}{\\ddagger}{\\ddagger}}', '+0.74'], ['4', '47.44', '46.18', '45.18', '47.09', '-', 'No', 'Yes', '48.59', '[BOLD] 49.56{}^{{\\dagger}{\\dagger}{\\ddagger}{\\ddagger}}', '+0.97'], ['5', '47.44', '46.18', '45.18', '47.09', '45.97', 'No', 'Yes', '48.79', '[BOLD] 49.80{}^{{\\dagger}{\\dagger}{\\ddagger}{\\ddagger}}', '+1.01']] | We are interested in evaluating whether a system combination method performs well when the number of single MT systems during testing is different from that during training. We shared the parameters of each hypothesis encoder \mathrm{Encoder}^{\mathrm{hyp}}(\cdot) so that the system combination model has the ability to generalize to different number of single systems. |
Modeling Voting for System Combination in Machine Translation | 2007.06943 | Table 2: Results on the Chinese-English task. The evaluation metric is case-insensitive tokenized BLEU. “All” is the concatenation of all test sets. The translations of the top three single MT systems are the inputs of the bottom three system combination methods. “††”: significantly better than “Transformer-L2R” (p<0.01). “‡” and “‡‡”: significantly better than “Jane” (p<0.05 and p<0.01). “**”: significantly better than “Hier” (p<0.01). | ['Method', 'NIST02', 'NIST03', 'NIST04', 'NIST05', 'NIST08', 'All'] | [['Trans-R2L', '45.11', '44.67', '46.66', '46.08', '36.90', '44.26'], ['Mask-NAT\xa0[Ghazvininejad2019MaskPredictPD]', '46.69', '45.93', '47.27', '45.72', '36.14', '44.76'], ['Trans-L2R\xa0[Vaswani2017AttentionIA]', '47.25', '47.30', '47.97', '47.64', '37.49', '45.92'], ['Jane\xa0[Freitag2014JaneOS]', '47.75', '47.88', '48.90', '48.83', '38.66', '46.78'], ['Hier\xa0[Zhou2017NeuralSC]', '48.71', '48.31', '48.96', '48.74', '38.42', '46.85'], ['Ours', '[BOLD] 49.30{}^{{\\dagger}{\\dagger}{\\ddagger}{\\ddagger}**}', '[BOLD] 49.24{}^{{\\dagger}{\\dagger}{\\ddagger}{\\ddagger}**}', '[BOLD] 49.65{}^{{\\dagger}{\\dagger}{\\ddagger}{\\ddagger}**}', '[BOLD] 49.28{}^{{\\dagger}{\\dagger}{\\ddagger}**}', '[BOLD] 39.41{}^{{\\dagger}{\\dagger}{\\ddagger}{\\ddagger}**}', '[BOLD] 47.69{}^{{\\dagger}{\\dagger}{\\ddagger}{\\ddagger}**}']] | We find that our method outperforms the best single system Trans-L2R, the statistical combination method Jane, and the neural combination method Hier. All the differences are statistically significant (p<0.01). The superiority over Jane and Hier suggests that combining the merits of analyzing the dependencies between hypotheses and end-to-end training of neural networks helps to generate better translations. |
Modeling Voting for System Combination in Machine Translation | 2007.06943 | Table 3: Results on the English-German task. The evaluation metric is case-sensitive tokenized BLEU. The translations of the top three single MT systems are the inputs of the bottom three system combination methods. “††”: significantly better than “Transformer{}_{\mathrm{big}}-fb” (p<0.01). “‡‡”: significantly better than “Jane” (p<0.01). “**”: significantly better than “Hier” (p<0.01). | ['Method', 'newstest2014'] | [['Transformer{}_{\\mathrm{big}}\xa0[Vaswani2017AttentionIA]', '28.72'], ['DynamicConv\xa0[Wu2019PayLA]', '29.74'], ['Transformer{}_{\\mathrm{big}}-fb [Ott2018ScalingNM]', '29.76'], ['Jane\xa0[Freitag2014JaneOS]', '29.62'], ['Hier\xa0[Zhou2017NeuralSC]', '29.95'], ['Ours', '[BOLD] 30.52{}^{{\\dagger}{\\dagger}{\\ddagger}{\\ddagger}**}']] | Our approach also achieves significant improvements over the state-of-the-art results (p<0.01). The gaps are relatively smaller than those on Chinese-English because the English-German task uses single reference while Chinese-English uses four references. Considering that Transformer{}_{\mathrm{big}}-fb is nowadays acknowledged strongest single system result, Jane cannot improve the translation quality and Hier improves a little while our approach achieves significant improvements, indicating that voting mechanism and end-to-end training of neural networks is important especially when the translations of single systems already have high quality. |
Learning Sentiment Memories for Sentiment Modification without Parallel Data | 1808.07311 | Table 2: Results of human evaluation. | ['Model', 'Sentiment', 'Content', 'Fluency'] | [['CAE', '6.55', '4.46', '5.98'], ['MAE', '6.64', '4.43', '5.36'], ['[BOLD] SMAE', '6.57', '5.98', '6.69']] | We also involve human evaluation to measure the quality of generated text. Each item contains an input and three outputs generated by different systems. Then 200 items are distributed to 2 annotators with linguistic background. The annotators have no idea about which system the output is from. They are asked to score the output on three criteria on a scale from 1 to 10: the transformed sentiment degree, the content preservation degree, and the fluency. Our model has obvious advantage over the baseline systems in content preservation, and also performs well in other aspects. |
Learning Sentiment Memories for Sentiment Modification without Parallel Data | 1808.07311 | Table 1: Performance of the proposed method and state-of-the-art systems. | ['Model', 'ACC', 'BLEU'] | [['CEA', '71.96', '2.77'], ['MAE', '74.59', '5.45'], ['[BOLD] SMAE', '[BOLD] 76.64 (+2.05)', '[BOLD] 24.00 (+18.55)']] | Both baseline models have low BLEU score but high accuracy, which indicates that they may be trapped in a situation that they simply output a sentence with the target sentiment regardless of the content. The main reason is that these methods using adversarial learning attempt to implicitly separate the emotional information from the context information in a sentence vector. However, without parallel data, it is difficult to achieve such a goal. Our proposed SMAE model takes advantage of self-attention mechanism and explicitly removes the emotional words, leading to a significant improvement of content preservation and the state-of-the-art performance in terms of both metrics. |
Appeared in the proceedings of EMNLP–IJCNLP 2019 (Hong Kong, November).This clarified version was prepared in December 2019.It’s All in the Name: Mitigating Gender Bias with Name-Based Counterfactual Data Substitution | 1909.00871 | Table 2: Word similarity Results | ['Method', '[ITALIC] rs [BOLD] Gigaword', '[ITALIC] rs [BOLD] Wikipedia'] | [['none', '0.385', '0.368'], ['CDA', '0.381', '0.363'], ['gCDA', '0.381', '0.363'], ['nCDA', '0.380', '0.365'], ['gCDS', '0.382', '0.366'], ['nCDS', '0.380', '0.362'], ['WED40', '0.386', '0.371'], ['WED70', '0.395', '0.375'], ['nWED70', '0.384', '0.367']] | the SimLex-999 Spearman rank-order correlation coefficients rs (all are significant, p<0.01). Surprisingly, the WED40 and 70 methods outperform the unmitigated embedding, although the difference in result is small (0.386 and 0.395 vs. 0.385 on Gigaword, 0.371 and 0.375 vs. 0.368 on Wikipedia). nWED70, on the other hand, performs worse than the unmitigated embedding (0.384 vs. 0.385 on Gigaword, 0.367 vs. 0.368 on Wikipedia). CDA and CDS methods do not match the quality of the unmitigated space, but once again the difference is small. It should be noted that since SimLex-999 was produced by human raters, it will reflect the human biases these methods were designed to remove, so worse performance might result from successful bias mitigation. |
Appeared in the proceedings of EMNLP–IJCNLP 2019 (Hong Kong, November).This clarified version was prepared in December 2019.It’s All in the Name: Mitigating Gender Bias with Name-Based Counterfactual Data Substitution | 1909.00871 | Table 1: Direct bias results | ['Method', 'Art–Maths [ITALIC] d', 'Art–Maths [ITALIC] p', 'Arts–Sciences [ITALIC] d', 'Arts–Sciences [ITALIC] p', 'Career–Family [ITALIC] d', 'Career–Family [ITALIC] p'] | [['[EMPTY]', '[BOLD] Gigaword', '[BOLD] Gigaword', '[BOLD] Gigaword', '[BOLD] Gigaword', '[BOLD] Gigaword', '[BOLD] Gigaword'], ['none', '1.32', '<10−2', '1.50', '<10−3', '1.74', '<10−4'], ['CDA', '0.67', '.10', '1.05', '.02', '1.79', '<10−4'], ['gCDA', '1.16', '.01', '1.46', '<10−2', '1.77', '<10−4'], ['nCDA', '−0.49', '.83', '0.34', '.27', '1.45', '<10−3'], ['gCDS', '0.96', '.03', '1.31', '<10−2', '1.78', '<10−4'], ['nCDS', '−0.19', '.63', '0.48', '.19', '1.45', '<10−3'], ['WED40', '−0.73', '.92', '0.31', '.28', '1.24', '<10−2'], ['WED70', '−0.73', '.92', '0.30', '.29', '1.15', '<10−2'], ['nWED70', '0.30', '.47', '0.54', '.19', '0.59', '.15'], ['[EMPTY]', '[BOLD] Wikipedia', '[BOLD] Wikipedia', '[BOLD] Wikipedia', '[BOLD] Wikipedia', '[BOLD] Wikipedia', '[BOLD] Wikipedia'], ['none', '1.64', '<10−3', '1.51', '<10−3', '1.88', '<10−4'], ['CDA', '1.58', '<10−3', '1.66', '<10−4', '1.87', '<10−4'], ['gCDA', '1.52', '<10−3', '1.57', '<10−3', '1.84', '<10−4'], ['nCDA', '1.06', '.02', '1.54', '<10−4', '1.65', '<10−4'], ['gCDS', '1.45', '<10−3', '1.53', '<10−3', '1.87', '<10−4'], ['nCDS', '1.05', '.02', '1.37', '<10−3', '1.65', '<10−4'], ['WED40', '1.28', '<10−2', '1.36', '<10−2', '1.81', '<10−4'], ['WED70', '1.05', '.02', '1.24', '<10−2', '1.67', '<10−3'], ['nWED70', '−0.46', '.52', '−0.42', '.51', '0.85', '.05'], ['[ITALIC] Nosek et al.', '0.82', '<10−2', '1.47', '<10−24', '0.72', '<10−2']] | We also compute a two-tailed p-value to determine whether the difference between the various sets is significant. |
Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function | 1905.12801 | Table 3: Evaluation results for models trained on Daily Mail and their generated texts | ['Model', '[ITALIC] BN', '[ITALIC] BNc', '[ITALIC] GR', '[ITALIC] Ppl.', '[ITALIC] CB| [ITALIC] o', '[ITALIC] CB| [ITALIC] g', '[ITALIC] EBd'] | [['Dataset', '0.340', '0.213', '[EMPTY]', '-', '-', '-', '-'], ['Baseline', '0.531', '0.282', '1.415', '117.845', '1.447', '97.762', '0.528'], ['REG', '0.381', '0.329', '1.028', '[BOLD] 114.438', '1.861', '108.740', '0.373'], ['CDA', '0.208', '0.149', '1.037', '117.976', '0.703', '56.82', '0.268'], ['[ITALIC] λ0.01', '0.492', '0.245', '1.445', '118.585', '0.111', '9.306', '0.077'], ['[ITALIC] λ0.1', '0.459', '0.208', '1.463', '118.713', '0.013', '2.326', '0.018'], ['[ITALIC] λ0.5', '0.312', '0.173', '1.252', '120.344', '[BOLD] 0.000', '1.159', '0.006'], ['[ITALIC] λ0.8', '0.226', '0.151', '1.096', '119.792', '0.001', '1.448', '0.002'], ['[ITALIC] λ1', '0.218', '0.153', '1.049', '120.973', '[BOLD] 0.000', '0.999', '0.002'], ['[ITALIC] λ2', '0.221', '0.157', '1.020', '123.248', '[BOLD] 0.000', '0.471', '[BOLD] 0.000'], ['[ITALIC] [BOLD] λ [BOLD] 0.5 + CDA', '[BOLD] 0.205', '[BOLD] 0.145', '[BOLD] 1.012', '117.971', '[BOLD] 0.000', '[BOLD] 0.153', '[BOLD] 0.000']] | Initially, we measure the co-occurrence bias in the training data. After training the baseline model, we implement our loss function and tune for the λ hyperparameter. Additionally, we implement a combination of our loss function and CDA and tune for λ. Finally, bias evaluation is performed for all the trained models. Causal occupation bias is measured directly from the models using template datasets discussed above and co-occurrence bias is measured from the model-generated texts, which consist of 10,000 documents of 500 words each. It is interesting to observe that the baseline model amplifies the bias in the training data set as measured by BNand BNc. From measurements using the described bias metrics, our method effectively mitigates bias in language modelling without a significant increase in perplexity. At λ value of 1, it reduces BN by 58.95%, BNc by 45.74%, CB|o by 100%, CB|g by 98.52% and EBd by 98.98%. Compared to the results of CDA and REG, it achieves the best results in both occupation biases, CB|g and CB|o, and EBd. We notice that all methods result in GR around 1, indicating that there are near equal amounts of female and male words in the generated texts. In our experiments we note that with increasing λ, the bias steadily decreases and perplexity tends to slightly increase. This indicates that there is a trade-off between bias and perplexity. |
Bag-of-Words as Target for Neural Machine Translation | 1805.04871 | Table 2: Results of our model and the baselines (directly reported in the referred articles) on the Chinese-English translation. “-” means that the studies did not test the models on the corresponding datasets. | ['Model', 'MT-02', 'MT-03', 'MT-04', 'MT-05', 'MT-06', 'MT-08', 'All'] | [['Moses (Su et\xa0al., 2016 )', '33.19', '32.43', '34.14', '31.47', '30.81', '23.85', '31.04'], ['RNNSearch (Su et\xa0al., 2016 )', '34.68', '33.08', '35.32', '31.42', '31.61', '23.58', '31.76'], ['Lattice (Su et\xa0al., 2016 )', '35.94', '34.32', '36.50', '32.40', '32.77', '24.84', '32.95'], ['CPR (Zhang et\xa0al., 2017 )', '33.84', '31.18', '33.26', '30.67', '29.63', '22.38', '29.72'], ['POSTREG (Zhang et\xa0al., 2017 )', '34.37', '31.42', '34.18', '30.99', '29.90', '22.87', '30.20'], ['PKI (Zhang et\xa0al., 2017 )', '36.10', '33.64', '36.48', '33.08', '32.90', '24.63', '32.51'], ['Bi-Tree-LSTM (Chen et\xa0al., 2017 )', '36.57', '35.64', '36.63', '34.35', '30.57', '-', '-'], ['Mixed RNN (Li et\xa0al., 2017 )', '37.70', '34.90', '38.60', '35.50', '35.60', '-', '-'], ['Seq2Seq+Attn (our implementation)', '34.71', '33.15', '35.26', '32.36', '32.45', '23.96', '31.96'], ['[BOLD] +Bag-of-Words (this paper)', '[BOLD] 39.77', '[BOLD] 38.91', '[BOLD] 40.02', '[BOLD] 36.82', '[BOLD] 35.93', '[BOLD] 27.61', '[BOLD] 36.51']] | We compare our model with our implementation of Seq2Seq+Attention model. For fair comparison, the experimental setting of Seq2Seq+Attention is the same as BAT, so that we can regard it as our proposed model removing the bag-of-words target. The results show that our model achieves the BLEU score of 36.51 on the total test sets, which outperforms the Seq2Seq baseline by the BLEU of 4.55. In order to further evaluate the performance of our model, we compare our model with the recent NMT systems which have been evaluated on the same training set and the test sets as ours. Their results are directly reported in the referred articles. |
Improving Segmentation for Technical Support Problems | 2005.11055 | Table 3: Results for experiments between using Word2Vec and fastText embeddings. Also includes results of using attention on top of the model with Word2Vec. Since attention results were not promising, we did not repeat them with fastText. | ['[BOLD] Model', '[BOLD] P', '[BOLD] R', '[BOLD] F1'] | [['Word2Vec (w/o Attn)', '65.20', '58.59', '61.72'], ['+ weighted Attn.', '62.34', '57.0', '59.55'], ['+ un-weighted Attn.', '69.21', '56.15', '62.0'], ['fastText', '74.57', '75.51', '75.04']] | Both word2vec and fastText embeddings are trained on all posts in the Ask Ubuntu dataset. As we can see, fastText gives a marked improvement over using embeddings from word2vec. This is probably due to the nature of the vocabulary in our task. Since large portions of questions are spans of command output or error messages a lot of tokens appear very rarely. In fact, out of the 62,501 unique tokens in the dataset, 57% appear just once, and 78% appear 3 or fewer times. However, the characters in these tokens are probably very informative (for example “http” in a token would signal that the token is a URL). Therefore, fastText, which uses n-grams from a token to compute embeddings, would emit more meaningful representations. Given the long tickets in our dataset, and un-reasonably long lengths of spans for labels like command output or error messages, we explored the usefulness of attention in our model. We used the Scaled Dot-Product Attention as in Vaswani et al. We find that weighted attention actually hurts performance. This could be because of the large number of extra parameters introduced in the calculation of Key, Value, and Query matrices. While the un-weighted version gets around this by using the bi-directional GRU hidden states as all 3 matrices, it doesn’t improve results significantly either. |
Improving Segmentation for Technical Support Problems | 2005.11055 | Table 4: Results comparing the models using various pre-trained embeddings. The en data source is the downloaded pre-trained ELMo model. For simple concatenation, we present the results for the best model at each n combinations of data sources. For example, when concatenating any 2 datasources, the en + config combination gives the best performance. | ['[BOLD] Model', '[BOLD] P', '[BOLD] R', '[BOLD] F1'] | [['No Pretraining', '74.57', '75.51', '75.04'], ['Simple Concat - 1 (en)', '76.88', '74.49', '75.67'], ['Simple Concat - 2 (en + config)', '77.67', '76.12', '76.89'], ['Simple Concat - 3 (en + code + config)', '79.64', '77.72', '78.67'], ['Simple Concat - 4 (ALL)', '76.05', '76.65', '76.35'], ['DME', '77.42', '75.82', '76.61'], ['CDME', '78.30', '79.29', '[BOLD] 78.80']] | For the simple concatenation method, we present results for the best n-way combination of embeddings from different data sources, for each n (1, 2, 3, and 4). We find that combining embeddings from multiple language models trained on different data sources considerably outperforms using embeddings from a single pre-trained model (using both the naive concatenation and CDME). This is an artifact of the support problems containing large sections of non-natural language text. We also find that contextual weighting does better than a simple concatenation. |
Improving Segmentation for Technical Support Problems | 2005.11055 | Table 5: Retrieval results, comparing the performance of querying with the full question against segmented question (gold segments and predicted segments) | ['[BOLD] Method', '[BOLD] MRR'] | [['Full Question', '0.292'], ['Segmented Question - Gold', '0.300'], ['Segmented Question - Predicted', '[BOLD] 0.298']] | We show that weighing identified segments of the question with separate weights improves retrieval of the correct answer over a query with all tokens from the question. We also present results from the gold annotations of segments for these questions, as an upper-bound of the performance improvement we can hope to achieve. |
Dynamic Layer Aggregation for Neural Machine Translation with Routing-by-Agreement | 1902.05770 | Table 1: Translation performance on WMT14 English⇒German translation task. “# Para.” denotes the number of parameters, and “Train” and “Decode” respectively denote the training (steps/second) and decoding (sentences/second) speeds. | ['[BOLD] #', '[BOLD] Model', '[BOLD] # Para.', '[BOLD] Train', '[BOLD] Decode', '[BOLD] BLEU', '△'] | [['1', 'Transformer-Base', '88.0M', '1.79', '1.43', '27.31', '–'], ['2', '+ Linear Combination\xa0', '+14.7M', '1.57', '1.36', '27.73', '+0.42'], ['3', '+ Dynamic Combination', '+25.2M', '1.50', '1.30', '28.33', '+1.02'], ['4', '+ Dynamic Routing', '+37.8M', '1.37', '1.24', '28.22', '+0.91'], ['5', '+ EM Routing', '+56.8M', '1.10', '1.15', '28.81', '+1.50']] | As one would expect, the linear combination (Row 2) improves translation performance by +0.42 BLEU points, indicating the necessity of aggregating layers for deep NMT models. |
LMVE at SemEval-2020 Task 4: Commonsense Validation and Explanation using Pretraining Language Model | 2007.02540 | Table 2: Ablation study on model components. † means we use the model structure as Figure 1(a) and Baseline means the model in Sec 3.2. | ['[ITALIC] Explanation Model', 'Acc', 'Δ [ITALIC] b'] | [['Our-single', '95.48', '-'], ['w/o hint sentence', '93.47', '2.01'], ['w/o data augmentation', '95.39', '0.09'], ['w/o weighted sum fusion', '95.11', '0.37'], ['w/o subtask level transfer', '94.98', '0.50'], ['Baseline', '93.12', '2.36']] | We think the reason is a common sense statement with a similar grammar and syntax does help the model to determine why the input sentence is against common sense. |
LMVE at SemEval-2020 Task 4: Commonsense Validation and Explanation using Pretraining Language Model | 2007.02540 | Table 1: Performance with different encoder. | ['Model', 'Params', '[ITALIC] Sen-Making', '[ITALIC] Explanation'] | [['Random', '-', '49.52', '32.77'], ['BERT [ITALIC] base', '117M', '88.56', '85.32'], ['BERT [ITALIC] large', '340M', '86.55', '90.12'], ['XLNet', '340M', '90.33', '91.07'], ['SpanBERT', '340M', '89.46', '90.47'], ['RoBERTa', '355M', '93.56', '92.37'], ['ALBERT [ITALIC] base', '12M', '86.63', '84.37'], ['ALBERT [ITALIC] large', '18M', '88.01', '89.72'], ['ALBERT [ITALIC] xlarge', '60M', '92.03', '92.45'], ['Ours(ALBERT [ITALIC] xxlarge)', '235M', '95.68', '95.48'], ['Our-ensemble', '-', '95.91', '96.39']] | The result of our model for subtask a and We have tried different pretraining language model as our encoder, and found that ALBERT based model achieves the state-of-the-art performance. |
LMVE at SemEval-2020 Task 4: Commonsense Validation and Explanation using Pretraining Language Model | 2007.02540 | Table 2: Ablation study on model components. † means we use the model structure as Figure 1(a) and Baseline means the model in Sec 3.2. | ['[ITALIC] Sen-Making Model', 'Acc', 'Δ [ITALIC] a'] | [['Our-single', '95.68', '-'], ['w/o method b†', '94.88', '0.80'], ['w/o data augmentation', '95.43', '0.25'], ['w/o weighted sum fusion', '95.32', '0.36'], ['w/o subtask level transfer', '94.85', '0.83'], ['Baseline', '93.24', '2.44']] | We think the reason is a common sense statement with a similar grammar and syntax does help the model to determine why the input sentence is against common sense. |
Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory | 1704.01074 | Table 6: Manual evaluation of the generated responses in terms of Content (Cont.) and Emotion (Emot.) . | ['Method', 'Overall Cont.', 'Overall Emot.', 'Like Cont.', 'Like Emot.', 'Sad Cont.', 'Sad Emot.', 'Disgust Cont.', 'Disgust Emot.', 'Angry Cont.', 'Angry Emot.', 'Happy Cont.', 'Happy Emot.'] | [['Seq2Seq', '1.255', '0.152', '1.308', '0.337', '1.270', '0.077', '[BOLD] 1.285', '0.038', '[BOLD] 1.223', '0.052', '1.223', '0.257'], ['Emb', '1.256', '0.363', '1.348', '0.663', '1.337', '0.228', '1.272', '0.157', '1.035', '0.162', '1.418', '0.607'], ['ECM', '[BOLD] 1.299', '[BOLD] 0.424', '[BOLD] 1.460', '[BOLD] 0.697', '[BOLD] 1.352', '[BOLD] 0.313', '1.233', '[BOLD] 0.193', '0.98', '[BOLD] 0.217', '[BOLD] 1.428', '[BOLD] 0.700']] | ECM with all options outperforms the other methods in both metrics significantly (2-tailed t-test, p<0.05 for Content, and p<0.005 for Emotion). After incorporating the internal memory and the external memory modules, the performance of ECM in Emotion is improved comparing to Emb, indicating our model can generate more explicit expressions of emotion. Besides, the performance in Content is improved from 1.256 of Emb to 1.299 of ECM, which shows the ability of ECM to control the weight of emotion and generate responses appropriate in content. |
Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory | 1704.01074 | Table 4: Objective evaluation with perplexity and accuracy. | ['Method', 'Perplexity', 'Accuracy'] | [['Seq2Seq', '68.0', '0.179'], ['Emb', '62.5', '0.724'], ['ECM', '65.9', '[BOLD] 0.773'], ['w/o Emb', '66.1', '0.753'], ['w/o IMem', '66.7', '0.749'], ['w/o EMem', '[BOLD] 61.8', '0.731']] | As can be seen, ECM obtains the best performance in emotion accuracy, and the performance in perplexity is better than Seq2Seq but worse than Emb. In practice, emotion accuracy is more important than perplexity considering that the generated sentences are already fluent and grammatical with the perplexity of 68.0. |
Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory | 1704.01074 | Table 5: The percentage of responses in manual evaluation with the score of Content-Emotion. For instance, 2-1 means content score is 2 and emotion score is 1. | ['Method (%)', '2-1', '1-1', '0-1', '2-0', '1-0', '0-0'] | [['Seq2Seq', '9.0', '5.1', '1.1', '37.6', '28.0', '19.2'], ['Emb', '22.8', '9.3', '4.3', '27.1', '19.1', '17.4'], ['ECM', '[BOLD] 27.2', '[BOLD] 10.8', '4.4', '24.2', '15.5', '17.9']] | As we can see, 27.2% of the responses generated by ECM have a Content score of 2 and an Emotion score of 1, while only 22.8% for Emb and 9.0% for Seq2Seq. These indicate that ECM is better in generating high-quality responses in both content and emotion. |
Joint Neural Entity Disambiguation with Output Space Search | 1806.07495 | Table 4: Confusion Matrices for (a) using heuristic h1 with τ=25%n, (b) heuristic h1 with τ=50%n and (c) heuristic h2 with flexible τ. | ['[ITALIC] τ=25% [ITALIC] n', 'r ≤ 25%', 'r > 25%'] | [['correct', '1134', '2937'], ['incorrect', '276', '136']] | For heuristic h1, we use a fixed depth strategy. In particular, given a document with n mentions, we consider two different depth limits: τ=25%n and τ=50%n, which lead to an average depth of 5 and 10 per document respectively. For heuristic h2, the depth is flexible and determined by the number of mentions that are predicted to be incorrect by h2. This strategy leads to an average depth of 4. In these tables correct/incorrect mean whether the local prediction is correct or not (comparing to the ground truth). Therefore the first cell with value 1134 indicates that there are 1134 mentions in test-b that are correctly predicted by the local model, but deemed as among the top 25% least confident mentions (aka the hard queries) by h1. The confusion matrix for a good heuristic will have small diagonal values and large anti-diagonal values. These results show that heuristic h2 gives the best precision as well as recall of the real mistakes made by the local model. |
Joint Neural Entity Disambiguation with Output Space Search | 1806.07495 | Table 2: Evaluation on CoNLL 2003 Test-b and TAC-2010 | ['[BOLD] models', '[BOLD] In-KB acc%'] | [['[BOLD] local', '[BOLD] local'], ['(He et\xa0al., 2013 )', '85.6'], ['(Francis-Landau et\xa0al., 2016 )', '85.5'], ['(Sil and Florian, 2016 )', '86.2'], ['(Nevena\xa0Lazic and Pereira, 2015 )', '86.4'], ['(Yamada et\xa0al., 2016 )', '90.9'], ['(Sil et\xa0al., 2018 )', '94.0'], ['[BOLD] global', '[BOLD] global'], ['(Hoffart et\xa0al., 2011 )', '82.5'], ['(Pershina et\xa0al., 2015 )', '91.8'], ['(Globerson et\xa0al., 2016 )', '92.7'], ['(Yamada et\xa0al., 2016 )', '93.1'], ['[BOLD] our model', '[BOLD] our model'], ['local', '90.89'], ['global', '[BOLD] 94.44']] | To evaluate the model performance we use the standard micro-average accuracies of the top-ranked candidate entities. We use different alias mappings for TAC and CoNLL. Specifically, for TAC we only use anchor-title alias mappings constructed from hyper-links in the Wikipedia. (b) show our performance on the CoNLL and TAC datasets for our local and global models along with other competitive systems respectively. The results show that our global model outperforms all the competitors for both CoNLL 2003 and TAC 2010. It is interesting to note that our local model is solid, but is noticeably inferior to the stat-of-the-art local model. |
Joint Neural Entity Disambiguation with Output Space Search | 1806.07495 | Table 2: Evaluation on CoNLL 2003 Test-b and TAC-2010 | ['[BOLD] models', '[BOLD] In-KB acc%'] | [['[BOLD] local', '[BOLD] local'], ['(Sil and Florian, 2016 )', '78.6'], ['(He et\xa0al., 2013 )', '81.0'], ['(Sun et\xa0al., 2015 )', '83.9'], ['(Yamada et\xa0al., 2016 )', '84.6'], ['(Sil et\xa0al., 2018 )', '87.4'], ['[BOLD] global', '[BOLD] global'], ['(Yamada et\xa0al., 2016 )', '85.2'], ['(Globerson et\xa0al., 2016 )', '87.2'], ['[BOLD] our model', '[BOLD] our model'], ['local', '85.73'], ['global', '[BOLD] 87.9']] | To evaluate the model performance we use the standard micro-average accuracies of the top-ranked candidate entities. We use different alias mappings for TAC and CoNLL. Specifically, for TAC we only use anchor-title alias mappings constructed from hyper-links in the Wikipedia. (b) show our performance on the CoNLL and TAC datasets for our local and global models along with other competitive systems respectively. The results show that our global model outperforms all the competitors for both CoNLL 2003 and TAC 2010. It is interesting to note that our local model is solid, but is noticeably inferior to the stat-of-the-art local model. |
Joint Neural Entity Disambiguation with Output Space Search | 1806.07495 | Table 4: Confusion Matrices for (a) using heuristic h1 with τ=25%n, (b) heuristic h1 with τ=50%n and (c) heuristic h2 with flexible τ. | ['[ITALIC] τ=50% [ITALIC] n', 'r ≤ 50%', 'r > 50%'] | [['correct', '2071', '2000'], ['incorrect', '331', '81']] | For heuristic h1, we use a fixed depth strategy. In particular, given a document with n mentions, we consider two different depth limits: τ=25%n and τ=50%n, which lead to an average depth of 5 and 10 per document respectively. For heuristic h2, the depth is flexible and determined by the number of mentions that are predicted to be incorrect by h2. This strategy leads to an average depth of 4. In these tables correct/incorrect mean whether the local prediction is correct or not (comparing to the ground truth). Therefore the first cell with value 1134 indicates that there are 1134 mentions in test-b that are correctly predicted by the local model, but deemed as among the top 25% least confident mentions (aka the hard queries) by h1. The confusion matrix for a good heuristic will have small diagonal values and large anti-diagonal values. These results show that heuristic h2 gives the best precision as well as recall of the real mistakes made by the local model. |
Joint Neural Entity Disambiguation with Output Space Search | 1806.07495 | Table 4: Confusion Matrices for (a) using heuristic h1 with τ=25%n, (b) heuristic h1 with τ=50%n and (c) heuristic h2 with flexible τ. | ['[ITALIC] τ=flexible', 'label=0', 'label=1'] | [['correct', '805', '3266'], ['incorrect', '333', '79']] | For heuristic h1, we use a fixed depth strategy. In particular, given a document with n mentions, we consider two different depth limits: τ=25%n and τ=50%n, which lead to an average depth of 5 and 10 per document respectively. For heuristic h2, the depth is flexible and determined by the number of mentions that are predicted to be incorrect by h2. This strategy leads to an average depth of 4. In these tables correct/incorrect mean whether the local prediction is correct or not (comparing to the ground truth). Therefore the first cell with value 1134 indicates that there are 1134 mentions in test-b that are correctly predicted by the local model, but deemed as among the top 25% least confident mentions (aka the hard queries) by h1. The confusion matrix for a good heuristic will have small diagonal values and large anti-diagonal values. These results show that heuristic h2 gives the best precision as well as recall of the real mistakes made by the local model. |
Joint Neural Entity Disambiguation with Output Space Search | 1806.07495 | Table 5: CoNLL 2003 Test-b | ['[BOLD] Models', 'Heuristic', 'Depth ( [ITALIC] τ)', 'Beam Size (b)', 'In-KB acc %'] | [['Global + LDS', '[ITALIC] h1', '[ITALIC] τ=25% [ITALIC] n', '1', '93.88'], ['Global + LDS', '[ITALIC] h1', '[ITALIC] τ=25% [ITALIC] n', '5', '94.12'], ['Global + LDS', '[ITALIC] h1', '[ITALIC] τ=50% [ITALIC] n', '5', '94.23'], ['Global + LDS', '[ITALIC] h2', 'flexible', '5', '94.44']] | The results show that using a beam of size 5 improves upon single greedy search, and using heuristic h2 with flexible depth gives the best performance in terms of both prediction accuracy and efficiency (due to smaller search trees). When h1 is used, doubling the depth of the search tree brings about only a marginal improvement in accuracy at the cost of doubling the search depth and thus the prediction time. |
Integrating Local Context and Global Cohesivenessfor Open Information Extraction | 1804.09931 | Table 3. Performance comparison with state-of-the-art entity phrase extraction algorithms for the weakly-supervised entity phrase extraction task. | ['[BOLD] Methods', '[BOLD] NYT ( riedel2013relation, ) F1', '[BOLD] NYT ( riedel2013relation, ) Prec', '[BOLD] NYT ( riedel2013relation, ) Rec', '[BOLD] Wiki-KBP ( ling2012fine, ) F1', '[BOLD] Wiki-KBP ( ling2012fine, ) Prec', '[BOLD] Wiki-KBP ( ling2012fine, ) Rec'] | [['AutoPhrase ( shang2017automated, )', '0.531', '0.543', '0.519', '0.416', '0.529', '0.343'], ['Ma & Hovy ( ma2016end, )', '0.664', '0.704', '0.629', '0.324', '0.629', '0.218'], ['Liu. [ITALIC] et\xa0al. ( 2017arXiv170904109L, )', '[BOLD] 0.676', '[BOLD] 0.704', '0.650', '0.337', '0.629', '0.230'], ['ReMine', '0.648', '0.524', '[BOLD] 0.849', '[BOLD] 0.515', '[BOLD] 0.636', '[BOLD] 0.432']] | 1. Performance on Entity Phrase Extraction. The training data is generated through distant supervision described above without type information. Regarding open domain extractions, we train baseline models using the same distant supervision as ReMine, to push them towards a fair comparison. In the Wiki-KBP dataset, ReMine evidently outperforms all the other baselines. In the NYT dataset, ReMine has a rather high recall and is on par with the two neural network models on F1-score. |
Integrating Local Context and Global Cohesivenessfor Open Information Extraction | 1804.09931 | Table 4. Performance comparison with state-of-the-art Open IE systems on two datasets from different domains, using Precision@K, Mean Average Precision (MAP), Normalized Discounted Cumulative Gain (NDCG) and Mean Reciprocal Rank (MRR). | ['[BOLD] Methods', '[BOLD] NYT ( riedel2013relation, ) P@100', '[BOLD] NYT ( riedel2013relation, ) P@200', '[BOLD] NYT ( riedel2013relation, ) MAP', '[BOLD] NYT ( riedel2013relation, ) NDCG@100', '[BOLD] NYT ( riedel2013relation, ) NDCG@200', '[BOLD] NYT ( riedel2013relation, ) MRR', '[BOLD] Twitter ( zhang2016geoburst, ) P@100', '[BOLD] Twitter ( zhang2016geoburst, ) P@200', '[BOLD] Twitter ( zhang2016geoburst, ) MAP', '[BOLD] Twitter ( zhang2016geoburst, ) NDCG@100', '[BOLD] Twitter ( zhang2016geoburst, ) NDCG@200', '[BOLD] Twitter ( zhang2016geoburst, ) MRR'] | [['ClausIE', '0.580', '0.625', '0.623', '0.575', '0.667', '0.019', '0.300', '0.305', '0.308', '0.332', '0.545', '0.021'], ['Stanford', '0.680', '0.625', '0.665', '0.689', '0.654', '0.023', '0.390', '0.410', '0.415', '0.413', '0.557', '0.023'], ['OLLIE', '0.670', '0.640', '0.683', '0.684', '0.775', '[BOLD] 0.028', '0.580', '0.510', '0.525', '0.519', '0.626', '0.017'], ['MinIE', '0.680', '0.645', '0.687', '0.724', '0.723', '0.027', '0.350', '0.340', '0.361', '0.362', '0.541', '[BOLD] 0.025'], ['ReMine-G', '0.730', '0.695', '0.734', '0.751', '0.783', '0.027', '0.510', '0.580', '0.561', '0.522', '0.610', '0.021'], ['ReMine', '[BOLD] 0.780', '[BOLD] 0.720', '[BOLD] 0.760', '[BOLD] 0.787', '[BOLD] 0.791', '0.027', '[BOLD] 0.610', '[BOLD] 0.610', '[BOLD] 0.627', '[BOLD] 0.615', '[BOLD] 0.651', '0.022']] | 2. Performance on Relation Tuple Extraction. On NYT and twitter test set, we compare ReMine with its variants ReMine-L and ReMine-G as well as four baseline open IE systems mentioned above. All methods experience performance drop in Twitter, while ReMine declines less than any other methods on the rank-based measures. In the NYT dataset, all the systems except OLLIE have similar overall precision (i.e. P@300). But ReMine has a “higher” curve since most tuples obtained by Stanford OpenIE and ClausIE will be assigned score 1. Therefore we may not rank them in a very rational way. In contrast, the scores of different tuples obtained by ReMine-G and ReMine are usually distinct from each other. In the Twitter dataset, ReMine shows its power in dealing with short and noisy text. Both ClausIE and MinIE have a rather low score since there are lots of non-standard language usages and grammatical errors in tweets. In twitter, dependency parsing attaches more wrong arguments and labels than usual. All methods investigated depend on dependency parsing to varying degrees, while clause-based methods rely heavily on it and may not achieve a satisfying performance. Model-wise, we believe global cohesiveness helps open IE from two aspects: (1) ranking tuples (2) updating entity phrase pairs. In particular, ReMine-G differs from ReMine-L only on extraction scores, since global cohesiveness σ provides better ranking performance (P@300) over random (ReMine-L). The gain between ReMine and ReMine-G clearly shows the updated extractions have better quality in general. |
Lexical Normalization and POS-tagging for Code-switched Data | 2006.01175 | Table 6: Accuracies of normalization (Id-En, Tr-De) and POS tagging (Tr-De) on test data, comparing the baselines to the best two normalization models. | ['Dataset', 'Task', 'LAI', 'Multilingual', 'Language-aware', 'Gold'] | [['Id-En', 'norm.', '74.03', '94.27', '94.32', '100.00'], ['Tr-De', 'norm.', '67.02', '78.28', '77.83', '100.00'], ['Tr-De', 'pos', '59.60', '62.86', '62.72', '66.47']] | Interestingly, the multilingual model slightly outperforms the language-aware model on the Tr-De data. For the Id-En data, the final performances of the two normalization models are very close to each other. As with normalization, the POS tagger also performs better when using the multilingual model; in general, the LAI scores are lower and the performance gains are higher compared to the development data.
Lexical Normalization and POS-tagging for Code-switched Data | 2006.01175 | Table 3: Word level accuracies for language identification (10-fold). | ['[EMPTY]', 'Indonesian-English MarMoT', 'Indonesian-English Bilty', 'Indonesian-English MaChAmp', 'Turkish-German MarMoT', 'Turkish-German Bilty', 'Turkish-German MaChAmp'] | [['Lang1 (Id/Tr)', '96.79', '97.29', '97.53', '96.90', '96.91', '97.54'], ['Lang2 (En/De)', '92.11', '93.57', '94.81', '86.65', '90.88', '93.71'], ['Unspecified', '85.68', '87.75', '91.11', '88.86', '91.29', '92.75'], ['Total', '92.71', '93.86', '95.17', '92.76', '94.26', '95.57']] | Unsurprisingly, the performances are in line with the chronological order of the introduction of the systems, and their computational complexity. It should be noted that for MaChAmp we used pre-trained embeddings which were trained on the largest amount of external data. The hardest class is ‘Unspecified’; even though this class contains quite a few easy cases (punctuation), there are also many harder cases, where a word belongs to any language other than Lang1 and Lang2, or when the annotator is uncertain. From the single label results, we can see that Bilty is more balanced compared to MarMoT, and that MaChAmp consistently outperforms the other two models. Barik et al. (2019) use a conditional random fields classifier with a variety of features for this task, and report 90.11 accuracy for the Id-En dataset in a 5-fold cross-validation setting.
Lexical Normalization and POS-tagging for Code-switched Data | 2006.01175 | Table 5: Accuracies for POS tagging, using a variety of normalization strategies. | ['[EMPTY]', 'LAI', 'Multilingual', 'Language-aware', 'Gold'] | [['MarMoT–POS', '60.47', '63.96', '63.93', '67.52'], ['Bilty–POS', '63.77', '66.41', '66.68', '70.37'], ['MaChAmp–POS', '64.18', '66.71', '66.57', '69.63']] | Comparing the different models, we see that MarMoT underperforms on all settings. A manual inspection revealed that this is mainly due to misclassification of emoticons (as PUNCT), which do not occur in the training data. In this setup, performance was generally higher, and the gains when using normalization were smaller. |
Propagate-Selector: Detecting Supporting Sentences for Question Answering via Graph Neural Networks | 1908.09137 | Table 2: Model performance on the HotpotQA dataset (top scores marked in bold). Models [1-5] are from (Shen et al., 2017a; Tran et al., 2018; Wang and Jiang, 2016; Bian et al., 2017; Yoon et al., 2019), respectively. | ['[BOLD] Model', 'dev MAP', 'dev MRR', 'train MAP', 'train MRR'] | [['[BOLD] IWAN [1]', '0.526', '0.680', '0.605', '0.775'], ['[BOLD] sCARNN [2]', '0.534', '0.698', '0.620', '0.792'], ['[BOLD] CompAggr [3]', '0.659', '0.812', '0.796', '0.911'], ['[BOLD] CompAggr-kMax [4]', '0.670', '0.825', '0.767', '0.901'], ['[BOLD] CompClip-LM-LC [5]', '0.702', '0.848', '0.757', '0.884'], ['[BOLD] PS- [ITALIC] rnn-elmo-s', '0.716', '0.841', '0.813', '0.916'], ['[BOLD] PS- [ITALIC] rnn-elmo', '[BOLD] 0.734', '[BOLD] 0.853', '[BOLD] 0.863', '[BOLD] 0.945']] | Because the dataset only provides training (trainset) and validation (devset) subsets, we report the model performances on these datasets. While training the model, we implement early termination based on the devset performance and measure the best performance. In addition to the main proposed model, PS-rnn-elmo, we also report the model performance with a small version of ELMo, PS-rnn-elmo-s.
Propagate-Selector: Detecting Supporting Sentences for Question Answering via Graph Neural Networks | 1908.09137 | Table 5: Model performance with different typologies. The connection strategies between nodes for each type are illustrated in figure 4. | ['[BOLD] Model', 'dev MAP', 'dev MRR', 'train MAP', 'train MRR'] | [['[BOLD] PS- [ITALIC] rnn-elmo-s', '[BOLD] 0.716', '[BOLD] 0.841', '[BOLD] 0.813', '[BOLD] 0.916'], ['[BOLD] Type-1\xa0( [ITALIC] rnn-elmo-s)', '0.694', '0.834', '0.807', '0.915'], ['[BOLD] Type-2\xa0( [ITALIC] rnn-elmo-s)', '0.705', '0.836', '0.792', '0.903'], ['[BOLD] Type-3\xa0( [ITALIC] rnn-elmo-s)', '0.658', '0.796', '0.729', '0.857']] | To reach the best performance, we conduct experiments multiple times by changing the number of hops in the model from 1 to 6 for each case (Type-1 to Type-3). From the experiment, hop-4 is selected as the best-performing hyper-parameter. However, all the model variations undergo performance degradation compared to the original topology (PS-rnn-elmo-s). |
Propagate-Selector: Detecting Supporting Sentences for Question Answering via Graph Neural Networks | 1908.09137 | Table 6: Model performance with different methods for computing node representations. | ['[BOLD] Model', 'dev MAP', 'dev MRR', 'train MAP', 'train MRR'] | [['[BOLD] PS- [ITALIC] USD_T', '0.651', '0.795', '0.693', '0.830'], ['[BOLD] PS- [ITALIC] avg-glove', '0.617', '0.753', '0.876', '0.945'], ['[BOLD] PS- [ITALIC] avg-elmo-s', '0.471', '0.611', '0.483', '0.625'], ['[BOLD] PS- [ITALIC] rnn-glove', '0.700', '0.822', '[BOLD] 0.919', '[BOLD] 0.971'], ['[BOLD] PS- [ITALIC] rnn-elmo-s', '0.716', '0.841', '0.813', '0.916'], ['[BOLD] PS- [ITALIC] rnn-elmo', '[BOLD] 0.734', '[BOLD] 0.853', '0.863', '0.945'], ['[BOLD] PS- [ITALIC] rnn-bert', '0.667', '0.806', '0.708', '0.841']] | In all cases, the RNN encoding scheme (-rnn) performs better than average pooling (-avg). Interestingly, average pooling with the ELMo representation (PS-avg-elmo-s) performs worse than in the GloVe representation (PS-avg-glove) case. From this result, we find that averaging ELMo does not produce proper node representations. For the PS-rnn-bert case, we do not fine-tune the BERT model and only use it to compute word representations. We expect that fine-tuning BERT in an end-to-end training process could further enhance model performance.
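The MAP and MRR columns reported for the Propagate-Selector variants above are standard ranking metrics over each question's candidate sentences; the sketch below (illustrative only, with toy 0/1 relevance lists, one per question) shows how they are typically computed.

    def average_precision(relevances):
        """AP for one question: relevances is the ranked 0/1 list of candidate sentences."""
        hits, precisions = 0, []
        for i, rel in enumerate(relevances, start=1):
            if rel:
                hits += 1
                precisions.append(hits / i)
        return sum(precisions) / max(hits, 1)

    def mean_average_precision(rankings):
        return sum(average_precision(r) for r in rankings) / len(rankings)

    def mean_reciprocal_rank(rankings):
        """MRR: reciprocal rank of the first supporting sentence, averaged over questions."""
        reciprocal_ranks = []
        for r in rankings:
            rank = next((i for i, rel in enumerate(r, start=1) if rel), None)
            reciprocal_ranks.append(1.0 / rank if rank else 0.0)
        return sum(reciprocal_ranks) / len(rankings)

    # toy relevance judgments for two questions
    rankings = [[0, 1, 1, 0], [1, 0, 0, 0]]
    print(mean_average_precision(rankings), mean_reciprocal_rank(rankings))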
Fine-tune Bert for DocRED with Two-step Process | 1909.11898 | Table 3: Comparison of the BiLSTM baseline with SentModel which encode the document sentence by sentence. We report the F1 score and AUC on the Dev set here. | ['Model', 'F1', 'AUC'] | [['BiLSTM', '50.94', '50.26'], ['SentModel', '50.97', '49.31']] | To test whether current model can capture the complex interaction between entities, we use a SentModel which encodes the document sentence by sentence. Then we locate each entity within a specific sentence and compute its embedding by averaging the word embedding of the entity name. In this way, there will be no interaction between sentences since we encode the whole document sentence by sentence. Surprisingly, the SentModel can achieve very similar performance compared to the BiLSTM model which encodes the whole document as a sequence. Therefore, the current model fails to capture complex interactions among entities, and only local information around each entity is used to predict a relation. |
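The SentModel described above represents each entity by averaging the word embeddings of its name tokens within a sentence; a minimal sketch of that averaging step is given below, with a hypothetical toy vocabulary (the sentence-level BiLSTM encoding used by the actual model is omitted).

    import numpy as np

    def entity_embedding(entity_tokens, word_vectors, dim=100):
        """Average the word vectors of an entity name's tokens (unknown tokens are skipped)."""
        vecs = [word_vectors[t] for t in entity_tokens if t in word_vectors]
        if not vecs:
            return np.zeros(dim)
        return np.mean(vecs, axis=0)

    # hypothetical toy vocabulary of pre-trained word vectors
    word_vectors = {"new": np.random.rand(100), "york": np.random.rand(100)}
    print(entity_embedding(["new", "york"], word_vectors).shape)   # (100,)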
Fine-tune Bert for DocRED with Two-step Process | 1909.11898 | Table 2: Comparison of the BERT model with other baselines. We report F1 score on the Dev and Test set. | ['Model', 'Dev', 'Test'] | [['CNN', '43.45', '42.26'], ['LSTM', '50.68', '50.07'], ['BiLSTM', '50.94', '51.06'], ['Context-Aware', '51.09', '50.70'], ['BERT', '54.16', '53.20'], ['BERT-Two-Step', '[BOLD] 54.42', '[BOLD] 53.92']] | We can see that we obtain a 2% F1 improvement by using the BERT encoder, which indicates that it may contain useful information such as common-sense knowledge in order to solve this task. By using the two-step training process, performance is further improved. In our experiments, we find that the accuracy for the second step is above 90%, which means the bottleneck lies in the first step, e.g., predicting whether a relation exists for a given entity pair.
Modeling Sentiment Dependencies with Graph Convolutional Networks for Aspect-level Sentiment Classification | 1906.04501 | Table 3: The effect of GCN. | ['Models', 'Restaurant Acc', 'Restaurant Macro-F1', 'Laptop Acc', 'Laptop Macro-F1'] | [['Att', '81.43', '72.40', '72.12', '68.67'], ['Att+GCN', '82.77', '74.33', '74.61', '70.33'], ['BiAtt', '81.61', '73.49', '73.51', '69.73'], ['BiAtt+GCN (SDGCN)', '82.95', '75.79', '75.55', '71.35']] | It is clear that, compared with their GCN-reduced counterparts, the two models with GCN achieve higher performance. The results verify that modeling the sentiment dependencies between different aspects with a GCN plays an important role in predicting the sentiment polarities of aspects.
Modeling Sentiment Dependencies with Graph Convolutional Networks for Aspect-level Sentiment Classification | 1906.04501 | Table 2: Comparisons with baseline models on the Restaurant dataset and Laptop dataset. The results of baseline models are retrieved from published papers. The best results in GloVe-based models and BERT-based models are all in bold separately. -A means that the model is based on the adjacent-relation graph, and -G means the model is based on the global-relation graph. | ['Word Embedding', 'Models', 'Restaurant Acc', 'Restaurant Macro-F1', 'Laptop Acc', 'Laptop Macro-F1'] | [['GloVe', 'TD-LSTM', '75.63', '-', '68.13', '-'], ['GloVe', 'ATAE-LSTM', '77.20', '-', '68.70', '-'], ['GloVe', 'MenNet', '78.16', '65.83', '70.33', '64.09'], ['GloVe', 'IAN', '78.60', '-', '72.10', '-'], ['GloVe', 'RAN', '80.23', '70.80', '74.49', '[BOLD] 71.35'], ['GloVe', 'PBAN', '81.16', '-', '74.12', '-'], ['GloVe', 'TSN', '80.1', '-', '73.1', '-'], ['GloVe', 'AEN', '80.98', '72.14', '73.51', '69.04'], ['GloVe', 'SDGCN-A w/o p', '81.61', '72.22', '73.20', '68.54'], ['GloVe', 'SDGCN-G w/o p', '81.61', '72.93', '73.67', '68.70'], ['GloVe', 'SDGCN-A', '82.14', '73.47', '75.39', '70.04'], ['GloVe', 'SDGCN-G', '[BOLD] 82.95', '[BOLD] 75.79', '[BOLD] 75.55', '[BOLD] 71.35'], ['BERT', 'AEN-BERT', '83.12', '73.76', '79.93', '76.31'], ['BERT', 'SDGCN-BERT', '[BOLD] 83.57', '[BOLD] 76.47', '[BOLD] 81.35', '[BOLD] 78.34']] | In order to remove the influence of different word representations and directly compare the performance of different models, we compare GloVe-based models and BERT-based models separately. Our proposed model achieves the best performance among both GloVe-based models and BERT-based models, which demonstrates its effectiveness. In particular, SDGCN-BERT obtains new state-of-the-art results.
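SDGCN's key component is a graph convolution over the aspects of a sentence, connected by an adjacent- or global-relation graph; the sketch below shows one standard GCN propagation step, H' = ReLU(ÂHW), with toy shapes. It only illustrates the general mechanism, not the paper's exact layer or graph construction.

    import numpy as np

    def gcn_layer(H, A, W):
        """One GCN layer: add self-loops, symmetrically normalize A, propagate, apply ReLU."""
        A_hat = A + np.eye(A.shape[0])                 # adjacency with self-loops
        D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
        A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt       # symmetric normalization
        return np.maximum(A_norm @ H @ W, 0.0)         # ReLU activation

    # toy example: 3 aspects with 8-dim features, fully connected (global-relation graph)
    H = np.random.rand(3, 8)
    A = np.ones((3, 3)) - np.eye(3)
    W = np.random.rand(8, 8)
    print(gcn_layer(H, A, W).shape)   # (3, 8)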
Insights into Analogy Completion from the Biomedical Domain | 1706.02241 | Table 3: MAP performance on the three BMASS relations with ≥100 unigram analogies. Uni is using unigram embeddings on unigram data, UniM is using MWE embeddings on unigram data, and MWE is performance with MWE embeddings over the full MWE data. | ['Rel', 'PM-2 Uni', 'PM-2 Uni [ITALIC] M', 'PM-2 MWE', 'CBOW Uni', 'CBOW Uni [ITALIC] M', 'CBOW MWE'] | [['L2', '0.07', '0.10', '0.07', '0.11', '0.14', '0.06'], ['L3', '0.14', '0.19', '0.06', '0.12', '0.16', '0.06'], ['L4', '0.01', '0.00', '0.02', '0.04', '0.05', '0.07']] | Since prior work on analogies has primarily been concerned with unigram data, we also identified a subset of our data for which we could find single-word string realizations for all concepts in an analogy, using the full vocabulary of our trained embeddings. The unigram analogies are slightly better captured than the full MWE data for has-lab-number (L2) and has-tradename (L3); however, lower performance on the unigram subset in tradename-of (L4) shows that unigram analogies are not always easier. We see a small effect from the much larger set of candidate answers in the unigram case (>1m unigrams), as shown by the slightly higher MAP numbers in the UniM case. In general, it is clear that the difficulty of some of the relations in our dataset is not due solely to using MWEs in the analogies. |
Show, Recall, and Tell: Image Captioning with Recall Mechanism | 2001.05876 | Table 1: Experiment of our proposed recall mechanism on the MSCOCO Karpathy test split with both cross-entropy loss and CIDEr optimization. We implement our proposed methods: semantic guide (SG), recalled-word slot (RWS) and recalled-word reward (WR) on the baseline model Up-Down. Test results show that our proposed methods show a clear improvement over our baseline. B-1 / B-4 / M / R / C / S refers to BLEU1 / BLEU4 / METEOR / ROUGE-L / CIDEr / SPICE scores. | ['Models', 'Cross-entropy loss B-1', 'Cross-entropy loss B-4', 'Cross-entropy loss M', 'Cross-entropy loss R', 'Cross-entropy loss C', 'Cross-entropy loss S', 'CIDEr optimization training B-1', 'CIDEr optimization training B-4', 'CIDEr optimization training M', 'CIDEr optimization training R', 'CIDEr optimization training C', 'CIDEr optimization training S'] | [['Test-guide [Mun2016TextguidedAM]', '74.9', '32.6', '25.7', '-', '102.4', '-', '-', '-', '-', '-', '-', '-'], ['SCST [Rennie_2017_CVPR]', '-', '30.0', '25.9', '53.4', '99.4', '-', '-', '34.2', '26.7', '55.7', '114.0', '-'], ['StackCap [gu2018look]', '76.2', '35.2', '26.5', '-', '109.1', '-', '78.5', '36.1', '27.4', '-', '120.4', '-'], ['CAVP [liu2018context]', '-', '-', '-', '-', '-', '-', '-', '[BOLD] 38.6', '28.3', '[BOLD] 58.5', '126.3', '21.6'], ['Up-Down [anderson2018bottom]', '[BOLD] 77.2', '36.2', '27.0', '56.4', '113.5', '20.3', '79.8', '36.3', '27.7', '56.9', '120.1', '21.4'], ['Ours:SG', '77.1', '36.3', '27.8', '56.8', '115.3', '21.0', '80.2', '38.3', '28.5', '58.3', '127.3', '22.0'], ['Ours:SG+RWS', '77.1', '36.6', '28.0', '56.9', '116.9', '21.3', '80.3', '38.3', '28.5', '58.3', '128.3', '22.2'], ['Ours:SG+RWS+WR', '77.1', '[BOLD] 36.6', '[BOLD] 28.0', '[BOLD] 56.9', '[BOLD] 116.9', '[BOLD] 21.3', '[BOLD] 80.3', '38.5', '[BOLD] 28.7', '58.4', '[BOLD] 129.1', '[BOLD] 22.4']] | We have proposed semantic guide (SG), recalled-word slot (RWS) in the captioning model, and recalled-word reward (WR) in CIDEr optimization. Then we test their performance on the MSCOCO Karpathy test split. Beam search with beam size 2 is employed to generate captions. Semantic guide with recalled-word slot (SG+RWS) has improved 3.0% on cross-entropy loss, and 6.8% on CIDEr optimization. Our best model (SG+RWS+WR) has obtained a 7.5% improvement on CIDEr optimization, and it has obtained BLEU4 / CIDEr / SPICE scores of 36.6 / 116.9 / 21.3 with cross-entropy loss and 38.5 / 129.1 / 22.4 with CIDEr optimization. In addition, compared with other state-of-the-art models, like SCST [Rennie_2017_CVPR], StackCap [gu2018stack], and CAVP [liu2018context], our results still outperform theirs. The results of this comparison demonstrate the effectiveness of our proposed methods, especially on the more convincing evaluation metrics such as CIDEr and SPICE.
Show, Recall, and Tell: Image Captioning with Recall Mechanism | 2001.05876 | Table 2: Performance of our proposed methods over other state-of-the-art models after CIDEr optimization training. | ['Models', 'B-4', 'M', 'R', 'C', 'S'] | [['att2in [Rennie_2017_CVPR]', '36.1', '27.2', '56.9', '119.1', '20.8'], ['att2in+SG+RWS+WR', '[BOLD] 36.7', '[BOLD] 27.8', '[BOLD] 57.4', '[BOLD] 122.0', '[BOLD] 21.4'], ['att2all [Rennie_2017_CVPR]', '36.3', '27.5', '57.2', '121.7', '21.1'], ['att2all+SG+RWS+WR', '[BOLD] 37.1', '[BOLD] 28.0', '[BOLD] 57.8', '[BOLD] 125.0', '[BOLD] 21.7'], ['stackcap [gu2018look]', '36.6', '27.6', '57.3', '121.1', '21.0'], ['stackcap+SG+RWS+WR', '[BOLD] 37.8', '[BOLD] 28.3', '[BOLD] 58.0', '[BOLD] 126.4', '[BOLD] 21.9']] | To prove the effectiveness and generality of our proposed methods, we have also implemented them over other state-of-the-art models: att2in [Rennie_2017_CVPR], att2all [Rennie_2017_CVPR] and stackcap [gu2018look]. We run comparative experiments over these three models. In detail, we obtain an average improvement of 2.3% on att2in, 2.1% on att2all, and 3.3% on stackcap. We have also conducted the MSCOCO online evaluation and achieved promising results (called “caption-recall”, reported on 19 Oct 2019), which also surpass the online results of our baseline model Up-Down.
Show, Recall, and Tell: Image Captioning with Recall Mechanism | 2001.05876 | Table 3: Performance of text-retrieval model on MSCOCO Karpathy validation set. | ['[EMPTY]', 'R@1', 'R@5', 'R@10', 'Mean r'] | [['Text-retrieval', '36.2', '69.0', '81.7', '7.7'], ['Image-retrieval', '39.6', '72.3', '83.7', '11.0']] | The performance of the text-retrieval model can directly affect the relevance between the recalled words and the image. We assume that our text-retrieval model does not achieve state-of-the-art as in [lee2018stacked], but it is well qualified to retrieve sufficient and relevant words for images. Our corpus is collected from all the captions of the MSCOCO Karpathy train split. For determining an appropriate number of captions to be retrieved, we respectively retrieve the top 1, 5 and 15 related captions for each image, and test the performance with cross-entropy loss. K=5 is a better choice than 1 or 15. It is necessary to emphasize that we avoid retrieving the ground truth caption for images, and all the retrieved captions only come from the train splits of MSCOCO Karpathy. In the following experiments, the top 5 captions retrieved are used to construct recalled words for each image.
Show, Recall, and Tell: Image Captioning with Recall Mechanism | 2001.05876 | Table 4: Experiments on choice of K, the number of captions retrieved for each image. B-1 / B-4 / M / R / C / S refers to BLEU1 / BLEU4 / METEOR / ROUGE-L / CIDEr / SPICE scores. Experiments are conducted on MSCOCO Karpathy validation set. | ['Top [ITALIC] K', 'Cross-Entropy Loss B-1', 'Cross-Entropy Loss B-4', 'Cross-Entropy Loss M', 'Cross-Entropy Loss R', 'Cross-Entropy Loss C', 'Cross-Entropy Loss S'] | [['[ITALIC] K=1', '77.1', '36.3', '27.8', '56.9', '115.8', '21.2'], ['[ITALIC] K=5', '77.1', '36.5', '28.0', '57.0', '[BOLD] 116.7', '21.3'], ['[ITALIC] K=15', '77.0', '36.3', '27.9', '56.6', '115.6', '21.0']] | The performance of the text-retrieval model can directly affect the relevance between the recalled words and the image. We assume that our text-retrieval model does not achieve state-of-the-art as in [lee2018stacked], but it is well qualified to retrieve sufficient and relevant words for images. Our corpus is collected from all the captions of the MSCOCO Karpathy train split. For determining an appropriate number of captions to be retrieved, we respectively retrieve the top 1, 5 and 15 related captions for each image, and test the performance with cross-entropy loss. K=5 is a better choice than 1 or 15. It is necessary to emphasize that we avoid retrieving the ground truth caption for images, and all the retrieved captions only come from the train splits of MSCOCO Karpathy. In the following experiments, the top 5 captions retrieved are used to construct recalled words for each image. Thus we set different λ values from 0 to 1 to conduct a model selection. As a result, in the following experiments, we set λ=0.5.
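The recall mechanism discussed above retrieves the top-K most relevant training captions for an image and pools their words as recalled words; the sketch below illustrates that retrieval step with plain cosine similarity over random toy embeddings. The paper trains a dedicated text-retrieval model for the scoring, so the similarity function here is only a stand-in and all names are illustrative.

    import numpy as np

    def recall_words(image_vec, caption_vecs, captions, k=5):
        """Retrieve the top-k most similar training captions and pool their words."""
        norms = np.linalg.norm(caption_vecs, axis=1) * np.linalg.norm(image_vec)
        sims = caption_vecs @ image_vec / np.maximum(norms, 1e-8)   # cosine similarities
        top_idx = np.argsort(-sims)[:k]
        words = set()
        for i in top_idx:
            words.update(captions[i].lower().split())
        return words

    # toy example with random embeddings and two candidate training captions
    captions = ["a man riding a horse", "a plate of food on a table"]
    caption_vecs = np.random.rand(2, 16)
    image_vec = np.random.rand(16)
    print(recall_words(image_vec, caption_vecs, captions, k=1))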
KUISAIL at SemEval-2020 Task 12: BERT-CNN for Offensive Speech Identification in Social Media | 2007.13184 | Table 2: Macro averaged F1-Scores of our submissions and the other experiments on test data | ['[BOLD] Model', '[BOLD] Arabic', '[BOLD] Greek', '[BOLD] Turkish', '[BOLD] Average'] | [['[BOLD] SVM with TF-IDF', '0.772', '0.823', '0.685', '0.760'], ['[BOLD] Multilingual BERT', '0.808', '0.807', '0.774', '0.796'], ['[BOLD] Bi-LSTM', '0.822', '0.826', '0.755', '0.801'], ['[BOLD] CNN-Text', '0.840', '0.825', '0.751', '0.805'], ['[BOLD] BERT', '0.884', '0.822', '[BOLD] 0.816', '0.841'], ['[BOLD] BERT-CNN (Ours)', '[BOLD] 0.897', '[BOLD] 0.843', '0.814', '[BOLD] 0.851']] | The macro-averaged F1-score metric was used for evaluation in this shared task. The baseline model uses Term Frequency-Inverse Document Frequency (TF-IDF) with a Support Vector Machine (SVM). CNN-Text uses CNNs with the same structure as the main model, but without pre-trained BERT as an embedder; it relies on randomly initialized embeddings of size 300, which were trained along with the model. While CNNs capture local features of the text, LSTMs, which have shown remarkable performance in text classification tasks, capture temporal information.
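The macro-averaged F1, precision and recall used above weight every class equally regardless of its frequency; assuming scikit-learn is available, the computation can be sketched as follows with toy gold and predicted labels.

    from sklearn.metrics import f1_score, precision_score, recall_score

    # toy gold and predicted labels for a two-class offensive-language task
    gold = ["OFF", "NOT", "NOT", "OFF", "NOT", "OFF"]
    pred = ["OFF", "NOT", "OFF", "OFF", "NOT", "NOT"]

    macro_f1 = f1_score(gold, pred, average="macro")          # mean of per-class F1 scores
    macro_p = precision_score(gold, pred, average="macro")
    macro_r = recall_score(gold, pred, average="macro")
    print(macro_f1, macro_p, macro_r)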
Bayesian Optimization of Text Representations | 1503.00693 | Table 8: Comparisons on the 20 Newsgroups dataset for classifying documents into all topics. The discriminative RBM result is from drbm; compressive feature learning and LR-5-grams results are from compressive, and the distributed structured output result is from srikumar. | ['[BOLD] Method', '[BOLD] Acc.'] | [['Discriminative RBM', '76.20'], ['Compressive feature learning', '83.00'], ['LR-{1,2,3,4,5}-grams', '82.80'], ['Distributed structured output', '84.00'], ['[BOLD] LR (this work)', '87.84']] | The strong logistic regression baseline from compressive uses all 5-grams, heuristic normalization, and elastic net regularization; our method found that unigrams and bigrams, with binary weighting and ℓ2 penalty, achieved far better results.
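The winning configuration described above (unigrams and bigrams, binary weighting, ℓ2-penalized logistic regression) can be sketched with scikit-learn as below; the toy documents, labels and the regularization strength C are placeholders that the Bayesian optimization procedure would normally select.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    docs = ["the plot was gripping", "a dull and lifeless film"]   # toy training documents
    labels = [1, 0]

    model = make_pipeline(
        CountVectorizer(ngram_range=(1, 2), binary=True),   # unigrams + bigrams, binary weights
        LogisticRegression(penalty="l2", C=1.0),             # C would be tuned by the optimizer
    )
    model.fit(docs, labels)
    print(model.predict(["gripping plot"]))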
Bayesian Optimization of Text Representations | 1503.00693 | Table 4: Comparisons on the Stanford sentiment treebank dataset. Scores are as reported by socher and paragraphvector. | ['[BOLD] Method', '[BOLD] Acc.'] | [['Naïve Bayes', '81.8'], ['SVM', '79.4'], ['Vector average', '80.1'], ['Recursive neural networks', '82.4'], ['[BOLD] LR (this work)', '82.4'], ['Matrix-vector RNN', '82.9'], ['Recursive neural tensor networks', '85.4'], ['Paragraph vector', '87.8']] | On the Stanford sentiment treebank, our logistic regression model outperforms the baseline SVM reported by socher, who used only unigrams but did not specify the weighting scheme for their SVM baseline.
Bayesian Optimization of Text Representations | 1503.00693 | Table 5: Comparisons on the Amazon electronics dataset. Scores are as reported by riejohnson. | ['[BOLD] Method', '[BOLD] Acc.'] | [['SVM-unigrams', '88.62'], ['SVM-{1,2}-grams', '90.70'], ['SVM-{1,2,3}-grams', '90.68'], ['NN-unigrams', '88.94'], ['NN-{1,2}-grams', '91.10'], ['NN-{1,2,3}-grams', '91.24'], ['[BOLD] LR (this work)', '91.56'], ['Bag of words CNN', '91.58'], ['Sequential CNN', '92.22']] | Our method is on par with the second-best of these, outperforming all of the reported feed-forward neural networks and SVM variants Johnson and Zhang used as baselines. They varied the representations, and used log term frequency and normalization to unit vectors as the weighting scheme, after finding that this outperformed term frequency. Our method achieved the best performance with binary weighting, which they did not consider. |
Bayesian Optimization of Text Representations | 1503.00693 | Table 7: Comparisons on the U.S. congressional vote dataset. SVM-link exploits link structures [Thomas et al. 2006]; the min-cut result is from bansal; and the SVM-SLE result is reported by ainur. | ['[BOLD] Method', '[BOLD] Acc.'] | [['SVM-link', '71.28'], ['Min-cut', '75.00'], ['SVM-SLE', '77.67'], ['[BOLD] LR (this work)', '78.59']] | Our method outperforms the best reported results of ainur (SVM-SLE), which uses a multi-level structured model based on a latent-variable SVM. We show comparisons to two well-known but weaker baselines, as well.
A PDTB-Styled End-to-End Discourse Parser | 1011.0835 | Table 3: Results for identifying the Arg1 and Arg2 subtree nodes for the SS case under the GS + no EP setting for the three categories. | ['[EMPTY]', 'Arg1 [ITALIC] F1', 'Arg2 [ITALIC] F1', 'Rel [ITALIC] F1'] | [['Subordinating', '88.46', '97.93', '86.98'], ['Coordinating', '90.34', '90.34', '82.39'], ['Discourse adverbial', '46.88', '62.50', '37.50'], ['All', '86.63', '93.41', '82.60']] | We next evaluate the performance of the argument extractor. The last column shows the relation level F1 which requires both Arg1 and Arg2 nodes to be matched. We only show the results for the GS + no EP setting to save space. As expected, Arg1 and Arg2 nodes for subordinating connectives are the easiest ones to identify and give a high Arg2 F1 of 97.93% and a Rel F1 of 86.98%. We note that the Arg1 F1 and Arg2 F1 for coordinating connectives are the same, which is strange, as we expect Arg2 nodes to be handled more easily. The error analysis shows that Arg2 spans for coordinating connectives tend to include extra texts that cause the Arg2 nodes to move lower down in the parse tree. For example, “… and Mr. Simpson said he resigned in 1988” contains the extra span “Mr. Simpson said” which causes the Arg2 node moving two levels down the tree. As we discussed, discourse adverbials are difficult to identify as their Arg1 and Arg2 nodes are not strongly bound in the parse trees. However, as they do not occupy a large percentage in the test data, they do not lead to a large degradation as shown in the last row. |
A PDTB-Styled End-to-End Discourse Parser | 1011.0835 | Table 1: Results for the connective classifier. | ['[EMPTY]', 'P&N Acc.', 'P&N [ITALIC] F1', '+new Acc.', '+new [ITALIC] F1'] | [['GS', '95.30', '92.75', '97.34', '95.76'], ['Auto', '94.21', '91.00', '96.02', '93.62']] | The connective classifier is trained on Sec. 02–21 and tested on Sec. 23. The second and third columns show the accuracy and F1 using the features of P&N, whereas the last two columns show the results when we add in the lexico-syntactic and path features (+new). Introducing the new features significantly (all with p<0.001) increases the accuracy and F1 by 2.04% and 3.01% under the GS setting, and 1.81% and 2.62% under the Auto setting. This confirms the usefulness of integrating the contextual and syntactic information. As the connective classifier is the first component in the pipeline, its high performance is crucial to mitigate the effect of cascaded errors downstream.
A PDTB-Styled End-to-End Discourse Parser | 1011.0835 | Table 4: Overall results for the argument extractor. | ['[EMPTY]', '[EMPTY]', 'Arg1 [ITALIC] F1', 'Arg2 [ITALIC] F1', 'Rel [ITALIC] F1'] | [['Partial', 'GS + no EP', '86.67', '99.13', '86.24'], ['Partial', 'GS + EP', '83.62', '94.98', '83.52'], ['Partial', 'Auto + EP', '81.72', '92.64', '80.96'], ['Exact', 'GS + no EP', '59.15', '82.23', '53.85'], ['Exact', 'GS + EP', '57.64', '79.80', '52.29'], ['Exact', 'Auto + EP', '47.68', '70.27', '40.37']] | Miltsakaki et al. found that most of the disagreements for exact match come from partial overlaps which do not show significant semantic difference. We follow such work and report both exact and partial matches. When checking exact match, we require two spans to match identically, excluding any leading and ending punctuation symbols. A partial match is credited if there is any overlap between the verbs and nouns of the two spans. The GS + no EP setting gives a satisfactory F1 of 86.24% for partial matching on the relation level. On the other hand, the results for exact matching are much lower than the human agreement. We observe that most misses are due to small portions of text being deleted from or added to the spans by the annotators to follow the minimality principle of including in the argument the minimal span of text that is sufficient for the interpretation of the relation, which poses difficulties for machines to follow.
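The partial-match criterion quoted above (credit if the two argument spans share any verb or noun) reduces to a set intersection once each span has been filtered to its nouns and verbs; a minimal sketch, assuming that POS filtering has already been done upstream, is shown below.

    def partial_match(gold_content_words, pred_content_words):
        """Credit a partial match if the two argument spans share any noun or verb."""
        return len(set(gold_content_words) & set(pred_content_words)) > 0

    # toy spans already reduced to their nouns and verbs
    gold = ["simpson", "said", "resigned"]
    pred = ["resigned", "1988"]
    print(partial_match(gold, pred))   # True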
A PDTB-Styled End-to-End Discourse Parser | 1011.0835 | Table 5: Results for the explicit classifier. | ['[EMPTY]', 'Precision', 'Recall', '[ITALIC] F1'] | [['GS + no EP', '86.77', '86.77', '86.77'], ['GS + EP', '83.19', '82.65', '82.92'], ['Auto + EP', '81.19', '80.04', '80.61']] | Recall that human agreement on Level 2 types is 84.00% and a baseline classifier that uses only the connectives as features yields an F1 of 86.00% under the GS + no EP setting on Sec. 23. Adding our new features improves F1 to 86.77%. With full automation and error propagation, we obtain an F1 of 80.61%. Pitler and Nenkova report results on Sec. 02-22; that setup actually performs worse than the baseline when trained on Sec. 02-21 and tested on Sec. 23.
A PDTB-Styled End-to-End Discourse Parser | 1011.0835 | Table 6: Results for the non-explicit classifier. | ['[EMPTY]', 'Precision', 'Recall', '[ITALIC] F1', 'Baseline [ITALIC] F1'] | [['GS + no EP', '39.63', '39.63', '39.63', '21.34'], ['GS + EP', '26.21', '27.63', '26.90', '20.30'], ['Auto + EP', '24.54', '26.45', '25.46', '19.31']] | A single component evaluation (GS + no EP) shows a micro F1 of 39.63%. Although the F1 scores for the GS + EP and Auto + EP settings are unsatisfactory, they still significantly outperform the majority class baseline by about 6%. This performance is in line with the difficulties of classifying Implicit relations discussed in detail in our previous work |
A PDTB-Styled End-to-End Discourse Parser | 1011.0835 | Table 7: Results for the attribution span labeler. | ['Partial', 'GS + no EP', 'Precision 79.40', 'Recall 79.96', '[ITALIC] F1 79.68'] | [['Partial', 'GS + EP', '65.93', '79.96', '72.27'], ['Partial', 'Auto + EP', '64.40', '51.68', '57.34'], ['Exact', 'GS + no EP', '65.72', '66.19', '65.95'], ['Exact', 'GS + EP', '54.57', '66.19', '59.82'], ['Exact', 'Auto + EP', '47.83', '38.39', '42.59']] | The final component, the attribution span labeler, is evaluated under both partial and exact match, in accordance with the argument extractor. When error propagation is introduced, the degradation of F1 is largely due to the drop in precision. This is not surprising as at this point, the test data contains a lot of false positives propagated from the previous components. This has effect on the precision calculation but not recall (the recall scores do not change). When full automation is further added, the degradation is largely due to the drop in recall. This is because the automatic parser introduces noise that causes errors in the clause splitting step. |
Two Local Models for Neural Constituent Parsing | 1808.04850 | Table 2: Span representation methods. | ['Model', 'English LP', 'English LR', 'English LF', 'Chinese LP', 'Chinese LR', 'Chinese LF'] | [['BinarySpan', '92.16', '92.19', '92.17', '91.31', '90.48', '90.89'], ['MultiSpan', '92.47', '[BOLD] 92.41', '[BOLD] 92.44', '[BOLD] 91.69', '90.91', '[BOLD] 91.30'], ['LinearRule', '92.03', '92.03', '92.03', '91.03', '89.19', '90.10'], ['BiaffineRule', '[BOLD] 92.49', '92.23', '92.36', '91.31', '[BOLD] 91.28', '91.29']] | We study the two span representation methods, namely the simple concatenation representation v[i,j] and the alternative representation sr[i,j]. We investigate appropriate representations for different models on the English dev dataset. When using sr[i,j] for BinarySpan, the performance drops greatly (92.17→91.80). Similar observations can be found when replacing sr[i,j] with v[i,j] for BiaffineRule. Therefore, we use v[i,j] for the span models and sr[i,j] for the rule models in later experiments. For English, BinarySpan achieves a 92.17 LF score. The multi-class span classifier (MultiSpan) is much better than BinarySpan due to the awareness of label information. A similar phenomenon can be observed on the Chinese dataset. We also test the linear rule (LinearRule) methods. For English, LinearRule obtains a 92.03 LF score, which is much worse than BiaffineRule. In general, the performances of BiaffineRule and MultiSpan are quite close for both English and Chinese.
Two Local Models for Neural Constituent Parsing | 1808.04850 | Table 1: Hyper-parameters for training. | ['[BOLD] hyper-parameter', '[BOLD] value', '[BOLD] hyper-parameter', '[BOLD] value'] | [['Word embeddings', 'English: 100 Chinese: 80', 'Word LSTM layers', '2'], ['Word LSTM hidden units', '200', 'Character embeddings', '20'], ['Character LSTM layers', '1', 'Character LSTM hidden units', '25'], ['Tree-LSTM hidden units', '200', 'POS tag embeddings', '32'], ['Constituent label embeddings', '32', 'Label LSTM layers', '1'], ['Label LSTM hidden units', '200', 'Last output layer hidden units', '128'], ['Maximum training epochs', '50', 'Dropout', 'English: 0.5, Chinese 0.3'], ['Trainer', 'SGD', 'Initial learning rate', '0.1'], ['Per-epoch decay', '0.05', '[ITALIC] ϕ', 'ELU']] | Hyper-parameters. These values are tuned using the corresponding development sets. We optimize our models with stochastic gradient descent (SGD). The initial learning rate is 0.1. Our models are initialized with pretrained word embeddings for both English and Chinese. The pretrained word embeddings are the same as those used in dyer2016rnng. For Chinese, we find that 0.3 is a good choice for the dropout probability. The number of training epochs is decided by the evaluation performance on the development set. In particular, we perform evaluations on the development set every 10,000 examples. The training procedure stops when the results of the next 20 evaluations do not become better than the previous best record.
Two Local Models for Neural Constituent Parsing | 1808.04850 | Table 5: Results on the Chinese Treebank 5.1 test set. | ['Parser', 'LR', 'LP', 'LF', 'Parser', 'LR', 'LP', 'LF'] | [['charniak2005rerank (R)', '80.8', '83.8', '82.3', 'petrov2007unlex', '81.9', '84.8', '83.3'], ['zhu2013acl (S)', '84.4', '86.8', '85.6', 'zhang2009tran', '78.6', '78.0', '78.3'], ['wang2015feature (S)', '[EMPTY]', '[EMPTY]', '86.6', 'watanabe2015transition', '[EMPTY]', '[EMPTY]', '84.3'], ['huang2009selftraining (ST)', '[EMPTY]', '[EMPTY]', '85.2', 'dyer2016rnng', '[EMPTY]', '[EMPTY]', '84.6'], ['dyer2016rnng (R)', '[EMPTY]', '[EMPTY]', '86.9', '[BOLD] BinarySpan', '85.9', '87.1', '86.5'], ['liu2016lookahead', '85.2', '85.9', '85.5', '[BOLD] MultiSpan', '86.6', '88.0', '[BOLD] 87.3'], ['liu2017inorder', '[EMPTY]', '[EMPTY]', '86.1', '[BOLD] BiaffineRule', '87.1', '87.5', '[BOLD] 87.3']] | Chinese. Under the same settings, all three models outperform the state-of-the-art neural model. Compared with the in-order transition-based parser, our best model improves the labeled F1 score by 1.2 (86.1→87.3).
Query-by-Example Search with Discriminative Neural Acoustic Word Embeddings | 1706.03818 | Table 4: Comparison of QbE system performance on the evaluation set. | ['System', '[BOLD] Median Example [BOLD] FOM', '[BOLD] Median Example [BOLD] OTWV', '[BOLD] Median Example [BOLD] P@10', '[BOLD] Best Example [BOLD] FOM', '[BOLD] Best Example [BOLD] OTWV', '[BOLD] Best Example [BOLD] P@10', '[BOLD] Query time (s)'] | [['RAILS ', '6.7', '2.7', '44.0', '20.7', '10.4', '84.4', '24.7'], ['S-RAILS (baseline)', '24.5', '14.4', '34.5', '46.2', '26.6', '87.4', '0.078'], ['S-RAILS+NAWE (ours)', '43.3', '22.4', '60.2', '65.4', '43.3', '95.1', '0.38']] | We find that our approach improves significantly over both RAILS and S-RAILS in terms of all performance metrics at this operating point. The biggest gains from S-RAILS+NAWE are seen in the Median Example results, where there is a relative improvement over S-RAILS of more than 55% across all measures. In terms of FOM and OTWV, we see relative improvements of over 40% in the Best Example case. Although the baselines obtain good P@10, we still find large improvements in this measure as well, from 87.1% to 95.1%. |
Query-by-Example Search with Discriminative Neural Acoustic Word Embeddings | 1706.03818 | Table 2: Effect of number of permutations P on S-RAILS+NAWE performance on the development set, for signature length b=1024 and beamwidth B=2,000. | ['[ITALIC] b', '[BOLD] Median Example [BOLD] FOM', '[BOLD] Median Example [BOLD] OTWV', '[BOLD] Median Example [BOLD] P@10', '[BOLD] Best Example [BOLD] FOM', '[BOLD] Best Example [BOLD] OTWV', '[BOLD] Best Example [BOLD] P@10'] | [['128', '62.1', '37.4', '42.1', '81.7', '60.8', '83.8'], ['256', '67.2', '42.6', '48.6', '83.0', '65.4', '84.9'], ['512', '68.2', '44.8', '52.6', '83.6', '65.9', '84.9'], ['1024', '69.1', '46.5', '54.5', '84.1', '66.7', '84.8'], ['2048', '70.4', '48.3', '54.5', '85.0', '66.8', '86.0']] | Again in contrast to the S-RAILS system, our method responds strongly to increases in the number of permutations used. This is to be expected if the neural embeddings provide a better measure of speech segment distances, since the increased number of permutations helps provide a more exact estimate of the embedding distance. Further increasing the number of permutations may further improve these metrics, but this incurs a large cost in memory. To obtain higher precision systems, it is more important to use computational resources for increasing the number of permutations or using longer signatures. However, as would be expected, the higher beamwidths help to significantly improve the FOM score, a metric concerned primarily with recall. |
Query-by-Example Search with Discriminative Neural Acoustic Word Embeddings | 1706.03818 | Table 2: Effect of number of permutations P on S-RAILS+NAWE performance on the development set, for signature length b=1024 and beamwidth B=2,000. | ['[ITALIC] P', '[BOLD] Median Example [BOLD] FOM', '[BOLD] Median Example [BOLD] OTWV', '[BOLD] Median Example [BOLD] P@10', '[BOLD] Best Example [BOLD] FOM', '[BOLD] Best Example [BOLD] OTWV', '[BOLD] Best Example [BOLD] P@10'] | [['4', '48.8', '33.2', '45.2', '75.2', '59.0', '83.0'], ['8', '60.9', '41.0', '50.3', '80.3', '63.8', '85.0'], ['16', '69.1', '46.5', '54.5', '84.1', '66.7', '84.8']] | Again in contrast to the S-RAILS system, our method responds strongly to increases in the number of permutations used. This is to be expected if the neural embeddings provide a better measure of speech segment distances, since the increased number of permutations helps provide a more exact estimate of the embedding distance. Further increasing the number of permutations may further improve these metrics, but this incurs a large cost in memory. To obtain higher precision systems, it is more important to use computational resources for increasing the number of permutations or using longer signatures. However, as would be expected, the higher beamwidths help to significantly improve the FOM score, a metric concerned primarily with recall. |
Query-by-Example Search with Discriminative Neural Acoustic Word Embeddings | 1706.03818 | Table 2: Effect of number of permutations P on S-RAILS+NAWE performance on the development set, for signature length b=1024 and beamwidth B=2,000. | ['[ITALIC] B', '[BOLD] Median Example [BOLD] FOM', '[BOLD] Median Example [BOLD] OTWV', '[BOLD] Median Example [BOLD] P@10', '[BOLD] Best Example [BOLD] FOM', '[BOLD] Best Example [BOLD] OTWV', '[BOLD] Best Example [BOLD] P@10'] | [['1000', '65.8', '44.8', '53.4', '83.0', '65.6', '85.0'], ['2000', '69.1', '46.5', '54.5', '84.1', '66.7', '84.8'], ['10000', '74.6', '49.5', '54.2', '86.3', '67.9', '84.8']] | Again in contrast to the S-RAILS system, our method responds strongly to increases in the number of permutations used. This is to be expected if the neural embeddings provide a better measure of speech segment distances, since the increased number of permutations helps provide a more exact estimate of the embedding distance. Further increasing the number of permutations may further improve these metrics, but this incurs a large cost in memory. To obtain higher precision systems, it is more important to use computational resources for increasing the number of permutations or using longer signatures. However, as would be expected, the higher beamwidths help to significantly improve the FOM score, a metric concerned primarily with recall. |
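The signature length b and number of permutations P discussed above come from the locality-sensitive hashing index used by S-RAILS-style systems; one common way to build such bit signatures is random-hyperplane hashing, sketched below, where the Hamming distance between two b-bit signatures approximates the angle between the original embeddings. Larger b (and more permutation-sorted lookups, larger P) tighten the approximation; the permutation-based search itself is not shown and this is not the exact S-RAILS implementation.

    import numpy as np

    def bit_signature(x, hyperplanes):
        """b-bit signature: sign of the projection onto b random hyperplanes."""
        return (hyperplanes @ x > 0).astype(np.uint8)

    def hamming(a, b):
        return int(np.sum(a != b))

    rng = np.random.default_rng(0)
    b, d = 1024, 128                          # signature length, embedding dimension
    hyperplanes = rng.standard_normal((b, d))
    u, v = rng.standard_normal(d), rng.standard_normal(d)
    # the fraction of differing bits estimates the angle between u and v (in units of pi)
    print(hamming(bit_signature(u, hyperplanes), bit_signature(v, hyperplanes)) / b)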
Irony Detection in a Multilingual Context | 2002.02427 | Table 1: Tweet distribution in all corpora. | ['[EMPTY]', '[BOLD] # Ironic', '[BOLD] # Not-Ironic', '[BOLD] Train', '[BOLD] Test'] | [['Ar', '6,005', '5,220', '10,219', '1,006'], ['Fr', '2,425', '4,882', '5,843', '1,464'], ['En', '5,602', '5,623', '10,219', '1,006']] | Also, for French, we use the original dataset without any modification, keeping the same number of records for train and test to better compare with state-of-the-art results. For the class distribution (ironic vs. non-ironic), we do not choose a specific ratio but use the resulting distribution from the random shuffling process.
Irony Detection in a Multilingual Context | 2002.02427 | Table 2: Results of the monolingual experiments (in percentage) in terms of accuracy (A), precision (P), recall (R), and macro F-score (F). | ['Arabic', 'Arabic A', 'Arabic P', 'Arabic R', 'Arabic F', 'French A', 'French P', 'French R', 'French F', 'English A', 'English P', 'English R', 'English F'] | [['RF', '68.0', '67.0', '82.0', '68.0', '68.5', '71.7', '87.3', '61.0', '61.2', '60.0', '70.0', '61.0'], ['CNN', '[BOLD] 80.5', '79.1', '84.9', '[BOLD] 80.4', '[BOLD] 77.6', '68.2', '59.6', '[BOLD] 73.5', '[BOLD] 77.9', '74.6', '84.7', '[BOLD] 77.8']] | Results. For English, our results, in terms of macro F-score (F), were not comparable to those of [Techc:2014, tay-etal-2018-reasoning], as we used 11% of the original dataset. For French, our scores are in line with those reported in state of the art (cf. best system in the irony shared task achieved F=78.3 [Deft2017]). They outperform those obtained for Arabic (A=71.7) [KarouiACLING:2017] and are comparable to those recently reported in the irony detection shared task in Arabic tweets [idat2019, ghanem2019idat] (F=84.4). Overall, the results show that semantic-based information captured by the embedding space are more productive comparing to standard surface and lexicon-based features. |
Irony Detection in a Multilingual Context | 2002.02427 | Table 3: Results of the cross-lingual experiments. | ['Train→Test', 'CNN A', 'CNN P', 'CNN R', 'CNN F', 'RF A', 'RF P', 'RF R', 'RF F'] | [['Ar→Fr', '60.1', '37.2', '26.6', '[BOLD] 51.7', '47.03', '29.9', '43.9', '46.0'], ['Fr→Ar', '57.8', '62.9', '45.7', '[BOLD] 57.3', '51.11', '61.1', '24.0', '54.0'], ['Ar→En', '48.5', '26.5', '17.9', '34.1', '49.67', '49.7', '66.2', '[BOLD] 50.0'], ['En→Ar', '56.7', '57.7', '62.3', '[BOLD] 56.4', '52.5', '58.6', '38.5', '53.0'], ['Fr→En', '53.0', '67.9', '11.0', '42.9', '52.38', '52.0', '63.6', '[BOLD] 52.0'], ['En→Fr', '56.7', '33.5', '29.5', '50.0', '56.44', '74.6', '52.7', '[BOLD] 58.0'], ['(En/Fr)→Ar', '62.4', '66.1', '56.8', '[BOLD] 62.4', '55.08', '56.7', '68.5', '62.0'], ['Ar→(En/Fr)', '56.3', '33.9', '09.5', '42.7', '59.84', '60.0', '98.7', '[BOLD] 74.6']] | We use the previous CNN architecture with bilingual embeddings and the RF model with surface features (e.g., use of personal pronouns, presence of interjections, emoticons or specific punctuation) to test whether a language pair: (a) has similar ironic pragmatic devices, and (b) uses similar text-based patterns in the narrative of the ironic tweets. As continuous word embedding spaces exhibit similar structures across (even distant) languages [Mikolov:2013], we use a multilingual word representation which aims to learn a linear mapping from a source to a target embedding space. Many methods have been proposed to learn this mapping such as parallel data supervision and bilingual dictionaries [Mikolov:2013] or unsupervised methods relying on monolingual corpora [Conneau:2017, Artetxe:2018, Wada:2018]. For our experiments, we use Conneau et al.’s approach as it showed superior results with respect to the literature [Conneau:2017]. We perform several experiments by training on one language (lang1) and testing on another one (lang2) (henceforth lang1→lang2). We get 6 configurations, plus two others to evaluate how irony devices are expressed cross-culturally, i.e. in European vs. non-European languages. In each experiment, we took 20% of the training data to validate the model before the testing process.
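The linear source-to-target mapping mentioned above can be learned in several ways; one standard supervised solution, used as the refinement step in Conneau et al.'s approach, is the orthogonal Procrustes problem over a seed dictionary, sketched below with random placeholder matrices (the fully unsupervised adversarial variant is not covered here).

    import numpy as np

    def procrustes_mapping(X_src, Y_tgt):
        """Orthogonal W minimizing ||W X - Y||_F for paired columns X (source) and Y (target)."""
        # X_src, Y_tgt: (dim, n_pairs) matrices of aligned word vectors
        u, _, vt = np.linalg.svd(Y_tgt @ X_src.T)
        return u @ vt

    dim, n_pairs = 300, 5000
    X = np.random.rand(dim, n_pairs)      # e.g. Arabic seed-word vectors (toy values)
    Y = np.random.rand(dim, n_pairs)      # their French translations' vectors (toy values)
    W = procrustes_mapping(X, Y)
    mapped = W @ X                        # source vectors expressed in the target space
    print(mapped.shape)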
Cross-Lingual Low-Resource Set-to-Description Retrieval for Global E-Commerce | 2005.08188 | Table 3: Evaluation results of baselines and our models. CLMN− refers to CLMN without using pre-trained BERT. | ['[BOLD] Model', '[ITALIC] R2@1', '[ITALIC] R10@1', '[ITALIC] R10@2', '[ITALIC] R10@5', '[ITALIC] MRR'] | [['Unsupervised models', 'Unsupervised models', 'Unsupervised models', 'Unsupervised models', 'Unsupervised models', 'Unsupervised models'], ['TbTQT', '86.0', '40.1', '64.7', '94.5', '61.7'], ['BWE-AGG', '64.1', '20.4', '33.6', '65.5', '40.3'], ['BWE-IDF', '62.0', '19.0', '32.6', '66.4', '39.3'], ['Bigram', '64.0', '19.4', '33.4', '65.3', '39.6'], ['Translation+Monolingual information retrieval models', 'Translation+Monolingual information retrieval models', 'Translation+Monolingual information retrieval models', 'Translation+Monolingual information retrieval models', 'Translation+Monolingual information retrieval models', 'Translation+Monolingual information retrieval models'], ['SMN', '94.7', '73.6', '88.8', '98.4', '84.2'], ['DAM', '95.8', '78.5', '91.5', '98.8', '87.4'], ['CSRAN', '96.0', '79.8', '92.5', '99.0', '88.2'], ['Cross-lingual information retrieval model', 'Cross-lingual information retrieval model', 'Cross-lingual information retrieval model', 'Cross-lingual information retrieval model', 'Cross-lingual information retrieval model', 'Cross-lingual information retrieval model'], ['ML-BERT', '97.2', '83.8', '95.0', '99.3', '90.8'], ['POSIT-DRMM', '95.4', '74.3', '90.7', '99.3', '85.1'], ['CLMN−', '97.1', '83.5', '94.7', '99.6', '90.5'], ['[BOLD] CLMN', '[BOLD] 97.8', '[BOLD] 86.8', '[BOLD] 95.5', '[BOLD] 99.8', '[BOLD] 92.3']] | Overall, our proposed CLMN model achieves the best performance on five evaluation metrics. For simplicity, we mainly discuss the performance of R10@1 in the following part. In details, the advanced unsupervised CLIR models only achieve a fair performance without the collected CLIR-EC dataset. Through utilizing paired data, translation based CLIR models significantly outperform the SOTA unsupervised methods and achieves a decent performance on each metric. Moreover, directly modeling the cross-lingual information retrieval task can further introduce a significant performance improvement over the translation based models. Both the recent proposed POSIT-DRMM and ML-BERT models yield a large increase over the strong translation based models. We also observe that our proposed CLMN− matching model are better than the other matching models and even achieves comparable results with the fine-tuned multi-lingual BERT. With the enhancement of monolingual BERT, our proposed CLMN model outperforms the fine-tuned multi-lingual BERT model. |
Cross-Lingual Low-Resource Set-to-Description Retrieval for Global E-Commerce | 2005.08188 | Table 4: Ablation study of cross-lingual alignment, where mono refers to the variant without cross-lingual alignment learning, en2ch to calculating the matching score in the Chinese bilingual space, and ch2en to computing the relevance score in the English bilingual space. | ['[BOLD] Model', '[ITALIC] R2@1', '[ITALIC] R10@1', '[ITALIC] R10@2', '[ITALIC] R10@5', '[ITALIC] MRR'] | [['Initialize with word embedding', 'Initialize with word embedding', 'Initialize with word embedding', 'Initialize with word embedding', 'Initialize with word embedding', 'Initialize with word embedding'], ['CLMN−-mono', '95.8', '78.3', '91.6', '99.2', '87.3'], ['CLMN−-en2ch', '96.3', '81.5', '93.1', '99.5', '89.2'], ['CLMN−-ch2en', '96.7', '81.8', '93.4', '99.5', '89.5'], ['CLMN−', '97.1', '83.5', '94.7', '99.6', '90.5'], ['Initialize with BERT', 'Initialize with BERT', 'Initialize with BERT', 'Initialize with BERT', 'Initialize with BERT', 'Initialize with BERT'], ['CLMN-mono', '96.5', '80.7', '93.8', '99.8', '89.1'], ['CLMN-en2ch', '97.2', '84.6', '95.4', '99.7', '91.4'], ['CLMN-ch2en', '97.5', '85.5', '94.8', '99.8', '91.7'], ['[BOLD] CLMN', '[BOLD] 97.8', '[BOLD] 86.8', '[BOLD] 95.5', '[BOLD] 99.8', '[BOLD] 92.3']] | Through our analysis, we observe that using monolingual BERT is superior to training domain-specific word embeddings for our task. We attribute this observation to the relatively small size of the domain-specific monolingual data, which is far from ideal for training a decent model. As for cross-lingual alignment, it can effectively capture the correlations between the two languages and thus introduces a performance improvement. Besides, we also notice that the context-dependent mapping strategy is more effective in the English bilingual space. This is perhaps because the relatively short, unordered Chinese product attributes can be mapped to another language space more easily than a relatively long product description with lexical and syntactic structure.
BAKSA at SemEval-2020 Task 9: Bolstering CNN with Self-Attention for Sentiment Analysis of Code Mixed Text | 2007.10819 | Table 2: Performance of Ensemble system on Hinglish and Spanglish test datasets | ['[EMPTY]', '[BOLD] F1 [BOLD] o', '[BOLD] F1 [BOLD] +', '[BOLD] F1 [BOLD] -', '[BOLD] F1 [BOLD] Macro', '[BOLD] Macro [BOLD] Precision', '[BOLD] Macro [BOLD] Recall'] | [['[BOLD] Hinglish', '0.640', '0.762', '0.729', '0.707', '0.712', '0.705'], ['[BOLD] Spanglish', '0.135', '0.825', '0.375', '0.725', '0.763', '0.696']] | Combining the results of the CNN with those of the Self-Attention model was the primary motivation for using an ensemble of the two. The ensemble outperforms all our previous models, achieving a recall of 0.705 with an F1-score of 0.707 on the Hinglish test dataset and a recall of 0.696 with an F1-score of 0.725 on the Spanglish test dataset. The confusion matrices for the ensemble on both datasets are shown in Figures 6 and 7 (o: neutral, +: positive, -: negative). Our team was ranked 5th among 62 teams in Hinglish and 13th among 29 teams in Spanglish.
Some of Them Can be Guessed!Exploring the Effect of Linguistic Context in Predicting Quantifiers | 1806.00354 | Table 2: Accuracy of models and humans. Values in bold are the highest in the column. *Note that due to an imperfect balancing of data, chance level for humans (computed as majority class) is 0.124. | ['[EMPTY]', '1-Sent [ITALIC] val', '1-Sent [ITALIC] test', '3-Sent [ITALIC] val', '3-Sent [ITALIC] test'] | [['[ITALIC] chance', '0.111', '0.111', '0.111', '0.111'], ['BoW-conc', '0.270', '0.238', '0.224', '0.207'], ['BoW-sum', '0.308', '0.290', '0.267', '0.245'], ['fastText', '0.305', '0.271', '0.297', '0.245'], ['CNN', '0.310', '0.304', '[BOLD] 0.298', '0.257'], ['LSTM', '0.315', '0.310', '0.277', '0.253'], ['bi-LSTM', '0.341', '[BOLD] 0.337', '0.279', '0.265'], ['Att-LSTM', '0.319', '0.324', '0.287', '[BOLD] 0.291'], ['AttCon-LSTM', '[BOLD] 0.343', '0.319', '0.274', '0.288'], ['Humans', '0.221*', '——', '0.258*', '——']] | We have three main results. (1) Broader context helps humans to perform the task, but hurts model performance. This can be seen by comparing the 4-point increase of human accuracy from 1-Sent (0.22) to 3-Sent (0.26) with the generally worse performance of all models (e.g. AttCon-LSTM, from 0.34 to 0.27 in val). (2) All models are significantly better than humans in performing the task at the sentence level (1-Sent), whereas their performance is only slightly better than humans’ in 3-Sent. AttCon-LSTM, which is the best model in the former setting, achieves a significantly higher accuracy than humans’ (0.34 vs 0.22). By contrast, in 3-Sent, the performance of the best model is closer to that of humans (0.29 of Att-LSTM vs 0.26). It can be seen that LSTMs are overall the best-performing architectures, with CNN showing some potential in the handling of longer sequences (3-Sent). Compare ‘few’, ‘a few’, ‘more than half’, ‘some’, and ‘most’: while the first three are generally hard for humans but predictable by the models, the last two show the opposite pattern. Moreover, quantifiers that are guessed by humans to a larger extent in 3-Sent compared to 1-Sent, thus profiting from the broader linguistic context, do not experience the same boost with models. Human accuracy improves notably for ‘few’, ‘a few’, ‘many’, and ‘most’, while model performance on the same quantifiers does not. |
Some of Them Can be Guessed!Exploring the Effect of Linguistic Context in Predicting Quantifiers | 1806.00354 | Table 3: Responses by humans (top) and AttCon-LSTM (bottom) in 3-Sent (val). Values in bold are the highest in the row. | ['[ITALIC] none [ITALIC] few', '[BOLD] 19 5', '1 [BOLD] 9', '2', '0 6', '2 5', '0', '0 3', '0', '12 2'] | [['[ITALIC] a few', '0', '0', '7', '[BOLD] 17', '9', '0', '4', '0', '4'], ['[ITALIC] some', '0', '0', '3', '[BOLD] 14', '5', '0', '4', '0', '3'], ['[ITALIC] many', '0', '1', '0', '3', '[BOLD] 18', '0', '3', '0', '7'], ['[ITALIC] more than half', '0', '0', '0', '2', '2', '[BOLD] 11', '10', '4', '2'], ['[ITALIC] most', '0', '0', '0', '1', '7', '0', '[BOLD] 23', '4', '8'], ['[ITALIC] almost all', '0', '1', '0', '3', '2', '1', '[BOLD] 7', '2', '6'], ['[ITALIC] all', '0', '0', '2', '1', '5', '0', '4', '3', '[BOLD] 28'], ['[ITALIC] none', '[BOLD] 39', '15', '13', '10', '0', '20', '5', '3', '10'], ['[ITALIC] few', '3', '[BOLD] 48', '18', '7', '9', '20', '5', '1', '4'], ['[ITALIC] a few', '7', '13', '[BOLD] 31', '18', '5', '15', '12', '8', '6'], ['[ITALIC] some', '5', '18', '16', '17', '16', '[BOLD] 19', '9', '5', '10'], ['[ITALIC] many', '2', '18', '18', '15', '[BOLD] 20', '17', '10', '6', '9'], ['[ITALIC] more than half', '2', '7', '2', '3', '10', '[BOLD] 82', '2', '1', '6'], ['[ITALIC] most', '8', '14', '14', '12', '12', '[BOLD] 26', '15', '5', '9'], ['[ITALIC] almost all', '5', '9', '15', '10', '8', '[BOLD] 37', '15', '6', '10'], ['[ITALIC] all', '7', '12', '10', '15', '21', '13', '7', '4', '[BOLD] 26']] | To check whether humans and the models make similar errors, we look into the distribution of responses in 3-Sent (val), which is the most comparable setting with respect to accuracy. Human errors generally involve quantifiers that display a similar magnitude as the correct one. To illustrate, ‘some’ is chosen in place of ‘a few’, and ‘most’ in place of either ‘almost all’ or ‘more than half’. A similar pattern is observed in the model’s predictions, though we note a bias toward ‘more than half’. |
[ | 2006.03965 | Table 5: Coefficients of a beta regression generalized additive model with ratio of maximum intensity ([s] vs. vowel) as the dependent variable. | ['A. parametric coefficients', 'Estimate', 'Std. Error', 't-value', 'p-value'] | [['(Intercept) = [ITALIC] z11', '-0.0571', '0.0156', '-3.6505', '0.0003'], ['[ITALIC] z5', '-0.0404', '0.0123', '-3.2820', '0.0011'], ['[ITALIC] z14', '-0.0011', '0.0144', '-0.0753', '0.9400'], ['[ITALIC] z26', '-0.0097', '0.0115', '-0.8444', '0.3989'], ['[ITALIC] z29', '-0.0590', '0.0113', '-5.2131', '< 0.0001'], ['[ITALIC] z49', '0.0074', '0.0112', '0.6595', '0.5100'], ['[ITALIC] z74', '-0.0741', '0.0121', '-6.1071', '< 0.0001'], ['B. smooth terms', 'edf', 'Ref.df', 'F-value', 'p-value'], ['s(zValuePerc): [ITALIC] z5', '1.0002', '1.0000', '11.8417', '0.0006'], ['s(zValuePerc): [ITALIC] z11', '4.1696', '4.8546', '14.1190', '< 0.0001'], ['s(zValuePerc): [ITALIC] z14', '5.3322', '6.1117', '36.6899', '< 0.0001'], ['s(zValuePerc): [ITALIC] z26', '1.0003', '1.0002', '12.5952', '0.0004'], ['s(zValuePerc): [ITALIC] z29', '1.0002', '1.0000', '12.0036', '0.0006'], ['s(zValuePerc): [ITALIC] z49', '4.2002', '4.8650', '19.1225', '< 0.0001'], ['s(zValuePerc): [ITALIC] z74', '3.2768', '3.7863', '1.2326', '0.2479'], ['fs(zValuePerc,sameValues,m=1,k=5)', '110.6863', '143.0000', '6.1542', '< 0.0001'], ['fs(zValuePerc,trajectoryZ,m=1,k=5)', '558.2060', '728.0000', '56.9670', '< 0.0001']] | The data were fit to a beta regression generalized additive mixed model (in the mgcv package; mgcv) with the ratio as the dependent variable, the seven chosen variables as the parametric term, thin-plate smooths for each variable and random smooths (with first order of penalty; baayen16; soskuthy17) for (i) trajectory and for (ii) value of other variables in the latent space of the Generator network. All smooths (except for z74) are significantly different from 0 In other words, maximum intensity of [s] is increasingly attenuated compared to the intensity of the vowel as z approaches the opposite value from the one identified as predicting the presence of [s] until it completely disappears from the output. |
[ | 2006.03965 | Table 2: AIC values of five fitted models with corresponding degrees of freedom (df), fitted with Maximum Likelihood. AIC of Select is not listed because it was not fitted with ML; AIC of Select fitted with REML is, however, similar to Excluded (=1,008.46 vs. 1008.54). | ['[EMPTY]', 'df', 'AIC'] | [['Full', '108.94', '1018.38'], ['Modified', '88.06', '1031.03'], ['Excluded', '71.51', '1008.20'], ['Linear', '101.00', '1036.04'], ['Linear excluded', '78.00', '1007.06']] | Six models are thus fit in an exploratory method to identify variables in the latent space that predict the presence of [s] in generated outputs. The LinearExcluded model has the lowest AIC score. All six models, however, yield similar results. |
[ | 2006.03965 | Table 3: Coefficients of a Gamma regression model with duration of VOT in the training data as the dependent variable and condition (#TV vs. #sTV) and Place of articulation (with interaction) as independent variables. | ['[EMPTY]', 'Estimate', 'Std. Error', 't value', 'Pr(>|t|)'] | [['(Intercept)', '4.0426', '0.0050', '804.3778', '0.0000'], ['#TV vs. #sTV', '-0.8389', '0.0169', '-49.6883', '0.0000'], ['[p] vs. mean', '-0.1383', '0.0079', '-17.5160', '0.0000'], ['[t] vs. mean', '-0.0311', '0.0068', '-4.5706', '0.0000'], ['#sTV:[p]', '-0.1026', '0.0257', '-3.9979', '0.0001'], ['#sTV:[t]', '0.0694', '0.0214', '3.2432', '0.0012']] | Violin plots with box-plots of VOT durations (in ms) in the training data for two conditions: when the word-initial #TV sequence is not preceded by [s] (#TV) and when it is preceded by [s] (#sTV), across the three places of articulation [p], [t], [k]; (b) fitted values of VOT durations with 95% confidence intervals from the Gamma regression model (with log-link). To test the significance of the presence of [s] as a predictor of VOT duration, the data were fit to a Gamma regression model (with log-link) with two predictors: Structure (the presence vs. absence of [s]) and Place of articulation of the target stop (with three levels — [p], [t], [k]) and their interaction. Structure was treatment-coded (with absence of [s] as the reference level), while place of articulation of the stop was sum-coded (with [k] as reference). The interaction term is significant (AIC = 46873.25 vs. 46885.73), which is why it is kept in the final model. The presence of [s] significantly shortens VOT (β=−0.84, t=−49.69, p<0.0001). Fitted values for #TV are 56.97 ms [56.41, 57.53] and for #sTV 24.62 ms [23.86, 25.41]. The difference between the means is 32.35 ms. The ratio of VOT durations (estimated with the emmeans package; emmeans) between the two conditions (#TV / #sTV) equals 2.34 (SE=0.039). The significant interaction #sTV:[t] is not informative for our purposes.
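A hedged sketch of a comparable Gamma regression with log link in Python's statsmodels (the paper fits its model in R; the data frame and column names below are placeholders):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical stand-in for the VOT training data; column names are illustrative.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "vot_ms": rng.gamma(shape=8.0, scale=7.0, size=n),   # placeholder VOT durations (> 0)
    "structure": rng.choice(["TV", "sTV"], size=n),       # absence vs. presence of [s]
    "place": rng.choice(["p", "t", "k"], size=n),          # place of articulation
})

# Gamma regression with log link, treatment coding for structure (reference = TV)
# and sum coding for place, with their interaction, mirroring the row above.
model = smf.glm(
    "vot_ms ~ C(structure, Treatment(reference='TV')) * C(place, Sum)",
    data=df,
    family=sm.families.Gamma(link=sm.families.links.Log()),  # links.log() on older statsmodels
).fit()
print(model.summary())

# On the log link, exp(coef) is a multiplicative effect on VOT duration,
# which is how a #TV / #sTV duration ratio (reported as 2.34 above) is obtained.
```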
[ | 2006.03965 | Table 6: Coefficients of a generalized additive model with center of gravity as the dependent variable with the marginal value of z-variables (strong). The model was fit with correction for autocorrelation with ρ=0.7. | ['A. parametric coefficients', 'Estimate', 'Std. Error', 't-value', 'p-value'] | [['(Intercept) = [ITALIC] z11', '4751.7378', '84.7008', '56.1002', '< 0.0001'], ['[ITALIC] z5', '218.6576', '116.9490', '1.8697', '0.0618'], ['[ITALIC] z14', '-236.4061', '134.6301', '-1.7560', '0.0793'], ['[ITALIC] z26', '195.2722', '108.9736', '1.7919', '0.0734'], ['[ITALIC] z29', '103.6866', '107.8602', '0.9613', '0.3366'], ['[ITALIC] z49', '17.6464', '106.4109', '0.1658', '0.8683'], ['[ITALIC] z74', '108.7466', '113.8531', '0.9551', '0.3397'], ['B. smooth terms', 'edf', 'Ref.df', 'F-value', 'p-value'], ['s(zValuePerc) = [ITALIC] z11', '7.5348', '7.9933', '12.1238', '< 0.0001'], ['s(zValuePerc): [ITALIC] z5', '4.5539', '5.7457', '2.9261', '0.0081'], ['s(zValuePerc): [ITALIC] z14', '7.3604', '8.3734', '5.7228', '< 0.0001'], ['s(zValuePerc): [ITALIC] z26', '5.5900', '6.8683', '3.8049', '0.0005'], ['s(zValuePerc): [ITALIC] z29', '5.8536', '7.1301', '2.9198', '0.0045'], ['s(zValuePerc): [ITALIC] z49', '4.4714', '5.6434', '1.8590', '0.0803'], ['s(zValuePerc): [ITALIC] z74', '4.2765', '5.4186', '2.8162', '0.0136'], ['fs(zValuePerc,sameValues,m=1,k=10)', '143.4989', '288.0000', '1.0560', '< 0.0001'], ['fs(zValuePerc,trajectoryZ,m=1,k=7)', '168.3558', '1120.0000', '0.2032', '< 0.0001']] | The trajectory for center of gravity, for example, significantly differs between z11 and most of the other six variables. Overall kurtosis is significantly different when z11 is manipulated, compared to, for example, z26 and z29. Similarly, while z74 does not significantly attenuate amplitude of [s], it significantly differs in skew trajectory of [s]. The main function of z74 is thus likely in its control of spectral properties of frication of [s] (e.g. skew). |
[ | 2006.03965 | Table 7: Coefficients of a generalized additive model with kurtosis as the dependent variable with the marginal value of z-variables (strong). The model was fit with correction for autocorrelation with ρ=0.2. | ['A. parametric coefficients', 'Estimate', 'Std. Error', 't-value', 'p-value'] | [['(Intercept) = [ITALIC] z11', '1.0675', '0.1045', '10.2167', '< 0.0001'], ['[ITALIC] z5', '-0.4521', '0.1420', '-3.1842', '0.0015'], ['[ITALIC] z14', '0.3405', '0.1693', '2.0105', '0.0446'], ['[ITALIC] z26', '-0.4434', '0.1323', '-3.3517', '0.0008'], ['[ITALIC] z29', '-0.6225', '0.1339', '-4.6502', '< 0.0001'], ['[ITALIC] z49', '0.0431', '0.1332', '0.3234', '0.7464'], ['[ITALIC] z74', '-0.5129', '0.1383', '-3.7077', '0.0002'], ['B. smooth terms', 'edf', 'Ref.df', 'F-value', 'p-value'], ['s(zValuePerc) = [ITALIC] z11', '3.3590', '4.0455', '4.4859', '0.0013'], ['s(zValuePerc): [ITALIC] z5', '1.0001', '1.0001', '1.8978', '0.1686'], ['s(zValuePerc): [ITALIC] z14', '5.7086', '6.9165', '2.9066', '0.0054'], ['s(zValuePerc): [ITALIC] z26', '1.0000', '1.0000', '2.3717', '0.1238'], ['s(zValuePerc): [ITALIC] z29', '2.2995', '2.8348', '1.4855', '0.2361'], ['s(zValuePerc): [ITALIC] z49', '5.3523', '6.5335', '2.5656', '0.0106'], ['s(zValuePerc): [ITALIC] z74', '1.0000', '1.0000', '0.1912', '0.6620'], ['fs(zValuePerc,sameValues,m=1,k=10)', '69.5866', '288.0000', '0.4214', '< 0.0001'], ['fs(zValuePerc,trajectoryZ,m=1,k=7)', '174.7382', '1120.0000', '0.2422', '< 0.0001']] | The trajectory for center of gravity, for example, significantly differs between z11 and most of the other six variables. Overall kurtosis is significantly different when z11 is manipulated, compared to, for example, z26 and z29. Similarly, while z74 does not significantly attenuate amplitude of [s], it significantly differs in skew trajectory of [s]. The main function of z74 is thus likely in its control of spectral properties of frication of [s] (e.g. skew). |
[ | 2006.03965 | Table 8: Coefficients of a generalized additive model with skew as the dependent variable with the marginal value of z-variables (strong). The model was fit with correction for autocorrelation with ρ=0.7. | ['A. parametric coefficients', 'Estimate', 'Std. Error', 't-value', 'p-value'] | [['(Intercept) = [ITALIC] z11', '0.2726', '0.0841', '3.2434', '0.0012'], ['[ITALIC] z5', '-0.2686', '0.1197', '-2.2448', '0.0249'], ['[ITALIC] z14', '-0.0188', '0.1377', '-0.1368', '0.8912'], ['[ITALIC] z26', '-0.1965', '0.1115', '-1.7629', '0.0781'], ['[ITALIC] z29', '-0.2011', '0.1101', '-1.8270', '0.0679'], ['[ITALIC] z49', '-0.0403', '0.1063', '-0.3792', '0.7046'], ['[ITALIC] z74', '-0.2468', '0.1165', '-2.1193', '0.0342'], ['B. smooth terms', 'edf', 'Ref.df', 'F-value', 'p-value'], ['s(zValuePerc) = [ITALIC] z11', '4.5857', '5.4215', '1.3591', '0.3433'], ['s(zValuePerc): [ITALIC] z5', '4.2885', '5.5104', '2.0917', '0.0864'], ['s(zValuePerc): [ITALIC] z14', '6.4497', '7.7372', '3.3262', '0.0009'], ['s(zValuePerc): [ITALIC] z26', '6.4653', '7.7452', '2.2045', '0.0303'], ['s(zValuePerc): [ITALIC] z29', '3.8520', '4.9849', '2.0158', '0.0716'], ['s(zValuePerc): [ITALIC] z49', '1.0000', '1.0001', '0.0105', '0.9186'], ['s(zValuePerc): [ITALIC] z74', '4.0239', '5.1943', '1.9009', '0.0916'], ['fs(zValuePerc,sameValues,m=1,k=10)', '113.6068', '288.0000', '0.6943', '< 0.0001'], ['fs(zValuePerc,trajectoryZ,m=1,k=7)', '0.0001', '1120.0000', '0.0000', '0.9908']] | The trajectory for center of gravity, for example, significantly differs between z11 and most of the other six variables. Overall kurtosis is significantly different when z11 is manipulated, compared to, for example, z26 and z29. Similarly, while z74 does not significantly attenuate amplitude of [s], it significantly differs in skew trajectory of [s]. The main function of z74 is thus likely in its control of spectral properties of frication of [s] (e.g. skew). |
[ | 2006.03965 | Table 9: Coefficients of a generalized additive model with center of gravity as the dependent variable with the value of z-variables at the point before [s] ceases from the output (weak). | ['A. parametric coefficients', 'Estimate', 'Std. Error', 't-value', 'p-value'] | [['(Intercept) = [ITALIC] z11', '4396.2895', '88.8182', '49.4976', '< 0.0001'], ['[ITALIC] z5', '2.4059', '85.3386', '0.0282', '0.9775'], ['[ITALIC] z14', '109.3881', '101.5196', '1.0775', '0.2815'], ['[ITALIC] z26', '98.1943', '79.3503', '1.2375', '0.2162'], ['[ITALIC] z29', '-34.1064', '78.4139', '-0.4350', '0.6637'], ['[ITALIC] z49', '-42.5635', '77.1872', '-0.5514', '0.5815'], ['[ITALIC] z74', '19.5268', '85.7248', '0.2278', '0.8199'], ['B. smooth terms', 'edf', 'Ref.df', 'F-value', 'p-value'], ['s(zValuePerc) = [ITALIC] z11', '6.9763', '7.4135', '16.1815', '< 0.0001'], ['s(zValuePerc): [ITALIC] z5', '1.0002', '1.0003', '0.0007', '0.9793'], ['s(zValuePerc): [ITALIC] z14', '2.1327', '2.4962', '1.0245', '0.2793'], ['s(zValuePerc): [ITALIC] z26', '1.0072', '1.0110', '0.2178', '0.6468'], ['s(zValuePerc): [ITALIC] z29', '1.0003', '1.0004', '0.4048', '0.5249'], ['s(zValuePerc): [ITALIC] z49', '1.0001', '1.0002', '3.5280', '0.0606'], ['s(zValuePerc): [ITALIC] z74', '2.4946', '2.9500', '1.2008', '0.2568'], ['fs(zValuePerc,sameValues,m=1,k=10)', '198.6356', '288.0000', '3.3009', '< 0.0001'], ['fs(zValuePerc,trajectoryZ,m=1,k=7)', '413.0783', '1120.0000', '1.6191', '< 0.0001']] | The trajectory for center of gravity, for example, significantly differs between z11 and most of the other six variables. Overall kurtosis is significantly different when z11 is manipulated, compared to, for example, z26 and z29. Similarly, while z74 does not significantly attenuate amplitude of [s], it significantly differs in skew trajectory of [s]. The main function of z74 is thus likely in its control of spectral properties of frication of [s] (e.g. skew). |
[ | 2006.03965 | Table 10: Coefficients of a generalized additive model with kurtosis as the dependent variable with the value of z-variables at the point before [s] ceases from the output (weak). The model was fit with correction for autocorrelation with ρ=0.2. | ['A. parametric coefficients', 'Estimate', 'Std. Error', 't-value', 'p-value'] | [['(Intercept) = [ITALIC] z11', '0.6420', '0.1544', '4.1575', '< 0.0001'], ['[ITALIC] z5', '-0.0037', '0.1463', '-0.0256', '0.9796'], ['[ITALIC] z14', '-0.4010', '0.1685', '-2.3803', '0.0174'], ['[ITALIC] z26', '0.0230', '0.1368', '0.1678', '0.8668'], ['[ITALIC] z29', '0.0909', '0.1344', '0.6762', '0.4991'], ['[ITALIC] z49', '0.0870', '0.1323', '0.6576', '0.5109'], ['[ITALIC] z74', '0.1577', '0.1429', '1.1032', '0.2702'], ['B. smooth terms', 'edf', 'Ref.df', 'F-value', 'p-value'], ['s(zValuePerc) = [ITALIC] z11', '2.6481', '2.9134', '1.5639', '0.2107'], ['s(zValuePerc): [ITALIC] z5', '1.0000', '1.0000', '3.2961', '0.0697'], ['s(zValuePerc): [ITALIC] z14', '1.0000', '1.0001', '0.5569', '0.4556'], ['s(zValuePerc): [ITALIC] z26', '1.9006', '2.3489', '1.0773', '0.3078'], ['s(zValuePerc): [ITALIC] z29', '1.0000', '1.0000', '0.0284', '0.8661'], ['s(zValuePerc): [ITALIC] z49', '1.0000', '1.0000', '0.0002', '0.9887'], ['s(zValuePerc): [ITALIC] z74', '1.3675', '1.6177', '0.3165', '0.5648'], ['fs(zValuePerc,sameValues,m=1,k=10)', '181.3987', '288.0000', '2.3885', '< 0.0001'], ['fs(zValuePerc,trajectoryZ,m=1,k=7)', '128.9479', '1120.0000', '0.1673', '< 0.0001']] | The trajectory for center of gravity, for example, significantly differs between z11 and most of the other six variables. Overall kurtosis is significantly different when z11 is manipulated, compared to, for example, z26 and z29. Similarly, while z74 does not significantly attenuate amplitude of [s], it significantly differs in skew trajectory of [s]. The main function of z74 is thus likely in its control of spectral properties of frication of [s] (e.g. skew). |
[ | 2006.03965 | Table 11: Coefficients of a generalized additive model with skew as the dependent variable with the value of z-variables at the point before [s] ceases from the output (weak). The model was fit with correction for autocorrelation with ρ=0.3. | ['A. parametric coefficients', 'Estimate', 'Std. Error', 't-value', 'p-value'] | [['(Intercept) = [ITALIC] z11', '0.2432', '0.0734', '3.3145', '0.0009'], ['[ITALIC] z5', '-0.0384', '0.0758', '-0.5067', '0.6125'], ['[ITALIC] z14', '-0.0906', '0.0873', '-1.0373', '0.2998'], ['[ITALIC] z26', '-0.1433', '0.0705', '-2.0325', '0.0423'], ['[ITALIC] z29', '-0.0392', '0.0698', '-0.5613', '0.5747'], ['[ITALIC] z49', '-0.0191', '0.0687', '-0.2777', '0.7813'], ['[ITALIC] z74', '-0.0151', '0.0740', '-0.2043', '0.8381'], ['B. smooth terms', 'edf', 'Ref.df', 'F-value', 'p-value'], ['s(zValuePerc) = [ITALIC] z11', '5.2698', '5.9125', '3.3712', '0.0037'], ['s(zValuePerc): [ITALIC] z5', '1.0000', '1.0000', '0.5871', '0.4437'], ['s(zValuePerc): [ITALIC] z14', '1.0000', '1.0000', '1.7508', '0.1860'], ['s(zValuePerc): [ITALIC] z26', '1.0000', '1.0000', '0.1276', '0.7210'], ['s(zValuePerc): [ITALIC] z29', '1.5340', '1.8995', '0.3881', '0.6718'], ['s(zValuePerc): [ITALIC] z49', '1.0000', '1.0000', '0.5952', '0.4406'], ['s(zValuePerc): [ITALIC] z74', '2.1616', '2.7315', '0.7639', '0.3898'], ['fs(zValuePerc,sameValues,m=1,k=10)', '170.1493', '288.0000', '2.2288', '< 0.0001'], ['fs(zValuePerc,trajectoryZ,m=1,k=7)', '47.7523', '1120.0000', '0.0661', '0.0001']] | The trajectory for center of gravity, for example, significantly differs between z11 and most of the other six variables. Overall kurtosis is significantly different when z11 is manipulated, compared to, for example, z26 and z29. Similarly, while z74 does not significantly attenuate amplitude of [s], it significantly differs in skew trajectory of [s]. The main function of z74 is thus likely in its control of spectral properties of frication of [s] (e.g. skew). |
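The dependent variables in the rows above (center of gravity, kurtosis, skew) are spectral moments of the [s] frication noise. A minimal sketch of how such moments are commonly computed from a power spectrum, treating normalized power as a distribution over frequency (the paper's exact extraction settings may differ):

```python
import numpy as np
from scipy.signal import welch

def spectral_moments(x, fs):
    """Center of gravity, spread, skew and excess kurtosis of a power spectrum,
    treating normalized power as a probability distribution over frequency."""
    freqs, power = welch(x, fs=fs, nperseg=min(len(x), 1024))
    p = power / power.sum()
    cog = np.sum(freqs * p)                            # 1st moment: center of gravity
    sd = np.sqrt(np.sum((freqs - cog) ** 2 * p))       # 2nd moment: spread
    skew = np.sum(((freqs - cog) / sd) ** 3 * p)       # 3rd standardized moment
    kurt = np.sum(((freqs - cog) / sd) ** 4 * p) - 3.0 # excess kurtosis
    return cog, sd, skew, kurt

# Example on synthetic fricative-like noise (hypothetical, not the GAN output).
rng = np.random.default_rng(2)
fs = 16000
noise = rng.standard_normal(fs // 2)
print(spectral_moments(noise, fs))
```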
1 Introduction | 1612.07843 | Table 1: Test set performance of the ML models for 20-class document classification. | ['ML Model', 'Test Accuracy (%)'] | [['BoW/SVM ( [ITALIC] V=70631 words)', '80.10'], ['CNN1 ( [ITALIC] H=1, [ITALIC] F=600)', '79.79'], ['CNN2 ( [ITALIC] H=2, [ITALIC] F=800)', '[BOLD] 80.19'], ['CNN3 ( [ITALIC] H=3, [ITALIC] F=600)', '79.75']] | Herein CNN1, CNN2, CNN3 respectively denote neural networks with convolutional filter size H equal to 1, 2 and 3 (i.e. covering 1, 2 or 3 consecutive words in the document). One can see that the linear SVM performs on par with the neural networks, i.e. the non-linear structure of the CNN models does not yield a considerable advantage toward classification accuracy. This can be explained by the fact that for most topic categorization tasks, the different categories can be separated linearly in the very high-dimensional bag-of-words or bag-of-N-grams space thanks to sufficiently disjoint sets of features. |
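A minimal bag-of-words plus linear-SVM baseline for 20-class document classification can be assembled with scikit-learn; this sketch uses the standard 20 Newsgroups loader and illustrative hyperparameters, not the paper's exact preprocessing (e.g. its 70,631-word vocabulary):

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Bag-of-words features feeding a linear SVM, one class per newsgroup.
train = fetch_20newsgroups(subset="train")
test = fetch_20newsgroups(subset="test")

clf = make_pipeline(CountVectorizer(), LinearSVC(C=1.0))
clf.fit(train.data, train.target)
pred = clf.predict(test.data)
print(f"test accuracy: {accuracy_score(test.target, pred):.4f}")
```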
Personalization in Goal-oriented Dialog | 1706.07503 | Table 6: Personalization task 2 (Updating API calls). The model correctly learns to focus on the original API call and the changes demanded by the user. It does not require multiple hops to solve this task. | ['[BOLD] Time', '[BOLD] Locutor', '[BOLD] Dialog History', '[BOLD] Hop #1', '[BOLD] Hop #2', '[BOLD] Hop #3'] | [['1', '[EMPTY]', 'female middle-aged', '0.004', '0', '0'], ['2', 'User', 'good morning', '0.008', '0', '0'], ['3', 'Bot', 'hello maam how can i help you', '0', '0', '0'], ['4', 'User', 'may i have a table in bombay with spanish food for four in a expensive price range', '0.001', '0', '0.999'], ['5', 'Bot', 'give me a second for processing the reservation', '0.001', '0', '0'], ['6', 'User', '', '0.082', '0', '0'], ['7', 'Bot', 'sure maam i’m finding some options for you', '0', '0', '0'], ['8', 'User', '', '0.05', '0', '0'], ['9', 'Bot', 'api_call spanish bombay four expensive', '0.214', '0.002', '0'], ['10', 'User', 'actually i would prefer with french food', '0.635', '0.997', '0'], ['11', 'Bot', 'great is there any other thing to modify', '0', '0', '0'], ['12', 'User', 'no', '0.001', '0', '0'], ['13', 'Bot', 'sure maam i’m finding some options for you', '0', '0', '0'], ['[BOLD] User input', '[BOLD] User input', '', '', '', ''], ['[BOLD] Correct answer', '[BOLD] Correct answer', 'api_call french bombay four expensive', 'api_call french bombay four expensive', 'api_call french bombay four expensive', 'api_call french bombay four expensive'], ['[BOLD] Predicted answer', '[BOLD] Predicted answer', 'api_call french bombay four expensive', 'api_call french bombay four expensive', 'api_call french bombay four expensive', 'api_call french bombay four expensive']] | They completed PT1 and PT2 (issuing and updating API calls) with a very high degree of accuracy. This indicates that the model is able to implicitly track dialog state and personalize the bot’s utterance based on the user’s profile. |
Personalization in Goal-oriented Dialog | 1706.07503 | Table 5: Personalization task 1 (Issuing API calls). The model solves the task successfully by directing its attention towards the memories containing the values of the fields of the API call. The signal grows stronger on the second hop and switches almost exclusively to the price range in the third. This indicates that the model does not need multiple hops for successfully completing PT1. (Zero attention weight signifies a very small value.) | ['[BOLD] Time', '[BOLD] Locutor', '[BOLD] Dialog History', '[BOLD] Hop #1', '[BOLD] Hop #2', '[BOLD] Hop #3'] | [['1', '[EMPTY]', 'female elderly', '0.006', '0', '0'], ['2', 'User', 'hi', '0.012', '0', '0'], ['3', 'Bot', 'good day madam how could i assist you today', '0', '0', '0'], ['4', 'User', 'may i have a table', '0.006', '0.248', '0'], ['5', 'Bot', 'thank you madam i shall start the reservation now', '0', '0', '0'], ['6', 'User', '', '0.064', '0', '0'], ['7', 'Bot', 'could you tell me your preference on the type of cuisine', '0.001', '0', '0'], ['8', 'User', 'i love french food', '0.307', '0.103', '0'], ['9', 'Bot', 'could you tell me where the restaurant should be located', '0', '0', '0'], ['10', 'User', 'madrid please', '0.265', '0.282', '0'], ['11', 'Bot', 'would you mind telling me how many guests shall be at your table', '0', '0', '0'], ['12', 'User', 'for four please', '0.225', '0.076', '0'], ['13', 'Bot', 'would you mind telling me your price range', '0', '0', '0'], ['14', 'User', 'in a cheap price range please', '0.11', '0.29', '0.999'], ['15', 'Bot', 'thank you madam i shall provide you with options shortly', '0', '0', '0'], ['[BOLD] User input', '[BOLD] User input', '', '', '', ''], ['[BOLD] Correct answer', '[BOLD] Correct answer', 'api_call french madrid four cheap', 'api_call french madrid four cheap', 'api_call french madrid four cheap', 'api_call french madrid four cheap'], ['[BOLD] Predicted answer', '[BOLD] Predicted answer', 'api_call french madrid four cheap', 'api_call french madrid four cheap', 'api_call french madrid four cheap', 'api_call french madrid four cheap']] | They completed PT1 and PT2 (issuing and updating API calls) with a very high degree of accuracy. This indicates that the model is able to implicitly track dialog state and personalize the bot’s utterance based on the user’s profile. All models were trained on the full dataset. |
Personalization in Goal-oriented Dialog | 1706.07503 | Table 8: Personalization task 3 (Displaying options). The model should ideally be focusing on factors that are used for implicit ranking, such as the user’s profile and the ratings, types and specialities of the various restaurants in the KB facts. It should also pay attention to the restaurants that have already been suggested to the user. However, it attends primarily to the locations, indicating that it is insufficient at reasoning over the KB. We have only shown important utterances in the table. | ['[BOLD] Time', '[BOLD] Locutor', '[BOLD] Dialog History', '[BOLD] Hop #1', '[BOLD] Hop #2', '[BOLD] Hop #3'] | [['1', '[EMPTY]', 'female young non-veg pizza', '0.0001', '0', '0'], ['20', 'User', 'resto_rome_moderate_italian_8stars_1 R_rating 8', '0.0001', '0', '0'], ['21', 'User', 'resto_rome_moderate_italian_8stars_1 R_type veg', '0', '0', '0'], ['22', 'User', 'resto_rome_moderate_italian_8stars_1 R_speciality pizza', '0', '0', '0'], ['29', 'User', 'resto_rome_moderate_italian_3stars_1 R_location rome', '0.879', '0.999', '0'], ['33', 'User', 'resto_rome_moderate_italian_3stars_1 R_type veg', '0.0001', '0', '0'], ['44', 'User', 'resto_rome_moderate_italian_2stars_2 R_rating 2', '0', '0', '0'], ['45', 'User', 'resto_rome_moderate_italian_2stars_2 R_type non-veg', '0.0001', '0', '0'], ['46', 'User', 'resto_rome_moderate_italian_2stars_2 R_speciality pasta', '0', '0', '0'], ['74', 'User', 'hi', '0', '0', '0'], ['75', 'Bot', 'hey girl how is it going', '0', '0', '0'], ['76', 'User', 'may i have a table in rome with italian food in a moderate price range', '0', '0', '0'], ['77', 'Bot', 'be right back with your reservation', '0', '0', '0.999'], ['83', 'Bot', 'how about this one: resto_rome_moderate_italian_8stars_2', '0', '0', '0'], ['84', 'User', 'no i don’t like that', '0', '0', '0'], ['85', 'Bot', 'sure finding something else', '0', '0', '0'], ['86', 'User', '', '0.0003', '0', '0'], ['87', 'Bot', 'how about this one: resto_rome_moderate_italian_3stars_2', '0', '0', '0'], ['88', 'User', 'no i don’t like that', '0', '0', '0'], ['89', 'Bot', 'sure finding something else', '0', '0', '0'], ['[BOLD] User Input', '[BOLD] User Input', '', '', '', ''], ['[BOLD] Correct Answer', '[BOLD] Correct Answer', 'how about this one: resto_rome_moderate_italian_8stars_1', 'how about this one: resto_rome_moderate_italian_8stars_1', 'how about this one: resto_rome_moderate_italian_8stars_1', 'how about this one: resto_rome_moderate_italian_8stars_1'], ['[BOLD] Predicted Answer', '[BOLD] Predicted Answer', 'what food are you looking for', 'what food are you looking for', 'what food are you looking for', 'what food are you looking for']] | Memory Networks with the split memory feature significantly outperformed the standard architecture for PT3, which led to better performance on the full dialog task PT5.
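The hop-wise attention weights reported in these rows come from a multi-hop memory network; the following toy sketch shows the read mechanism (softmax attention over memory embeddings, repeated for three hops) with random placeholder embeddings rather than the trained model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_hops(query, memories, n_hops=3):
    """Toy end-to-end memory-network read: at each hop, attend over memory
    embeddings with a softmax over dot products, then add the weighted memory
    content back into the controller state. Real models learn separate
    input/output embeddings per hop; here everything is a random placeholder."""
    u = query
    attention_per_hop = []
    for _ in range(n_hops):
        scores = memories @ u              # one match score per memory line
        p = softmax(scores)                # attention weights, as in the tables above
        attention_per_hop.append(p)
        u = u + memories.T @ p             # read out and update the query
    return u, attention_per_hop

rng = np.random.default_rng(0)
d, n_lines = 32, 15                        # hypothetical embedding size / dialog length
memories = rng.standard_normal((n_lines, d))   # one row per dialog-history line
query = rng.standard_normal(d)                 # encoding of the current user input
_, attn = memory_hops(query, memories)
for hop, p in enumerate(attn, start=1):
    print(f"hop {hop}: most-attended line = {p.argmax() + 1}, weight = {p.max():.3f}")
```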