[Dataset column stats: "text" (string, lengths 0 to 1.96k)]
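Each row below holds three double-quoted fields: the review sentence, a per-token label list ('pro' / 'con' / 'non') whose length appears to match the sentence's token count, and the annotation category ("paper quality"). The snippet below is a minimal parsing sketch only, assuming the rows are stored exactly as they appear here (space-separated, double-quoted fields, with CSV-style doubled quotes inside the text where needed); the function name parse_rows and the sample string are hypothetical, not part of the dataset's own tooling.

import ast
import csv
from io import StringIO

def parse_rows(lines):
    # Hypothetical helper: each row is three space-separated, double-quoted
    # fields in the order text / per-token labels / category.
    reader = csv.reader(lines, delimiter=" ", quotechar='"')
    for row in reader:
        if len(row) != 3:  # skip header/stat residue lines
            continue
        text, labels, category = row
        yield {
            "text": text,
            "labels": ast.literal_eval(labels),  # e.g. ['con', 'con', 'non']
            "category": category,
        }

sample = '"Is it just smoothing ?" "[\'con\', \'con\', \'con\', \'con\', \'non\']" "paper quality"\n'
for record in parse_rows(StringIO(sample)):
    print(len(record["text"].split()), len(record["labels"]))  # 5 5: labels align with tokens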
"First, if my understanding of the paper is correct, the experiments show that (a) the Bayes-optimal classifier can be non-robust in real-world settings, and (b) even when the Bayes-optimal classifier is robust, NNs can learn a non-robust decision boundary." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"It is unclear on what basis one can say that real-world datasets are more like the symmetric case or the asymmetric case" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"CNN vs Linear SVM: I am confused about why we would expect a CNN to be able to learn the Bayes-optimal decision boundary but not the Linear SVM" "['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"The paper justifies the adversarial vulnerability of the Linear SVM by arguing that the Bayes-optimal classifier is not in the Linear SVM hypothesis class, which makes sense" "['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro']" "paper quality"
"For CNNs, however, it is unclear if the Bayes-optimal classifier lies in the hypothesis class (there are ""universal approximation"" arguments but these usually require arbitrarily wide networks and are non-constructive)couldn't it be that the CNNs used here is in the same boat as the Linear SVM (i.e. the Bayes-optimal decision boundary is not expressible by the CNN?)" "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"Experimental setup: - One somewhat concerning (but perhaps unavoidable) thing about the experimental setup is that all the considered datasets are not perfectly linearly separable , i.e. the Bayes-optimal classifier has non-zero test error in expectation, and moreover the data variance is full-rank in the embedded space." "['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"This is in stark contrast to real datasets, where there seem to be many different ways to perfectly separate say, dogs from cats, and the variance of the data seems to be very heavily concentrated in a small subset of directions" "['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"A suggestion rather than a concern and not impacting my current score: but it would be very interesting to see what happens for robustly trained classifiers on the symmetric and asymmetric datasets." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"The authors recognize that since the dataset is synthetically generated it is not necessarily predictive of how methods would perform with real-world data, but still it can serve a useful and complementary role similar to the one CLEVR has served in image understanding" "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro']" "paper quality"
"As this direction (of increased resolution to make the problem less artificial) is likely to be important, a brief discussion of this finding from the main paper text would be appropriate - p3 resiliance -> resilience - p4 objects is moved -> object is moved - p6 actions itself -> actions themselves; builds upon -> build upon - p7 looses all -> loses all; suited our -> suited to our; render's camera parameters -> render camera parameters; to solve it -> to solve the problem - p8 (Xiong, b;a) and (Xiong, b) -> these references are missing the year; models needs to -> models need to - p9 phenomenon -> phenomena; the the videos -> the videos; these observation -> these observations; of next -> of the next; in real world -> in the real world" "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"Experimental results validate the theoretical analysis and demonstrate the effectiveness of A*MCTS over benchmark MCTS algorithms with value and policy networks" "['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro']" "paper quality"
"And it combines A* search with MCTS to improve the performance over the traditional MCTS approaches based on UCT or PUCT tree policies" "['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro']" "paper quality"
"For example, what kind of additional benefit will it bring when integrating the priority queue into the MCTS algorithms" "['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"How could it improve over the traditional tree policy (e.g., UCT) for the selection step in MCTS" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"For example, in line 8 of Algorithm 2, why only the top 3 child nodes are added to the queue" "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"In particular, the probability in the second term of Theorem 1 is hard to parse" "['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"In fact, it is performed under the exact assumption where the theoretical analysis is done for the A*MCTS." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"More convincing experimental comparison should be done under real environment such as Atari games (by using the simulator as the environment model as shown in [Guo et al 2014] Deep learning for real-time atari game play using offline monte-carlo tree search planning)." "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"In practice, this is not true because even at the leaf node the value could still be estimated by an inaccurate value network (e.g., AlphaGo or AlphaZero)." "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"For the next versions of the manuscript, I would recommend using a spell/grammar checker." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"Notably, several classes of geometric bin packing problems admit polynomial-time approximation algorithms (for extended surveys about this topic, see e.g. Arindam Khans Ph.D." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"A. Khan has also found approximation algorithms for the 3D Knapsack problem with rotations." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"Also in the algorithm, what are l_i, w_i and h_i" "['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"Under such circumstances, it is quite impossible to reproduce experiments ." "['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non']" "paper quality"
"Attribution priors as you formalize it in section 2 (which seems like the core contribution of the paper) was introduced in 2017 pseudo-url where they use a mask on a saliency map to regularize the representation learned." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"Some of these should serve as baselines" "['con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"Most of the experiments revolve around existing attribution prior methods" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"Is it just smoothing ?" "['con', 'con', 'con', 'con', 'non']" "paper quality"
"The authors compare this approach on 4 environments with M3RL, which also solves (extensions of) principal-agent problems." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"As others have found in the past, a variational approximation to the partition function contribution to the loss function (i.e. the negative phase) results in the loss of the variational lower bound on log likelihood and the connection between the resulting approximation and the log likelihood becomes unclear." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"I note that I am aware of the theoretical representation differences between directed and undirected models, I am wondering how these differences actually matter in practical applications at scale" "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"I would like to see this curve extended until we start to see signs of overfitting" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"Here again, MNIST would be a useful dataset" "['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"It seems as though, in the application of AdVIL to the DBM, the authors are exploiting the structure of the model in how they define their sampling procedure" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"Also, I would like to see the test estimated NLL (via AIS) learning curves for VCD and AdVIL" "['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"This paper is aimed at tackling a general issue in NLP: Hard-negative training data (negative but very similar to positive) can easily confuse standard NLP model." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"This implementation showed improvement of performance on both tasks." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"While it is possible that I am missing something, I have tried going through the paper a few times and the contribution is not immediately obvious" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"There are multiple ways of increasing the expressiveness of the underlying distribution: moving from RNNs to GRU or LSTMs, increasing the hierarchical depth of the recurrence by stacking the layers, increasing the size of the hidden state, more layers before the output layer, etc. A convincing justification behind using a VAE for the task seems to be missing" "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"Theresults quantify how smooth Gaussian data should be to avoid the curse of dimensionality, and indicate that for kernel learning the relevant dimension of the data should be defined in terms of how the distance between nearest data points depends on sample numbers." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"Then, in Figure 2, human normalized scores are reported for varying amounts of experience for the variants of Rainbow, and compared against SiMPLe with 100k interactions, with the claim that the authors couldn't run the method for longer experiences." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"The paper chooses a single method class of model-based methods to do this comparison, namely dyna-style algorithms that use the model to generate new data." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"Can we get the same conclusions on a different domain where other model-based methods have been successful; e.g. continuous control tasks?" "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"A way to improve the paper would be to make it clear from the beginning that these results are about Dyna-style algorithms in the Atari domain ." "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non']" "paper quality"
"In fact, the major claim is that using a cascade of linear layers instead of a single layer can lead to better performance in deep neural networks." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"If this does not lead to the same improvement, there should be a value in the expansion" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"3) the small improvement of the expanded network can be given by the different initialization." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"In fact, each composing matrix is initialized randomly." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"
"The training should be done by using the small network" "['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con']" "paper quality"
"This paper proposed a dual graph representation method to learn the representation of nodes in a graph." "['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non']" "paper quality"