Modalities: Text
Formats: json
Languages: English
Size: < 1K
Libraries: Datasets, pandas
License:

Dataset structure (one record per review):
doc_id: string (fixed length of 9 characters)
text: sequence of strings (the review, segmented into units)
labels: sequence of strings (one label per text segment)
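Each record below pairs a doc_id with a text sequence and a parallel labels sequence; the label values occurring in this data are fact, evaluation, request, reference, quote, and non-arg. A minimal loading sketch using the libraries listed above (Datasets, pandas); the file name reviews.json and the train split are illustrative assumptions, not part of this card:

```python
# Minimal sketch: load the records and inspect the label distribution.
# Assumes the records are stored locally as "reviews.json" with the three
# fields doc_id, text, labels; the file name is an assumption.
import pandas as pd
from datasets import load_dataset

ds = load_dataset("json", data_files="reviews.json", split="train")
df = ds.to_pandas()

# Every text segment should carry exactly one label.
assert (df["text"].apply(len) == df["labels"].apply(len)).all()

# Count label occurrences across all reviews
# (fact, evaluation, request, reference, quote, non-arg).
print(df["labels"].explode().value_counts())
```

The assert makes the parallel-sequence invariant explicit: the i-th entry of labels annotates the i-th entry of text.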
BkCxP2Fez
[ "The paper presents a Depthwise Separable Graph Convolution network that aims at generalizing Depthwise convolutions, that exhibit a nice performance in image related tasks, to the graph domain. ", "In particular it targets Graph Convolutional Networks.", "In the abstract the authors mention that the Depthwise Separable Graph Convolution that they propose is the key to understand the connections between geometric convolution methods and traditional 2D ones. ", "I am afraid I have to disagree ", "as the proposed approach is not giving any better understanding of what needs to be done and why. ", "It is an efficient way to mimic what has worked so far for the planar domain ", "but I would not consider it as fundamental in \"closing the gap\".", "I feel that the text is often redundant and that it could be simplified a lot.", "For example the authors state in various parts that DSC does not work on non-Euclidean data. ", "Section 2 should be clearer and used to better explain related approaches to motivate the proposed one.", "In fact, the entire motivation, at least for me, never went beyond the simple fact that this happens to be a good way to improve performance. ", "The intuition given is not sufficient to substantiate some of the claims on generality and understanding of graph based DL.", "In 3.1, at point (2), the authors mention that DSC filters are learned from the data whereas GC uses a constant matrix. ", "This is not correct, ", "as also reported in equation 2. ", "The matrix U is learned from the data as well.", "Equation (4) shows that the proposed approach would weight Q different GC layers. ", "In practical terms this is a linear combination of these graph convolutional layers.", "What is not clear is the \\Delta_{ij} definition. ", "It is first introduced in 2.3 and described as the relative position of pixel i and pixel j on the image, but then used in the context of a graph in (4). ", "What is the coordinate system used by the authors in this case? ", "This is a very important point that should be made clearer.", "Why is the Related Work section at the end? ", "I would put it at the front.", "The experiments compare with the recent relevant literature. ", "I think that having less number of parameters is a good thing in this setting ", "as the data is scarce,", "however I would like to see a more in-depth comparison with respect to the number of features produced by the model itself. ", "For example GCN has a representation space (latent) much smaller than DSCG.", "No statistics over multiple runs are reported, ", "and given the high variance of results on these datasets I would like them to be reported.", "I think the separability of the filters in this case brings the right level of simplification to the learning task, ", "however as it also holds for the planar case it is not clear whether this is necessarily the best way forward.", "What are the underlying mathematical insights that lead towards selecting separable convolutions?", "Overall I found the paper interesting but not ground-breaking. ", "A nice application of the separable principle to GCN. ", "Results are also interesting ", "but should be further verified by multiple runs." ]
[ "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "request", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "request", "request", "evaluation", "request", "fact", "evaluation", "evaluation", "request", "evaluation", "fact", "request", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "request" ]
HyxmggJbM
[ "This paper proposes a new way of sampling data for updates in deep-Q networks. ", "The basic principle is to update Q values starting from the end of the episode in order to facility quick propagation of rewards back along the episode.", "The paper is interesting, ", "but it lacks the proper comparisons to previously published techniques.", "The results presented by this paper shows improvement over the baseline. ", "But the Atari results is still significantly worse than the current SOTA.", "In the non-tabular case, the authors have actually moved away from Q learning and defined an objective that is both on and off-policy. ", "Some (theoretical) analysis would be nice. ", "It is hard to judge whether the objective defined in the non-tabular defines a contraction operator at all in the tabular case.", "There has been a number of highly relevant papers. ", "Prioritized replay, for example, could have a very similar effect to proposed approach in the tabular case.", "In the non-tabular case, the Retrace algorithm, tree backup, Watkin's Q learning all bear significant resemblance to the proposed method. ", "Although the proposed algorithm is different from all 3, ", "the authors should still have compared to at least one of them as a baseline. ", "The Retrace algorithm specifically has also been shown to help significantly in the Atari case, ", "and it defines a convergent update rule." ]
[ "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "fact", "fact" ]
H1WORsdlG
[ "This paper addresses the important problem of understanding mathematically how GANs work. ", "The approach taken here is to look at GAN through the lense of the scattering transform.", "Unfortunately the manuscrit submitted is very poorly written.", "Introduction and flow of thoughts is really hard to follow.", "In method sections, the text jumps from one concept to the next without proper definitions.", "Sorry I stopped reading on page 3.", "I suggest to rewrite this work before sending it to review.", "Among many things: - For citations use citep and not citet to have () at the right places.", "- Why does it seems -> Why does it seem etc." ]
[ "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "request" ]
H1JzYwcxM
[ "=== SUMMARY === The paper considers a combination of Reinforcement Learning (RL) and Imitation Learning (IL), in the infinite horizon discounted MDP setting.", "The IL part is in the form of an oracle that returns a value function V^e, which is an approximation of the optimal value function. ", "The paper defines a new cost (or reward) function based on V^e, through shaping (Eq. 1). ", "It is known that shaping does not change the optimal policy.", "A key aspect of this paper is to consider a truncated horizon problem (say horizon k) with the reshaped cost function, instead of an infinite horizon MDP.", "For this truncated problem, one can write the (dis)advantage function as a k-step sum of reward plus the value returned by the oracle at the k-th step (cf. Eq. 5).", "Theorem 3.3 shows that the value of the optimal policy of the truncated MDP w.r.t. the original MDP is only O(gamma^k eps) worse than the optimal policy of the original problem (gamma is the discount factor and eps is the error between V^e and V*).", "This suggests two things: 1) Having an oracle that is accurate (small eps) leads to good performance. ", "If oracle is the same as the optimal value function, we do not need to plan more than a single step ahead.", "2) By planning for k steps ahead, one can decrease the error in the oracle geometrically fast. ", "In the limit of k —> inf, the error in the oracle does not matter.", "Based on this insight, the paper suggests an actor-critic-like algorithm called THOR (Truncated HORizon policy search) that minimizes the total cost over a truncated horizon with a modified cost function.", "Through a series of experiments on several benchmark problems (inverted pendulum, swimmer, etc.), the paper shows the effect of planning horizon k.", "=== EVALUATION & COMMENTS === I like the main idea of this paper. ", "The paper is also well-written. ", "But one of the main ideas of this paper (truncating the planning horizon and replacing it with approximation of the optimal value function) is not new and has been studied before, ", "but has not been properly cited and discussed.", "There are a few papers that discuss truncated planning. ", "Most closely is the following paper:", "Farahmand, Nikovski, Igarashi, and Konaka, “Truncated Approximate Dynamic Programming With Task-Dependent Terminal Value,” AAAI, 2016.", "The motivation of AAAI 2016 paper is different from this work. ", "The goal there is to speedup the computation of finite, but large, horizon problem with a truncated horizon planning. ", "The setting there is not the combination of RL and IL, but multi-task RL. ", "An approximation of optimal value function for each task is learned off-line and then used as the terminal cost. ", "The important point is that the learned function there plays the same role as the value provided by the oracle V^e in this work. ", "They both are used to shorten the planning horizon. ", "That paper theoretically shows the effect of various error terms, including terms related to the approximation in the planning process (this paper does not do that).", "Nonetheless, the resulting algorithms are quite different. ", "The result of this work is an actor-critic type of algorithm. ", "AAAI 2016 paper is an approximate dynamic programming type of algorithm.", "There are some other papers that have ideas similar to this work in relation to truncating the horizon. 
", "For example, the multi-step lookahead policies and the use of approximate value function as the terminal cost in the following paper:", "Bertsekas, “Dynamic Programming and Suboptimal Control: A Survey from ADP to MPC,” European Journal of Control, 2005.", "The use of learned value function to truncate the rollout trajectory in a classification-based approximate policy iteration method has been studied by Gabillon, Lazaric, Ghavamzadeh, and Scherrer, “Classification-based Policy Iteration with a Critic,” ICML, 2011.", "Or in the context of Monte Carlo Tree Search planning, the following paper is relevant:", "Silver et al., “Mastering the game of Go with deep neural networks and tree search,” Nature, 2016.", "Their “value network” has a similar role to V^e. ", "It provides an estimate of the states at the truncated horizon to shorten the planning depth.", "Note that even though these aforementioned papers are not about IL, ", "this paper’s stringent requirement of having access to V^e essentially make it similar to those papers.", "In short, a significant part of this work’s novelty has been explored before. ", "Even though not being completely novel is totally acceptable, ", "it is important that the paper better position itself compared to the prior art.", "Aside this main issue, there are some other comments: - Theorem 3.1 is not stated clearly and may suggest more than what is actually shown in the proof. ", "The problem is that it is not clear about the fact the choice of eps is not arbitrary.", "The proof works only for eps that is larger than 0.5. ", "With the construction of the proof, if eps is smaller than 0.5, there would not be any error, i.e., J(\\hat{pi}^*) = J(pi^*).", "The theorem basically states that if the error is very large (half of the range of value function), the agent does not not perform well. ", "Is this an interesting case?", "- In addition to the papers I mentioned earlier, there are some results suggesting that shorter horizons might be beneficial and/or sufficient under certain conditions. ", "A related work is a theorem in the PhD dissertation of Ng:", "Andrew Ng, Shaping and Policy Search in Reinforcement Learning, PhD Dissertation, 2003.", "(Theorem 5 in Appendix 3.B: Learning with a smaller horizon).", "It is shown that if the error between Phi (equivalent to V^e here) and V* is small, one may choose a discount factor gamma’ that is smaller than gamma of the original MDP, and still have some guarantees. ", "As the discount factor has an interpretation of the effective planning horizon, ", "this result is relevant. ", "The result, however, is not directly comparable to this work ", "as the planning horizon appears implicitly in the form of 1/(1-gamma’) instead of k,", "but I believe it is worth to mention and possibly compare.", "- The IL setting in this work is that an oracle provides V^e, which is the same as (Ross & Bagnell, 2014). ", "I believe this setting is relatively restrictive ", "as in many problems we only have access to (state, action) pairs, or sequence thereof, and not the associated value function. ", "For example, if a human is showing how a robot or a car should move, we do not easily have access to V^e (unless the reward function is known and we estimate the value with rollouts; which requires us having a long trajectory). 
", "This is not a deal breaker, ", "and I would not consider this as a weakness of the work, ", "but the paper should be more clear and upfront about this.", "- The use of differential operator nabla instead of gradient of a function (a vector field) in Equations (10), (14), (15) is non-standard.", "- Figures are difficult to read, ", "as the colors corresponding to confidence regions of different curves are all mixed up. ", "Maybe it is better to use standard error instead of standard deviation." ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "reference", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "reference", "fact", "fact", "reference", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "fact", "reference", "quote", "fact", "fact", "evaluation", "evaluation", "fact", "request", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "request", "fact", "evaluation", "fact", "request" ]
BktJHw_lM
[ "The paper discusses a setting in which an existing dataset/trained model is augmented/refined by adding additional datapoints.", "Issues of how to price the new data are discussed in a high level, abstract way, and arguments against retrieving the new data for free or encrypting it are presented.", "Overall, the paper is of an expository nature,", "discussing high-level ideas rather than actually implementing them,", "and does not experimentally or theoretically substantiate any of its claims.", "This makes the technical contribution rather shallow.", "Interesting questions do arise, such as how to assess the value of new data and how to price datapoints,", "but these questions are never addressed (neither theoretically nor empirically).", "Though main points are valid,", "the paper is also rife with informal statements and logical jumps,", "perhaps due to the expository/high-level approach taken in discussing these issues.", "Detailed comments:The (informal) information theoretic argument has a few holes.", "The claim is roughly that every datapoint (~1Mbyte image) contributes ~1M bits of changes in a model,", "which can be quite revealing.", "As a result, there is no benefit from encrypting the datapoint, as the mapping from inputs to changes is insecure (in an information-theoretic sense) in itself.", "This assumes that every step of stochastic gradient descent (one step per image) is done in the clear;", "this is not what one would consider secure in cryptography literature.", "A secure function evaluation (SFE) would encrypt the data and the computation in an end-to-end fashion;", "in particular, it would only reveal the final outcome of SGD over all images in the dataset without revealing any intermediate steps.", "Presuming that the new dataset is large (i.e., having N images), the \"information theoretic\" limit becomes ~N x 1Mbyte inputs for ~1M function outputs (the finally-trained model).", "In this sense, this argument that \"encryption is hopeless\" is somewhat brittle.", "Encryption-issues aside, the paper would have been much stronger if it spent more effort in formalizing or evaluating different methods for assessing the value of data.", "The authors approach this by treating the ML algorithm as a blackbox, and using influence functions (a la Bastani 2017) to assess the impact of different inputs on the finally trained model", "(again, this is proposed but not implemented/explored/evaluated in any way).", "This is a design choice, but it is not obvious.", "There is extensive literature in statistics and machine learning on the areas of experimental design and active learning.", "Both are active, successful research areas, and both can be provide tools to formally reason about the value of data/labels not yet seen;", "the paper summarily ignores this literature.", "Examples of imprecise/informal statements: \"The fairness in the pricing is highly questionable\"", "\"implicit contracts get difficult to verify\"", "\"The fairness in the pricing is dubious\"", "\"As machine learning models become more and more complicated, its (sic) capability can outweigh the privacy guarantees encryption gives us\"", "\"as an image classifier's model architecture changes, all the data would need to be collected and purchased again\"", "\"Interpretability solutions aim to alleviate the notoriety of reasonability of neural networks\"" ]
[ "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "request", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "quote", "quote", "quote", "quote", "quote", "quote" ]
BkEYMCPlG
[ "The authors present RDA, the Recurrent Discounted Attention unit, that improves upon RWA, the earlier introduced Recurrent Weighted Average unit, by adding a discount factor. ", "While the RWA was an interesting idea with bad results (far worse than the standard GRU or LSTM with standard attention except for hand-picked tasks), ", "the RDA brings it more on-par with the standard methods.", "On the positive side, the paper is clearly written and adding discount to RWA, while a small change, is original. ", "On the negative side, in almost all tasks the RDA is on par or worse than the standard GRU - ", "except for MultiCopy where it trains faster, but not to better results ", "and it looks like the difference is between few and very-few training steps anyway. ", "The most interesting result is language modeling on Hutter Prize Wikipedia, ", "where RDA very significantly improves upon RWA - ", "but again, only matches a standard GRU or LSTM. ", "So the results are not strongly convincing, ", "and the paper lacks any mention of newer work on attention. ", "This year strong improvements over state-of-the-art have been achieved using attention for translation (\"Attention is All You Need\") and image classification (e.g., Non-local Neural Networks, but also others in ImageNet competition). ", "To make the evaluation convincing enough for acceptance, RDA should be combined with those models and evaluated more competitively on multiple widely-studied tasks." ]
[ "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "request" ]
rkZAtAaxM
[ "This manuscript is fairly well-written, ", "and discusses how the batch normalization step helps to stabilize the scale of the gradients. ", "Intriguingly, the analysis suggests that using a shallower but wider resnet should provide competitive performance, which is supported by empirical evidence. ", "This work should help elucidate the structure in the learning, and help to support efforts to improve both learning algorithms and the architecture.", "Pros: Clean, simple analysis", "Empirical support suggests that theory captures reasonable effects behind learning", "Cons: The reasonableness of the assumptions used in the analysis needs a more careful analysis. ", "In particular, the assumption that all weights are independent is valid only at the first random iteration. ", "Therefore, the utility of this theory during initialization seems reasonable, ", "but during learning the theory seems quite tenuous. ", "I would encourage the authors to discuss their assumptions, and talk about how the math would change as a result of relaxing the assumptions.", "The empirical support does provide evidence that the theory is reasonable. ", "However, it is limited to a single dataset. ", "It would be nice to see that the effect happens more generally. ", "Second, it is clear that shallow+wide networks may be better than deep+narrow networks, ", "but it's not clear about how the width is evaluated and supported. ", "I would encourage the authors to do more extensive experiments and evaluate the architecture further." ]
[ "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "request", "fact", "evaluation", "evaluation", "request", "fact", "fact", "evaluation", "evaluation", "evaluation", "request" ]
BknXbsdxG
[ "In this paper, an number of very strong (even extraordinary) claims are made:", "* The abstract promises \"a framework to understand the unprecedented performance and robustness of deep neural networks using field theory.\"", "* Page 8 states that this is \"This is a first attempt to describe a neural network with a scalar quantum field theory.\"", "* Page 2 promises the use of the \"Goldstone theorem\" (no less) to understand phase transition in deep learning", "* It also claim that many \"seemingly different experimental results can be explained by the presence of these zero eigenvalue weights.\"", "* Three important results are stated as \"theorem\", with a statement like \"Deep feedforward networks learn by breaking symmetries\" proven in 5 lines, with no formal mathematics.", "These are extraordinary claims,", "but when reaching page 5, one sees that the basis of these claims seems to be the Lagrangian of a simple phi-4 theory,", "and Fig. 1 shows the standard behaviour of the so-called mexican hat in physics, the basis of the second-order transition.", "Given physicists have been working on neural network for more than three or four decades,", "I am surprise that this would enough to solve all these problems!", "I tried to understand these many results,", "but I am afraid I cannot really understand or see them.", "In many case, the explanation seems to be a vague analogy.", "These are not without interest,", "and maybe there is indeed something deep in this paper, but it is so far hidden by the hype.", "Still, I fail to see how the fact that phase transitions and negative direction in the landscape is a new phenomena, and how it explains all the stated phenomenology.", "Beside, there are quite a lot of things known about the landscape of these problems", "Maybe I am indeed missing something,", "but i clearly suspect the authors are simply overselling physics results.", "I have been wrong many times,", "but I beleive that the authors should probably precise their claim, and clarify the relation between their results and both the physics AND statistics litterature, or better, with the theoretical physics litterature applied to learning, which is ---astonishing-- absent in the paper.", "About the content: The main problem for me is that the whole construction using field theory seems to be used to advocate for the appearence of a phase transition in neural nets and in learning.", "This rises three comments: (1) So we really need to use quantum field theory for this?", "I do not see what should be quantum here", "(despite the very vague remarks page 12 \"WHY QUANTUM FIELD THEORY?\")", "(2) This is not new.", "Phase transitions in learning in neural nets are being discussed since aboutn 40 years, see for instance all the pionnering work of Sompolinky et al.", "one can see for instance the nice review in https://arxiv.org/abs/1710.09553", "In non aprticular order, phase transition and symmetry breaking are discussed in * \"Statistical mechanics of learning from examples\", Phys. Rev. A 45, 6056 – Published 1 April 1992", "* \"The statistical mechanics of learning a rule\", Rev. Mod. Phys. 
65, 499 – Published 1 April 1993", "* Phase transitions in the generalization behaviour of multilayer neural networks", "http://iopscience.iop.org/article/10.1088/0305-4470/28/16/010/meta", "* Note that some of these results are now rigourous,", "as shown in \"Phase Transitions, Optimal Errors and Optimality of Message-Passing in Generalized Linear Models\", https://arxiv.org/abs/1708.03395", "* The landscape of these problems has been studied quite extensivly,", "see for instance \"Identifying and attacking the saddle point problem in high-dimensional non-convex optimization\", https://arxiv.org/abs/1406.2572", "(3) There is nothing particular about deep neural net and neural nets about this.", "Negative direction in the Hessian in learning problems appears in matrix and tensor factorizaion, where phase transition are well understood (even rigorously, see for instance, https://arxiv.org/abs/1711.05424 ) or in problems such as unsupervised learning, as e.g.:", "https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.86.2174", "https://journals.aps.org/pre/pdf/10.1103/PhysRevE.50.1766", "Here are additional comments: PAGE 1: * \"It has been discovered that the training process ceases when it goes through an information bottleneck (ShwartzZiv & Tishby, 2017)\".", "While this paper indeed make a nice suggestion, I would not call it a discovery yet as this has never been shown on a large network.", "Beside, another paper in the conference is claiming exacly the opposite,", "see : \"On the Information Bottleneck Theory of Deep Learning\".", "This is still subject of discussion.", "* \"In statistical terms, a quantum theory describes errors from the mean of random variables. \"", "Last time I studied quantum theory, it was a theory that aim to explain the physical behaviours at the molecular, atomic and sub-atomic levels, usinge either on the wave function (Schrodinger) or the Matrix operatir formalism (Hesienbger) (or if you want, the path integral formalism of Feynman).", "It is certainly NOT a theory that describes errors from the mean of random variables.", "This is, i beleive, the field of \"statistics\" or \"probability\" for correlated variables.", "It is certianly used in physics, and heavily both in statistical physics and in quantum thoery,", "but this is not what the theory is about in the first place.", "Beside, there is little quantum in this paper,", "I think most of what the authors say apply to a statistical field theory", "( https://en.wikipedia.org/wiki/Statistical_field_theory )", "* \"In the limit of a continuous sample space, the quantum theory becomes a quantum field theory.\"", "Again, what is quantum about all this?", "This true for a field theory, as well for continous theories of, say, mechanics, fracture, etc...", "PAGE 2: * \"Using a scalar field theory we show that a phase transition must exist towards the end of training based on empirical results.\"", "So it is a scalar classical field theory after all.", "This sounds a little bit less impressive that a quantum field theory.", "Note that the fact that phase transition arises in learning, and in a statistical theory applied to any learning process, is an old topic, with a classical litterature.", "The authors might be interested by the review \"The statistical mechanics of learning a rule\", Rev. Mod. Phys. 
65, 499 – Published 1 April 1993", "PAGE 8: * \"In this work we solved one of the most puzzling mysteries of deep learning by showing that deep neural networks undergo spontaneous symmetry breaking.\"", "I am afraid I fail to see what is so mysterious about this nor what the authors showed about it.", "In any case, gradient descent break symmetry spontaneously in many systems, including phi-4, the Ising model or (in learning problems) the community detection problem", "(see eg https://journals.aps.org/prx/abstract/10.1103/PhysRevX.4.011047).", "I am afraid I miss what is new there...", "* \"This is a first attempt to describe a neural network with a scalar quantum field theory.\"", "Given there seems to be little quantum in the paper,", "I fail to see the relevance of the statement.", "Secondly, I beleive that field theory has been used, many times and in greater lenght, both for statistical and dynamical problems in neural nets, see eg.", "* http://iopscience.iop.org/article/10.1088/0305-4470/27/6/016/meta", "* https://arxiv.org/pdf/q-bio/0701042.pdf", "* http://www.lps.ens.fr/~derrida/PAPIERS/1987/gardner-zippelius-87.pdf", "* http://iopscience.iop.org/article/10.1088/0305-4470/21/1/030/meta", "* https://arxiv.org/pdf/cond-mat/9805073.pdf" ]
[ "evaluation", "fact", "quote", "fact", "quote", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "non-arg", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "evaluation", "non-arg", "evaluation", "evaluation", "non-arg", "evaluation", "non-arg", "fact", "fact", "reference", "reference", "reference", "reference", "reference", "fact", "reference", "fact", "reference", "evaluation", "fact", "reference", "reference", "quote", "evaluation", "fact", "reference", "evaluation", "quote", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "reference", "quote", "non-arg", "fact", "quote", "fact", "evaluation", "fact", "evaluation", "quote", "evaluation", "fact", "reference", "evaluation", "quote", "evaluation", "evaluation", "evaluation", "reference", "reference", "reference", "reference", "reference" ]
SksrEW9eG
[ "Summary:The paper proposes a new dialog model combining both retrieval-based and generation-based modules. ", "Answers are produced in three phases: a retrieval-based model extracts candidate answers; a generator model, conditioned on retrieved answers, produces an additional candidate; a reranker outputs the best among all candidates.", "The approach is interesting: ", "the proposed ensemble can improve on both the retrieval module and the generation module, ", "since it does not restrict modeling power (e.g. the generator is not forced to be consistent with the candidates). ", "I am not aware of similar approaches for this task. ", "One work that comes to mind regarding the blend of retrieval and generation is Memory Networks ", "(e.g. https://arxiv.org/pdf/1606.03126.pdf and references): ", "given a query, a set of relevant memories is extracted from a KB using an inverted index and the memories are fed into the generator. ", "However, the extracted items in the current work are candidate answers which are used both to feed the generator and to participate in reranking.", "The experimental section focuses on the task of building conversational systems. ", "The performance measures used are 1) a human evaluation score with three volunteers and 2) BLUE scores. ", "While these methods are not very satisfying, ", "effective evaluation of such systems is a known difficulty. ", "The results show that the ensemble outperforms the individual modules, indicating that: ", "the multi-seq2seq models have learned to use the new inputs as needed and that the ranker is correlated with the evaluation metrics.", "However, the results themselves do not look impressive to me: ", "the subjective evaluation is close to the \"borderline\" score; ", "in the examples provided, one is good, the other is borderline/bad, and the baseline always provides something very short. ", "Does the LSTM work particularly poor on this dataset? ", "Given that this is a novel dataset, I don't know what the state-of-the-art should be. ", "Could you provide more insight? ", "Have you considered adding a benchmark dataset (e.g. a QA dataset)?", "Specific questions:1. The paper motivates conditioning on the candidates in two ways. ", "First, that the candidates bring additional information which the decoder can use (e.g. read from the candidates locations, actions, etc.). ", "Second, that the probability of universal replies must decrease due to the additional condition. ", "I think the second argument depends on how the conditioning is performed: ", "if the candidates are simply appended to the input, the model can learn to ignore them.", "2. The copy mechanism is a nice touch, encouraging the decoder to use the provided queries. ", "Why not copy from the query too, e.g. with some answers reusing part of the query <\"Where are you going?\", \"I'm going to the park\">?", "3. How often does the model select the generated answer vs. the extracted answers? ", "In both examples provided the selected answer is the one merging the candidate answers.", "Minor issues:- Section 3.2: using and the state", "- Section 3.2: more than one replies", "- last sentence on page 3: what are the \"following principles\"?" ]
[ "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "reference", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "request", "evaluation", "request", "request", "fact", "fact", "fact", "fact", "fact", "evaluation", "request", "request", "fact", "fact", "fact", "request" ]
BytyNwclz
[ "This paper presents an analysis of the communication systems that arose when neural network based agents played simple referential games. ", "The set up is that a speaker and a listener engage in a game where both can see a set of possible referents (either represented symbolically in terms of features, or represented as simple images) and the speaker produces a message consisting of a sequence of numbers while the listener has to make the choice of which referent the speaker intends. ", "This is a set up that has been used in a large amount of previous work, ", "and the authors summarize some of this work. ", "The main novelty in this paper is the choice of models to be used by speaker and listener, ", "which are based on LSTMs and convolutional neural networks. ", "The results show that the agents generate effective communication systems, ", "and some analysis is given of the extent to which these communications systems develop compositional properties ", "– a question that is currently being explored in the literature on language creation.", "This is an interesting question, ", "and it is nice to see worker playing modern neural network models to his question and exploring the properties of the solutions of the phone. ", "However, there are also a number of issues with the work.", "1. One of the key question is the extent to which the constructed communication systems demonstrate compositionality. ", "The authors note that there is not a good quantitative measure of this. ", "However, this is been the topic of much research of the literature and language evolution. ", "This work has resulted in some measures that could be applied here, ", "see for example Carr et al. (2016): http://www.research.ed.ac.uk/portal/files/25091325/Carr_et_al_2016_Cognitive_Science.pdf", "2. In general the results occurred be more quantitative. ", "In section 3.3.2 it would be nice to see statistical tests used to evaluate the claims. ", "Minimally I think it is necessary to calculate a null distribution for the statistics that are reported.", "3. As noted above the main novelty of this work is the use of contemporary network models. ", "One of the advantages of this is that it makes it possible to work with more complex data stimuli, such as images. ", "However, unfortunately the image example that is used is still very artificial being based on a small set of synthetically generated images.", "Overall, I see this as an interesting piece of work that may be of interest to researchers exploring questions around language creation and language evolution, ", "but I think the results require more careful analysis and the novelty is relatively limited, at least in the way that the results are presented here." ]
[ "fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "reference", "request", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation" ]
Hyl2iJgGG
[ "This paper examines the very popular and useful ADAM optimization algorithm, and locates a mistake in its proof of convergence (for convex problems).", "Not only that, the authors also show a specific toy convex problem on which ADAM fails to converge.", "Once the problem was identified to be the decrease in v_t (and increase in learning rate), they modified the algorithm to solve that problem.", "They then show the modified algorithm does indeed converge and show some experimental results comparing it to ADAM.", "The paper is well written, interesting and very important given the popularity of ADAM.", "Remarks: - The fact that your algorithm cannot increase the learning rate seems like a possible problem in practice.", "A large gradient at the first steps due to bad initialization can slow the rest of training.", "The experimental part is limited,", "as you state \"preliminary\",", "which is a unfortunate for a work with possibly an important practical implication.", "Considering how easy it is to run experiments with standard networks using open-source software,", "this can easily improve the paper.", "That being said, I understand that the focus of this work is theoretical and well deserves to be accepted based on the theoretical work.", "- On page 14 the fourth inequality not is clear to me.", "- On page 6 you talk about an alternative algorithm using smoothed gradients which you do not mention anywhere else", "and this isn't that clear (more then one way to smooth).", "A simple pseudo-code in the appendix would be welcome.", "Minor remarks:- After the proof of theorem 1 you jump to the proof of theorem 6", "(which isn't in the paper)", "and then continue with theorem 2.", "It is a bit confusing.", "- Page 16 at the bottom v_t= ... sum beta^{t-1-i}g_i should be g_i^2", "- Page 19 second line, you switch between j&t and it is confusing.", "Better notation would help.", "- The cifarnet uses LRN layer that isn't used anymore." ]
[ "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "request", "evaluation", "evaluation", "fact", "evaluation", "request", "fact", "fact", "fact", "evaluation", "request", "evaluation", "request", "fact" ]
rynqOnBez
[ "My problem with this paper that all the theoretical contributions / the new approach refer to 2 arXiv papers, ", "what's then left is an application of that approach to learning form imperfect demonstrations.", "Quality ====== The approach seems sound ", "but the paper does not provide many details on the underlying approach. ", "The application to learning from (partially adversarial) demonstrations is a cool idea ", "but effectively is a very straightforward application based on the insight that the approach can handle truly off-policy samples. ", "The experiments are OK ", "but I would have liked a more thorough analysis.", "Clarity ===== The paper reads well, ", "but it is not really clear what the claimed contribution is.", "Originality ========= The application seems original.", "Significance ========== Having an RL approach that can benefit from truly off-policy samples is highly relevant.", "Pros and Cons ============ + good results", "+ interesting idea of using the algorithm for RLfD", "- weak experiments for an application paper", "- not clear what's new" ]
[ "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation" ]
BkE3cW5gG
[ "Summary: This paper presents a thorough examination of the effects of pruning on model performance. ", "Importantly, they compare the performance of \"large-sparse\" models (large models that underwent pruning in order to reduce memory footprint of model) and \"small-dense\" models, showing that \"large-sparse\" models typically perform better than the \"small-dense\" models of comparable size (in terms of number of non-zero parameters, and/or memory footprint). ", "They present results across a number of domains (computer vision, language modelling, and neural machine translation) and model types (CNNs, LSTMs). ", "They also propose a way of performing pruning with a pre-defined sparsity schedule, simplifying the pruning process in a way which works across domains. ", "They are able to show convincingly that pruning is an effective way of trading off accuracy for model size (more effective than simply reducing the size of model architecture), ", "although there does come a point where too much sparsity degrades the model performance considerably; ", "this suggests that pruning a medium size model to 80%-90% sparsity is likely better than pruning a larger model to >= 95% sparsity.", "Review: Quality: The quality of the work is high ", "--- the experiments are extensive and thorough. ", "I would have liked to see \"small-dense\" vs. \"large-sparse\" comparisons on Inception (only large-sparse results are reported).", "Clarity: The paper is clearly written, ", "though there is room for improvement. ", "For example, many of the results are presented in a redundant manner (in both tables and figures, where the table and figure are often not next to each other in the document). ", "Also, it is not clear in several cases exactly which training/heldout/test sets are used, and on which partition of the data the accuracies/BLEU scores/perplexities presented correspond to. ", "A small section (before \"Methods\") describing the datasets/features in detail would be helpful. ", "Also, it would have probably been nice to explain all of the tasks and datasets early on, and then present all the results at once (NIT: include the plots in paper, and move the tables to an appendix).", "Originality: Although the experiments are informative, ", "the work as a whole is not very original. ", "The method proposed of using a sparsity schedule to perform pruning is simple and effective, ", "but is a rather incremental contribution. ", "The primary contribution of this paper is its experiments, which for the most part compare known methods.", "Significance: The paper makes a nice contribution, ", "though it is not particularly significant or surprising. 
", "The primary observations are: (1) large-sparse is typically better than small-dense, for a fixed number of non-zero parameters and/or memory footprint.", "(2) There is a point at which increasing the sparsity percentage severely degrades the performance of the model, ", "which suggests that there is a \"sweet-spot\" when it comes to choosing the model architecture and sparsity percentage which give the best performance (for a fixed memory footprint).", "Result #1 is not very surprising, ", "given that Han et al (2016) were able to show significant compression without loss in accuracy; ", "thus, because one would expect a smaller dense model to perform worse than the large dense model, ", "it would also perform worse than the large sparse model.", "Result #2 had already been seen in Han et al (2016) (for example, in Figure 6).", "Pros: - Very thorough experiments across a number of domains", "Cons: - Methodological contributions are minor.", "- Results are not surprising, and are in line with previous papers." ]
[ "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation" ]
HypMNiy-G
[ "Training GAN in a hierarchical optimization schedule shows promising performance recently (e.g. Zhao et al., 2016). ", "However, these works utilize the prior knowledge of the data (e.g. image) ", "and it's hard to generalize it to other data types (e.g. text). ", "The paper aims to learn these hierarchies directly instead of designing by human. ", "However, several parts are missing and not well-explained. ", "Also, many claims in paper are not proved properly by theory results or empirical results. ", "(1) It is not clear to me how to train the proposed algorithm. ", "My understanding is train a simple ALI, then using the learned latent as the input and train the new layer. ", "Do the authors use a separate training ? or a joint training algorithms. ", "The authors should provide a more clear and rigorous objective function. ", "It would be even better to have a pseudo code. ", "(2) In abstract, the authors claim the theoretical results are provided. ", "I am not sure whether it is sec 3.2 ", "The claims is not clear and limited. ", "For example, what's the theory statement of [Johnsone 200; Baik 2005]. ", "What is the error measure used in the paper? ", "For different error, the matrix concentration bound might be different. ", "Also, the union bound discussed in sec 3.2 is also problematic. ", "Lats, for using simple standard GAN to learn mixture of Gaussian, the rigorous theory result doesn't seem easy (e.g. [1]) ", "The author should strive for this results if they want to claim any theory guarantee.", "(3) The experiments part is not complete. ", "The experiment settings are not described clearly. ", "Therefore, it is hard to justify whether the proposed algorithm is really useful based on Fig 3. ", "Also, the authors claims it is applicable to text data in Section 1, this part is missing in the experiment. ", "Also, the idea of \"local\" disentangled LV is not well justified to be useful.", "[1] On the limitations of first order approximation in GAN dynamics, ICLR 2018 under review" ]
[ "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "request", "request", "fact", "evaluation", "evaluation", "non-arg", "non-arg", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "reference" ]
BkQD60b-f
[ "The paper proposes the use of a GAN to learn the distribution of image classes from an existing classifier, ", "that is a nice and straightforward idea. ", "From the point of view of forensic analysis of a classifier, it supposes a more principled strategy than a brute force attack based on the classification of a database and some conditional density estimation of some intermediate image features. ", "Unfortunately, the experiments are inconclusive. ", "Quality: The key question of the proposed scheme is the role of the auxiliary dataset. ", "In the EMNIST experiment, the results for the “exact same” and “partly same” situations are good, ", "but it seems that for the “mutually exclusive” situation the generated samples look like letters, not numbers, ", "and raises the question on the interpolation ability of the generator. ", "In the FaceScrub experiment is even more difficult to interpret the results, ", "basically because we do not even know the full list of person identities. ", "It seems that generated images contain only parts of the auxiliary images related to the most discriminative features of the given classifier. ", "Does this imply that the GAN models a biased probability distribution of the image class? ", "What is the result when the auxiliary dataset comes from a different kind of images? ", "Due to the difficulty of evaluating GAN results, more experiments are needed to determine the quality and significance of this work.", "Clarity: The paper is well structured and written, ", "but Sections 1-4 could be significantly shorter to leave more space to additional and more conclusive experiments. ", "Some typos on Appendix A should be corrected.", "Originality: the paper is based on a very smart and interesting idea and a straightforward use of GANs. ", "Significance: If additional simulations confirm the author’s claims, this work can represent a significant contribution to the forensic analysis of discriminative classifiers." ]
[ "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "request", "request", "evaluation", "request", "request", "evaluation", "evaluation" ]
rJ74wm5xM
[ "The paper describes a neural network-based approach to active localization based upon RGB images. ", "The framework employs Bayesian filtering to maintain an estimate of the agent's pose using a convolutional network model for the measurement (perception) function. ", "A convolutional network models the policy that governs the action of the agent. ", "The architecture is trained in an end-to-end manner via reinforcement learning. ", "The architecture is evaluated in 2D and 3D simulated environments of varying complexity and compared favorably to traditional (structured) approaches to passive and active localization.", "As the paper correctly points out, there is large body of work on map-based localization, ", "but relatively little attention has been paid to decision theoretic formulations to localization, whereby the agent's actions are chosen in order to improve localization accuracy. ", "More recent work instead focuses on the higher level objective of navigation, whereby any effort act in an effort to improve localization are secondary to the navigation objective. ", "The idea of incorporating learned representations with a structured Bayesian filtering approach is interesting, ", "but it's utility could be better motivated. ", "What are the practical benefits to learning the measurement and policy model beyond (i) the temptation to apply neural networks to this problem and (ii) the ability to learn these in an end-to-end fashion? ", "That's not to say that there aren't benefits, but rather that they aren't clearly demonstrated here. ", "Further, the paper seems to assume (as noted below) that there is no measurement uncertainty and, with the exception of the 3D evaluations, no process noise.", "The evaluation demonstrates that the proposed method yields estimates that are more accurate according to the proposed metric than the baseline methods, with a significant reduction in computational cost. ", "However, the environments considered are rather small by today's standards ", "and the baseline methods almost 20 years old. ", "Further, the evaluation makes a number of simplifying assumptions, the largest being that the measurements are not subject to noise ", "(the only noise that is present is in the motion for the 3D experiments). ", "This assumption is clearly not valid in practice. ", "Further, it is not clear from the evaluation whether the resulting distribution that is maintained is consistent (e.g., are the estimates over-/under-confident?). ", "This has important implications if the system were to actually be used on a physical system. ", "Further, while the computational requirements at test time are significantly lower than the baselines, ", "the time required for training is likely very large. ", "While this is less of an issue in simulation, it is important for physical deployments. ", "Ideally, the paper would demonstrate performance when transferring a policy trained in simulation to a physical environment (e.g., using diversification, which has proven effective at simulation-to-real transfer).", "Comments/Questions:* The nature of the observation space is not clear.", "* Recent related work has focused on learning neural policies for navigation, and any localization-specific actions are secondary to the objective of reaching the goal. 
", "It would be interesting to discuss how one would balance the advantages of choosing actions that improve localization with those in the context of a higher-level task (or at least including a cost on actions as with the baseline method of Fox et al.).", "* The evaluation that assigns different textures to each wall is unrealistic.", "* It is not clear why the space over which the belief is maintained flips as the robot turns and shifts as it moves.", "* The 3D evaluation states that a 360 deg view is available. ", "What happens when the agent can only see in one (forward) direction?", "* AML includes a cost term in the objective. ", "Did the author(s) experiment with setting this cost to zero?", "* The 3D environments rely upon a particular belief size (70 x 70) being suitable for all environments. ", "What would happen if the test environment was larger than those encountered in training?", "* The comment that the PoseNet and VidLoc methods \"lack a strainghtforward method to utilize past map data to do localization in a new environment\" is unclear.", "* The environments that are considered are quite small compared to the domains currently considered for", "* Minor: It might be better to move Section 3 into Section 4 after introducing notation (to avoid redundancy).", "* The paper should be proofread for grammatical errors (e.g., \"bayesian\" --> \"Bayesian\", \"gaussian\" --> \"Gaussian\")" ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "request", "evaluation", "fact", "request", "evaluation", "evaluation", "fact", "request", "fact", "non-arg", "fact", "request", "evaluation", "evaluation", "request", "request" ]
S1EAO5qxM
[ "In the centre loss, the centre is learned. ", "Now it's calculated as the average of the last layer's features", "To enable training with SGD, the authors calculate the centre within a mini batch" ]
[ "fact", "fact", "fact" ]
SkDHZUXlG
[ "The authors train an RNN to perform deduced reckoning (ded reckoning) for spatial navigation, ", "and then study the responses of the model neurons in the RNN. ", "They find many properties reminiscent of neurons in the mammalian entorhinal cortex (EC): grid cells, border cells, etc. ", "When regularization of the network is not used during training, the trained RNNs no longer resemble the EC. ", "This suggests that those constraints (lower overall connectivity strengths, and lower metabolic costs) might play a role in the EC's navigation function. ", "The paper is overall quite interesting and the study is pretty thorough: ", "no major cons come to mind. ", "Some suggestions / criticisms are given below.", "1) The findings seem conceptually similar to the older sparse coding ideas from the visual cortex. ", "That connection might be worth discussing ", "because removing the regularizing (i.e., metabolic cost) constraint from your RNNS makes them learn representations that differ from the ones seen in EC. ", "The sparse coding models see something similar: ", "without sparsity constraints, the image representations do not resemble those seen in V1, ", "but with sparsity, the learned representations match V1 quite well. ", "That the same observation is made in such disparate brain areas (V1, EC) suggests that sparsity / efficiency might be quite universal constraints on the neural code.", "2) The finding that regularizing the RNN makes it more closely match the neural code is also foreshadowed somewhat by the 2015 Nature Neuro paper by Susillo et al. ", "That could be worthy of some (brief) discussion.", "Sussillo, D., Churchland, M. M., Kaufman, M. T., & Shenoy, K. V. (2015). A neural network that finds a naturalistic solution for the production of muscle activity. Nature neuroscience, 18(7), 1025-1033.", "3) Why the different initializations for the recurrent weights for the hexagonal vs other environments? ", "I'm guessing it's because the RNNs don't \"work\" in all environments with the same initialization (i.e., they either don't look like EC, or they don't obtain small errors in the navigation task). ", "That seems important to explain more thoroughly than is done in the current text.", "4) What happens with ongoing training? ", "Animals presumably continue to learn throughout their lives. ", "With on-going (continous) training, do the RNN neurons' spatial tuning remain stable, or do they continue to \"drift\" (so that border cells turn into grid cells turn into irregular cells, or some such)? ", "That result could make some predictions for experiment, ", "that would be testable with chronic methods (like Ca2+ imaging) that can record from the same neurons over multiple experimental sessions.", "5) It would be nice to more quantitatively map out the relation between speed tuning, direction tuning, and spatial tuning (illustrated in Fig. 3). ", "Specifically, I would quantify the cells' direction tuning using the circular variance methods that people use for studying retinal direction selective neurons. ", "And I would quantify speed tuning via something like the slope of the firing rate vs speed curves. ", "And quantify spatial tuning somehow (a natural method would be to use the sparsity measures sometimes applied to neural data to quantify how selective the spatial profile is to one or a few specific locations). ", "Then make scatter plots of these quantities against each other. 
", "Basically, I'd love to see the trends for how these types of tuning relate to each other over the whole populations: ", "those trends could then be tested against experimental data (possibly in a future study)." ]
[ "fact", "fact", "fact", "fact", "evaluation", "evaluation", "non-arg", "non-arg", "evaluation", "request", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "reference", "non-arg", "evaluation", "request", "non-arg", "evaluation", "non-arg", "evaluation", "evaluation", "request", "request", "request", "request", "request", "request", "evaluation" ]
BynVEQJGM
[ "This paper considers the problem of autonomous lane changing for self-driving cars in multi-lane multi-agent slot car setting. ", "The authors propose a new learning strategy called Q-masking which couples well a defined low level controller with a high level tactical decision making policy.", "The authors rightly say that one of the skills an autonomous car must have is the ability to change lanes, ", "however this task is not one of the most difficult for autonomous vehicles to achieve and this ability has already been implemented in real vehicles. ", "Real vehicles also decouple wayfinding with local vehicle control, similar to the strategy employed here. ", "To make a stronger case for this research being relevant to the real autonomous driving problem, the authors would need to compare their algorithm to a real algorithm and prove that it is more “data efficient.” ", "This is a difficult comparison ", "since the sensing strategies employed by real vehicles – LIDAR, computer vision, recorded, labeled real maps are vastly different from the slot car model proposed by the authors. ", "In term of impact, this is a theoretical paper looking at optimizing a sandbox problem where the results may be one day applicable to the real autonomous driving case.", "In this paper the authors investigate “the use and place” of deep reinforcement learning in solving the autonomous lane change problem they propose a framework that uses Q-learning to learn “high level tactical decisions” and introduce “Q-masking” a way of limiting the problem that the agent has to learn to force it to learn in a subspace of the Q-values.", "The authors claim that “By relying on a controller for low-level decisions we are also able to completely eliminate collisions during training or testing, which makes it a possibility to perform training directly on real systems.” ", "I am not sure what is meant by this since in this paper the authors never test their algorithm on real systems ", "and in real systems it is not possible to completely eliminate collisions. ", "If it were, this would be a much sought breakthrough. ", "Additionally for their experiment authors use the SUMO top view driving simulator. ", "This choice makes their algorithm not currently relevant to most autonomous vehicles that use ego-centric sensing. ", "This paper presents a learning algorithm that can “outperform a greedy baseline in terms of efficiency” and “humans driving the simulator in terms of safety and success” within their top view driving game. ", "The game can be programmed to have an “n” lane highway, where n could reasonable go up to five to represent larger highways. ", "The authors limit the problem by specifying that all simulated cars must operate between a preset minimum and maximum and follow a target (random) speed within these limits. ", "Cars follow a fixed model of behavior, do not collide with each other and cannot switch lanes. ", "It is unclear if the simulator extends beyond a single straight section of highway, as shown in Figure 1. ", "The agent is tasked with driving the ego-car down the n-lane highway and stopping at “the exit” in the right hand lane D km from the start position. ", "The authors use deep Q learning from Mnih et al 2015 to learn their optimal policy. ", "They use a sparse reward function of +10 for reaching the goal and -10x(lane difference from desired lane) as a penalty for failure. 
", "This simple reward function is possible because the authors do not require the ego car to obey speed limits or avoid collisions. ", "The authors limit what the car is able to do ", "– for example it is not allowed to take actions that would get it off the highway. ", "This makes the high level learning strategy more efficient ", "because it does not have to explore these possibilities (Q-masking). ", "The authors claim that this limitation of the simulation is made valid by the ability of the low level controller to incorporate prior knowledge and perfectly limit these actions. ", "In the real world, however, it is unlikely that any low level controller would be able to do this perfectly.", "In terms of evaluation, the authors do not compare their result against any other method. ", "Instead, using only one set of test parameters, the authors compare their algorithm to a “greedy baseline” policy that is specified a “always try to change lanes to the right until the lane is correct” then it tries to go as fast as possible while obeying the speed limit and not colliding with any car in front. ", "It seems that baseline is additionally constrained vs the ego car due to the speed limit and the collision avoidance criteria and is not a fair comparison. ", "So given a fixed policy and these constraints it is not surprising that it underperforms the Q-masked Q-learning algorithm. ", "With respect to the comparison vs. human operators of the car simulation, the human operators were not experts. ", "They were only given “a few trials” to learn how to operate the controls before the test. ", "It was reported that the human participants “did not feel comfortable” with the low level controller on, ", "possibly indicating that the user experience of controlling the car was less than ideal. ", "With the low level controller off, collisions became possible. ", "It is possibly not a fair claim to say that human drivers were “less safe” but rather that it was difficult to play the game or control the car with the safety module on. ", "This could be seen as a game design issue. ", "It was not clear from this presentation how the human participants were rewarded for their performance. ", "In more typical HCI experiments the gender distribution and ages ranges of participants are specified as well as how participants were recruited and how the game was motivated, including compensation (reward) are specified. ", "Overall, this paper presents an overly simplified game simulation with a weak experimental result." ]
[ "fact", "fact", "evaluation", "evaluation", "fact", "request", "evaluation", "evaluation", "evaluation", "evaluation", "quote", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation" ]
HJSdXVqxG
[ "This paper creates a layered representation in order to better learn segmentation from unlabeled images. ", "It is well motivated, ", "as Fig. 1 clearly shows the idea that if the segmentation was removed properly, the result would still be a natural image. ", "However, the method itself as described in the paper leaves many questions about whether they can achieve the proposed goal.", "I cannot see from the formulation why would this model work as it is advertised. ", "The formulation (3-4) looks like a standard GAN, with some twist about measuring the GAN loss in the z space (this has been used in e.g. PPGN and CVAE-GAN). ", "I don't see any term that would guarantee:1) Each layer is a natural image. ", "This was advertised in the paper, ", "but the loss function is only on the final product G_K. ", "The way it is written in the paper, the result of each layer does not need to go through a discriminator. ", "Nothing seems to have been done to ensure that each layer outputs a natural image.", "2) None of the layers is degenerate. ", "There does not seem to be any constraint either regularizing the content in each layer, or preventing any layer to be non-degenerate.", "3) The mask being contiguous. ", "I don't see any term ensuring the mask being contiguous, ", "I imagine normally without such terms doing such kinds of optimization would lead to a lot of fragmented small areas being considered as the mask.", "The claim that this paper is for unsupervised semantic segmentation is overblown. ", "A major problem is that when conducting experiments, all the images seem to be taken from a single category, this implicitly uses the label information of the category. ", "In that regard, this cannot be viewed as an unsupervised algorithm.", "Even with that, the results definitely looked too good to be true. ", "I have a really difficult time believing why such a standard GAN optimization would not generate any of the aforementioned artifacts and would perform exactly as the authors advertised. ", "Even if it does work as advertised, the utilization of implicit labels would make it subject to comparisons with a lot of weakly-supervised learning papers with far better results than shown in this paper. ", "Hence I am pretty sure that this is not up to the standards of ICLR." ]
[ "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation" ]
B1oFM1FeG
[ "This paper presents, and analyzes, a method for learning word relationships based on co-occurrence. ", "In the method, relationships between pairs of words (A, B) are represented by the terms that tend to occur around co-mentions of A and B in text. ", "The paper shows the start of some interesting ideas, ", "but needs revisions and much more extensive experiments.", "On the plus side, the method proposed here does perform relatively well (Table 1) and probably merits further investigation. ", "The experiments in Table 1 can only be considered preliminary, however. ", "They only evaluate over a small number of relationships (three) ", "-- looking at 20 or so different relationships would greatly improve confidence in the conclusions.", "Beyond Table 1 the paper makes a number of claims that are not supported or weakly supported (the paper uses only a handful of examples as evidence). ", "An attempt to explain what Word2Vec is doing should be made with careful experiments over many relations and hundreds of examples, ", "whereas this paper presents only a handful of examples for most of its claims. ", "Further, whether the behavior of the proposed algorithm actually reflects what word2vec is doing is left as a significant open question.", "I appreciate the clarity of Assumption 1 and Proposition 1, ", "but ultimately this formalism is not used ", "and because Assumption 1 about which nouns are \"semantically related\" to which other nouns attempts to trivialize a complex notion (semantics) and is clearly way too strong ", "-- the paper would be better off without it. ", "Also Assumption 1 does not actually claim what the text says it claims ", "(the text says words outside the window are *not* semantically related, but the assumption does not actually say this) ", "and furthermore is soon discarded and only the frequency of noun occurrences around co-mentions is used. ", "I think the description of the algorithm could be retained without including Assumption 1.", "minor: References to numbered algorithms or assumptions should be capitalized in the text.", "what the introduction means about the \"dynamics\" of the vector equation is a little unclear", "A submission shouldn't have acknowledgments, and in particular with names that undermine anonymity", "MLE has a particular technical meaning that is not utilized here, ", "I would just refer to the most frequent words as \"most related nouns\" or similar", "In Table 1, are the \"same dataset\" results with w2v for the nouns-only corpus, or with all the other words?", "The argument made assuming a perfect Zipf distribution (with exponent equal to one) should be made with data.", "will likely by observed -> will likely be observed", "lions:dolphins probably ends up that way because of \"sea lions\"", "Table 4 caption: frequencies -> currencies", "Table 2 -- claim is that improvements from k=10 to k=20 are 'nominal' but they look non-negligible to me", "I did not understand how POS lying in the same subspace means that Vec(D) has to be in the span of Vecs A-C." ]
[ "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "fact", "request", "evaluation", "request", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "fact", "fact", "fact", "request", "request", "evaluation", "request", "fact", "request", "request", "request", "request", "request", "request", "evaluation", "evaluation" ]
r1IWuK2lf
[ "The paper presents a method for navigating in an unknown and partially observed environment is presented.", "The proposed approach splits planning into two levels: 1) local planning based on the observed space and 2) a global planner which receives the local plan, observation features, and access to an addressable memory to decide on which action to select and what to write into memory.", "The contribution of this work is the use of value iteration networks (VINs) for local planning on a locally observed map that is fed into a learned global controller that references history and a differential neural computer (DNC), local policy, and observation features select an action and update the memory.", "The core concept of learned local planner providing additional cues for a global, memory-based planner is a clever idea", "and the thorough analysis clearly demonstrates the benefit of the approach.", "The proposed method is tested against three problems: a gridworld, a graph search, and a robot environment.", "In each case the proposed method is more performant than the baseline methods.", "The ablation study of using LSTM instead of the DNC and the direct comparison of CNN + LSTM support the authors’ hypothesis about the benefits of the two components of their method.", "While the author’s compare to DRL methods with limited horizon (length 4), there is no comparison to memory-based RL techniques.", "Furthermore, a comparison of related memory-based visual navigation techniques on domains for which they are applicable should be considered", "as such an analysis would illuminate the relative performance over the overlapping portions problem domains", "For example, analysis of the metric map approaches on the grid world or of MACN on their tested environments.", "Prior work in visual navigation in partially observed and unknown environments have used addressable memory (e.g., Oh et al.) and used VINs (e.g., Gupta et al.) to plan as noted.", "In discussing these methods, the authors state that these works are not comparable as they operate strictly on discretized 2d spaces.", "However, it appears to the reviewer that several of these methods can be adapted to higher dimensions and be applicable at least a subclass (for the euclidean/metric map approaches) or the full class of the problems (for Oh et al.),", "which appears to be capable to solve non-euclidean tasks like the graph search problem.", "If this assessment is correct, the authors should differentiate between these approaches more thoroughly and consider empirical comparisons.", "The authors should further consider contrasting their approach with “Neural SLAM” by Zhang et al.", "A limitation of the presented method is requirement that the observation “reveals the labeling of nearby states.”", "This assumption holds in each of the examples presented: the neighborhood map in the gridworld and graph examples and the lidar sensor in the robot navigation example.", "It would be informative for the authors to highlight this limitation and/or identify how to adapt the proposed method under weaker assumptions such as a sensor that doesn’t provide direct metric or connectivity information such as a RGB camera.", "Many details of the paper are missing and should be included to clarify the approach and ensure reproducible results.", "The reviewer suggests providing both more details in the main section of the paper and providing the precise architecture including hyperparameters in the supplementary materials section." ]
[ "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "request", "fact", "fact", "fact", "fact", "evaluation", "fact", "request", "request", "evaluation", "fact", "request", "request", "request" ]
BkTXGMKlf
[ "This paper proposes a family of first-order stochastic optimization schemes", "based on (1) normalizing (batches of) stochastic gradient descents and (2) choosing from a step size updating scheme. ", "The authors argue that iterative first-order optimization algorithms can be interpreted as a choice of an update direction and a step size, ", "so they suggest that one should always normalize the gradient when computing the direction and then choose a step size using the normalized gradient. ", "\\n\\nThe presentation in the paper is clear, ", "and the exposition is easy to follow.", "The authors also do a good job of presenting related work and putting their ideas in the proper context. ", "The authors also test their proposed method on many datasets,", "which is appreciated.\\n\\n", "However, I didn't find the main idea of the paper to be particularly compelling. ", "The proposed technique is reasonable on its own, ", "but the empirical results do not come with any measure of statistical significance. ", "The authors also do not analyze the sensitivity of the different optimization algorithms to hyperparameter choice, opting to only use the default. ", "Moreover, some algorithms were used as benchmarks on some datasets but not others. ", "For a primarily empirical paper, every state-of-the-art algorithm should be used as a point of comparison on every dataset considered. ", "These factors altogether render the experiments uninformative in comparing the proposed suite of algorithms to state-of-the-art methods. ", "The theoretical result in the convex setting is also not data-dependent, despite the fact that it is the normalized gradient version of AdaGrad, which does come with a data-dependent convergence guarantee.\\n\\n", "Given the suite of optimization algorithms in the literature and in use today, any new optimization framework should either demonstrate improved (or at least matching) guarantees in some common (e.g. convex) settings or definitively outperform state-of-the-art methods on problems that are of widespread interest. ", "Unfortunately, this paper does neither. ", "\\n\\nBecause of these points, I do not feel the quality, originality, and significance of the work to be high enough to merit acceptance. ", "\\n\\nSome specific comments: \\np. 2: \\\"adaptive feature-dependent step size has attracted lots of attention\\\". ", "When you apply feature-dependent step sizes, you are effectively changing the direction of the gradient, ", "so your meta learning formulation, as posed, doesn't make as much sense.", "\\np. 2: \"we hope the resulting methods can benefit from both techniques\\\". ", "What reason do you have to hope for this? ", "Why should they be complimentary? ", "Existing optimization techniques are based on careful design and coupling of gradients or surrogate gradients, with specific learning rate schedules. ", "Arbitrarily mixing the two doesn't seem to be theoretically well-motivated.", "\\np. 2: \\\"numerical results shows that normalized gradient always helps to improve the performance of the original methods when the network structure is deep\\\". ", "It would be great to provide some intuition for this. ", "\\np. 2: \\\"we also provide a convergence proof under this framework when the problem is convex and the stepsize is adaptive\\\". ", "The result that you prove guarantees a \\\\theta(\\\\sqrt{T}) convergence rate. ", "On the other hand, the AdaGrad algorithm guarantees a data-dependent bound that is O(\\\\sqrt{T}) ", "but can also be much smaller. 
", "This suggests that there is no theoretical motivation to use NGD with an adaptive step size over AdaGrad.", "\\np. 2-3: \\\"NGD can find a \\\\eps-optimal solution....when the objective function is quasi-convex. ....extended NGD for upper semi-continuous quasiconvex objective functions...\\\". ", "This seems like a typo. ", "How are results that go from quasi-convex to upper semi-continuous quasi-convex an extension?", "\\np. 3: There should be a reference for RMSProp.", "\\np. 3: \\\"where each block of parameters x^i can be viewed as parameters associated to the ith layer in the network\\\". ", "Why is layer parametrization (and later on normalization) a good way idea? ", "There should be either a reference or an explanation.", "\\np. 4: \\\"x=(x_1, x_2, \\\\ldots, x_B)\\\". ", "Should these subscripts be superscripts?", "\\np. 4: \\\"For all the algorithms, we use their default settings.\\\" ", "This seems insufficient for an empirical paper, ", "since most problems often involve some amount of hyperparameter tuning. ", "How sensitive is each method to the choice of hyperparameters? ", "What about the impact of initialization?", "\\np. 4-8: None of the experimental results have error bars or any measure of statistical significance.", "\\np. 5: \\\"NG... is a variant of the NG_{UNIT} method\\\". ", "This method is never motivated.", "\\np. 5-6: Why are SGD and Adam used for MNIST but not on CIFAR? ", "\\np. 5: \\\"we chose the best heyper-paerameter from the 56 layer residual network.\\\" ", "Apart from the typos, are these parameters chosen from the training set or the test set? ", "\\np. 6: Why isn't Adam tested on ImageNet?" ]
[ "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "request", "evaluation", "fact", "request", "fact", "evaluation", "quote", "fact", "evaluation", "quote", "non-arg", "non-arg", "fact", "evaluation", "quote", "request", "quote", "fact", "fact", "fact", "evaluation", "quote", "evaluation", "evaluation", "request", "quote", "non-arg", "request", "quote", "non-arg", "quote", "evaluation", "fact", "non-arg", "non-arg", "fact", "quote", "fact", "non-arg", "quote", "non-arg", "non-arg" ]
Bk6nbuf-M
[ "The authors use deep learning to learn a surrogate model for the motion vector in the advection-diffusion equation that they use to forecast sea surface temperature.", "In particular, they use a CNN encoder-decoder to learn a motion field, and a warping function from the last component to provide forecasting.", "I like the idea of using deep learning for physical equations.", "I would like to see a description of the algorithm with the pseudo-code in order to understand the flow of the method.", "I got confused at several points", "because it was not clear what was exactly being estimated with the CNN.", "Having an algorithmic environment would make the description easier.", "I know that authors are going to publish the code,", "but this is not enough at this point of the revision.", "Physical processes in Machine learning have been studied from the perspective of Gaussian processes.", "Just to mention a couple of references “Linear latent force models using Gaussian processes” and \"Numerical Gaussian Processes for Time-dependent and Non-linear Partial Differential Equations\"", "In Theorem 2, do you need to care about boundary conditions for your equation?", "I didn’t see any mention to those in the definition for I(x,t).", "You only mention initial conditions.", "How do you estimate the diffusion parameter D?", "Are you assuming isotropic diffusion?", "Is that realistic?", "Can you provide more details about how you run the data assimilation model in the experiments?", "Did you use your own code?" ]
[ "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "reference", "request", "fact", "fact", "request", "request", "evaluation", "request", "non-arg" ]
H1g6bb9gG
[ "The approach solves an important problem ", "as getting labelled data is hard. ", "The focus is on the key aspect, which is generalisation across heteregeneous data. ", "The novel idea is the dataset embedding ", "so that their RL policy can be trained to work across diverse datasets.", "Pros: 1. The approach performs well against all the baselines, and also achieves good cross-task generalisation in the tasks they evaluated on. ", "2. In particular, they alsoevaluated on test datasets with fairly different statistics from the training datasets, which isnt very common in most meta-learning papers today, ", "so it’s encouraging that the method works in that regime.", "Cons: 1. The embedding strategy, especially the representative and discriminative histograms, is complicated. ", "It is unclear if the strategy is general enough to work on harder problems / larger datasets, or with higher dimensional data like images. ", "More evidence in the paper for why it would work on harder problems would be great. ", "2. The policy network would have to output a probability for each datapoint in the dataset U, ", "which could be fairly large, ", "thus the method is computationally much more expensive than random sampling. ", "A section devoted to showing what practical problems could be potentially solved by this method would be useful.", "3. It is unclear to me if the results in table 3 and 4 are achieved by retraining from scratch with an RBF SVM, or by freezing the policy network trained on a linear SVM and directly evaluating it with a RBF SVM base learner.", "Significance/Conclusion: The idea of meta-learning or learning to learn is fairly common now. ", "While they do show good performance, ", "it’s unclear if the specific embedding strategy suggested in this paper will generalise to harder tasks. ", "Comments: There’s lots of typos, ", "please proof read to improve the paper." ]
[ "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "fact", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request" ]
rJMoToYlz
[ "The authors present a derivation of previous work of [1].", "In particular they propose the method of using the error signal of a dynamics model as curiosity for exploration, such as [1], but without any additionaly auxiliary methods.", "This the author call Curiosity by Bootstrapping Feature (CBF).\\n", "\\nIn particular they show over a set of auxiliary learning methods (hindsight ER, inverse dynamics model[1]) there is\\nnot a clear cut edge one method has over the other (or over using no auxilirary method all, that is CBF).\\n\\n", "Overall I think the novelty is too limited for acceptance.", "The main point of the authors (heterogeneous results\\nover different auxilirary learning methods), is not suprising at all, and to be expected.", "The method the authors introduce\\nis just a submodule of already published results[1].\\n\\n", "For instance, section 4 discusses challenges related to these class of approaches such as the presence of stochasticity.", "Had the authors proposed a solution to these challenges that would have benefited the paper greatly.\\n\\n", "Minor: The light green link color make the paper hard on the eye,", "I suggest using [hidelinks] for hyperref.\\n", "Figure 2 is very small and hard to read.\\n\\n\\n", "[1] Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, and Trevor Darrell. Curiosity-driven exploration by\\nself-supervised prediction. In ICML, 2017" ]
[ "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "request", "evaluation", "request", "evaluation", "reference" ]
BJ9DfkxWM
[ "The paper is clear and well written.", "It is an incremental modification of prior work (ResNeXt) that performs better on several experiments selected by the author; ", "comparisons are only included relative to ResNeXt.", "This paper is not about gating (c.f., gates in LSTMs, mixture of experts, etc) but rather about masking or perhaps a kind of block sparsity, ", "as the \"gates\" of the paper do not depend upon the input: ", "they are just fixed masking matrices (see eq (2)).", "The main contribution appears to be the optimisation procedure for the binary masking tensor g. ", "But this procedure is not justified: ", "does each step minimise the loss? ", "This seems unlikely due to the sampling. ", "Can the authors show that the procedure will always converge? ", "It would be good to contrast this with other attempts to learn discrete random variables ", "(for example, The Concrete Distribution: Continuous Relaxation of Continuous Random Variables, Maddison et al, ICLR 2017)." ]
[ "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "request", "evaluation", "request", "request", "reference" ]
BJiW7IkZM
[ "In this work, the objective is to analyze the robustness of a neural network to any sort of attack.", "This is measured by naturally linking the robustness of the network to the local Lipschitz properties of the network function. ", "This approach is quite standard in learning theory, ", "I am not aware of how original this point of view is within the deep learning community.", "This is estimated by obtaining values of the norm of the gradient (also naturally linked to the Lipschitz properties of the function) by backpropagation. ", "This is again a natural idea." ]
[ "fact", "fact", "evaluation", "evaluation", "fact", "evaluation" ]
Hy7Gjh9eM
[ "The authors proposed to supplement adversarial training with an additional regularization that forces the embeddings of clean and adversarial inputs to be similar.", "The authors demonstrate on MNIST and CIFAR that the added regularization leads to more robustness to various kinds of attacks.", "The authors further propose to enhance the network with cascaded adversarial training, that is, learning against iteratively generated adversarial inputs, and showed improved performance against harder attacks.", "The idea proposed is fairly straight-forward.", "Despite being a simple approach, the experimental results are quite promising.", "The analysis on the gradient correlation coefficient and label leaking phenomenon provide some interesting insights.", "As pointed out in section 4.2, increasing the regularization coefficient leads to degenerated embeddings.", "Have the authors consider distance metrics that are less sensitive to the magnitude of the embeddings, for example, normalizing the inputs before sending it to the bidirectional or pivot loss, or use cosine distance etc.?", "Table 4 and 5 seem to suggest that cascaded adversarial learning have more negative impact on test set with one-step attacks than clean test set,", "which is a bit counter-intuitive.", "Do the authors have any insight on this?", "Comments: 1. The writing of the paper could be improved.", "For example, \"Transferability analysis\" in section 1 is barely understandable;", "2. Arrow in Figure 3 are not quite readable;", "3. The paper is over 11 pages.", "The authors might want to consider shrink it down the recommended length." ]
[ "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "request", "fact", "evaluation", "non-arg", "evaluation", "evaluation", "evaluation", "fact", "request" ]
ryjxrEwlM
[ "The authors propose a mechanism for learning task-specific region embeddings for use in text classification. ", "Specifically, this comprises a standard word embedding an accompanying local context embedding. ", "The key idea here is the introduction of a (h x c x v) tensor K, where h is the embedding dim (same as the word embedding size), c is a fixed window size around a target word, and v is the vocabulary size. ", "Each word in v is then associated with an (h x c) matrix that is meant to encode how it affects nearby words, ", "in particular this may be viewed as parameterizing a projection to be applied to surrounding word embeddings. ", "The authors propose two specific variants of this approach, which combine the K matrix and constituent word embeddings (in a given region) in different ways. ", "Region embeddings are then composed (summed) and fed through a standard model. ", "Strong points--- + The proposed approach is simple and largely intuitive: ", "essentially the context matrix allows word-specific contextualization. ", "Further, the work is clearly presented.", "+ At the very least the model does seem comparable in performance to various recent methods (as per Table 2), ", "however as noted below the gains are marginal ", "and I have some questions on the setup.", "+ The authors perform ablation experiments, ", "which are always nice to see. ", "Weak points--- - I have a critical question for clarification in the experiments. ", "The authors write 'Optimal hyperparameters are tuned with 10% of the training set on Yelp Review Full dataset, and identical hyperparameters are applied to all datasets' ", "-- is this true for *all* models, or only the proposed approach? ", "- The gains here appear to be consistent, ", "but they seem marginal. ", "The biggest gain achieved over all datasets is apparently .7, ", "and most of the time the model very narrowly performs better (.2-.4 range). ", "Moreoever, it is not clear if these results are averaged over multiple runs of SGD or not ", "(variation due to initialization and stochastic estimation can account for up to 1 point in variance ", "-- see \"A sensitivity analysis of (and practitioners guide to) CNNs...\" Zhang and Wallace, 2015.)", "- The related work section seems light. ", "For instance, there is no discussion at all of LSTMs and their application to text classificatio (e.g., Tang et al., EMNLP 2015) ", "-- although it is noted that the authors do compare against D-LSTM, or char-level CNNs for the same (see Zhang et al., NIPs 2015). ", "Other relevant work not discussed includes Iyyer et al. (ACL 2015). ", "In their respective ways, these papers address some of the same issues the authors consider here. ", "- The two approaches to inducing the final region embedding (word-context and then context-word in sections 3.2 and 3.3, respectively) feel a bit ad-hoc. ", "I would have appreciated more intuition behind these approaches. ", "Small comments---There is a typo in Figure 4 -- \"Howerver\" should be \"However\"" ]
[ "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "request", "fact", "fact", "fact", "fact", "evaluation", "fact", "reference", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request" ]
H1Mbr8b4f
[ "General comment ============== Low-rank decomposing convolutional filters has been used to speedup convolutional networks at the cost of a drop in prediction performance.", "The authors a) extended existing decomposition techniques by an iterative method for decomposition and fine-tuning convolutional filter weights,", "and b) and algorithm to determine the rank of each convolutional filter.", "The authors show that their method enables a higher speedup and lower accuracy drop than existing methods when applied to VGG16.", "The proposed method is a useful extension of existing methods but needs to evaluated more rigorously.", "The manuscript is hard to read due to unclear descriptions and grammatical errors.", "Major comments ============= 1. The authors authors showed that their method enables a higher speedup and lower drop in accuracy than existing methods when applied to VGG16.", "The authors should analyze if this also holds true for ResNet and Inception, which are more widely used than VGG16.", "2. The authors measured the actual speedup on a single CPU (Intel Core i5).", "The authors should measure the actual speedup also on a single GPU.", "3. It is unclear how the actual speedup was measured.", "Does it correspond to the seconds per update step or the overall training time?", "In the latter case, how long were models trained?", "4. How and which hyper-parameters were optimized?", "The authors should use the same hyper-parameters for all methods (Jaderberg, Zhang, Rank selection).", "The authors should also analyze the sensitivity of speedup and accuracy drop depending on the learning rate for ‘Rank selection’.", "5. Figure 4: the authors should show the same plot for more convolutional layers at varying depth from both VGG and ResNet.", "6. The manuscript is hard to understand and not written clearly enough.", "In the abstract, what does ‘two-pass decomposition’, ‘proper ranks’, ‘the instability problem’, or ‘systematic’ mean?", "What are ‘edge devices’, ‘vanilla parameters’?", "The authors should also avoid uninformative adjectives, clutter, and vague terms throughout the manuscript such as ‘vital importance’ or ‘little room for fine-tuning’.", "Minor comments ============= 1. The authors should use ‘significantly’ only if a statistical hypothesis was performed.", "2. The manuscript contains several typos and grammatical flaws,", "e.g. ‘have been widely applied to have the breakthrough’, ‘The CP decomposition factorizes the tensors into a sum of series rank-one tensors.’, ‘Our two-pass decomposition provides the better result as compared with the original CP decomposition’.", "3. For clarity, the authors should express equation 5 in terms of Y_1, Y_2, Y_3, and Y_4.", "4. Equation 2, bottom: C_in, W_f, H_f, and C_out are undefined at this point." ]
[ "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "request", "fact", "request", "evaluation", "request", "request", "request", "request", "request", "request", "evaluation", "non-arg", "non-arg", "request", "request", "evaluation", "quote", "request", "fact" ]
Hyx7bEPez
[ "In this paper, the authors studied the problem of semi-supervised few-shot classification, by extending the prototypical networks into the setting of semi-supervised learning with examples from distractor classes. ", "The studied problem is interesting, ", "and the paper is well-written. ", "Extensive experiments are performed to demonstrate the effectiveness of the proposed methods. ", "While the proposed method is a natural extension of the existing works (i.e., soft k-means and meta-learning).", "On top of that, It seems the authors have over-claimed their model capability at the first place ", "as the proposed model cannot properly classify the distractor examples but just only consider them as a single class of outliers. ", "Overall, I would like to vote for a weakly acceptance regarding this paper." ]
[ "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation" ]
Byw1O6Fgz
[ "The paper is interesting, ", "but needs more work, ", "and should provide clear and fair comparisons. ", "Per se, the model is incrementally new, ", "but it is not clear what the strengths are, ", "and the presentations needs to be done more carefully.", "In detail: - please fix several typos throughout the manuscript, and have a native speaker (and preferably an ASR expert) proofread the paper", "Introduction - please define HMM/GMM model (and other abbreviations that will be introduced later), ", "it cannot be assumed that the reader is familiar with all of them (\"ASG\" is used before it is defined, ...)", "- The standard units that most ASR systems use can be called \"senones\", ", "and they are context dependent sub-phonetic units (see http://ssli.ee.washington.edu/~mhwang/), not phonetic states. ", "Also the units that generate the alignment and the units that are trained on an alignment can be different ", "(I can use a system with 10000 states to write alignments for a system with 3000 states) ", "- this needs to be corrected.", "- When introducing CNNs, please also cite Waibel and TDNNs ", "- they are *the same* as 1-d CNNs, and predate them. ", "They have been extended to 2-d later on (Spatio-temporal TDNNs)", "- The most influential deep learning paper here might be Seide, Li, Yu Interspeech 2011 on CD-DNN-HMMs, rather than overview articles", "- Many papers get rid of the HMM pipeline, ", "I would add https://arxiv.org/abs/1408.2873, which predates Deep Speech", "- What is a \"sequence-level variant of CTC\"? ", "CTC is a sequence training criterion", "- The reason that Deep Speech 2 is better on noisy test sets is not only the fact they trained on more data, but they also trained on \"noisy\" (matched) data", "- how is this an end-to-end approach if you are using an n-gram language model for decoding? ", "Architecture - MFSC are log Filterbanks ...", "- 1D CNNs would be TDNNs", "- Figure 2: can you plot the various transition types (normalized, un-normalized, ...) in the plots? ", "not sure if it would help, but it might", "- Maybe provide a reference for HMM/GMM and EM (forward backward training)", "- MMI was also widely used in HMM/GMM systems, not just NN systems", "- the \"blank\" states do *not* model \"garbage\" frames, ", "if one wants to interpret them, they might be said to model \"non-stationary\" frames between CTC \"peaks\", ", "but these are different from silence, garbage, noise, ...", "- what is the relationship of the presented ASG criterion to MMI? ", "the form of equation (3) looks like an MMI criterion to me?", "Experiments - Many of the previous comments still hold, ", "please proofread", "- you say there is no \"complexity\" incrase when using \"logadd\" ", "- how do you measure this? ", "number of operations? 
", "is there an implementation of \"logadd\" that is (absolutely) as fast as \"add\"?", "- There is discussion as to what i-vectors model (speaker or environment information) ", "- I would leave out this discussion entirely here, ", "it is enough to mention that other systems use adaptation, and maybe re-run an unadapted baselien for comparsion", "- There are techniques for incremental adaptation and a constrained MLLR (feature adaptation) approaches that are very eficient, if one wnats to get into this", "- it may also be interesting to discuss the role of the language model to see which factors influence system performance", "- some of the other papers might use data augmentation, which would increase noise robustness ", "(did not check, but this might explain some of the results in table 4)", "- I am confused by the references in the caption of Table 3 ", "- surely the Waibel reference is meant to be for TDNNs ", "(and should appear earlier in the paper), ", "while p-norm came later ", "(Povey used it first for ASR, I think) ", "and is related to Maxout", "- can you also compare the training times? ", "Conculsion - can you show how your approach is not so computationally expensive as RNN based approaches? ", "either in terms of FLOPS or measured times" ]
[ "evaluation", "request", "request", "evaluation", "evaluation", "request", "request", "request", "evaluation", "fact", "fact", "fact", "fact", "request", "request", "fact", "fact", "evaluation", "evaluation", "request", "non-arg", "fact", "fact", "fact", "fact", "fact", "request", "evaluation", "request", "evaluation", "fact", "fact", "fact", "request", "fact", "evaluation", "request", "fact", "request", "request", "request", "fact", "request", "evaluation", "fact", "request", "evaluation", "evaluation", "evaluation", "fact", "request", "fact", "fact", "evaluation", "request", "request", "request" ]
HJ_m58weG
[ "This paper proposes to use neural network and gradient descent to automatically design for engineering tasks.", "It uses two networks, parameterization network and prediction network to model the mapping from design parameters to fitness.", "It uses back propagation (gradient descent) to improve the design.", "The method is evaluated on heat sink design and airfoil design.", "This paper targets at a potentially very useful application of neural networks that can have real world impacts.", "However, I have three main concerns: 1) Presentation. The organization of the paper could be improved.", "It mixes the method, the heat sink example and the airfoil example throughout the entire paper.", "Sometimes I am very confused about what is being described.", "My suggestion would be to completely separate these three parts:", "present a general method first,", "then use heat sink as the first experiment and airfoil as the second experiment.", "This organization would make the writing much clearer.", "2) In the paragraph above Section 4.1, the paper made two arguments.", "I might be wrong, but I do not agree with either of them in general.", "First of all, \"neural networks are good at generalizing to examples outside their train set\".", "This depends entirely on whether the sample distribution of training and testing are similar and whether you have enough training examples that cover important sample space.", "This is especially critical if a deep neural network is used since overfitting is a real issue.", "Second, \"it is easy to imagine a hybrid system where a network is trained on a simulation and fine tuned ...\".", "Implementing such a hybrid system is nontrivial due to the reality gap.", "There is an entire research field about closing the reality gap and transfer learning.", "So I am not convinced by these two arguments made by this paper.", "They might be true for a narrow field of application.", "But in general, I think they are not quite correct.", "3) The key of this paper is to approximate the dynamics using neural network (which is a continuous mapping) and take advantage of its gradient computation.", "However, many of dynamic systems are inherently discontinuous (collision/contact dynamics) or chaotic (turbulent flow).", "In those scenarios, the proposed method might not work well and we may have to resort to the gradient free methods.", "It seems that the proposed method works well for heat sink problem and the steady flow around airfoil,", "both of which do not fall into the more complex physics regime.", "It would be great that the paper could be more explicit about its limitations.", "In summary, I like the idea, the application and the result of this paper.", "The writing could be improved.", "But more importantly, I think that the proposed method has its limitation about what kind of physical systems it can model.", "These limitation should be discussed more explicitly and more thoroughly." ]
[ "fact", "fact", "fact", "fact", "evaluation", "request", "fact", "evaluation", "request", "request", "request", "evaluation", "fact", "evaluation", "quote", "evaluation", "evaluation", "quote", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "request", "evaluation", "request", "evaluation", "request" ]
SkOj779lM
[ "This paper proposes the concept of optimal representation space and suggests that a model should be evaluated in its optimal representation space to get good performance.", "It could be a good idea if this paper could suggest some ways to find the optimal representation space in general, instead of just showing two cases.", "It is disappointing, because this paper is named as \"finding optimal representation spaces ...\".", "In addition, one of the contributions claimed in this paper is about introducing the \"formalism\" of an optimal representation space.", "However, I didn't see any formal definition of this concept or theoretical justification.", "About FastSent or any other log-linear model, the reason that dot product (or cosine similarity) is a good metric is because the model is trained to optimize the dot product, as shown in equation 5", "--- I think this simple fact is missed in this paper.", "The experimental results are not convincing,", "because I didn't find any consistent pattern that shows the performance is getting better once we evaluated the model in its optimal representation space.", "There are statements in this paper that I didn't agree with", "1) Distributional hypothesis from Harris (1954) is about words not sentences.", "2) Not sure the following line makes sense:", "\"However, these unsupervised tasks are more interesting from a general AI point of view, as they test whether the machine truly understands the human notion of similarity, without being explicitly told what is similar\"" ]
[ "fact", "request", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "quote" ]
rkCp66Tef
[ "The paper proposes a deep learning framework called DeePa that supports multiple dimensions of parallelism in computation to accelerate training of convolutional neural networks.", "Whereas the majority of work on parallel or distributed deep learning partitions training over bootstrap samples of training data (called image parallelism in the paper),", "DeePa is able to additionally partition the operations over image height, width and channel.", "This gives more options to parallelize different parts of the neural network.", "For example, the best DeePa configurations studied in the paper for AlexNet, VGG-16, and Inception-v3 typically use image parallelism for the initial layers, reduce GPU utilization for the deeper layers to reduce data transfer overhead, and use model parallelism on a smaller number of GPUs for fully connected layers.", "The net is that DeePa allows such configurations to be created that provide an increase in training throughput and lower data transfer in practice for training these networks.", "These configurations for parellism are not easily programmed in other frameworks like TensorFlow and PyTorch.", "The paper can potentially be improved in a few ways.", "One is to explore more demanding training workloads that require larger-scale distribution and parallelism.", "The ImageNet 22-K would be a good example and would really highlight the benefits of the DeePa in practice.", "Beyond that, more complex workloads like 3D CNNs for video modeling would also provide a strong motivation for having multiple dimensions of the data for partitioning operations." ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "request" ]
Bk_UdcKxf
[ "*Summary* The paper proposes to use hyper-networks [Ha et al. 2016] for the tuning of hyper-parameters, along the lines of [Brock et al. 2017]. ", "The core idea is to have a side neural network sufficiently expressive to learn the (large-scale, matrix-valued) mapping from a given configuration of hyper-parameters to the weights of the model we wish to tune.", "The paper gives a theoretical justification of its approach, ", "and then describes several variants of its core algorithm which mix the training of the hyper-networks together with the optimization of the hyper-parameters themselves. ", "Finally, experiments based on MNIST illustrate the properties of the proposed approach.", "While the core idea may appear as appealing, ", "the paper suffers from several flaws (as further detailed afterwards):", "-Insufficient related work", "-Correctness/rigor of Theorem 2.1", "-Clarity of the paper (e.g., Sec. 2.4)", "-Experiments look somewhat artificial", "-How scalable is the proposed approach in the perspective of tuning models way larger/more complex than those treated in the experiments?", "*Detailed comments* -\"...and training the model to completion.\" and \"This is wasteful, since it trains the model from scratch each time...\" (and similar statement in Sec. 2.1): ", "Those statements are quite debatable. ", "There are lines of work, e.g., in Bayesian optimization, to model early stopping/learning curves (e.g., Domhan2014, Klein2017 and references therein) and where training procedures are explicitly resumed (e.g., Swersky2014, Li2016). ", "The paper should reformulate its statements in the light of this literature.", "-\"Uncertainty could conceivably be incorporated into the hypernet...\". ", "This seems indeed an important point, ", "but it does not appear as clear how to proceed (e.g., uncertainty on w_phi(lambda) which later needs to propagated to L_val); ", "could the authors perhaps further elaborate?", "-I am concerned about the rigor/correctness of Theorem 2.1; ", "for instance, how is the continuity of the best-response exploited? ", "Also, throughout the paper, the argmin is defined as if it was a singleton ", "while in practice it is rather a set-valued mapping (except if there is a unique minimizer for L_train(., lambda), ", "which is unlikely to be the case given the nature of the considered neural-net model). ", "In the same vein, Jensen's inequality states that Expectation[g(X)] >= g(Expectation[X]) for some convex function g and random variable X; ", "how does it precisely translate into the paper's setting (convexity, which function g, etc.)? ", "-Specify in Alg. 1 that \"hyperopt\" refers to a generic hyper-parameter procedure.", "-More details should be provided to better understand Sec. 2.4. ", "At the moment, it is difficult to figure out (and potentially reproduce) the model which is proposed.", "-The training procedure in Sec. 4.2 seems quite ad hoc; ", "how sensitive was the overall performance with respect to the optimization strategy? ", "For instance, in 4.2 and 4.3, different optimization parameters are chosen.", "-typo: \"weight decay is applied the...\" --> \"weight decay is applied to the...\"", "-\"a standard Bayesian optimization implementation from sklearn\": Could more details be provided? 
", "(there does not seem to be implementation there http://scikit-learn.org/stable/model_selection.html to the best of my knowledge)", "-The experimental set up looks a bit far-fetched and unrealistic: ", "first scalar, than diagonal and finally matrix-weighted regularization schemes. ", "While the first two may be used in practice, ", "the third scheme is not used in practice to the best of my knowledge.", "-typo: \"fit a hypernet same dataset.\" --> \"fit a hypernet on the same dataset.\"", "-(Franceschi2017) could be added to the related work section.", "*References* (Domhan2014) Domhan, T.; Springenberg, T. & Hutter, F. Extrapolating learning curves of deep neural networks ICML 2014 AutoML Workshop, 2014", "(Franceschi2017) Franceschi, L.; Donini, M.; Frasconi, P. & Pontil, M. Forward and Reverse Gradient-Based Hyperparameter Optimization preprint arXiv:1703.01785, 2017", "(Klein2017) Klein, A.; Falkner, S.; Springenberg, J. T. & Hutter, F. Learning curve prediction with Bayesian neural networks International Conference on Learning Representations (ICLR), 2017, 17", "(Li2016) Li, L.; Jamieson, K.; DeSalvo, G.; Rostamizadeh, A. & Talwalkar, A. Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization preprint arXiv:1603.06560, 2016", "(Swersky2014) Swersky, K.; Snoek, J. & Adams, R. P. Freeze-Thaw Bayesian Optimization preprint arXiv:1406.3896, 2014" ]
[ "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "quote", "evaluation", "fact", "request", "quote", "evaluation", "evaluation", "request", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "request", "request", "request", "evaluation", "evaluation", "request", "fact", "fact", "request", "fact", "evaluation", "fact", "fact", "fact", "request", "request", "reference", "reference", "reference", "reference", "reference" ]
r1cczyqef
[ "The authors propose a new network architecture for RL that contains some relevant inductive biases about planning.", "This fits into the recent line of work on implicit planning where forms of models are learned to be useful for a prediction/planning task.", "The proposed architecture performs something analogous to a full-width tree search using an abstract model (learned end-to-end).", "This is done by expanding all possible transitions to a fixed depth before performing a max backup on all expanded nodes.", "The final backup value is the Q-value prediction for a given state, or can represent a policy through a softmax.", "I thought the paper was clear and well-motivated.", "The architecture (and various associated tricks like state vector normalization) are well-described for reproducibility.", "Experimental results seem promising", "but I wasn’t fully convinced of its conclusions.", "In both domains, TreeQN and AtreeC are compared to a DQN architecture,", "but it wasn’t clear to me that this is the right baseline.", "Indeed TreeQN and AtreeC share the same conv stack in the encoder (I think?),", "but also have the extra capacity of the tree on top.", "Can the performance gain we see in the Push task as a function of tree depth be explained by the added network capacity?", "Same comment in Atari,", "but there it’s not really obvious that the proposed architecture is helping.", "Baselines could include unsharing the weights in the tree, removing the max backup, having a regular MLP with similar capacity, etc.", "Page 5, the auxiliary loss on reward prediction seems appropriate,", "but it’s not clear from the text and experiments whether it actually was necessary.", "Is it that makes interpretability of the model easier (like we see in Fig 5c)?", "Or does it actually lead to better performance?", "Despite some shortcomings in the result section, I believe this is good work and worth communicating as is." ]
[ "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "request", "request", "evaluation", "request", "evaluation", "evaluation", "request", "request", "evaluation" ]
B1_TQ-clG
[ "This paper studies learning to play two-player general-sum games with state (Markov games).", "The idea is to learn to cooperate (think prisoner's dilemma) but in more complex domains.", "Generally, in repeated prisoner's dilemma, one can punish one's opponent for noncooperation.", "In this paper, they design an apporach to learn to cooperate in a more complex game, like a hybrid pong meets prisoner's dilemma game.", "This is fun but I did not find it particularly surprising from a game-theoretic or from a deep learning point of view.", "From a game-theoretic point of view, the paper begins with somewhat sloppy definitions followed by a theorem that is not very surprising.", "It is basically a straightforward generalization of the idea of punishing, which is common in \"folk theorems\" from game theory, to give a particular equilibrium for cooperating in Markov games.", "Many Markov games do not have a cooperative equilibrium, so this paper restricts attention to those that do.", "Even in games where there is a cooperative solution that maximizes the total welfare, it is not clear why players would choose to do so.", "When the game is symmetric, this might be \"the natural\" solution", "but in general it is far from clear why all players would want to maximize the total payoff.", "The paper follows with some fun experiments implementing these new game theory notions.", "Unfortunately, since the game theory was not particularly well-motivated,", "I did not find the overall story compelling.", "It is perhaps interesting that one can make deep learning learn to cooperate,", "but one could have illustrated the game theory equally well with other techniques.", "In contrast, the paper \"Coco-Q: Learning in Stochastic Games with Side Payments\" by Sodomka et. al. is an example where they took a well-motivated game theoretic cooperative solution concept and explored how to implement that with reinforcement learning.", "I would think that generalizing such solution concepts to stochastic games and/or deep learning might be more interesting.", "It should also be noted that I was asked to review another ICLR submission entitled \"CONSEQUENTIALIST CONDITIONAL COOPERATION IN SOCIAL DILEMMAS WITH IMPERFECT INFORMATION\"", "which amazingly introduced the same \"Pong Player’s Dilemma\" game as in this paper.", "Notice the following suspiciously similar paragraphs from the two papers:From \"MAINTAINING COOPERATION IN COMPLEX SOCIAL DILEMMAS USING DEEP REINFORCEMENT LEARNING\":", "We also look at an environment where strategies must be learned from raw pixels.", "We use the method of Tampuu et al. (2017) to alter the reward structure of Atari Pong so that whenever an agent scores a point they receive a reward of 1 and the other player receives −2.", "We refer to this game as the Pong Player’s Dilemma (PPD).", "In the PPD the only (jointly) winning move is not to play.", "However, a fully cooperative agent can be exploited by a defector.", "From \"CONSEQUENTIALIST CONDITIONAL COOPERATION IN SOCIAL DILEMMAS WITH IMPERFECT INFORMATION\":", "To demonstrate this we follow the method of Tampuu et al. 
(2017) to construct a version of Atari Pong which makes the game into a social dilemma.", "In what we call the Pong Player’s Dilemma (PPD) when an agent scores they gain a reward of 1 but the partner receives a reward of −2.", "Thus, in the PPD the only (jointly) winning move is not to play,", "but selfish agents are again tempted to defect and try to score points even though this decreases total social reward.", "We see that CCC is a successful, robust, and simple strategy in this game." ]
[ "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "non-arg", "fact", "reference", "quote", "quote", "quote", "quote", "quote", "reference", "quote", "quote", "quote", "quote", "quote" ]
BJBWMqqlf
[ "This paper proposes a new theoretically-motivated method for combining reinforcement learning and imitation learning for acquiring policies that are as good as or superior to the expert. ", "The method assumes access to an expert value function (which could be trained using expert roll-outs) and uses the value function to shape the reward function and allow for truncated-horizon policy search. ", "The algorithm can gracefully handle suboptimal demonstrations/value functions, ", "since the demonstrations are only used for reward shaping, ", "and the experiments demonstrate faster convergence and better performance compared to RL and AggreVaTeD on a range of simulated control domains. ", "The paper is well-written and easy to understand.", "My main feedback is with regard to the experiments: I appreciate that the experiments used 25 random seeds! ", "This provides a convincing evaluation.", "It would be nice to see experimental results on even higher dimensional domains such as the ant, humanoid, or vision-based tasks, ", "since the experiments seem to suggest that the benefit of the proposed method is diminished in the swimmer and hopper domains compared to the simpler settings.", "Since the method uses demonstrations, ", "it would be nice to see three additional comparisons: (a) training with supervised learning on the expert roll-outs, (b) initializing THOR and AggreVaTeD (k=1) with a policy trained with supervised learning, and (c) initializing TRPO with a policy trained with supervised learning. ", "There doesn't seem to be any reason not to initialize in such a way, when expert demonstrations are available, ", "and such an initialization should likely provide a significant speed boost in training for all methods.", "How many demonstrations were used for training the value function in each domain? ", "I did not see this information in the paper.", "With regard to the method and discussion: The paper discusses the connection between the proposed method and short-horizon imitation and long-horizon RL, describing the method as a midway point. ", "It would also be interesting to see a discussion of the relation to inverse RL, ", "which considers long-term outcomes from expert demonstrations. ", "For example, MacGlashn & Littman propose a midway point between imitation and inverse RL [1].", "Theoretically, would it make sense to anneal k from small to large? (to learn the most effectively from the smallest amount of experience)", "[1] https://www.ijcai.org/Proceedings/15/Papers/519.pdf", "Minor feedback: - The RHS of the first inequality in the proof of Thm 3.3 seems to have an error in the indexing of i and exponent, which differs from the line before and line after" ]
[ "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "fact", "request", "evaluation", "evaluation", "request", "fact", "fact", "request", "fact", "fact", "non-arg", "reference", "fact" ]
B1A7YkceM
[ "The authors propose a procedure to generate an ensemble of sparse structured models. ", "To do this, the authors propose to (1) sample models using SG-MCMC with group sparse prior, (2) prune hidden units with small weights, (3) and retrain weights by optimizing each pruned model. ", "The ensemble is applied to MNIST classification and language modelling on PTB dataset. ", "I have two major concerns on the paper. ", "First, the proposed procedure is quite empirically designed. ", "So, it is difficult to understand why it works well in some problems. ", "Particularly. the justification on the retraining phase is weak. ", "It seems more like to use SG-MCMC to *initialize* models which will then be *optimized* to find MAP with the sparse-model constraints. ", "The second problem is about the baselines in the MNIST experiments. ", "The FNN-300-100 model without dropout, batch-norm, etc. seems unreasonably weak baseline. ", "So, the results on Table 1 on this small network is not much informative practically. ", "Lastly, I also found a significant effort is also desired to improve the writing. ", "The following reference also needs to be discussed in the context of using SG-MCMC in RNN.", "- \"Scalable Bayesian Learning of Recurrent Neural Networks for Language Modeling\", Zhe Gan*, Chunyuan Li*, Changyou Chen, Yunchen Pu, Qinliang Su, Lawrence Carin" ]
[ "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "request", "reference" ]
B1P-gBclf
[ "The quality of the paper is good, and clarity is mostly good. ", "The proposed metric is interesting, ", "but it is hard to judge the significance without more thorough experiments demonstrating that it works in practice.", "Pros:- clear definitions of terms", " - overall outline of paper is good", " - novel metric", "Cons - text is a bit over-wordy, and flow/meaning sometimes get lost. ", "A strict editor would be helpful, ", "because the underlying content is good", " - odd that your definition of generalization in GANs appears immediately preceding the section titled \"Generalisation in GANs\"", " - the paragraph at the end of the \"Generalisation in GANs\" section is confusing. ", "I think this section and the previous (\"The objective of unsupervised learning\") could be combined, removing some repetition, adding some subtitles to improve clarity. ", "This would cut down the text a bit to make space for more experiments.", " - why is your definition of generalization that the test set distance is strictly less than training set ? ", "I would think this should be less-than-or-equal", " - there is a sentence that doesn't end at the top of p.3: \"... the original GAN paper showed that [ends here]\"", " - should state in the abstract what your \"notion of generalization\" for gans is, instead of being vague about it", " - more experiments showing a comparison of the proposed metric to others (e.g. inception score, Mturk assessments of sample quality, etc.) would be necessary to find the metric convincing", " - what is a \"pushforward measure\"? (p.2)", " - the related work section is well-written and interesting, ", "but it's a bit odd to have it at the end. ", "Earlier in the work (e.g. before experiments and discussion) would allow the comparison with MMD to inform the context of the introduction", " - there are some errors in figures that I think were all mentioned by previous commentators." ]
[ "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "request", "evaluation", "non-arg", "evaluation", "fact", "request", "request", "non-arg", "evaluation", "evaluation", "request", "evaluation" ]
rkCi3T3lG
[ "Summary: The paper proposes a new dataset for reading comprehension, called DuoRC. ", "The questions and answers in the DuoRC dataset are created from different versions of a movie plot narrating the same underlying story. ", "The DuoRC dataset offers the following challenges compared to the existing reading comprehension (RC) datasets – ", "1) low lexical overlap between questions and their corresponding passages, ", "2) requires use of common-sense knowledge to answer the question, ", "3) requires reasoning across multiples sentences to answer the question, ", "4) consists of those questions as well that cannot be answered from the given passage. ", "The paper experiments with two types of models ", "– 1) a model which only predicts the span in a document and ", "2) a model which generates the answer after predicting the span. ", "Both these models are built off of an existing model on SQuAD – the Bidirectional Attention Flow (BiDAF) model. ", "The experimental results show that the span based model performs better than the model which generates the answers. ", "But the accuracy of both the models is significantly lower than that of their base model (BiDAF) on SQuAD, demonstrating the difficulty of the DuoRC dataset. ", "Strengths:1.\tThe data collection process is interesting. ", "The challenges in the proposed dataset as outlined in the paper seem worth pushing for.", "2.\tThe paper is well written making it easy to follow.", "3.\tThe experiments and analysis presented in the paper are insightful.", "Weaknesses:1.\tIt would be good if the paper can throw some more light on the comparison between the existing MovieQA dataset and the proposed DuoRC dataset, other than the size.", "2.\tThe dataset is motivated as consisting of four challenges (described in the summary above) that do not exist in the existing RC datasets.", "However, the paper lacks an analysis on what percentage of questions in the proposed dataset belong to each category of the four challenges. ", "Such an analysis would helpful to accurately get an estimate of the proportion of these challenges in the dataset.", "3.\tIt is not clear from the paper how should the questions which are unanswerable be evaluated. ", "As in, what should be the ground-truth answer against which the answers should such questions be evaluated. ", "Clearly, string matching would not work ", "because a model could say “don’t know” whereas some other model could say “unanswerable”. ", "So, does the training data have a particular string as the ground truth answer for such questions, so that a model can just be trained to spit out that particular string when it thinks it can’t answer the questions? ", "4.\tOne of the observations made in the paper is that “training on one dataset and evaluating on the other results in a drop in the performance.” ", "However, in table 4, evaluating on Paraphrase RC is better when trained on Self RC as opposed to when trained on Paraphrase RC. ", "This seems to be in conflict with the observation drawn in the paper. ", "Could authors please clarify this? 
", "Also, could authors please throw some light on why this might be happening?", "5.\tIn the third phase of data collection (Paraphrase RC), was waiting for 2-3 weeks the only step taken in order to ensure that the workers for this stage are different from those in stage 2, or was something more sophisticated implemented which did not allow a worker who has worked in stage 2 to be able to participate in stage 3?", "6.\tTypo: Dataset section, phrases --> phases", "Overall: The challenges proposed in the DuoRC dataset are interesting. ", "The paper is well written ", "and the experiments are interesting. ", "However, there are some questions (as mentioned in the Weaknesses section) which need to be clarified before I can recommend acceptance for the paper." ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "fact", "fact", "request", "evaluation", "evaluation", "fact", "fact", "non-arg", "fact", "fact", "fact", "request", "request", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation" ]
S1GVQk5gG
[ "This paper is about rethinking how to use encoder-decoder architectures for representation learning when the training objective contains a similarity between the decoder output and the encoding of something else.", "For example, for the skip-thought RNN encoder-decoder that encodes a sentence and decodes neighboring sentences: rather than use the final encoder hidden state as the representation of the sentence, the paper uses some function of the decoder,", "since the training objective is to maximize each dot product between a decoder hidden state and the embedding of a context word.", "If dot product (or cosine similarity) is going to be used as the similarity function for the representation, then it makes more sense, the paper argues, to use the decoder hidden state(s) as the representation of the input sentence.", "The paper considers both averaging and concatenating hidden states.", "One difficulty here is that the neighboring sentences are typically not available in downstream tasks,", "so the paper runs the decoder to produce a predicted sentence one word-at-a-time, using the predicted words as inputs to the decoder RNNs.", "Then those decoder RNN hidden states are used via averaging or concatenation", "as the representation of a sentence in downstream tasks.", "This paper is a source of contributions,", "but I think in its current form it is not yet ready for publication.", "Pros: I think it makes sense to pay attention to the training objective when deciding how to use the model for downstream tasks.", "I like the empirical investigation of combining RNN and BOW encoders and decoders.", "The experimental results show that a single encoder-decoder model can be trained and then two different functions of it can be used at test time for different kinds of tasks (RNN-RNN for supervised transfer and RNN-RNN-mean for unsupervised transfer).", "I think this is an interesting result.", "Cons: I have several concerns.", "The first relate to the theoretical arguments and their empirical support.", "Regarding the theoretical arguments: First, the paper discusses the notion of an \"optimal representation space\" and describes the argument as theoretical,", "but I don't see much of a theoretical argument here.", "As far as I can tell, the paper does not formally define its terms or define in what sense the representation space is \"optimal\".", "I can only find heuristic statements like those in the paragraph in Sec 3.2 that begins \"These observations...\".", "What exactly is meant formally by statements like \"any model where the decoder is log-linear with respect to the encoder\" or \"that distance is optimal with respect to the model’s objective\"?", "It seems like the paper may want to start with formal definitions of an encoder and a decoder, then define what is meant by a \"decoder that is log-linear with respect to the encoder\", and define what it means for a distance to be optimal with respect to a training objective.", "That seems necessary in order to provide the foundation to make any theoretical statement about choices for encoders, decoders, and training objectives.", "I am still not exactly sure what that theoretical statement might look like,", "but maybe defining the terms would help the authors get started in heading toward the goal of defining a statement to prove.", "Second, the paper's theoretical story seems to diverge almost immediately from the choices used in the model and experimental procedure.", "For example, in Sec. 
3.2, it is stated that cosine similarity \"is the appropriate similarity measure in the case of log-linear decoders.\"", "But the associated footnote (footnote 2) seems to admit a contradiction here by noting that actually the appropriate similarity measure is dot product:", "\"Evidently, the correct measure is actually the dot product.\"", "This is a bit confusing.", "It also raises a question: If cosine similarity will be used later for computing similarity, then why not try using cosine similarity in place of dot product in the model?", "That is, replace \"u_w \\cdot h_i\" in Eq. (2) with \"cos(u_w, h_i)\".", "If the paper's story is correct (and if I understand the ideas correctly), training with cosine similarity should work better than training with dot product,", "because the similarity function used during training is more similar to that used in testing.", "This seems like a natural experiment to try.", "Other natural experiments would be to vary both the similarity function used in the model during training and the similarity function used at test time.", "The authors' claims could be validated if the optimal choices always use the same choice for the training and test-time similarity functions.", "That is, if Euclidean distance is used during training, then will Euclidean distance be the best choice at test time?", "Another example of the divergence lies in the use of the skip-thought decoder on downstream tasks.", "Since the decoder hidden states depend on neighboring sentences and these are considered to be unavailable at test time,", "the paper \"unrolls\" the decoder for several steps by using it to predict words which are then used as inputs on the next time step.", "To me, this is a potentially very significant difference between training and testing.", "Since much of the paper is about reconciling training and testing conditions in terms of the representation space and similarity function,", "this difference feels like a divergence from the theoretical story.", "It is only briefly mentioned at the end of Sec. 3.3 and then discussed again later in the experiments section.", "I think this should be described in more detail in Section 3.3", "because it is an important note about how the model will be used in practice.", "It would be nice to be able to quantify the impact (of unrolling the decoder with predicted words) by, for example, using the decoder on a downstream evaluation dataset that has neighboring sentences in it.", "Then the actual neighboring sentences can be used as inputs to the decoder when it is unrolled, which would be closer to the training conditions", "and we could empirically see the difference.", "Perhaps there is an evaluation dataset with ordered sentences so that the authors could empirically compare using real vs predicted inputs to the decoder on a downstream task?", "The above experiments might help to better connect the experiments section with the theoretical arguments.", "Other concerns, including more specific points, are below: Sec. 
2: When describing the inferior performance of RNN-based models on unsupervised sentence similarity tasks, the paper states: \"While this shortcoming of SkipThought and RNN-based models in general has been pointed out, to the best of our knowledge, it has never been systematically addressed in the literature before.\"", "The authors may want to check Wieting & Gimpel (2017) (and its related work), which investigates the inferiority of LSTMs compared to word averaging for unsupervised sentence similarity tasks.", "They found that averaging the encoder hidden states can work better than using the final encoder hidden state;", "the authors may want to try that as well.", "Sec. 3.2: When describing FastSent, the paper includes \"Due to the model's simplicity, it is particularly fast to train and evaluate, yet has shown state-of-the-art performance in unsupervised similarity tasks (Hill et al., 2015).\"", "I don't think it makes much sense to cite the SimLex-999 paper in this context,", "as that is a word similarity task and that paper does not include any results of FastSent.", "Maybe the Hill et al. (2016) FastSent citation was meant instead?", "But in that case, I don't think it is quite accurate to make the claim that FastSent is SOTA on unsupervised similarity tasks.", "In the original FastSent paper (Hill et al., 2016), FastSent is not as good as CPHRASE or \"DictRep BOW+embs\" on average across the unsupervised sentence similarity evaluations.", "FastSent is also not as good as sent2vec from Pagliardini et al. (2017) or charagram-phrase from Wieting et al. (2016).", "Sec. 3.3: In describing skip-thought, the paper states: \"While computationally complex, it is currently the state-of-the-art model for supervised transfer tasks (Hill et al., 2016).\"", "I don't think it is accurate to state that skip-thought is still state-of-the-art for supervised transfer tasks, in light of recent work (Conneau et al., 2017; Gan et al., 2017).", "Sec. 3.3: When discussing averaging the decoder hidden states, the paper states: \"Intuitively, this corresponds to destroying the word order information the decoder has learned.\"", "I'm not sure this strong language can be justified here.", "Is there any evidence to suggest that averaging the decoder hidden states will destroy word order information?", "The hidden states may be representing word order information in a way that is robust to averaging, i.e., in a way such that the average of the hidden states can still lead to the reconstruction of the word order.", "Sec. 4: What does it mean to use an RNN encoder and a BOW decoder?", "This seems to be a strongly-performing setting and competitive with RNN-mean,", "but I don't know exactly what this means.", "Minor things: Sec. 3.1: When defining v_w, it would be helpful to make explicit that it's in \\mathbb{R}^d.", "Sec. 4: For TREC question type classification, I think the correct citation should be Li & Roth (2002) instead of Voorhees (2002).", "Sec. 5: I think there's a typo in the following sentence: \"Our results show that, for example, the raw encoder output for SkipThought (RNN-RNN) achieves strong performance on supervised transfer, whilst its mean decoder output (RNN-mean) achieves strong performance on supervised transfer.\"", "I think \"unsupervised\" was meant in the latter mention.", "References: Conneau, A., Kiela, D., Schwenk, H., Barrault, L., & Bordes, A. (2017). Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. 
EMNLP.", "Gan, Z., Pu, Y., Henao, R., Li, C., He, X., & Carin, L. (2017). Learning generic sentence representations using convolutional neural networks. EMNLP.", "Li, X., & Roth, D. (2002). Learning question classifiers. COLING.", "Pagliardini, M., Gupta, P., & Jaggi, M. (2018). Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features. arXiv preprint arXiv:1703.02507.", "Wieting, J., Bansal, M., Gimpel, K., & Livescu, K. (2016). Charagram: Embedding words and sentences via character n-grams. EMNLP.", "Wieting, J., & Gimpel, K. (2017). Revisiting Recurrent Networks for Paraphrastic Sentence Embeddings. ACL." ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "non-arg", "request", "evaluation", "evaluation", "request", "evaluation", "fact", "fact", "quote", "evaluation", "evaluation", "fact", "fact", "fact", "request", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "request", "evaluation", "request", "request", "request", "non-arg", "evaluation", "fact", "request", "fact", "request", "fact", "evaluation", "fact", "non-arg", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "request", "fact", "request", "evaluation", "evaluation", "request", "request", "request", "request", "reference", "reference", "reference", "reference", "reference", "reference" ]
BJggQbceG
[ "Summary: This paper proposes an adversarial learning framework for machine comprehension task. ", "Specifically, authors consider a reader network which learns to answer the question by reading the passage and a narrator network which learns to obfuscate the passage so that the reader can fail in its task. ", "Authors report results in 3 different reading comprehension datasets ", "and the proposed learning framework results in improving the performance of GMemN2N.", "My Comments: This paper is a direct application of adversarial learning to the task of reading comprehension. ", "It is a reasonable idea ", "and authors indeed show that it works.", "1. The paper needs a lot of editing. ", "Please check the minor comments.", "2. Why is the adversary called narrator network? ", "It is bit confusing ", "because the job of that network is to obfuscate the passage.", "3. Why do you motivate the learning method using self-play? ", "This is just using the idea of adversarial learning (like GAN) and it is not related to self-play.", "4. In section 2, first paragraph, authors mention that the narrator prevents catastrophic forgetting. ", "How is this happening? ", "Can you elaborate more?", "5. The learning framework is not explained in a precise way. ", "What do you mean by re-initializing and retraining the narrator? ", "Isn’t it costly to reinitialize the network and retrain it for every turn? ", "How many such epochs are done? ", "You say that test set also contains obfuscated documents. ", "Is it only for the validation set? ", "Can you please explain if you use obfuscation when you report the final test performance too? ", "It would be more clear if you can provide a complete pseudo-code of the learning procedure.", "6. How does the narrator choose which word to obfuscate? ", "Do you run the narrator model with all possible obfuscations and pick the best choice?", "7. Why don’t you treat number of hops as a hyper-parameter and choose it based on validation set? ", "I would like to see the results in Table 1 where you choose number of hops for each of the three models based on validation set.", "8. In figure 2, how are rounds constructed? ", "Does the model sees the same document again and again for 100 times or each time it sees a random document and you sample documents with replacement? ", "This will be clear if you provide the pseudo-code for learning.", "9. I do not understand author's’ justification for figure-3. ", "Is it the case that the model learns to attend to last sentences for all the questions? ", "Or where it attends varies across examples?", "10. Are you willing to release the code for reproducing the results?", "Minor comments: Page 1, “exploit his own decision” should be “exploit its own decision”", "In page 2, section 2.1, sentence starting with “Indeed, a too low percentage …” needs to be fixed.", "Page 3, “forgetting is compensate” should be “forgetting is compensated”.", "Page 4, “for one sentences” needs to be fixed.", "Page 4, “unknow” should be “unknown”.", "Page 4, “??” needs to be fixed.", "Page 5, “for the two first datasets” needs to be fixed.", "Table 1, “GMenN2N” should be “GMemN2N”. 
", "In caption, is it mean accuracy or maximum accuracy?", "Page 6, “dataset was achieves” needs to be fixed.", "Page 7, “document by obfuscated this word” needs to be fixed.", "Page 7, “overall aspect of the two first readers” needs to be fixed.", "Page 8, last para, references needs to be fixed.", "Page 9, first sentence, please check grammar.", "Section 6.2, last sentence is irrelevant." ]
[ "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "request", "request", "evaluation", "fact", "request", "fact", "fact", "request", "request", "evaluation", "request", "evaluation", "request", "fact", "request", "request", "request", "request", "request", "request", "request", "request", "request", "request", "evaluation", "request", "request", "non-arg", "request", "request", "request", "request", "request", "request", "request", "request", "request", "request", "request", "request", "request", "request", "evaluation" ]
HJv0cb5xG
[ "This work addresses an important and outstanding problem: accurate long-term forecasting using deep recurrent networks.", "The technical approach seems well motivated, plausible, and potentially a good contribution,", "but the experimental work has numerous weaknesses which limit the significance of the work in current form.", "For one, the 3 datasets tested are not established as among the most suitable, well-recognized benchmarks for evaluating long-term forecasting.", "It would be far more convincing if the author’s used well-established benchmark data, for which existing best methods have already been well-tuned to get their best results.", "Otherwise, the reader is left with concerns that the author’s may not have used the best settings for the baseline method results reported, which indeed is a concern here (see below).", "One weakness with the experiments is that it is not clear that they were fair to RNN or LSTM,", "for example, in terms of giving them the same computation as the TT-RNNs.", "Section Hyper-parameter Analysis” on page 7 explains that they determined best TT rank and lags via grid search.", "But presumably larger values for rank and lag require more computation,", "so to be fair to RNN and LSTM they should be given more computation as well, for example allowing them more hidden units than TT-RNNs get, so that overall computation cost is the same for all 3 methods.", "As far as this reviewer can tell, the authors offer no experiments to show that a larger number of units for RNN or LSTM would not have helped them in improving long-term forecasting accuracies,", "so this seems like a very serious and plausible concern.", "Also, on page 6 the authors say that they tried ARMA but that it performed about 5% worse than LSTM, and thus dismissing direct comparisons of ARMA against TT-RNN.", "But they are unclear whether they gave ARMA as much hyper-parameter tuning (e.g. for number of lags) via grid search as their proposed TT-RNN benefited from.", "Again, the concern here is that the experiments are plausibly not being fair to all methods equally.", "So, due to the weaknesses in the experimental work,", "this work seems a bit premature.", "It should more clearly establish that their proposed TT-RNN are indeed performing well compared to existing SOTA." ]
[ "fact", "evaluation", "evaluation", "fact", "request", "evaluation", "evaluation", "fact", "fact", "evaluation", "request", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request" ]
HkZ8Gb9eG
[ "This paper is well constructed and written.", "It consists of a number of broad ideas regarding density estimation using transformations of autoregressive networks.", "Specifically, the authors examine models involving linear maps from past states (LAM) and recurrence relationships (RAM).", "The critical insight is that the hidden states in the LAM are not coupled allowing considerable flexibility between consecutive conditional distributions.", "This is at the expense of an increased number of parameters and a lack of information sharing.", "In contrast, the RAM transfers information between conditional densities via the coupled hidden states allowing for more constrained smooth transitions.", "The authors then explored a variety of transformations designed to increase the expressiveness of LAM and RAM.", "The authors importantly note that one important restriction on the class of transformations is the ability to evaluate the Jacobian of the transformation efficiently.", "A composite of transformations coupled with the LAM/RAM networks provides a highly expressive model for modelling arbitrary joint densities but retaining interpretable conditional structure.", "There is a rich variety of synthetic and real data studies which demonstrate that LAM and RAM consistently rank amongst the top models demonstrating potential utility for this class of models.", "Whilst the paper provides no definitive solutions, this is not the point of the work which seeks to provide a description of a general class of potentially useful models." ]
[ "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation" ]
SyZQxkmxG
[ "CONTRIBUTION The main contribution of the paper is not clearly stated. ", "To the reviewer, It seems “life-long learning” is the same as “online learning”. ", "However, the whole paper does not define what “life-long learning” is.", "The limited budget scheme is well established in the literature. ", "1. J. Hu, H. Yang, I. King, M. R. Lyu, and A. M.-C. So. Kernelized online imbalanced learning with fixed budgets. In AAAI, Austin Texas, USA, Jan. 25-30 2015. 
", "2. Y. Engel, S. Mannor, and R. Meir. The kernel recursive least-squares algorithm. IEEE Transactions on Signal Processing, 52(8):2275–2285, 2004.", "It is not clear what the new proposal in the paper.", "WRITING QUALITY The paper is not well written in a good shape. ", "Many meanings of the equations are not stated clearly, e.g., $phi$ in eq. (7). ", "Furthermore, the equation in algorithm 2 is not well formatted. ", "DETAILED COMMENTS 1. The mapping function $phi$ appears in Eq. (1) without definition.", "2. The last equation in pp. 3 defines the decision function f by an inner product. ", "In the equation, the notation x_t and i_t is not clearly defined. ", "More seriously, a comma is missed in the definition of the inner product.", "3. Some equations are labeled but never referenced, e.g., Eq. (4).", "4. The physical meaning of Eq.(7) is unclear. ", "However, this equation is the key proposal of the paper. ", "For example, what is the output of the Eq. (7)? ", "What is the main objective of Eq. (7)? ", "Moreover, what support vectors should be removed by optimizing Eq. (7)? ", "One main issue is that the notation $phi$ is not clearly defined. ", "The computation of f-y_r\\phi(s_r) makes it hard to understand. ", "Especially, the dimension of $phi$ in Eq.(7) is unknown. ", "ABOUT EXPERIMENTS 1.\tIt is unclear how to tune the hyperparameters.", "2.\tIn Table 1, the results only report the standard deviation of AUC. ", "No standard deviations of nSV and Time are reported." ]
[ "evaluation", "evaluation", "fact", "fact", "reference", "reference", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "request", "request", "request", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact" ]
Hy4_ANE-f
[ "This paper studies new off-policy policy optimization algorithm using relative entropy objective and use EM algorithm to solve it. ", "The general idea is not new, aka, formulating the MDP problem as a probabilistic inference problem. ", "There are some technical questions: 1. For parametric EM case, there is asymptotic convergence guarantee to local optima case; ", "However, for nonparametric EM case, there is no guarantee for that. ", "This is the biggest concern I have for the theoretical justification of the paper.", "2. In section 4, it is said that Retrace algorithm from Munos et al. (2016) is used for policy evaluation. ", "This is not true. ", "The Retrace algorithm, is per se, a value iteration algorithm. ", "I think the author could say using the policy evaluation version of Retrace, or use the truncated importance weights technique as used in Retrace algorithm, which is more accurate.", "Besides, a minor point: Retrace algorithm is not off-policy stable with function approximation, as shown in several recent papers, such as “Convergent Tree-Backup and Retrace with Function Approximation”. ", "But this is a minor point if the author doesn’t emphasize too much about off-policy stability.", "3. The shifting between the unconstrained multiplier formulation in Eq.9 to the constrained optimization formulation in Eq.10 should be clarified. ", "Usually, an in-depth analysis between the choice of \\lambda in multiplier formulation and the \\epsilon in the constraint should be discussed, ", "which is necessary for further theoretical analysis. ", "4. The experimental conclusions are conducted without sound evidence. ", "For example, the author claims the method to be 'highly data efficient' compared with existing approaches, ", "however, there is no strong evidence supporting this claim. ", "Overall, although the motivation of this paper is interesting, ", "I think there is still a lot of details to improve." ]
[ "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "request", "fact", "evaluation", "request", "request", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation" ]
HyjN-YPlz
[ "The manuscript proposes two objective functions based on the manifold assumption as defense mechanisms against adversarial examples. ", "The two objective functions are based on assigning low confidence values to points that are near or off the underlying (learned) data manifold while assigning high confidence values to points lying on the data manifold. ", "In particular, for an adversarial example that is distinguishable from the points on the manifold and assigned a low confidence by the model, is projected back onto the designated manifold such that the model assigns it a high confidence value. ", "The authors claim that the two objective functions proposed in this manuscript provide such a projection onto the desired manifold and assign high confidence for these adversarial points. ", "These mechanisms, together with the so-called shell wrapper around the model (a deep learning model in this case) will provide the desired defense mechanism against adversarial examples.", "The manuscript at the current stage seems to be a preliminary work that is not well matured yet. ", "The manuscript is overly verbose and the arguments seem to be weak and not fully developed yet. ", "More importantly, the experiments are very preliminary and there is much more room to deliver more comprehensive and compelling experiments." ]
[ "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "request" ]
SJ2P_-YgG
[ "The main idea of this paper is to replace the feedforward summation", "y = f(W*x + b) where x,y,b are vectors, W is a matrix by an integral \\y = f(\\int W \\x + \\b) where \\x,\\y,\\b are functions, and W is a kernel. ", "A deep neural network with this integral feedforward is called a deep function machine. ", "The motivation is along the lines of functional PCA: ", "if the vector x was obtained by discretization of some function \\x, then one encounters the curse of dimensionality as one obtains finer and finer discretization. ", "The idea of functional PCA is to view \\x as a function is some appropriate Hilbert space, and expands it in some appropriate basis. ", "This way, finer discretization does not increase the dimension of \\x (nor its approximation), but rather improves the resolution. ", "This paper takes this idea and applies it to deep neural networks. ", "Unfortunately, beyond rather obvious approximation results, the paper does not get major mileage out of this idea. ", "This approach amounts to a change of basis - ", "and therefore the resolution invariance is not surprising. ", "In the experiments, results of this method should be compared not against NNs trained on the data directly, but against NNs trained on dimension reduced version of the data (eg: first fixed number of PCA components). ", "Unfortunately, this was not done. ", "I suspect that in this case, the results would be very similar." ]
[ "evaluation", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "request", "evaluation", "evaluation" ]
rJ6Z7prxf
[ "This paper introdues NoisyNets, that are neural networks whose parameters are perturbed by a parametric noise function, and they apply them to 3 state-of-the-art deep reinforcement learning algorithms: DQN, Dueling networks and A3C.", "They obtain a substantial performance improvement over the baseline algorithms, without explaining clearly why.", "The general concept is nice,", "the paper is well written", "and the experiments are convincing,", "so to me this paper should be accepted, despite a weak analysis.", "Below are my comments for the authors. ---------------------------------", "General, conceptual comments: The second paragraph of the intro is rather nice,", "but it might be updated with recent work about exploration in RL.", "Note that more than 30 papers are submitted to ICLR 2018 mentionning this topic,", "and many things have happened since this paper was posted on arxiv (see the \"official comments\" too).", "p2: \"our NoisyNet approach requires only one extra parameter per weight\"", "Parameters in a NN are mostly weights and biases,", "so from this sentence one may understand that you close-to-double the number of parameters, which is not so few!", "If this is not what you mean, you should reformulate...", "p2: \"Though these methods often rely on a non-trainable noise of vanishing size as opposed to NoisyNet which tunes the parameter of noise by gradient descent.\"", "Two ideas seem to be collapsed here:", "the idea of diminishing noise over an experiment, exploring first and exploiting later,", "and the idea of adapting the amount of noise to a specific problem.", "It should be made clearer whether NoisyNet can address both issues and whether other algorithms do so too...", "In particular, an algorithm may adapt noise along an experiment or from an experiment to the next.", "From Fig.3, one can see that having the same initial noise in all environments is not a good idea,", "so the second mechanism may help much.", "BTW, the short section in Appendix B about initialization of noisy networks should be moved into the main text.", "p4: the presentation of NoisyNets is not so easy to follow and could be clarified in several respects:", "- a picture could be given to better explain the structure of parameters, particularly in the case of factorised (factorized, factored?) Gaussian noise.", "- I would start with the paragraph \"Considering a linear layer [...] 
below)\" and only after this I would introduce \\theta and \\xi as a more synthetic notation.", "Later in the paper, you then have to state \"...are now noted \\xi\" several times, which I found rather clumsy.", "p5: Why do you use option (b) for DQN and Dueling and option (a) for A3C?", "The reason why (if any) should be made clear from the clearer presentation required above.", "By the way, a wild question: if you wanted to use NoisyNets in an actor-critic architecture like DDPG, would you put noise both in the actor and the critic?", "The paragraph above Fig3 raises important questions which do not get a satisfactory answer.", "Why is it that, in deterministic environments, the network does not converge to a deterministic policy, which should be able to perform better?", "Why is it that the adequate level of noise changes depending on the environment?", "By the way, are we sure that the curves of Fig3 correspond to some progress in noise tuning (that is, is the level of noise really \"better\" through time with these curves, or they they show something poorly correlated with the true reasons of success?)?", "Finally, I would be glad to see the effect of your technique on algorithms like TRPO and PPO which require a stochastic policy for exploration,", "and where I believe that the role of the KL divergence bound is mostly to prevent the level of stochasticity from collasping too quickly.", "-----------------------------------Local comments: The first sentence may make the reader think you only know about 4-5 old works about exploration.", "Pp. 1-2 : \"the approach differs ... from variational inference. [...] It also differs variational inference...\"", "If you mean it differs from variational inference in two ways, the paragraph should be reorganized.", "p2: \"At a high level our algorithm induces a randomised network for exploration, with care exploration via randomised value functions can be provably-efficient with suitable linear basis (Osband et al., 2014)\"", "=> I don't understand this sentence at all.", "At the top of p3, you may update your list with PPO and ACKTR, which are now \"classical\" baselines too.", "Appendices A1 and A2 are a lot redundant with the main text (some sentences and equations are just copy-pasted),", "this should be improved.", "The best would be to need to reject nothing to the Appendix.", "--------------------------------------- Typos, language issues: p2 the idea ... the optimization process have been => has", "p2 Though these methods often rely on a non-trainable noise of vanishing size as opposed to NoisyNet which tunes the parameter of noise by gradient descent.", "=> you should make a sentence...", "p3 the the double-DQN", "several times, an equation is cut over two lines, a line finishing with \"=\",", "which is inelegant", "You should deal better with appendices:", "Every \"Sec. Ax/By/Cz\" should be replaced by \"Appendix Ax/By/Cz\".", "Besides, the big table and the list of performances figures should themselves be put in two additional appendices", "and you should refer to them as Appendix D or E rather than \"the Appendix\"." ]
[ "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "evaluation", "request", "fact", "fact", "quote", "fact", "fact", "request", "quote", "fact", "fact", "fact", "request", "fact", "evaluation", "evaluation", "request", "request", "request", "request", "evaluation", "request", "request", "request", "evaluation", "request", "request", "request", "request", "evaluation", "evaluation", "quote", "request", "quote", "evaluation", "request", "evaluation", "request", "evaluation", "request", "quote", "request", "quote", "fact", "evaluation", "request", "request", "request", "request" ]
Bkp-xJ5xf
[ "This paper presents a so-called cross-view training for semi-supervised deep models.", "Experiments were conducted on various data sets", "and experimental results were reported.", "Pros:* Studying semi-supervised learning techniques for deep models is of practical significance.", "Cons:* The novelty of this paper is marginal.", "The use of unlabeled data is in fact a self-training process.", "Leveraging the sub-regions of the image to improve performance is not new and has been widely-studied in image classification and retrieval.", "* The proposed approach suffers from a technical weakness or flaw.", "For the self-labeled data, the prediction of each view is enforced to be same as the assigned self-labeling.", "However, since each view related to a sub-region of the image (especially when the model is not so deep), it is less likely for this region to contain the representation of the concepts", "(e.g., some local region of an image with a horse may exhibit only grass);", "enforcing the prediction of this view to be the same self-labeled concepts (e.g,“horse”) may drive the prediction away from what it should be", "( e..g, it will make the network to predict grass as horse).", "Such a flaw may affect the final performance of the proposed approach.", "* The word “view” in this paper is misleading.", "The “view” in this paper is corresponding to actually sub-regions in the images", "* The experimental results indicate that the proposed approach fails to perform better than the compared baselines in table 2, which reduces the practical significance of the proposed approach." ]
[ "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation" ]
H1q18tjxM
[ "This paper presents a nearest-neighbor based continuous control policy.", "Two algorithms are presented: NN-1 runs open-loop trajectories from the beginning state,", "and NN-2 runs a state-condition policy that retrieves nearest state-action tuples for each state.", "The overall algorithm is very simple to implement and can do reasonably well on some simple control tasks,", "but quickly gets overwhelmed by higher-dimensional and stochastic environments.", "It is very similar to \"Learning to Steer on Winding Tracks Using Semi-Parametric Control Policies\" and is effectively an indirect form of tile coding (each could be seen as a fixed voronoi cell).", "I am sure this idea has been tried before in the 90s", "but I am not familiar enough with all the literature to find it", "(A quick google search brings this up: Reinforcement Learning of Active Recognition Behaviors, with a chapter on nearest-neighbor lookup for policies: https://people.eecs.berkeley.edu/~trevor/papers/1997-045/node3.html).", "Although I believe there is work to be done in the current round of RL research using nearest neighbor policies,", "I don't believe this paper delves very far into pushing new ideas", "(even a simple adaptive distance metric could have provided some interesting results, nevermind doing a learned metric in a latent space to allow for rapid retrainig of a policy on new domains....),", "and for that reason I don't think it has a place as a conference paper at ICLR.", "I would suggest its submission to a workshop where it might have more use triggering discussion of further work in this area." ]
[ "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "non-arg", "reference", "evaluation", "evaluation", "request", "evaluation", "request" ]
BkYwge9ef
[ "There could be an interesting idea here, ", "but the limitations and applicability of the proposed approach are not clear yet. ", "More analysis should be done to clarify its potential. ", "Besides, the paper seriously needs to be reworked. ", "The text in general, but also the notation, should be improved.", "In my opinion, the authors should explain how to apply their algorithm to more general network architectures, and test it, in particular to convnets. ", "An experiment on a modern dataset beyond MNIST would also be a welcome addition.", "Some comments:- The method is present as a fully-connected network training procedure. ", "But the resulting network is not really fully-connected, but modular. ", "This is clear in Fig. 1 and in the explanation in Sect. 3.1. ", "The newly added hidden neurons at every iteration do not project to the previous pool of hidden neurons. ", "It should be stressed that the networks end up with this non-conventional “tiled” architecture. ", "Are there studies where the capacity of such networks is investigated, when all the weights are trained concurrently.", "- It wasn’t clear to me whether the memory reallocation could be easily implemented in hardware. ", "A few references or remarks on this issue would be welcome.", "- The work “Efficient supervised learning in networks with binary synapses” by Baldassi et al. (PNAS 2007) should be cited. ", "Although usually ignored by the deep learning community, it actually was a pioneering study on the use of low resolution weights during inference while allowing for auxiliary variables during learning.", "- Coming back my main point above, I didn’t really get the discussion on Sect. 5.3. ", "Why didn’t the authors test their algorithm on a convnet? ", "Are there any obstacles in doing so? ", "It seems quite important to understand this point, ", "as the paper appeals to technical applications and convolution seems hard to sidestep currently.", "- Fig. 3: xx-axis: define storage efficiency and storage requirement.", "- Fig. 4: What’s an RSBL? ", "Acronyms should be defined.", "- Overall, language and notation should really be refined. ", "I had a hard time reading Algorithm 1, ", "as the notation is not even defined anywhere. ", "And this problem extends throughout the paper.", "For example, just looking at Sect. 4.1, “training and testing data x is normalized…”, if x is not properly defined, it’s best to omit it; ", "“… 2-dimentonal…”, at least major typos should be scanned and corrected." ]
[ "evaluation", "evaluation", "request", "request", "request", "request", "request", "fact", "fact", "fact", "fact", "fact", "non-arg", "evaluation", "request", "request", "fact", "evaluation", "non-arg", "non-arg", "evaluation", "evaluation", "request", "request", "request", "request", "evaluation", "fact", "fact", "request", "request" ]
HJfRKPFeM
[ "SUMMARY. The paper presents an extension of word2vec for structured features.", "The authors introduced a new compatibility function between features and, as in the skipgram approach, they propose a variation of negative sampling to deal with structured features.", "The learned representation of features is tested on a recommendation-like task. ", "---------- OVERALL JUDGMENT The paper is not clear ", "and thus I am not sure what I can learn from it.", "From what is written on the paper I have trouble to understand the definition of the model the authors propose and also an actual NLP task where the representation induced by the model can be useful.", "For this reason, I would suggest the authors make clear with a more formal notation, and the use of examples, what the model is supposed to achieve.", "---------- DETAILED COMMENTS When the authors refer to word2vec is not clear if they are referring to skipgram or cbow algorithm, ", "please make it clear.", "Bottom of page one: \"a positive example is 'semantic'\", ", "please, use another expression to describe observable examples, ", "'semantic' does not make sense in this context.", "Levi and Goldberg (2014) do not say anything about factorization machines, ", "could the authors clarify this point?", "Equation (4), what do i and j stand for? ", "what does \\beta represent? ", "is it the embedding vector? ", "How is this formula related to skipgram or cbow?", "The introduction of structured deep-in factorization machine should be more clear with examples that give the intuition on the rationale of the model.", "The experimental section is rather poor, ", "first, the authors only compare themselves with word2ve (cbow), ", "it is not clear what the reader should learn from the results the authors got.", "Finally, the most striking flaw of this paper is the lack of references to previous works on word embeddings and feature representation, ", "I would suggest the author check and compare themselves with previous work on this topic." ]
[ "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "request", "evaluation", "request", "quote", "request", "evaluation", "fact", "request", "request", "request", "request", "request", "request", "evaluation", "fact", "evaluation", "evaluation", "request" ]
HJzYCPDlf
[ "Twin Networks: Using the Future as a Regularizer", "** PAPER SUMMARY ** The authors propose to regularize RNN for sequence prediction by forcing states of the main forward RNN to match the state of a secondary backward RNN.", "Both RNNs are trained jointly and only the forward model is used at test time.", "Experiments on conditional generation (speech recognition, image captioning), and unconditional generation (MNIST pixel RNN, language models) show the effectiveness of the regularizer.", "** REVIEW SUMMARY ** The paper reads well, has sufficient reference.", "The idea is simple and well explained.", "Positive empirial results support the proposed regularizer.", "** DETAILED REVIEW ** Overall, this is a good paper.", "I have a few suggestions along the text but nothing major.", "In related work, I would cite co-training approaches.", "In effect, you have two view of a point in time, its past and its future and you force these two views to agree,", "see (Blum and Mitchell, 1998) or Xu, Chang, Dacheng Tao, and Chao Xu. \"A survey on multi-view learning.\" arXiv preprint arXiv:1304.5634 (2013).", "I would also relate your work to distillation/model compression which tries to get one network to behave like another.", "On that point, is it important to train the forward and backward network jointly or could the backward network be pre-trained?", "In section 2, it is not obvious to me that the regularizer (4) would not be ignored in absence of regularization on the output matrix.", "I mean, the regularizer could push h^b to small norm, compensating with higher norm for the output word embeddings.", "Could you comment why this would not happen?", "In Section 4.2, you need to refer to Table 2 in the text.", "You also need to define the evaluation metrics used.", "In this section, why are you not reporting the results from the original Show&Tell paper?", "How does your implementation compare to the original work?", "On unconditional generation, your hypothesis on uncertainty is interesting and could be tested.", "You could inject uncertainty in the captioning task for instance, e.g. consider that multiple version of each word e.g. dogA, dogB, docC which are alternatively used instead of dog with predefined substitution rates.", "Would your regularizer still be helpful there?", "At which point would it break?" ]
[ "non-arg", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "request", "fact", "reference", "request", "request", "evaluation", "fact", "request", "request", "request", "request", "request", "evaluation", "request", "request", "request" ]
ry9RWezWM
[ "The authors purpose a method for creating mini batches for a student network by using a second learned representation space to dynamically selecting examples by their 'easiness and true diverseness'. ", "The framework is detailed ", "and results on MNIST, cifar10 and fashion-MNIST are presented. ", "The work presented is novel but there are some notable omissions: ", " - there are no specific numbers presented to back up the improvement claims; ", "graphs are presented but not specific numeric results", "- there is limited discussion of the computational cost of the framework presented ", "- there is no comparison to a baseline in which the additional learning cycles used for learning the embedding are used for training the student model.", "- only small data sets are evaluated. ", "This is unfortunate ", "because if there are to be large gains from this approach, it seems that they are more likely to be found in the domain of large scale problems, than toy data sets like mnist." ]
[ "fact", "evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation" ]
HyvZDmueM
[ "This paper proposed a new optimization framework for semi-supervised learning based on derived inversion scheme for deep neural networks. ", "The numerical experiments show a significant improvement in accuracy of the approach." ]
[ "fact", "evaluation" ]
ByvABDcxz
[ "The authors present an algorithm for training ensembles of policy networks that regularly mixes different policies in the ensemble together by distilling a mixture of two policies into a single policy network, adding it to the ensemble and selecting the strongest networks to remain (under certain definitions of a \"strong\" network). ", "The experiments compare favorably against PPO and A2C baselines on a variety of MuJoCo tasks, ", "although I would appreciate a wall-time comparison as well, ", "as training the \"crossover\" network is presumably time-consuming.", "It seems that for much of the paper, the authors could dispense with the genetic terminology altogether - and I mean that as a compliment. ", "There are few if any valuable ideas in the field of evolutionary computing ", "and I am glad to see the authors use sensible gradient-based learning for GPO, even if it makes it depart from what many in the field would consider \"evolutionary\" computing. ", "Another point on terminology that is important to emphasize - the method for training the crossover network by direct supervised learning from expert trajectories is technically not imitation learning but behavioral cloning. ", "I would perhaps even call this a distillation network rather than a crossover network. ", "In many robotics tasks behavioral cloning is known for overfitting to expert trajectories, ", "but that may not be a problem in this setting as \"expert\" trajectories can be generated in unlimited quantities." ]
[ "fact", "fact", "request", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact" ]
BkSq8vBxG
[ "The authors report a number of experiments using off-the-shelf sentence embedding methods for performing extractive summarisation, using a number of simple methods for choosing the extracted sentences. ", "Unfortunately the contribution is too minor, and the work too incremental, to be worthy of a place at a top-tier international conference such as ICLR. ", "The overall presentation is also below the required standard. ", "The work would be better suited for a focused summarisation workshop, ", "where there would be more interest from the participants.", "Some of the statements motivating the work are questionable. ", "I don't know if sentence vectors *in particular* have been especially successful in recent NLP (unless we count neural MT with attention as using \"sentence vectors\"). ", "It's also not the case that the sentence reordering and text simplification problems have been solved, as is suggested on p.2. ", "The most effective method is a simple greedy technique. ", "I'm not sure I'd describe this as being \"based on fundamental principles of vector semantics\" (p.4).", "The citations often have the authors mentioned twice.", "The reference to \"making or breaking applications\" in the conclusion strikes me as premature to say the least." ]
[ "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation" ]
HkN9lyRxG
[ "This paper proposes to bring together multiple inductive biases that hope to correct for inconsistencies in sequence decoding. ", "Building on previous works that utilize modified objectives to generate sequences, this work proposes to optimize for the parameters of a pre-defined combination of various sub-objectives. ", "The human evaluation is straight-forward and meaningful to compensate for the well-known inaccuracies of automatic evaluation. ", "While the paper points out that they introduce multiple inductive biases that are useful to produce human-like sentences, ", "it is not entirely correct that the objective is being learnt as claimed in portions of the paper. ", "I would like this point to be clarified better in the paper. ", "I think showing results on grounded generation tasks like machine translation or image-captioning would make a stronger case for evaluating relevance. ", "I would like to see comparisons on these tasks." ]
[ "fact", "fact", "evaluation", "fact", "evaluation", "request", "request", "request" ]
Sye2eNDxM
[ "This paper aims to learn hierarchical policies by using a recursive policy structure regulated by a stochastic temporal grammar.", "The experiments show that the method is better than a flat policy for learning a simple set of block-related skills in minecraft (find, get, put, stack) ", "and generalizes better to a modification of the environment (size of room).", "The sequence of subtasks generated by the policy are interpretable.\\n\\n", "Strengths:\\n- The grammar and policies are trained using a sparse reward upon task completion. ", "\\n- The method is well ablated; ", "Figures 4 and 5 answered most questions I had while reading.\\n", "- Theoretically, the method makes few assumptions about the environment and the relationships between tasks.\\n", "- The interpretability of the final behaviors is a good result. ", "\\n\\nWeaknesses:\\n- The implementation gives the agent a -0.5 reward if it generates a currently unexecutable goal g\\u2019. ", "Providing this reward requires knowing the full state of the world. ", "If this hack is required, then this method would not be useful in a real world setting, ", "defeating the purpose of the sparse reward mentioned above. ", "I would really like to see how the method performs without this hack. \\n", "- There are no comparisons to other multitask or hierarchical methods. ", "Progressive Networks or Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning seem like natural comparisons.\\n", "- A video to show what the environments and tasks look like during execution would be helpful.\\n", "- The performances of the different ablations are rather close. ", "Please a standard deviation over multiple training runs. ", "Also, why does figure 4.b not include a flat policy?\\n", "- The stages are ordered in a semantically meaningful order (find is the first stage), ", "but the authors claim that the order is arbitrary. ", "If this claim is going to be included in the paper, it needs to be proven (results shown for random orderings) ", "because right now I do not believe it. ", "\\n\\nQuality:\\nThe method does provide hierarchical and interpretable policies for executing instructions, ", "this is a meaningful direction to work on.", "\\n\\nClarity:\\nAlthough the method is complicated, the paper was understandable.", "\\n\\nOriginality and significance:\\nAlthough the method is interesting, I am worried that the environment has been too tailored for the method, ", "and that it would fail in realistic scenarios. ", "The results would be more significant if the tasks had an additional degree of complexity,", "e.g. \\u201cput blue block next to the green block\\u201d \\u201cget the blue block in room 2\\u201d. ", "Then the sequences of subtasks would be a bit less linear", "(e.g., first need to find blue, then get, then find green, then put). ", "At the moment the tasks are barely more than the actions provided in the environment.", "\\n\\nAnother impedance to the paper\\u2019s significance is the number of hacks to make the method work ", "(ordering of stages, alternating policy optimization, first training each stage on only tasks of previous stage). ", "Because the method is only evaluated on one simple environment, ", "it unclear which hacks are for the method generally, and which hacks are for the method to work on the environment." ]
[ "fact", "fact", "fact", "evaluation", "fact", "evaluation", "non-arg", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "request", "fact", "evaluation", "request", "evaluation", "request", "request", "fact", "fact", "request", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "non-arg", "evaluation", "fact", "evaluation", "evaluation", "non-arg", "fact", "evaluation" ]
rk-GXLRgz
[ "This paper suggests a simple yet effective approach for learning with weak supervision. ", "This learning scenario involves two datasets, one with clean data (i.e., labeled by the true function) and one with noisy data, collected using a weak source of supervision. ", "The suggested approach assumes a teacher and student networks, ", "and builds the final representation incrementally, by taking into account the \"fidelity\" of the weak label when training the student at the final step. ", "The fidelity score is given by the teacher, after being trained over the clean data, ", "and it's used to build a cost-sensitive loss function for the students. ", "The suggested method seems to work well on several document classification tasks. ", "Overall, I liked the paper. ", "I would like the authors to consider the following questions - - Over the last 10 years or so, many different frameworks for learning with weak supervision were suggested (e.g., indirect supervision, distant supervision, response-based, constraint-based, to name a few). ", "First, I'd suggest acknowledging these works and discussing the differences to your work. ", "Second - Is your approach applicable to these frameworks? ", "It would be an interesting to compare to one of those methods (e.g., distant supervision for relation extraction using a knowledge base), and see if by incorporating fidelity score, results improve. ", "- Can this approach be applied to semi-supervised learning? ", "Is there a reason to assume the fidelity scores computed by the teacher would not improve the student in a self-training framework?", "- The paper emphasizes that the teacher uses the student's initial representation, when trained over the clean data. ", "Is it clear that this step in needed? ", "Can you add an additional variant of your framework when the fidelity score are computed by the teacher when trained from scratch? ", "using different architecture than the student?" ]
[ "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "request", "request", "request", "request", "request", "fact", "request", "request", "request" ]
Sy4HaTtlz
[ "Quality: The work focuses on a novel problem of generating text sample using GAN and a novel in-filling mechanism of words. ", "Using GAN to generate samples in adversarial setup in texts has been limited due to the mode collapse and training instability issues. ", "As a remedy to these problems an in-filling-task conditioning on the surrounding text has been proposed. ", "But, the use of the rewards at every time step (RL mechanism) to employ the actor-critic training procedure could be challenging computationally challenging.", "Clarity: The mechanism of generating the text samples using the proposed methodology has been described clearly. ", "However the description of the reinforcement learning step could have been made a bit more clear.", "Originality: The work indeed use a novel mechanism of in-filling via a conditioning approach to overcome the difficulties of GAN training in text settings. ", "There has been some work using GAN to generate adversarial examples in textual context too to check the robustness of classifiers. ", "How this current work compares with the existing such literature?", "Significance: The research problem is indeed significant ", "since the use of GAN in generating adversarial examples in image analysis has been more prevalent compared to text settings. ", "Also, the proposed actor-critic training procedure via RL methodology is indeed significant from its application in natural language processing.", "pros: (a) Human evaluations applications to several datasets show the usefulness of MaskGen over the maximum likelihood trained model in generating more realistic text samples.", "(b) Using a novel in-filling procedure to overcome the complexities in GAN training.", "(c) generation of high quality samples even with higher perplexity on ground truth set.", "cons: (a) Use of rewards at every time step to the actor-critic training procure could be computationally expensive.", "(b) How to overcome the situation where in-filling might introduce implausible text sequences with respect to the surrounding words?", "(c) Depending on the Mask quality GAN can produce low quality samples. ", "Any practical way of choosing the mask?" ]
[ "fact", "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "fact", "request", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "fact", "request" ]
H1_YBgxZz
[ "This paper presents a method for clustering based on latent representations learned from the classification of transformed data after pseudo-labellisation corresponding to applied transformation.", "Pipeline: -Data are augmented with domain-specific transformations.", "For instance, in the case of MNIST, rotations with different degrees are applied.", "All data are then labelled as \"original\" or \"transformed by ...(specific transformation)\".", "-Classification task is performed with a neural network on augmented dataset according to the pseudo-labels.", "-In parallel of the classification, the neural network also learns the latent representation in an unsupervised fashion.", "-k-means clustering is performed on the representation space observed in the hidden layer preceding the augmented softmax layer.", "Detailed Comments: (*) Pros -The method outperforms the state-of-art regarding unsupervised methods for handwritten digits clustering on MNIST.", "-Use of ACOL and GAR is interesting, also the idea to make \"labeled\" data from unlabelled ones by using data augmentation.", "(*) Cons -minor: in the title, I find the expression \"unsupervised clustering\" uselessly redundant since clustering is by definition unsupervised.", "-Choice of datasets: we already obtained very good accuracy for the classification or clustering of handwritten digits.", "This is not a very challenging task.", "And just because something works on MNIST, does not mean it works in general.", "What are the performances on more challenging datasets like colored images (CIFAR-10, labelMe, ImageNet, etc.)?", "-This is not clear what is novel here", "since ACOL and GAR already exist.", "The novelty seems to be in the adaptation to GAR from the semi-supervised to the unsupervised setting with labels indicating if data have been transformed or not." ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "evaluation", "fact", "evaluation" ]
ByR8Gr5gf
[ "The paper proposed a copula-based modification to an existing deep variational information bottleneck model, such that the marginals of the variables of interest (x, y) are decoupled from the DVIB latent variable model, allowing the latent space to be more compact when compared to the non-modified version. ", "The experiments verified the relative compactness of the latent space, and also qualitatively shows that the learned latent features are more 'disentangled'. ", "However, I wonder how sensitive are the learned latent features to the hyper-parameters and optimizations?", "Quality: Ok. ", "The claims appear to be sufficiently verified in the experiments. ", "However, it would have been great to have an experiment that actually makes use of the learned features to make predictions. ", "I struggle a little to see the relevance of the proposed method without a good motivating example.", "Clarity: Below average. ", "Section 3 is a little hard to understand. ", "Is q(t|x) in Fig 1 a typo? ", "How about t_j in equation (5)? ", "There is a reference that appeared twice in the bibliography (1st and 2nd).", "Originality and Significance: Average. ", "The paper (if I understood it correctly) appears to be mainly about borrowing the key ideas from Rey et. al. 2014 and applying it to the existing DVIB model." ]
[ "fact", "fact", "non-arg", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "non-arg", "non-arg", "fact", "evaluation", "evaluation" ]
BkC-HgcxG
[ "In this paper, the authors present an analysis of SGD within an SDE framework. ", "The ideas and the presented results are interesting and are clearly of interest to the deep learning community. ", "The paper is well-written overall.", "However, the paper has important problems. ", "1) The analysis is widely based on the recent paper by Mandt et al. ", "While being an interesting work on its own, the assumptions made in that paper are very strict and not very realistic. ", "For instance, the assumption that the stochastic gradient noise being Gaussian is very restrictive and trying to justify it just by the usual CLT is not convincing especially when the parameter space is extremely large, ", "the setting that is considered in the paper.", "2) There is a mistake in the proof Theorem 1. ", "Even with the assumption that the gradient of sigma is bounded, eq 20 cannot be justified and the equality can only be \"approximately equal to\". ", "The result will only hold if sigma does not depend on theta. ", "However, letting sigma depend on theta is the only difference from Mandt et al. ", "On the other hand, with constant sigma the result is very trivial and can be found in any text book on SDEs (showing the Gibbs distribution). ", "Therefore, presenting it as a new result is misleading. ", "3) Even if the sigma is taken constant and theorem 1 is corrected, I don't think theorem 2 is conclusive. ", "Theorem 2 basically assumes that the distribution is locally a proper Gaussian (it is stated as locally convex, however it is taken as quadratic) ", "and the result just boils down to computing some probability under a Gaussian distribution, ", "which is still quite trivial. ", "Apart from this assumption not being very realistic, ", "the result does not justify the claims on \"the probability of ending in a certain minimum\" ", "-- which is on the other hand a vague statement. ", "First of all \"ending in\" a certain area depends on many different factors, such as the structure of the distribution, the initial point, the distance between the modes etc. ", "Also it is not very surprising that the inverse image of a wider Gaussian density is larger than of a pointy one. ", "This again does not justify the claims. ", "For instance consider a GMM with two components, where the means of the individual components are close to each other, but one component having a very large variance and a smaller weight, and the other one having a lower variance and higher weight. ", "With authors' claim, the algorithm should spend more time on the wider one, ", "however it is evident that this will not be the case. ", "4) There is a conceptual mistake that the authors assume that SGD will attain the exact stationary distribution even when the SDE is simulated by the fixed step-size Euler integrator. ", "As soon as one uses eta>0 the algorithm will never attain the stationary distribution of the continuous-time process, but will attain a stationary distribution that is close to the ideal one (of course with several smoothness, growth assumptions). ", "The error between the ideal distribution and the empirical distribution will be usually O(eta) depending on the assumption ", "and therefore changing eta will result in a different distribution than the ideal one. ", "With this in mind the stationary distributions for (eta/S) and (2eta/2S) will be clearly different. ", "The experiments are very interesting and I do not underestimate their value. 
", "However, the current analysis unfortunately does not properly explain the rather strong claims of the authors, which is supposed to be the main contribution of this paper." ]
[ "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation" ]

Dataset Card for AMPERE

Dataset Description

This dataset is released together with our NAACL 2019 paper "Argument Mining for Understanding Peer Reviews". If you find our work useful, please cite:

@inproceedings{hua-etal-2019-argument,
    title = "Argument Mining for Understanding Peer Reviews",
    author = "Hua, Xinyu  and
      Nikolov, Mitko  and
      Badugu, Nikhil  and
      Wang, Lu",
    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
    month = jun,
    year = "2019",
    address = "Minneapolis, Minnesota",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/N19-1219",
    doi = "10.18653/v1/N19-1219",
    pages = "2131--2137",
}

This dataset includes 400 scientific peer reviews collected from ICLR 2018, hosted on the OpenReview platform. Each review is segmented into multiple propositions. We include the original untokenized text for each proposition. Each proposition is labeled as one of the following types:

  • evaluation: a proposition that is not objectively verifiable and does not require any action to be performed, such as a qualitative judgement or interpretation of the paper, e.g. "The paper shows nice results on a number of small tasks."
  • request: a proposition that is not objectively verifiable and suggests a course of action to be taken, such as a recommendation or a suggestion for new experiments, e.g. "I would really like to see how the method performs without this hack."
  • fact: a proposition that is verifiable with objective evidence, such as a mathematical conclusion or common knowledge of the field, e.g. "This work proposes a dynamic weight update scheme."
  • quote: a quote from the paper or another source, e.g. "The author wrote 'where r is lower bound of feature norm'."
  • reference: a proposition that refers to objective evidence, such as a URL link or a citation, e.g. "see MuseGAN (Dong et al), MidiNet (Yang et al), etc."
  • non-arg: a non-argumentative discourse unit that does not contribute to the overall agenda of the review, such as greetings, metadata, and clarification questions, e.g. "Aha, now I understand."

Dataset Structure

The dataset is partitioned into train/val/test sets. Each set is distributed in JSON Lines (jsonl) format, where each line is one review record containing the following fields (a minimal loading sketch follows the field list below):

  • doc_id (str): a unique id for review document
  • text (list[str]): a list of segmented propositions
  • labels (list[str]): a list of labels corresponding to the propositions
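Because each line is a self-contained JSON object, a split can be read with the Python standard library alone. The sketch below is a minimal example; the file name train.jsonl is an assumption about how the split files are named locally, not part of the release.

import json

# Hypothetical path; adjust to the actual name of the downloaded split file.
split_path = "train.jsonl"

reviews = []
with open(split_path, "r", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Each record carries parallel proposition and label lists.
        assert len(record["text"]) == len(record["labels"])
        reviews.append(record)

print(f"Loaded {len(reviews)} reviews; first doc_id: {reviews[0]['doc_id']}")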

An example looks as follows.

{
    "doc_id": "H1WORsdlG",
    "text": [
      "This paper addresses the important problem of understanding mathematically how GANs work.",
      "The approach taken here is to look at GAN through the lense of the scattering transform.",
      "Unfortunately the manuscrit submitted is very poorly written.",
      "Introduction and flow of thoughts is really hard to follow.",
      "In method sections, the text jumps from one concept to the next without proper definitions.",
      "Sorry I stopped reading on page 3.",
      "I suggest to rewrite this work before sending it to review.",
      "Among many things: - For citations use citep and not citet to have () at the right places.",
      "- Why does it seems -> Why does it seem etc.",
    ],
    "labels": [
      "fact",
      "fact",
      "evaluation",
      "evaluation",
      "evaluation",
      "evaluation",
      "request",
      "request",
      "request"
    ]
}
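Since text and labels are parallel lists, propositions can be paired with their types directly. A small sketch, assuming a record shaped like the example above (truncated here for brevity):

from collections import Counter

record = {
    "doc_id": "H1WORsdlG",
    "text": [
        "This paper addresses the important problem of understanding mathematically how GANs work.",
        "I suggest to rewrite this work before sending it to review.",
    ],
    "labels": ["fact", "request"],
}

# Pair each proposition with its label.
for proposition, label in zip(record["text"], record["labels"]):
    print(f"[{label}] {proposition}")

# Tally label frequencies over any collection of records.
label_counts = Counter(l for r in [record] for l in r["labels"])
print(label_counts.most_common())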

Dataset Creation

Human annotators are first asked to read the above definitions and the controversial cases carefully. The dataset to be annotated consists of 400 reviews partitioned into 20 batches. Each annotator then follows two steps:

  • Step 1: Open a review file with a text editor. The unannotated review file contains only one line; separate it into multiple lines, with each line corresponding to a single proposition. Repeat this for all 400 reviews.
  • Step 2: Based on the segmented units, label the type of each proposition. Start labeling at the end of each file after the marker "## Labels:". Indicate the line number of the proposition first, then the type, e.g. "1. evaluation" for the first proposition (a minimal parsing sketch for this file format follows this list). Repeat this for all 400 reviews.
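To make the annotation file layout concrete, here is a minimal sketch of how such a file could be parsed back into (proposition, label) pairs. The exact marker spelling and whitespace handling are assumptions based on the description above, not a released tool.

import re

def parse_annotated_review(path):
    """Parse one annotated review file: one proposition per line,
    then a '## Labels:' marker, then lines like '1. evaluation'."""
    with open(path, "r", encoding="utf-8") as f:
        lines = [line.rstrip("\n") for line in f]
    marker = lines.index("## Labels:")  # assumes the marker sits on its own line
    propositions = [line for line in lines[:marker] if line.strip()]
    labels = {}
    for line in lines[marker + 1:]:
        match = re.match(r"(\d+)\.\s*(\S+)", line)
        if match:
            labels[int(match.group(1))] = match.group(2)
    # Pair each numbered label with its proposition (line numbers are 1-indexed).
    return [(propositions[i - 1], labels[i]) for i in sorted(labels)]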

A third annotator then resolves the disagreements between the two annotators on both segmentation and proposition type.
