sentence_id text position
707 iclr20_526_3_28 I would also like to see a comparison to CD-k, which often outperforms PCD-k. NEG
1210 neuroai19_2_2_0 This is nice work that addresses the credit assignment problem with a meta-learning approach. POS
1016 midl19_56_3_5 Authors explicitly state that the work is not intended for segmentation, but many previous shape modeling works (including SSMs) were used as regularization in segmentation. NEG
473 iclr19_866_1_6 Notably, these together with the ability to train the components separately will generally increase the efficiency of learning. POS
326 iclr19_242_2_0 This paper tested a very simple idea: when we do large batch training, instead of sampling more training data for each minibatch, we use data augmentation techniques to generate training data from a small minibatch. NA
1323 neuroai19_37_3_1 However, the multiple grandiose statements, and some that are downright misleading, left me puzzling over what I learned. NEG
1214 neuroai19_2_2_4 The model and implementation make sense as far as I can tell from this brief submission. POS
590 iclr20_1724_2_7 I am positive with respect to acceptance of this paper. NA
1290 neuroai19_32_1_16 They draw an analogy between the ventral and dorsal stream of cortex and bilinear models of images. NA
1286 neuroai19_32_1_12 Does this mean previous methods learned the same transformation for all features? NA
1211 neuroai19_2_2_1 The motivation needs to be a bit clearer. NEG
445 iclr19_495_1_17 To improve the paper, stronger experiments need to be performed. NEG
1373 neuroai19_54_3_12 Also, do the CNN layers correspond to cell populations, and if so, why is it reasonable to collapse the time dimension after the first layer? NEG
1352 neuroai19_53_1_2 Space is of course limited, but the mathematics presented seem to pass all sanity checks and gives sufficiently rigor to the authors' approach. POS
355 iclr19_242_2_33 However, the proposed method uses an N times larger batch and the same number of iterations, and hence N times more computational resources. NA
1168 midl20_85_3_0 The key idea in the paper is to use functional prior that is completely uncertain about prediction of any class. NA
1274 neuroai19_32_1_0 They make modifications to an existing generative model of natural images. NA
587 iclr20_1724_2_4 The primitive action classification task is "solved" by nearly all methods and only serves for debugging purposes. NA
53 graph20_29_3_21 A similar argument applies to the second and third paragraphs on p. 9. NEG
160 graph20_43_1_7 The situation in which motor width differs from visual width seems fairly niche overall, and the examples cited in the introduction where visual width is greater than motor width seem like a situation that will almost always be due to poor interface implementation, rather than a conscious design decision. NEG
13 graph20_25_2_13 Such an alternative design is similar to BendyPass along many dimensions (e.g., users need to carry an additional device), but offers a more familiar interface. NA
220 graph20_56_1_27 Because of that last point, I am somewhat on the fence about this paper, but am willing to consider that it might be acceptable. NA
835 midl19_14_2_9 The clear contribution of the article is, in my opinion, the ability to exploit complementary information from different data sets. NA
1344 neuroai19_37_3_23 The devil is in the details, the "how" of "suddenly". NA
17 graph20_25_2_17 Also, entering PIN on touchscreen devices is notoriously difficult for people who are visually impaired, so it is no wonder that BendyPass outperforms it. NEG
698 iclr20_526_3_19 I was surprised not to see how this model performs on the binarized MNIST dataset, and would like to see that result as well as CIFAR likelihood. NEG
962 midl19_51_2_2 Pros: 1- If this approach is accepted by the community, it could remove the need for additional training to the pathologists. POS
110 graph20_36_1_24 The projection method inevitably shows the precise spot for pouring syrup. NA
151 graph20_39_3_12 I would have liked a little more discussion on the limitations of the authors' proposed guidelines at the end and how they did or did not mitigate this issue. NEG
392 iclr19_304_3_5 The criteria remain very vague and seem to be applicable mainly to the evaluated data set (e.g. what defines a steep decrease?). NEG
893 midl19_40_3_11 Is using a pre-trained network really helping? NEG
510 iclr19_997_3_3 In the exploitation step, architectures are generated by a Bayesian Network. NA
978 midl19_51_2_18 Please provide results of the inter-rater reliability of two pathologists using a point scale on the quality of image digital staining. NEG
1400 neuroai19_59_3_25 The paper should also seek to connect with more of the recent work being done in spiking recurrent neural networks. NEG
426 iclr19_304_3_40 Due to the lack of numerical measures, the experimental evaluation necessarily remains vague by showing some graphs that show that all criteria are roughly met by regularization parameter on the cifar data set. NEG
1143 midl20_71_1_0 The authors proposed a 4D encoder-decoder CNN with convolutional recurrent gate units to learn multiple sclerosis (MS) lesion activity maps using 3D volumes from 2 time points. NA
681 iclr20_526_3_2 For the negative phase, the authors use two separate variational approximations, one of which involves the modeling of the latent variable prior under the approximating distribution. The approach is novel, as far as I know, though not particularly so, and I view this as one of the weak points of the paper. NEG
98 graph20_36_1_12 For example, between the hearts and the leaf the syrup is either a series of dots or a continuous line. NA
568 iclr20_1493_2_27 CNN vs Linear SVM: I am confused about why we would expect a CNN to be able to learn the Bayes-optimal decision boundary but not the Linear SVM. NEG
1050 midl20_100_1_1 It is well-written, well-structured and easy to read for someone without knowledge on IVF and ART. POS
415 iclr19_304_3_29 Section 3.3 is confusing to me. NEG
1130 midl20_56_4_11 It seems that the DWP needs to generate a specific weight each time. NA
23 graph20_25_2_24 In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI 17). NA
268 iclr19_1091_1_6 The presented results indicate that SRL is useful (Table 1), and that auto-encoding alone is often not enough. POS
1368 neuroai19_54_3_7 If not, is it really an explanation of the OSR? NA
1084 midl20_108_3_0 In this paper, the authors aimed to improve the representations learned by Neural Image Compression (NIC) algorithms when applied to Whole Slide Images (WSI) for pathology analysis. NA
1160 midl20_77_4_7 Please modify the paper to make this clear. NA
754 iclr20_727_1_9 The Neural Hawkes model suffers from slow training because of the inclusion of a sampling step in the likelihood calculation. NA
1041 midl19_59_3_9 This should also be shown in table 2. NEG
657 iclr20_2157_3_3 Attribution priors as you formalize it in section 2 (which seems like the core contribution of the paper) was introduced in 2017 pseudo-url where they use a mask on a saliency map to regularize the representation learned. NA
383 iclr19_261_3_23 Unified pragmatic models for generating and following instructions. NA
1008 midl19_52_2_27 b- Please explain (a.u.) NA
1073 midl20_100_1_24 It is quite well known that more training data, in general, results in improved performance of networks. NA
369 iclr19_261_3_4 I have just a few comments below: NA
525 iclr20_1042_2_3 I like that this paper uses a single global probabilistic model instead of separate discriminative and generative ones. POS
289 iclr19_1291_3_2 These two new extensions enable their model to work in either cooperative or a mix of competitive and collaborative settings. NA
412 iclr19_304_3_26 You state two assumptions or claims, 'the accuracy curve is strictly monotonically decreasing for increasing randomness' and 'we also expect that accuracy drops if the regularization of the model is increased', and then state that 'This shows that the accuracy is strictly monotonically decreasing as a function of randomness and regularization.' NA
586 iclr20_1724_2_3 A variety of models from recent work are evaluated on the three proposed tasks, demonstrating the validity of the above motivation for the construction of the dataset. NA
152 graph20_39_3_13 I think these changes/clarifications can be made easily, and therefore I would argue for the acceptance of this paper pending these changes. NA
513 iclr19_997_3_6 The effect of each proposed technique is appropriately evaluated. POS
1196 midl20_96_3_7 Why is for example the output temporally smoothed instead of using spatio-temporal consistency in higher dimensional networks? NEG
877 midl19_36_2_2 I think the effort of disentangling a complicated task into simpler ones makes sense, and the experiments have shown promising results. POS
770 iclr20_855_3_6 In Figure 1, OTRainbow is compared against the reported results in (Kaiser et al, 2019), along with other baselines, when limiting the experience to 100k interactions. NA
140 graph20_39_3_1 The authors perform three phases: An interview with providers to assess their needs, sessions with patients to gather their unique medical history and develop several visualizations for their data, and going back to providers with these visualizations to gather their ideas of how well these visualizations would assist them. NA
768 iclr20_855_3_4 I recommend this paper to be accepted only if the following issues are addressed. NA
606 iclr20_2046_2_11 In addition, more experimental analysis should also be presented to support why such a combination is the key contribution to the performance gain. NEG
885 midl19_40_3_3 Those maps are used for training with a partial cross-entropy. NA
300 iclr19_1291_3_13 Why do IRIC and IC work worse in the medium setting in comparison to hard in TJ in Table 1? NEG
389 iclr19_304_3_2 They propose three potential criteria based on the curves for determining when a model overfits and use those to determine the smallest l1-regularization parameter value that does not overfit. NA
1105 midl20_127_4_0 The authors present the AF-Net, which is a U-net with three adjustments. NA
656 iclr20_2157_3_2 The paper should have a single focus. NEG
361 iclr19_242_2_39 The proposed method looks unstable. NEG
109 graph20_36_1_23 First of all I am unsure a pixel comparison metric is fair. NEG
869 midl19_14_2_45 Springer, Cham, 2016. NA
649 iclr20_2094_1_23 In Eq (2) what is d_i? NEG
115 graph20_36_1_29 Also, how many times could participants practice? NEG
1325 neuroai19_37_3_3 It offers a call to action to do more comp-neuro, in that it could revolutionise AI. POS
240 graph20_61_2_18 Q1 can be reformulated with plural to avoid gender bias (so that this is harmonized with similar efforts along the paper). NEG
170 graph20_45_2_5 This is a nice paper that I believe proposes a novel and useful visualization scheme. POS
48 graph20_29_3_16 DESIGN APPLICATIONS I am not sure that the possible applications of this model are well described or argued for in this paper. NEG
863 midl19_14_2_37 I would suggest reorganizing these first lines by following something like: (i) Despite the fact that there are several available data sets of fundus pictures, none of them contains labels for all the structures of interest for retinal image analysis, either anatomical or pathological. NEG
1355 neuroai19_53_1_5 The authors directly tried to associate biological learning rules with deep network learning rules in AI. NA
1335 neuroai19_37_3_13 A major draw-back of spiking models is that they are much more costly than ANNs, because of the small time-steps required. NA
1287 neuroai19_32_1_13 They make an interesting connection to speed of processing that rapid changes better represented by the magnocellular pathway would be associated with transformations and slow parvo with identity. POS
620 iclr20_2046_2_25 More convincing experimental comparison should be done under real environment such as Atari games (by using the simulator as the environment model as shown in [Guo et al 2014] Deep learning for real-time atari game play using offline monte-carlo tree search planning). NEG
26 graph20_26_3_1 I reviewed the previous submission as R2. NA
284 iclr19_1091_1_22 This should also include some discussion on why this metric allows judging sufficiency and disentangledness. NEG
618 iclr20_2046_2_23 This may bring some advantage for the proposed algorithm. NA
1282 neuroai19_32_1_8 Paper was organized, figures clear and readable. POS
1039 midl19_59_3_7 Minor: - Testing for statistical significance is only shown in the appendix. NEG
241 graph20_61_2_19 VISUALIZATION DESIGN The rationale for visualization design is clearly explained and illustrated. POS
20 graph20_25_2_20 Thus, I look forward to seeing this paper as part of the program. NA
1272 neuroai19_3_3_7 It would be good to compare and fit the proposed models to real human/primate behavior in normal and pathological conditions and make testable predictions. NEG
253 graph20_61_2_31 SUPPLEMENTARY VIDEO The video introduces the application domain and showcases diverse tasks supported by the tool presented in the submission. NA
871 midl19_25_3_1 The approach is clearly explained and the results presented are sufficient to give merit to the idea. POS
548 iclr20_1493_2_5 Previously, all studies of this sort had to be done with small-scale classifiers and simplistic datasets such as Gaussians. NA
1088 midl20_108_3_4 This is a very well written paper. POS
339 iclr19_242_2_15 The largest batch considered is 64*32, which is relatively small. NEG
186 graph20_53_2_13 Q10 only refers to realism - where is the immersion aspect coming from here? NA
546 iclr20_1493_2_3 This demonstrates that even when the Bayes-optimal classifier is robust, we may need to explicitly regularize/incentivize neural networks to learn the correct decision boundary. NA
953 midl19_51_1_21 The choice of de-speckle network architecture is somewhat unsound, with the multiplicative residual connection near the outputs of the network and the median filtering operation. NEG
1144 midl20_71_1_1 The proposed architecture connects the encoder and decoder with GRU to incorporate temporal information. NA
1255 neuroai19_26_1_11 Only real point for improvement is more earnest bench marking/model comparison. NEG
347 iclr19_242_2_23 Following the authors' logic, normal large batch training decreases the variability of <H>_k, which converges to flat minima. NA
742 iclr20_720_2_15 The feeling I get is that the authors are trying to make their experiments less about what they are proposing in this paper and more about empirical insights about the nature of hierarchy overall. NEG
1078 midl20_100_1_29 Regardless, trying to paint others work negatively by arguments to some general issue with established performance metrics is disingenuous. NEG
664 iclr20_2157_3_11 Most of the experiments revolve around existing attribution prior methods. NEG
571 iclr20_1493_2_30 For CNNs, however, it is unclear if the Bayes-optimal classifier lies in the hypothesis class (there are "universal approximation" arguments, but these usually require arbitrarily wide networks and are non-constructive). Couldn't it be that the CNNs used here are in the same boat as the Linear SVM (i.e., the Bayes-optimal decision boundary is not expressible by the CNN)? NA
1212 neuroai19_2_2_2 Is the work trying to address the credit assignment problem in general, or just when applied to online learning tasks? NEG
407 iclr19_304_3_21 You state that the regularization parameter should decrease complexity of the model. NA
394 iclr19_304_3_7 Additionally, only one type of regularization was assumed, namely l1-regularization, though other types are arguably more common in the deep (convolutional) learning literature. NEG
249 graph20_61_2_27 See pseudo-url. USER EVALUATION AND FEEDBACK: The user evaluation and feedback proposes analysis of user logs that informed changes in metrics for measuring improvement in the learning program once their system was adopted by residents and reviewers; and their feedback. NA
860 midl19_14_2_34 Please, clarify that point in the text. NA
1320 neuroai19_36_1_8 The premise of the work must be clarified. NEG
410 iclr19_304_3_24 What does "similar scale" mean? NEG
593 iclr20_1724_2_10 I have a few minor comments / questions / editing notes that would be good to address: - The random baseline isn't described in the main text, it would be good to briefly mention it (this will also help to clarify why the value is particularly high for tasks 1 and 2) - The grid resolution ablation results presented in the supplement are actually quite important -- they demonstrate that with a small increase in granularity of the grid the traditional tracking methods begin to be the best performers. NEG
840 midl19_14_2_14 Is there a reason for not using it? NA
157 graph20_43_1_4 Despite the above, I am not very enthusiastic about this paper. NEG
572 iclr20_1493_2_32 Experimental setup: - One somewhat concerning (but perhaps unavoidable) thing about the experimental setup is that all the considered datasets are not perfectly linearly separable, i.e. the Bayes-optimal classifier has non-zero test error in expectation, and moreover the data variance is full-rank in the embedded space. NEG
880 midl19_36_2_5 Noble, A. Zisserman, In MICCAI 2015 Workshop. NA
574 iclr20_1493_2_34 I am concerned that these properties are what drive the Bayes-optimal classifier for the symmetric dataset to be robust (concretely, if 0.01 * Identity was not added to the covariance matrix of the symmetric model and the covariance was left to be low-rank, then any classifier which was Bayes-optimal along the positive-variance directions would be Bayes-optimal, and could behave arbitrarily poorly along the zero-variance directions, still being vulnerable). NEG
68 graph20_29_3_36 First, for a metric that can often be between 0 and 15%, 2 and 9% are not "similar" values. NEG
414 iclr19_304_3_28 I actually don't understand the purpose of this paragraph. NEG
902 midl19_41_1_2 There is no detail on qualitatively visual comparison of generated MR to ground truth. NEG
948 midl19_51_1_16 This doesn't mean that cycle-GAN type of techniques are not suited for medical imaging since they might wipe out their diagnostic value, but it means that every study around this topic needs to prove that the diagnostic value is indeed kept! NA
141 graph20_39_3_2 The authors then suggest some design guidelines at the end for developing usable patient data visualizations. NA
493 iclr19_866_1_26 Relevant to the discussion of learning from demonstration for language understanding is the following paper by Duvallet et al.: Duvallet, Kollar, and Stentz, "Imitation learning for natural language direction following through unknown environments," ICRA 2014. - The paper is overly verbose and redundant in places. NEG
756 iclr20_727_1_11 Including the training time for the baselines, as well as the method proposed by the authors, will help settle the point. NA
986 midl19_52_2_5 The utilized network architecture can be better explained with an emphasis on specific design choices. NEG
1338 neuroai19_37_3_16 But the individual statements are sometimes seductive. POS
640 iclr20_2094_1_14 So, even if those results do not preclude the use of sophisticated DRL techniques for solving geometric knapsack problems, it would be legitimate to empirically compare these techniques with the polytime asymptotic approximation algorithms already found in the literature. NA
845 midl19_14_2_19 The area under the ROC curve is not a proper metric for evaluating a vessel segmentation algorithm due to the class imbalance between the TP and TN classes (vessels vs. background ratio is around 12% in fundus pictures). NEG
947 midl19_51_1_15 After the publication at MICCAI 2019 of the work "Distribution Matching Losses Can Hallucinate Features in Medical Image Translation" and other similar works, it has started becoming apparent that the simple visual similarity between samples generated by a GAN and true samples from a specific distribution doesn't ensure that diagnostic value is kept. NA
395 iclr19_304_3_8 Overall, I think this paper is not fit for publication, because the contributions of the paper seem very vague and are neither thoroughly defined nor tested. NEG
1398 neuroai19_59_3_23 The work is a basic proof-of-concept of results that may not do much to advance understanding since they are what one would expect to see (i.e. the antithesis of their thesis seems very unlikely). NEG
501 iclr19_938_3_6 Con - MAAC still requires all observations and actions of all other agents as an input to the value function, which makes this approach not scalable to settings with many agents. NEG
883 midl19_40_3_1 The paper is very well written and easy to follow ; figure 1 does an excellent job at summarizing the method. POS
1360 neuroai19_53_1_10 A final addition that would have made this work more compelling would have been to more thoroughly explore e-prop for computations that unfold on timescales beyond those built-in to the neurons (e.g. membrane or adaptation timescales) and which instead rely on reverberating network activity. NEG
630 iclr20_2094_1_4 For the next versions of the manuscript, I would recommend using a spell/grammar checker. NA
344 iclr19_242_2_20 I would expect at least the following baselines: i) use normal large batch training and complicated data augmentation, train the model for the same number of epochs; ii) use normal large batch training and complicated data augmentation, train the model for the same number of iterations; iii) use normal large batch training and complicated data augmentation, scale the learning rate up as in Goyal et al. 2017. NEG
915 midl19_49_1_10 This kind of vertical comparison is insufficient to support the claims made in the study. NEG