paper_id,title,keywords,abstract,meta_review
1,"""Information Theoretic Model Predictive Q-Learning""","['entropy regularized reinforcement learning', 'information theoretic MPC', 'robotics']","""Model-free Reinforcement Learning (RL) algorithms work well in sequential decision-making problems when experience can be collected cheaply, and model-based RL is effective when system dynamics can be modeled accurately. However, both of these assumptions can be violated in real-world problems such as robotics, where querying the system can be prohibitively expensive and real-world dynamics can be difficult to model accurately. Although sim-to-real approaches such as domain randomization attempt to mitigate the effects of biased simulation, they can still suffer from optimization challenges such as local minima and hand-designed distributions for randomization, making it difficult to learn an accurate global value function or policy that directly transfers to the real world. In contrast to RL, Model Predictive Control (MPC) algorithms use a simulator to optimize a simple policy class online, constructing a closed-loop controller that can effectively contend with real-world dynamics. MPC performance is usually limited by factors such as model bias and the limited horizon of optimization. In this work, we present a novel theoretical connection between information theoretic MPC and entropy regularized RL and develop a Q-learning algorithm that can leverage biased models. We validate the proposed algorithm on sim-to-sim control tasks to demonstrate the improvements over optimal control and reinforcement learning from scratch. Our approach paves the way for deploying reinforcement learning algorithms on real robots in a systematic manner.""","""The authors develop a novel connection between information theoretic MPC and entropy regularized RL. Using this connection, they develop a Q-learning algorithm that can work with biased models. They evaluate their proposed algorithm on several control tasks and demonstrate improved performance over the baseline methods. Unfortunately, reviewers were not convinced that the technical contribution of this work was sufficient. They felt that this was a fairly straightforward extension of MPPI. Furthermore, I would have expected a comparison to POLO. As the authors note, their approach is more theoretically principled, so it would be nice to see them outperforming POLO as a validation of their framework. Given the large number of high-quality submissions this year, I recommend rejection at this time.""" 2,"""Cross-Lingual Ability of Multilingual BERT: An Empirical Study""","['Cross-Lingual Learning', 'Multilingual BERT']","""Recent work has exhibited the surprising cross-lingual abilities of multilingual BERT (M-BERT) -- surprising since it is trained without any cross-lingual objective and with no aligned data. In this work, we provide a comprehensive study of the contribution of different components in M-BERT to its cross-lingual ability. We study the impact of linguistic properties of the languages, the architecture of the model, and the learning objectives. The experimental study is done in the context of three typologically different languages -- Spanish, Hindi, and Russian -- and using two conceptually different NLP tasks, textual entailment and named entity recognition. Among our key conclusions is the fact that the lexical overlap between languages plays a negligible role in the cross-lingual success, while the depth of the network is an integral part of it. 
All our models and implementations can be found on our project page: pseudo-url.""","""This paper introduces a set of new analysis methods to try to better understand the reasons that multilingual BERT succeeds. The findings substantially bolster the hypothesis behind the original multilingual BERT work: that this kind of model discovers and uses substantial structural and semantic correspondences between languages in a fully unsupervised setting. This is a remarkable result with serious implications for representation learning work more broadly. All three reviewers saw ways in which the paper could be expanded or improved, and one reviewer argued that the novelty and scope of the paper are below the standard for ICLR. However, I am inclined to side with the two more confident reviewers and argue for acceptance. I don't see any substantive reasons to reject the paper, the methods are novel and appropriate (even in light of the prior work that exists on this question), and the results are surprising and relevant to a high-profile ongoing discussion in the literature on representation learning for language.""" 3,"""Omnibus Dropout for Improving The Probabilistic Classification Outputs of ConvNets""","['Uncertainty Estimation', 'Calibration', 'Deep Learning']","""While neural network models achieve impressive classification accuracy across different tasks, they can suffer from poor calibration of their probabilistic predictions. A Bayesian perspective has recently suggested that dropout, a regularization strategy popularly used during training, can be employed to obtain better probabilistic predictions at test time (Gal & Ghahramani, 2016a). However, empirical results so far have not been encouraging, particularly with convolutional networks. In this paper, through the lens of ensemble learning, we associate this unsatisfactory performance with the correlation between the models sampled with dropout. Motivated by this, we explore the use of various structured dropout techniques to promote model diversity and improve the quality of probabilistic predictions. We also propose an omnibus dropout strategy that combines various structured dropout methods. Using the SVHN, CIFAR-10 and CIFAR-100 datasets, we empirically demonstrate the superior performance of omnibus dropout relative to several widely used strong baselines in addition to regular dropout. Lastly, we show the merit of omnibus dropout in a Bayesian active learning application. ""","""The paper investigates how to improve the performance of dropout and proposes an omnibus dropout strategy to reduce the correlation between the individual models. All the reviewers felt that the paper requires more work before it can be accepted. In particular, the reviewers raised several concerns about the novelty of the method relative to existing methods, the significance of the performance improvements, and the clarity of the presentation. I encourage the authors to revise the draft based on the reviewers' feedback and resubmit to a different venue. """ 4,"""Reflection-based Word Attribute Transfer""","['embedding', 'representation learning', 'analogy', 'geometry']","""We propose a word attribute transfer framework based on reflection to obtain a word vector with an inverted target attribute for a given word in a word embedding space. Word embeddings based on Pointwise Mutual Information (PMI) represent such analogical relations as king - man + woman \approx queen. These relations can be used for changing a word's attribute from king to queen by changing its gender. 
This attribute transfer can be performed by subtracting a difference vector man - woman from king when we have explicit knowledge of the gender of the given word king. However, this knowledge cannot be developed for various words and attributes in practice. For transferring queen into king in this analogy-based manner, we need to know that queen denotes a female and add the difference vector to it. In this work, we transfer such binary attributes based on the assumption that such a transfer mapping will become an identity mapping when we apply it twice. We introduce a framework based on reflection mapping that satisfies this property; queen should be transferred back to king with the same mapping as the transfer from king to queen. Experimental results show that the proposed method can transfer the word attributes of the given words, and does not change the words that do not have the target attributes.""","""This paper proposes a way to transform word vectors based on a binary attribute (e.g. male/female) using reflection, with the property that applying the reflection operator twice leaves the vector for a word unchanged. By identifying parameterized mirror planes for each word, the proposed method can leave neutral words unchanged. The paper received 3 weak accepts. There was initially one reject, but the revisions convinced the reviewer to update their score to a weak accept. Overall, the reviewers appreciated the idea of reflection-based binary word attribute transfer. Following the reviewers' suggestions, the authors made small improvements to the writing, added missing citations, as well as additional results for another word embedding (GloVe) and another dataset (antonyms). One of the main remaining weaknesses of the work is still the small dataset. Although somewhat alleviated by the inclusion of the antonym dataset, this is still a weakness of the paper. The AC agrees that the paper has a nice idea and is well presented. However, the work is limited in scope and is likely to be of limited interest to the ICLR community and would be more appreciated in the NLP community. The authors are encouraged to improve upon the work, and resubmit to an appropriate venue. """ 5,"""Why ADAM Beats SGD for Attention Models ""","['Optimization', 'ADAM', 'Deep learning']","""While stochastic gradient descent (SGD) is still the de facto algorithm in deep learning, adaptive methods like Adam have been observed to outperform SGD across important tasks, such as attention models. The settings under which SGD performs poorly in comparison to Adam are not well understood yet. In this paper, we provide empirical and theoretical evidence that a heavy-tailed distribution of the noise in stochastic gradients is a root cause of SGD's poor performance. Based on this observation, we study clipped variants of SGD that circumvent this issue; we then analyze their convergence under heavy-tailed noise. Furthermore, we develop a new adaptive coordinate-wise clipping algorithm (ACClip) tailored to such settings. Subsequently, we show how adaptive methods like Adam can be viewed through the lens of clipping, which helps us explain Adam's strong performance under heavy-tail noise settings. Finally, we show that the proposed ACClip outperforms Adam for both BERT pretraining and finetuning tasks.""","""This paper tries to explain why Adam is better than SGD for training attention models. 
Specifically, it first provides some empirical and theoretical evidence that a heavy-tailed distribution of the noise in stochastic gradients is the cause of SGD's worse performance. Then the authors studied a clipped variant of SGD that circumvents this issue, and revisited Adam through the lens of clipping. Overall, this paper conveys some interesting ideas. On the other hand, the theorems proved in this paper do not provide additional insight beyond the intuition, and the experiments are weak (hyperparameters are not carefully tuned). So even after the author response, it still does not gather sufficient support from the reviewers. This is a borderline paper, and due to the rather limited number of papers the conference can accept, I encourage the authors to improve this paper and resubmit it to a future conference. """ 6,"""Distilled embedding: non-linear embedding factorization using knowledge distillation""","['Model Compression', 'Embedding Compression', 'Low Rank Approximation', 'Machine Translation', 'Natural Language Processing', 'Deep Learning']","""Word-embeddings are a vital component of Natural Language Processing (NLP) systems and have been extensively researched. Better representations of words have come at the cost of huge memory footprints, which has made deploying NLP models on edge-devices challenging due to memory limitations. Compressing embedding matrices without sacrificing model performance is essential for successful commercial edge deployment. In this paper, we propose Distilled Embedding, an (input/output) embedding compression method based on low-rank matrix decomposition with an added non-linearity. First, we initialize the weights of our decomposition by learning to reconstruct the full word-embedding and then fine-tune on the downstream task employing knowledge distillation on the factorized embedding. We conduct extensive experimentation with various compression rates on machine translation, using different datasets with a shared word-embedding matrix for both the embedding and vocabulary projection matrices. We show that the proposed technique outperforms conventional low-rank matrix factorization, and other recently proposed word-embedding matrix compression methods. ""","""This paper proposes to further distill token embeddings via what is effectively a simple autoencoder with a ReLU activation. All reviewers expressed concerns with the degree of technical contribution of this paper. As Reviewer 3 identifies, there are simple variants (e.g. end-to-end training with the factorized model) and there is no clear intuition for why the proposed method should outperform its variants as well as the other baselines (as noted by Reviewer 1). Reviewer 2 further expresses concerns about the merits of the proposed approach over existing approaches, given the apparently small effect size of the improvement (let alone the possibility that the improvement may not in fact be statistically significant). """ 7,"""GraphNVP: an Invertible Flow-based Model for Generating Molecular Graphs""","['Graph Neural Networks', 'graph generative model', 'invertible flow', 'graphNVP']","""We propose GraphNVP, an invertible flow-based molecular graph generation model. Existing flow-based models only handle node attributes of a graph with invertible maps. In contrast, our model is the first invertible model for all graph components: both the dequantized node attributes and the adjacency tensor are converted into latent vectors through two novel invertible flows. 
This decomposition yields exact likelihood maximization on graph-structured data. We decompose the generation of a graph into two steps: generation of (i) an adjacency tensor and (ii) node attributes. We empirically demonstrate that our model and the two-step generation efficiently generate valid molecular graphs with almost no duplicated molecules, although there are no domain-specific heuristics ingrained in the model. We also confirm that the sampling (generation) of graphs is faster by orders of magnitude than that of other models in our implementation. In addition, we observe that the learned latent space can be used to generate molecules with desired chemical properties.""","""The authors propose an invertible flow-based model for molecular graph generation. The reviewers like the idea but have several concerns: in particular, overfitting in the model, the need for more experiments, and missing related work. It is important for the authors to address them in a future submission.""" 8,"""Actor-Critic Approach for Temporal Predictive Clustering""","['Temporal Clustering', 'Predictive Clustering', 'Actor-Critic']","""Due to the wider availability of modern electronic health records (EHR), patient care data is often being stored in the form of time-series. Clustering such time-series data is crucial for patient phenotyping, anticipating patients' prognoses by identifying similar patients, and designing treatment guidelines that are tailored to homogeneous patient subgroups. In this paper, we develop a deep learning approach for clustering time-series data, where each cluster comprises patients who share similar future outcomes of interest (e.g., adverse events, the onset of comorbidities, etc.). The clustering is carried out by using our novel loss functions that encourage each cluster to have homogeneous future outcomes. We adopt actor-critic models to allow back-propagation through the sampling process that is required for assigning clusters to time-series inputs. Experiments on two real-world datasets show that our model achieves superior clustering performance over state-of-the-art benchmarks and identifies meaningful clusters that can be translated into actionable information for clinical decision-making.""","""This paper proposes a reinforcement learning approach to clustering time-series data. The reviewers had several questions related to clarity and concerns related to the novelty of the method, the connection to RL, and the experimental results. While the authors were able to address some of these questions and concerns in the rebuttal, the reviewers believe that the paper is not quite ready for publication.""" 9,"""Efficient Wrapper Feature Selection using Autoencoder and Model Based Elimination""","['Wrapper Feature Selection', 'AMBER', 'Ranker Model', 'Generative Training', 'Wireless Subsampling']","""We propose a computationally efficient wrapper feature selection method - called Autoencoder and Model Based Elimination of features using Relevance and Redundancy scores (AMBER) - that uses a single ranker model along with autoencoders to perform greedy backward elimination of features. The ranker model is used to prioritize the removal of features that are not critical to the classification task, while the autoencoders are used to prioritize the elimination of correlated features. 
We demonstrate the superior feature selection ability of AMBER on 4 well-known datasets corresponding to different domain applications by comparing accuracies with other computationally efficient state-of-the-art feature selection techniques. Interestingly, we find that the ranker model that is used for feature selection does not necessarily have to be the same as the final classifier that is trained on the selected features. Finally, we hypothesize that overfitting the ranker model on the training set facilitates the selection of more salient features.""","""In this paper the authors propose a wrapper feature selection method that selects features based on 1) redundancy, i.e. the sensitivity of the downstream model to feature elimination, and 2) relevance, i.e. how the individual features impact the accuracy of the target task. The authors use a combination of the redundancy and relevance scores to eliminate the features. While acknowledging that the proposed model is potentially useful, the reviewers raised several important concerns that were viewed by the AC as critical issues: (1) all reviewers agreed that the proposed approach lacks theoretical justification or convincing empirical evaluations in order to show its effectiveness and general applicability -- see R1's and R2's requests for evaluation with more datasets/diverse tasks to assess the applicability and generality of the proposed model; see R1's and R4's concerns regarding theoretical analysis; (2) all reviewers expressed concerns regarding the technical issue of combining the redundancy and relevance scores -- see R4's and R2's concerns regarding the individual/disjoint calibration of scores; see R1's suggestion to learn to reweigh the scores; (3) the experimental setup requires improvement both in terms of clarity of presentation and implementation -- see R1's comment regarding the ranker model; see R4's concern regarding comparison with a standard deep learning model that does feature learning for a downstream task; both reviewers also suggested analysing how autoencoders with different capacity could impact the results. Additionally, R1 raised a concern regarding relevant recent works that were overlooked. The authors have tried to address some of these concerns during the rebuttal, but insufficient empirical evidence still remains a critical issue of this work. To conclude, the reviewers and AC suggest that in its current state the manuscript is not ready for publication. We hope the reviews are useful for improving and revising the paper. """ 10,"""Action Semantics Network: Considering the Effects of Actions in Multiagent Systems""","['multiagent coordination', 'multiagent learning']","""In multiagent systems (MASs), each agent makes individual decisions but all of them contribute globally to the system evolution. Learning in MASs is difficult since each agent's selection of actions must take place in the presence of other co-learning agents. Moreover, the environmental stochasticity and uncertainties increase exponentially with the increase in the number of agents. Previous works borrow various multiagent coordination mechanisms into deep learning architectures to facilitate multiagent coordination. However, none of them explicitly considers the action semantics between agents, i.e., that different actions have different influences on other agents. In this paper, we propose a novel network architecture, named Action Semantics Network (ASN), that explicitly represents such action semantics between agents. 
ASN characterizes different actions' influence on other agents using neural networks based on the action semantics between them. ASN can be easily combined with existing deep reinforcement learning (DRL) algorithms to boost their performance. Experimental results on StarCraft II micromanagement and Neural MMO show that ASN significantly improves the performance of state-of-the-art DRL approaches compared with several network architectures.""","""The authors address the challenge of sample-efficient learning in multi-agent systems. They propose a model that distinguishes actions in terms of their semantics, specifically in terms of whether they influence the acting agent and environment or whether they influence other agents. This additional structure is shown to substantially benefit learning speed when composed with a range of state-of-the-art multi-agent RL algorithms. During the rebuttal, technical questions were well addressed and the overall quality of the paper improved. The paper provides interesting novel insights on how the proposed structure improves learning.""" 11,"""Generative Latent Flow""","['Generative Model', 'Auto-encoder', 'Normalizing Flow']","""In this work, we propose the Generative Latent Flow (GLF), an algorithm for generative modeling of the data distribution. GLF uses an Auto-encoder (AE) to learn latent representations of the data, and a normalizing flow to map the distribution of the latent variables to that of simple i.i.d. noise. In contrast to some other Auto-encoder based generative models, which use various regularizers that encourage the encoded latent distribution to match the prior distribution, our model explicitly constructs a mapping between these two distributions, leading to better density matching while avoiding over-regularizing the latent variables. We compare our model with several related techniques, and show that it has many relative advantages including fast convergence, single-stage training and minimal reconstruction trade-off. We also study the relationship between our model and its stochastic counterpart, and show that our model can be viewed as a vanishing noise limit of VAEs with flow prior. Quantitatively, under standardized evaluations, our method achieves state-of-the-art sample quality and diversity among AE-based models on commonly used datasets, and is competitive with GANs' benchmarks. ""","""The authors propose Generative Latent Flow, which uses an autoencoder to learn latent representations and a normalizing flow to map that distribution to simple i.i.d. noise. The reviewers feel that there is limited novelty since it is a straightforward combination of existing ideas. """ 12,"""Reinforcement Learning with Probabilistically Complete Exploration""","['Reinforcement Learning', 'Exploration', 'sparse rewards', 'learning from demonstration']","""Balancing exploration and exploitation remains a key challenge in reinforcement learning (RL). State-of-the-art RL algorithms suffer from high sample complexity, particularly in the sparse reward case, where they can do no better than to explore in all directions until the first positive rewards are found. To mitigate this, we propose Rapidly Randomly-exploring Reinforcement Learning (R3L). We formulate exploration as a search problem and leverage widely-used planning algorithms such as Rapidly-exploring Random Tree (RRT) to find initial solutions. These solutions are used as demonstrations to initialize a policy, then refined by a generic RL algorithm, leading to faster and more stable convergence. 
We provide theoretical guarantees that R3L exploration finds successful solutions, as well as bounds on its sampling complexity. We experimentally demonstrate that the method outperforms classic and intrinsic exploration techniques, requiring only a fraction of the exploration samples and achieving better asymptotic performance.""","""This was a borderline paper, with both pros and cons. In the end, it was not considered sufficiently mature to accept in its current form. The reviewers all criticized the assumptions needed, and lamented the lack of clarity around the distinction between reinforcement learning and planning. The paper requires a clearer contribution, based on a stronger justification of the approach and a weakening of the assumptions. The submitted comments should be able to help the authors strengthen this work.""" 13,"""End-to-end named entity recognition and relation extraction using pre-trained language models""","['named entity recognition', 'relation extraction', 'information extraction', 'information retrieval', 'transfer learning', 'multi-task learning', 'BERT', 'transformers', 'language models']","""Named entity recognition (NER) and relation extraction (RE) are two important tasks in information extraction and retrieval (IE & IR). Recent work has demonstrated that it is beneficial to learn these tasks jointly, which avoids the propagation of error inherent in pipeline-based systems and improves performance. However, state-of-the-art joint models typically rely on external natural language processing (NLP) tools, such as dependency parsers, limiting their usefulness to domains (e.g. news) where those tools perform well. The few neural, end-to-end models that have been proposed are trained almost completely from scratch. In this paper, we propose a neural, end-to-end model for jointly extracting entities and their relations which does not rely on external NLP tools and which integrates a large, pre-trained language model. Because the bulk of our model's parameters are pre-trained and we eschew recurrence for self-attention, our model is fast to train. On 5 datasets across 3 domains, our model matches or exceeds state-of-the-art performance, sometimes by a large margin.""","""This paper presents an end-to-end technique for named entity recognition that uses pre-trained models so as to avoid long training times, and evaluates it against several baselines. The paper was reviewed by three experts working in this area. R1 recommends Reject, giving the opinion that although the paper is well-written and the results are good, they feel the technique itself has little novelty and that the main reason the technique works well is the use of BERT. R2 recommends Weak Reject based on similar reasoning, that the approach consists of existing components (albeit combined in a novel way), and suggests some ablation experiments to isolate the source of the good performance. R3 recommends Weak Accept but feels it is ""unsurprising"" that BERT allows for faster training and higher accuracy. In their response, the authors emphasize that the application of pretraining to named entity recognition is new, and that theirs is a methodological advance, not purely a practical one (as R1 suggests and other reviews imply). They also argue it is not possible to do a fair ablation study that removes BERT, but make an attempt. The reviewers chose to keep their scores after the response. Given the split decision, the AC also read the paper. 
It is clear the paper has significant merit and significant practical value, as the reviews indicate. However, the fact that three expert reviewers -- all of whom are NLP researchers at top institutions -- feel that the contribution of the paper is weak (in the context of the expectations of ICLR) makes it not possible for us to recommend acceptance at this time. """ 14,"""Disentangling neural mechanisms for perceptual grouping""","['Perceptual grouping', 'visual cortex', 'recurrent feedback', 'horizontal connections', 'top-down connections']","""Forming perceptual groups and individuating objects in visual scenes is an essential step towards visual intelligence. This ability is thought to arise in the brain from computations implemented by bottom-up, horizontal, and top-down connections between neurons. However, the relative contributions of these connections to perceptual grouping are poorly understood. We address this question by systematically evaluating neural network architectures featuring combinations of bottom-up, horizontal, and top-down connections on two synthetic visual tasks, which stress low-level ""Gestalt"" vs. high-level object cues for perceptual grouping. We show that increasing the difficulty of either task strains learning for networks that rely solely on bottom-up connections. Horizontal connections resolve straining on tasks with Gestalt cues by supporting incremental grouping, whereas top-down connections rescue learning on tasks with high-level object cues by modifying coarse predictions about the position of the target object. Our findings dissociate the computational roles of bottom-up, horizontal and top-down connectivity, and demonstrate how a model featuring all of these interactions can more flexibly learn to form perceptual groups.""","""All the reviewers recommend acceptance. The reviewers found the paper to be interesting with substantial insights. """ 15,"""Unsupervised Domain Adaptation through Self-Supervision""",['unsupervised domain adaptation'],"""This paper addresses unsupervised domain adaptation, the setting where labeled training data is available on a source domain, but the goal is to have good performance on a target domain with only unlabeled data. Like much of previous work, we seek to align the learned representations of the source and target domains while preserving discriminability. The way we accomplish alignment is by learning to perform auxiliary self-supervised task(s) on both domains simultaneously. Each self-supervised task brings the two domains closer together along the direction relevant to that task. Training this jointly with the main task classifier on the source domain is shown to successfully generalize to the unlabeled target domain. The presented objective is straightforward to implement and easy to optimize. We achieve state-of-the-art results on four out of seven standard benchmarks, and competitive results on segmentation adaptation. We also demonstrate that our method composes well with another popular pixel-level adaptation method.""","""Thanks for your detailed replies to the reviewers, which helped us a lot in clarifying several issues. Although the paper discusses an interesting topic and contains a potentially interesting idea, its novelty is limited. 
Given the high competition at ICLR 2020, this paper unfortunately still falls below the bar.""" 16,"""Reinforcement Learning with Chromatic Networks""","['reinforcement', 'learning', 'chromatic', 'networks', 'partitioning', 'efficient', 'neural', 'architecture', 'search', 'weight', 'sharing', 'compactification']","""We present a neural architecture search algorithm to construct compact reinforcement learning (RL) policies, by combining ENAS and ES in a highly scalable and intuitive way. By defining the combinatorial search space of NAS to be the set of different edge-partitionings (colorings) into same-weight classes, we represent compact architectures via efficient learned edge-partitionings. For several RL tasks, we manage to learn colorings translating to effective policies parameterized by as few as 17 weight parameters, providing >90% compression over vanilla policies and 6x compression over state-of-the-art compact policies based on Toeplitz matrices, while still maintaining good reward. We believe that our work is one of the first attempts to propose a rigorous approach to training structured neural network architectures for RL problems that are of interest especially in mobile robotics with limited storage and computational resources.""","""This paper describes a method for learning compact RL policies suitable for mobile robotic applications with limited storage. The proposed pipeline is a scalable combination of efficient neural architecture search (ENAS) and evolution strategies (ES). Empirical evaluations are conducted on various OpenAI Gym and quadruped locomotion tasks, producing policies with as few as tens of weight parameters, and significantly increased compression-reward trade-offs are obtained relative to some existing compact policies. Although reviewers appreciated certain aspects of this paper, after the rebuttal period there was no strong support for acceptance and several unsettled points were expressed. For example, multiple reviewers felt that additional baseline comparisons were warranted to better calibrate performance, e.g., random coloring, a wider range of generic compression methods, classic architecture search methods, etc. Moreover, one reviewer remained concerned that the scope of this work was limited to very tiny model sizes whereby, at least in many cases, running the uncompressed model might be adequate.""" 17,"""Learning transport cost from subset correspondence""",[],"""Learning to align multiple datasets is an important problem with many applications, and it is especially useful when we need to integrate multiple experiments or correct for confounding. Optimal transport (OT) is a principled approach to align datasets, but a key challenge in applying OT is that we need to specify a cost function that accurately captures how the two datasets are related. Reliable cost functions are typically not available and practitioners often resort to using hand-crafted or Euclidean cost even if it may not be appropriate. In this work, we investigate how to learn the cost function using a small amount of side information which is often available. The side information we consider captures subset correspondence---i.e. certain subsets of points in the two data sets are known to be related. For example, we may have some images labeled as cars in both datasets; or we may have a common annotated cell type in single-cell data from two batches. 
We develop an end-to-end optimizer (OT-SI) that differentiates through the Sinkhorn algorithm and effectively learns the suitable cost function from side information. In systematic experiments on images, marriage-matching and single-cell RNA-seq, our method substantially outperforms state-of-the-art benchmarks. ""","""The paper proposes an algorithm for learning a transport cost function that accurately captures how two datasets are related by leveraging side information such as a subset of correctly labeled points. The reviewers believe that this is an interesting and novel idea. There were several questions and comments, which the authors adequately addressed. I recommend that the paper be accepted.""" 18,"""Higher-Order Function Networks for Learning Composable 3D Object Representations""","['computer vision', '3d reconstruction', 'deep learning', 'representation learning']","""We present a new approach to 3D object representation where a neural network encodes the geometry of an object directly into the weights and biases of a second 'mapping' network. This mapping network can be used to reconstruct an object by applying its encoded transformation to points randomly sampled from a simple geometric space, such as the unit sphere. We study the effectiveness of our method through various experiments on subsets of the ShapeNet dataset. We find that the proposed approach can reconstruct encoded objects with accuracy equal to or exceeding state-of-the-art methods with orders of magnitude fewer parameters. Our smallest mapping network has only about 7000 parameters and shows reconstruction quality on par with state-of-the-art object decoder architectures with millions of parameters. Further experiments on feature mixing through the composition of learned functions show that the encoding captures a meaningful subspace of objects.""","""The submission presents an approach to single-view 3D reconstruction. The approach is quite creative and involves predicting the weights of a network that is then applied to a point set. The presentation is good. The experimental protocol is well-informed and the results are convincing. The reviewers' concerns have largely been addressed by the authors' responses and the revision. In particular, R2, who gave a ""3"", posted ""I would now advise to raise my score (3 previously) to be in line with the 6: Weak Accept given by the other reviewers."" This means that all three reviewers recommend accepting the paper. The AC agrees.""" 19,"""Augmenting Genetic Algorithms with Deep Neural Networks for Exploring the Chemical Space""","['Generative model', 'Chemical Space', 'Inverse Molecular Design']","""Challenges in natural sciences can often be phrased as optimization problems. Machine learning techniques have recently been applied to solve such problems. One example in chemistry is the design of tailor-made organic materials and molecules, which requires efficient methods to explore the chemical space. We present a genetic algorithm (GA) that is enhanced with a deep neural network (DNN) based discriminator model to improve the diversity of generated molecules and at the same time steer the GA. We show that our algorithm outperforms other generative models in optimization tasks. We furthermore present a way to increase the interpretability of genetic algorithms, which helped us to derive design principles.""","""Paper received reviews of A, WA, WR. AC has carefully read all reviews/responses. R1 is less experienced in this area. 
AC sides with R2 and R3 and feels the paper should be accepted. Interesting topic and interesting problem. The authors are encouraged to strengthen the experiments in the final version. """ 20,"""Deep 3D Pan via local adaptive ""t-shaped"" convolutions with global and local adaptive dilations""","['Deep learning', 'Stereoscopic view synthesis', 'Monocular depth', 'Deep 3D Pan']","""Recent advances in deep learning have shown promising results in many low-level vision tasks. However, solving single-image-based view synthesis is still an open problem. In particular, the generation of new images at parallel camera views given a single input image is of great interest, as it enables 3D visualization of the 2D input scenery. We propose a novel network architecture to perform stereoscopic view synthesis at arbitrary camera positions along the X-axis, or Deep 3D Pan, with t-shaped adaptive kernels equipped with globally and locally adaptive dilations. Our proposed network architecture, the monster-net, is devised with a novel t-shaped adaptive kernel with globally and locally adaptive dilation, which can efficiently incorporate the global camera shift and handle the local 3D geometries of the target image's pixels for the synthesis of natural-looking 3D panned views when a 2D input image is given. Extensive experiments were performed on the KITTI, CityScapes, and our VICLAB_STEREO indoor datasets to prove the efficacy of our method. Our monster-net significantly outperforms the state-of-the-art method (SOTA) by a large margin in all metrics of RMSE, PSNR, and SSIM. Our proposed monster-net is capable of reconstructing more reliable image structures in synthesized images with coherent geometry. Moreover, the disparity information that can be extracted from the t-shaped kernel is much more reliable than that of the SOTA for the unsupervised monocular depth estimation task, confirming the effectiveness of our method.""","""Two reviewers recommend acceptance while one is negative. The authors propose t-shaped kernels for view synthesis, focusing on stereo images. AC finds the problem and method interesting and the results to be sufficiently convincing to warrant acceptance.""" 21,"""Distance-Based Learning from Errors for Confidence Calibration""","['Confidence Calibration', 'Uncertainty Estimation', 'Prototypical Learning']","""Deep neural networks (DNNs) are poorly calibrated when trained in conventional ways. To improve the confidence calibration of DNNs, we propose a novel training method, distance-based learning from errors (DBLE). DBLE bases its confidence estimation on distances in the representation space. In DBLE, we first adapt prototypical learning to train classification models. It yields a representation space where the distance between a test sample and its ground truth class center can calibrate the model's classification performance. At inference, however, these distances are not available due to the lack of ground truth labels. To circumvent this, we propose to train a confidence model jointly with the classification model to infer the distance for every test sample. We integrate this into training by merely learning from misclassified training samples, which we show to be highly beneficial for effective learning. On multiple datasets and DNN architectures, we demonstrate that DBLE outperforms alternative single-model confidence calibration approaches. 
DBLE also achieves performance comparable to computationally expensive ensemble approaches, with lower computational cost and a lower number of parameters.""","""All reviewers voted to accept this paper. The AC recommends acceptance.""" 22,"""Partial Simulation for Imitation Learning""","['Reinforcement Learning', 'Imitation Learning', 'Behavior Cloning', 'Partial Simulation']","""Model-based imitation learning methods require full knowledge of the transition kernel for policy evaluation. In this work, we introduce the Expert Induced Markov Decision Process (eMDP) model as a formulation for solving imitation problems using Reinforcement Learning (RL), when only partial knowledge about the transition kernel is available. The idea of eMDP is to replace the unknown transition kernel with a synthetic kernel that: a) simulates the transition of state components for which the transition kernel is known (s_r), and b) extracts from demonstrations the state components for which the kernel is unknown (s_u). The next state is then stitched from the two components: s={s_r,s_u}. We describe in detail the recipe for building an eMDP and analyze the errors caused by its synthetic kernel. Our experiments include imitation tasks in multiplayer games, where the agent has to imitate one expert in the presence of other experts for whom we cannot provide a transition model. We show that combining a policy gradient algorithm with our model achieves superior performance compared to the simulation-free alternative.""","""The paper introduces the concept of an Expert Induced MDP (eMDP) to address imitation learning settings where environment dynamics are part known / part unknown. Based on the formulation, a model-based imitation learning approach is derived and the authors obtain theoretical guarantees. Empirical validation focuses on comparison to behavior cloning. Reviewers raised concerns about the size of the contribution. For example, it is unclear to what degree the assumptions made here would hold in practical settings.""" 23,"""On the Parameterization of Gaussian Mean Field Posteriors in Bayesian Neural Networks""","['variational Bayes', 'Bayesian neural networks', 'mean field']","""Variational Bayesian Inference is a popular methodology for approximating posterior distributions in Bayesian neural networks. Recent work developing this class of methods has explored ever richer parameterizations of the approximate posterior in the hope of improving performance. In contrast, here we share a curious experimental finding that suggests instead restricting the variational distribution to a more compact parameterization. For a variety of deep Bayesian neural networks trained using Gaussian mean-field variational inference, we find that the posterior standard deviations consistently exhibit strong low-rank structure after convergence. This means that by decomposing these variational parameters into a low-rank factorization, we can make our variational approximation more compact without decreasing the models' performance. What's more, we find that such factorized parameterizations are easier to train since they improve the signal-to-noise ratio of stochastic gradient estimates of the variational lower bound, resulting in faster convergence.""","""This paper proposes to reduce the number of variational parameters for mean-field VI. A low-rank approximation is used for this purpose. Results on a few small problems are reported. 
As R3 has pointed out, the main reason to reject this paper is the lack of comparison of uncertainty estimates. I also agree that recent Adam-like optimizers do use preconditioning that can be interpreted as variances, so it is not clear why reducing this will give better results. I agree with R2's comments about missing the ""point estimate"" baseline. Also, the reason for ranks 1, 2, 3 giving better accuracies is unclear, and I think the reasons provided by the authors are speculative. I do believe that reducing the parameterization is a reasonable idea and could be useful. But it is not clear if the proposal of this paper is the right one. For this reason, I recommend rejecting this paper. However, I highly encourage the authors to improve their paper taking these points into account.""" 24,"""GResNet: Graph Residual Network for Reviving Deep GNNs from Suspended Animation""","['Graph Neural Networks', 'Node Classification', 'Representation Learning']","""The existing graph neural networks (GNNs) based on the spectral graph convolutional operator have been criticized for their performance degradation, which is especially common for models with deep architectures. In this paper, we further identify the suspended animation problem with the existing GNNs. Such a problem happens when the model depth reaches the suspended animation limit, at which point the model no longer responds to the training data and becomes unlearnable. We provide an analysis of the causes of the suspended animation problem with existing GNNs, and also report several other peripheral factors that impact the problem. To resolve the problem, we introduce the GRESNET (Graph Residual Network) framework, which creates extensively connected highways to involve nodes' raw features or intermediate representations throughout the graph for all the model layers. Different from other learning settings, the extensive connections in graph data render the existing simple residual learning methods ineffective. We prove the effectiveness of the introduced new graph residual terms from the norm preservation perspective, which helps avoid dramatic changes to the nodes' representations between sequential layers. Detailed studies of the GRESNET framework for many existing GNNs, including GCN, GAT and LOOPYNET, are reported in the paper with extensive empirical experiments on real-world benchmark datasets.""","""This paper studies the suspended animation limit of various graph neural networks (GNNs) and provides some theoretical analysis to explain its cause. To overcome the limitation, the authors propose the Graph Residual Network (GRESNET) framework to involve nodes' raw features or intermediate representations throughout the graph for all the model layers. The main concern of the reviewers is that the assumption made for the theoretical analysis, that the fully connected layer is an identity mapping, is too stringent. The paper does not gather sufficient support from the reviewers to merit acceptance, even after author response and reviewer discussion. 
I thus recommend rejection.""" 25,"""DeepPCM: Predicting Protein-Ligand Binding using Unsupervised Learned Representations""","['Unsupervised Representation Learning', 'Computational biology', 'computational chemistry', 'protein-ligand binding']","""In-silico protein-ligand binding prediction is an ongoing area of research in computational chemistry and machine learning based drug discovery, as an accurate predictive model could greatly reduce the time and resources necessary for the detection and prioritization of possible drug candidates. Proteochemometric modeling (PCM) attempts to make an accurate model of the protein-ligand interaction space by combining explicit protein and ligand descriptors. This requires the creation of information-rich, uniform and computer-interpretable representations of proteins and ligands. Previous work in PCM modeling relies on pre-defined, handcrafted feature extraction methods, and many methods use protein descriptors that require alignment or are otherwise specific to a particular group of related proteins. However, recent advances in representation learning have shown that unsupervised machine learning can be used to generate embeddings which outperform complex, human-engineered representations. We apply this reasoning to propose a novel proteochemometric modeling methodology which, for the first time, uses embeddings generated via unsupervised representation learning for both the protein and ligand descriptors. We evaluate performance on various splits of a benchmark dataset, including a challenging split that tests the model's ability to generalize to proteins for which bioactivity data is greatly limited, and we find that our method consistently outperforms state-of-the-art methods.""","""This paper uses unsupervised learning to create useful representations to improve the performance of models in predicting protein-ligand binding. After reviewers had time to consider each other's comments, there was consensus that the current work is too lacking in novelty on the modeling side to warrant publication at ICLR. Additionally, the current experiments lack comparisons with important baselines. The work in its current form may be better suited for a domain journal. """ 26,"""A General Upper Bound for Unsupervised Domain Adaptation""","['unsupervised domain adaptation', 'upper bound', 'joint error', 'hypothesis space constraint', 'cross margin discrepancy']","""In this work, we present a novel upper bound on the target error to address the problem of unsupervised domain adaptation. Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks. Furthermore, Ben-David et al. (2010) provide an upper bound on the target error when transferring knowledge, which can be summarized as minimizing the source error and the distance between marginal distributions simultaneously. However, common methods based on this theory usually ignore the joint error, such that samples from different classes might be mixed together when matching marginal distributions. In such a case, no matter how we minimize the marginal discrepancy, the target error is not bounded due to an increasing joint error. To address this problem, we propose a general upper bound taking the joint error into account, such that the undesirable case can be properly penalized. 
In addition, we utilize a constrained hypothesis space to further derive a tighter bound, as well as a novel cross margin discrepancy to measure the dissimilarity between hypotheses, which alleviates instability during adversarial learning. Extensive empirical evidence shows that our proposal outperforms related approaches in image classification error rates on standard domain adaptation benchmarks.""","""Given two distributions, source and target, the paper presents an upper bound on the target risk of a classifier in terms of its source risk and other terms comparing the risk under the source/target input distribution and target/source labeling function. In the end, the bound is shown to be minimized by the true labeling function for the source, and at this minimum, the value of the bound is shown to also control the ""joint error"", i.e., the best achievable risk on both target and source by a single classifier. The point of the analysis is to go beyond the target risk bound presented by Ben-David et al. 2010 that is in terms of the discrepancy between the source and target and the performance of the source labeling function on the target or vice versa, whichever is smaller. Apparently, concrete domain adaptation methods ""based on"" the Ben-David et al. bound do not end up controlling the joint error. After various heuristic arguments, the authors develop an algorithm for unsupervised domain adaptation based on their bound in terms of a two-player game. Only one reviewer ended up engaging with the authors in a nontrivial way. This reviewer also argued for (weak) acceptance. Another reviewer mostly raised minor issues about grammar/style and got confused by the derivation of the ""general"" bound, which I've checked is ok. The third reviewer raised some issues around the realizability assumption and also asked for better understanding as to what aspects of the new proposal are responsible for the improved performance, e.g., via an ablation study. I'm sympathetic to reviewer 1, even though I wish they had engaged with the rebuttal. I don't believe the revision included any ablation study. I think this would improve the paper. I don't think the issues raised by reviewer 3 rise to the level of rejection, especially since their main technical concern is due to their own confusion. Reviewer 2 argues for weak acceptance. However, if there was support for this paper, it wasn't enough for reviewers to engage with each other, despite my encouragement, which was disappointing.""" 27,"""Mixture Distributions for Scalable Bayesian Inference""","['uncertainty estimation', 'Deep Ensembles', 'Adversarial Robustness']","""Bayesian Neural Networks (BNNs) provide a mathematically grounded framework to quantify uncertainty. However, BNNs are computationally inefficient and thus are generally not employed on complicated machine learning tasks. Deep Ensembles were introduced to the community as a bootstrap-inspired frequentist alternative to BNNs. Ensembles of deterministic and stochastic networks are good uncertainty estimators in various applications (although they are criticized for not being Bayesian). We show that Ensembles of deterministic and stochastic Neural Networks can indeed be cast as an approximate Bayesian inference. Deep Ensembles have another weakness: high space complexity. We provide an alternative by modifying the original Bayes by Backprop (BBB) algorithm to learn more general concrete mixture distributions over weights. 
We show that our method and its variants can give better uncertainty estimates at a significantly lower parametric overhead than Deep Ensembles. We validate our hypothesis through experiments like non-linear regression, predictive uncertainty estimation, detecting adversarial images and the exploration-exploitation trade-off in reinforcement learning.""","""This paper proposes to use mixture distributions to improve uncertainty estimates in BNNs. Ensemble methods are interpreted as a Bayesian mixture posterior approximation. To reduce the computation, a modification to BBB is provided based on a concrete mixture distribution. Both R1 and R3 have given useful feedback. It is clear that the interpretation of ensembles as a Bayesian posterior is well known, and some of these interpretations also have theoretical issues. An experiment clearly comparing the proposed mixture posterior to more commonly used mixture distributions is also necessary. Due to these reasons, I recommend to reject this paper. I encourage the authors to use the reviewers' feedback to improve the paper.""" 28,"""Inferring Dynamical Systems with Long-Range Dependencies through Line Attractor Regularization""","['Recurrent Neural Networks', 'Nonlinear State Space Models', 'Generative Models', 'Long short-term memory', 'vanishing/exploding gradient problem', 'Nonlinear dynamics', 'Interpretable machine learning', 'Time series analysis']","""Vanilla RNNs with ReLU activation have a simple structure that is amenable to systematic dynamical systems analysis and interpretation, but they suffer from the exploding vs. vanishing gradients problem. Recent attempts to retain this simplicity while alleviating the gradient problem are based on proper initialization schemes or orthogonality/unitary constraints on the RNN's recurrence matrix, which, however, comes with limitations to its expressive power with regards to dynamical systems phenomena like chaos or multi-stability. Here, we instead suggest a regularization scheme that pushes part of the RNN's latent subspace toward a line attractor configuration that enables long short-term memory and arbitrarily slow time scales. We show that our approach excels on a number of benchmarks like the sequential MNIST or multiplication problems, and enables reconstruction of dynamical systems which harbor widely different time scales.""","""The paper proposes an interesting idea: to keep a very simple form for the piecewise-linear RNN, but separate units into two types, one of which acts as memory. The ""memory"" units are penalized towards the line attractor parameters, i.e. making the corresponding diagonal elements of the recurrence matrix close to 1 and the off-diagonal elements close to 0. Benchmarks are presented that confirm the efficiency of the model. The reviewer opinions were mixed: one ""1"", one ""3"" and one ""6""; Reviewer 1 is far too negative and some of his claims are not very constructive, while the ""positive"" reviewer is very brief. Finally, the last reviewer raised a question about the actual quality of the results. This is not addressed. Although there is a motivation for such partial regularization, the main practical question is how many ""memory"" neurons are needed. I looked through the paper - this is addressed only in the supplementary, where the number of regularized units is set to 0.5 M. When this number equals M, the scheme reduces to the L2 penalty; what happens if the fraction is 0.1, 0.2, ... and more? A very crucial hyperparameter (and of course, smart selection of it cannot be worse than L2RNN). This study is lacking. 
In my opinion, one can also introduce weights and sparsity constraints on them (in order to detect the number of ""memory"" neurons more or less automatically). Although I feel this paper has potential, it is still not ready for publication and could be significantly improved.""" 29,"""Quaternion Equivariant Capsule Networks for 3D Point Clouds""","['3d', 'capsule networks', 'pointnet', 'quaternion', 'equivariant networks', 'rotations', 'local reference frame']","""We present a 3D capsule architecture for processing point clouds that is equivariant with respect to the SO(3) rotation group, translation and permutation of the unordered input sets. The network operates on a sparse set of local reference frames, computed from an input point cloud, and establishes end-to-end equivariance through a novel 3D quaternion group capsule layer, including an equivariant dynamic routing procedure. The capsule layer enables us to disentangle geometry from pose, paving the way for more informative descriptions and a structured latent space. In the process, we theoretically connect dynamic routing between capsules to the well-known Weiszfeld algorithm, a scheme for solving iterative re-weighted least squares (IRLS) problems with provable convergence properties, enabling robust pose estimation between capsule layers. Due to the sparse equivariant quaternion capsules, our architecture allows joint object classification and orientation estimation, which we validate empirically on common benchmark datasets. ""","""This paper presents a capsule network to handle 3D point clouds which is equivariant to SO(3) rotations. It also provides a theoretical analysis connecting the dynamic routing approach to the Generalized Weiszfeld Iterations. The equivariant property of the method is demonstrated on classification and orientation estimation tasks for 3D shapes. While the technical contribution of the method is sound, the main concern raised by the reviewers was the lack of detail in the presentation of the methodology and results. Although the authors have made substantial efforts to update the paper, some reviewers were still not convinced and thus the scores remained the same. The paper was on the very borderline, but because of the limited capacity, I regret that I have to recommend rejection. Invariances and equivariances are indeed important topics in representation learning, for which the capsule network is known as one of the promising approaches but is still not well investigated compared to other standard architectures. I encourage the authors to resubmit the paper, taking into account the reviewers' comments.""" 30,"""Structured consistency loss for semi-supervised semantic segmentation""","['semi-supervised learning', 'semantic segmentation', 'structured prediction', 'structured consistency loss']","""The consistency loss has played a key role in solving problems in recent studies on semi-supervised learning. Yet extant studies with the consistency loss are limited to its application to classification tasks; extant studies on semi-supervised semantic segmentation rely on pixel-wise classification, which does not reflect the structured nature of the prediction. We propose a structured consistency loss to address this limitation of extant studies. Structured consistency loss promotes consistency in inter-pixel similarity between teacher and student networks.
Specifically, combining structured consistency loss with CutMix enables efficient semi-supervised semantic segmentation by dramatically reducing the computational burden. The superiority of the proposed method is verified on Cityscapes: the benchmark results on validation and test data are 81.9 mIoU and 83.84 mIoU, respectively. This ranks first on the pixel-level semantic labeling task of the Cityscapes benchmark suite. To the best of our knowledge, we are the first to demonstrate the superiority of state-of-the-art semi-supervised learning in semantic segmentation.""","""This submission proposes to combine the CutMix data augmentation of Yun et al 2019 with a standard consistency loss and the structured consistency loss of Liu et al 2019, and applies the resulting approach to the Cityscapes dataset. The reviewers were unanimous that the paper is not suitable for publication at ICLR due to a lack of novelty in the method. No rebuttal was provided.""" 31,"""Phase Transitions for the Information Bottleneck in Representation Learning""","['Information Theory', 'Representation Learning', 'Phase Transition']","""In the Information Bottleneck (IB), when tuning the relative strength between compression and prediction terms, how do the two terms behave, and what's their relationship with the dataset and the learned representation? In this paper, we set out to answer these questions by studying multiple phase transitions in the IB objective: IB_\beta[p(z|x)] = I(X; Z) - \beta I(Y; Z), defined on the encoding distribution p(z|x) for input X, target Y and representation Z, where sudden jumps of dI(Y; Z)/d\beta and prediction accuracy are observed with increasing \beta. We introduce a definition for IB phase transitions as a qualitative change of the IB loss landscape, and show that the transitions correspond to the onset of learning new classes. Using second-order calculus of variations, we derive a formula that provides a practical condition for IB phase transitions, and draw its connection with the Fisher information matrix for parameterized models. We provide two perspectives to understand the formula, revealing that each IB phase transition is finding a component of maximum (nonlinear) correlation between X and Y orthogonal to the learned representation, in close analogy with canonical-correlation analysis (CCA) in linear settings. Based on the theory, we present an algorithm for discovering phase transition points. Finally, we verify that our theory and algorithm accurately predict phase transitions in categorical datasets, predict the onset of learning new classes and class difficulty in MNIST, and predict prominent phase transitions in CIFAR10. ""","""This submission presents a theoretical study of phase transitions in IB: adjusting the IB parameter leads to step-wise behaviour of the prediction. Quoting R3: The core result is given by theorem 1: the phase transition betas necessarily satisfy an equation, where the LHS is expressed in terms of an optimal perturbation of the encoding function X->Z. This paper received a borderline review and two votes for weak accept. The main comment for the borderline review was about the rigor of a proof and the use of << symbols. The authors have updated the proof using limits as requested, addressing this primary concern. On balance, the paper makes a strong contribution to understanding an important learning setting and to the theoretical understanding of the behavior of information bottleneck predictors.
""" 32,"""Masked Based Unsupervised Content Transfer""",[],"""We consider the problem of translating, in an unsupervised manner, between two domains where one contains some additional information compared to the other. The proposed method disentangles the common and separate parts of these domains and, through the generation of a mask, focuses the attention of the underlying network to the desired augmentation alone, without wastefully reconstructing the entire target. This enables state-of-the-art quality and variety of content translation, as demonstrated through extensive quantitative and qualitative evaluation. Our method is also capable of adding the separate content of different guide images and domains as well as remove existing separate content. Furthermore, our method enables weakly-supervised semantic segmentation of the separate part of each domain, where only class labels are provided. Our code is available at pseudo-url. ""","""This paper extends the prior work on disentanglement and attention guided translation to instance-based unsupervised content transfer. The method is somewhat complicated, with five different networks and a multi-component loss function, however the importance of each component appears to be well justified in the ablation study. Overall the reviewers agree that the experimental section is solid and supports the proposed method well. It demonstrates good performance across a number of transfer tasks, including transfer to out-of-domain images, and that the method outperforms the baselines. For these reasons, I recommend the acceptance of this paper.""" 33,"""Self-Imitation Learning via Trajectory-Conditioned Policy for Hard-Exploration Tasks""","['imitation learning', 'hard-exploration tasks', 'exploration and exploitation']","""Imitation learning from human-expert demonstrations has been shown to be greatly helpful for challenging reinforcement learning problems with sparse environment rewards. However, it is very difficult to achieve similar success without relying on expert demonstrations. Recent works on self-imitation learning showed that imitating the agent's own past good experience could indirectly drive exploration in some environments, but these methods often lead to sub-optimal and myopic behavior. To address this issue, we argue that exploration in diverse directions by imitating diverse trajectories, instead of focusing on limited good trajectories, is more desirable for the hard-exploration tasks. We propose a new method of learning a trajectory-conditioned policy to imitate diverse trajectories from the agent's own past experiences and show that such self-imitation helps avoid myopic behavior and increases the chance of finding a globally optimal solution for hard-exploration tasks, especially when there are misleading rewards. Our method significantly outperforms existing self-imitation learning and count-based exploration methods on various hard-exploration tasks with local optima. In particular, we report a state-of-the-art score of more than 20,000 points on Montezumas Revenge without using expert demonstrations or resetting to arbitrary states.""","""This paper addresses the problem of exploration in challenging RL environments using self-imitation learning. The idea behind the proposed approach is for the agent to imitate a diverse set of its own past trajectories. To achieve this, the authors introduce a policy conditioned on trajectories. 
The proposed approach is evaluated on various domains including Atari Montezuma's Revenge and MuJoCo. Given that the evaluation is purely empirical, the major concern is in the design of experiments. The amount of stochasticity induced by the random initial state alone does not lead to convincing results regarding the performance of the proposed approach compared with baselines (e.g. Go-Explore). With such simple stochasticity, it is not clear why one could not use a model to recover from it and then rely on an existing technique like Go-Explore. Although this paper tackles an important problem (hard-exploration RL tasks), all reviewers agreed that this limitation is crucial, and I therefore recommend rejecting this paper.""" 34,"""Mirror-Generative Neural Machine Translation""","['neural machine translation', 'generative model', 'mirror']",""" Training neural machine translation (NMT) models requires large parallel corpora, which are scarce for many language pairs. However, raw non-parallel corpora are often easy to obtain. Existing approaches have not exploited the full potential of non-parallel bilingual data, either in training or decoding. In this paper, we propose mirror-generative NMT (MGNMT), a single unified architecture that simultaneously integrates the source-to-target translation model, the target-to-source translation model, and two language models. Both translation models and language models share the same latent semantic space; therefore, both translation directions can learn from non-parallel data more effectively. Moreover, the translation models and language models can collaborate during decoding. Our experiments show that the proposed MGNMT consistently outperforms existing approaches in a variety of scenarios and language pairs, including resource-rich and low-resource languages. ""","""This paper proposes a novel method for considering translations in both directions within the framework of generative neural machine translation, significantly improving accuracy. All three reviewers appreciated the paper, although they noted that the gains were somewhat small for the increased complexity of the model. Nonetheless, the baselines presented are already quite competitive, so improvements on these datasets are likely to never be extremely large. Overall, I found this to be a quite nice paper, and strongly recommend acceptance, perhaps as an oral presentation.""" 35,"""Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking""","['Adversarial examples', 'object detection', 'object tracking', 'security', 'autonomous vehicle', 'deep learning']","""Recent work in adversarial machine learning has started to focus on visual perception in autonomous driving and has studied Adversarial Examples (AEs) for object detection models. However, in such a visual perception pipeline the detected objects must also be tracked, in a process called Multiple Object Tracking (MOT), to build the moving trajectories of surrounding obstacles. Since MOT is designed to be robust against errors in object detection, it poses a general challenge to existing attack techniques that blindly target object detection: we find that a success rate of over 98% is needed for them to actually affect the tracking results, a requirement that no existing attack technique can satisfy.
In this paper, we are the first to study adversarial machine learning attacks against the complete visual perception pipeline in autonomous driving, and we discover a novel attack technique, tracker hijacking, that can effectively fool MOT using AEs on object detection. Using our technique, successful AEs on as few as a single frame can move an existing object into or out of the headway of an autonomous vehicle to cause potential safety hazards. We perform an evaluation using the Berkeley Deep Drive dataset and find that, on average, when 3 frames are attacked, our attack can have a nearly 100% success rate, while attacks that blindly target object detection only have up to 25%.""","""The reviewers agree after reading the rebuttal that attacks on MOT are novel. While the datasets used are small, and the attacks are generated in digital simulation rather than the physical world, this paper still demonstrates an interesting attack on a realistic system.""" 36,"""StructPool: Structured Graph Pooling via Conditional Random Fields""","['Graph Pooling', 'Representation Learning', 'Graph Analysis']","""Learning high-level representations for graphs is of great importance for graph analysis tasks. In addition to graph convolution, graph pooling is an important but less explored research area. In particular, most existing graph pooling techniques do not consider graph structural information explicitly. We argue that such information is important and develop a novel graph pooling technique, known as StructPool, in this work. We consider graph pooling as a node clustering problem, which requires the learning of a cluster assignment matrix. We propose to formulate it as a structured prediction problem and employ conditional random fields to capture the relationships among assignments of different nodes. We also generalize our method to incorporate graph topological information in designing the Gibbs energy function. Experimental results on multiple datasets demonstrate the effectiveness of our proposed StructPool.""","""The paper proposes an operation called StructPool for graph pooling by treating it as a node clustering problem (assigning a label from 1..k to each node) and then using a pairwise CRF structure to jointly infer these labels. The reviewers all think that this is a well-written paper, and the experimental results are adequate to back up the claim that StructPool offers advantages over other graph-pooling operations. Even though the idea of the presented method is simple and it does add more (albeit by a constant factor) to the computational burden of graph neural networks, I think this would make a valuable addition to the literature.""" 37,"""Understanding the Limitations of Variational Mutual Information Estimators""",[],"""Variational approaches based on neural networks are showing promise for estimating mutual information (MI) between high dimensional variables. However, they can be difficult to use in practice due to poorly understood bias/variance tradeoffs. We theoretically show that, under some conditions, estimators such as MINE exhibit variance that could grow exponentially with the true amount of underlying MI. We also empirically demonstrate that existing estimators fail to satisfy basic self-consistency properties of MI, such as data processing and additivity under independence. Based on a unified perspective of variational approaches, we develop a new estimator that focuses on variance reduction.
Empirical results demonstrate that our proposed estimator exhibits improved bias-variance trade-offs on standard benchmark tasks.""","""This paper presents a critical appraisal of variational mutual information estimators, suggests a slight variance-reducing improvement based on clipping density ratio estimates, and proves that this reduces variance (at the cost of bias). The authors also propose a set of criteria they term ""self-consistency"" for the evaluation of MI estimators, and show convincingly that variational MI estimators fall short with respect to these. Reviewers were generally positive about the contribution, and were happy with improvements made. While somewhat limited in scope, I believe this is nonetheless a valuable contribution to the conversation surrounding mutual information objectives that have become popular recently. I therefore recommend acceptance.""" 38,"""Efficient Content-Based Sparse Attention with Routing Transformers""","['Sparse attention', 'autoregressive', 'generative models']","""Self-attention has recently been adopted for a wide range of sequence modeling problems. Despite its effectiveness, self-attention suffers from quadratic compute and memory requirements with respect to sequence length. Successful approaches to reduce this complexity have focused on attending to local sliding windows or a small set of locations independent of content. Our work proposes to learn dynamic sparse attention patterns that avoid allocating computation and memory to attend to content unrelated to the query of interest. This work builds upon two lines of research: it combines the modeling flexibility of prior work on content-based sparse attention with the efficiency gains from approaches based on local, temporal sparse attention. Our model, the Routing Transformer, endows self-attention with a sparse routing module based on online k-means while reducing the overall complexity of attention to O(n^{1.5}d) from O(n^2d) for sequence length n and hidden dimension d. We show that our model outperforms comparable sparse attention models on language modeling on Wikitext-103 (15.8 vs 18.3 perplexity) as well as on image generation on ImageNet-64 (3.43 vs 3.44 bits/dim) while using fewer self-attention layers. Code will be open-sourced on acceptance.""","""This paper proposes a new model, the Routing Transformer, which endows self-attention with a sparse routing module based on online k-means while reducing the overall complexity of attention from O(n^2) to O(n^1.5). The model attained very good performance on WikiText-103 (in terms of perplexity) and similar performance to baselines (published numbers) on two other tasks. Even though the problem addressed (reducing the quadratic complexity of self-attention) is extremely relevant and the proposed approach is very intuitive and interesting, the reviewers raised some concerns, notably: - How efficient is the proposed approach in practice? Even though the theoretical complexity is reduced, more modules were introduced (e.g., forced clustering, mix of local heads and clustering heads, sorting, etc.) - Why is W_R fixed random? Since W_R is orthogonal, it's just a random (generalized) ""rotation"" (performed on the word embedding space). Does this really provide sensible ""routing""? - The experimental section could be improved to better understand the impact of the proposed method. Adding ablations, as suggested by the reviewers, would be an important part of this work.
- Not clear why the work needs to be motivated through NMF, since the proposed method uses k-means. Unfortunately, several points raised by the reviewers (except R2) were not addressed in the author rebuttal, and therefore it is not clear if some of the raised issues are fixable in camera-ready time, which prevents me from recommending this paper for acceptance. However, I *do* think the proposed approach is very interesting and has great potential, once these points are clarified. The gains obtained in WikiText-103 are promising. Therefore, I strongly encourage the authors to resubmit this paper taking into account the suggestions made by the reviewers. """ 39,"""Discourse-Based Evaluation of Language Understanding""","['Natural Language Understanding', 'Pragmatics', 'Discourse', 'Semantics', 'Evaluation', 'BERT', 'Natural Language Processing']","""New models for natural language understanding have made unusual progress recently, leading to claims of universal text representations. However, current benchmarks are predominantly targeting semantic phenomena; we make the case that discourse and pragmatics need to take center stage in the evaluation of natural language understanding. We introduce DiscEval, a new benchmark for the evaluation of natural language understanding, that unites 11 discourse-focused evaluation datasets. DiscEval can be used as supplementary training data in a multi-task learning setup, and is publicly available, alongside the code for gathering and preprocessing the datasets. Using our evaluation suite, we show that natural language inference, a widely used pretraining task, does not result in genuinely universal representations, which opens a new challenge for multi-task learning.""","""This paper proposes a new benchmark to evaluate natural language processing models on discourse-related tasks based on existing datasets that are not available in other benchmarks (SentEval/GLUE/SuperGLUE). The authors also provide a set of baselines based on BERT, ELMo, and others, and estimates of human performance for some tasks. I think this has the potential to be a valuable resource to the research community, but I am not sure that it is the best fit for a conference such as ICLR. R3 also raises a valid concern regarding the performance of fine-tuned BERT, which is comparable to human estimates on more than half of the tasks (3 out of 5), which slightly weakens the main motivation of having this new benchmark. My main suggestion to the authors is to have a very solid motivation for the new benchmark, including the reason for inclusion of each of the tasks. I believe that this is important to encourage the community to adopt it. For something like this, it would be nice (although not necessary) to have a clean website for submissions as well. I believe that someone who proposes a new benchmark needs to do the best they can to make it easy for other people to use it. Due to the above issues and space constraints, I recommend rejecting the paper.""" 40,"""Learning World Graph Decompositions To Accelerate Reinforcement Learning""","['environment decomposition', 'subgoal discovery', 'generative modeling', 'reinforcement learning', 'unsupervised learning']","""Efficiently learning to solve tasks in complex environments is a key challenge for reinforcement learning (RL) agents.
We propose to decompose a complex environment using a task-agnostic world graph, an abstraction that accelerates learning by enabling agents to focus exploration on a subspace of the environment. The nodes of a world graph are important waypoint states, and edges represent feasible traversals between them. Our framework has two learning phases: 1) identifying world graph nodes and edges by training a binary recurrent variational auto-encoder (VAE) on trajectory data, and 2) training a hierarchical RL framework that leverages structural and connectivity knowledge from the learned world graph to bias exploration towards task-relevant waypoints and regions. We show that our approach significantly accelerates RL on a suite of challenging 2D grid world tasks: compared to baselines, world graph integration doubles achieved rewards on simpler tasks, e.g. MultiGoal, and manages to solve more challenging tasks, e.g. Door-Key, where baselines fail.""","""This paper introduces an approach for structured exploration based on graph-based representations. While a number of the ideas in the paper are quite interesting and relevant to the ICLR community, the reviewers were generally in agreement about several concerns, which were discussed after the author response. These concerns include the ad-hoc nature of the approach, the limited technical novelty, and the difficulty of the experimental domains (and whether the approach could be applied to a more general class of challenging long-horizon problems such as those in prior works). Overall, the paper is not quite ready for publication at ICLR.""" 41,"""Principled Weight Initialization for Hypernetworks""","['hypernetworks', 'initialization', 'optimization', 'meta-learning']","""Hypernetworks are meta neural networks that generate weights for a main neural network in an end-to-end differentiable manner. Despite extensive applications ranging from multi-task learning to Bayesian deep learning, the problem of optimizing hypernetworks has not been studied to date. We observe that classical weight initialization methods like Glorot & Bengio (2010) and He et al. (2015), when applied directly to a hypernet, fail to produce weights for the mainnet in the correct scale. We develop principled techniques for weight initialization in hypernets, and show that they lead to more stable mainnet weights, lower training loss, and faster convergence.""","""All the reviewers agreed that this was a sensible application of mostly existing ideas from standard neural net initialization to the setting of hypernetworks. The main criticism was that this method was used to improve existing applications of hypernets, instead of extending their limits of applicability.""" 42,"""Zero-Shot Out-of-Distribution Detection with Feature Correlations""","['out-of-distribution', 'gram matrices', 'classification', 'out-of-distribution detection']","""When presented with Out-of-Distribution (OOD) examples, deep neural networks yield confident, incorrect predictions. Detecting OOD examples is challenging, and the potential risks are high. In this paper, we propose to detect OOD examples by identifying inconsistencies between activity patterns and the predicted class. We find that characterizing activity patterns by feature correlations and identifying anomalies in pairwise feature correlation values can yield high OOD detection rates. We identify anomalies in the pairwise feature correlations by simply comparing each pairwise correlation value with its respective range observed over the training data.
Unlike many approaches, this can be used with any pre-trained softmax classifier and does not require access to OOD data for fine-tuning hyperparameters, nor does it require OOD access for inferring parameters. The method is applicable across a variety of architectures and vision datasets and generally performs better than or equal to state-of-the-art OOD detection methods, including those that do assume access to OOD examples.""","""The paper proposes a new scoring function for OOD detection based on calculating the total deviation of the pairwise feature correlations. This is an important problem that is of general interest in our community. Reviewer 2 found the paper to be clear, and provided a set of weaknesses relating to the lack of explanation of performance and the need for more careful ablations, along with a set of strategies to address them. Reviewer 1 recognized the importance of being useful for pretrained networks but also raised questions of explanation and theoretical motivation. Reviewer 3 was extremely supportive, and used the authors' code to highlight the difference between far-from-distribution behaviour versus near-distribution OOD examples. The authors provided detailed responses to all points raised and provided additional evidence. There was no convergence of the review recommendations. The review process added much more clarity to the paper, and it is now a better paper. The paper demonstrates all the features of a good paper, but unfortunately did not yet reach the level required for acceptance at this conference. """ 43,"""Global Momentum Compression for Sparse Communication in Distributed SGD""","['Distributed momentum SGD', 'Communication compression']","""With the rapid growth of data, distributed stochastic gradient descent (DSGD) has been widely used for solving large-scale machine learning problems. Due to the latency and limited bandwidth of the network, communication has become the bottleneck of DSGD when we need to train large-scale models, like deep neural networks. Communication compression with sparsified gradients, abbreviated as sparse communication, has been widely used for reducing the communication cost in DSGD. Recently, a method called deep gradient compression (DGC) has appeared, which combines the memory gradient and momentum SGD for sparse communication. DGC has achieved promising performance in practice. However, the theory of DGC's convergence is lacking. In this paper, we propose a novel method, called global momentum compression (GMC), for sparse communication in DSGD. GMC also combines the memory gradient and momentum SGD, but unlike DGC, which adopts local momentum, GMC adopts global momentum. We theoretically prove the convergence rate of GMC for both convex and non-convex problems. To the best of our knowledge, this is the first work that proves the convergence of distributed momentum SGD (DMSGD) with sparse communication and memory gradient. Empirical results show that, compared with the DMSGD counterpart without sparse communication, GMC can reduce the communication cost by approximately 100 fold without loss of generalization accuracy. GMC can also achieve comparable (sometimes better) performance compared with DGC, with an extra theoretical guarantee.""","""The authors propose a method called global momentum compression for the sparse communication setting, and provide some theoretical results on the convergence rate.
The convergence result is interesting, but the underlying assumptions used in the analysis appear very strong. Moreover, the proposed algorithm has limited novelty, as it is only a minor modification. Another main concern is that the proposed algorithm shows little performance improvement in the experiments. In addition, more related algorithms should be included in the experimental comparison.""" 44,"""Symplectic ODE-Net: Learning Hamiltonian Dynamics with Control""","['Deep Model Learning', 'Physics-based Priors', 'Control of Mechanical Systems']","""In this paper, we introduce Symplectic ODE-Net (SymODEN), a deep learning framework which can infer the dynamics of a physical system, given by an ordinary differential equation (ODE), from observed state trajectories. To achieve better generalization with fewer training samples, SymODEN incorporates appropriate inductive bias by designing the associated computation graph in a physics-informed manner. In particular, we enforce Hamiltonian dynamics with control to learn the underlying dynamics in a transparent way, which can then be leveraged to draw insight about relevant physical aspects of the system, such as mass and potential energy. In addition, we propose a parametrization which can enforce this Hamiltonian formalism even when the generalized coordinate data is embedded in a high-dimensional space or we can only access velocity data instead of generalized momentum. This framework, by offering interpretable, physically-consistent models for physical systems, opens up new possibilities for synthesizing model-based control strategies.""","""This paper proposes a novel method for learning Hamiltonian dynamics from data. The data is obtained from systems subjected to an external control signal. The authors show the utility of their method for subsequent improved control in a reinforcement learning setting. The paper is well written, the method is derived from first principles, and the experimental validation is solid. The authors were also able to take into account the reviewers' feedback and further improve their paper during the discussion period. Overall, all of the reviewers agree that this is a great contribution to the field, and hence I am happy to recommend acceptance.""" 45,"""Learning from Explanations with Neural Execution Tree""",[],"""While deep neural networks have achieved impressive performance on a range of NLP tasks, these data-hungry models heavily rely on labeled data, which restricts their applications in scenarios where data annotation is expensive. Natural language (NL) explanations have been demonstrated to be very useful additional supervision, as they can provide sufficient domain knowledge for generating more labeled data over new instances, while the annotation time only doubles. However, directly applying them to augment model learning encounters two challenges: (1) NL explanations are unstructured and inherently compositional, which calls for a modularized model to represent their semantics, (2) NL explanations often have large numbers of linguistic variants, resulting in low recall and limited generalization ability. In this paper, we propose a novel Neural Execution Tree (NExT) framework to augment training data for text classification using NL explanations. After transforming NL explanations into executable logical forms by semantic parsing, NExT generalizes different types of actions specified by the logical forms for labeling data instances, which substantially increases the coverage of each NL explanation.
Experiments on two NLP tasks (relation extraction and sentiment analysis) demonstrate its superiority over baseline methods. Its extension to multi-hop question answering achieves performance gains with light annotation effort.""","""This paper, proposing a framework for augmenting classification systems with explanations, was very well received by two reviewers, with one reviewer labeling themselves as ""perfectly neutral"". I see no reason not to recommend acceptance.""" 46,"""Effective Use of Variational Embedding Capacity in Expressive End-to-End Speech Synthesis""","['Speech Synthesis', 'Deep Generative Models', 'Latent Variable Models', 'Unsupervised Representation Learning']","""Recent work has explored sequence-to-sequence latent variable models for expressive speech synthesis (supporting control and transfer of prosody and style), but has not presented a coherent framework for understanding the trade-offs between the competing methods. In this paper, we propose embedding capacity (the amount of information the embedding contains about the data) as a unified method of analyzing the behavior of latent variable models of speech, comparing existing heuristic (non-variational) methods to variational methods that are able to explicitly constrain capacity using an upper bound on representational mutual information. In our proposed model (Capacitron), we show that by adding conditional dependencies to the variational posterior such that it matches the form of the true posterior, the same model can be used for high-precision prosody transfer, text-agnostic style transfer, and generation of natural-sounding prior samples. For multi-speaker models, Capacitron is able to preserve target speaker identity during inter-speaker prosody transfer and when drawing samples from the latent prior. Lastly, we introduce a method for decomposing embedding capacity hierarchically across two sets of latents, allowing a portion of the latent variability to be specified and the remaining variability sampled from a learned prior. Audio examples are available on the web.""","""This paper investigates variational models of speech for synthesis, and in particular ways of making them more controllable for a variety of synthesis tasks (e.g. prosody transfer, style transfer). They propose to do this via a modified VAE objective that imposes a learnable weight on the KL term, as well as using a hierarchical decomposition of latent variables. The paper shows promising results and includes a good amount of analysis, and should be very interesting for speech synthesis researchers. However, there is not much novelty from a machine learning perspective. Therefore, I think the paper is not a great fit for ICLR and is better suited for a speech conference/journal.""" 47,"""Angular Visual Hardness""","['angular similarity', 'self-training', 'hard samples mining']","""The mechanisms behind human visual systems and convolutional neural networks (CNNs) are vastly different. Hence, it is expected that they have different notions of ambiguity or hardness. In this paper, we make a surprising discovery: there exists a (nearly) universal score function for CNNs whose correlation with human visual hardness is statistically significant. We term this function angular visual hardness (AVH); in a CNN, it is given by the normalized angular distance between a feature embedding and the classifier weights of the corresponding target category. We conduct an in-depth scientific study.
We observe that CNN models with the highest accuracy also have the best AVH scores. This agrees with an earlier finding that state-of-the-art models tend to improve on the classification of harder training examples. We find that AVH displays interesting dynamics during training: it quickly reaches a plateau even though the training loss keeps improving. This suggests the need for designing better loss functions that can target harder examples more effectively. Finally, we empirically show significant improvement in performance by using AVH as a measure of hardness in self-training tasks. ""","""This paper proposes a new measure for CNNs and shows its correlation to human visual hardness. The topic of this paper is interesting, and it sparked many interesting discussions among reviewers. After reviewing each other's comments, the reviewers decided to recommend rejection due to a few severe concerns that are yet to be addressed. In particular, reviewers 1 and 2 both raised concerns about potentially misleading and perhaps confusing statements around the correlation between HSF and accuracy. A concrete step was suggested by a reviewer - reporting the correlation between accuracy and HSF. A few other points were raised around its conflict/agreement with prior work [RRSS19], and self-contradictory statements as pointed out by reviewers 1 and 2 (see reviewer 2's comment). We hope the authors will use this helpful feedback to improve the paper for a future submission. """ 48,"""On importance-weighted autoencoders""","['variational inference', 'autoencoders', 'importance sampling']","""The importance weighted autoencoder (IWAE) (Burda et al., 2016) is a popular variational-inference method which achieves a tighter evidence bound (and hence a lower bias) than standard variational autoencoders by optimising a multi-sample objective, i.e. an objective that is expressible as an integral over K > 1 Monte Carlo samples. Unfortunately, IWAE crucially relies on the availability of reparametrisations and even if these exist, the multi-sample objective leads to inference-network gradients which break down as K is increased (Rainforth et al., 2018). This breakdown can only be circumvented by removing high-variance score-function terms, either by heuristically ignoring them (which yields the 'sticking-the-landing' IWAE (IWAE-STL) gradient from Roeder et al. (2017)) or through an identity from Tucker et al. (2019) (which yields the 'doubly-reparametrised' IWAE (IWAE-DREG) gradient). In this work, we argue that directly optimising the proposal distribution in importance sampling as in the reweighted wake-sleep (RWS) algorithm from Bornschein & Bengio (2015) is preferable to optimising IWAE-type multi-sample objectives. To formalise this argument, we introduce an adaptive-importance sampling framework termed adaptive importance sampling for learning (AISLE) which slightly generalises the RWS algorithm. We then show that AISLE admits IWAE-STL and IWAE-DREG (i.e. the IWAE-gradients which avoid breakdown) as special cases.""","""The authors argue that directly optimizing the IS proposal distribution as in RWS is preferable to optimizing the IWAE multi-sample objective. They formalize this with an adaptive IS framework, AISLE, that generalizes RWS, IWAE-STL and IWAE-DREG. Generally, reviewers found the paper to be well-written and the connections drawn in this paper interesting.
However, all reviewers raised concerns about the lack of experiments (Reviewer 3 suggested several experiments that could be done to clarify remaining questions) and practical takeaways. The authors responded by explaining that ""the main ""practical"" takeaway from our work is the following: If one is interested in the bias-reduction potential offered by IWAEs over plain VAEs then the adaptive importance-sampling framework appears to be a better starting point for designing new algorithms than the specific multi-sample objective used by IWAE. This is because the former retains all of the benefits of the latter without inheriting its drawbacks."" I did not find this argument convincing, as a primary advantage of variational approaches over WS is that the variational approach optimizes a unified objective. At least in principle, this is a serious drawback of the WS approaches. Experiments and/or a discussion of this is warranted. This paper is borderline, and unfortunately, due to the high number of quality submissions this year, I have to recommend rejection at this point. """ 49,"""Symplectic Recurrent Neural Networks""","['Hamiltonian systems', 'learning physical laws', 'symplectic integrators', 'recurrent neural networks', 'inverse problems']","""We propose Symplectic Recurrent Neural Networks (SRNNs) as learning algorithms that capture the dynamics of physical systems from observed trajectories. SRNNs model the Hamiltonian function of the system by a neural network, and leverage symplectic integration, multiple-step training and initial state optimization to address the challenging numerical issues associated with Hamiltonian systems. We show that SRNNs succeed reliably on complex and noisy Hamiltonian systems. Finally, we show how to augment the SRNN integration scheme in order to handle stiff dynamical systems such as bouncing billiards.""","""This paper proposes a novel architecture for learning Hamiltonian dynamics from data. The model outperforms the existing state-of-the-art Hamiltonian Neural Networks on challenging physical datasets. It also goes further by proposing a way to deal with observation noise and a way to model stiff dynamical systems, like bouncing balls. The paper is well written, the model works well and the experimental evaluation is solid. All reviewers agree that this is an excellent contribution to the field, hence I am happy to recommend acceptance as an oral.""" 50,"""SpikeGrad: An ANN-equivalent Computation Model for Implementing Backpropagation with Spikes""","['spiking neural network', 'neuromorphic engineering', 'backpropagation']","""Event-based neuromorphic systems promise to reduce the energy consumption of deep neural networks by replacing expensive floating point operations on dense matrices by low energy, sparse operations on spike events. While these systems can be trained increasingly well using approximations of the backpropagation algorithm, this usually requires high-precision errors and is therefore incompatible with the typical communication infrastructure of neuromorphic circuits. In this work, we analyze how the gradient can be discretized into spike events when training a spiking neural network. To accelerate our simulation, we show that using a special implementation of the integrate-and-fire neuron allows us to describe the accumulated activations and errors of the spiking neural network in terms of an equivalent artificial neural network, allowing us to largely speed up training compared to an explicit simulation of all spike events.
This way, we are able to demonstrate that even for deep networks, the gradients can be discretized sufficiently well with spikes if the gradient is properly rescaled. This form of spike-based backpropagation enables us to achieve equivalent or better accuracies on the MNIST and CIFAR10 datasets than comparable state-of-the-art spiking neural networks trained with full-precision gradients. The algorithm, which we call SpikeGrad, is based on only accumulation and comparison operations and can naturally exploit sparsity in the gradient computation, which makes it an interesting choice for spiking neuromorphic systems with on-chip learning capacities.""","""This paper proposes a learning framework for spiking neural networks that exploits the sparsity of the gradient during backpropagation to reduce the computational cost of training. The method is evaluated against prior works that use full-precision gradients and shows comparable performance. Overall, the contribution of the paper is solid, and after a constructive rebuttal cycle, all reviewers reached a consensus of weak accept. Therefore, I recommend accepting this submission.""" 51,"""Generalized Clustering by Learning to Optimize Expected Normalized Cuts""","['Clustering', 'Normalized cuts', 'Generalizability']","""We introduce a novel end-to-end approach for learning to cluster in the absence of labeled examples. Our clustering objective is based on optimizing normalized cuts, a criterion which measures both intra-cluster similarity and inter-cluster dissimilarity. We define a differentiable loss function equivalent to the expected normalized cuts. Unlike much of the work in unsupervised deep learning, our trained model directly outputs final cluster assignments, rather than embeddings that need further processing to be usable. Our approach generalizes to unseen datasets across a wide variety of domains, including text and images. Specifically, we achieve state-of-the-art results on popular unsupervised clustering benchmarks (e.g., MNIST, Reuters, CIFAR-10, and CIFAR-100), outperforming the strongest baselines by up to 10.9%. Our generalization results are superior (by up to 21.9%) to the recent top-performing clustering approach with the ability to generalize.""","""This paper proposes a deep clustering method based on normalized cuts. As the general idea of deep clustering has been investigated a fair bit, the reviewers suggest a more thorough empirical validation. I would myself also like further justification of many of the choices within the algorithm, and of the effect of changing the architecture.""" 52,"""The Ingredients of Real World Robotic Reinforcement Learning""","['Reinforcement Learning', 'Robotics']","""The success of reinforcement learning in the real world has been limited to instrumented laboratory scenarios, often requiring arduous human supervision to enable continuous learning. In this work, we discuss the required elements of a robotic system that can continually and autonomously improve with data collected in the real world, and propose a particular instantiation of such a system. Subsequently, we investigate a number of challenges of learning without instrumentation -- including the lack of episodic resets, state estimation, and hand-engineered rewards -- and propose simple, scalable solutions to these challenges.
We demonstrate the efficacy of our proposed system on dexterous robotic manipulation tasks in simulation and the real world, and also provide an insightful analysis and ablation study of the challenges associated with this learning paradigm.""","""This is a very interesting paper which discusses practical issues and solutions around deploying RL on real physical robotic systems, specifically involving questions on the use of raw sensory data, crafting reward functions, and not having resets at the end of episodes. Many of the issues raised in the reviews and discussion were concerned with experimental details and settings, as well as the relation to different areas of related work. These were all sufficiently handled in the rebuttal, and all reviewers were in favour of acceptance.""" 53,"""Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks""","['neural tangent kernel', 'polylogarithmic width', 'test error', 'gradient descent', 'classification']","""Recent theoretical work has guaranteed that overparameterized networks trained by gradient descent achieve arbitrarily low training error, and sometimes even low test error. The required width, however, is always polynomial in at least one of the sample size n, the (inverse) target error 1/\epsilon, and the (inverse) failure probability 1/\delta. This work shows that \tilde{O}(1/\epsilon) iterations of gradient descent with \tilde{O}(1/\epsilon^2) training examples on two-layer ReLU networks of any width exceeding polylog(n, 1/\epsilon, 1/\delta) suffice to achieve a test misclassification error of \epsilon. We also prove that stochastic gradient descent can achieve \epsilon test error with polylogarithmic width and \tilde{O}(1/\epsilon) samples. The analysis relies upon the separation margin of the limiting kernel, which is guaranteed positive, can distinguish between true labels and random labels, and can give a tight sample-complexity analysis in the infinite-width setting.""","""This paper studies how much overparameterization is required to achieve zero training error via gradient descent in one-hidden-layer neural nets. In particular, the paper studies the effect of margin in the data on the required amount of overparameterization. While the paper does not improve the worst case, in the presence of margin the paper shows that sometimes even logarithmic width is sufficient. The reviewers all seem to agree that this is a nice paper but had a few mostly technical concerns. These concerns were sufficiently addressed in the response. Based on my own reading, I also find the paper to be interesting and well written, with clever proofs. So I recommend acceptance. I would like to suggest that the authors clarify in the abstract and intro that this improvement cannot be achieved in the worst case, as a shallow reading of the manuscript may cause some confusion (suggesting that logarithmic width suffices in general).""" 54,"""Collaborative Inter-agent Knowledge Distillation for Reinforcement Learning""","['Reinforcement learning', 'distillation']","""Reinforcement Learning (RL) has demonstrated promising results across several sequential decision-making tasks. However, reinforcement learning struggles to learn efficiently, thus limiting its pervasive application to several challenging problems. A typical RL agent learns solely from its own trial-and-error experiences, requiring many experiences to learn a successful policy.
To alleviate this problem, we propose collaborative inter-agent knowledge distillation (CIKD). CIKD is a learning framework that uses an ensemble of RL agents to execute different policies in the environment while sharing knowledge amongst agents in the ensemble. Our experiments demonstrate that CIKD improves upon state-of-the-art RL methods in sample efficiency and performance on several challenging MuJoCo benchmark tasks. Additionally, we present an in-depth investigation of how CIKD leads to performance improvements. ""","""The paper introduces an ensemble of RL agents that share knowledge amongst themselves. Because there are no theoretical results, the experiments have to carry the paper. The reviewers had rather different views on the significance of these experiments and whether they are sufficient to convincingly validate the learning framework introduced. Overall, because of the high bar for ICLR acceptance, this paper falls just below the threshold. """ 55,"""Learning Underlying Physical Properties From Observations For Trajectory Prediction""","['Physical Games', 'Deep Learning', 'Physical Reasoning', 'Transfer of Knowledge']","""In this work we present an approach that combines deep learning with the laws of Newton's physics for accurate trajectory predictions in physical games. Our model learns to estimate the physical properties and forces that generated given observations, learns the relationships between available players' actions and estimated physical properties, and uses these extracted forces for predictions. We show the advantages of using physical laws together with deep learning by evaluating our model against two baseline models that automatically discover features from the data without such knowledge. We evaluate our model's ability to extract physical properties and to generalize to unseen trajectories in two games with a shooting mechanism. We also evaluate our model's capability to transfer learned knowledge from a 2D game for predictions in a 3D game with similar physics. We show that by using physical laws together with deep learning we achieve better human interpretability of learned physical properties, transfer of knowledge to a game with similar physics, and very accurate predictions for previously unseen data.""","""This paper aims to estimate the parameters of a projectile physical equation from a small number of trajectory observations in two computer games. The authors demonstrate that their method works, and that the learnt model generalises from one game to another. However, the reviewers had concerns about the simplicity of the tasks, the longer-term value of the proposed method to the research community, and the writing of the paper. During the discussion period, the authors were able to address some of these questions; however, many other points were left unanswered, and the authors did not modify the paper to reflect the reviewers' feedback. Hence, in its current state this paper appears more suitable for a workshop than a conference, and I recommend rejection.""" 56,"""Actor-Critic Provably Finds Nash Equilibria of Linear-Quadratic Mean-Field Games""",[],"""We study discrete-time mean-field Markov games with infinite numbers of agents where each agent aims to minimize its ergodic cost. We consider the setting where the agents have identical linear state transitions and quadratic cost functions, while the aggregated effect of the agents is captured by the population mean of their states, namely, the mean-field state.
For such a game, based on the Nash certainty equivalence principle, we provide sufficient conditions for the existence and uniqueness of its Nash equilibrium. Moreover, to find the Nash equilibrium, we propose a mean-field actor-critic algorithm with linear function approximation, which does not require knowing the model of the dynamics. Specifically, at each iteration of our algorithm, we use the single-agent actor-critic algorithm to approximately obtain the optimal policy of each agent given the current mean-field state, and then update the mean-field state. In particular, we prove that our algorithm converges to the Nash equilibrium at a linear rate. To the best of our knowledge, this is the first success of applying model-free reinforcement learning with function approximation to discrete-time mean-field Markov games with provable non-asymptotic global convergence guarantees.""","""The authors propose an actor-critic method for finding the Nash equilibrium in linear-quadratic mean-field games and establish linear convergence under some assumptions. There were some minor concerns about motivation and clarity, especially with regard to the simulator. In an extensive and interactive rebuttal, the authors were able to argue that their results/methods, which appear to be rather specialized to the LQ setting, offer insight/methods beyond the LQ setting.""" 57,"""DSReg: Using Distant Supervision as a Regularizer""",[],"""In this paper, we aim at tackling a general issue in NLP tasks where some of the negative examples are highly similar to the positive examples (i.e., hard-negative examples). We propose the distant supervision as a regularizer (DSReg) approach to tackle this issue. We convert the original task to a multi-task learning problem, in which we first utilize the idea of distant supervision to retrieve hard-negative examples. The obtained hard-negative examples are then used as a regularizer, and we jointly optimize the original target objective of distinguishing positive examples from negative examples along with the auxiliary task objective of distinguishing softened positive examples (comprised of positive examples and hard-negative examples) from easy-negative examples. In the neural context, this can be done by feeding the final token representations to different output layers. Using this remarkably simple strategy, we improve the performance of a range of different NLP tasks, including text classification, sequence labeling and reading comprehension. ""","""This paper proposes a way to handle hard-negative examples (those very close to positive ones) in NLP, using a distant supervision approach that serves as a regularizer. The paper addresses an important issue and is well written; however, reviewers pointed out several concerns, including testing the approach on state-of-the-art neural nets, and making the experiments more convincing by testing on larger problems. """ 58,"""Address2vec: Generating vector embeddings for blockchain analytics""","['crypto-currency', 'bitcoin', 'blockchain', '2vec']","""Bitcoin is a virtual coinage system that enables users to trade virtually free of a central trusted authority. All transactions on the Bitcoin blockchain are publicly available for viewing, yet as Bitcoin is built mainly for security, its original structure does not allow for direct analysis of address transactions. Existing analysis methods of the Bitcoin blockchain can be complicated, computationally expensive or inaccurate.
We propose a computationally efficient model to analyze bitcoin blockchain addresses and allow for their use with existing machine learning algorithms. We compare our approach against Multi Level Sequence Learners (MLSLs), one of the best-performing models on bitcoin address data.""","""The paper proposes to analyze bitcoin addresses using graph embeddings. The reviewers found that the paper was too incomplete for publication. Important information such as a description of the datasets and metrics was omitted.""" 59,"""SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference""","['machine learning', 'reinforcement learning', 'scalability', 'distributed', 'DeepMind Lab', 'ALE', 'Atari-57', 'Google Research Football']","""We present a modern scalable reinforcement learning agent called SEED (Scalable, Efficient Deep-RL). By effectively utilizing modern accelerators, we show that it is not only possible to train on millions of frames per second but also to lower the cost of experiments compared to current methods. We achieve this with a simple architecture that features centralized inference and an optimized communication layer. SEED adopts two state-of-the-art distributed algorithms, IMPALA/V-trace (policy gradients) and R2D2 (Q-learning), and is evaluated on Atari-57, DeepMind Lab and Google Research Football. We improve the state of the art on Football and are able to reach state of the art on Atari-57 twice as fast in wall-clock time. For the scenarios we consider, a 40% to 80% cost reduction for running experiments is achieved. The implementation along with experiments is open-sourced so results can be reproduced and novel ideas tried out.""","""The paper presents a framework for scalable Deep-RL on large-scale architectures, which addresses several problems in multi-machine training of such systems with many actors and learners running. Large-scale experiments and improvements over IMPALA are presented, leading to new SOTA results. The reviewers are very positive about this work, and I think this is an important contribution to the overall learning / RL community.""" 60,"""FSPool: Learning Set Representations with Featurewise Sort Pooling""","['set auto-encoder', 'set encoder', 'pooling']","""Traditional set prediction models can struggle with simple datasets due to an issue we call the responsibility problem. We introduce a pooling method for sets of feature vectors based on sorting features across elements of the set. This can be used to construct a permutation-equivariant auto-encoder that avoids this responsibility problem. On a toy dataset of polygons and a set version of MNIST, we show that such an auto-encoder produces considerably better reconstructions and representations. Replacing the pooling function in existing set encoders with FSPool improves accuracy and convergence speed on a variety of datasets.""","""Overall, this paper got strong scores from the reviewers (2 accepts and 1 weak accept). The paper proposes to address the responsibility problem, enabling encoding and decoding sets without worrying about permutations. This is achieved using permutation-equivariant set autoencoders and an 'inverse' operation that undoes the sorting in the decoder. The reviewers all agreed that the paper makes a meaningful contribution and should be accepted. Some concerns regarding clarity of exposition were initially raised but were addressed during the rebuttal period.
I recommend that the paper be accepted.""" 61,"""Enhanced Convolutional Neural Tangent Kernels""","['neural tangent kernel', 'data augmentation', 'global average pooling', 'kernel regression', 'deep learning theory', 'kernel design']","""Recent research shows that for training with l2 loss, convolutional neural networks (CNNs) whose width (number of channels in convolutional layers) goes to infinity correspond to regression with respect to the CNN Gaussian Process kernel (CNN-GP) if only the last layer is trained, and correspond to regression with respect to the Convolutional Neural Tangent Kernel (CNTK) if all layers are trained. An exact algorithm to compute the CNTK (Arora et al., 2019) yielded the finding that the classification accuracy of the CNTK on CIFAR-10 is within 6-7% of that of the corresponding CNN architecture (best figure being around 78%), which is interesting performance for a fixed kernel. Here we show how to significantly enhance the performance of these kernels using two ideas. (1) Modifying the kernel using a new operation called Local Average Pooling (LAP), which preserves efficient computability of the kernel and inherits the spirit of standard data augmentation using pixel shifts. Earlier papers were unable to incorporate naive data augmentation because of the quadratic training cost of kernel regression. This idea is inspired by Global Average Pooling (GAP), which, as we show for CNN-GP and CNTK, is equivalent to full translation data augmentation. (2) Representing the input image using a pre-processing technique proposed by Coates et al. (2011), which uses a single convolutional layer composed of random image patches. On CIFAR-10 the resulting kernel, CNN-GP with LAP and horizontal flip data augmentation, achieves 89% accuracy, matching the performance of AlexNet (Krizhevsky et al., 2012). Note that this is the best such result we know of for a classifier that is not a trained neural network. Similar improvements are obtained for Fashion-MNIST.""","""This paper was assessed by three reviewers who scored it as 6/3/6. The reviewers liked some aspects of this paper, e.g., its good performance, but they also criticized some aspects of the work, such as inventing new names for existing pooling operators, the observation that a large part of the improvement comes from the pre-processing step rather than the proposed method, and suspected overfitting. Taking into account all positives and negatives, the AC feels that while the proposed idea has some positives, it also falls short of the quality required by ICLR 2020, thus it cannot be accepted at this time. The AC strongly encourages the authors to go through all comments (especially the negative ones), address them and resubmit an improved version to another venue. """ 62,"""Incorporating BERT into Neural Machine Translation""","['BERT', 'Neural Machine Translation']","""The recently proposed BERT (Devlin et al., 2019) has shown great power on a variety of natural language understanding tasks, such as text classification, reading comprehension, etc. However, how to effectively apply BERT to neural machine translation (NMT) lacks sufficient exploration. While BERT is more commonly used for fine-tuning than as a contextual embedding in downstream language understanding tasks, our preliminary exploration shows that using BERT as a contextual embedding in NMT is better than using it for fine-tuning. This motivates us to explore how to better leverage BERT for NMT along this direction.
We propose a new algorithm named the BERT-fused model, in which we first use BERT to extract representations for an input sequence, and then the representations are fused with each layer of the encoder and decoder of the NMT model through attention mechanisms. We conduct experiments on supervised (including sentence-level and document-level translations), semi-supervised and unsupervised machine translation, and achieve state-of-the-art results on seven benchmark datasets. Our code is available at pseudo-url""","""The authors propose a novel way of incorporating a large pretrained language model (BERT) into neural machine translation using an extra attention model for both the NMT encoder and decoder. The paper presents a thorough experimental design, with strong baselines and consistent positive results for supervised, semi-supervised and unsupervised experiments. The reviewers all mentioned a lack of clarity in the writing, and there was significant discussion with the authors. After improvements and clarifications, all reviewers agree that this paper would make a good contribution to ICLR and be of general use to the field. """ 63,"""Efficient meta reinforcement learning via meta goal generation""",[],"""Meta reinforcement learning (meta-RL) is able to accelerate the acquisition of new tasks by learning from past experience. Current meta-RL methods usually learn to adapt to new tasks by directly optimizing the parameters of policies over primitive actions. However, for complex tasks which require sophisticated control strategies, it would be quite inefficient to directly learn such a meta-policy. Moreover, this problem can become more severe, and the methods can even fail, in sparse reward settings, which are quite common in practice. To this end, we propose a new meta-RL algorithm called meta goal-generation for hierarchical RL (MGHRL), which leverages a hierarchical actor-critic framework. Instead of directly generating policies over primitive actions for new tasks, MGHRL learns to generate high-level meta strategies over subgoals given past experience and leaves the rest of how to achieve subgoals as independent RL subtasks. Our empirical results on several challenging simulated robotics environments show that our method enables more efficient and effective meta-learning from past experience and outperforms state-of-the-art meta-RL and Hierarchical-RL methods in sparse reward settings.""","""This paper combines PEARL with HAC to create a hierarchical meta-RL algorithm that operates on goals at the high level and learns low-level policies to reach those goals. Reviewers remarked that it's well-presented and well-organized, with enough details to be mostly reproducible. In the experiments conducted, it appears to show strong results. However, there was strong consensus on two major weaknesses that render this paper unpublishable in its current form: 1) the continuous control tasks used don't seem to require hierarchy, and 2) the baselines don't appear to be appropriate. Reviewers remarked that a vital missing baseline is HER, and that it's unfair to compare to PEARL, which is a more general meta-RL algorithm. The authors don't appear to have made revisions in response to these concerns.
All reviewers made useful and constructive comments, and I urge the authors to take them into consideration when revising for a future submission.""" 64,"""SINGLE PATH ONE-SHOT NEURAL ARCHITECTURE SEARCH WITH UNIFORM SAMPLING""","['Neural Architecture Search', 'Single Path']","""We revisit the one-shot Neural Architecture Search (NAS) paradigm and analyze its advantages over existing NAS approaches. The existing one-shot method (Bender et al., 2018), however, is hard to train and not yet effective on large-scale datasets like ImageNet. This work proposes a Single Path One-Shot model to address the training challenge. Our central idea is to construct a simplified supernet, where all architectures are single paths, so that the weight co-adaptation problem is alleviated. Training is performed by uniform path sampling. All architectures (and their weights) are trained fully and equally. Comprehensive experiments verify that our approach is flexible and effective. It is easy to train and fast to search. It effortlessly supports complex search spaces (e.g., building blocks, channels, mixed-precision quantization) and different search constraints (e.g., FLOPs, latency). It is thus convenient to use for various needs. It achieves state-of-the-art performance on the large dataset ImageNet.""","""This paper introduces a simple NAS method based on sampling single paths of the one-shot model based on a uniform distribution. Next to the private discussion with reviewers, I read the paper in detail. During the discussion, first, the reviewer who gave a weak reject upgraded his/her score to a weak accept since all reviewers appreciated the importance of neural architecture search and that the authors' approach is plausible. Then, however, it surfaced that the main claim of novelty in the paper, namely the uniform sampling of paths with weight-sharing, is not novel: Li & Talwalkar already introduced a uniform random sampling of paths with weight-sharing in the one-shot model in their paper ""Random Search and Reproducibility in NAS"" (pseudo-url), which had been on arXiv since February 2019 and has been published at UAI 2019. This was their method ""RandomNAS with weight sharing"". The authors actually cite that paper but do not mention RandomNAS with weight sharing. This may be because their paper has also been on arXiv since March 2019 (6 weeks after the one above), and was therefore likely parallel work. Nevertheless, now, 9 months later, the situation has changed, and the authors should at least point out in their paper that they were not the first to introduce RandomNAS with weight sharing during the search, but that they rather study the benefits of that previously-introduced method. The only real novelty in terms of NAS methods that the authors provide is to use a genetic algorithm to select the architecture with the best one-shot model performance, rather than random search. This is a relatively minor contribution, discussed literally in a single paragraph in the paper (with missing details about the crossover operator used; please fill these in). Also, this step is very cheap, so one could potentially just run random search longer. Finally, the comparison presented may be unfair: evolution uses a population size of 50, and Figure 2 plots iterations. It is unclear whether each iteration for random search also evaluated 50 samples; if not, then evolution got 50x more samples than random search. The authors should fix this in a new version of the paper.
The paper also appears to make some wrong claims in Section 2. For example, the authors write that gradient-based NAS methods like DARTS inherit the one-shot weights and fine-tune the discretized architectures, but all methods I know of actually retrain from scratch rather than fine-tune. Also, equation (3) is not what DARTS does; DARTS performs a bi-level optimization. In Section 3, the authors say that their single-path strategy corresponds to a dropout rate of 1. I do not think that this is correct, since a dropout rate of 1 drops every connection (and does not leave one remaining). All of these issues should be rectified. The paper reports good results on ImageNet. Unfortunately, these may well be due to using a better training pipeline than other works, rather than due to a better NAS method (no code is available, so there is no way to verify this). On the other hand, the application to mixed-precision quantization is novel and interesting. AnonReviewer2 asked about the correlation of the one-shot performance and the final evaluation performance, and this question was not answered properly by the authors. This question is relevant, because this correlation has been shown to be very low in several works (e.g., Sciuto et al.: ""Evaluating the search phase of Neural Architecture Search"" (pseudo-url), on arXiv since February 2019 and a parallel ICLR submission). In those cases, the proposed approach would definitely not work. The high scores the reviewers gave were based on the understanding that uniform sampling in the one-shot model was a novel contribution of this paper. Adjusting for that, the real score is much lower and right at the acceptance threshold. After a discussion with the PCs, due to limited capacity, the recommendation is to reject the current version. I encourage the authors to address the issues identified by the reviewers and in this meta-review and to submit to a future venue. """ 65,"""Robust And Interpretable Blind Image Denoising Via Bias-Free Convolutional Neural Networks""","['denoising', 'overfitting', 'generalization', 'robustness', 'interpretability', 'analysis of neural networks']","""We study the generalization properties of deep convolutional neural networks for image denoising in the presence of varying noise levels. We provide extensive empirical evidence that current state-of-the-art architectures systematically overfit to the noise levels in the training set, performing very poorly at new noise levels. We show that strong generalization can be achieved through a simple architectural modification: removing all additive constants. The resulting ""bias-free"" networks attain state-of-the-art performance over a broad range of noise levels, even when trained over a limited range. They are also locally linear, which enables direct analysis with linear-algebraic tools. We show that the denoising map can be visualized locally as a filter that adapts to both image structure and noise level. In addition, our analysis reveals that deep networks implicitly perform a projection onto an adaptively-selected low-dimensional subspace, with dimensionality inversely proportional to noise level, that captures features of natural images. ""","""This paper focuses on studying neural network-based denoising methods. The paper makes the interesting observation that most existing denoising approaches have a tendency to overfit to knowledge of the noise level.
The authors claim that simply removing the bias on the network parameters enables a variety of improvements in this regard and provide some theoretical justification for their results. The reviewers were mostly positive but raised some concerns about generalization beyond Gaussian noise and about the method not ""being very well theoretically motivated"". These concerns seem to have been at least partially alleviated during the discussion period. I agree with the reviewers. I think the paper looks at an important phenomenon for denoising (the role of the variance parameter) and is well suited to ICLR. I recommend acceptance. I suggest that the authors continue to further improve the paper based on the reviewers' comments.""" 66,"""Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets""","['Adversarial Example', 'Transferability', 'Skip Connection', 'Neural Network']","""Skip connections are an essential component of current state-of-the-art deep neural networks (DNNs) such as ResNet, WideResNet, DenseNet, and ResNeXt. Despite their huge success in building deeper and more powerful DNNs, we identify a surprising \emph{security weakness} of skip connections in this paper. Use of skip connections \textit{allows easier generation of highly transferable adversarial examples}. Specifically, in ResNet-like (with skip connections) neural networks, gradients can backpropagate through either skip connections or residual modules. We find that using more gradients from the skip connections rather than the residual modules, according to a decay factor, allows one to craft adversarial examples with high transferability. Our method is termed the \emph{Skip Gradient Method} (SGM). We conduct comprehensive transfer attacks against state-of-the-art DNNs including ResNets, DenseNets, Inceptions, Inception-ResNet, Squeeze-and-Excitation Network (SENet) and robustly trained DNNs. We show that employing SGM on the gradient flow can greatly improve the transferability of crafted attacks in almost all cases. Furthermore, SGM can be easily combined with existing black-box attack techniques, and obtains large improvements over state-of-the-art transferability methods. Our findings not only motivate new research into the architectural vulnerability of DNNs, but also open up further challenges for the design of secure DNN architectures.""","""This paper makes the observation that, by adjusting the ratio of gradients from skip connections and residual connections in ResNet-family networks in a projected gradient descent attack (that is, upweighting the contribution of the skip connection gradient), one can obtain more transferable adversarial examples. This is evaluated empirically in the single-model black-box transfer setting, against a wide range of models, both with and without countermeasures. Reviewers praised the novelty and simplicity of the method, the breadth of empirical results, and the review of related work. Concerns were raised regarding a lack of variance reporting, the strength of the baselines vs. numbers reported in the literature, and the lack of consideration paid to the threat model under which an adversary employs an ensemble of source models, as well as the framing given by the original title and abstract. All of these appear to have been satisfactorily addressed, in a fine example of what ICLR's review & revision process can yield.
It is therefore my pleasure to recommend acceptance.""" 67,"""SoftLoc: Robust Temporal Localization under Label Misalignment""","['deep learning', 'temporal localization', 'robustness', 'label misalignment', 'music', 'time series']","""This work addresses the long-standing problem of robust event localization in the presence of temporally misaligned labels in the training data. We propose a novel versatile loss function that generalizes a number of training regimes, from standard fully-supervised cross-entropy to count-based weakly-supervised learning. Unlike classical models which are constrained to strictly fit the annotations during training, our soft localization learning approach instead relaxes the reliance on the exact position of labels. Training with this new loss function exhibits strong robustness to temporal misalignment of labels, thus alleviating the burden of precise annotation of temporal sequences. We demonstrate state-of-the-art performance against standard benchmarks in a number of challenging experiments and further show that robustness to label noise is not achieved at the expense of raw performance. ""","""Main content: Blind review #3 summarizes it well: This paper proposes a new loss for training models that predict where events occur in a sequence when the training sequence has noisy labels. The central idea is to smooth the label sequence and prediction sequence and compare these rather than to force the model to treat all errors as equally serious. The proposed problem seems sensible, and the method is a reasonable approach. The evaluations are carried out on a variety of different tasks (piano onset detection, drum detection, smoking detection, video action segmentation). -- Discussion: The reviewers were concerned about the relatively low level of novelty, the simplicity of the proposed approach (which the authors argue could be seen as a feature rather than a flaw, given its good performance), and inadequate motivation. -- Recommendation and justification: After the authors' revision in response to the reviews, this paper could be a weak accept if not for the large number of stronger submissions.""" 68,"""Single Episode Policy Transfer in Reinforcement Learning""","['transfer learning', 'reinforcement learning']","""Transfer and adaptation to new unknown environmental dynamics is a key challenge for reinforcement learning (RL). An even greater challenge is performing near-optimally in a single attempt at test time, possibly without access to dense rewards, which is not addressed by current methods that require multiple experience rollouts for adaptation. To achieve single episode transfer in a family of environments with related dynamics, we propose a general algorithm that optimizes a probe and an inference model to rapidly estimate underlying latent variables of test dynamics, which are then immediately used as input to a universal control policy. This modular approach enables integration of state-of-the-art algorithms for variational inference or RL. Moreover, our approach does not require access to rewards at test time, allowing it to perform in settings where existing adaptive approaches cannot.
In diverse experimental domains with a single-episode test constraint, our method significantly outperforms existing adaptive approaches and shows favorable performance against baselines for robust transfer.""","""This is an interesting paper that is concerned with single-episode transfer to reinforcement learning problems with different dynamics models, assuming they are parameterised by a latent variable. Given some initial training tasks to learn about this parameter, and a new test task, they present an algorithm to probe and estimate the latent variable on the test task, whereafter the inferred latent variable is used as input to a control policy. There were several issues raised by the reviewers. Firstly, there were questions about the number of runs and the baseline implementations, which were all addressed in the rebuttals. Then, there were questions about the novelty and about the main contribution being wall-clock time. These issues were also adequately addressed. In light of this, I recommend acceptance of this paper.""" 69,"""Learning Compositional Koopman Operators for Model-Based Control""","['Koopman operators', 'graph neural networks', 'compositionality']","""Finding an embedding space for a linear approximation of a nonlinear dynamical system enables efficient system identification and control synthesis. The Koopman operator theory lays the foundation for identifying the nonlinear-to-linear coordinate transformations with data-driven methods. Recently, researchers have proposed to use deep neural networks as a more expressive class of basis functions for calculating the Koopman operators. These approaches, however, assume a fixed-dimensional state space; they are therefore not applicable to scenarios with a variable number of objects. In this paper, we propose to learn compositional Koopman operators, using graph neural networks to encode the state into object-centric embeddings and using a block-wise linear transition matrix to regularize the shared structure across objects. The learned dynamics can quickly adapt to new environments of unknown physical parameters and produce control signals to achieve a specified goal. Our experiments on manipulating ropes and controlling soft robots show that the proposed method has better efficiency and generalization ability than existing baselines.""","""This paper proposes using object-centered graph neural network embeddings of a dynamical system as approximate Koopman embeddings, and then learning the linear transition matrix to model the dynamics of the system according to Koopman operator theory. The authors propose adding an inductive bias (a block-diagonal structure of the transition matrix with shared components) to limit the number of parameters that must be learned, which improves the computational efficiency and generalisation of the proposed approach. The authors also propose adding an additional input component that allows for external control of the dynamics of the system. The reviewers initially had concerns about the experimental section, since the approach was only tested on toy domains. The reviewers also asked for more baselines. The authors were able to answer some of the questions raised during the discussion period, and by the end of it all reviewers agreed that this is a solid and novel piece of work that deserves to be accepted.
For this reason, I recommend acceptance.""" 70,"""Universal Learning Approach for Adversarial Defense""","['Adversarial examples', 'Adversarial training', 'Universal learning', 'pNML for DNN']","""Adversarial attacks were shown to be very effective in degrading the performance of neural networks. By slightly modifying the input, an almost identical input is misclassified by the network. To address this problem, we adopt the universal learning framework. In particular, we follow the recently suggested Predictive Normalized Maximum Likelihood (pNML) scheme for universal learning, whose goal is to optimally compete with a reference learner that knows the true label of the test sample but is restricted to use a learner from a given hypothesis class. In our case, the reference learner uses its knowledge of the true test label to perform minor refinements to the adversarial input. This reference learner achieves perfect results on any adversarial input. The proposed strategy is designed to be as close as possible to the reference learner in the worst-case scenario. Specifically, the defense essentially refines the test data according to the different hypotheses, where each hypothesis assumes a different label for the sample. Then, by comparing the resulting hypotheses' probabilities, we predict the label and detect whether the sample is adversarial or natural. Combining our method with adversarial training, we create a robust scheme which can handle adversarial input along with detection of the attack. The resulting scheme is demonstrated empirically.""","""The reviewers attempted to give this paper a fair assessment, but were unanimous in recommending rejection. The technical quality of the motivation was questioned, while the experimental evaluation was not found to be clear or convincing. Hopefully the feedback provided can help the authors improve their paper.""" 71,"""NeuroFabric: Identifying Ideal Topologies for Training A Priori Sparse Networks""","['Sparsity', 'model compression', 'training', 'topology']","""Long training times of deep neural networks are a bottleneck in machine learning research. The major impediment to fast training is the quadratic growth of both memory and compute requirements of dense and convolutional layers with respect to their information bandwidth. Recently, training `a priori' sparse networks has been proposed as a method for allowing layers to retain high information bandwidth, while keeping memory and compute low. However, the choice of which sparse topology should be used in these networks is unclear. In this work, we provide a theoretical foundation for the choice of intra-layer topology. First, we derive a new sparse neural network initialization scheme that allows us to explore the space of very deep sparse networks. Next, we evaluate several topologies and show that seemingly similar topologies can often have a large difference in attainable accuracy. To explain these differences, we develop a data-free heuristic that can evaluate a topology independently from the dataset the network will be trained on. We then derive a set of requirements that make a good topology, and arrive at a single topology that satisfies all of them. ""","""This work proposes new initialization and layer topologies for training a priori sparse networks. Reviewers agreed that the direction is interesting and that the paper is well written. Additionally, the theory presented on the toy matrix reconstruction task helped motivate the proposed approach.
However, it is also necessary to validate the new approach by comparing with the existing sparsity literature on standard benchmarks. I recommend resubmitting with the additional experiments suggested by the reviewers.""" 72,"""Distributionally Robust Neural Networks""","['distributionally robust optimization', 'deep learning', 'robustness', 'generalization', 'regularization']","""Overparameterized neural networks can be highly accurate on average on an i.i.d. test set, yet consistently fail on atypical groups of the data (e.g., by learning spurious correlations that hold on average but not in such groups). Distributionally robust optimization (DRO) allows us to learn models that instead minimize the worst-case training loss over a set of pre-defined groups. However, we find that naively applying group DRO to overparameterized neural networks fails: these models can perfectly fit the training data, and any model with vanishing average training loss also already has vanishing worst-case training loss. Instead, the poor worst-case performance arises from poor generalization on some groups. By coupling group DRO models with increased regularization---stronger-than-typical L2 regularization or early stopping---we achieve substantially higher worst-group accuracies, with 10-40 percentage point improvements on a natural language inference task and two image tasks, while maintaining high average accuracies. Our results suggest that regularization is important for worst-group generalization in the overparameterized regime, even if it is not needed for average generalization. Finally, we introduce a stochastic optimization algorithm for the group DRO setting and provide convergence guarantees for the new algorithm. ""","""This paper proposes distributionally robust optimization (DRO) to learn robust models that minimize worst-case training loss over a set of pre-defined groups. The authors find that increased regularization is necessary for worst-group performance in the overparametrized regime (something that is not needed for non-robust average performance). This is an interesting paper and I recommend acceptance. The discussion phase suggested a change to the title, which slightly overstated the paper's contributions (a comment with which I agree). The authors agreed to change the title in the final version. """ 73,"""UW-NET: AN INCEPTION-ATTENTION NETWORK FOR UNDERWATER IMAGE CLASSIFICATION""","['Underwater image', 'Convolutional neural network', 'Image classification', 'Inception module', 'Attention module']","""The classification of images taken in special imaging environments other than air is the first challenge in extending the applications of deep learning. We report on UW-Net (Underwater Network), a new convolutional neural network (CNN)-based model for underwater image classification. In this model, we simulate the visual correlation of background attention with image understanding for special environments, such as fog and underwater, by constructing an inception-attention (I-A) module. The experimental results demonstrate that the proposed UW-Net achieves an accuracy of 99.3% on underwater image classification, which is significantly better than other image classification networks, such as AlexNet, InceptionV3, ResNet and Se-ResNet. Moreover, we demonstrate that the proposed I-A module can be used to boost the performance of existing object recognition networks.
By substituting the inception module with the I-A module, the Inception-ResnetV2 network achieves a 10.7% top-1 error rate and a 0% top-5 error rate on the subset of ILSVRC-2012, which further illustrates the function of background attention in image classification.""","""The reviewers took issue with the lack of sufficient experimental results as well as with the novelty of the proposed solution. I recommend rejection.""" 74,"""Unifying Graph Convolutional Networks as Matrix Factorization""","['graph convolutional networks', 'matrix factorization', 'unification']","""In recent years, substantial progress has been made on graph convolutional networks (GCN). In this paper, for the first time, we theoretically analyze the connections between GCN and matrix factorization (MF), and unify GCN as matrix factorization with co-training and unitization. Moreover, under the guidance of this theoretical analysis, we propose an alternative model to GCN named Co-training and Unitized Matrix Factorization (CUMF). The correctness of our analysis is verified by thorough experiments. The experimental results show that CUMF achieves similar or superior performance compared to GCN. In addition, CUMF inherits the benefits of MF-based methods, naturally supporting the construction of mini-batches, and is more friendly to distributed computing compared with GCN. On semi-supervised node classification, distributed CUMF significantly outperforms distributed GCN methods. Thus, CUMF greatly benefits large-scale and complex real-world applications.""","""The paper makes an interesting attempt at connecting graph convolutional neural networks (GCN) with matrix factorization (MF) and then develops an MF solution that achieves similar prediction performance to GCN. While the work is a good attempt, it suffers from two major issues: (1) the connection between GCN and other related models has been examined recently, and the paper does not provide additional insights; (2) some parts of the derivations could be problematic. The paper could be a good publication in the future if the motivation of the work can be repositioned. """ 75,"""Deep Variational Semi-Supervised Novelty Detection""","['anomaly detection', 'semi-supervised anomaly detection', 'variational autoencoder']","""In anomaly detection (AD), one seeks to identify whether a test sample is abnormal, given a data set of normal samples. A recent and promising approach to AD relies on deep generative models, such as variational autoencoders (VAEs), for unsupervised learning of the normal data distribution. In semi-supervised AD (SSAD), the data also includes a small sample of labeled anomalies. In this work, we propose two variational methods for training VAEs for SSAD. The intuitive idea in both methods is to train the encoder to separate between latent vectors for normal and outlier data. We show that this idea can be derived from principled probabilistic formulations of the problem, and propose simple and effective algorithms. Our methods can be applied to various data types, as we demonstrate on SSAD datasets ranging from natural images to astronomy and medicine, and can be combined with any VAE model architecture. When comparing to state-of-the-art SSAD methods that are not specific to particular data types, we obtain marked improvement in outlier detection.""","""This paper presents two novel VAE-based methods for semi-supervised anomaly detection (SSAD), where one also has access to a small set of labeled anomalous samples.
The reviewers had several concerns about the paper; in particular, completely addressing reviewer #3's comments would strengthen the paper.""" 76,"""Asymptotic learning curves of kernel methods: empirical data v.s. Teacher-Student paradigm""",[],"""How much training data is needed to learn a supervised task? It is often observed that the generalization error decreases as pseudo-formula, where pseudo-formula is the number of training examples and pseudo-formula an exponent that depends on both the data and the algorithm. In this work we measure pseudo-formula when applying kernel methods to real datasets. For MNIST we find 0.4, and for CIFAR10 we find 0.1. Remarkably, pseudo-formula is the same for regression and classification tasks, and for Gaussian or Laplace kernels. To rationalize the existence of non-trivial exponents that can be independent of the specific kernel used, we introduce the Teacher-Student framework for kernels. In this scheme, a Teacher generates data according to a Gaussian random field, and a Student learns them via kernel regression. With a simplifying assumption --- namely that the data are sampled from a regular lattice --- we analytically derive pseudo-formula for translation-invariant kernels, using previous results from the kriging literature. Provided that the Student is not too sensitive to high frequencies, pseudo-formula depends only on the training data and their dimension. We confirm numerically that these predictions hold when the training points are sampled at random on a hypersphere. Overall, our results quantify how smooth Gaussian data should be to avoid the curse of dimensionality, and indicate that for kernel learning the relevant dimension of the data should be defined in terms of how the distance between nearest data points depends on pseudo-formula. With this definition one obtains reasonable effective smoothness estimates for MNIST and CIFAR10.""","""The paper studies, theoretically and empirically, the setting in which the generalization error decreases as pseudo-formula, where pseudo-formula is not pseudo-formula. It analyses a Teacher-Student problem where the Teacher generates data from a Gaussian random field. The paper provides a theorem that derives pseudo-formula for Gaussian and Laplace kernels, and shows empirical evidence supporting the theory using MNIST and CIFAR. The reviews contained two low scores, both of which were not confident. A more confident reviewer provided a weak accept score, and interacted multiple times with the authors during the discussion period (which is one of the nice things about the ICLR review process). However, this reviewer also noted that ICLR may not be the best venue for this work. Overall, while this paper shows promise, the negative review scores show that the topic may not be the best fit for the ICLR audience.""" 77,"""Unsupervised Out-of-Distribution Detection with Batch Normalization""",[],"""Likelihood from a generative model is a natural statistic for detecting out-of-distribution (OoD) samples. However, generative models have been shown to assign higher likelihood to OoD samples compared to ones from the training distribution, preventing simple threshold-based detection rules. We demonstrate that OoD detection fails even when using more sophisticated statistics based on the likelihoods of individual samples. To address these issues, we propose a new method that leverages batch normalization.
We argue that batch normalization for generative models challenges the traditional \emph{i.i.d.} data assumption and changes the corresponding maximum likelihood objective. Based on this insight, we propose to exploit in-batch dependencies for OoD detection. Empirical results suggest that this leads to more robust detection for high-dimensional images.""","""The authors observe that batch normalization using the statistics computed from a *test* batch significantly improves out-of-distribution detection with generative models. Essentially, normalizing an OOD test batch using the test batch statistics decreases the likelihood of that batch and thus improves detection of OOD examples. The reviewers seemed concerned with this setting, and they felt that it gives a significant advantage over existing methods, since those typically deal with a single test example. The reviewers thus wanted empirical comparisons to methods designed for this setting, i.e., traditional statistical tests for comparing distributions. Despite some positive discussion, this paper unfortunately falls below the bar for acceptance. The authors added significant experiments, and hopefully adding these, along with additional analysis providing some insight into how the batch norm is helping, would make for a stronger submission to a future conference.""" 78,"""Image-guided Neural Object Rendering""","['Neural Rendering', 'Neural Image Synthesis']","""We propose a learned image-guided rendering technique that combines the benefits of image-based rendering and GAN-based image synthesis. The goal of our method is to generate photo-realistic re-renderings of reconstructed objects for virtual and augmented reality applications (e.g., virtual showrooms, virtual tours and sightseeing, the digital inspection of historical artifacts). A core component of our work is the handling of view-dependent effects. Specifically, we directly train an object-specific deep neural network to synthesize the view-dependent appearance of an object. As input data we use an RGB video of the object. This video is used to reconstruct a proxy geometry of the object via multi-view stereo. Based on this 3D proxy, the appearance of a captured view can be warped into a new target view as in classical image-based rendering. This warping assumes diffuse surfaces; in the case of view-dependent effects, such as specular highlights, it leads to artifacts. To this end, we propose EffectsNet, a deep neural network that predicts view-dependent effects. Based on these estimations, we are able to convert observed images to diffuse images. These diffuse images can be projected into other views. In the target view, our pipeline reinserts the new view-dependent effects. To composite multiple reprojected images into a final output, we learn a composition network that outputs photo-realistic results. Using this image-guided approach, the network does not have to allocate capacity on ``remembering'' object appearance; instead it learns how to combine the appearance of captured images. We demonstrate the effectiveness of our approach both qualitatively and quantitatively on synthetic as well as on real data.""","""The paper presents a new variation of neural (re-)rendering of objects that uses a set of two deep ConvNets to model non-Lambertian effects associated with an object. The paper has received mostly positive reviews. The reviewers agree that the contribution is well-described, valid and valuable.
The method is validated against strong baselines including Hedman et al., though Reviewer 4 rightfully points out that the comparison might have been more thorough. One additional concern not raised by the reviewers is the lack of comparison with [Thies et al. 2019], which is briefly mentioned but not discussed. The authors are encouraged to provide a corresponding comparison (as well as additional comparisons with Hedman et al.) and discuss pros and cons w.r.t. [Thies et al.] in the final version.""" 79,"""Gradient-based training of Gaussian Mixture Models in High-Dimensional Spaces""","['GMM', 'SGD']","""We present an approach for efficiently training Gaussian Mixture Models (GMMs) with Stochastic Gradient Descent (SGD) on large amounts of high-dimensional data (e.g., images). In such a scenario, SGD is strongly superior in terms of execution time and memory usage, although it is conceptually more complex than the traditional Expectation-Maximization (EM) algorithm. For enabling SGD training, we propose three novel ideas: First, we show that minimizing an upper bound to the GMM log likelihood instead of the full one is feasible and numerically much more stable in high-dimensional spaces. Secondly, we propose a new regularizer that prevents SGD from converging to pathological local minima. And lastly, we present a simple method for enforcing the constraints inherent to GMM training when using SGD. We also propose an SGD-compatible simplification of the full GMM model based on local principal directions, which avoids excessive memory use in high-dimensional spaces due to the quadratic growth of covariance matrices. Experiments on several standard image datasets show the validity of our approach, and we provide a publicly available TensorFlow implementation.""","""The paper presents an SGD-based method for learning a Gaussian mixture model, designed to match a data-streaming setting. The reviews state that the paper contains some quite good points, such as
* the simplicity and scalability of the method, and its robustness w.r.t. the initialization of the approach;
* the SOM-like approach used to avoid degenerate solutions.
Among the weaknesses are
* an insufficient discussion w.r.t. the state of the art, e.g. for online EM;
* the description of the approach seems not yet mature (e.g., the constraint enforcement boils down to considering that the pseudo-formula are obtained using softmax; the discussion about the diagonal covariance matrix vs the use of local principal directions is not crystal clear);
* the fact that the experiments need to be strengthened.
I thus encourage the authors to rewrite and polish the paper, simplifying the description of the approach and better positioning it w.r.t. the state of the art (in particular, mentioning the data streaming motivation from the start). Also, more evidence, and a more thorough analysis thereof, must be provided to back up the approach and understand its limitations.""" 80,"""Bayesian Meta Sampling for Fast Uncertainty Adaptation""","['Bayesian Sampling', 'Uncertainty Adaptation', 'Meta Learning', 'Variational Inference']","""Meta learning has been making impressive progress for fast model adaptation. However, limited work has been done on learning fast uncertainty adaptation for Bayesian modeling. In this paper, we propose to achieve this goal by placing meta learning on the space of probability measures, inducing the concept of meta sampling for fast uncertainty adaptation.
Specifically, we propose a Bayesian meta sampling framework consisting of two main components: a meta sampler and a sample adapter. The meta sampler is constructed by adopting a neural-inverse-autoregressive-flow (NIAF) structure, a variant of the recently proposed neural autoregressive flows, to efficiently generate meta samples to be adapted. The sample adapter moves meta samples to task-specific samples, based on a newly proposed and general Bayesian sampling technique called optimal-transport Bayesian sampling. The combination of the two components allows a simple learning procedure for the meta sampler to be developed, which can be efficiently optimized via standard back-propagation. Extensive experimental results demonstrate the efficiency and effectiveness of the proposed framework, obtaining better sample quality and faster uncertainty adaptation compared to related methods.""","""This paper presents a meta-learning algorithm that represents uncertainty both at the meta-level and at the task-level. The approach contains an interesting combination of techniques. The reviewers raised concerns about the thoroughness of the experiments, which were resolved in a convincing way in the rebuttal. Concerns about clarity remain, and the authors are *strongly encouraged* to revise the paper throughout to make the presentation more clear and understandable, including to readers who do not have a meta-learning background. See the reviewers' comments for further details on how the organization of the paper and the presentation of the ideas can be improved.""" 81,"""Unknown-Aware Deep Neural Network""","['unknown', 'rejection', 'CNN', 'product relationship']","""An important property of image classification systems in the real world is that they both accurately classify objects from target classes (``knowns'') and safely reject unknown objects (``unknowns'') that belong to classes not present in the training data. Unfortunately, although the strong generalization ability of existing CNNs ensures their accuracy when classifying known objects, it also causes them to often assign an unknown to a target class with high confidence. As a result, simply using low-confidence detections as a way to detect unknowns does not work well. In this work, we propose an Unknown-aware Deep Neural Network (UDN for short) to solve this challenging problem. The key idea of UDN is to enhance existing CNNs to support a product operation that models the product relationship among the features produced by convolutional layers. This way, missing a single key feature of a target class will greatly reduce the probability of assigning an object to this class. UDN uses a learned ensemble of these product operations, which allows it to balance the contradictory requirements of accurately classifying known objects and correctly rejecting unknowns. To further improve the performance of UDN at detecting unknowns, we propose an information-theoretic regularization strategy that incorporates the objective of rejecting unknowns into the learning process of UDN. We experiment on benchmark image datasets including MNIST, CIFAR-10, CIFAR-100, and SVHN, adding unknowns by injecting one dataset into another. Our results demonstrate that UDN significantly outperforms state-of-the-art methods at rejecting unknowns, with a 25-percentage-point improvement in accuracy, while still preserving the classification accuracy.
""","""This paper proposes the unknown-aware deep neural network (UDN), which can discover out-of-distribution samples for CNN classifiers. Experiments show that the proposed method has an improved rejection accuracy while maintaining a good classification accuracy on the test set. Three reviewers have split reviews. Reviewer #2 provides positive review for this work, while indicating that he is not an expert in image classification. Reviewer #1 agrees that the topic is interesting, yet the experiment is not so convincing, especially with limited and simple databases. Reviewer #3 shared the similar concern that the experiments are not sufficient. Further, R3 felt that the main idea is not well explained. The ACs concur these major concerns and agree that the paper can not be accepted at its current state.""" 82,"""HIPPOCAMPAL NEURONAL REPRESENTATIONS IN CONTINUAL LEARNING""",[],"""The hippocampus has long been associated with spatial memory and goal-directed spatial navigation. However, the regions independent role in continual learning of navigational strategies has seldom been investigated. Here we analyse populationlevel activity of hippocampal CA1 neurons in the context of continual learning of two different spatial navigation strategies. Demixed Principal Component Analysis (dPCA) is applied on neuronal recordings from 612 hippocampal CA1 neurons of rodents learning to perform allocentric and egocentric spatial tasks. The components uncovered using dPCA from the firing activity reveal that hippocampal neurons encode relevant task variables such decisions, navigational strategies and reward location. We compare this hippocampal features with standard reinforcement learning algorithms, highlighting similarities and differences. Finally, we demonstrate that a standard deep reinforcement learning model achieves similar average performance when compared to animal learning, but fails to mimic animals during task switching. Overall, our results gives insights into how the hippocampus solves reinforced spatial continual learning, and puts forward a framework to explicitly compare biological and machine learning during spatial continual learning.""","""This paper analyzes neural recording data taken from rodents performing a continual learning task using demixed principal component analysis, and aims to find representations for behaviorally relevant variables. They compare these features with those of a deep RL agent. I am a big fan of papers like this that try to bridge between neuroscience and machine learning. It seems to have a great motivation and there are some interesting results presented. However the reviewers pointed out many issues that lead me to believe this work is not quite ready for publication. In particular, not considering space when analyzing hippocampal rodent data, as R2 points out, seems to be a major oversight. In addition, the sample size is incredibly small (5 rats, only 1 of which was used for the continual learning simulation). This seems to me like more of an exploratory, pilot study than a full experiment that is ready for publication, and therefore I am unfortunately recommending reject. Reviewer comments were very thorough and on point. Sounds like the authors are already working on the next version of the paper with these points in mind, so I look forward to it. 
""" 83,"""Self-Adversarial Learning with Comparative Discrimination for Text Generation""","['adversarial learning', 'text generation']","""Conventional Generative Adversarial Networks (GANs) for text generation tend to have issues of reward sparsity and mode collapse that affect the quality and diversity of generated samples. To address the issues, we propose a novel self-adversarial learning (SAL) paradigm for improving GANs' performance in text generation. In contrast to standard GANs that use a binary classifier as its discriminator to predict whether a sample is real or generated, SAL employs a comparative discriminator which is a pairwise classifier for comparing the text quality between a pair of samples. During training, SAL rewards the generator when its currently generated sentence is found to be better than its previously generated samples. This self-improvement reward mechanism allows the model to receive credits more easily and avoid collapsing towards the limited number of real samples, which not only helps alleviate the reward sparsity issue but also reduces the risk of mode collapse. Experiments on text generation benchmark datasets show that our proposed approach substantially improves both the quality and the diversity, and yields more stable performance compared to the previous GANs for text generation.""","""This paper proposes a method for improving training of text generation with GANs by performing discrimination between different generated examples, instead of solely between real and generated examples. R3 and R1 appreciated the general idea, and thought that while there are still concerns, overall the paper seems to be interesting enough to warrant publication at ICLR. R2 has a rating of ""weak reject"", but I tend to agree with the authors that comparison with other methods that use different model architectures is orthogonal to the contribution of this paper. In sum, I think that this paper would likely make a good contribution to ICLR and recommend acceptance.""" 84,"""All Simulations Are Not Equal: Simulation Reweighing for Imperfect Information Games""","['Contract Bridge', 'Simulation', 'Imperfect Information Games', 'Reweigh', 'Belief Modeling']","""Imperfect information games are challenging benchmarks for artificial intelligent systems. To reason and plan under uncertainty is a key towards general AI. Traditionally, large amounts of simulations are used in imperfect information games, and they sometimes perform sub-optimally due to large state and action spaces. In this work, we propose a simulation reweighing mechanism using neural networks. It performs backwards verification to public previous actions and assign proper belief weights to the simulations from the information set of the current observation, using an incomplete state solver network (ISSN). We use simulation reweighing in the playing phase of the game contract bridge, and show that it outperforms previous state-of-the-art Monte Carlo simulation based methods, and achieves better play per decision. ""","""A method is introduced to estimate the hidden state in imperfect information in multiplayer games, in particular Bridge. This is interesting, but the paper falls short in various ways. Several reviewers complained about the readability of the paper, and also about the quality and presentation of the interesting results. 
It seems that this paper represents an interesting idea, but it is not yet ready for publication.""" 85,"""Hierarchical Graph-to-Graph Translation for Molecules""","['graph generation', 'deep learning']","""The problem of accelerating drug discovery relies heavily on automatic tools to optimize precursor molecules to afford them better biochemical properties. Our work in this paper substantially extends prior state-of-the-art graph-to-graph translation methods for molecular optimization. In particular, we realize coherent multi-resolution representations by interweaving the encoding of substructure components with the atom-level encoding of the original molecular graph. Moreover, our graph decoder is fully autoregressive, and interleaves each step of adding a new substructure with the process of resolving its attachment to the emerging molecule. We evaluate our model on multiple molecular optimization tasks and show that our model significantly outperforms previous state-of-the-art baselines.""","""Two reviewers are negative on this paper while the other reviewer is slightly positive. Overall, the paper does not meet the bar for ICLR. A reject is recommended.""" 86,"""The asymptotic spectrum of the Hessian of DNN throughout training""","['theory of deep learning', 'loss surface', 'training', 'fisher information matrix']","""The dynamics of DNNs during gradient descent are described by the so-called Neural Tangent Kernel (NTK). In this article, we show that the NTK allows one to gain precise insight into the Hessian of the cost of DNNs: we obtain a full characterization of the asymptotics of the spectrum of the Hessian, at initialization and during training. ""","""This paper studies the spectrum of the Hessian through training, making connections with the NTK limit. While many of the results are perhaps unsurprising, and more empirically driven, taken together the paper represents a valuable contribution towards our understanding of generalization in deep learning. Please carefully account for the reviewer comments in the final version.""" 87,"""VILD: Variational Imitation Learning with Diverse-quality Demonstrations""","['Imitation learning', 'inverse reinforcement learning', 'noisy demonstrations']","""The goal of imitation learning (IL) is to learn a good policy from high-quality demonstrations. However, the quality of demonstrations in reality can be diverse, since it is easier and cheaper to collect demonstrations from a mix of experts and amateurs. IL in such situations can be challenging, especially when the level of demonstrators' expertise is unknown. We propose a new IL paradigm called Variational Imitation Learning with Diverse-quality demonstrations (VILD), where we explicitly model the level of demonstrators' expertise with a probabilistic graphical model and estimate it along with a reward function. We show that a naive estimation approach is not suitable for large state and action spaces, and fix this issue by using a variational approach that can be easily implemented using existing reinforcement learning methods. Experiments on continuous-control benchmarks demonstrate that VILD outperforms state-of-the-art methods. Our work enables scalable and data-efficient IL under more realistic settings than before.""","""The paper proposes a new imitation learning algorithm that explicitly models the quality of demonstrators. All reviewers agreed that the problem and the approach were interesting, the paper well-written, and the experiments well-conducted.
However, there was a shared concern about the applicability of the method to more realistic settings, in which the model generating the demonstrations does not fall under the assumptions of the method. The authors did add a real-world experiment during the rebuttal, but the reviewers were suspicious of the reported InfoGAIL performance and were not persuaded to change their assessment. Following this discussion, I recommend rejection at this time, but it seems like a good paper and I encourage the authors to do a more careful validation experiment and resubmit to a future venue.""" 88,"""Visual Interpretability Alone Helps Adversarial Robustness""","['adversarial robustness', 'visual explanation', 'CNN', 'image classification']","""Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability, and that interpretability is itself susceptible to adversarial attacks. In this paper, we theoretically show that with the correct measurement of interpretation, it is actually difficult to hide adversarial examples, as confirmed by experiments on MNIST, CIFAR-10 and Restricted ImageNet. Spurred by that, we develop a novel defensive scheme built only on robust interpretation (without resorting to adversarial loss minimization). We show that our defense achieves similar classification robustness to state-of-the-art robust training methods while attaining higher interpretation robustness under various settings of adversarial attacks.""","""This work focuses on how one can design models with robust interpretations. While this is an interesting direction, the paper would benefit from a more careful treatment of its technical claims. """ 89,"""An Inductive Bias for Distances: Neural Nets that Respect the Triangle Inequality""","['metric learning', 'deep metric learning', 'neural network architectures', 'triangle inequality', 'graph distances']","""Distances are pervasive in machine learning. They serve as similarity measures, loss functions, and learning targets; it is said that a good distance measure solves a task. When defining distances, the triangle inequality has proven to be a useful constraint, both theoretically---to prove convergence and optimality guarantees---and empirically---as an inductive bias. Deep metric learning architectures that respect the triangle inequality rely, almost exclusively, on Euclidean distance in the latent space. Though effective, this fails to model two broad classes of subadditive distances, common in graphs and reinforcement learning: asymmetric metrics, and metrics that cannot be embedded into Euclidean space. To address these problems, we introduce novel architectures that are guaranteed to satisfy the triangle inequality. We prove our architectures universally approximate norm-induced metrics on pseudo-formula, and present a similar result for modified Input Convex Neural Networks. We show that our architectures outperform existing metric approaches when modeling graph distances and have a better inductive bias than non-metric approaches when training data is limited in the multi-goal reinforcement learning setting. ""","""This paper proposes a neural network approach to approximate distances, based on a representation of norms in terms of convex homogeneous functions. The authors show universal approximation of norm-induced metrics and present applications to value-function approximation in RL and graph distance problems.
Reviewers were in general agreement that this is a solid paper, well-written and with compelling results. The AC shares this positive assessment and therefore recommends acceptance. """ 90,"""Improving Semantic Parsing with Neural Generator-Reranker Architecture""","['Natural Language Processing', 'Semantic Parsing', 'Neural Reranking']","""Semantic parsing is the problem of deriving machine-interpretable meaning representations from natural language utterances. Neural models with encoder-decoder architectures have recently achieved substantial improvements over traditional methods. Although neural semantic parsers appear to have relatively high recall using large beam sizes, there is room for improvement with respect to one-best precision. In this work, we propose a generator-reranker architecture for semantic parsing. The generator produces a list of potential candidates and the reranker, which consists of a pre-processing step for the candidates followed by a novel critic network, reranks these candidates based on the similarity between each candidate and the input sentence. We show the advantages of this approach along with how it improves the parsing performance through extensive analysis. We evaluate our model on three semantic parsing datasets (GEO, ATIS, and OVERNIGHT). The overall architecture achieves state-of-the-art results on all three datasets. ""","""This paper presents and evaluates a technique for semantic parsing, and in particular proposes a model to re-rank the candidates generated by beam search. The paper was reviewed by 3 experts and received Reject, Weak Reject, and Weak Reject opinions. The reviews identified strengths of the paper but also significant concerns, mostly centered around the experimental evaluation (including choice of datasets, lack of direct comparison to baselines, need for more methodical and quantitative analysis, need for additional analysis, etc.) and some questions about the design of the technical approach. The authors submitted responses that addressed some of these concerns, but indicated that additional experimentation would be needed to address all of them. In light of these reviews, we are not able to recommend acceptance at this time, but I hope the authors use the detailed, constructive feedback to improve the paper for another venue.""" 91,"""Natural- to formal-language generation using Tensor Product Representations""","['Neural Symbolic Reasoning', 'Deep Learning', 'Natural Language Processing', 'Structural Representation', 'Interpretation of Learned Representations']","""Generating formal language represented by relational tuples, such as Lisp programs or mathematical expressions, from a natural-language input is an extremely challenging task because it requires explicitly capturing discrete symbolic structural information from the input to generate the output. Most state-of-the-art neural sequence models do not explicitly capture such structural information, and thus do not perform well on these tasks. In this paper, we propose a new encoder-decoder model based on Tensor Product Representations (TPRs) for Natural- to Formal-language generation, called TP-N2F. The encoder of TP-N2F employs TPR 'binding' to encode natural-language symbolic structure in vector space and the decoder uses TPR 'unbinding' to generate a sequence of relational tuples, each consisting of a relation (or operation) and a number of arguments, in symbolic space.
TP-N2F considerably outperforms LSTM-based Seq2Seq models, setting new state-of-the-art results on two benchmarks: the MathQA dataset for math problem solving, and the AlgoList dataset for program synthesis. Ablation studies show that the improvements are mainly attributable to the use of TPRs in both the encoder and decoder to explicitly capture relational structure information for symbolic reasoning. ""","""The paper proposes a new seq2seq method for natural-language-to-formal-language translation. Fixed-length Tensor Product Representations are used as the intermediate representation between encoder and decoder. Experiments are conducted on the MathQA and AlgoList datasets and show the effectiveness of the method. Intensive discussion took place between the authors and reviewers. Despite the various concerns raised by the reviewers, the main problem pointed out by both reviewer #3 and reviewer #4 is that there is a gap between the theory and the implementation in this paper. The other reviewer (#2) likes the paper but is less confident and tends to agree with the other two reviewers.""" 92,"""Adaptive network sparsification with dependent variational beta-Bernoulli dropout""","['network sparsification', 'variational inference', 'pruning']","""While variational dropout approaches have been shown to be effective for network sparsification, they are still suboptimal in the sense that they set the dropout rate for each neuron without consideration of the input data. With such input-independent dropout, each neuron evolves to be generic across inputs, which makes it difficult to sparsify networks without accuracy loss. To overcome this limitation, we propose adaptive variational dropout whose probabilities are drawn from a sparsity-inducing beta-Bernoulli prior. It allows each neuron to evolve either to be generic or specific for certain inputs, or to be dropped altogether. Such input-adaptive sparsity-inducing dropout allows the resulting network to tolerate a larger degree of sparsity without losing its expressive power, by removing redundancies among features. We validate our dependent variational beta-Bernoulli dropout on multiple public datasets, on which it obtains significantly more compact networks than baseline methods, with consistent accuracy improvements over the base networks.""","""This paper introduces a new adaptive variational dropout approach to balance accuracy, sparsity and computation. The method proposed here is sound, and the motivation for smaller (perhaps sparser) networks is easy to follow. The paper provides experiments on several datasets and compares against several other regularization/pruning approaches, measuring accuracy, speedup, and memory. The reviewers agreed on all these points, but overall they found the results unconvincing. They requested (1) more baselines (which the authors added), (2) larger tasks/datasets, and (3) more variety in network architectures. The overall impression was that it was hard to see a clear benefit of the proposed approach, based on the provided tables of results. The paper could sharpen its impact with several adjustments. The results are much clearer when looking at the error vs speedup graphs. Presenting ""representative results"" in the tables was confusing, especially considering the proposed approach rarely dominated across all measures. It was unclear how the variants of the algorithms presented in the tables were selected; explaining this would help a lot.
In addition, more text is needed to help the reader understand how improvements in speed, accuracy, and memory matter. For example, in LeNet 500-300, is a speedup of ~12 at 1.26 error for BB worth it compared to a speedup of ~8 at similar error for L_0? How should the reader think about differences in speedup, memory, and accuracy? Perhaps explanations linking these metrics to their impact in real applications would help. I found myself wondering this about pretty much every result, especially when better speedup and memory could be achieved at the cost of some accuracy: how much does the reduction in accuracy actually matter? Are speed and size the dominant concerns? I don't know. Overall the analysis and descriptions of the results are very terse, leaving much for the reader to figure out (for example, fig 2 bottom right). If a result is worth including in the paper, it's worth explaining to the reader. Summary statements like ""BB and DBB either achieve significantly smaller error than the baseline methods, or significant speedup and memory saving at similar error rates"" are not helpful when there are so many dimensions of performance to figure out. The paper spends a lot of time explaining what was done in a matter-of-fact way, but little time helping the reader interpret the results. There are other issues that hurt the paper, including reporting the results of only 3 runs, sometimes reporting the median without explanation, undefined metrics like speedup and %memory (explain how they are calculated), restricting the batch size for all methods to a particular value without explanation, and an overall somewhat informal and imprecise discussion of the empirical methodology. The authors did a nice job responding to the reviewers (illustrating good understanding of the area and the strengths of their method), and this could be a strong paper indeed if the changes suggested above were implemented. Including SSL and SVG in the appendix was great, but they really should have been included in the speedup vs error plots throughout the paper. This is a nice direction and was very close. Keep going!""" 93,"""Winning the Lottery with Continuous Sparsification""",[],"""The Lottery Ticket Hypothesis from Frankle & Carbin (2019) conjectures that, for typically-sized neural networks, it is possible to find small sub-networks which train faster and yield superior performance to their original counterparts. The proposed algorithm to search for such sub-networks (winning tickets), Iterative Magnitude Pruning (IMP), consistently finds sub-networks with 90-95% fewer parameters which indeed train faster and better than the overparameterized models they were extracted from, creating potential applications to problems such as transfer learning. In this paper, we propose a new algorithm to search for winning tickets, Continuous Sparsification, which continuously removes parameters from a network during training, and learns the sub-network's structure with gradient-based methods instead of relying on pruning strategies.
We show empirically that our method is capable of finding tickets that outperform the ones learned by Iterative Magnitude Pruning, while providing up to 5 times faster search, when measured in number of training epochs.""","""This paper proposes a new algorithm called Continuous Sparsification (CS) to search for winning tickets (in the context of the Lottery Ticket Hypothesis from Frankle & Carbin (2019)), as an alternative to the Iterative Magnitude Pruning (IMP) algorithm proposed therein. CS continuously removes parameters from a network during training, and learns the sub-network's structure with gradient-based methods instead of relying on pruning strategies. The paper shows empirically that CS finds lottery tickets that outperform the ones learned by IMP, with up to 5 times faster search when measured in number of training epochs. While this paper presents a novel contribution to pruning and to finding winning lottery tickets and is very well written, there are some concerns raised by the reviewers regarding the current evaluation. The paper presents no concrete data on the comparative costs of performing CS and IMP even though the core claim is that CS is more efficient. The paper does not disclose enough detail to compute these costs, and it seems like CS is more expensive than IMP for standard workflows. Moreover, the current presentation of the data through ""pareto curves"" is misleadingly favorable to CS. The reviewers suggest including more experiments on ImageNet and a more thorough evaluation as a pruning technique beyond the lottery ticket hypothesis. We recommend that the authors address the reviewers' detailed comments in an eventual resubmission. """ 94,"""Improving Exploration of Deep Reinforcement Learning using Planning for Policy Search""","['reinforcement learning', 'kinodynamic planning', 'policy search']","""Most Deep Reinforcement Learning methods perform local search and are therefore prone to getting stuck in non-optimal solutions. Furthermore, in simulation-based training, such as domain-randomized simulation training, the availability of a simulation model is not exploited, which potentially decreases efficiency. To overcome the issues of local search and to exploit access to simulation models, we propose the use of kino-dynamic planning methods as part of a model-based reinforcement learning method, learning in an off-policy fashion from solved planning instances. We show that, even on a simple toy domain, D-RL methods (DDPG, PPO, SAC) are not immune to local optima and require additional exploration mechanisms. We show that our planning method exhibits better state space coverage, collects data that allows for better policies than D-RL methods without additional exploration mechanisms, and that starting from the planner data and performing additional training results in policies as good as or better than vanilla D-RL methods, while also creating data that is more fit for re-use in modified tasks. ""","""The paper is about exploration in deep reinforcement learning. The reviewers agree that this is an interesting and important topic, but the authors provide only a slim analysis and theoretical support for the proposed methods.
Furthermore, the authors are encouraged to evaluate the proposed method on more than a single benchmark problem.""" 95,"""AdaX: Adaptive Gradient Descent with Exponential Long Term Memory""","['Optimization Algorithm', 'Machine Learning', 'Deep Learning', 'Adam']","""Adaptive optimization algorithms such as RMSProp and Adam have fast convergence and a smooth learning process. Despite their successes, they are proven to have a non-convergence issue even in convex optimization problems, as well as weak performance compared with first-order gradient methods such as stochastic gradient descent (SGD). Several other algorithms, for example AMSGrad and AdaShift, have been proposed to alleviate these issues, but only a minor effect has been observed. This paper further analyzes the performance of such algorithms in a non-convex setting by extending their non-convergence issue to a simple non-convex case, and shows that Adam's design of update steps can lead the algorithm to local minima. To address the above problems, we propose a novel adaptive gradient descent algorithm, named AdaX, which accumulates long-term past gradient information exponentially. We prove the convergence of AdaX in both convex and non-convex settings. Extensive experiments show that AdaX outperforms Adam in various computer vision and natural language processing tasks and can catch up with SGD. ""","""This paper analyzes the non-convergence issue of Adam in a simple non-convex case. The authors propose a new adaptive gradient descent algorithm based on exponential long-term memory, and analyze its convergence in both convex and non-convex settings. The major weakness of this paper, pointed out by many reviewers, is its experimental evaluation, ranging from experimental design to missing comparisons with strong baseline algorithms. I agree with the reviewers' evaluation and thus recommend rejection.""" 96,"""Matrix Multilayer Perceptron""","['Multilayer Perceptron', 'symmetric positive definite', 'heteroscedastic regression', 'covariance estimation']","""Models that output a vector of responses given some inputs, in the form of a conditional mean vector, are at the core of machine learning. This includes neural networks such as the multilayer perceptron (MLP). However, models that output a symmetric positive definite (SPD) matrix of responses given inputs, in the form of a conditional covariance function, are far less studied, especially within the context of neural networks. Here, we introduce a new variant of the MLP, referred to as the matrix MLP, that is specialized at learning SPD matrices. Our construction not only respects the SPD constraint, but also makes explicit use of it. This translates into a model which effectively performs the task of SPD matrix learning even in scenarios where data are scarce. We present an application of the model to heteroscedastic multivariate regression, including convincing performance on six real-world datasets. ""","""This paper introduces a novel architecture and loss for estimating PSD matrices using neural networks. There is some theoretical justification for the architecture, and a small-scale but encouraging experiment. Overall, I think there is a sensible contribution here, but there are so many architectural and computational choices presented together at once that it's hard to tell what the important parts are.
The main problems with this paper are: 1) the O(N^3) scalability of the approach; 2) the derivation of the architecture and gradient computations wasn't clear about what choices were available and why. Several alternative choices were mentioned but not evaluated. I think the authors also need to improve their understanding of automatic differentiation. Backprop through eigendecomposition is already available in most autodiff packages. It was claimed that a certain kind of matrix derivative provided better generalization, which seems like a strong claim to make in general. 3) The experimental setup seemed contrived, except for the heteroskedastic regression experiments, which lacked competitive baselines. Why were the GP and MLPs homoskedastic? As a matter of personal preference, I found that having 4 different ""H""s differing only in font and capitalization for the network architecture was hard to keep track of. I agree that R1 had some unjustified comments and R2's review was contentless. I apologize for these inadequate reviews. """ 97,"""Deep amortized clustering""","['clustering', 'amortized inference', 'meta learning', 'deep learning']","""We propose \textit{deep amortized clustering} (DAC), a neural architecture which learns to cluster datasets efficiently using a few forward passes. DAC implicitly learns what makes a cluster, how to group data points into clusters, and how to count the number of clusters in datasets. DAC is meta-learned using labelled datasets for training, a process distinct from traditional clustering algorithms which usually require hand-specified prior knowledge about cluster shapes/structures. We empirically show, on both synthetic and image data, that DAC can efficiently and accurately cluster new datasets coming from the same distribution used to generate the training datasets. ""","""This paper introduces a new clustering method, which builds upon the work introduced by Lee et al., 2019: contextual information across different dataset samples is gathered with a transformer, and then used to predict the cluster label for a given sample. All reviewers agree the writing should be improved and clarified. The novelty is also on the low side, given the previous work by Lee et al. Experiments should be more convincing. """ 98,"""Starfire: Regularization-Free Adversarially-Robust Structured Sparse Training""","['Structured Sparsity', 'Sparsity', 'Training', 'Compression', 'Adversarial', 'Regularization', 'Acceleration']","""This paper studies structured sparse training of CNNs with a gradual pruning technique that leads to fixed, sparse weight matrices after a set number of epochs. We simplify the structure of the enforced sparsity so that it reduces the overhead caused by regularization. The proposed training methodology explores several options for structured sparsity. We study various tradeoffs with respect to pruning duration, learning-rate configuration, and the total length of training. We show that our method creates a sparse version of ResNet50 and ResNet50v1.5 on full ImageNet while remaining within a negligible <1% margin of accuracy loss. To make sure that this type of sparse training does not harm the robustness of the network, we also demonstrate how the network behaves in the presence of adversarial attacks. Our results show that with 70% target sparsity, over 75% top-1 accuracy is achievable.
""","""This paper concerns a training procedure for neural networks which results in sparse connectivity in the final resulting network, consisting of an ""early era"" of training in which pruning takes place, followed by fixed connectivity training thereafter, and a study of tradeoffs inherent in various approaches to structured and unstructured pruning, and an investigation of adversarial robustness of pruned networks. While some reviewers found the general approach interesting, all reviewers were critical of the lack of novelty, clarity and empirical rigour. R2 in particular raised concerns about the motivation, evaluation of computational savings (that FLOPS should be measured directly), and felt that the discussion of adversarial robustness was out of place and ""an afterthought"". Reviewers were unconvinced by rebuttals, and no attempts were made at improving the paper (additional experiments were promised, but not delivered). I therefore recommend rejection. """ 99,"""Walking the Tightrope: An Investigation of the Convolutional Autoencoder Bottleneck""","['convolutional autoencoder', 'bottleneck', 'representation learning']","""In this paper, we present an in-depth investigation of the convolutional autoencoder (CAE) bottleneck. Autoencoders (AE), and especially their convolutional variants, play a vital role in the current deep learning toolbox. Researchers and practitioners employ CAEs for a variety of tasks, ranging from outlier detection and compression to transfer and representation learning. Despite their widespread adoption, we have limited insight into how the bottleneck shape impacts the emergent properties of the CAE. We demonstrate that increased height and width of the bottleneck drastically improves generalization, which in turn leads to better performance of the latent codes in downstream transfer learning tasks. The number of channels in the bottleneck, on the other hand, is secondary in importance. Furthermore, we show empirically, that, contrary to popular belief, CAEs do not learn to copy their input, even when the bottleneck has the same number of neurons as there are pixels in the input. Copying does not occur, despite training the CAE for 1,000 epochs on a tiny (~ 600 images) dataset. We believe that the findings in this paper are directly applicable and will lead to improvements in models that rely on CAEs.""","""The paper investigates the effect of convolutional information bottlenecks to generalization. The paper concludes that the width and height of the bottleneck can greatly influence generalization, whereas the number of channels has smaller effect. The paper also shows evidence against a common belief that CAEs with sufficiently large bottleneck will learn an identity map. During the rebuttal period, there was a long discussion mainly about the sufficiency of the experimental setup and the trustworthiness of the claims made in the paper. A paper that empirically investigates an exiting method or belief should include extensive experiments of high quality in to enable general conclusions. 
I'm thus recommending rejection, but encourage the authors to improve the experiments and resubmit.""" 100,"""Deep Semi-Supervised Anomaly Detection""","['anomaly detection', 'deep learning', 'semi-supervised learning', 'unsupervised learning', 'outlier detection', 'one-class classification', 'deep anomaly detection', 'deep one-class classification']","""Deep approaches to anomaly detection have recently shown promising results over shallow methods on large and complex datasets. Typically, anomaly detection is treated as an unsupervised learning problem. In practice however, one may have---in addition to a large set of unlabeled samples---access to a small pool of labeled samples, e.g. a subset verified by some domain expert as being normal or anomalous. Semi-supervised approaches to anomaly detection aim to utilize such labeled samples, but most proposed methods are limited to merely including labeled normal samples. Only a few methods take advantage of labeled anomalies, with existing deep approaches being domain-specific. In this work we present Deep SAD, an end-to-end deep methodology for general semi-supervised anomaly detection. We further introduce an information-theoretic framework for deep anomaly detection based on the idea that the entropy of the latent distribution for normal data should be lower than the entropy of the anomalous distribution, which can serve as a theoretical interpretation for our method. In extensive experiments on MNIST, Fashion-MNIST, and CIFAR-10, along with other anomaly detection benchmark datasets, we demonstrate that our method is on par with or outperforms shallow, hybrid, and deep competitors, yielding appreciable performance improvements even when provided with only little labeled data.""","""Issues raised by the reviewers have been addressed by the authors, and thus I suggest the acceptance of this paper.""" 101,"""EvoNet: A Neural Network for Predicting the Evolution of Dynamic Graphs""","['temporal graphs', 'graph neural network', 'graph generative model', 'graph topology prediction']","""Neural networks for structured data like graphs have been studied extensively in recent years. To date, the bulk of research activity has focused mainly on static graphs. However, most real-world networks are dynamic since their topology tends to change over time. Predicting the evolution of dynamic graphs is a task of high significance in the area of graph mining. Despite its practical importance, the task has not been explored in depth so far, mainly due to its challenging nature. In this paper, we propose a model that predicts the evolution of dynamic graphs. Specifically, we use a graph neural network along with a recurrent architecture to capture the temporal evolution patterns of dynamic graphs. Then, we employ a generative model which predicts the topology of the graph at the next time step and constructs a graph instance that corresponds to that topology. We evaluate the proposed model on several artificial datasets following common network evolution dynamics, as well as on real-world datasets. Results demonstrate the effectiveness of the proposed model. ""","""The paper proposes a combination of graph neural networks and a graph generation model (GraphRNN) to model the evolution of dynamic graphs, predicting the topology of the next graph given a sequence of graphs. The problem to be addressed seems interesting, but lacks strong motivation; it would therefore help if some important applications were specified. The proposed approach also lacks novelty.
It would be better to point out why the specific combination of the two existing models is the most appropriate approach to address the task. The experiments are not fully convincing. Bigger, more comprehensive datasets (with the right motivating applications) should be used to test the effectiveness of the proposed model. In short, the current version fails to raise excitement from readers for the reasons above. A major revision addressing these issues could lead to a strong publication in the future. """ 102,"""Step Size Optimization""","['Deep Learning', 'Step Size Adaptation', 'Nonconvex Optimization']","""This paper proposes a new approach for step size adaptation in gradient methods. The proposed method, called step size optimization (SSO), formulates the step size adaptation as an optimization problem which minimizes the loss function with respect to the step size for the given model parameters and gradients. Then, the step size is optimized based on the alternating direction method of multipliers (ADMM). SSO does not require second-order information or any probabilistic models for adapting the step size, so it is efficient and easy to implement. Furthermore, we also introduce stochastic SSO for stochastic learning environments. In the experiments, we integrated SSO into vanilla SGD and Adam, and they outperformed state-of-the-art adaptive gradient methods including RMSProp, Adam, L4-Adam, and AdaBound on extensive benchmark datasets.""","""The paper is rejected based on unanimous reviews.""" 103,"""HiLLoC: lossless image compression with hierarchical latent variable models""","['compression', 'variational inference', 'lossless compression', 'deep latent variable models']","""We make the following striking observation: fully convolutional VAE models trained on 32x32 ImageNet can generalize well, not just to 64x64 but also to far larger photographs, with no changes to the model. We use this property, applying fully convolutional models to lossless compression, demonstrating a method to scale the VAE-based 'Bits-Back with ANS' algorithm for lossless compression to large color photographs, and achieving state of the art for compression of full size ImageNet images. We release Craystack, an open source library for convenient prototyping of lossless compression using probabilistic models, along with full implementations of all of our compression results.""","""The paper proposes a lossless image compression method consisting of a hierarchical VAE and a bits-back version of ANS. Compared to previous work, the paper (i) improves the compression rate by adapting the discretization of the latent space required for the entropy coder ANS, (ii) increases compression speed by implementing a vectorized version of ANS, and (iii) shows that a model trained on the low-resolution ImageNet 32 dataset can generalize its compression capabilities to higher resolutions. The authors properly addressed the reviewers' concerns. The main criticisms which remain are that (i) the method is not yet practical (long compression time) and (ii) the results are not state of the art, but the contribution is nevertheless solid.""" 104,"""Making Sense of Reinforcement Learning and Probabilistic Inference""","['Reinforcement learning', 'Bayesian inference', 'Exploration']","""Reinforcement learning (RL) combines a control problem with statistical estimation: The system dynamics are not known to the agent, but can be learned through experience.
A recent line of research casts RL as inference and suggests a particular framework to generalize the RL problem as probabilistic inference. Our paper surfaces a key shortcoming in that approach, and clarifies the sense in which RL can be coherently cast as an inference problem. In particular, an RL agent must consider the effects of its actions upon future rewards and observations: the exploration-exploitation tradeoff. In all but the most simple settings, the resulting inference is computationally intractable, so that practical RL algorithms must resort to approximation. We demonstrate that the popular 'RL as inference' approximation can perform poorly in even very basic problems. However, we show that with a small modification the framework does yield algorithms that can provably perform well, and we show that the resulting algorithm is equivalent to the recently proposed K-learning, which we further connect with Thompson sampling. ""","""The paper explores in more detail the ""RL as inference"" viewpoint and highlights some issues with this approach, as well as ways to address these issues. The new version of the paper has effectively addressed some of the reviewers' initial concerns, resulting in an overall well-written paper with interesting insights.""" 105,"""Towards Principled Objectives for Contrastive Disentanglement""","['Disentanglement', 'Contrastive']","""Unsupervised learning is an important tool that has received a significant amount of attention for decades. Its goal is `unsupervised recovery,' i.e., extracting salient factors/properties from unlabeled data. Because of the challenges in defining salient properties, recently, `contrastive disentanglement' has gained popularity to discover the additional variations that are enhanced in one dataset relative to another. Existing formulations have devised a variety of losses for this task. However, all present-day methods exhibit two major shortcomings: (1) encodings for data that do not exhibit salient factors are not pushed to carry no signal; and (2) the introduced losses are often hard to estimate and require additional trainable parameters. We present a new formulation for contrastive disentanglement which avoids both shortcomings by carefully formulating a probabilistic model and by using non-parametric yet easily computable metrics. We show on four challenging datasets that the proposed approach is able to better disentangle salient factors. ""","""The paper proposes new regularizations for contrastive disentanglement. After reading the authors' response, all the reviewers still think that the contribution is too limited and all agree to reject.""" 106,"""Never Give Up: Learning Directed Exploration Strategies""","['deep reinforcement learning', 'exploration', 'intrinsic motivation']","""We propose a reinforcement learning agent to solve hard exploration games by learning a range of directed exploratory policies. We construct an episodic memory-based intrinsic reward using k-nearest neighbors over the agent's recent experience to train the directed exploratory policies, thereby encouraging the agent to repeatedly revisit all states in its environment. A self-supervised inverse dynamics model is used to train the embeddings of the nearest neighbour lookup, biasing the novelty signal towards what the agent can control.
We employ the framework of Universal Value Function Approximators to simultaneously learn many directed exploration policies with the same neural network, with different trade-offs between exploration and exploitation. By using the same neural network for different degrees of exploration/exploitation, transfer is demonstrated from predominantly exploratory policies to effective exploitative policies. The proposed method can be incorporated to run with modern distributed RL agents that collect large amounts of experience from many actors running in parallel on separate environment instances. Our method doubles the performance of the base agent in all hard exploration games in the Atari-57 suite while maintaining a very high score across the remaining games, obtaining a median human normalised score of 1344.0%. Notably, the proposed method is the first algorithm to achieve non-zero rewards (with a mean score of 8,400) in the game of Pitfall! without using demonstrations or hand-crafted features.""","""This paper tackles hard-exploration RL problems. The idea is to learn separate exploration and exploitation strategies using the same network (representation). The exploration is driven by intrinsic rewards, which are generated using an episodic memory module and a lifelong novelty module. Several experiments (simple and Atari domains) show that the proposed approach compares favourably with the baselines. The work is novel both in terms of the episodic curiosity metric and its integration with the life-long curiosity metric, and the results are convincing. All reviewers are positive about this paper; I therefore recommend acceptance.""" 107,"""Continual Learning with Adaptive Weights (CLAW)""",['Continual learning'],"""Approaches to continual learning aim to successfully learn a set of related tasks that arrive in an online manner. Recently, several frameworks have been developed which enable deep learning to be deployed in this learning scenario. A key modelling decision is to what extent the architecture should be shared across tasks. On the one hand, separately modelling each task avoids catastrophic forgetting but it does not support transfer learning and leads to large models. On the other hand, rigidly specifying a shared component and a task-specific part enables task transfer and limits the model size, but it is vulnerable to catastrophic forgetting and restricts the form of task transfer that can occur. Ideally, the network should adaptively identify which parts of the network to share in a data-driven way. Here we introduce such an approach called Continual Learning with Adaptive Weights (CLAW), which is based on probabilistic modelling and variational inference. Experiments show that CLAW achieves state-of-the-art performance on six benchmarks in terms of overall continual learning performance, as measured by classification accuracy, and in terms of addressing catastrophic forgetting. ""","""The paper proposes a new variational-inference-based continual learning algorithm with strong performance. There was some disagreement in the reviews, with perhaps the one shared concern being the complexity of the proposed method. One reviewer brought up other potentially related work, but this was convincingly rebutted by the authors. Finally, one reviewer had an issue with the simplicity of the networks in the experiments, but the authors rightly pointed out that the architectures were simply designed to match those from the baselines.
Continual learning has been an active area for quite some time, and convincingly achieving SOTA in a new way is a strong contribution that will be of interest to the community. Progress in a field is sometimes made by iteratively simplifying an initially complex solution, and this work lays a brick in that direction. For these reasons, I recommend acceptance. """ 108,"""Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models""","['regularization', 'finetuning', 'dropout', 'dropconnect', 'adaptive L2-penalty', 'BERT', 'pretrained language model']","""In natural language processing, it has been observed recently that generalization could be greatly improved by finetuning a large-scale language model pretrained on a large unlabeled corpus. Despite its recent success and wide adoption, finetuning a large pretrained language model on a downstream task is prone to degenerate performance when there are only a small number of training instances available. In this paper, we introduce a new regularization technique, to which we refer as mixout, motivated by dropout. Mixout stochastically mixes the parameters of two models. We show that our mixout technique regularizes learning to minimize the deviation from one of the two models and that the strength of regularization adapts along the optimization trajectory. We empirically evaluate the proposed mixout and its variants on finetuning a pretrained language model on downstream tasks. More specifically, we demonstrate that the stability of finetuning and the average accuracy greatly increase when we use the proposed approach to regularize finetuning of BERT on downstream tasks in GLUE.""","""This paper presents mixout, a regularization method that stochastically mixes the parameters of a pretrained language model and a target language model. Experiments on GLUE show that the proposed technique improves the stability and accuracy of finetuning a pretrained BERT on several downstream tasks. The paper is well written and the proposed idea is applicable in many settings. The authors have addressed the reviewers' concerns during the rebuttal period and all reviewers are now in agreement that this paper should be accepted. I think this paper would be a good addition to ICLR and recommend accepting it. """ 109,"""Undersensitivity in Neural Reading Comprehension""","['reading comprehension', 'undersensitivity', 'adversarial questions', 'adversarial training', 'robustness', 'biased data setting']","""Neural reading comprehension models have recently achieved impressive generalisation results, yet still perform poorly when given adversarially selected input. Most prior work has studied semantically invariant text perturbations which cause a model's prediction to change when it should not. In this work we focus on the complementary problem: excessive prediction undersensitivity, where input text is meaningfully changed and the model's prediction does not change when it should. We formulate a noisy adversarial attack which searches among semantic variations of comprehension questions for which a model still erroneously produces the same answer as for the original question, and with an even higher probability. We show that, despite comprising unanswerable questions, SQuAD2.0 and NewsQA models are vulnerable to this attack and commit a substantial fraction of errors on adversarially generated questions.
This indicates that current models, even where they can correctly predict the answer, rely on spurious surface patterns and are not necessarily aware of all the information provided in a given comprehension question. Developing this further, we experiment with both data augmentation and adversarial training as defence strategies: both are able to substantially decrease a model's vulnerability to undersensitivity attacks on held-out evaluation data. Finally, we demonstrate that adversarially robust models generalise better in a biased data setting with a train/evaluation distribution mismatch; they are less prone to overly relying on predictive cues only present in the training set, and outperform a conventional model in the biased data setting by up to 11% F1.""","""The paper investigates the sensitivity of a QA model to perturbations in the input, by replacing content words, such as named entities and nouns, in questions to make the question unanswerable from the document. Experimental analysis demonstrates that, while the original QA performance is not hurt, the models become significantly less vulnerable to such attacks. Reviewers all agree that the paper includes a thorough analysis; at the same time, they all suggested extensions to the paper, such as comparisons to earlier work and additional experimental results, which the authors made in the revision. However, reviewers also question the novelty of the approach, given existing data augmentation methods. Hence, I suggest rejecting the paper.""" 110,"""A novel Bayesian estimation-based word embedding model for sentiment analysis""","['sentiment analysis', 'sentiment word embeddings', 'maximum likelihood estimation', 'Bayesian estimation']","""Word embedding models have achieved state-of-the-art results in a variety of natural language processing tasks. However, current word embedding models mainly focus on rich semantic meanings and are challenged by capturing sentiment information. For this reason, we propose a novel sentiment word embedding model. In line with its working principle, the parameter estimation method is highlighted. For the task of semantic and sentiment embeddings, the parameters in the proposed model are determined using both maximum likelihood estimation and Bayesian estimation. Experimental results show the proposed model significantly outperforms the baseline methods in sentiment analysis for low-frequency words and sentences. Besides, it is also effective in conventional semantic and sentiment analysis tasks.""","""This paper proposes a method to improve word embeddings by incorporating sentiment probabilities. Reviewers appreciate the interesting and simple approach and acknowledge improved results on low-frequency words. However, reviewers find that the paper is lacking in two major aspects: 1) the writing is unclear, and thus it is difficult to understand and judge the contributions of this research; 2) perhaps because of 1, it is not convincing that the improvements are significant and result directly from the modeling contributions. I thank the authors for submitting this work to ICLR, and I hope that the reviewers' comments are helpful in improving this research for future submission.""" 111,"""Pseudo-LiDAR++: Accurate Depth for 3D Object Detection in Autonomous Driving""","['pseudo-LiDAR', '3D-object detection', 'stereo depth estimation', 'autonomous driving']","""Detecting objects such as cars and pedestrians in 3D plays an indispensable role in autonomous driving.
Existing approaches largely rely on expensive LiDAR sensors for accurate depth information. While pseudo-LiDAR has recently been introduced as a promising, much lower-cost alternative based solely on stereo images, there is still a notable performance gap. In this paper we provide substantial advances to the pseudo-LiDAR framework through improvements in stereo depth estimation. Concretely, we adapt the stereo network architecture and loss function to be more aligned with accurate depth estimation of faraway objects --- currently the primary weakness of pseudo-LiDAR. Further, we explore the idea of leveraging cheaper but extremely sparse LiDAR sensors, which alone provide insufficient information for 3D detection, to de-bias our depth estimation. We propose a depth-propagation algorithm, guided by the initial depth estimates, to diffuse these few exact measurements across the entire depth map. We show on the KITTI object detection benchmark that our combined approach yields substantial improvements in depth estimation and stereo-based 3D object detection --- outperforming the previous state-of-the-art detection accuracy for faraway objects by 40%. Our code is available at pseudo-url.""","""Three knowledgeable reviewers give a positive evaluation of the paper. The decision is to accept.""" 112,"""Empirical Bayes Transductive Meta-Learning with Synthetic Gradients""","['Meta-learning', 'Empirical Bayes', 'Synthetic Gradient', 'Information Bottleneck']","""We propose a meta-learning approach that learns from multiple tasks in a transductive setting, by leveraging the unlabeled query set in addition to the support set to generate a more powerful model for each task. To develop our framework, we revisit the empirical Bayes formulation for multi-task learning. The evidence lower bound of the marginal log-likelihood of empirical Bayes decomposes as a sum of local KL divergences between the variational posterior and the true posterior on the query set of each task. We derive a novel amortized variational inference that couples all the variational posteriors via a meta-model, which consists of a synthetic gradient network and an initialization network. Each variational posterior is derived from synthetic gradient descent to approximate the true posterior on the query set, although we do not have access to the true gradient. Our results on the Mini-ImageNet and CIFAR-FS benchmarks for episodic few-shot classification outperform previous state-of-the-art methods. In addition, we conduct two zero-shot learning experiments to further explore the potential of the synthetic gradient.""","""Three reviewers have assessed this paper and they have scored it 6/6/6 after rebuttal. Nonetheless, the reviewers have raised a number of criticisms and the authors are encouraged to resolve them for the camera-ready submission.""" 113,"""Neural Network Out-of-Distribution Detection for Regression Tasks""","['Out-of-distribution', 'deep learning', 'regression']","""Neural network out-of-distribution (OOD) detection aims to identify when a model is unable to generalize to new inputs, either due to covariate shift or anomalous data. Most existing OOD methods only apply to classification tasks, as they assume a discrete set of possible predictions. In this paper, we propose a method for neural network OOD detection that can be applied to regression problems. We demonstrate that the hidden features for in-distribution data can be described by a highly concentrated, low-dimensional distribution.
Therefore, we can model these in-distribution features with an extremely simple generative model, such as a Gaussian mixture model (GMM) with 4 or fewer components. We demonstrate on several real-world benchmark data sets that GMM-based feature detection achieves state-of-the-art OOD detection results on several regression tasks. Moreover, this approach is simple to implement and computationally efficient.""","""The paper investigates out-of-distribution detection for regression tasks. The reviewers raised several concerns about the novelty of the method relative to existing methods, its motivation and theoretical justification, and the clarity of the presentation (in particular, the discussion around regression vs classification). I encourage the authors to revise the draft based on the reviewers' feedback and resubmit to a different venue. """ 114,"""Data Valuation using Reinforcement Learning""","['Data valuation', 'Domain adaptation', 'Robust learning', 'Corrupted sample discovery']","""Quantifying the value of data is a fundamental problem in machine learning. Data valuation has multiple important use cases: (1) building insights about the learning task, (2) domain adaptation, (3) corrupted sample discovery, and (4) robust learning. To adaptively learn data values jointly with the target task predictor model, we propose a meta learning framework which we name Data Valuation using Reinforcement Learning (DVRL). We employ a data value estimator (modeled by a deep neural network) to learn how likely each datum is to be used in training the predictor model. We train the data value estimator using a reinforcement signal of the reward obtained on a small validation set that reflects performance on the target task. We demonstrate that DVRL yields superior data value estimates compared to alternative methods across different types of datasets and in a diverse set of application scenarios. The corrupted sample discovery performance of DVRL is close to optimal in many regimes (i.e. as if the noisy samples were known a priori), and for domain adaptation and robust learning DVRL significantly outperforms the state of the art by 14.6% and 10.8%, respectively. ""","""The paper suggests an RL-based approach to designing a data valuation estimator. The reviewers agree that the proposed method is new and promising, but they also raised concerns about the empirical evaluation, including the lack of comparison with other data valuation approaches and a limited ablation study. The authors provided a rebuttal to address these concerns. It improved the evaluation of one of the reviewers, but it is difficult to recommend acceptance given that we did not have a champion for this paper and the overall score is not high enough.""" 115,"""Continual Learning via Principal Components Projection""","['Neural network', 'continual learning', 'catastrophic forgetting', 'lifelong learning']","""Continual learning in neural networks (NN) often suffers from catastrophic forgetting. That is, when learning a sequence of tasks on an NN, the learning of a new task will cause weight changes that may destroy the learned knowledge embedded in the weights for previous tasks. Without solving this problem, it is difficult to use an NN to perform continual or lifelong learning. Although researchers have attempted to solve the problem in many ways, it remains challenging. In this paper, we propose a new approach, called principal components projection (PCP).
The idea is that in learning a new task, if we can ensure that the gradient updates occur only in directions orthogonal to the input vectors of the previous tasks, then the weight updates for learning the new task will not affect the previous tasks. We propose to compute the principal components of the input vectors and use them to transform the input and to project the gradient updates for learning each new task. PCP does not need to store any sampled data from previous tasks or to generate pseudo data of previous tasks and use them to help learn a new task. Empirical evaluation shows that the proposed method PCP markedly outperforms the state-of-the-art baseline methods.""","""There is no author response for this paper. The paper addresses the issue of catastrophic forgetting in continual learning. The authors build upon the idea from [Zheng,2019], namely finding gradient updates in the space perpendicular to the input vectors of the previous tasks, resulting in less forgetting, and propose an improvement, namely to use principal component analysis to enable learning new tasks without restricting their solution space as in [Zheng,2019]. While the reviewers acknowledge the importance of studying continual learning, they raised several concerns that were viewed by the AC as critical issues: (1) convincing experimental evaluation -- an analysis that clearly shows how and when the proposed method can solve the issue that [Zheng,2019] faces (the task similarity/dissimilarity scenario) would substantially strengthen the evaluation and would allow to assess the scope and contributions of this work; also see R3's detailed concerns and questions on empirical evaluation, R2's suggestion to follow the standard protocols, and R1's suggestion to use PackNet and HAT as baselines for comparison; (2) lack of presentation clarity -- see R2's concerns on how to improve, and R1's suggestions on how to better position the paper. A general consensus among reviewers and AC suggests that, in its current state, the manuscript is not ready for publication. It needs clarifications, more empirical studies, and polish to achieve the desired goal. """ 116,"""Using Hindsight to Anchor Past Knowledge in Continual Learning""","['Continual Learning', 'Lifelong Learning', 'Catastrophic Forgetting']","""In continual learning, the learner faces a stream of data whose distribution changes over time. Modern neural networks are known to suffer under this setting, as they quickly forget previously acquired knowledge. To address such catastrophic forgetting, state-of-the-art continual learning methods implement different types of experience replay, re-learning on past data stored in a small buffer known as episodic memory. In this work, we complement experience replay with a meta-learning technique that we call anchoring: the learner updates its knowledge on the current task, while keeping predictions on some anchor points of past tasks intact. These anchor points are learned using gradient-based optimization so as to maximize forgetting of the current task, in hindsight, when the learner is fine-tuned on the episodic memory of past tasks. Experiments on several supervised learning benchmarks for continual learning demonstrate that our approach improves the state of the art in terms of both accuracy and forgetting metrics and for various sizes of episodic memories. ""","""This paper proposes a continual learning method that uses anchor points for experience replay.
Anchor points are learned with gradient-based optimization to maximize forgetting on the current task. Experiments on MNIST, CIFAR, and miniImageNet show the benefit of the proposed approach. As noted by other reviewers, there are some grammatical issues with the paper. It is missing some important details in the experiments. It is unclear to me how the five random seeds relate to the way the datasets (tasks) are ordered in the experiments. Do the five random seeds correspond to five different dataset orderings? I think it would also be very interesting to see the anchor points that are chosen in practice. This issue is brought up by R4, and the authors responded that anchor points do not correspond to classes. Since the main idea of this paper is based on anchor points, it would be nice to analyze further to get a better understanding of what they represent. Finally, the authors only evaluate their method on image classification. While I believe the technique can be applied in other domains (e.g., reinforcement learning, natural language processing) with some modifications, without providing concrete empirical evidence in the paper, the authors need to clearly state that their proposed method is only evaluated on image classification and not sell it as a general method (yet). The authors also miss citations to some prior work on memory-based parameter adaptation and its variants. Regardless of all the above issues, this is still a borderline paper. However, due to space constraints, I recommend rejecting this paper for ICLR.""" 117,"""Which Tasks Should Be Learned Together in Multi-task Learning?""","['multi-task learning', 'Computer Vision']","""Many computer vision applications require solving multiple tasks in real-time. A neural network can be trained to solve multiple tasks simultaneously using 'multi-task learning'. This saves computation at inference time as only a single network needs to be evaluated. Unfortunately, this often leads to inferior overall performance as task objectives compete, which consequently poses the question: which tasks should and should not be learned together in one network when employing multi-task learning? We systematically study task cooperation and competition and propose a framework for assigning tasks to a few neural networks such that cooperating tasks are computed by the same neural network, while competing tasks are computed by different networks. Our framework offers a time-accuracy trade-off and can produce better accuracy using less inference time than not only a single large multi-task neural network but also many single-task networks. ""","""An approach to multi-task learning is presented, based on the idea of assigning tasks through the concepts of cooperation and competition. The main idea is well-motivated and explained well. The experiments demonstrate that the method is promising. However, there are a few concerns regarding fundamental aspects, such as: how are the decisions affected by the number of parameters? Could ad-hoc algorithms with a human in the loop provide the same benefit when the task set is small? More importantly, identifying task groups for multi-task learning is an idea presented in prior work, e.g. [1,2,3]. This important body of prior work is not discussed at all in this paper. [1] Han and Zhang. ""Learning multi-level task groups in multi-task learning"" [2] Bonilla et al. ""Multi-task Gaussian process prediction"" [3] Zhang and Yang.
""A Survey on Multi-Task Learning"" """ 118,"""HUBERT Untangles BERT to Improve Transfer across NLP Tasks""","['Tensor Product Representation', 'BERT', 'Transfer Learning', 'Neuro-Symbolic Learning']","""We introduce HUBERT which combines the structured-representational power of Tensor-Product Representations (TPRs) and BERT, a pre-trained bidirectional transformer language model. We validate the effectiveness of our model on the GLUE benchmark and HANS dataset. We also show that there is shared structure between different NLP datasets which HUBERT, but not BERT, is able to learn and leverage. Extensive transfer-learning experiments are conducted to confirm this proposition.""","""The paper introduces additional layers on top BERT type models for disentangling of semantic and positional information. The paper demonstrates (small) performance gains in transfer learning compared to pure BERT baseline. Both reviewers and authors have engaged in a constructive discussion of the merits of the proposed method. Although the reviewers appreciate the ideas and parts of the paper the consensus among the reviewers is that the evaluation of the method is not clearcut enough to warrant publication. Rejection is therefore recommended. Given the good ideas presented in the paper and the promising results the authors are encouraged to take the feedback into account and submit to the next ML conference. """ 119,"""Towards Stable and comprehensive Domain Alignment: Max-Margin Domain-Adversarial Training""","['domain adaptation', 'transfer learning', 'adversarial training']",""" Domain adaptation tackles the problem of transferring knowledge from a label-rich source domain to an unlabeled or label-scarce target domain. Recently domain-adversarial training (DAT) has shown promising capacity to learn a domain-invariant feature space by reversing the gradient propagation of a domain classifier. However, DAT is still vulnerable in several aspects including (1) training instability due to the overwhelming discriminative ability of the domain classifier in adversarial training, (2) restrictive feature-level alignment, and (3) lack of interpretability or systematic explanation of the learned feature space. In this paper, we propose a novel Max-margin Domain-Adversarial Training (MDAT) by designing an Adversarial Reconstruction Network (ARN). The proposed MDAT stabilizes the gradient reversing in ARN by replacing the domain classifier with a reconstruction network, and in this manner ARN conducts both feature-level and pixel-level domain alignment without involving extra network structures. Furthermore, ARN demonstrates strong robustness to a wide range of hyper-parameters settings, greatly alleviating the task of model selection. Extensive empirical results validate that our approach outperforms other state-of-the-art domain alignment methods. Additionally, the reconstructed target samples are visualized to interpret the domain-invariant feature space which conforms with our intuition. ""","""This paper proposes max-margin domain adversarial training with an adversarial reconstruction network that stabilizes the gradient by replacing the domain classifier. Reviewers and AC think that the method is interesting and motivation is reasonable. Concerns were raised regarding weak experimental results in the diversity of datasets and the comparison to state-of-the-art methods. The paper needs to show how the method works with respect to stability and interpretability. 
The paper should also clearly relate the contrastive loss for reconstruction to previous work, given that both the loss and the reconstruction idea have been extensively explored for DA. Finally, the theoretical analysis is shallow, and the gap between the theory and the algorithm needs to be closed. Overall, this is a borderline paper. Considering the bar of ICLR and the limited quota, I recommend rejection.""" 120,"""Do Image Classifiers Generalize Across Time?""","['robustness', 'image classification', 'distribution shift']","""We study the robustness of image classifiers to temporal perturbations derived from videos. As part of this study, we construct ImageNet-Vid-Robust and YTBB-Robust, containing a total of 57,897 images grouped into 3,139 sets of perceptually similar images. Our datasets were derived from ImageNet-Vid and Youtube-BB respectively and thoroughly re-annotated by human experts for image similarity. We evaluate a diverse array of classifiers pre-trained on ImageNet and show a median classification accuracy drop of 16 and 10 percent on our two datasets. Additionally, we evaluate three detection models and show that natural perturbations induce both classification as well as localization errors, leading to a median drop in detection mAP of 14 points. Our analysis demonstrates that perturbations occurring naturally in videos pose a substantial and realistic challenge to deploying convolutional neural networks in environments that require both reliable and low-latency predictions.""","""This paper proposed to evaluate the robustness of CNN models on similar video frames. The authors construct two carefully labeled video databases. Based on extensive experiments, they conclude that state-of-the-art classification and detection models are not robust when tested on very similar video frames. While Reviewer #1 is overall positive about this work, Reviewers #2 and #3 rated weak reject with various concerns. Reviewer #2 is concerned about limited contribution, since the results match intuition. Reviewer #3 appreciates the value of the databases, but is concerned that the defined metrics make the contribution look larger than it is. The authors and Reviewer #3 had an in-depth discussion on the metric, and Reviewer #3 is not convinced. Given the concerns raised by the reviewers, the ACs agree that this paper cannot be accepted in its current state.""" 121,"""Sparse Networks from Scratch: Faster Training without Losing Performance""","['sparse learning', 'sparse networks', 'sparsity', 'efficient deep learning', 'efficient training']","""We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels. We accomplish this by developing sparse momentum, an algorithm which uses exponentially smoothed gradients (momentum) to identify layers and weights which reduce the error efficiently. Sparse momentum redistributes pruned weights across layers according to the mean momentum magnitude of each layer. Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights. We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms. Furthermore, we show that sparse momentum reliably reproduces dense performance levels while providing up to 5.61x faster training.
In our analysis, ablations show that the benefits of momentum redistribution and growth increase with the depth and size of the network. ""","""This paper presents a method for training sparse neural networks that also provides a speedup during training, in contrast to methods for training sparse networks which train dense networks (at normal speed) and then prune weights. The method provides modest theoretical speedups during training, never measured in wallclock time. The authors improved their paper considerably in response to the reviews. I would be inclined to accept this paper despite it not being a big win empirically; however, a couple of points of sloppiness pointed out (and maintained post-rebuttal) by R1 tip the balance to reject, in my opinion. Specifically: 1) ""I do not agree that keeping the learning rate fixed across methods is the right approach."" This seems like a major problem with the experiments to me. 2) ""I would request the authors to slightly rewrite certain parts of their paper so as not to imply that momentum decreases the variance of the gradients in general."" I agree.""" 122,"""Safe Policy Learning for Continuous Control""","['reinforcement learning', 'policy gradient', 'safety']","""We study continuous action reinforcement learning problems in which it is crucial that the agent interacts with the environment only through safe policies, i.e., policies that keep the agent in desirable situations, both during training and at convergence. We formulate these problems as {\em constrained} Markov decision processes (CMDPs) and present safe policy optimization algorithms that are based on a Lyapunov approach to solve them. Our algorithms can use any standard policy gradient (PG) method, such as deep deterministic policy gradient (DDPG) or proximal policy optimization (PPO), to train a neural network policy, while guaranteeing near-constraint satisfaction for every policy update by projecting either the policy parameter or the selected action onto the set of feasible solutions induced by the state-dependent linearized Lyapunov constraints. Compared to the existing constrained PG algorithms, ours are more data efficient as they are able to utilize both on-policy and off-policy data. Moreover, our action-projection algorithm often leads to less conservative policy updates and allows for natural integration into an end-to-end PG training pipeline. We evaluate our algorithms and compare them with the state-of-the-art baselines on several simulated (MuJoCo) tasks, as well as a real-world robot obstacle-avoidance problem, demonstrating their effectiveness in terms of balancing performance and constraint satisfaction.""","""The paper is about learning policies in RL while ensuring safety (avoiding constraint violations) during training and testing. For this meta-review, I ignore Reviewer #3 because that review is useless. The discussion between the authors and Reviewer #1 was useful. Overall, the paper introduces an interesting idea, and the wider context (safe learning) is very relevant. However, I also have some concerns. One of my biggest concerns is that the method proposed here relies heavily on linearizations to deal with nonlinearities. However, the fact that this leads to approximation errors is not acknowledged much. There are also small things, such as the (average) KL divergence between parameters, which makes no sense to me because the parameters don't have distributions (section 3.1).
In terms of experiments, I appreciate that the authors tested the proposed method on multiple environments. The results, however, show that safety cannot be guaranteed. For example, in Figure 1(c), SDDPG clearly violates the constraints. The figures are also misleading because they show the summary statistics of the trajectories (mean and standard deviation). If we were to look at individual trajectories, we would find trajectories that violate the constraints. This fact is brushed under the carpet in the evaluation, and the paper even claims that ""our algorithms quickly stabilize the constraint cost below the threshold"". This may be true on average, but not for all trajectories. A more careful analysis and a more honest discussion would have been useful. In the robotics experiment, I would like to understand why we allow for any collisions. Why can't we set pseudo-formula, thereby disallowing collisions? The threshold in the paper looks pretty arbitrary. Again, the paper states that ""Figure 4a and Figure 4b show that the Lyapunov-based PG algorithms have higher success rates"". This is a pretty optimistic interpretation of the figure given the size of the error bars. There are some points in the conclusion I also disagree with: 1) ""achieve safe learning"": Given that some trajectories violate the constraints, ""safe"" is maybe a bit of an overstatement 2) ""better data efficiency"": compared to what? 3) ""scalable to tackle real-world problems"": I disagree with this one as well because for all experiments you will need to run an excessive number of trials, which will not be feasible on a real-world system (assuming we are talking about robots). Overall, I think the paper has some potential, but it needs some more careful theoretical analysis (e.g., the effect of linearization errors) and some better empirical analysis. Additionally, given that the paper is at around 9 pages (including the figures in the appendix, which the main paper cites), we are supposed to have higher standards for acceptance than for an 8-page paper. Therefore, I recommend rejecting this paper.""" 123,"""A Constructive Prediction of the Generalization Error Across Scales""","['neural networks', 'deep learning', 'generalization error', 'scaling', 'scalability', 'vision', 'language']","""The dependency of the generalization error of neural networks on model and dataset size is of critical importance both in practice and for understanding the theory of neural networks. Nevertheless, the functional form of this dependency remains elusive. In this work, we present a functional form which approximates well the generalization error in practice. Capitalizing on the successful concept of model scaling (e.g., width, depth), we are able to simultaneously construct such a form and specify the exact models which can attain it across model/data scales. Our construction follows insights obtained from observations conducted over a range of model/data scales, in various model types and datasets, in vision and language tasks. We show that the form both fits the observations well across scales, and provides accurate predictions from small- to large-scale models and data.""","""The paper presents a very interesting idea for estimating the held-out error of deep models as a function of model and data set size. The authors intuit what the shape of the error should be, then they fit the parameters of a function of the desired shape and show that this has predictive power.
I find this idea quite refreshing, and the paper is well written with good experiments. Please make sure that the final version contains the cross-validation results provided during the rebuttal.""" 124,"""Calibration, Entropy Rates, and Memory in Language Models""","['information theory', 'natural language processing', 'calibration']","""Building accurate language models that capture meaningful long-term dependencies is a core challenge in natural language processing. Towards this end, we present a calibration-based approach to measure long-term discrepancies between a generative sequence model and the true distribution, and use these discrepancies to improve the model. Empirically, we show that state-of-the-art language models, including LSTMs and Transformers, are \emph{miscalibrated}: the entropy rates of their generations drift dramatically upward over time. We then provide provable methods to mitigate this phenomenon. Furthermore, we show how this calibration-based approach can also be used to measure the amount of memory that language models use for prediction.""","""This paper shows empirically that state-of-the-art language models have a problem of increasing entropy when generating long sequences. The paper then proposes a method to mitigate this problem. As the authors re-iterated in their rebuttal, this paper approaches this problem theoretically, rather than through a comprehensive set of empirical comparisons. After discussions among the reviewers, this paper is not recommended for acceptance. Some skepticism and concerns remain as to whether the paper makes sufficiently clear and proven theoretical contributions. We all appreciate the approach and potential of this paper and encourage the authors to re-submit a revision to a future related venue.""" 125,"""Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation""","['spiking neural networks', 'ann-snn conversion', 'spike-based backpropagation', 'imagenet']","""Spiking Neural Networks (SNNs) operate with asynchronous discrete events (or spikes), which can potentially lead to higher energy efficiency in neuromorphic hardware implementations. Many works have shown that an SNN for inference can be formed by copying the weights from a trained Artificial Neural Network (ANN) and setting the firing threshold for each layer as the maximum input received in that layer. These types of converted SNNs require a large number of time steps to achieve competitive accuracy, which diminishes the energy savings. The number of time steps can be reduced by training SNNs with spike-based backpropagation from scratch, but that is computationally expensive and slow. To address these challenges, we present a computationally-efficient training technique for deep SNNs. We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within a few epochs and requires fewer time steps for input processing. STDB is performed with a novel surrogate gradient function defined using the neuron's spike time. The weight update is proportional to the difference in spike timing between the current time step and the most recent time step the neuron generated an output spike.
The SNNs trained with our hybrid conversion-and-STDB approach require pseudo-formula fewer time steps and achieve similar accuracy compared to purely converted SNNs. The proposed training methodology converges in less than pseudo-formula epochs of spike-based backpropagation for most standard image classification datasets, thereby greatly reducing the training complexity compared to training SNNs from scratch. We perform experiments on the CIFAR-10, CIFAR-100, and ImageNet datasets for both VGG and ResNet architectures. We achieve a top-1 accuracy of pseudo-formula on the ImageNet dataset with an SNN using pseudo-formula time steps, which is pseudo-formula faster compared to converted SNNs with similar accuracy.""","""After the rebuttal, all reviewers rated this paper as a weak accept. The reviewer leaning towards rejection was satisfied with the author response and ended up raising their rating to a weak accept. The AC recommends acceptance.""" 126,"""Modeling Winner-Take-All Competition in Sparse Binary Projections""","['Sparse Representation', 'Sparse Binary Projection', 'Winner-Take-All']","""Inspired by the advances in biological science, the study of sparse binary projection models has attracted considerable recent research attention. The models project dense input samples into a higher-dimensional space and output sparse binary data representations after Winner-Take-All competition, subject to the constraint that the projection matrix is also sparse and binary. Following the work along this line, we developed a supervised-WTA model for the case when training samples with both input and output representations are available, from which the optimal projection matrix can be obtained with a simple, efficient, yet effective algorithm. We further extended the model and the algorithm to an unsupervised setting where only the input representation of the samples is available. In a series of empirical evaluations on similarity search tasks, the proposed models reported significantly improved results over the state-of-the-art methods in both search accuracy and running time. The successful results give us strong confidence that the work provides a highly practical tool for real-world applications. ""","""This paper proposes a WTA model for binary projection. While there are notable partial contributions, there is disagreement among the reviewers. I am most persuaded by the concern that the experiments are not done on datasets that are large enough to support state-of-the-art claims relative to other random projection investigations.""" 127,"""Generalization of Two-layer Neural Networks: An Asymptotic Viewpoint""","['Neural Networks', 'Generalization', 'High-dimensional Statistics']","""This paper investigates the generalization properties of two-layer neural networks in high dimensions, i.e., when the number of samples pseudo-formula, features pseudo-formula, and neurons pseudo-formula tend to infinity at the same rate. Specifically, we derive the exact population risk of the unregularized least squares regression problem with two-layer neural networks when either the first or the second layer is trained using a gradient flow under different initialization setups. When only the second layer coefficients are optimized, we recover the \textit{double descent} phenomenon: a cusp in the population risk appears at pseudo-formula and further overparameterization decreases the risk.
In contrast, when the first layer weights are optimized, we highlight how different scales of initialization lead to different inductive biases, and show that the resulting risk is \textit{independent} of overparameterization. Our theoretical and experimental results suggest that previously studied model setups that provably give rise to \textit{double descent} might not translate to optimizing two-layer neural networks.""","""This paper focuses on studying the double descent phenomenon when training one layer of a two-layer neural network in an asymptotic regime where various dimensions go to infinity together with fixed ratios. The authors provide a precise asymptotic characterization of the risk and use it to study various phenomena. In particular, they characterize the role of various scales of initialization and their effects. The reviewers all agree that this is an interesting paper with nice contributions. I concur with this assessment. I think this is a solid paper with very precise and concise theory. I recommend acceptance.""" 128,"""Reducing Computation in Recurrent Networks by Selectively Updating State Neurons""","['recurrent neural networks', 'conditional computation', 'representation learning']","""Recurrent Neural Networks (RNNs) are the state-of-the-art approach to sequential learning. However, standard RNNs use the same amount of computation at each timestep, regardless of the input data. As a result, even for high-dimensional hidden states, all dimensions are updated at each timestep regardless of the recurrent memory cell. Reducing this rigid assumption could allow models with large hidden states to perform inference more quickly. Intuitively, not all hidden state dimensions need to be recomputed from scratch at each timestep. Thus, recent methods have begun studying this problem by imposing mainly a priori-determined patterns for updating the state. In contrast, we now design a fully-learned approach, SA-RNN, that augments any RNN by predicting discrete update patterns at the fine granularity of independent hidden state dimensions through the parameterization of a distribution of update-likelihoods driven entirely by the input data. We achieve this without imposing assumptions on the structure of the update pattern. Better yet, our method adapts the update patterns online, allowing different dimensions to be updated conditioned on the input. To learn which dimensions to update, the model solves a multi-objective optimization problem, maximizing accuracy while minimizing the number of updates based on a unified control. Using publicly-available datasets we demonstrate that our method consistently achieves higher accuracy with fewer updates compared to state-of-the-art alternatives. Additionally, our method can be directly applied to a wide variety of models containing RNN architectures.""","""This paper introduces a new RNN architecture which uses a small network to decide which cells get updated at each time step, with the goal of reducing computational cost. The idea makes sense, although it requires the use of a heuristic gradient estimator because of the non-differentiability of the update gate. The main problem with this paper in my view is that the reduction in FLOPs was not demonstrated to correspond to a reduction in wallclock time, and I don't expect it would, since the sparse updates are different for each example in each batch, and only affect one hidden unit at a time.
The only discussion of this problem is ""we compute the FLOPs for each method as a surrogate for wall-clock time, which is hardware-dependent and often fluctuates dramatically in practice."" Because this method reduces predictive accuracy, the reduction in FLOPs should be worth it! Minor criticism: 1) Figure 1 is confusing, showing not the proposed architecture in general but instead the connections remaining after computing the sparse updates. """ 129,"""Deep Innovation Protection""","['Neuroevolution', 'innovation protection', 'world models', 'genetic algorithm']","""Evolutionary-based optimization approaches have recently shown promising results in domains such as Atari and robot locomotion, but less so in solving 3D tasks directly from pixels. This paper presents a method called Deep Innovation Protection (DIP) that allows training complex world models end-to-end for such 3D environments. The main idea behind the approach is to employ multiobjective optimization to temporarily reduce the selection pressure on specific components in a world model, allowing other components to adapt. We investigate the emergent representations of these evolved networks, which learn a model of the world without the need for a specific forward-prediction loss. ""","""This paper is a very borderline case. Mixed reviews. R2's score was originally 4, moved to 5 (rounded up to WA 6), but still borderline. R1 was 6 (WA) and R3 was 3 (WR). R2 is an expert on this topic, R1 and R3 less so. The AC has carefully read the reviews/rebuttal/comments and looked closely at the paper. The AC feels that R2's review is spot on and that the contribution does not quite reach ICLR acceptance level, despite it being interesting work. So the AC feels the paper cannot be accepted at this time. But the work is definitely interesting -- the authors should improve their paper using R2's comments and resubmit. """ 130,"""Selection via Proxy: Efficient Data Selection for Deep Learning""","['data selection', 'active-learning', 'core-set selection', 'deep learning', 'uncertainty sampling']","""Data selection methods, such as active learning and core-set selection, are useful tools for machine learning on large datasets. However, they can be prohibitively expensive to apply in deep learning because they depend on feature representations that need to be learned. In this work, we show that we can greatly improve the computational efficiency by using a small proxy model to perform data selection (e.g., selecting data points to label for active learning). By removing hidden layers from the target model, using smaller architectures, and training for fewer epochs, we create proxies that are an order of magnitude faster to train. Although these small proxy models have higher error rates, we find that they empirically provide useful signals for data selection. We evaluate this ""selection via proxy"" (SVP) approach on several data selection tasks across five datasets: CIFAR10, CIFAR100, ImageNet, Amazon Review Polarity, and Amazon Review Full. For active learning, applying SVP can give an order of magnitude improvement in data selection runtime (i.e., the time it takes to repeatedly train and select points) without significantly increasing the final error (often within 0.1%).
For core-set selection on CIFAR10, proxies that are over 10x faster to train than their larger, more accurate targets can remove up to 50% of the data without harming the final accuracy of the target, leading to a 1.6x end-to-end training time improvement.""","""This paper proposes to perform sample selection for deep learning - which can be very computationally expensive - using a smaller and simpler proxy network. The paper shows that such proxies are faster to train and do not substantially harm the accuracy of the final network. The reviewers were all in agreement that the problem is important, and that the paper is comprehensive and well executed. I therefore recommend it be accepted.""" 131,"""Overcoming Catastrophic Forgetting via Hessian-free Curvature Estimates""","['catastrophic forgetting', 'multi-task learning', 'continual learning']","""Learning neural networks with gradient descent over a long sequence of tasks is problematic, as their fine-tuning to new tasks overwrites the network weights that are important for previous tasks. This leads to poor performance on old tasks, a phenomenon framed as catastrophic forgetting. While early approaches use task rehearsal and growing networks, which both limit the scalability of the task sequence, orthogonal approaches build on regularization. Based on the Fisher information matrix (FIM), changes to parameters that are relevant to old tasks are penalized, which forces the task to be mapped into the available remaining capacity of the network. This requires calculating the Hessian around a mode, which makes learning tractable. In this paper, we introduce Hessian-free curvature estimates as an alternative method to actually calculating the Hessian. In contrast to previous work, we exploit the fact that most regions in the loss surface are flat and hence only calculate a Hessian-vector product around the surface that is relevant for the current task. Our experiments show that on a variety of well-known task sequences we either significantly outperform or are on par with previous work.""","""The reviewers have provided thorough reviews of your work. I encourage you to read them carefully should you decide to resubmit it to a later conference.""" 132,"""On the Invertibility of Invertible Neural Networks""","['Invertible Neural Networks', 'Stability', 'Normalizing Flows', 'Generative Models', 'Evaluation of Generative Models']","""Guarantees in deep learning are hard to achieve due to the interplay of flexible modeling schemes and complex tasks. Invertible neural networks (INNs), however, provide several mathematical guarantees by design, such as the ability to approximate non-linear diffeomorphisms. One less studied advantage of INNs is that they enable the design of bi-Lipschitz functions. This property has been used implicitly by various works to design generative models, perform memory-saving gradient computation, regularize classifiers, and solve inverse problems. In this work, we study Lipschitz constants of invertible architectures in order to investigate guarantees on the stability of their inverse and forward mappings. Our analysis reveals that commonly-used INN building blocks can easily become non-invertible, leading to questionable ``exact'' log-likelihood computations and training difficulties. We introduce a set of numerical analysis tools to diagnose non-invertibility in practice.
Finally, based on our theoretical analysis, we show how to guarantee numerical invertibility for one of the most common INN architectures.""","""This submission analyses the numerical invertibility of analytically invertible neural networks and shows that analytical invertibility does not guarantee numerical invertibility of some invertible networks under certain conditions (e.g. adversarial perturbation). Strengths: -The work is interesting and the theoretical analysis is insightful. Weaknesses: -The main concern shared by all reviewers was the weakness of the experimental section, including (i) insufficient motivation of the decorrelation task; (ii) missing comparisons and experimental settings. -The paper's clarity could be improved. Both weaknesses were not sufficiently addressed in the rebuttal. All reviewer recommendations were borderline to reject. """ 133,"""Differential Privacy in Adversarial Learning with Provable Robustness""","['differential privacy', 'adversarial learning', 'robustness bound', 'adversarial example']","""In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples. We leverage the sequential composition theory in DP to establish a new connection between DP preservation and provable robustness. To address the trade-off among model utility, privacy loss, and robustness, we design an original, differentially private, adversarial objective function, based on the post-processing property in DP, to tighten the sensitivity of our model. An end-to-end theoretical analysis and thorough evaluations show that our mechanism notably improves the robustness of DP deep neural networks.""","""The authors propose a framework for relating adversarial robustness, privacy and utility and show how one can train models to simultaneously attain these properties. The paper also makes interesting connections between the DP literature and the robustness literature, thereby porting over composition theorems to this new setting. The paper makes very interesting contributions, but a few key points require some improvement: 1) The initial version of the paper relied on an approximation of the objective function in order to obtain DP guarantees. While the authors clarified how the approximation impacts model performance in the rebuttal and revision, the reviewers still had concerns about the utility-privacy-robustness tradeoff achieved by the algorithm. 2) The presentation of the paper seems tailored to audiences familiar with DP and is not easy for a broader audience to follow. Despite these limitations, the paper does make significant novel contributions on an important problem (simultaneously achieving privacy, robustness and utility) and could be of interest. Overall, I consider this paper borderline and vote for rejection, but strongly encourage the authors to improve the paper wrt the above concerns and resubmit to a future venue.""" 134,"""Towards Physics-informed Deep Learning for Turbulent Flow Prediction""",[],"""While deep learning has shown tremendous success in a wide range of domains, it remains a grand challenge to incorporate physical principles in a systematic manner into the design, training, and inference of such models. In this paper, we aim to predict turbulent flow by learning its highly nonlinear dynamics from spatiotemporal velocity fields of large-scale fluid flow simulations of relevance to turbulence modeling and climate modeling.
We adopt a hybrid approach by marrying two well-established turbulent flow simulation techniques with deep learning. Specifically, we introduce trainable spectral filters in a coupled model of Reynolds-averaged Navier-Stokes (RANS) and Large Eddy Simulation (LES), followed by a specialized U-net for prediction. Our approach, which we call Turbulent-Flow Net (TF-Net), is grounded in a principled physics model, yet offers the flexibility of learned representations. We compare our model, TF-Net, with state-of-the-art baselines and observe significant reductions in error for predictions 60 frames ahead. Most significantly, our method predicts physical fields that obey desirable physical characteristics, such as conservation of mass, whilst faithfully emulating the turbulent kinetic energy field and spectrum, which are critical for accurate prediction of turbulent flows.""","""The reviewers all agree that this is an interesting paper with good results. The authors' rebuttal response was very helpful. However, given the competitiveness of the submissions this year, the submission did not make it. We encourage the authors to resubmit the work, including the new results obtained during the rebuttal.""" 135,"""Generalization through Memorization: Nearest Neighbor Language Models""","['language models', 'k-nearest neighbors']","""We introduce pseudo-formula NN-LMs, which extend a pre-trained neural language model (LM) by linearly interpolating it with a pseudo-formula -nearest neighbors ( pseudo-formula NN) model. The nearest neighbors are computed according to distance in the pre-trained LM embedding space, and can be drawn from any text collection, including the original LM training data. Applying this transformation to a strong Wikitext-103 LM, with neighbors drawn from the original training set, our pseudo-formula NN-LM achieves a new state-of-the-art perplexity of 15.79 -- a 2.9 point improvement with no additional training. We also show that this approach has implications for efficiently scaling up to larger training sets and allows for effective domain adaptation, by simply varying the nearest neighbor datastore, again without further training. Qualitatively, the model is particularly helpful in predicting rare patterns, such as factual knowledge. Together, these results strongly suggest that learning similarity between sequences of text is easier than predicting the next word, and that nearest neighbor search is an effective approach for language modeling in the long tail.""","""This paper proposes the idea of using a pre-trained language model on a potentially smaller set of text, and interpolating it with a k-nearest neighbor model over a large datastore. The authors provide extensive evaluation and insightful results. Two reviewers vote for accepting the paper, and one reviewer is negative. After considering the points made by reviewers, the AC decided that the paper carries value for the community and should be accepted.""" 136,"""Attacking Lifelong Learning Models with Gradient Reversion""","['lifelong learning', 'adversarial learning']","""Lifelong learning aims at avoiding the catastrophic forgetting problem of traditional supervised learning models. Episodic memory-based lifelong learning methods such as A-GEM (Chaudhry et al., 2018b) are shown to achieve state-of-the-art results across the benchmarks. In A-GEM, a small episodic memory is utilized to store a random subset of the examples from previous tasks.
While the model is trained on a new task, a reference gradient is computed on the episodic memory to guide the direction of the current update. While A-GEM has strong continual learning ability, it is not clear whether it can retain this performance in the presence of adversarial attacks. In this paper, we examine the robustness of A-GEM against adversarial attacks on the examples in the episodic memory. We evaluate the effectiveness of traditional attack methods such as FGSM and PGD. The results show that A-GEM still possesses strong continual learning ability in the presence of adversarial examples in the memory, and simple defense techniques such as label smoothing can further alleviate the adversarial effects. We presume that traditional attack methods are specially designed for standard supervised learning models rather than lifelong learning models. We therefore propose a principled way of attacking A-GEM, called gradient reversion (GREV), which is shown to be more effective. Our results indicate that future lifelong learning research should bear adversarial attacks in mind to develop more robust lifelong learning algorithms.""","""The paper investigates questions around adversarial attacks on a continual learning algorithm, i.e., A-GEM. While reviewers agree that this is a novel topic of great importance, the contributions are quite narrow, since only a single model (A-GEM) is considered and it is not immediately clear whether this method transfers to other lifelong learning models (or even other models that belong to the same family as A-GEM). This is an interesting submission, but at the moment, due to its very narrow scope, it seems more appropriate as a workshop submission investigating a very particular question (that of attacking A-GEM). As such, I cannot recommend acceptance.""" 137,"""Multilingual Alignment of Contextual Word Representations""","['multilingual', 'natural language processing', 'embedding alignment', 'BERT', 'word embeddings', 'transfer']","""We propose procedures for evaluating and strengthening contextual embedding alignment and show that they are useful in analyzing and improving multilingual BERT. In particular, after our proposed alignment procedure, BERT exhibits significantly improved zero-shot performance on XNLI compared to the base model, remarkably matching pseudo-fully-supervised translate-train models for Bulgarian and Greek. Further, to measure the degree of alignment, we introduce a contextual version of word retrieval and show that it correlates well with downstream zero-shot transfer. Using this word retrieval task, we also analyze BERT and find that it exhibits systematic deficiencies, e.g. worse alignment for open-class parts-of-speech and word pairs written in different scripts, that are corrected by the alignment procedure. These results support contextual alignment as a useful concept for understanding large multilingual pre-trained models.""","""This paper proposes a method to improve alignments of a multilingual contextual embedding model (e.g., multilingual BERT) using parallel corpora as an anchor. The authors show the benefit of their approach in a zero-shot XNLI experiment and present a word retrieval analysis to better understand multilingual BERT. All reviewers agree that this is an interesting paper with valuable contributions. The authors and reviewers have been engaged in a thorough discussion during the rebuttal period, and the revised paper has addressed most of the reviewers' concerns.
I think this paper would be a good addition to ICLR, so I recommend accepting this paper.""" 138,"""Multi-Scale Representation Learning for Spatial Feature Distributions using Grid Cells""","['Grid cell', 'space encoding', 'spatially explicit model', 'multi-scale periodic representation', 'unsupervised learning']","""Unsupervised text encoding models have recently fueled substantial progress in NLP. The key idea is to use neural networks to convert words in texts to vector space representations (embeddings) based on word positions in a sentence and their contexts, which are suitable for end-to-end training of downstream tasks. We see a strikingly similar situation in spatial analysis, which focuses on incorporating both absolute positions and spatial contexts of geographic objects such as POIs into models. A general-purpose representation model for space is valuable for a multitude of tasks. However, no such general model exists to date beyond simply applying discretization or feed-forward nets to coordinates, and little effort has been put into jointly modeling distributions with vastly different characteristics, which commonly emerge from GIS data. Meanwhile, Nobel Prize-winning neuroscience research shows that grid cells in mammals provide a multi-scale periodic representation that functions as a metric for location encoding and is critical for recognizing places and for path integration. Therefore, we propose a representation learning model called Space2Vec to encode the absolute positions and spatial relationships of places. We conduct experiments on two real-world geographic datasets for two different tasks: 1) predicting types of POIs given their positions and context, 2) image classification leveraging their geo-locations. Results show that because of its multi-scale representations, Space2Vec outperforms well-established ML approaches such as RBF kernels, multi-layer feed-forward nets, and tile embedding approaches for location modeling and image classification tasks. Detailed analysis shows that all baselines can at best handle distributions at one scale but show poor performance at other scales. In contrast, Space2Vec's multi-scale representation can handle distributions at different scales.""","""This paper proposes to follow inspiration from NLP methods that use position embeddings and adapt them to spatial analysis that also makes use of both absolute and contextual information, and presents a representation learning approach called Space2Vec to capture absolute positions and spatial relationships of places. Experiments show promising results on real data compared to a number of existing approaches. Reviewers recognized the promise of this approach and suggested a few additional experiments, such as using this spatial encoding as part of other tasks such as image classification, as well as clarification and further explanations on many important points. Authors performed these experiments and incorporated the results in their revisions, further strengthening the submission. They also provided more analyses and explanations about the granularity of locality and motivation for their approach, which answered the main concerns of reviewers.
Overall, the revised paper is solid and we recommend acceptance.""" 139,"""Understanding Why Neural Networks Generalize Well Through GSNR of Parameters""","['DNN', 'generalization', 'GSNR', 'gradient descent']","""As deep neural networks (DNNs) achieve tremendous success across many application domains, researchers have tried to explore many aspects of why they generalize well. In this paper, we provide a novel perspective on these issues using the gradient signal-to-noise ratio (GSNR) of parameters during the training process of DNNs. The GSNR of a parameter is simply defined as the ratio between its gradient's squared mean and variance, over the data distribution. Based on several approximations, we establish a quantitative relationship between model parameters' GSNR and the generalization gap. This relationship indicates that larger GSNR during the training process leads to better generalization performance. Further, we show that, different from that of shallow models (e.g., logistic regression, support vector machines), the gradient descent optimization dynamics of DNNs naturally produces large GSNR during training, which is probably the key to DNNs' remarkable generalization ability.""","""Quoting a reviewer for a very nice summary: ""In this work, the authors suggest a new point of view on generalization through the lens of the distribution of the per-sample gradients. The authors consider the variance and mean of the per-sample gradients for each parameter of the model and define for each parameter the Gradient Signal to Noise Ratio (GSNR). The GSNR of a parameter is the ratio between the squared mean of the gradient per parameter per sample (computed over the samples) and the variance of the gradient per parameter per sample (also computed over the samples). The GSNR is promising as a measure of generalization and the authors provide a nice leading-order derivation of the GSNR as a proxy for the measure of the generalization gap in the model."" The majority of the reviewers vote to accept this paper. We can view the 3 as a weak signal, as that reviewer stated in his review that he struggled to rate the paper because it contained a lot of math.""" 140,"""Implicit Bias of Gradient Descent based Adversarial Training on Separable Data""","['implicit bias', 'adversarial training', 'robustness', 'gradient descent']","""Adversarial training is a principled approach for training robust neural networks. Despite tremendous successes in practice, its theoretical properties still remain largely unexplored. In this paper, we provide new theoretical insights into gradient descent based adversarial training by studying its computational properties, specifically its implicit bias. We take the binary classification task on linearly separable data as an illustrative example, where the loss asymptotically attains its infimum as the parameter diverges to infinity along certain directions. Specifically, we show that for any fixed iteration pseudo-formula, when the adversarial perturbation during training has a properly bounded L2 norm, the classifier learned by gradient descent based adversarial training converges in direction to the maximum L2 norm margin classifier at the rate of pseudo-formula, significantly faster than the rate pseudo-formula of training with clean data.
In addition, when the adversarial perturbation during training has a bounded Lq norm, the resulting classifier converges in direction to a maximum mixed-norm margin classifier, which has a natural interpretation of robustness, as being the maximum L2 norm margin classifier under worst-case bounded Lq norm perturbation to the data. Our findings provide theoretical backing for adversarial training, showing that it indeed promotes robustness against adversarial perturbation.""","""This paper provides theoretical guarantees for adversarial training. While the reviews raise a number of criticisms (e.g., the results hold under a variety of assumptions), overall the paper constitutes valuable progress on an emerging problem.""" 141,"""Learning robust visual representations using data augmentation invariance""","['deep neural networks', 'visual cortex', 'invariance', 'data augmentation']","""Deep convolutional neural networks trained for image object categorization have shown remarkable similarities with representations found across the primate ventral visual stream. Yet, artificial and biological networks still exhibit important differences. Here we investigate one such property: increasing invariance to identity-preserving image transformations found along the ventral stream. Despite theoretical evidence that invariance should emerge naturally from the optimization process, we present empirical evidence that the activations of convolutional neural networks trained for object categorization are not robust to identity-preserving image transformations commonly used in data augmentation. As a solution, we propose data augmentation invariance, an unsupervised learning objective which improves the robustness of the learned representations by promoting the similarity between the activations of augmented image samples. Our results show that this approach is a simple, yet effective and efficient (10% increase in training time) way of increasing the invariance of the models while obtaining similar categorization performance.""","""This paper introduces an unsupervised learning objective that attempts to improve the robustness of the learnt representations. This approach is empirically demonstrated on CIFAR-10 and Tiny ImageNet with different network architectures, including All Convolutional Net, Wide Residual Net, and DenseNet. Two of three reviewers felt that the paper was not suitable for publication at ICLR in its current form. Self-supervision based on preserving network outputs despite data transformations is a relatively minor contribution, the framing of the approach as inspired by biological vision notwithstanding. Several relevant references, including one from a past ICLR: pseudo-url; and Gidaris, P. Singh, and N. Komodakis. Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations (ICLR), 2018.""" 142,"""Doubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation""","['off-policy evaluation', 'infinite horizon', 'doubly robust', 'reinforcement learning']","""Infinite horizon off-policy policy evaluation is a highly challenging task due to the excessively large variance of typical importance sampling (IS) estimators. Recently, Liu et al. (2018) proposed an approach that significantly reduces the variance of infinite-horizon off-policy evaluation by estimating the stationary density ratio, but at the cost of introducing potentially high risks due to the error in density ratio estimation.
In this paper, we develop a bias-reduced augmentation of their method, which can take advantage of a learned value function to obtain higher accuracy. Our method is doubly robust in that the bias vanishes when either the density ratio or the value function estimation is perfect. In general, when either of them is accurate, the bias can also be reduced. Both theoretical and empirical results show that our method yields significant advantages over previous methods.""","""The paper proposes a doubly robust off-policy evaluation method that uses both the stationary density ratio as well as a learned value function in order to reduce bias. The reviewers unanimously recommend acceptance of this paper.""" 143,"""Yet another but more efficient black-box adversarial attack: tiling and evolution strategies""","['adversarial examples', 'black-box attacks', 'derivative free optimization', 'deep learning']","""We introduce a new black-box attack achieving state-of-the-art performance. Our approach is based on a new objective function, borrowing ideas from pseudo-formula -white box attacks, and particularly designed to fit derivative-free optimization requirements. It only requires access to the logits of the classifier, without any other information, which is a more realistic scenario. Not only do we introduce a new objective function, but we also extend previous work on black-box adversarial attacks to a larger spectrum of evolution strategies and other derivative-free optimization methods. We also highlight an intriguing new property: deep neural networks are not robust to single-shot tiled attacks. Our models achieve, with a budget limited to pseudo-formula queries, results up to pseudo-formula of success rate against the InceptionV3 classifier with pseudo-formula queries to the network on average in the untargeted attack setting, which is an improvement of pseudo-formula queries over the current state of the art. In the targeted setting, we are able to reach, with a limited budget of pseudo-formula , pseudo-formula of success rate with a budget of pseudo-formula queries on average, i.e. we need pseudo-formula fewer queries than the current state of the art.""","""This paper proposes a new black-box adversarial attack based on tiling and evolution strategies. While the experimental results look promising, the main concern of the reviewers is the novelty of the proposed algorithm, and many things need to be improved in terms of clarity and experiments. The paper does not gather sufficient support from the reviewers even after the author response. I encourage the authors to improve this paper and resubmit to a future conference.""" 144,"""Variance Reduced Local SGD with Lower Communication Complexity""","['variance reduction', 'local SGD', 'distributed optimization']","""To accelerate the training of machine learning models, distributed stochastic gradient descent (SGD) and its variants have been widely adopted, which apply multiple workers in parallel to speed up training. Among them, Local SGD has gained much attention due to its lower communication cost. Nevertheless, when the data distribution on workers is non-identical, Local SGD requires $O(T^{\frac{3}{4}} N^{\frac{3}{4}})$ communications to maintain its \emph{linear iteration speedup} property, where $T$ is the total number of iterations and $N$ is the number of workers. In this paper, we propose Variance Reduced Local SGD (VRL-SGD) to further reduce the communication complexity.
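A schematic sketch of a doubly robust combination in the spirit of paper 142: an estimated stationary density ratio w(s) and a learned value function V are combined so that the bias vanishes when either is perfect. The exact estimator in the paper differs; the form below is one common doubly robust template, stated here as an assumption.

```python
import numpy as np

def dr_estimate(w, rho, r, V_s, V_s_next, V_s0, gamma=0.99):
    """Inputs are 1-D arrays over logged transitions; V_s0 is over start states."""
    # Value-function baseline plus a density-ratio-weighted correction term.
    correction = np.mean(w * rho * (r + gamma * V_s_next - V_s))
    baseline = (1.0 - gamma) * np.mean(V_s0)
    return baseline + correction
```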
Benefiting from eliminating the dependency on the gradient variance among workers, we theoretically prove that VRL-SGD achieves a \emph{linear iteration speedup} with a lower communication complexity $O(T^{\frac{1}{2}} N^{\frac{3}{2}})$ even if workers access non-identical datasets. We conduct experiments on three machine learning tasks, and the experimental results demonstrate that VRL-SGD performs impressively better than Local SGD when the data among workers are quite diverse.""","""The paper presents a novel variance reduction algorithm for SGD. The presentation is clear, but the theory is not strong enough: the reviewers worry about the convergence results, and the technical part is not sound.""" 145,"""On Federated Learning of Deep Networks from Non-IID Data: Parameter Divergence and the Effects of Hyperparametric Methods""","['Federated learning', 'Iterative parameter averaging', 'Deep networks', 'Decentralized non-IID data', 'Hyperparameter optimization methods']","""Federated learning, where a global model is trained by iterative parameter averaging of locally-computed updates, is a promising approach for distributed training of deep networks; it provides high communication-efficiency and privacy-preservability, which allows it to fit well into decentralized data environments, e.g., mobile-cloud ecosystems. However, despite these advantages, federated learning-based methods still face a challenge in dealing with the non-IID training data of local devices (i.e., learners). In this regard, we study the effects of a variety of hyperparametric conditions under non-IID environments, to answer important concerns in practical implementations: (i) We first investigate parameter divergence of local updates to explain performance degradation from non-IID data. The origin of the parameter divergence is also found both empirically and theoretically. (ii) We then revisit the effects of optimizers, network depth/width, and regularization techniques; our observations show that the well-known advantages of the hyperparameter optimization strategies could rather yield diminishing returns with non-IID data. (iii) We finally provide the reasons for the failure cases in a categorized way, mainly based on metrics of the parameter divergence.""","""This paper studies the problem of federated learning for non-i.i.d. data, and looks at hyperparameter optimization in this setting. As the reviewers have noted, this is a purely empirical paper. There are certain aspects of the experiments that need further discussion, especially the learning rate selection for different architectures. That said, the submission may not be ready for publication at its current stage.""" 146,"""Best feature performance in codeswitched hate speech texts""","['Hate Speech', 'Code-switching', 'feature selection', 'representation learning']","""How well can the concept of hate speech be abstracted in order to inform automatic classification in codeswitched texts by machine learning classifiers? We explore different representations and empirically evaluate their predictiveness using both conventional and deep learning algorithms in identifying hate speech in a ~48k human-annotated dataset that contains mixed languages, a phenomenon common among multilingual speakers. This paper espouses a novel way to handle this challenge by introducing a hierarchical approach that employs Latent Dirichlet Allocation to generate topic models that feed into another high-level feature set, to which we give the acronym PDC.
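A rough sketch, under stated assumptions, of the kind of variance-reduced local update VRL-SGD (paper 144) performs: each worker corrects its local gradient with a control-variate-style term so that non-identical data distributions do not break the linear speedup. The correction-update rule in the actual paper differs in its details.

```python
def local_step(x, grad_fn, c_local, c_global, lr):
    # c_local approximates this worker's gradient bias; c_global the average one.
    g = grad_fn(x)
    return x - lr * (g - c_local + c_global)
```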
PDC groups words with similar meanings into word families during the preprocessing stage for supervised learning models. The high-level PDC features generated are based on the hate speech annotation framework of Ombui et al. (2019), which is informed by the triangular theory of hate (Sternberg, 2003). Results obtained from frequency-based models using the PDC feature on the annotated dataset of ~48k short messages, comprising tweets generated during the 2012 and 2017 Kenyan presidential elections, indicate an improvement in classification accuracy in identifying hate speech compared to the baseline.""","""This paper focuses on hate speech detection and compares several classification methods including Naive Bayes, SVM, KNN, CNN, and many others. The most valuable contribution of this work is a dataset of ~400,000 tweets from the 2017 Kenyan general election, although it is unclear whether the authors plan to release the dataset in the future. The paper is difficult to follow, uses an incorrect ICLR format, and is full of typos. All three reviewers agree that while this paper deals with an important topic in social media analysis, it is not ready for publication in its current state. The authors did not provide a rebuttal to reviewers' concerns. I recommend rejecting this paper for ICLR.""" 147,"""Unsupervised Temperature Scaling: Robust Post-processing Calibration for Domain Shift""","['calibration', 'domain shift', 'uncertainty prediction', 'deep neural networks', 'temperature scaling']","""Uncertainty estimation is critical in real-world decision-making applications, especially when distributional shift between the training and test data is prevalent. Many calibration methods have been proposed in the literature to improve the predictive uncertainty of DNNs, which are generally not well-calibrated. However, none of them is specifically designed to work properly under domain shift conditions. In this paper, we propose Unsupervised Temperature Scaling (UTS) as a calibration method robust to domain shift. It exploits test samples to adjust the uncertainty prediction of deep models towards the test distribution. UTS utilizes a novel loss function, weighted NLL, that allows unsupervised calibration. We evaluate UTS on a wide range of model-dataset pairs, which shows the possibility of calibration without labels, and demonstrate the robustness of UTS compared to other methods (e.g., TS, MC-dropout, SVI, ensembles) in shifted domains. ""","""The paper proposes a method called unsupervised temperature scaling (UTS) for improving calibration under domain shift. The reviewers agree that this is an interesting research question, but raised concerns about the clarity of the text, the depth of the empirical evaluation, and the validity of some of the assumptions. While the author rebuttal addressed some of these concerns, the reviewers felt that the current version of the paper is not ready for publication. I encourage the authors to revise and resubmit to a different venue.""" 148,"""Probabilistic modeling the hidden layers of deep neural networks""","['Neural Networks', 'Gaussian Process', 'Probabilistic Representation for Deep Learning']","""In this paper, we demonstrate that the parameters of Deep Neural Networks (DNNs) cannot satisfy the i.i.d. prior assumption, and that the assumption of i.i.d. activations is not valid for all the hidden layers of DNNs. Hence, the Gaussian Process cannot correctly explain all the hidden layers of DNNs.
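A minimal sketch of temperature scaling, the post-processing step that UTS (paper 147) makes unsupervised: a single temperature T rescales the logits, and T is fit by minimizing a weighted NLL over unlabelled test samples. The specific weighting below (the model's own confidences as soft pseudo-labels) is an assumption for illustration, not the paper's exact loss.

```python
import torch

def uts_objective(logits, T, eps=1e-12):
    probs = torch.softmax(logits / T, dim=1)   # temperature-scaled predictions
    weights = torch.softmax(logits, dim=1)     # assumed soft pseudo-labels
    return -(weights * torch.log(probs + eps)).sum(dim=1).mean()

logits = torch.randn(256, 10)                  # unlabelled test logits
T = torch.tensor(1.5, requires_grad=True)
uts_objective(logits, T).backward()            # fit T with any first-order optimizer
```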
Alternatively, we introduce a novel probabilistic representation for the hidden layers of DNNs in two aspects: (i) a hidden layer formulates a Gibbs distribution, in which neurons define the energy function, and (ii) the connection between two adjacent layers can be modeled by a product of experts model. Based on the probabilistic representation, we demonstrate that the entire architecture of DNNs can be explained as a Bayesian hierarchical model. Moreover, the proposed probabilistic representation indicates that DNNs have explicit regularizations defined by the hidden layers serving as prior distributions. Based on the Bayesian explanation for the regularization of DNNs, we propose a novel regularization approach to improve the generalization performance of DNNs. Simulation results validate the proposed theories. ""","""This paper makes a claim that the iid assumption for NN parameters does not hold. The paper then expresses the joint distribution as a Gibbs distribution and PoE. Finally, there are some results on SGD as VI. Reviewers have mixed opinions about the paper, and it is clear that the starting point of the paper (regarding the iid assumption) is unclear. I myself read through the paper and discussed this with the reviewer, and it is clear that there are many issues with this paper. Here are my concerns: - The parameters of a DNN are not iid *after* training. They are not supposed to be. So the empirical results where the correlation matrix is shown do not make the point that the paper is trying to make. - I agree with R2 that the prior is subjective and can be anything, and it is true that the ""trained"" NN may not correspond to a GP. This is actually well known, which is why it is difficult to match the performance of a trained GP and a trained NN. - The whole contribution about the connection to Gibbs distributions and PoE is not insightful. These things are already known, so I don't know why this is a contribution. - Regarding the connection between SGD and VI, they do *not* really prove anything. The derivation is *wrong*. In eq 85 in Appendix J2, the VI problem is written as KL(P||Q), but it should be KL(Q||P). Then this is argued to be the same as Eq. 88 obtained with SGD. This is not correct. Given these issues and based on the reviewers' reaction to the content, I recommend rejecting this paper. """ 149,"""When Robustness Doesn't Promote Robustness: Synthetic vs. Natural Distribution Shifts on ImageNet""","['robustness', 'distribution shift', 'image corruptions', 'adversarial robustness', 'reliable machine learning']","""We conduct a large experimental comparison of various robustness metrics for image classification. The main question of our study is to what extent current synthetic robustness interventions (lp-adversarial examples, noise corruptions, etc.) promote robustness under natural distribution shifts occurring in real data. To this end, we evaluate 147 ImageNet models under 199 different evaluation settings. We find that no current robustness intervention improves robustness on natural distribution shifts beyond a baseline given by standard models without a robustness intervention. The only exception is the use of larger training datasets, which provides a small increase in robustness on one natural distribution shift.
Our results indicate that robustness improvements on real data may require new methodology and more evaluations on natural distribution shifts.""","""The authors show that models trained to satisfy adversarial robustness properties do not possess robustness to naturally occurring distribution shifts. The majority of the reviewers agree that this is not a surprising result, especially for the natural distribution shifts chosen by the authors (for instance, it would be better if the authors compared to natural distribution shifts that look similar to the adversarial corruptions). Moreover, this is a survey study and no novel algorithms are presented, so the paper cannot be accepted on that merit either.""" 150,"""Antifragile and Robust Heteroscedastic Bayesian Optimisation""","['Bayesian Optimisation', 'Gaussian Processes', 'Heteroscedasticity']",""" Bayesian Optimisation is an important decision-making tool for high-stakes applications in drug discovery and materials design. An oft-overlooked modelling consideration, however, is the representation of input-dependent or heteroscedastic aleatoric uncertainty. The cost of misrepresenting this uncertainty as being homoscedastic could be high in drug discovery applications, where neglecting heteroscedasticity in high-throughput virtual screening could lead to a failed drug discovery program. In this paper, we propose a heteroscedastic Bayesian Optimisation scheme which both represents and optimises aleatoric noise in the suggestions. We consider cases such as drug discovery, where we would like to minimise or be robust to aleatoric uncertainty, but also applications such as materials discovery, where it may be beneficial to maximise or be antifragile to aleatoric uncertainty. Our scheme features a heteroscedastic Gaussian Process (GP) as the surrogate model in conjunction with two acquisition heuristics. First, we extend the augmented expected improvement (AEI) heuristic to the heteroscedastic setting and, second, we introduce a new acquisition function, aleatoric noise-penalised expected improvement (ANPEI), based on a simple scalarisation of the performance and noise objectives. Both methods are capable of penalising or promoting aleatoric noise in the suggestions and yield improved performance relative to a naive implementation of homoscedastic Bayesian Optimisation on toy problems as well as a real-world optimisation problem.""","""The reviewers initially gave scores of 1,1,3, citing primarily weak empirical results and a lack of theoretical justification. The experiments are presented on synthetic examples, which is a great start, but the reviewers found that this doesn't give strong enough evidence that the methods developed in the paper would work well in practice. The authors did not submit an author response to the reviewers, and as such the scores did not change during discussion. This paper would be significantly strengthened with the addition of experiments on actual problems, e.g., related to drug discovery, which is the motivation in the paper.""" 151,"""Targeted sampling of enlarged neighborhood via Monte Carlo tree search for TSP""","['Travelling salesman problem', 'Monte Carlo tree search', 'Reinforcement learning', 'Variable neighborhood search']","""The travelling salesman problem (TSP) is a well-known combinatorial optimization problem with a variety of real-life applications. We tackle TSP by incorporating machine learning methodology and leveraging the variable neighborhood search strategy.
More precisely, the search process is considered as a Markov decision process (MDP), where a 2-opt local search is used to search within a small neighborhood, while a Monte Carlo tree search (MCTS) method (which iterates through simulation, selection and back-propagation steps) is used to sample a number of targeted actions within an enlarged neighborhood. This new paradigm clearly distinguishes itself from the existing machine learning (ML) based paradigms for solving the TSP, which either use an end-to-end ML model or simply apply traditional techniques after ML for post-optimization. Experiments based on two public data sets show that our approach clearly dominates all the existing learning-based TSP algorithms in terms of performance, demonstrating its high potential on the TSP. More importantly, as a general framework without complicated hand-crafted rules, it can be readily extended to many other combinatorial optimization problems.""","""This paper contributes to the recently emerging literature about applying reinforcement learning methods to combinatorial optimization problems. The authors consider TSPs and propose a search method that interleaves greedy local search with Monte Carlo Tree Search (MCTS). This approach does not contain learned function approximation for transferring knowledge across problem instances, which is usually considered the main motivation for applying RL to comb opt problems. The reviewers state that, although the approach is a relatively straightforward combination of two existing methods, it is in principle somewhat interesting. However, the experiments indicate a large gap to SOTA solvers for TSPs. No rebuttal was submitted. In the absence of both SOTA results and methodological novelty, as assessed by the reviewers and my own reading, I recommend rejecting the paper in its current form.""" 152,"""AdaScale SGD: A Scale-Invariant Algorithm for Distributed Training""","['Large-batch SGD', 'large-scale learning', 'distributed training']","""When using distributed training to speed up stochastic gradient descent, learning rates must adapt to new scales in order to maintain training effectiveness. Re-tuning these parameters is resource intensive, while fixed scaling rules often degrade model quality. We propose AdaScale SGD, a practical and principled algorithm that is approximately scale invariant. By continually adapting to the gradient's variance, AdaScale often trains at a wide range of scales with nearly identical results. We describe this invariance formally through AdaScale's convergence bounds. As the batch size increases, the bounds maintain final objective values, while smoothly transitioning away from linear speed-ups. In empirical comparisons, AdaScale trains well beyond the batch size limits of popular linear learning rate scaling rules. This includes large-scale training without model degradation for machine translation, image classification, object detection, and speech recognition tasks. The algorithm introduces negligible computational overhead and no tuning parameters, making AdaScale an attractive choice for large-scale training. ""","""Main summary: a novel rule for scaling the learning rate, based on a quantity known as the gain ratio, for when the effective batch size is increased. Discussion: reviewer 2: the main concern is that one can't tell from the experiment section whether it's better or worse than linear learning rate scaling. reviewer 3: novelty/contribution is a bit too low for ICLR. reviewer 1: algorithmic clarity is lacking.
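For reference, the standard 2-opt move that paper 151 uses for its small-neighborhood search: reverse a tour segment whenever doing so shortens the tour. This is the textbook operator, not code from the paper.

```python
def two_opt(tour, dist):
    # tour: list of city indices; dist: 2-D distance matrix.
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n - (i == 0)):  # skip the degenerate wrap-around
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```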
Recommendation: all 3 reviewers recommend rejection; I agree.""" 153,"""Learning Efficient Parameter Server Synchronization Policies for Distributed SGD""","['Distributed SGD', 'Parameter-Server', 'Synchronization Policy', 'Reinforcement Learning']","""We apply a reinforcement learning (RL) based approach to learning optimal synchronization policies used for Parameter Server-based distributed training of machine learning models with Stochastic Gradient Descent (SGD). Utilizing a formal synchronization policy description in the PS-setting, we are able to derive a suitable and compact description of states and actions, allowing us to efficiently use the standard off-the-shelf deep Q-learning algorithm. As a result, we are able to learn synchronization policies which generalize to different cluster environments, different training datasets, and small model variations, and (most importantly) lead to considerable decreases in training time when compared to standard policies such as bulk synchronous parallel (BSP), asynchronous parallel (ASP), or stale synchronous parallel (SSP). To support our claims we present extensive numerical results obtained from experiments performed in simulated cluster environments. In our experiments, training time is reduced by 44% on average, and learned policies generalize to multiple unseen circumstances.""","""The authors consider a parameter-server setup where the learner acts as a server communicating updated weights to workers and receiving gradient updates from them. A major question then relates to the synchronisation of the gradient updates, for which a couple of *fixed* heuristics exist that trade off accuracy of updates (BSP) for speed (ASP), or even combine the two by allowing workers to be at most k steps out-of-sync. Instead, the authors propose to learn a synchronisation policy using RL. The authors present results on simulated and real environments. Overall, the RL-based method seems to provide some improvement over the fixed protocols; however, the margin between the fixed protocols and the RL policy gets smaller in the real clusters. This is actually the main concern raised by the reviewers as well (especially R2) -- the paper in its initial submission did not include the real cluster results; rather, these were added at the rebuttal. I find this to be an interesting real-world application of RL and I think it provides an alternative environment for testing RL algorithms beyond simulated environments. As such, I'm recommending acceptance. However, I do ask the authors to be upfront with the real cluster results and move them into the main paper. """ 154,"""Learning Reusable Options for Multi-Task Reinforcement Learning""","['Reinforcement Learning', 'Temporal Abstraction', 'Options', 'Multi-Task RL']","""Reinforcement learning (RL) has become an increasingly active area of research in recent years. Although there are many algorithms that allow an agent to solve tasks efficiently, they often ignore the possibility that prior experience related to the task at hand might be available. For many practical applications, it might be infeasible for an agent to learn how to solve a task from scratch, given that it is generally a computationally expensive process; however, prior experience could be leveraged to make these problems tractable in practice. In this paper, we propose a framework for exploiting existing experience by learning reusable options.
We show that after an agent learns policies for solving a small number of problems, we are able to use the trajectories generated from those policies to learn reusable options that allow an agent to quickly learn how to solve novel and related problems.""","""This paper presents a novel option discovery mechanism that incrementally learns reusable options from a small number of policies that are usable across multiple tasks. The primary concern with this paper was a number of issues around the experiments. Specifically, the reviewers took issue with the definition of novel tasks in the Atari context. A more robust discussion and analysis around what tasks are considered novel would be useful. Comparisons to other option discovery papers on the Atari domains are also required. Additionally, one reviewer had concerns about the hard limit on option execution length, which remain unresolved following the discussion. While this is really promising work, it is not ready to be accepted at this stage.""" 155,"""Learning from Partially-Observed Multimodal Data with Variational Autoencoders""","['data imputation', 'variational autoencoders', 'generative models']","""Learning from only partially-observed data for imputation has been an active research area. Despite promising progress on unimodal data imputation (e.g., image in-painting), models designed for multimodal data imputation are far from satisfactory. In this paper, we propose variational selective autoencoders (VSAE) for this task. Different from previous works, our proposed VSAE learns only from partially-observed data. The proposed VSAE is capable of learning the joint distribution of observed and unobserved modalities as well as the imputation mask, resulting in a unified model for various downstream tasks including data generation and imputation. Evaluation on both synthetic high-dimensional and challenging low-dimensional multi-modality datasets shows significant improvement over state-of-the-art data imputation models.""","""This submission proposes a VAE-based method for jointly inferring latent variables and data generation. The method learns from partially-observed multimodal data. Strengths: -Learning to generate from partially-observed data is an important and challenging problem. -The proposed idea is novel and promising. Weaknesses: -Some experimental protocols are not fully explained. -The experiments are not sufficiently comprehensive (comparisons to key baselines are missing). -More analysis of some surprising results is needed. -The presentation has much room for improvement. The method is promising, but the mentioned weaknesses were not sufficiently addressed during discussion. The AC agrees with the majority recommendation to reject. """ 156,"""Revisiting the Generalization of Adaptive Gradient Methods""","['Adaptive Methods', 'AdaGrad', 'Generalization']","""A commonplace belief in the machine learning community is that using adaptive gradient methods hurts generalization. We re-examine this belief both theoretically and experimentally, in light of insights and trends from recent years. We revisit some previous oft-cited experiments and theoretical accounts in more depth, and provide a new set of experiments in larger-scale, state-of-the-art settings. We conclude that with proper tuning, the improved training performance of adaptive optimizers does not in general carry an overfitting penalty, especially in contemporary deep learning.
Finally, we synthesize a ``user's guide'' to adaptive optimizers, including some proposed modifications to AdaGrad to mitigate some of its empirical shortcomings.""","""The paper combines several recent optimizer tricks to provide empirical evidence that goes against the common belief that adaptive methods result in larger generalization errors. The contribution of this paper is rather small: no new strategies are introduced and no new theory is presented. The paper would make a good workshop paper, but does not meet the bar for publication at ICLR. """ 157,"""Budgeted Training: Rethinking Deep Neural Network Training Under Resource Constraints""","['budgeted training', 'learning rate schedule', 'linear schedule', 'annealing', 'learning rate decay']","""In most practical settings and theoretical analyses, one assumes that a model can be trained until convergence. However, the growing complexity of machine learning datasets and models may violate such assumptions. Indeed, current approaches for hyper-parameter tuning and neural architecture search tend to be limited by practical resource constraints. Therefore, we introduce a formal setting for studying training under the non-asymptotic, resource-constrained regime, i.e., budgeted training. We analyze the following problem: ""given a dataset, algorithm, and fixed resource budget, what is the best achievable performance?"" We focus on the number of optimization iterations as the representative resource. Under such a setting, we show that it is critical to adjust the learning rate schedule according to the given budget. Among budget-aware learning schedules, we find simple linear decay to be both robust and high-performing. We support our claim through extensive experiments with state-of-the-art models on ImageNet (image classification), Kinetics (video classification), MS COCO (object detection and instance segmentation), and Cityscapes (semantic segmentation). We also analyze our results and find that the key to a good schedule is budgeted convergence, a phenomenon whereby the gradient vanishes at the end of each allowed budget. We also revisit existing approaches for fast convergence and show that budget-aware learning schedules readily outperform such approaches under the (practical but under-explored) budgeted training setting.""","""This paper formalizes the problem of training deep networks in the presence of a budget, expressed here as a maximum total number of optimization iterations, and evaluates various budget-aware learning schedules, finding simple linear decay to work well. Post-discussion, the reviewers all felt that this was a good paper. There were some concerns about the lack of theoretical justification for linear decay, but these were outweighed by the practical usefulness of this paper to the community. Therefore I am recommending it be accepted.""" 158,"""Evolutionary Reinforcement Learning for Sample-Efficient Multiagent Coordination""","['reinforcement learning', 'multiagent', 'neuroevolution']","""Many cooperative multiagent reinforcement learning environments provide agents with a sparse team-based reward as well as a dense agent-specific reward that incentivizes learning basic skills. Training policies solely on the team-based reward is often difficult due to its sparsity. Also, relying solely on the agent-specific reward is sub-optimal because it usually does not capture the team coordination objective. A common approach is to use reward shaping to construct a proxy reward by combining the individual rewards.
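The budget-aware linear schedule that paper 157 finds robust is simple enough to state directly: anneal the learning rate linearly to zero over the given iteration budget. The function below is a straightforward reading of that schedule.

```python
def linear_budget_lr(base_lr, t, budget):
    # lr decays from base_lr at t=0 to 0 at t=budget.
    return base_lr * max(0.0, 1.0 - t / float(budget))
```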
However, this requires manual tuning for each environment. We introduce Multiagent Evolutionary Reinforcement Learning (MERL), a split-level training platform that handles the two objectives separately through two optimization processes. An evolutionary algorithm maximizes the sparse team-based objective through neuroevolution on a population of teams. Concurrently, a gradient-based optimizer trains policies to only maximize the dense agent-specific rewards. The gradient-based policies are periodically added to the evolutionary population as a way of transferring information between the two optimization processes. This enables the evolutionary algorithm to use skills learned via the agent-specific rewards toward optimizing the global objective. Results demonstrate that MERL significantly outperforms state-of-the-art methods such as MADDPG on a number of difficult coordination benchmarks. ""","""This work has a lot of promise; however, the author response was not sufficient to address the concerns expressed by reviewer 1, leading to an aggregate rating that is just too low to justify an acceptance recommendation. The AC recommends rejection.""" 159,"""Moniqua: Modulo Quantized Communication in Decentralized SGD""","['decentralized training', 'quantization', 'communication', 'stochastic gradient descent']","""Decentralized stochastic gradient descent (SGD), where parallel workers are connected to form a graph and communicate adjacently, has shown promising results both theoretically and empirically. In this paper we propose Moniqua, a technique that allows decentralized SGD to use quantized communication. We prove in theory that Moniqua communicates a provably bounded number of bits per iteration, while converging at the same asymptotic rate as the original algorithm does with full-precision communication. Moniqua improves upon prior works in that it (1) requires no additional memory, (2) applies to non-convex objectives, and (3) supports biased/linear quantizers. We demonstrate empirically that Moniqua converges faster with respect to wall clock time than other quantized decentralized algorithms. We also show that Moniqua is robust to very low bit-budgets, allowing less than 4-bits-per-parameter communication without affecting convergence when training VGG16 on CIFAR10.""","""This paper proposes an interesting idea for distributed decentralized training with quantized communication. The method is interesting and elegant. However, it is incremental, does not support arbitrary communication compression, and does not have a convincing explanation of why the modulo operation makes the algorithm better. The experiments are not convincing: comparison is shown only for the beginning of the optimization, where the algorithm does not achieve state-of-the-art accuracy. Moreover, the modular hyperparameter is not easy to choose and seemingly cannot help achieve consensus.""" 160,"""ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring""",['semi-supervised learning'],"""We improve the recently-proposed ``MixMatch'' semi-supervised learning algorithm by introducing two new techniques: distribution alignment and augmentation anchoring. - Distribution alignment encourages the marginal distribution of predictions on unlabeled data to be close to the marginal distribution of ground-truth labels.
- Augmentation anchoring feeds multiple strongly augmented versions of an input into the model and encourages each output to be close to the prediction for a weakly-augmented version of the same input. To produce strong augmentations, we propose a variant of AutoAugment which learns the augmentation policy while the model is being trained. Our new algorithm, dubbed ReMixMatch, is significantly more data-efficient than prior work, requiring between 5 and 16 times less data to reach the same accuracy. For example, on CIFAR-10 with 250 labeled examples we reach 93.73% accuracy (compared to MixMatch's accuracy of 93.58% with 4000 examples) and a median accuracy of 84.92% with just four labels per class. ""","""This work improves the MixMatch semi-supervised algorithm along the two directions of distribution alignment and augmentation anchoring, which together make the approach more data-efficient than prior work. All reviewers agree that the impressive empirical results in the paper are its main strength, but express concern that the method is overly complicated, hacking together many known pieces, as well as doubt as to the extent of the contribution of the augmentation method itself, with requests for better augmentation controls. While some of these concerns have not been addressed by the authors in their response, the strength of the empirical results seems enough to justify an acceptance recommendation.""" 161,"""Real or Not Real, that is the Question""","['GAN', 'generalization', 'realness', 'loss function']","""While generative adversarial networks (GAN) have been widely adopted in various topics, in this paper we generalize the standard GAN to a new perspective by treating realness as a random variable that can be estimated from multiple angles. In this generalized framework, referred to as RealnessGAN, the discriminator outputs a distribution as the measure of realness. While RealnessGAN shares similar theoretical guarantees with the standard GAN, it provides more insight into adversarial learning. More importantly, compared to multiple baselines, RealnessGAN provides stronger guidance for the generator, achieving improvements on both synthetic and real-world datasets. Moreover, it enables the basic DCGAN architecture to generate realistic images at 1024*1024 resolution when trained from scratch.""","""The paper proposes a novel GAN formulation where the discriminator outputs discrete distributions instead of a scalar. The objective uses two ""anchor"" distributions that correspond to real and fake data. There were some concerns about the choice of these distributions, but the authors have addressed them in their response. The empirical results are impressive and the method will be of interest to the wider generative models community. """ 162,"""BANANAS: Bayesian Optimization with Neural Networks for Neural Architecture Search""","['neural architecture search', 'Bayesian optimization']","""Neural Architecture Search (NAS) has seen an explosion of research in the past few years. A variety of methods have been proposed to perform NAS, including reinforcement learning, Bayesian optimization with a Gaussian process model, evolutionary search, and gradient descent. In this work, we design a NAS algorithm that performs Bayesian optimization using a neural network model. We develop a path-based encoding scheme to featurize the neural architectures that are used to train the neural network model. This strategy is particularly effective for encoding architectures in cell-based search spaces.
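A minimal sketch of the distribution alignment step described for ReMixMatch (paper 160): rescale each guessed label by the ratio of the marginal class distribution to a running average of the model's predictions, then renormalize. Shapes and the epsilon guard are illustrative.

```python
import numpy as np

def align(pred, class_marginal, running_avg_pred, eps=1e-8):
    # pred: (batch, classes); the two marginal distributions: (classes,).
    scaled = pred * (class_marginal / (running_avg_pred + eps))
    return scaled / scaled.sum(axis=-1, keepdims=True)
```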
After training on just 200 random neural architectures, we are able to predict the validation accuracy of a new architecture to within one percent of its true accuracy on average. This may be of independent interest beyond Bayesian neural architecture search. We test our algorithm on the NASBench dataset (Ying et al. 2019), and show that our algorithm significantly outperforms other NAS methods including evolutionary search, reinforcement learning, and AlphaX (Wang et al. 2019). Our algorithm is over 100x more efficient than random search, and 3.8x more efficient than the next-best algorithm. We also test our algorithm on the search space used in DARTS (Liu et al. 2018), and show that our algorithm is competitive with state-of-the-art NAS algorithms on this search space.""","""This paper uses Bayesian optimization with neural networks for neural architecture search. One of the contributions is a path-based encoding that enumerates every possible path through a cell search space. This encoding is shown to be surprisingly powerful, but it will not scale to large cell-based search spaces or non-cell-based search spaces. The availability of code, as well as the careful attention to reproducibility, are much appreciated and a factor in favor of the paper. In the discussion, it surfaced that a comparison to existing Bayesian optimization approaches using neural networks would have been possible, although the authors initially did not think that this would be the case. The authors promised to include these comparisons in the final version, but, as was also discussed in the private discussion between reviewers and AC, this is problematic since it is not clear what these results will show. Therefore, the one reviewer who was debating increasing their score did in the end not do so (but would be inclined to accept a future version with a clean and thorough comparison to baselines). All reviewers stuck with their score of ""weak reject"", leaning to borderline. I read the paper myself and concur with this judgement. I recommend rejection of the current version, with an encouragement to submit to another venue after including a comparison to BO methods based on neural networks.""" 163,"""Mode Connectivity and Sparse Neural Networks""","['sparsity', 'mode connectivity', 'lottery ticket', 'optimization landscape']","""We uncover a connection between two seemingly unrelated empirical phenomena: mode connectivity and sparsity. On the one hand, there is a growing catalog of situations where, across multiple runs, SGD learns weights that fall into minima that are connected (mode connectivity). A striking example is described by Nagarajan & Kolter (2019). They observe that test error on MNIST does not change along the linear path connecting the end points of two independent SGD runs, starting from the same random initialization. On the other hand, there is the lottery ticket hypothesis of Frankle & Carbin (2019), where dense, randomly initialized networks have sparse subnetworks capable of training in isolation to full accuracy. However, neither phenomenon scales beyond small vision networks. We start by proposing a technique to find sparse subnetworks after initialization. We observe that these subnetworks match the accuracy of the full network only when two SGD runs for the same subnetwork are connected by linear paths with no change in test error. Our findings connect the existence of sparse subnetworks that train to high accuracy with the dynamics of optimization via mode connectivity.
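A sketch, under assumptions, of a path-based encoding like the one BANANAS (paper 162) uses: featurize a cell as a binary vector with one entry per possible input-to-output path of operations. The op set and maximum path length below are made up for illustration.

```python
OPS = ['conv3x3', 'conv1x1', 'maxpool']   # hypothetical op set
MAX_LEN = 3                               # hypothetical maximum path length

def encode_paths(paths, ops=OPS, max_len=MAX_LEN):
    """paths: list of op-name tuples, e.g. [('conv3x3',), ('maxpool', 'conv1x1')]."""
    k = len(ops)
    vec = [0] * sum(k ** l for l in range(1, max_len + 1))
    for path in paths:
        offset = sum(k ** l for l in range(1, len(path)))  # slots for shorter paths
        idx = 0
        for op in path:                                    # base-k digit encoding
            idx = idx * k + ops.index(op)
        vec[offset + idx] = 1
    return vec

print(encode_paths([('conv3x3',), ('maxpool', 'conv1x1')]))
```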
In doing so, we identify analogues of the phenomena uncovered by Nagarajan & Kolter and Frankle & Carbin in ImageNet-scale architectures at state-of-the-art sparsity levels.""","""This paper investigates theories of network sparsification, relating to mode connectivity and the so-called lottery ticket hypothesis. The paper is interesting and has merit, but on balance I find the contributions not sufficiently clear to warrant acceptance. The authors made substantial changes to the paper which are admirable and which bring it to borderline status. """ 164,"""Stabilizing Transformers for Reinforcement Learning""","['Deep Reinforcement Learning', 'Transformer', 'Reinforcement Learning', 'Self-Attention', 'Memory', 'Memory for Reinforcement Learning']","""Owing to their ability to both effectively integrate information over long time horizons and scale to massive amounts of data, self-attention architectures have recently shown breakthrough success in natural language processing (NLP), achieving state-of-the-art results in domains such as language modeling and machine translation. Harnessing the transformer's ability to process long time horizons of information could provide a similar performance boost in partially-observable reinforcement learning (RL) domains, but the large-scale transformers used in NLP have yet to be successfully applied to the RL setting. In this work we demonstrate that the standard transformer architecture is difficult to optimize, which was previously observed in the supervised learning setting but becomes especially pronounced with RL objectives. We propose architectural modifications that substantially improve the stability and learning speed of the original Transformer and XL variant. The proposed architecture, the Gated Transformer-XL (GTrXL), surpasses LSTMs on challenging memory environments and achieves state-of-the-art results on the multi-task DMLab-30 benchmark suite, exceeding the performance of an external memory architecture. We show that the GTrXL, trained using the same losses, has stability and performance that consistently matches or exceeds a competitive LSTM baseline, including on more reactive tasks where memory is less critical. GTrXL offers an easy-to-train, simple-to-implement but substantially more expressive architectural alternative to the standard multi-layer LSTM ubiquitously used for RL agents in partially-observable environments. ""","""This paper proposes architectural modifications to transformers, which are promising for sequential tasks requiring memory but can be unstable to optimize, and applies the resulting method to the RL setting, evaluated in the DMLab-30 benchmark. While I thought the approach was interesting and the results promising, the reviewers unanimously felt that the experimental evaluation could be more thorough, and were concerned with the motivation behind some of the proposed changes. """ 165,"""Semi-Supervised Boosting via Self Labelling""","['semi-supervised learning', 'boosting', 'noise-resistant']","""Attention to semi-supervised learning is growing in machine learning as the cost of expertly labelling data increases. Like most previous works in the area, we focus on improving an algorithm's ability to discover the inherent properties of the entire dataset from a few expertly labelled samples. In this paper, we introduce Boosting via Self Labelling (BSL), a solution to semi-supervised boosting when there is only limited access to labelled instances.
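A simplified sketch of the GRU-style gating layer that GTrXL (paper 164) substitutes for the standard residual connection; the bias b_g > 0 initializes the gate near the identity/skip path. Exact parameterization details in the paper may differ.

```python
import torch
import torch.nn as nn

class GRUGate(nn.Module):
    def __init__(self, d, b_g=2.0):
        super().__init__()
        self.Wr, self.Ur = nn.Linear(d, d, bias=False), nn.Linear(d, d, bias=False)
        self.Wz, self.Uz = nn.Linear(d, d, bias=False), nn.Linear(d, d, bias=False)
        self.Wg, self.Ug = nn.Linear(d, d, bias=False), nn.Linear(d, d, bias=False)
        self.b_g = b_g

    def forward(self, x, y):
        # x: skip-path input, y: transformer sublayer output.
        r = torch.sigmoid(self.Wr(y) + self.Ur(x))
        z = torch.sigmoid(self.Wz(y) + self.Uz(x) - self.b_g)
        h = torch.tanh(self.Wg(y) + self.Ug(r * x))
        return (1 - z) * x + z * h
```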
Our goal is to learn a classifier trained on a dataset generated by combining the generalizations of different algorithms, each of which has been trained with a limited amount of supervised training samples. Our method builds upon a combination of several different components. First, an inference-aided ensemble algorithm developed on a set of weak classifiers offers the initial noisy labels. Second, an agreement-based estimation approach returns the average error rates of the noisy labels. Third and finally, a noise-resistant boosting algorithm trains over the noisy labels and their error rates to describe the underlying structure as closely as possible. We provide both analytical justifications and experimental results to back the performance of our model. Based on several benchmark datasets, our results demonstrate that BSL is able to outperform state-of-the-art semi-supervised methods consistently, achieving over 90% test accuracy with only 10% of the data being labelled.""","""The paper presents a new semi-supervised boosting approach. As the reviewers pointed out and the AC acknowledges, the paper is not ready for publication in various respects: (a) limited novelty/contribution, (b) reproducibility issues and (c) arguable assumptions. Hence, I recommend rejection.""" 166,"""Semi-supervised Semantic Segmentation using Auxiliary Network""","['deep learning', 'semi-supervised segmentation', 'semantic segmentation', 'CNN']","""Recently, convolutional neural networks (CNNs) have shown great success on the semantic segmentation task. However, for practical applications such as autonomous driving, the popular supervised learning method faces two challenges: the demand for low computational complexity and the need for a huge training dataset accompanied by ground truth. Our focus in this paper is semi-supervised learning. We wish to use both labeled and unlabeled data in the training process. A highly efficient semantic segmentation network is our platform, which achieves high segmentation accuracy at low model size and high inference speed. We propose a semi-supervised learning approach to improve segmentation accuracy by including extra images without labels. While most existing semi-supervised learning methods are designed based on adversarial learning techniques, we present a new and different approach, which trains an auxiliary CNN that validates labels (ground-truth) on the unlabeled images. Therefore, in the supervised training phase, both the segmentation network and the auxiliary network are trained using labeled images. Then, in the unsupervised training phase, the unlabeled images are segmented and a subset of image pixels is picked by the auxiliary network; these are then used as ground truth to train the segmentation network. Thus, at the end, all dataset images can be used for retraining the segmentation network to improve the segmentation results. We use the Cityscapes and CamVid datasets to verify the effectiveness of our semi-supervised scheme, and our experimental results show that it can improve the mean IoU by about 1.2% to 2.9% on the challenging Cityscapes dataset.""","""The paper presents a semi-supervised learning approach to handle semantic segmentation (pixel-level classification). The approach extends Hung et al. (2018), using a confidence map generated by an auxiliary network, aimed at improving the identification of small objects.
The reviews state that the paper's novelty is limited compared to the state of the art; the reviewers made several suggestions to improve the processing pipeline (e.g., including all images and the confidence weights). The reviews also state that the paper needs to be carefully polished. The area chair hopes that the suggestions about the contents and writing of the paper will help to prepare an improved version of the paper. """ 167,"""Universal Approximation with Deep Narrow Networks""","['deep learning', 'universal approximation', 'deep narrow networks']","""The classical Universal Approximation Theorem certifies that the universal approximation property holds for the class of neural networks of arbitrary width. Here we consider the natural `dual' theorem for width-bounded networks of arbitrary depth. Precisely, let $n$ be the number of input neurons, $m$ be the number of output neurons, and let pseudo-formula be any nonaffine continuous function, with a continuous nonzero derivative at some point. Then we show that the class of neural networks of arbitrary depth, width $n + m + 2$ and activation function pseudo-formula , exhibits the universal approximation property with respect to the uniform norm on compact subsets of pseudo-formula . This covers every activation function possible to use in practice; in particular this includes polynomial activation functions, making this genuinely different from the classical case. We go on to consider extensions of this result. First we show an analogous result for a certain class of nowhere differentiable activation functions. Second we establish an analogous result for noncompact domains, by showing that deep narrow networks with the ReLU activation function exhibit the universal approximation property with respect to the pseudo-formula -norm on pseudo-formula . Finally we show that a width of only $n + m + 1$ suffices for `most' activation functions.""","""This article studies universal approximation with deep narrow networks, targeting the minimum width. The central contribution is described as providing results for general activation functions. The technique is described as straightforward, but robust enough to handle a variety of activation functions. The reviewers found the method elegant. The most positive position was that the article develops non-trivial techniques that extend existing universal approximation results for deep narrow networks to essentially all activation functions. However, the reviewers also expressed reservations, mentioning that the results could be on the incremental side, with derivations similar to previous works, and possibly of limited interest. In all, the article makes a reasonable theoretical contribution to the analysis of deep narrow neural networks. Although this is a reasonably good article, it is not good enough, given the very high acceptance bar for this year's ICLR. """ 168,"""Learning scalable and transferable multi-robot/machine sequential assignment planning via graph embedding""","['reinforcement learning', 'multi-robot/machine', 'scheduling', 'planning', 'scalability', 'transferability', 'mean-field inference', 'graph embedding']","""Can the success of reinforcement learning methods for simple combinatorial optimization problems be extended to multi-robot sequential assignment planning? In addition to the challenge of achieving near-optimal performance in large problems, transferability to an unseen number of robots and tasks is another key challenge for real-world applications.
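For readability, a hedged restatement of the headline result of paper 167, using the reconstructed symbols $n$ (input neurons) and $m$ (output neurons); the activation symbol $\rho$ stands in for the elided one.

```latex
% For any nonaffine continuous \rho with a continuous nonzero derivative
% at some point, and any compact K \subseteq \mathbb{R}^n:
\forall f \in C(K; \mathbb{R}^m),\ \forall \varepsilon > 0,\ \exists\,
g \text{ of width } n + m + 2 \text{ (arbitrary depth, activation } \rho\text{)}:
\quad \sup_{x \in K} \lVert f(x) - g(x) \rVert < \varepsilon.
```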
In this paper, we suggest a method that achieves the first success in both challenges for robot/machine scheduling problems. Our method comprises three components. First, we show that any robot scheduling problem can be expressed as a random probabilistic graphical model (PGM). We develop a mean-field inference method for random PGMs and use it for Q-function inference. Second, we show that transferability can be achieved by carefully designing a two-step sequential encoding of the problem state. Third, we resolve the computational scalability issue of fitted Q-iteration by suggesting a heuristic auction-based Q-iteration fitting method enabled by the transferability we achieved. We apply our method to discrete-time, discrete-space problems (Multi-Robot Reward Collection (MRRC)) and scalably achieve 97% optimality with transferability. This optimality is maintained under stochastic contexts. By extending our method to a continuous-time, continuous-space formulation, we claim to be the first learning-based method with scalable performance on any type of multi-machine scheduling problem; our method scalably achieves comparable performance to popular metaheuristics on identical parallel machine scheduling (IPMS) problems.""","""Unfortunately, the reviewers of the paper are all uncertain about their reviews, none of them being RL experts. Assessing the paper myself (not being an RL expert, but having experience), I find that the authors have addressed all points of the reviewers thoroughly. """ 169,"""Evaluating Lossy Compression Rates of Deep Generative Models""","['Deep Learning', 'Generative Models', 'Information Theory', 'Rate Distortion Theory']","""Deep generative models have achieved remarkable progress in recent years. Despite this progress, quantitative evaluation and comparison of generative models remains one of the important challenges. One of the most popular metrics for evaluating generative models is the log-likelihood. While the direct computation of log-likelihood can be intractable, it has recently been shown that the log-likelihood of some of the most interesting generative models such as variational autoencoders (VAE) or generative adversarial networks (GAN) can be efficiently estimated using annealed importance sampling (AIS). In this work, we argue that the log-likelihood metric by itself cannot represent all the different performance characteristics of generative models, and propose to use rate distortion curves to evaluate and compare deep generative models. We show that we can approximate the entire rate distortion curve using one single run of AIS for roughly the same computational cost as a single log-likelihood estimate. We evaluate lossy compression rates of different deep generative models such as VAEs, GANs (and their variants) and adversarial autoencoders (AAE) on MNIST and CIFAR10, and arrive at a number of insights not obtainable from log-likelihoods alone.""","""The paper proposes a method to evaluate latent-variable generative models by estimating the compression in the latents (rate) and the distortion in the resulting reconstructions. While reviewers have clearly appreciated the theoretical novelty in using AIS to get an upper bound on the rate, there are concerns about the missing empirical comparison with other related metrics (precision-recall) and the limited practical applicability of the method due to its large computational cost. The authors should consider comparing with the PR metric and discuss some directions that could make the method practically as relevant as other related metrics.
""" 170,"""Neural Network Branching for Neural Network Verification ""","['Neural Network Verification', 'Branch and Bound', 'Graph Neural Network', 'Learning to branch']","""Formal verification of neural networks is essential for their deployment in safety-critical areas. Many available formal verification methods have been shown to be instances of a unified Branch and Bound (BaB) formulation. We propose a novel framework for designing an effective branching strategy for BaB. Specifically, we learn a graph neural network (GNN) to imitate the strong branching heuristic behaviour. Our framework differs from previous methods for learning to branch in two main aspects. Firstly, our framework directly treats the neural network we want to verify as a graph input for the GNN. Secondly, we develop an intuitive forward and backward embedding update schedule. Empirically, our framework achieves roughly pseudo-formula reduction in both the number of branches and the time required for verification on various convolutional networks when compared to the best available hand-designed branching strategy. In addition, we show that our GNN model enjoys both horizontal and vertical transferability. Horizontally, the model trained on easy properties performs well on properties of increased difficulty levels. Vertically, the model trained on small neural networks achieves similar performance on large neural networks.""","""The authors develop a strategy to learn branching strategies for branch-and-bound based neural network verification algorithms, based on GNNs that imitate strong branching. This allows the authors to obtain significant speedups in branch and bound based neural network verification algorithms relative to strong baselines considered in prior work. The reviewers were in consensus and the quality of the paper and minor concerns raised in the initial reviews were adequately addressed in the rebuttal phase. Therefore, I strongly recommend acceptance.""" 171,"""X-Forest: Approximate Random Projection Trees for Similarity Measurement""",[],"""Similarity measurement plays a central role in various data mining and machine learning tasks. Generally, a similarity measurement solution should, in an ideal state, possess the following three properties: accuracy, efficiency and independence from prior knowledge. Yet unfortunately, vital as similarity measurements are, no previous works have addressed all of them. In this paper, we propose X-Forest, consisting of a group of approximate Random Projection Trees, such that all three targets mentioned above are tackled simultaneously. Our key techniques are as follows. First, we introduced RP Trees into the tasks of similarity measurement such that accuracy is improved. In addition, we enforce certain layers in each tree to share identical projection vectors, such that exalted efficiency is achieved. Last but not least, we introduce randomness into partition to eliminate its reliance on prior knowledge. We conduct experiments on three real-world datasets, whose results demonstrate that our model, X-Forest, reaches an efficiency of up to 3.5 times higher than RP Trees with negligible compromising on its accuracy, while also being able to outperform traditional Euclidean distance-based similarity metrics by as much as 20% with respect to clustering tasks. We have released codes in github anonymously so as to meet the demand of reproducibility.""","""This paper proposes a new method for measuring pairwise similarity between data points. 
The method is based on the idea of defining the similarity between two data points as the probability (over the randomness in constructing the trees) that they are close in a Random Projection tree. Reviewers found important limitations in this work, pertaining to the clarity of mathematical statements and to novelty. Unfortunately, the authors did not provide a rebuttal, so these concerns remain. Moreover, the program committee was made aware of the striking similarities between this submission and the preprint pseudo-url from Yan et al., which by itself would be grounds for rejection due to concerns of potential plagiarism. As a result, the AC recommends rejection at this time. """ 172,"""Modeling Fake News in Social Networks with Deep Multi-Agent Reinforcement Learning""","['deep multi-agent reinforcement learning', 'fake news', 'social networks', 'information aggregation']","""We develop a practical and flexible computational model of fake news on social networks in which agents act according to learned best response functions. We achieve this by extending an information aggregation game to allow for fake news and by representing agents as recurrent deep Q-networks (DQN) trained by independent Q-learning. In the game, agents repeatedly guess whether a claim is true or false, taking into account an informative private signal and observations of the actions of their neighbors on the social network in the previous period. We incorporate fake news into the model by adding an adversarial agent, the attacker, that either provides biased private signals to or takes over a subset of agents. The attacker can follow either a hand-tuned or trained policy. Our model allows us to tackle questions that are analytically intractable in fully rational models, while ensuring that agents follow reasonable best response functions. Our results highlight the importance of awareness, privacy and social connectivity in curbing the adverse effects of fake news. ""","""The paper aims to model fake news by drawing tools from multi-agent reinforcement learning. After the discussion period, there is a consensus among the reviewers that the paper lacks novel technical contributions. The reviewers also acknowledge that the paper doesn't quite deliver a practical solution as claimed by the authors.""" 173,"""Advantage Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning""","['reinforcement learning', 'policy search', 'control']","""In this paper, we aim to develop a simple and scalable reinforcement learning algorithm that uses standard supervised learning methods as subroutines. Our goal is an algorithm that utilizes only simple and convergent maximum likelihood loss functions, while also being able to leverage off-policy data. Our proposed approach, which we refer to as advantage-weighted regression (AWR), consists of two standard supervised learning steps: one to regress onto target values for a value function, and another to regress onto weighted target actions for the policy. The method is simple and general, can accommodate continuous and discrete actions, and can be implemented in just a few lines of code on top of standard supervised learning methods. We provide a theoretical motivation for AWR and analyze its properties when incorporating off-policy data from experience replay. We evaluate AWR on a suite of standard OpenAI Gym benchmark tasks, and show that it achieves competitive performance compared to a number of well-established state-of-the-art RL algorithms.
AWR is also able to acquire more effective policies than most off-policy algorithms when learning from purely static datasets with no additional environmental interactions. Furthermore, we demonstrate our algorithm on challenging continuous control tasks with highly complex simulated characters.""","""This paper generated a lot of discussion before and after the rebuttal. The concerns relate to the novelty of this paper, which seems to be relatively limited. Since there is no champion among the positive reviewers, and the overall score is not high enough, I cannot recommend its acceptance at this stage. """ 174,"""Emergence of Collective Policies Inside Simulations with Biased Representations""","['collective policy', 'biased representation', 'model-based RL', 'simulation', 'imagination', 'virtual environment']","""We consider a setting where biases are involved when agents internalise an environment. Agents have different biases, all of which result in imperfect evidence being collected for taking optimal actions. Throughout the interactions, each agent asynchronously internalises its own predictive model of the environment and forms a virtual simulation within which the agent plays trials of the episodes in their entirety. In this research, we focus on developing a collective policy trained solely inside the agents' simulations, which can then be transferred to the real-world environment. The key idea is to let agents imagine together: make them take turns to host virtual episodes within which all agents participate and interact with their own biased representations. Since agents' biases vary, the collective policy developed while sequentially visiting the internal simulations lets the agents complement one another's shortcomings. In our experiment, the collective policies consistently achieve significantly higher returns than the best individually trained policies.""","""This paper presents an ensemble method for reinforcement learning. The method trains an ensemble of transition and reward models. Each element of this ensemble has a different view of the data (for example, ablated observation pixels) and a different latent space for its models. A single (collective) policy is then trained, by learning from trajectories generated from each of the models in the ensemble. The collective policy makes direct use of the latent spaces and models in the ensemble by means of a translator that maps one latent space into all the other latent spaces, and an aggregator that combines all the model outputs. The method is evaluated on the CarRacing and VizDoom environments. The reviewers raised several concerns about the paper. The evaluations were not convincing, with artificially weak baselines, and the method only worked well in one of the two tested environments (reviewer 2). The paper does not adequately connect to related work on model-based RL (reviewers 1 and 2). The paper does not motivate its artificial setting (reviewers 1 and 2). The paper's presentation lacks clarity, using non-standard terminology and notation without adequate explanation (reviewers 1 and 3). Technical aspects of the translator component were also unclear to multiple reviewers (reviewers 1, 2 and 3). The authors found the review comments to be helpful for future work, but provided no additional clarifications.
The paper is not ready for publication.""" 175,"""Deformable Kernels: Adapting Effective Receptive Fields for Object Deformation""","['Effective Receptive Fields', 'Deformation Modeling', 'Dynamic Inference']","""Convolutional networks are not aware of an object's geometric variations, which leads to inefficient utilization of model and data capacity. To overcome this issue, recent works on deformation modeling seek to spatially reconfigure the data towards a common arrangement such that semantic recognition suffers less from deformation. This is typically done by augmenting static operators with learned free-form sampling grids in the image space, dynamically tuned to the data and task for adapting the receptive field. Yet adapting the receptive field does not quite reach the actual goal -- what really matters to the network is the *effective* receptive field (ERF), which reflects how much each pixel contributes. It is thus natural to design other approaches to adapt the ERF directly during runtime. In this work, we instantiate one possible solution as Deformable Kernels (DKs), a family of novel and generic convolutional operators for handling object deformations by directly adapting the ERF while leaving the receptive field untouched. At the heart of our method is the ability to resample the original kernel space towards recovering the deformation of objects. This approach is justified with theoretical insights that the ERF is strictly determined by data sampling locations and kernel values. We implement DKs as generic drop-in replacements of rigid kernels and conduct a series of empirical studies whose results conform with our theories. Over several tasks and standard base models, our approach compares favorably against prior works that adapt during runtime. In addition, further experiments suggest a working mechanism orthogonal and complementary to previous works.""","""In my opinion, this paper is borderline (but my expertise is not in this area) and the reviewers are too uncertain to be of help in making an informed decision.""" 176,"""Three-Head Neural Network Architecture for AlphaZero Learning""","['alphazero', 'reinforcement learning', 'two-player games', 'heuristic search', 'deep neural networks']","""The search-based reinforcement learning algorithm AlphaZero has been used as a general method for mastering the two-player games Go, chess and Shogi. One crucial ingredient in AlphaZero (and its predecessor AlphaGo Zero) is the two-head network architecture that outputs two estimates --- policy and value --- for one input game state. The merit of such an architecture is that letting policy and value learning share the same representation substantially improves generalization of the neural net. A three-head network architecture has recently been proposed that can learn a third action-value head on the same fixed dataset as the two-head net. Also, using the action-value head in Monte Carlo tree search (MCTS) improved the search efficiency. However, the effectiveness of the three-head network has not been investigated in an AlphaZero-style learning paradigm. In this paper, using the game of Hex as a test domain, we conduct an empirical study of the three-head network architecture in AlphaZero learning. We show that the architecture is also advantageous in zero-style iterative learning.
Specifically, we find that the three-head network can deliver the following benefits: (1) learning can become faster as search takes advantage of the additional action-value head; (2) better prediction results than the two-head architecture can be achieved when using additional action-value learning as an auxiliary task.""","""The authors provide an empirical study of the recent three-head architecture applied to AlphaZero-style learning. They thoroughly evaluate this approach using the game Hex as a test domain. Initially, reviewers were concerned about how well the hyper-parameters were tuned for the different methods. The authors did a commendable job addressing the reviewers' concerns in their revision. However, the reviewers agreed that with the additional results showing that the gap between the two-headed and three-headed architectures narrowed, the focus of the paper has changed substantially from the initial version. They suggest that a substantial rewrite of the paper would make the most sense before publication. As a result, at this time, I'm going to recommend rejection, but I encourage the authors to incorporate the reviewers' feedback. I believe this paper has the potential to be a strong submission in the future. """ 177,"""Efficient Probabilistic Logic Reasoning with Graph Neural Networks""","['probabilistic logic reasoning', 'Markov Logic Networks', 'graph neural networks']","""Markov Logic Networks (MLNs), which elegantly combine logic rules and probabilistic graphical models, can be used to address many knowledge graph problems. However, inference in MLN is computationally intensive, making the industrial-scale application of MLN very difficult. In recent years, graph neural networks (GNNs) have emerged as efficient and effective tools for large-scale graph problems. Nevertheless, GNNs do not explicitly incorporate prior logic rules into the models, and may require many labeled examples for a target task. In this paper, we explore the combination of MLNs and GNNs, and use graph neural networks for variational inference in MLN. We propose a GNN variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model. Our extensive experiments on several benchmark datasets demonstrate that ExpressGNN leads to effective and efficient probabilistic logic reasoning.""","""This paper is far more borderline than the review scores indicate. The authors certainly did themselves no favours by posting a response so close to the end of the discussion period, but there was sufficient time to consider the responses after this, and it is somewhat disappointing that the reviewers did not engage. Reviewer 2 states that their only reason for not recommending acceptance is the lack of experiments on more than one KG. The authors point out they have experiments on more than one KG in the paper. From my reading, this is the case. I will consider R2 in favour of the paper in the absence of a response. Reviewer 3 gives a fairly clear initial review which states the main reasons they do not recommend acceptance. While not an expert on the topic of GNNs, I have enough of a technical understanding to deem that the detailed response from the authors to each of the points does address these concerns. In the absence of a response from the reviewer, it is difficult to ascertain whether they would agree, but I will lean towards assuming they are satisfied.
Reviewer 1 gives a positive-sounding review, whose main criticism is ""Overall, the work of this paper seems technically sound but I don't find the contributions particularly surprising or novel. Along with plogicnet, there have been many extensions and applications of GNNs, and I didn't find that the paper expands this perspective in any surprising way."" This statement is simply re-asserted after the author response. I find this style of review entirely inappropriate and unfair: it is not the role of a good scientific publication to ""surprise"". If it is technically sound, and in an area that the reviewer admits generates interest from reviewers, vague weasel words do not a reason for rejection make. I recommend acceptance.""" 178,"""MixUp as Directional Adversarial Training""","['MixUp', 'Adversarial Training', 'Untied MixUp']","""MixUp is a data augmentation scheme in which pairs of training samples and their corresponding labels are mixed using linear coefficients. Without label mixing, MixUp becomes a more conventional scheme: input samples are moved but their original labels are retained. Because samples are preferentially moved in the direction of other classes, we refer to this method as directional adversarial training, or DAT. We show that under two mild conditions, MixUp asymptotically converges to a subset of DAT. We define untied MixUp (UMixUp), a superset of MixUp wherein training labels are mixed with linear coefficients different from those of their corresponding samples. We show that under the same mild conditions, untied MixUp converges to the entire class of DAT schemes. Motivated by the understanding that UMixUp is both a generalization of MixUp and a form of adversarial training, we experiment with different datasets and loss functions to show that UMixUp provides improved performance over MixUp. In short, we present a novel interpretation of MixUp as belonging to a class highly analogous to adversarial training, and on this basis we introduce a simple generalization which outperforms MixUp.""","""This paper builds a connection between MixUp and adversarial training. It introduces untied MixUp (UMixUp), which generalizes the methods of MixUp. Then, it also shows that DAT and UMixUp use the same method as MixUp for generating samples but use different label mixing ratios. Though it has some valuable theoretical contributions, I agree with the reviewers that it's important to include results on adversarial robustness, where both adversarial training and MixUp play an important role.""" 179,"""Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness""","['Invariance', 'Robustness', 'Adversarial Examples']","""Adversarial examples are malicious inputs crafted to cause a model to misclassify them. In their most common instantiation, ""perturbation-based"" adversarial examples introduce changes to the input that leave its true label unchanged, yet result in a different model prediction. Conversely, ""invariance-based"" adversarial examples insert changes to the input that leave the model's prediction unaffected despite the underlying input's label having changed. So far, the relationship between these two notions of adversarial examples has not been studied; we close this gap. We demonstrate that solely achieving perturbation-based robustness is insufficient for complete adversarial robustness.
Worse, we find that classifiers trained to be Lp-norm robust are more vulnerable to invariance-based adversarial examples than their undefended counterparts. We construct theoretical arguments and analytical examples to justify why this is the case. We then illustrate empirically that the consequences of excessive perturbation-robustness can be exploited to craft new attacks. Finally, we show how to attack a provably robust defense --- certified on the MNIST test set to have at least 87% accuracy (with respect to the original test labels) under perturbations of L-infinity norm below epsilon=0.4 --- and reduce its accuracy (under this threat model with respect to an ensemble of human labelers) to 60% with an automated attack, or just 12% with human-crafted adversarial examples.""","""The paper considers the relationship between: (i) perturbations to an input x which change the predictions of a model but not the ground truth label, and (ii) perturbations to an input x which do not change a model's prediction but do change the ground truth label. The authors show that achieving robustness to the former need not guarantee robustness to the latter. While these ideas are interesting, the reviewers would like to see a tighter connection between the two forms of robustness developed. """ 180,"""Discriminability Distillation in Group Representation Learning""",[],"""Learning group representations is a commonly encountered issue in tasks where the basic unit is a group, set or sequence. The computer vision community tries to tackle it by aggregating the elements in a group based on an indicator either defined by humans, such as the quality or saliency of an element, or generated by a black box, such as an attention score or the output of an RNN. This article provides a more essential and explicable view. We claim that the most significant indicator of whether the group representation can benefit from an element is not its quality, or an inexplicable score, but its \textit{discriminability}. Our key insight is to explicitly design the \textit{discriminability} using embedded class centroids on a proxy set, and to show that the discriminability distribution \textit{w.r.t.} the element space can be distilled by a light-weight auxiliary distillation network. This processing is called \textit{discriminability distillation learning} (DDL). We show the proposed DDL can be flexibly plugged into many group-based recognition tasks without influencing the training procedure of the original tasks. Comprehensive experiments on set-to-set face recognition and action recognition validate the advantage of DDL on both accuracy and efficiency, and it pushes forward the state-of-the-art results on these tasks by an impressive margin.""","""This paper proposes discriminability distillation learning (DDL) for learning group representations. The core idea is to learn a discriminability weight for each instance that is a member of a group, set or sequence. The discriminability score is learned by first training a standard supervised base model and, using the features from this model, computing class centroids on a proxy set and computing the inter- and intra-class distances. A function of these distance computations is then used as supervision for a small distillation-style network (DDNet) which predicts the discriminability (DDR) score. A group representation is then created through a combination of instances, weighted using their DDR scores. The method is validated on face recognition and action recognition.
This work initially received mixed scores, with two reviewers recommending acceptance and two recommending rejection. After reading all the reviews, rebuttals, and discussions, it seems that a key point of concern is low clarity of presentation. During the rebuttal period, the authors revised their manuscript and interacted with the reviewers. One reviewer chose to update their recommendation to weak acceptance in response. The main unresolved issues are related to novelty and experimental evaluation. Namely, regarding novelty, comparison and discussion against attention-based approaches and other metric-learning-based approaches would benefit the work, though the proposed solution does present some novelty. For the experiments, there was a suggestion to evaluate the model on more complex datasets where performance is not already maxed out. The authors have provided such experiments during the rebuttal period. Despite the slight positive leanings post rebuttal, the ACs have discussed this case and determined that the paper is not ready for publication.""" 181,"""Distillation pseudo-formula Early Stopping? Harvesting Dark Knowledge Utilizing Anisotropic Information Retrieval For Overparameterized NN""","['Distillation', 'Learning Theory', 'Corrupted Label']","""Distillation is a method to transfer knowledge from one model to another and often achieves higher accuracy with the same capacity. In this paper, we aim to provide a theoretical understanding of what mainly helps with distillation. Our answer is ""early stopping"". Assuming that the teacher network is overparameterized, we argue that the teacher network is essentially harvesting dark knowledge from the data via early stopping. This can be justified by a new concept, Anisotropic Information Retrieval (AIR), which means that the neural network tends to fit the informative information first and the non-informative information (including noise) later. Motivated by recent developments in theoretically analyzing overparameterized neural networks, we can characterize AIR by the eigenspace of the Neural Tangent Kernel (NTK). AIR facilitates a new understanding of distillation. With that, we further utilize distillation to refine noisy labels. We propose a self-distillation algorithm to sequentially distill knowledge from the network in the previous training epoch to avoid memorizing the wrong labels. We also demonstrate, both theoretically and empirically, that self-distillation can benefit from more than just early stopping. Theoretically, we prove convergence of the proposed algorithm to the ground truth labels for randomly initialized overparameterized neural networks in terms of l2 distance, while the previous result was on convergence in 0-1 loss. The theoretical result ensures that the learned neural network enjoys a margin on the training data, which leads to better generalization. Empirically, we achieve better testing accuracy and avoid early stopping entirely, which makes the algorithm more user-friendly. ""","""This paper tries to bridge early stopping and distillation. 1) In Section 2, the authors empirically show a stronger distillation effect with early stopping. 2) In Section 3, the authors propose a new provable algorithm for training with noisy labels. In the discussion phase, the reviewers discussed a lot. In particular, one reviewer highlighted the importance of Section 3. On the other hand, other reviewers asked ""what is the role of Section 2?"", as the abstract/intro tends to emphasize the content of Section 2.
I mostly agree with all the pros and cons pointed out by the reviewers. I agree that the paper proposes an interesting idea for refining noisy labels with theoretical guarantees. However, the major reason for my reject decision is that the current write-up is a bit below the borderline to be accepted considering the high standard of ICLR, e.g., many typos (what is the 172 norm on page 4?) and a misleading intro/abstract/organization. Overall, it was also hard for me to read the paper. I do believe that the paper would be much improved if the authors made more significant editorial efforts, considering a broader range of readers. I have additional suggestions for improving the paper, which I hope are useful. * Put Section 3 earlier (i.e., put Section 2 later) and revise the intro/abstract so that the reader can clearly understand what the main contribution is. * Section 2.1 is too weak to support the claim of a stronger distillation effect with early stopping. More experimental or theoretical study is necessary, e.g., you could control the temperature parameter T of knowledge distillation to provide the ""early stopping"" effect without actual ""early stopping"" (the choice of T is not mentioned in the draft even though it is an important hyper-parameter). * More experimental support for your algorithm would be desirable, e.g., consider more datasets, state-of-the-art baselines, noise types, and neural architectures (e.g., NLP models). * Soften some sentences to avoid potential over-claims. """ 182,"""Improving Generalization in Meta Reinforcement Learning using Learned Objectives""","['meta reinforcement learning', 'meta learning', 'reinforcement learning']","""Biological evolution has distilled the experiences of many learners into the general learning algorithms of humans. Our novel meta reinforcement learning algorithm MetaGenRL is inspired by this process. MetaGenRL distills the experiences of many complex agents to meta-learn a low-complexity neural objective function that decides how future individuals will learn. Unlike recent meta-RL algorithms, MetaGenRL can generalize to new environments that are entirely different from those used for meta-training. In some cases, it even outperforms human-engineered RL algorithms. MetaGenRL uses off-policy second-order gradients during meta-training that greatly increase its sample efficiency.""","""This paper proposes a meta-RL algorithm that learns an objective function whose gradients can be used to efficiently train a learner on tasks entirely different from those seen during meta-training. Building off-policy gradient-based meta-RL methods is challenging, and this had not been previously demonstrated. Further, the demonstrated generalization capabilities are a substantial improvement in capabilities over prior meta-learning methods. There are a couple of related works that are quite relevant (and somewhat similar in methodology) and overlooked -- see [1,2]. Further, we strongly encourage the authors to run the method on multiple meta-training environments and to report results with more seeds, as promised. The contributions are significant and should be seen by the ICLR community. Hence, I recommend an oral presentation. [1] Yu et al. One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning [2] Sung et al.
Meta-critic networks""" 183,"""Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation""","['Reinforcement Learning', 'Learning to Optimize', 'Combinatorial Optimization', 'Compilers', 'Code Optimization', 'Neural Networks', 'ML for Systems', 'Learning for Systems']","""Achieving faster execution with shorter compilation time can foster further diversity and innovation in neural networks. However, the current paradigm of executing neural networks relies on hand-optimized libraries, traditional compilation heuristics, or, very recently, genetic algorithms and other stochastic methods. These methods suffer from frequent costly hardware measurements, rendering them not only too time-consuming but also suboptimal. As such, we devise a solution that can learn to quickly adapt to a previously unseen design space for code optimization, both accelerating the search and improving the output performance. This solution, dubbed Chameleon, leverages reinforcement learning, whose solution takes fewer steps to converge, and develops an adaptive sampling algorithm that not only focuses the costly samples (real hardware measurements) on representative points but also uses a domain-knowledge-inspired logic to improve the samples themselves. Experimentation with real hardware shows that Chameleon provides a 4.45x speedup in optimization time over AutoTVM, while also improving the inference time of modern deep networks by 5.6%.""","""This paper proposes to optimize code in DNN compilers using adaptive sampling and reinforcement learning. This method achieves significant speedups in compilation time and execution time. The authors made strong efforts in addressing the problems raised by the reviewers, and promised to make the code publicly available, which is of particular importance for works of this nature. """ 184,"""Incorporating Horizontal Connections in Convolution by Spatial Shuffling""","['shuffle', 'convolution', 'receptive field', 'classification', 'horizontal connections']","""Convolutional Neural Networks (CNNs) are composed of multiple convolution layers and show strong performance in vision tasks. The design of the regular convolution is based on the Receptive Field (RF), where the information within a specific region is processed. From the viewpoint of the regular convolution's RF, the outputs of neurons in lower layers with smaller RFs are bundled to create neurons in higher layers with larger RFs. As a result, the neurons in high layers are able to capture the global context even though the neurons in low layers only see the local information. However, in lower layers of the biological brain, the information outside of the RF changes the properties of neurons. In this work, we extend the regular convolution and propose spatially shuffled convolution (ss convolution). In ss convolution, the regular convolution is able to use the information outside of its RF by spatial shuffling, which is a simple and lightweight operation. We perform experiments on the CIFAR-10 and ImageNet-1k datasets, and show that ss convolution improves the classification performance across various CNNs.""","""The paper is well motivated by the neuroscience finding that our brains use information from outside the receptive field of convolutive processes through top-down mechanisms. However, reviewers feel that the results are not near the state of the art and that the paper needs further experiments and needs to scale to larger datasets.
""" 185,"""Superseding Model Scaling by Penalizing Dead Units and Points with Separation Constraints""","['Dead Point', 'Dead Unit', 'Model Scaling', 'Separation Constraints', 'Dying ReLU', 'Constant Width', 'Deep Neural Networks', 'Backpropagation']","""In this article, we study a proposal that enables to train extremely thin (4 or 8 neurons per layer) and relatively deep (more than 100 layers) feedforward networks without resorting to any architectural modification such as Residual or Dense connections, data normalization or model scaling. We accomplish that by alleviating two problems. One of them are neurons whose output is zero for all the dataset, which renders them useless. This problem is known to the academic community as \emph{dead neurons}. The other is a less studied problem, dead points. Dead points refers to data points that are mapped to zero during the forward pass of the network. As such, the gradient generated by those points is not propagated back past the layer where they die, thus having no effect in the training process. In this work, we characterize both problems and propose a constraint formulation that added to the standard loss function solves them both. As an additional benefit, the proposed method allows to initialize the network weights with constant or even zero values and still allowing the network to converge to reasonable results. We show very promising results on a toy, MNIST, and CIFAR-10 datasets.""","""This paper proposes constraints to tackle the problems of dead neurons and dead points. The reviewers point out that the experiments are only done on small datasets and it is not clear if the experiments will scale further. I encourage the authors to carry out further experiments and submit to another venue.""" 186,"""Neural Architecture Search by Learning Action Space for Monte Carlo Tree Search""","['MCTS', 'Neural Architecture Search', 'Search']","""Neural Architecture Search (NAS) has emerged as a promising technique for automatic neural network design. However, existing NAS approaches often utilize manually designed action space, which is not directly related to the performance metric to be optimized (e.g., accuracy). As a result, using manually designed action space to perform NAS often leads to sample-inefficient explorations of architectures and thus can be sub-optimal. In order to improve sample efficiency, this paper proposes Latent Action Neural Architecture Search (LaNAS) that learns actions to recursively partition the search space into good or bad regions that contain networks with concentrated performance metrics, i.e., low variance. During the search phase, as different architecture search action sequences lead to regions of different performance, the search efficiency can be significantly improved by biasing towards the good regions. On the largest NAS dataset NASBench-101, our experimental results demonstrated that LaNAS is 22x, 14.6x, 12.4x, 6.8x, 16.5x more sample-efficient than Random Search, Regularized Evolution, Monte Carlo Tree Search, Neural Architecture Optimization, and Bayesian Optimization, respectively. When applied to the open domain, LaNAS achieves 98.0% accuracy on CIFAR-10 and 75.0% top1 accuracy on ImageNet in only 803 samples, outperforming SOTA AmoebaNet with 33x fewer samples.""","""This paper proposes an MCTS method for neural architecture search (NAS). Evaluations on NAS-Bench-101 and other datasets are promising. 
Unfortunately, no code is provided; releasing code is very important in NAS to overcome the reproducibility crisis. Discussion: The authors were able to answer several questions of the reviewers. I also do not share the concern of AnonReviewer2 that MCTS hasn't been used for NAS before; in contrast, this appears to be a point in favor of the paper's novelty. However, the authors' reply concerning Bayesian optimization and the optimization of its acquisition function is strange: using the ConvNet-60K dataset with 1364 networks, it does not appear to make sense to use only 1% or even only 0.01% of the dataset size as a budget for optimizing the acquisition function. The reviewers stuck to their ratings of 6, 3, 3. Overall, I therefore recommend rejection. """ 187,"""Classification Attention for Chinese NER""","['Chinese NER', 'NER', 'tagging', 'deeplearning', 'nlp']","""Character-based models, such as BERT, have achieved remarkable success in Chinese named entity recognition (NER). However, such models would likely miss the overall information of the entity words. In this paper, we propose to combine prior entity information with BERT. Instead of relying on additional lexicons or pre-trained word embeddings, our model generates entity classification embeddings directly on the pre-trained BERT, which has the merit of increasing model practicability and avoiding the OOV problem. Experiments show that our model achieves state-of-the-art results on 3 Chinese NER datasets.""","""The paper is interested in Chinese Named Entity Recognition, building on a BERT pre-trained model. All reviewers agree that the contribution has limited novelty. The motivation leading to the chosen architecture is also missing. In addition, the writing of the paper should be improved. """ 188,"""On Computation and Generalization of Generative Adversarial Imitation Learning""",[],"""Generative Adversarial Imitation Learning (GAIL) is a powerful and practical approach for learning sequential decision-making policies. Different from Reinforcement Learning (RL), GAIL takes advantage of demonstration data by experts (e.g., humans), and learns both the policy and reward function of the unknown environment. Despite the significant empirical progress, the theory behind GAIL is still largely unknown. The major difficulty comes from the underlying temporal dependency of the demonstration data and the minimax computational formulation of GAIL without convex-concave structure. To bridge such a gap between theory and practice, this paper investigates the theoretical properties of GAIL. Specifically, we show: (1) For GAIL with general reward parameterization, generalization can be guaranteed as long as the class of reward functions is properly controlled; (2) When the reward is parameterized as a reproducing kernel function, GAIL can be efficiently solved by stochastic first-order optimization algorithms, which attain sublinear convergence to a stationary solution. To the best of our knowledge, these are the first results on statistical and computational guarantees of imitation learning with reward/policy function approximation. Numerical experiments are provided to support our analysis. ""","""The paper provides a theoretical analysis of the recent and popular Generative Adversarial Imitation Learning (GAIL) approach. Valuable new insights on generalization and convergence are developed, putting GAIL on a stronger theoretical foundation.
Reviewer questions and suggestions were largely addressed during the rebuttal.""" 189,"""Certifiably Robust Interpretation in Deep Learning""","['deep learning interpretation', 'robustness certificates', 'adversarial examples']","""Deep learning interpretation is essential to explain the reasoning behind model predictions. Understanding the robustness of interpretation methods is important, especially in sensitive domains such as medical applications, since interpretation results are often used in downstream tasks. Although gradient-based saliency maps are popular methods for deep learning interpretation, recent works show that they can be vulnerable to adversarial attacks. In this paper, we address this problem and provide a certifiable defense method for deep learning interpretation. We show that a sparsified version of the popular SmoothGrad method, which computes the average saliency maps over random perturbations of the input, is certifiably robust against adversarial perturbations. We obtain this result by extending recent bounds for certifiably robust smooth classifiers to the interpretation setting. Experiments on ImageNet samples validate our theory.""","""This paper discusses new methods to perform adversarial attacks on saliency maps. In its current form, this paper has unfortunately not convinced several of the reviewers/commenters of the motivation behind proposing such a method. I tend to share the same opinion. I would encourage the authors to re-think the motivation of the work and, if there are indeed solid use cases, to express them explicitly in the next version of the paper.""" 190,"""Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness""","['Trustworthy Machine Learning', 'Adversarial Robustness', 'Training Objective', 'Sample Density']","""Previous work shows that adversarially robust generalization requires larger sample complexity, and the same dataset, e.g., CIFAR-10, which enables good standard accuracy may not suffice to train robust models. Since collecting new training data could be costly, we focus on better utilizing the given data by inducing the regions with high sample density in the feature space, which could lead to locally sufficient samples for robust learning. We first formally show that the softmax cross-entropy (SCE) loss and its variants convey inappropriate supervisory signals, which encourage the learned feature points to spread over the space sparsely in training. This inspires us to propose the Max-Mahalanobis center (MMC) loss to explicitly induce dense feature regions in order to benefit robustness. Namely, the MMC loss encourages the model to concentrate on learning ordered and compact representations, which gather around the preset optimal centers for different classes. We empirically demonstrate that applying the MMC loss can significantly improve robustness even under strong adaptive attacks, while keeping state-of-the-art accuracy on clean inputs with little extra computation compared to the SCE loss.""","""This paper proposes an alternative loss function, the Max-Mahalanobis center loss, that is claimed to improve adversarial robustness. In terms of quality, the reviewers commented on the convincing experiments and theoretical results, and were happy to see the sample density analysis. In terms of clarity, the reviewers commented that the paper is well-written.
The problem of adversarial robustness is relevant to the ICLR community, and the proposed approach is a novel and significant contribution in this area. The authors have also convincingly answered the questions of the reviewers and even provided new theoretical and experimental results in their final upload. """ 191,"""Learning Neural Surrogate Model for Warm-Starting Bayesian Optimization""","['Bayesian optimization', 'meta learning', 'neural network', 'surrogate model', 'hyper-parameters tuning']","""Bayesian optimization is an effective tool to optimize black-box functions and popular for hyper-parameter tuning in machine learning. Traditional Bayesian optimization methods are based on Gaussian processes (GPs), relying on a GP-based surrogate model for sampling points of the function of interest. In this work, we consider transferring knowledge from related problems to a target problem by learning an initial surrogate model for warm-starting Bayesian optimization. We propose a neural network-based surrogate model to estimate the function mean value in the GP. Then we design a novel weighted Reptile algorithm with a sampling strategy to learn an initial surrogate model from the meta-train set. The initial surrogate model is learned so that it can adapt well to new tasks. Extensive experiments show that this warm-starting technique enables us to find better minimizers or hyper-parameters than traditional GPs and previous warm-starting methods.""","""This paper is concerned with warm-starting Bayesian optimization (i.e. starting with a better surrogate model) through transfer learning among related problems. While the key motivation for warm-starting BO is certainly important (although not novel), there are important shortcomings in the way the method is developed and demonstrated. Firstly, the reviewers questioned design decisions, such as why combine NNs and GPs in this particular way or why the posterior variance of the hybrid model is not calculated. Moreover, there are issues with the experimental methodology that do not allow extraction of confident conclusions (e.g. repeating the experiments for different initial points is highly desirable). Finally, there are presentation issues. The authors replied only to some of these concerns, but ultimately the shortcomings seem to persist and hint towards a paper that needs more work. """ 192,"""Improving the Generalization of Visual Navigation Policies using Invariance Regularization""","['Generalization', 'Deep Reinforcement Learning', 'Invariant Representation']","""Training agents to operate in one environment often yields overfitted models that are unable to generalize to changes in that environment. However, due to the numerous variations that can occur in the real world, the agent is often required to be robust in order to be useful. This has not been the case for agents trained with reinforcement learning (RL) algorithms. In this paper, we investigate the overfitting of RL agents to the training environments in visual navigation tasks. Our experiments show that deep RL agents can overfit even when trained on multiple environments simultaneously. We propose a regularization method which combines RL with supervised learning methods by adding a term to the RL objective that encourages the invariance of a policy to variations in the observations that ought not to affect the action taken. The results of this method, called invariance regularization, show an improvement in the generalization of policies to environments not seen during training.
""","""All the reviewers recommend rejecting the submission. There is no basis for acceptance.""" 193,"""Empirical confidence estimates for classification by deep neural networks""","['confidence', 'classification', 'uncertainty', 'anomaly', 'robustness']","""How well can we estimate the probability that the classification predicted by a deep neural network is correct (or in the Top 5)? It is well-known that the softmax values of the network are not estimates of the probabilities of class labels. However, there is a misconception that these values are not informative. We define the notion of implied loss and prove that if an uncertainty measure is an implied loss, then low uncertainty means high probability of correct (or Top-k) classification on the test set. We demonstrate empirically that these values can be used to measure the confidence that the classification is correct. Our method is simple to use on existing networks: we proposed confidence measures for Top-k which can be evaluated by binning values on the test set. ""","""The paper proposes to model uncertainty using expected Bayes factors, and empirically show that the proposed measure correlates well with the probability that the classification is correct. All the reviewers agreed that the idea of using Bayes factors for uncertainty estimation is an interesting approach. However, the reviewers also found the presentation a bit hard to follow. While the rebuttal addressed some of these concerns, there were still some remaining concerns (see R3's comments). I think this is a really promising direction of research and I appreciate the authors' efforts to revise the draft during the rebuttal (which led to some reviewers increasing the score). This is a borderline paper right now but I feel that the paper has the potential to turn into a great paper with another round of revision. I encourage the authors to revise the draft and resubmit to a different venue.""" 194,"""On Stochastic Sign Descent Methods""","['non-convex optimization', 'stochastic optimization', 'gradient compression']","""Various gradient compression schemes have been proposed to mitigate the communication cost in distributed training of large scale machine learning models. Sign-based methods, such as signSGD (Bernstein et al., 2018), have recently been gaining popularity because of their simple compression rule and connection to adaptive gradient methods, like ADAM. In this paper, we perform a general analysis of sign-based methods for non-convex optimization. Our analysis is built on intuitive bounds on success probabilities and does not rely on special noise distributions nor on the boundedness of the variance of stochastic gradients. Extending the theory to distributed setting within a parameter server framework, we assure exponentially fast variance reduction with respect to number of nodes, maintaining 1-bit compression in both directions and using small mini-batch sizes. We validate our theoretical findings experimentally.""","""This paper proposes an analysis of signSGD in some special cases. SignGD has been shown to be of interest, whether because of its similarity to Adam or in quasi-convex settings. The complaint shared by reviewers was the strength of the conditions. SGC is really strong, I have yet to see increasing mini-batch sizes to be used in practice (although there are quite a few papers mentioning this technique to get a convergence rate) and the strength of the other two are harder to assess. 
With that said, the improvement compared to existing work such as Karimireddy et al. (2019) is unclear. I encourage the authors to address the comments of the reviewers and to submit an improved version to a later, or perhaps more theoretical, venue. """ 195,"""Classification-Based Anomaly Detection for General Data""",['anomaly detection'],"""Anomaly detection, finding patterns that substantially deviate from those seen previously, is one of the fundamental problems of artificial intelligence. Recently, classification-based methods were shown to achieve superior results on this task. In this work, we present a unifying view and propose an open-set method, GOAD, to relax current generalization assumptions. Furthermore, we extend the applicability of transformation-based methods to non-image data using random affine transformations. Our method is shown to obtain state-of-the-art accuracy and is applicable to a broad range of data types. The strong performance of our method is extensively validated on multiple datasets from different domains. ""","""The paper presents a method that unifies classification-based approaches for outlier detection and (one-class) anomaly detection. The paper also extends the applicability to non-image data. In the end, all the reviewers agreed that the paper makes a valuable contribution and I'm happy to recommend acceptance.""" 196,"""Semi-supervised semantic segmentation needs strong, high-dimensional perturbations""","['computer vision', 'semantic segmentation', 'semi-supervised', 'consistency regularisation']","""Consistency regularization describes a class of approaches that have yielded groundbreaking results in semi-supervised classification problems. Prior work has established the cluster assumption (under which the data distribution consists of uniform class clusters of samples separated by low-density regions) as key to its success. We analyze the problem of semantic segmentation and find that the data distribution does not exhibit low-density regions separating classes, and offer this as an explanation for why semi-supervised segmentation is a challenging problem. We then identify the conditions that allow consistency regularization to work even without such low-density regions. This allows us to generalize the recently proposed CutMix augmentation technique to a powerful masked variant, CowMix, leading to a successful application of consistency regularization in the semi-supervised semantic segmentation setting and reaching state-of-the-art results on several standard datasets.""","""This paper proposes a method for semi-supervised semantic segmentation through consistency (with respect to various perturbations) regularization. While the reviewers believe that this paper contains interesting ideas and that it has been substantially improved from its original form, it is not yet ready for acceptance to ICLR-2020. With a little bit of polish, this paper is likely to be accepted at another venue.""" 197,"""HOW IMPORTANT ARE NETWORK WEIGHTS? TO WHAT EXTENT DO THEY NEED AN UPDATE?""","['weights update', 'weights importance', 'weight freezing']","""In the context of optimization, a gradient of a neural network indicates the amount a specific weight should change with respect to the loss. Therefore, small gradients indicate a good value of the weight that requires no change and can be kept frozen during training. This paper provides an experimental study on the importance of a neural network's weights, and to what extent they need to be updated.
We wish to show that, starting from the third epoch, freezing weights which have no informative gradient and are less likely to be changed during training results in a very slight drop in the overall accuracy (and sometimes even an improvement). We experiment on the MNIST, CIFAR10 and Flickr8k datasets using several architectures (VGG19, ResNet-110 and DenseNet-121). On CIFAR10, we show that freezing 80% of the VGG19 network parameters from the third epoch onwards results in a 0.24% drop in accuracy, while freezing 50% of ResNet-110 parameters results in a 0.9% drop in accuracy, and finally freezing 70% of DenseNet-121 parameters results in a 0.57% drop in accuracy. Furthermore, to experiment with real-life applications, we train an image captioning model with an attention mechanism on the Flickr8k dataset using LSTM networks, freezing 60% of the parameters from the third epoch onwards, resulting in a better BLEU-4 score than the fully trained model. Our source code can be found in the appendix.""","""The authors demonstrate that starting from the 3rd epoch, freezing a large fraction of the weights (based on gradient information), but not entire layers, results in slight drops in performance. Given existing literature, the reviewers did not find this surprising, even though freezing only some of a layer's weights has not been explicitly analyzed before. Although this is an interesting observation, the authors did not explain why this finding is important and it is unclear what the impact of such a finding will be. The authors are encouraged to expand on the implications of their finding and the theoretical basis for it. Furthermore, reviewers raised concerns about the extensiveness of the empirical evaluation. This paper falls below the bar for ICLR, so I recommend rejection.""" 198,"""Federated Adversarial Domain Adaptation""","['Federated Learning', 'Domain Adaptation', 'Transfer Learning', 'Feature Disentanglement']","""Federated learning improves data privacy and efficiency in machine learning performed over networks of distributed devices, such as mobile phones, IoT and wearable devices, etc. Yet models trained with federated learning can still fail to generalize to new devices due to the problem of domain shift. Domain shift occurs when the labeled data collected by source nodes statistically differs from the target node's unlabeled data. In this work, we present a principled approach to the problem of federated domain adaptation, which aims to align the representations learned among the different nodes with the data distribution of the target node. Our approach extends adversarial adaptation techniques to the constraints of the federated setting. In addition, we devise a dynamic attention mechanism and leverage feature disentanglement to enhance knowledge transfer. Empirically, we perform extensive experiments on several image and text classification tasks and show promising results under the unsupervised federated domain adaptation setting.""","""This paper studies an interesting new problem, federated domain adaptation, and proposes an approach based on dynamic attention, federated adversarial alignment, and representation disentanglement. Reviewers generally agree that the paper contributes a novel approach to an interesting problem with theoretical guarantees and empirical justification. While many concerns were raised by the reviewers, the authors managed to perform an effective rebuttal with a major revision, which addressed the concerns convincingly.
The AC believes that the updated version is acceptable. Hence, I recommend acceptance.""" 199,"""Global Relational Models of Source Code""","['Models of Source Code', 'Graph Neural Networks', 'Structured Learning']","""Models of code can learn distributed representations of a program's syntax and semantics to predict many non-trivial properties of a program. Recent state-of-the-art models leverage highly structured representations of programs, such as trees, graphs and paths therein (e.g. data-flow relations), which are precise and abundantly available for code. This provides a strong inductive bias towards semantically meaningful relations, yielding more generalizable representations than classical sequence-based models. Unfortunately, these models primarily rely on graph-based message passing to represent relations in code, which makes them de facto local due to the high cost of message-passing steps, quite in contrast to modern, global sequence-based models, such as the Transformer. In this work, we bridge this divide between global and structured models by introducing two new hybrid model families that are both global and incorporate structural bias: Graph Sandwiches, which wrap traditional (gated) graph message-passing layers in sequential message-passing layers; and Graph Relational Embedding Attention Transformers (GREAT for short), which bias traditional Transformers with relational information from graph edge types. By studying a popular, non-trivial program repair task, variable-misuse identification, we explore the relative merits of traditional and hybrid model families for code representation. Starting with a graph-based model that already improves upon the prior state-of-the-art for this task by 20%, we show that our proposed hybrid models improve an additional 10-15%, while training both faster and using fewer parameters.""","""The paper investigates hybrid NN architectures to represent programs, involving both local (RNN, Transformer) and global (Gated Graph NN) structures, with the goal of exploiting the program structure while permitting the fast flow of information through the whole program. The proof of concept for the quality of the representation is the performance on the VarMisuse task (identifying where a variable was replaced by another one, and which variable was the correct one). Other criteria concern the computational cost of training and the number of parameters. Varied architectures, involving fast and local transmission with and without attention mechanisms, are investigated, comparing full graphs and compressed (leaves-only) graphs. The lessons learned concern the trade-off between the architecture of the model, the computational time and the learning curve. It is suggested that the Transformer learns from scratch to connect the tokens as appropriate; and that interleaving RNN and GNN allows for more effective processing, with fewer message passes and fewer parameters, with improved accuracy. A first issue raised by the reviewers concerns the computational time (ca. 100 hours on P100 GPUs); the authors focus on the performance gain w.r.t. GGNN in terms of computational time (significant) and in terms of epochs. Another concern raised by the reviewers is the moderate originality of the proposed architecture. I strongly recommend that the authors make their architecture public; this is imo the best way to evidence the originality of the proposed solution.
The authors did a good job of answering the other concerns, in particular concerning the computational time and the choice of the samples. I thus recommend acceptance. """ 200,"""Deep Network Classification by Scattering and Homotopy Dictionary Learning""","['dictionary learning', 'scattering transform', 'sparse coding', 'imagenet']","""We introduce a sparse scattering deep convolutional neural network, which provides a simple model to analyze properties of deep representation learning for classification. Learning a single dictionary matrix with a classifier yields a higher classification accuracy than AlexNet over the ImageNet 2012 dataset. The network first applies a scattering transform that linearizes variabilities due to geometric transformations such as translations and small deformations. A sparse pseudo-formula dictionary coding reduces intra-class variability while preserving class separation through projections over unions of linear spaces. It is implemented in a deep convolutional network with a homotopy algorithm having exponential convergence. A convergence proof is given in a general framework that includes ALISTA. Classification results are analyzed on ImageNet.""","""After the rebuttal period the ratings on this paper increased and it now has a strong assessment across reviewers. The AC recommends acceptance.""" 201,"""Localized Generations with Deep Neural Networks for Multi-Scale Structured Datasets""","['Variational autoencoder', 'Local learning', 'Model-agnostic meta-learning', 'Disentangled representation']","""Extracting the hidden structure of the external environment is an essential component of intelligent agents and human learning. The real-world datasets that we are interested in are often characterized by locality: the structural relationship between data points changes depending on the location in observation space. The local learning approach extracts semantic representations for these datasets by training the embedding model from scratch for each local neighborhood, respectively. However, this approach is limited to simple models, since complex models, including deep neural networks, require a massive amount of data and extended training time. In this study, we overcome this trade-off based on the insight that real-world datasets often share some structural similarity across neighborhoods. We propose to utilize the embedding model for other local structures as a weak form of supervision. Our proposed model, the Local VAE, generalizes the Variational Autoencoder to have different model parameters for each local subset and trains these local parameters by gradient-based meta-learning. Our experimental results showed that the Local VAE succeeded in learning semantic representations for datasets with local structure, including the 3D Shapes Dataset, and generated high-quality images.""","""The paper presents a structured VAE, where the model parameters depend on a local structure (such as distance in feature or local space), and it uses the meta-learning framework to adjust the dependency of the model parameters to the local neighborhood. The idea is natural, as pointed out by Rev#1. It incurs an extra learning cost, as noted by Rev#1 and #2, who asked for details about the extra cost. The authors' reply is (last paragraph of the first reply to Rev#1): we did not comment (...) because in essence, using neighborhoods in a naive way is not affordable.
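The sparse dictionary coding mentioned in the scattering-network abstract above (paper 200) can be illustrated with a minimal ISTA-style iteration using a fixed soft threshold; the paper's homotopy algorithm instead anneals this threshold to obtain exponential convergence, so the sketch below only conveys the basic building block, with illustrative constants.

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.1, n_iters=50):
    """Approximately minimize 0.5*||x - D z||^2 + lam*||z||_1.
    D: (n_features, n_atoms) dictionary matrix."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iters):
        z = z - step * (D.T @ (D @ z - x))                        # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return z
```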
The area chair would like to know the actual computational time of Local VAE compared to that of the baselines. More details (for instance visualization) about the results on Cars3D and NORB would also be needed to better appreciate the impact of the locality structure. The fact that the optimal value (w.r.t. Disentanglement) is rather low ( pseudo-formula ) would need to be discussed, and assessed w.r.t. the standard deviation. In summary, the paper presents a good idea. More details about its impact on VAE quality, and its computational costs, are needed to fully appreciate its merits. """ 202,"""LOGAN: Latent Optimisation for Generative Adversarial Networks""","['GAN', 'adversarial training', 'generative model', 'game theory']","""Training generative adversarial networks requires balancing of delicate adversarial dynamics. Even with careful tuning, training may diverge or end up in a bad equilibrium with dropped modes. In this work, we introduce a new form of latent optimisation inspired by the CS-GAN and show that it improves adversarial dynamics by enhancing interactions between the discriminator and the generator. We develop supporting theoretical analysis from the perspectives of differentiable games and stochastic approximation. Our experiments demonstrate that latent optimisation can significantly improve GAN training, obtaining state-of-the-art performance for the ImageNet (128 x 128) dataset. Our model achieves an Inception Score (IS) of 148 and a Fréchet Inception Distance (FID) of 3.4, an improvement of 17% and 32% in IS and FID respectively, compared with the baseline BigGAN-deep model with the same architecture and number of parameters.""","""The authors propose to overcome challenges in GAN training through latent optimization, i.e., updating the latent code, motivated by natural gradients. The authors show improvement over previous methods. The work is well-motivated, but in my opinion, further experiments and comparisons need to be made before the work can be ready for publication. The authors write that ""Unfortunately, SGA is expensive to scale because computing the second-order derivatives with respect to all parameters is expensive"" and further ""Crucially, latent optimization approximates SGA using only second-order derivatives with respect to the latent z and parameters of the discriminator and generator separately. The second-order terms involving parameters of both the discriminator and the generator which are extremely expensive to compute are not used. For latent zs with dimensions typically used in GANs (e.g., 128-256, orders of magnitude less than the number of parameters), these can be computed efficiently. In short, latent optimization efficiently couples the gradients of the discriminator and generator, as prescribed by SGA, but using the much lower-dimensional latent source z which makes the adjustment scalable."" However, this is not true. Computing the Hessian vector product is not that expensive. In fact, it can be computed at a cost comparable to gradient evaluations using automatic differentiation (Pearlmutter (1994)). In frameworks such as PyTorch, this can be done efficiently using double backpropagation, so only twice the cost. Based on the above, one of the main claims of improvement over existing methods, which is furthermore not investigated experimentally, is false. It is unacceptable that the authors do not compare with SGA: both in terms of quality and computational cost since that is the premise of the paper.
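The latent-optimisation step at the heart of the LOGAN abstract above can be sketched in a few lines: take a gradient ascent step on the latent code so that the generated sample scores higher under the discriminator, then use the refined code in both players' losses. The single Euler step and step size below are illustrative simplifications, not the paper's exact natural-gradient variant.

```python
import torch

def optimise_latent(generator, discriminator, z, step_size=0.9):
    """One gradient ascent step on the latent code z, moving G(z)
    towards a higher discriminator score before the usual updates."""
    z = z.detach().requires_grad_(True)
    score = discriminator(generator(z)).sum()
    grad_z, = torch.autograd.grad(score, z)
    return (z + step_size * grad_z).detach()  # refined latent code z'
```

Because the refined code depends on the discriminator's gradient, computing the usual GAN losses at z' couples the two players' updates, which is the mechanism the meta-review compares against full SGA.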
The authors also miss recent works that successfully ran methods with Hessian-vector products: pseudo-url pseudo-url""" 203,"""VAENAS: Sampling Matters in Neural Architecture Search""",[],"""Neural Architecture Search (NAS) aims at automatically finding neural network architectures within an enormous designed search space. The search space usually contains billions of network architectures, which incurs extremely expensive computing costs in searching for the best-performing architecture. One-shot and gradient-based NAS approaches have recently been shown to achieve superior results on various computer vision tasks such as image recognition. With the weight sharing mechanism, these methods lead to efficient model search. Despite their success, however, current sampling methods are either fixed or hand-crafted and thus ineffective. In this paper, we propose a learnable sampling module based on a variational auto-encoder (VAE) for neural architecture search (NAS), named VAENAS, which can be easily embedded into existing weight-sharing NAS frameworks, e.g., the one-shot approach and the gradient-based approach, and significantly improves the performance of search results. VAENAS generates a series of competitive results on CIFAR-10 and ImageNet in a NASNet-like search space. Moreover, combined with the one-shot approach, our method achieves a new state-of-the-art result for ImageNet classification models under 400M FLOPs, with 77.4\% in a ShuffleNet-like search space. Finally, we conduct a thorough analysis of VAENAS on the NAS-Bench-101 dataset, which demonstrates the effectiveness of our proposed methods. ""","""This paper proposes to represent the distribution w.r.t. which neural architecture search (NAS) samples architectures through a variational autoencoder, rather than through a fully factorized distribution (as previous work did). In the discussion, a few things improved (causing one reviewer to increase his/her score from 1 to 3), but it became clear that the empirical evaluation has issues, with a different search space being used for the method than for the baselines. There was unanimous agreement for rejection. I agree with this judgement and thus recommend rejection.""" 204,"""Sentence embedding with contrastive multi-views learning""","['contrastive', 'multi-views', 'linguistic', 'embedding']","""In this work, we propose a self-supervised method to learn sentence representations with an injection of linguistic knowledge. Multiple linguistic frameworks propose diverse sentence structures from which semantic meaning might be expressed out of compositional word operations. We aim to take advantage of this linguistic diversity and learn to represent sentences by contrasting these diverse views. Formally, multiple views of the same sentence are mapped to close representations. On the contrary, views from other sentences are mapped further apart. By contrasting different linguistic views, we aim at building embeddings that better capture semantics and are less sensitive to the sentence's outward form. ""","""This paper proposes a method to learn sentence representations that incorporates linguistic knowledge in the form of dependency trees using contrastive learning. Experiments on SentEval and probing tasks show that the proposed method underperforms baseline methods. All reviewers agree that the results are not strong enough to support the claim of the paper and have some concerns about the scalability of the implementation.
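The contrastive multi-view objective sketched in the sentence-embedding abstract above maps views of the same sentence close together and pushes views of other sentences apart; a standard InfoNCE-style batch loss is one natural formalization, with the temperature and in-batch negatives as illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def multiview_contrastive_loss(view_a, view_b, temperature=0.1):
    """view_a, view_b: (batch, dim) embeddings of the same sentences under
    two linguistic views. Matching rows are pulled together; all other
    rows in the batch act as negatives."""
    a = F.normalize(view_a, dim=1)
    b = F.normalize(view_b, dim=1)
    logits = a @ b.t() / temperature  # pairwise cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)
```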
They also agree that the writing of the paper can be improved (details included in their reviews below). The authors acknowledged these concerns and mentioned that they will use them to improve the paper for future work, so I recommend rejecting this paper for ICLR.""" 205,"""Continual Learning using the SHDL Framework with Skewed Replay Distributions""","['Continual Learning', 'Catastrophic Forgetting', 'SHDL', 'CIFAR-100']","""Humans and animals continuously acquire, adapt as well as transfer knowledge throughout their lifespan. The ability to learn continuously is crucial for the effective functioning of agents interacting with the real world and processing continuous streams of information. Continuous learning has been a long-standing challenge for neural networks as the repeated acquisition of information from non-uniform data distributions generally leads to catastrophic forgetting or interference. This work proposes a modular architecture capable of continuous acquisition of tasks while averting catastrophic forgetting. Specifically, our contributions are: (i) Efficient Architecture: a modular architecture emulating the visual cortex that can learn meaningful representations with limited labelled examples, (ii) Knowledge Retention: retention of learned knowledge via limited replay of past experiences, (iii) Forward Transfer: efficient and relatively faster learning on new tasks, and (iv) Naturally Skewed Distributions: The learning in the above-mentioned claims is performed on non-uniform data distributions which better represent the natural statistics of our ongoing experience. Several experiments that substantiate the above-mentioned claims are demonstrated on the CIFAR-100 dataset.""","""The paper adapts a previously proposed modular deep network architecture (SHDL) for supervised learning in a continual learning setting. One problem in this setting is catastrophic forgetting. The proposed solution replays a small fraction of the data from old tasks to avoid forgetting, on top of a modular architecture that facilitates fast transfer when new tasks are added. The method is developed for image inputs and evaluated experimentally on CIFAR-100. The reviews were in agreement that this paper is not ready for publication. All the reviews had concerns about the lack of explanation of the proposed solution and the experimental methods. The reviewers were concerned about the choice of metrics not being comparable or justified: Reviewer4 wanted an apples-to-apples comparison, Reviewer1 suggested the paper follow the evaluation paradigm used in earlier papers, and Reviewer2 described the absence of an explained baseline value. Two reviewers (Reviewer4 and Reviewer2) described the lack of details on the parameters, architecture, and training regime used for the experiments. The paper did not justify which aspects of the modular system contributed to the observed performance (Reviewer4 and Reviewer1). Several additional concerns were also raised. The authors did not respond to any of the concerns raised by the reviewers. """ 206,"""Semi-supervised Pose Estimation with Geometric Latent Representations""","['Semi-supervised learning', 'pose estimation', 'angle estimation', 'variational autoencoders']","""Pose estimation is the task of finding the orientation of an object within an image with respect to a fixed frame of reference. Current classification and regression approaches to the task require large quantities of labelled data.
The amount of labelled data for pose estimation is relatively limited. With this in mind, we propose the use of Conditional Variational Autoencoders (CVAEs) \cite{Kingma2014a} with circular latent representations to estimate the corresponding 2D rotations of an object. The method is capable of training with datasets that have an arbitrary number of labelled images, providing relatively similar performance in cases where 10-20% of the image labels are missing. ""","""This paper addresses the problem of rotation estimation in 2D images. The method attempts to reduce the labeling need by learning in a semi-supervised fashion. The approach learns a VAE where the latent code is factored into the latent vector and the object rotation. All reviewers agreed that this paper is not ready for acceptance. The reviewers did express promise in the direction of this work. However, there were a few main concerns. First, the focus on 2D instead of 3D orientation. The general consensus was that 3D would be a more pertinent use case and that extension of the proposed approach from 2D to 3D is likely non-trivial. The second issue is the minimal technical novelty. The reviewers argue that the proposed solution is a combination of existing techniques applied to a new problem area. Since the work does not have sufficient technical novelty to compare against other disentanglement works and is being applied to a less relevant experimental setting, the AC does not recommend acceptance. """ 207,"""GUIDEGAN: ATTENTION BASED SPATIAL GUIDANCE FOR IMAGE-TO-IMAGE TRANSLATION""","['Image-to-Image translation', 'Attention Learning', 'GAN']","""Recently, Generative Adversarial Network (GAN) and a number of its variants have been widely used to solve the image-to-image translation problem and achieved extraordinary results in both a supervised and unsupervised manner. However, most GAN-based methods suffer from the imbalance problem between the generator and discriminator in practice. Namely, the relative model capacities of the generator and discriminator do not match, leading to mode collapse and/or diminished gradients. To tackle this problem, we propose a GuideGAN based on an attention mechanism. More specifically, we arm the discriminator with an attention mechanism so that it not only estimates the probability that its input is real, but also creates an attention map that highlights the features critical to that prediction. This attention map then assists the generator in producing more plausible and realistic images. We extensively evaluate the proposed GuideGAN framework on a number of image transfer tasks. Both qualitative results and quantitative comparison demonstrate the superiority of our proposed approach.""","""The paper proposes to augment the conditional GAN discriminator with an attention mechanism, with the aim to help the generator, in the context of image to image translation. The reviewers raise several issues in their reviews. One theoretical concern has to do with how the training of the attention mechanism (which seems to be collaborative) would interact with the minimax, zero-sum nature of a GAN objective; another with the discrepancy in how the attention map is used during training and testing. The experimental results were not significant enough, and the reviewers also recommend additional experimental results to clearly demonstrate the benefit of the method.
""" 208,"""Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples""","['few-shot learning', 'meta-learning', 'few-shot classification']","""Few-shot classification refers to learning a classifier for new classes given only a few examples. While a plethora of models have emerged to tackle it, we find the procedure and datasets that are used to assess their progress lacking. To address this limitation, we propose Meta-Dataset: a new benchmark for training and evaluating models that is large-scale, consists of diverse datasets, and presents more realistic tasks. We experiment with popular baselines and meta-learners on Meta-Dataset, along with a competitive method that we propose. We analyze performance as a function of various characteristics of test tasks and examine the models ability to leverage diverse training sources for improving their generalization. We also propose a new set of baselines for quantifying the benefit of meta-learning in Meta-Dataset. Our extensive experimentation has uncovered important research challenges and we hope to inspire work in these directions.""","""While the reviewers have some outstanding issues regarding the organization and clarity of the paper, the overall consensus is that the proposed evaluation methods is a useful improvement over current standards for meta-learning.""" 209,"""Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks""","['Neural Tangent Kernels', 'over-parametrized neural networks', 'deep learning theory']","""Recent theoretical work has established connections between over-parametrized neural networks and linearized models governed by the Neural Tangent Kernels (NTKs). NTK theory leads to concrete convergence and generalization results, yet the empirical performance of neural networks are observed to exceed their linearized models, suggesting insufficiency of this theory. Towards closing this gap, we investigate the training of over-parametrized neural networks that are beyond the NTK regime yet still governed by the Taylor expansion of the network. We bring forward the idea of randomizing the neural networks, which allows them to escape their NTK and couple with quadratic models. We show that the optimization landscape of randomized two-layer networks are nice and amenable to escaping-saddle algorithms. We prove concrete generalization and expressivity results on these randomized networks, which lead to sample complexity bounds (of learning certain simple functions) that match the NTK and can in addition be better by a dimension factor when mild distributional assumptions are present. We demonstrate that our randomization technique can be generalized systematically beyond the quadratic case, by using it to find networks that are coupled with higher-order terms in their Taylor series. ""","""This paper studies the training of over-parameterized two-layer neural networks by considering high-order Taylor approximation, and randomizing the network to remove the first order term in the networks Taylor expansion. This enables the neural network training go beyond the recently so-called neural tangent kernel (NTK) regime. The authors also established the optimization landscape, generalization error and expressive power results under the proposed analysis framework. They showed that when learning polynomials, the proposed randomized networks with quadratic Taylor approximation outperform standard NTK by a factor of the input dimension. 
This is very nice work, and it provides a new perspective on NTK and beyond. All reviewers are in support of accepting this paper. """ 210,"""Kronecker Attention Networks""",[],"""Attention operators have been applied on both 1-D data like texts and higher-order data such as images and videos. Use of attention operators on high-order data requires flattening of the spatial or spatial-temporal dimensions into a vector, which is assumed to follow a multivariate normal distribution. This not only incurs excessive requirements on computational resources, but also fails to preserve structures in data. In this work, we propose to avoid flattening by developing Kronecker attention operators (KAOs) that operate on high-order tensor data directly. KAOs lead to dramatic reductions in computational resources. Moreover, we analyze KAOs theoretically from a probabilistic perspective and point out that KAOs assume the data follow matrix-variate normal distributions. Experimental results show that KAOs reduce the amount of required computational resources by a factor of hundreds, with larger factors for higher-dimensional and higher-order data. Results also show that networks with KAOs outperform models without attention, while achieving competitive performance with those using the original attention operators.""","""This submission has been assessed by three reviewers who scored it as 3/3/3. The main criticisms include a lack of motivation for Sections 3.1 and 3.2, comparisons only against regular self-attention without encompassing more works on this topic, and a missing connection between Theorem 1 and the rest of the paper. Finally, there exists a strong resemblance to another submission by the same authors, which also raises questions about a potential dual submission. Even excluding the last argument, the lack of responses to reviewers does not help this case. Thus, this paper cannot be accepted by ICLR2020.""" 211,"""INSTANCE CROSS ENTROPY FOR DEEP METRIC LEARNING""","['Deep Metric Learning', 'Instance Cross Entropy', 'Sample Mining/Weighting', 'Image Retrieval']","""Loss functions play a crucial role in deep metric learning; thus, a variety of them have been proposed. Some supervise the learning process by pairwise or tripletwise similarity constraints while others take advantage of structured similarity information among multiple data points. In this work, we approach deep metric learning from a novel perspective. We propose instance cross entropy (ICE) which measures the difference between an estimated instance-level matching distribution and its ground-truth one. ICE has three main appealing properties. Firstly, similar to categorical cross entropy (CCE), ICE has a clear probabilistic interpretation and exploits structured semantic similarity information for learning supervision. Secondly, ICE is scalable to infinite training data as it learns on mini-batches iteratively and is independent of the training set size. Thirdly, motivated by our relative weight analysis, seamless sample reweighting is incorporated. It rescales samples' gradients to control the differentiation degree over training examples instead of truncating them by sample mining. In addition to its simplicity and intuitiveness, extensive experiments on three real-world benchmarks demonstrate the superiority of ICE.""","""The paper proposes a new objective function called ICE for metric learning. There was a substantial discussion with the authors about this paper.
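Reading the ICE abstract above literally, a cross-entropy between an estimated instance-level matching distribution and the ground-truth one, a plausible minimal form is a softmax over query-to-instance similarities; this generic sketch omits the paper's sample reweighting and is not necessarily its exact formulation.

```python
import torch
import torch.nn.functional as F

def instance_cross_entropy(query, instances, positive_idx, scale=16.0):
    """query: (dim,) and instances: (n, dim), both L2-normalized.
    Cross-entropy between the softmax matching distribution over the
    n instances and the index of the true match."""
    logits = scale * (instances @ query)  # similarity logits, shape (n,)
    target = torch.tensor([positive_idx])
    return F.cross_entropy(logits.unsqueeze(0), target)
```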
The two reviewers most experienced in the field found the novelty compared to the vast existing literature lacking, and remained unconvinced after the discussion. Some reviewers also found the technical presentation and interpretations to need improvement, and this was partially addressed by a new revision. Based on this discussion, I recommend a rejection at this time, but encourage the authors to incorporate the feedback and in particular place the work in context more fully, and resubmit to another venue.""" 212,"""A closer look at the approximation capabilities of neural networks""","['deep learning', 'approximation', 'universal approximation theorem']","""The universal approximation theorem, in one of its most general versions, says that if we consider only continuous activation functions sigma, then a standard feedforward neural network with one hidden layer is able to approximate any continuous multivariate function f to any given approximation threshold epsilon, if and only if sigma is non-polynomial. In this paper, we give a direct algebraic proof of the theorem. Furthermore, we shall explicitly quantify the number of hidden units required for approximation. Specifically, if X in R^n is compact, then a neural network with n input units, m output units, and a single hidden layer with {n+d choose d} hidden units (independent of m and epsilon), can uniformly approximate any polynomial function f:X -> R^m whose total degree is at most d for each of its m coordinate functions. In the general case that f is any continuous function, we show there exists some N in O(epsilon^{-n}) (independent of m), such that N hidden units would suffice to approximate f. We also show that this uniform approximation property (UAP) still holds even under seemingly strong conditions imposed on the weights. We highlight several consequences: (i) For any lambda > 0, the UAP still holds if we restrict all non-bias weights w in the last layer to satisfy |w| < lambda. (ii) There exists some lambda > 0 (depending only on f and epsilon), such that the UAP still holds if we restrict all non-bias weights w in the first layer to satisfy |w| > lambda. (iii) If the non-bias weights in the first layer are *fixed* and randomly chosen from a suitable range, then the UAP holds with probability 1.""","""This is a nice paper on the classical problem of universal approximation, but giving a direct proof with good approximation rates, and providing many refinements and ties to the literature. If possible, I urge the authors to revise the paper further for camera ready; there are various technical oversights (e.g., 1/lambda should appear in the approximation rates in Theorem 3.1), and the proof of Theorem 3.1 is an uninterrupted 2.5 page block (splitting it into lemmas would make it cleaner, and also those lemmas could be useful to other authors).""" 213,"""Knowledge Hypergraphs: Prediction Beyond Binary Relations""","['knowledge graphs', 'knowledge hypergraphs', 'knowledge hypergraph completion']","""A Knowledge Hypergraph is a knowledge base where relations are defined on two or more entities. In this work, we introduce two embedding-based models that perform link prediction in knowledge hypergraphs: (1) HSimplE is a shift-based method that is inspired by an existing model operating on knowledge graphs, in which the representation of an entity is a function of its position in the relation, and (2) HypE is a convolution-based method which disentangles the representation of an entity from its position in the relation.
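The hidden-unit count quoted in the approximation abstract above, {n+d choose d}, is the dimension of the space of polynomials of total degree at most d in n variables, and is easy to evaluate; the helper below simply shows how quickly it grows with the input dimension n and the degree d.

```python
from math import comb

def hidden_units(n, d):
    """Hidden units sufficient for degree-<=d polynomials on a compact X in R^n."""
    return comb(n + d, d)

print(hidden_units(3, 4))    # 35 hidden units: inputs in R^3, total degree <= 4
print(hidden_units(100, 4))  # 4598126: growth is polynomial in n, of order n^d
```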
We test our models on two new knowledge hypergraph datasets that we obtain from Freebase, and show that both HSimplE and HypE are more effective in predicting links in knowledge hypergraphs than the proposed baselines and existing methods. Our experiments show that HypE outperforms HSimplE when trained with fewer parameters and when tested on samples that contain at least one entity in a position never encountered during training.""","""The paper proposes two methods for link prediction in knowledge hypergraphs. The first method concatenates the embedding of all entities and relations in a hyperedge. The second method combines an entity embedding, a relation embedding, and a weighted convolution of positions. The authors demonstrate on two datasets (derived by the authors from Freebase) that the proposed methods work well compared to baselines. The paper proposes direct generalizations of knowledge graph approaches, and unfortunately does not yet provide comprehensive coverage of the possible design space of the two proposed extensions. The authors should be commended for providing the source code for reproducibility. One of the reviewers (who was unfortunately also the most negative) was time-pressed. Unfortunately, the discussion period was not used by the reviewers to respond to the authors' rebuttal of their concerns. Even discounting the most negative review, this paper is on the borderline, and given the large number of submissions to ICLR, it unfortunately falls below the acceptance threshold in its current form. """ 214,"""Evaluating The Search Phase of Neural Architecture Search""","['Neural architecture search', 'parameter sharing', 'random search', 'evaluation framework']",""" Neural Architecture Search (NAS) aims to facilitate the design of deep networks for new tasks. Existing techniques rely on two stages: searching over the architecture space and validating the best architecture. NAS algorithms are currently compared solely based on their results on the downstream task. While intuitive, this fails to explicitly evaluate the effectiveness of their search strategies. In this paper, we propose to evaluate the NAS search phase. To this end, we compare the quality of the solutions obtained by NAS search policies with that of random architecture selection. We find that: (i) On average, the state-of-the-art NAS algorithms perform similarly to the random policy; (ii) the widely-used weight sharing strategy degrades the ranking of the NAS candidates to the point of not reflecting their true performance, thus reducing the effectiveness of the search process. We believe that our evaluation framework will be key to designing NAS strategies that consistently discover architectures superior to random ones.""","""This is one of several recent parallel papers that pointed out issues with neural architecture search (NAS). It shows that several NAS algorithms do not perform better than random search and finds that their weight sharing mechanism leads to low correlation between the search performance and the final evaluation performance. Code is available to ensure reproducibility of the work. After the discussion period, all reviewers are mildly in favour of accepting the paper. My recommendation is therefore to accept the paper.
The paper's results may in part appear to be old news by now, but they were not when the paper first appeared on arXiv (in parallel to Li & Talwalkar, so similarities to that work should not be held against this paper).""" 215,"""Learning to Learn by Zeroth-Order Oracle""","['learning to learn', 'zeroth-order optimization', 'black-box adversarial attack']","""In the learning to learn (L2L) framework, we cast the design of optimization algorithms as a machine learning problem and use deep neural networks to learn the update rules. In this paper, we extend the L2L framework to the zeroth-order (ZO) optimization setting, where no explicit gradient information is available. Our learned optimizer, modeled as a recurrent neural network (RNN), first approximates the gradient with a ZO gradient estimator and then produces a parameter update utilizing the knowledge of previous iterations. To reduce the high variance of the ZO gradient estimator, we further introduce another RNN to learn the Gaussian sampling rule and dynamically guide the query direction sampling. Our learned optimizer outperforms hand-designed algorithms in terms of convergence rate and final solution on both synthetic and practical ZO optimization tasks (in particular, the black-box adversarial attack task, which is one of the most widely used tasks of ZO optimization). We finally conduct extensive analytical experiments to demonstrate the effectiveness of our proposed optimizer.""","""This paper proposes to extend the learning to learn framework based on zeroth-order optimization. Generally, the paper is well presented and easy to follow. The core idea is to incorporate another RNN to adaptively learn the Gaussian sampling rule. Although the method does not seem to have strong theoretical support, its effectiveness is evaluated in well-organized experiments, including realistic tasks like black-box adversarial attacks. All reviewers, including two experts in this field, acknowledge the novelty of the method and are positive about acceptance. I'd like to support their opinions and recommend accepting the paper. As R#1 still finds some details unclear, please try to clarify these points in the final version of the paper.""" 216,"""High-Frequency guided Curriculum Learning for Class-specific Object Boundary Detection""","['Computer Vision', 'Object Contour Detection', 'Curriculum Learning', 'Wavelets', 'Aerial Imagery']","""This work addresses class-specific object boundary extraction, i.e., retrieving boundary pixels that belong to a class of objects in the given image. Although recent ConvNet-based approaches demonstrate impressive results, we notice that they produce several false-alarms and misdetections when used in real-world applications. We hypothesize that although boundary detection is simple at some pixels that are rooted in identifiable high-frequency locations, other pixels pose a higher level of difficulty, for instance, region pixels with an appearance similar to the boundaries, or boundary pixels with insignificant edge strengths. Therefore, the training process needs to account for different levels of learning complexity in different regions to overcome false alarms. In this work, we devise a curriculum-learning-based training process for object boundary detection. This multi-stage training process first trains the network at simpler pixels (with sufficient edge strengths) and then at harder pixels in the later stages of the curriculum.
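The zeroth-order gradient estimator that the learning-to-learn abstract above builds on can be written in a few lines: finite differences of function values along random Gaussian directions. The smoothing constant and sample count below are illustrative; the paper's contribution is, in part, to replace the plain Gaussian draw with an RNN-learned sampling rule.

```python
import numpy as np

def zo_gradient(f, x, n_samples=20, mu=1e-3, rng=None):
    """Estimate grad f(x) from function queries only (no true gradients)."""
    rng = rng or np.random.default_rng()
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)         # random query direction
        grad += (f(x + mu * u) - f(x)) / mu * u  # directional finite difference
    return grad / n_samples
```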
We also propose a novel system for object boundary detection that relies on a fully convolutional neural network (FCN) and wavelet decomposition of image frequencies. This system uses high-frequency bands from the wavelet pyramid and appends them to conv features from different layers of the FCN. Our ablation studies with the contourMNIST dataset, a dataset of simulated digit contours from MNIST, demonstrate that this explicit high-frequency augmentation helps the model to converge faster. Our model trained by the proposed curriculum scheme outperforms a state-of-the-art object boundary detection method by a significant margin on a challenging aerial image dataset. ""","""This paper received all negative reviews, and the scores were kept after the rebuttal. The authors are encouraged to submit their work to a computer vision conference where this kind of work may be more appreciated. Furthermore, including stronger baselines such as Acuna et al. is recommended.""" 217,"""Exploring Model-based Planning with Policy Networks""","['reinforcement learning', 'model-based reinforcement learning', 'planning']","""Model-based reinforcement learning (MBRL) with model-predictive control or online planning has shown great potential for locomotion control tasks in both sample efficiency and asymptotic performance. Despite the successes, the existing planning methods search from candidate sequences randomly generated in the action space, which is inefficient in complex high-dimensional environments. In this paper, we propose a novel MBRL algorithm, model-based policy planning (POPLIN), that combines policy networks with online planning. More specifically, we formulate action planning at each time-step as an optimization problem using neural networks. We experiment with both optimization w.r.t. the action sequences initialized from the policy network, and also online optimization directly w.r.t. the parameters of the policy network. We show that POPLIN obtains state-of-the-art performance in the MuJoCo benchmarking environments, being about 3x more sample efficient than the state-of-the-art algorithms, such as PETS, TD3 and SAC. To explain the effectiveness of our algorithm, we show that the optimization surface in parameter space is smoother than in action space. Furthermore, we find that the distilled policy network can be effectively applied without the expensive model predictive control at test time for some environments such as Cheetah. Code is released.""","""This paper proposes a model-based policy optimization approach that uses both a policy and model to plan online at test time. The paper includes significant contributions and strong results in comparison to a number of prior works, and is quite relevant to the ICLR community. There are a couple of missing related works [1, 2] that combine learned policies and learned models, but generally the discussion of prior work is thorough. Overall, the paper is clearly above the bar for acceptance.
[1] pseudo-url [2] pseudo-url""" 218,"""Variational Diffusion Autoencoders with Random Walk Sampling""","['generative models', 'variational inference', 'manifold learning', 'diffusion maps']","""Variational inference (VI) methods and especially variational autoencoders (VAEs) specify scalable generative models that enjoy an intuitive connection to manifold learning --- with many default priors the posterior/likelihood pair pseudo-formula / pseudo-formula can be viewed as an approximate homeomorphism (and its inverse) between the data manifold and a latent Euclidean space. However, these approximations are well-documented to become degenerate in training. Unless the subjective prior is carefully chosen, the topologies of the prior and data distributions often will not match. Conversely, diffusion maps (DM) automatically \textit{infer} the data topology and enjoy a rigorous connection to manifold learning, but do not scale easily or provide the inverse homeomorphism. In this paper, we propose \textbf{a)} a principled measure for recognizing the mismatch between data and latent distributions and \textbf{b)} a method that combines the advantages of variational inference and diffusion maps to learn a homeomorphic generative model. The measure, the \textit{locally bi-Lipschitz property}, is a sufficient condition for a homeomorphism and easy to compute and interpret. The method, the \textit{variational diffusion autoencoder} (VDAE), is a novel generative algorithm that first infers the topology of the data distribution, then models a diffusion random walk over the data. To achieve efficient computation in VDAEs, we use stochastic versions of both variational inference and manifold learning optimization. We prove approximation theoretic results for the dimension dependence of VDAEs, and that locally isotropic sampling in the latent space results in a random walk over the reconstructed manifold. Finally, we demonstrate the utility of our method on various real and synthetic datasets, and show that it exhibits performance superior to other generative models.""","""This paper proposes to train latent-variable models (VAEs) based on diffusion maps on the data-manifold. While this is an interesting idea, there are substantial problems with the current draft regarding clarity, novelty and scalability. In its current form, it is unlikely that the proposed model will have a substantial impact on the community.""" 219,"""What Can Learned Intrinsic Rewards Capture?""","['reinforcement learning', 'deep reinforcement learning', 'intrinsic movitation']","""Reinforcement learning agents can include different components, such as policies, value functions, state representations, and environment models. Any or all of these can be the loci of knowledge, i.e., structures where knowledge, whether given or learned, can be deposited and reused. Regardless of its composition, the objective of an agent is to behave so as to maximise the sum of suitable scalar functions of state: the rewards. As far as the learning algorithm is concerned, these rewards are typically given and immutable. In this paper we instead consider the proposition that the reward function itself may be a good locus of knowledge. This is consistent with a common use, in the literature, of hand-designed intrinsic rewards to improve the learning dynamics of an agent. We adopt a multi-lifetime setting of the Optimal Rewards Framework, and investigate how meta-learning can be used to find good reward functions in a data-driven way.
To this end, we propose to meta-learn an intrinsic reward function that allows agents to maximise their extrinsic rewards accumulated until the end of their lifetimes. This long-term lifetime objective allows our learned intrinsic reward to generate systematic multi-episode exploratory behaviour. Through proof-of-concept experiments, we elucidate interesting forms of knowledge that may be captured by a suitably trained intrinsic reward, such as the usefulness of exploring uncertain states and rewards.""","""The authors present a metalearning-based approach to learning intrinsic rewards that improve RL performance across distributions of problems. This is essentially a more computationally efficient version of the approaches suggested by Singh et al. (2009/10). The reviewers agreed that the core idea was good, if a bit incremental, but were also concerned about the similarity to the Singh et al. work, the simplicity of the toy domains tested, and the comparison to relevant methods. The reviewers felt that the authors addressed their main concerns and significantly improved the paper; however, the similarity to Singh et al. remains, and thus so do the concerns about incrementalism. Thus, I recommend this paper for rejection at this time.""" 220,"""Multi-step Greedy Policies in Model-Free Deep Reinforcement Learning""","['Reinforcement Learning', 'Multi-step greedy policies', 'Model free Reinforcement Learning']","""Multi-step greedy policies have been extensively used in model-based Reinforcement Learning (RL) and in the case when a model of the environment is available (e.g., in the game of Go). In this work, we explore the benefits of multi-step greedy policies in model-free RL when employed in the framework of multi-step Dynamic Programming (DP): multi-step Policy and Value Iteration. These algorithms iteratively solve short-horizon decision problems and converge to the optimal solution of the original one. By using model-free algorithms as solvers of the short-horizon problems, we derive fully model-free algorithms which are instances of the multi-step DP framework. As model-free algorithms are prone to instabilities w.r.t. the decision problem horizon, this simple approach can help in mitigating these instabilities and results in improved model-free algorithms. We test this approach and show results on both discrete and continuous control problems.""","""This paper extends recent multi-step dynamic programming algorithms to reinforcement learning with function approximation. In particular, the paper extends h-step optimal Bellman operators (and associated k-PI and k-VI algorithms) to deep reinforcement learning. The paper describes new extensions to DQN and TRPO algorithms. This approach is claimed to reduce the instability of model-free algorithms, and the approach is tested on Atari and Mujoco domains. The reviewers noticed several limitations of the work. The reviewers found little theoretical contribution in this work and they were unsatisfied with the empirical contributions. The reviewers were unconvinced of the strength and clarity of the empirical results with the Atari and Mujoco domains, along with the deep learning network architectures. The reviewers suggested that simpler domains with a simpler function approximation scheme could enable more thorough experiments and more conclusive results. The claim in the abstract of addressing the instabilities was also not adequately studied in the paper. This paper is not ready for publication.
The primary contribution of this work is the empirical evaluation, and the evaluation was not sufficiently convincing to the reviewers.""" 221,"""XLDA: Cross-Lingual Data Augmentation for Natural Language Inference and Question Answering""","['cross-lingual', 'transfer learning', 'BERT']","""While natural language processing systems often focus on a single language, multilingual transfer learning has the potential to improve performance, especially for low-resource languages. We introduce XLDA, cross-lingual data augmentation, a method that replaces a segment of the input text with its translation in another language. XLDA enhances performance of all 14 tested languages of the cross-lingual natural language inference (XNLI) benchmark. With improvements of up to 4.8, training with XLDA achieves state-of-the-art performance for Greek, Turkish, and Urdu. XLDA is in contrast to, and performs markedly better than, a more naive approach that aggregates examples in various languages in a way that each example is solely in one language. On the SQuAD question answering task, we see that XLDA provides a 1.0 performance increase on the English evaluation set. Comprehensive experiments suggest that most languages are effective as cross-lingual augmentors, that XLDA is robust to a wide range of translation quality, and that XLDA is even more effective for randomly initialized models than for pretrained models.""","""The authors provide an analysis of a cross-lingual data augmentation technique which they call XLDA. This consists of replacing a segment of an input text with its translation in another language. They show that when fine-tuning, it is more beneficial to train on the cross-lingual hypotheses than on the in-language pairs, especially for low-resource languages such as Greek, Turkish and Urdu. The paper explores an interesting idea; however, it lacks comparisons with other techniques such as backtranslation and XLM models, and it would benefit from a wider range of tasks. I feel this paper is more suitable for an NLP-focussed venue. """ 222,"""Coloring graph neural networks for node disambiguation""","['Graph neural networks', 'separability', 'node disambiguation', 'universal approximation', 'representation learning']","""In this paper, we show that a simple coloring scheme can improve, both theoretically and empirically, the expressive power of Message Passing Neural Networks (MPNNs). More specifically, we introduce a graph neural network called Colored Local Iterative Procedure (CLIP) that uses colors to disambiguate identical node attributes, and show that this representation is a universal approximator of continuous functions on graphs with node attributes. Our method relies on separability, a key topological characteristic that allows extending well-chosen neural networks into universal representations. Finally, we show experimentally that CLIP is capable of capturing structural characteristics that traditional MPNNs fail to distinguish, while being state-of-the-art on benchmark graph classification datasets.""","""This paper presents an extension of MPNN which leverages random color augmentation to improve the representation power of MPNN. The experimental results show the effectiveness of colorization. A majority of the reviewers were particularly concerned about the lack of permutation invariance in the approach as well as the large variance issue in practice, and their opinion stays the same after the rebuttal.
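The augmentation itself, as described in the XLDA abstract above, is a one-line idea: swap one segment of a paired input for its translation into another language. The sketch below assumes an NLI-style (premise, hypothesis) pair and a hypothetical `translate(text, lang)` machine-translation function.

```python
import random

def xlda_example(premise, hypothesis, translate, languages):
    """Cross-lingual data augmentation: translate one side of the pair
    into a randomly chosen language, leaving the other side unchanged."""
    lang = random.choice(languages)
    if random.random() < 0.5:
        return translate(premise, lang), hypothesis
    return premise, translate(hypothesis, lang)
```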
The reviewers unanimously expressed their concerns about the large variance issue during the discussion period. Overall, the reviewers believe that the authors have not addressed their concerns sufficiently.""" 223,"""Neural Arithmetic Units""",[],"""Neural networks can approximate complex functions, but they struggle to perform exact arithmetic operations over real numbers. The lack of inductive bias for arithmetic operations leaves neural networks without the underlying logic necessary to extrapolate on tasks such as addition, subtraction, and multiplication. We present two new neural network components: the Neural Addition Unit (NAU), which can learn exact addition and subtraction; and the Neural Multiplication Unit (NMU) that can multiply subsets of a vector. The NMU is, to our knowledge, the first arithmetic neural network component that can learn to multiply elements from a vector, when the hidden size is large. The two new components draw inspiration from a theoretical analysis of recently proposed arithmetic components. We find that careful initialization, restricting parameter space, and regularizing for sparsity are important when optimizing the NAU and NMU. Our proposed units NAU and NMU, compared with previous neural units, converge more consistently, have fewer parameters, learn faster, can converge for larger hidden sizes, obtain sparse and meaningful weights, and can extrapolate to negative and small values.""","""This paper extends work on NALUs, providing a pair of units which, in tandem, outperform NALUs. The reviewers were broadly in favour of the paper given the presentation and results. The one dissenting reviewer appears to not have had time to reconsider their score despite the main points of clarification being addressed in the revision. I am happy to err on the side of optimism here and assume they would be satisfied with the changes that came as an outcome of the discussion, and recommend acceptance.""" 224,"""AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty""","['robustness', 'uncertainty']","""Modern deep neural networks can achieve high accuracy when the training distribution and test distribution are identically distributed, but this assumption is frequently violated in practice. When the train and test distributions are mismatched, accuracy can plummet. Currently there are few techniques that improve robustness to unforeseen data shifts encountered during deployment. In this work, we propose a technique to improve the robustness and uncertainty estimates of image classifiers. We propose AugMix, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions. AugMix significantly improves robustness and uncertainty measures on challenging image classification benchmarks, closing the gap between previous methods and the best possible performance in some cases by more than half. ""","""This paper tackles the problem of learning under data shift, i.e. when the training and testing distributions are different. The authors propose an approach to improve the robustness and uncertainty estimates of image classifiers in this situation. The technique uses synthetic samples created by mixing multiple augmented images, in addition to a Jensen-Shannon Divergence consistency loss. Its evaluation is entirely based on experimental evidence. The method is simple, easy to implement, and effective. Though this is a purely empirical paper, the experiments are extensive and convincing.
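A hedged sketch of the mixing step behind the AugMix abstract above: several randomly sampled augmentation chains are combined with Dirichlet weights, and the mixture is then interpolated with the original image. The Dirichlet/Beta parameterization follows the published algorithm as commonly described, but `augment_ops` and the chain depth are illustrative assumptions.

```python
import numpy as np

def augmix(image, augment_ops, width=3, depth=2, alpha=1.0, rng=None):
    """Blend `width` random augmentation chains of a float image in [0, 1],
    then interpolate the blend with the original image."""
    rng = rng or np.random.default_rng()
    chain_weights = rng.dirichlet([alpha] * width)  # convex weights over chains
    m = rng.beta(alpha, alpha)                      # original-vs-mixture weight
    mix = np.zeros_like(image)
    for w in chain_weights:
        chained = image.copy()
        for _ in range(depth):
            op = augment_ops[rng.integers(len(augment_ops))]
            chained = op(chained)
        mix += w * chained
    return (1 - m) * image + m * mix
```

In training, this mixing is paired with the Jensen-Shannon consistency loss mentioned in the abstract, computed between the model's predictions on the clean image and on augmented views.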
In the end, the reviewers raised no objections to this paper. I therefore recommend acceptance.""" 225,"""Variational Template Machine for Data-to-Text Generation""",[],"""How to generate descriptions from structured data organized in tables? Existing approaches using neural encoder-decoder models often suffer from a lack of diversity. We claim that an open set of templates is crucial for enriching the phrase constructions and realizing varied generations. Learning such templates is prohibitive since it often requires a large paired corpus, which is seldom available. This paper explores the problem of automatically learning reusable ""templates"" from paired and non-paired data. We propose the variational template machine (VTM), a novel method to generate text descriptions from data tables. Our contributions include: a) we carefully devise a specific model architecture and losses to explicitly disentangle text template and semantic content information in the latent spaces, and b) we utilize both small parallel data and large raw text without aligned tables to enrich the template learning. Experiments on datasets from a variety of different domains show that VTM is able to generate more diversely while maintaining good fluency and quality. ""","""The paper addresses the problem of generating descriptions from structured data. In particular, they propose a Variational Template Machine, which explicitly disentangles templates from semantic content. They empirically demonstrate that their model performs better than existing methods on different datasets. This paper has received a strong acceptance from two reviewers. In particular, the reviewers have appreciated the novelty and empirical evaluation of the proposed approach. R3 has raised quite a few concerns, but I feel they were adequately addressed by the authors. Hence, I recommend that the paper be accepted. """ 226,"""Potential Flow Generator with pseudo-formula Optimal Transport Regularity for Generative Models""","['generative models', 'optimal transport', 'GANs', 'flow-based models']","""We propose a potential flow generator with pseudo-formula optimal transport regularity, which can be easily integrated into a wide range of generative models including different versions of GANs and flow-based models. With only a slight augmentation of the original generator loss functions, our generator is not only a transport map from the input distribution to the target one, but also the one with minimum pseudo-formula transport cost. We show the correctness and robustness of the potential flow generator in several 2D problems, and illustrate the concept of ``proximity'' due to the pseudo-formula optimal transport regularity. Subsequently, we demonstrate the effectiveness of the potential flow generator in image translation tasks with unpaired training data from the MNIST dataset and the CelebA dataset. ""","""This paper proposes applying potential flow generators in conjunction with L2 optimal transport regularity to favor solutions that ""move"" input points as little as possible to output points drawn from the target distribution. The resulting pipeline can be effective in dealing with, among other things, image-to-image translation tasks with unpaired data. Overall, one of the appeals of this methodology is that it can be integrated within a number of existing generative modeling paradigms (e.g., GANs, etc.). After the rebuttal and discussion period, two reviewers maintained weak reject scores while one favored strong acceptance.
With these borderline/mixed scores, this paper was discussed at the meta-review level and the final decision was to side with the majority, noting that a revision which fully addresses reviewer comments could likely be successful at a future venue. As one important lingering issue, R1 pointed out that the optimality conditions of the proposed approach are only enforced on sampled trajectories, not actually on the entire space. The rebuttal concedes this point, but suggests that the method still seems to work. But as an improvement, the suggestion is made that randomly perturbed trajectories could help to mitigate this issue. However, no experiments were conducted using this modification, which could be helpful in building confidence in the reliability of the overall methodology. Additionally, from my perspective, the empirical validation could also be improved to help solidify the contribution in a revision. For example, the image-to-image translation experiments with CelebA were based on a linear (PCA) embedding and feedforward networks. It would have been nice to have seen a more sophisticated setup for this purpose (as discussed in Section 5), especially for a non-theoretical paper with an ostensibly practically-relevant algorithmic proposal. And consistent with reviewer comments, the paper definitely needs another pass to clean up a number of small grammatical mistakes.""" 227,"""CLEVRER: Collision Events for Video Representation and Reasoning""","['Neuro-symbolic', 'Reasoning']","""The ability to reason about temporal and causal events from videos lies at the core of human intelligence. Most video reasoning benchmarks, however, focus on pattern recognition from complex visual and language input, instead of on causal structure. We study the complementary problem, exploring the temporal and causal structures behind videos of objects with simple visual appearance. To this end, we introduce the CoLlision Events for Video REpresentation and Reasoning (CLEVRER) dataset, a diagnostic video dataset for systematic evaluation of computational models on a wide range of reasoning tasks. Motivated by the theory of human causal judgment, CLEVRER includes four types of questions: descriptive (e.g., what color), explanatory (what's responsible for), predictive (what will happen next), and counterfactual (what if). We evaluate various state-of-the-art models for visual reasoning on our benchmark. While these models thrive on the perception-based task (descriptive), they perform poorly on the causal tasks (explanatory, predictive and counterfactual), suggesting that a principled approach for causal reasoning should incorporate the capability of both perceiving complex visual and language inputs, and understanding the underlying dynamics and causal relations. We also study an oracle model that explicitly combines these components via symbolic representations. ""","""The reviewers are unanimous in their opinion that this paper offers a novel approach to causal learning. I concur.""" 228,"""On the Weaknesses of Reinforcement Learning for Neural Machine Translation""","['Reinforcement learning', 'MRT', 'minimum risk training', 'reinforce', 'machine translation', 'peakkiness', 'generation']","""Reinforcement learning (RL) is frequently used to increase performance in text generation tasks, including machine translation (MT), notably through the use of Minimum Risk Training (MRT) and Generative Adversarial Networks (GAN). However, little is known about what and how these methods learn in the context of MT.
We prove that one of the most common RL methods for MT does not optimize the expected reward, and show that other methods take an infeasibly long time to converge. In fact, our results suggest that RL practices in MT are likely to improve performance only where the pre-trained parameters are already close to yielding the correct translation. Our findings further suggest that observed gains may be due to effects unrelated to the training signal, concretely, changes in the shape of the distribution curve.""","""In my opinion, the main strength of this work is the theoretical analysis and some observations that may be of great interest to the NLP community in terms of better analyzing the performance of RL (and ""RL-like"") methods as optimizers. The main weakness, as pointed out by R3, is the limited empirical analysis. I would urge the authors to take R3's advice and attempt insofar as possible to broaden the scope of the empirical analysis in the final version. I believe that this is important for the paper to be able to make its case convincingly. Nonetheless, I do think that the paper makes a significant contribution that will be of interest to the community, and should be presented at ICLR. Therefore, I would recommend for it to be accepted.""" 229,"""Parallel Scheduled Sampling""","['deep learning', 'generative models', 'teacher forcing', 'scheduled sampling']","""Auto-regressive models are widely used in sequence generation problems. The output sequence is typically generated in a predetermined order, one discrete unit (pixel, word, or character) at a time. The models are trained by teacher-forcing where ground-truth history is fed to the model as input, which at test time is replaced by the model prediction. Scheduled Sampling (Bengio et al., 2015) aims to mitigate this discrepancy between train and test time by randomly replacing some discrete units in the history with the model's prediction. While teacher-forced training works well with ML accelerators as the computation can be parallelized across time, Scheduled Sampling involves undesirable sequential processing. In this paper, we introduce a simple technique to parallelize Scheduled Sampling across time. Experimentally, we find the proposed technique leads to equivalent or better performance on image generation, summarization, dialog generation, and translation compared to teacher-forced training. In the dialog response generation task, Parallel Scheduled Sampling achieves a 1.6 BLEU score (11.5%) improvement over teacher-forcing, while in image generation it achieves 20% and 13.8% improvements in Fréchet Inception Distance (FID) and Inception Score (IS), respectively. Further, we discuss the effects of different hyper-parameters associated with Scheduled Sampling on the model performance.""","""The paper proposes a parallelization approach for speeding up scheduled sampling, and shows significant improvement over the original. The approach is simple and a clear improvement over vanilla scheduled sampling. However, the reviewers point out that there are more recent methods to compare against or combine with, and that the paper is a bit thin on content and could have addressed this.
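For illustration, a minimal sketch of our reading of the parallelization idea: one teacher-forced pass yields predictions for all positions at once, which are then randomly mixed into the history (the model stub, vocabulary size, and mixing probability below are hypothetical):

```python
# Minimal sketch: scheduled sampling parallelized by mixing gold tokens with
# predictions from a single teacher-forced pass, instead of sampling tokens
# sequentially one timestep at a time.
import numpy as np

rng = np.random.default_rng(0)

def teacher_forced_predictions(history):
    # Stand-in for one parallel decoder pass conditioned on the given history;
    # a real model would return its sampled/argmax token at every position.
    return rng.integers(0, 100, size=history.shape)

def mix_history(history, mix_prob):
    preds = teacher_forced_predictions(history)
    keep_gold = rng.random(history.shape) >= mix_prob
    return np.where(keep_gold, history, preds)

gold = rng.integers(0, 100, size=(4, 16))   # (batch, time) token ids
mixed = gold
for _ in range(2):                          # a few passes refine the mixed history
    mixed = mix_history(mixed, mix_prob=0.25)
print((mixed != gold).mean())               # fraction of positions replaced
```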
The proposed approach may combine well with newer techniques, but I tend to agree that this should be tested.""" 230,"""Fully Polynomial-Time Randomized Approximation Schemes for Global Optimization of High-Dimensional Folded Concave Penalized Generalized Linear Models""","['statistical learning', 'FPRAS', 'global optimization', 'folded concave penalty', 'GLM', 'high dimensional learning']","""Global solutions to high-dimensional sparse estimation problems with a folded concave penalty (FCP) have been shown to be statistically desirable but are strongly NP-hard to compute, which implies the non-existence of a pseudo-polynomial time global optimization scheme in the worst case. This paper shows that, with high probability, a global solution to the formulation for an FCP-based high-dimensional generalized linear model coincides with a stationary point characterized by the significant subspace second order necessary conditions (S pseudo-formula ONC). Since the desired S pseudo-formula ONC solution admits a fully polynomial-time approximation scheme (FPTAS), we thus have shown the existence of a fully polynomial-time randomized approximation scheme (FPRAS) for a strongly NP-hard problem. We further demonstrate two versions of the FPRAS for generating the desired S pseudo-formula ONC solutions. One follows the paradigm of an interior point trust region algorithm and the other is the well-studied local linear approximation (LLA). Our analysis thus provides new techniques for global optimization of certain NP-Hard problems and new insights on the effectiveness of LLA.""","""Thanks for your detailed feedback to the reviewers, which clarified many points for us. However, the novelty of this paper is rather marginal and given the high competition at ICLR2020, this paper is unfortunately below the bar. We hope that the reviewers' comments are useful for improving the paper for potential future publication. """ 231,"""Stochastic Conditional Generative Networks with Basis Decomposition""",[],"""While generative adversarial networks (GANs) have revolutionized machine learning, a number of open questions remain to fully understand them and exploit their power. One of these questions is how to efficiently achieve proper diversity and sampling of the multi-mode data space. To address this, we introduce BasisGAN, a stochastic conditional multi-mode image generator. By exploiting the observation that a convolutional filter can be well approximated as a linear combination of a small set of basis elements, we learn a plug-and-play basis generator to stochastically generate basis elements, with just a few hundred parameters, to fully embed stochasticity into convolutional filters. By sampling basis elements instead of filters, we dramatically reduce the cost of modeling the parameter space with no sacrifice in either image diversity or fidelity. To illustrate this proposed plug-and-play framework, we construct variants of BasisGAN based on state-of-the-art conditional image generation networks, and train the networks by simply plugging in a basis generator, without additional auxiliary components, hyperparameters, or training objectives.
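A toy sketch of the basis-decomposition idea just described, with hypothetical shapes and a random stand-in for the learned basis generator:

```python
# Illustrative sketch: convolutional filters formed as linear combinations of a
# small set of basis elements; stochasticity enters by sampling the basis
# (few parameters) rather than the filters themselves.
import numpy as np

rng = np.random.default_rng(0)

n_basis, k = 8, 3                  # a few basis elements, 3x3 filters (hypothetical)
out_ch, in_ch = 64, 32

def sample_basis(noise_dim=16):
    # Stand-in for the learned basis generator: maps noise to basis elements.
    z = rng.normal(size=(noise_dim,))
    proj = rng.normal(size=(noise_dim, n_basis * k * k)) / np.sqrt(noise_dim)
    return (z @ proj).reshape(n_basis, k, k)

coeffs = rng.normal(size=(out_ch, in_ch, n_basis))   # learned, deterministic
basis = sample_basis()                               # stochastic, few parameters
filters = np.einsum("oin,nkl->oikl", coeffs, basis)  # (out_ch, in_ch, k, k)
print(filters.shape)
```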
The experimental success is complemented with theoretical results indicating how the perturbations introduced by the proposed sampling of basis elements can propagate to the appearance of generated images.""","""Main content: BasisGAN, a novel method for introducing stochasticity in conditional GANs. Summary of discussion: reviewer1: interesting work and results on GANs. The reviewer had a question on the pre-defined basis, but I think it was answered by the authors. reviewer3: interesting and novel work on GANs, well-written paper that improves on SOTA. The main question is again around the bases, as with reviewer 1, but it seems the authors have addressed this. reviewer4: Novel, interesting work. The main comments are around making Theorem 1 more theoretically correct, which it sounds like the authors addressed. Recommendation: Poster. Well-written and novel paper, and the authors addressed most of the concerns. """ 232,"""Graph Neural Networks for Soft Semi-Supervised Learning on Hypergraphs""","['Graph Neural Networks', 'Soft Semi-supervised Learning', 'Hypergraphs']","""Graph-based semi-supervised learning (SSL) assigns labels to initially unlabelled vertices in a graph. Graph neural networks (GNNs), esp. graph convolutional networks (GCNs), inspired the current state-of-the-art models for graph-based SSL problems. GCNs inherently assume that the labels of interest are numerical or categorical variables. However, in many real-world applications such as co-authorship networks, recommendation networks, etc., vertex labels can be naturally represented by probability distributions or histograms. Moreover, real-world network datasets have complex relationships going beyond pairwise associations. These relationships can be modelled naturally and flexibly by hypergraphs. In this paper, we explore GNNs for graph-based SSL of histograms. Motivated by complex relationships (those going beyond pairwise) in real-world networks, we propose a novel method for directed hypergraphs. Our work builds upon existing works on graph-based SSL of histograms derived from the theory of optimal transportation. A key contribution of this paper is to establish generalisation error bounds for a one-layer GNN within the framework of algorithmic stability. We also demonstrate our proposed methods' effectiveness through detailed experimentation on real-world data. We have made the code available.""","""This paper proposes and evaluates using graph convolutional networks for semi-supervised learning of probability distributions (histograms). The paper was reviewed by three experts, all of whom gave a Weak Reject rating. The reviewers acknowledged the strengths of the paper, but also had several important concerns including quality of writing and significance of the contribution, in addition to several more specific technical questions. The authors submitted a response that addressed these concerns to some extent. However, in post-rebuttal discussions, the reviewers chose not to change their ratings, feeling that quality of writing still needed to be improved and that overall a significant revision and another round of peer review would be needed. In light of these reviews, we are not able to recommend accepting the paper, but hope the authors will find the suggestions of the reviewers helpful in preparing a revision for another venue.
""" 233,"""Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation""","['adversarial training', 'generative models', 'unpaired video translation', 'video super-resolution', 'temporal coherence', 'self-supervision', 'cycle-consistency']","""We focus on temporal self-supervision for GAN-based video generation tasks. While adversarial training successfully yields generative models for a variety of areas, temporal relationship in the generated data is much less explored. This is crucial for sequential generation tasks, e.g. video super-resolution and unpaired video translation. For the former, state-of-the-art methods often favor simpler norm losses such as L2 over adversarial training. However, their averaging nature easily leads to temporally smooth results with an undesirable lack of spatial detail. For unpaired video translation, existing approaches modify the generator networks to form spatio-temporal cycle consistencies. In contrast, we focus on improving the learning objectives and propose a temporally self-supervised algorithm. For both tasks, we show that temporal adversarial learning is key to achieving temporally coherent solutions without sacrificing spatial detail. We also propose a novel Ping-Pong loss to improve the long-term temporal consistency. It effectively prevents recurrent networks from accumulating artifacts temporally without depressing detailed features. We also propose a first set of metrics to quantitatively evaluate the accuracy as well as the perceptual quality of the temporal evolution. A series of user studies confirms the rankings computed with these metrics.""","""The paper presents an architecture for conditional video generation tasks with temporal self-supervision and temporal adversarial learning. The proposed architecture is reasonable but looks somewhat complicated. In terms of technical novelty, the so-called ""ping-pong"" loss looks interesting and novel, but other parts are more-or-less some combinations of existing techniques. Experimental results show promise of the proposed method against selected baselines for video super-resolution (VSR) and unpaired video-to-video translation tasks (UVT). In terms of weakness, (1) the technical novelty is not very high; (2) the final loss is a combination of many losses with many hyperparameters; (3) experimentally the proposed method is not compared against recent SOTA methods on VSR and UVT. The proposed method should be compared against more recent SOTA baselines for VSR tasks (see examples of references below): EDVR: Video Restoration with Enhanced Deformable Convolutional Networks pseudo-url Progressive Fusion Video Super-Resolution Network via Exploiting Non-Local Spatio-Temporal Correlations ICCV 2019 Recurrent Back-Projection Network for Video Super-Resolution CVPR 2019 The same comment would apply for baselines for UVT tasks: Mocycle-GAN: Unpaired Video-to-Video Translation pseudo-url Preserving Semantic and Temporal Consistency for Unpaired Video-to-Video Translation pseudo-url Particularly for UVT, the evaluated dataset seems limited in terms of scope as well (i.e., evaluations on more popular benchmarks, such as Viper would be needed for further validation). Overall, given that the contribution of this work is an empirical performance with a rather complex architecture/loss, more comprehensive empirical evaluations on SOTA baselines are warranted. 
""" 234,"""Training Generative Adversarial Networks from Incomplete Observations using Factorised Discriminators""","['Adversarial Learning', 'Semi-supervised Learning', 'Image generation', 'Image segmentation', 'Missing Data']","""Generative adversarial networks (GANs) have shown great success in applications such as image generation and inpainting. However, they typically require large datasets, which are often not available, especially in the context of prediction tasks such as image segmentation that require labels. Therefore, methods such as the CycleGAN use more easily available unlabelled data, but do not offer a way to leverage additional labelled data for improved performance. To address this shortcoming, we show how to factorise the joint data distribution into a set of lower-dimensional distributions along with their dependencies. This allows splitting the discriminator in a GAN into multiple ""sub-discriminators"" that can be independently trained from incomplete observations. Their outputs can be combined to estimate the density ratio between the joint real and the generator distribution, which enables training generators as in the original GAN framework. We apply our method to image generation, image segmentation and audio source separation, and obtain improved performance over a standard GAN when additional incomplete training examples are available. For the Cityscapes segmentation task in particular, our method also improves accuracy by an absolute 14.9% over CycleGAN while using only 25 additional paired examples.""","""All three reviewers appreciate the new method (FactorGAN) for training generative networks from incomplete observations. At the same time, the quality of the experimental results can still be improved. On balance, the paper will make a good poster.""" 235,"""Rnyi Fair Inference""",[],"""Machine learning algorithms have been increasingly deployed in critical automated decision-making systems that directly affect human lives. When these algorithms are solely trained to minimize the training/test error, they could suffer from systematic discrimination against individuals based on their sensitive attributes, such as gender or race. Recently, there has been a surge in machine learning society to develop algorithms for fair machine learning. In particular, several adversarial learning procedures have been proposed to impose fairness. Unfortunately, these algorithms either can only impose fairness up to linear dependence between the variables, or they lack computational convergence guarantees. In this paper, we use Rnyi correlation as a measure of fairness of machine learning models and develop a general training framework to impose fairness. In particular, we propose a min-max formulation which balances the accuracy and fairness when solved to optimality. For the case of discrete sensitive attributes, we suggest an iterative algorithm with theoretical convergence guarantee for solving the proposed min-max problem. Our algorithm and analysis are then specialized to fair classification and fair clustering problems. To demonstrate the performance of the proposed Rnyi fair inference framework in practice, we compare it with well-known existing methods on several benchmark datasets. Experiments indicate that the proposed method has favorable empirical performance against state-of-the-art approaches.""","""The paper addresses the problem of fair representation learning. 
The authors propose to use Rényi correlation as a measure of (in)dependence between the predictor and the sensitive attribute, and develop a general training framework, with theoretical guarantees, to impose fairness. The empirical evaluations were performed using standard fairness benchmarks and SOTA baselines -- all this supports the main claims of this work's contributions. All the reviewers and AC agree that this work has made a valuable contribution and recommend acceptance. Congratulations to the authors! """ 236,"""Homogeneous Linear Inequality Constraints for Neural Network Activations""","['deep learning', 'constrained optimization']","""We propose a method to impose homogeneous linear inequality constraints of the form $Ax \le 0$ on neural network activations. The proposed method allows a data-driven training approach to be combined with modeling prior knowledge about the task. One way to achieve this task is by means of a projection step at test time after unconstrained training. However, this is an expensive operation. By directly incorporating the constraints into the architecture, we can significantly speed up inference at test time; for instance, our experiments show a speed-up of up to two orders of magnitude over a projection method. Our algorithm computes a suitable parameterization of the feasible set at initialization and uses standard variants of stochastic gradient descent to find solutions to the constrained network. Thus, the modeling constraints are always satisfied during training. Crucially, our approach avoids solving an optimization problem at each training step and manually trading off data and constraint fidelity with additional hyperparameters. We consider constrained generative modeling as an important application domain and experimentally demonstrate the proposed method by constraining a variational autoencoder.""","""The authors propose a framework for incorporating homogeneous linear inequality constraints on neural network activations into neural network architectures. The authors show that this enables training neural networks that are guaranteed to satisfy non-trivial constraints on the neurons in a manner that is significantly more scalable than prior work, and demonstrate this experimentally on a generative modelling task. The problem considered in the paper is certainly significant (training neural networks that are guaranteed to satisfy constraints arises in many applications) and the authors make some interesting contributions. However, the reviewers found the following issues that make it difficult to accept the paper in its present form: 1) The setting of homogeneous linear inequality constraints is not well-motivated and the significance of being able to impose such constraints is not clearly articulated in the paper. The authors would do well to prepare a future revision documenting use-cases motivated by practical applications and add these to the paper. 2) The experimental evaluation is not sufficiently thorough: the authors evaluate their method on an artificial constraint involving a ""checkerboard pattern"" on MNIST. Even in this case, the training method proposed by the authors seems to suffer from some issues, and more thorough experiments need to be conducted to confirm that the training method can perform well across a variety of datasets and constraints. Given these issues, I recommend rejection.
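A toy sketch of one way such constraints can be satisfied by construction (our illustrative reading, not necessarily the authors' algorithm): parameterize outputs as nonnegative combinations of precomputed generators of the feasible cone. The constraint matrix and generators below are hypothetical:

```python
# Illustrative sketch: satisfying homogeneous linear inequality constraints
# exactly, by construction, rather than via a projection step at test time.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[1.0, -1.0, 0.0]])        # hypothetical constraint: x1 - x2 <= 0
# Hypothetical precomputed generators of (part of) {x : A x <= 0}; in general
# such generators come from a cone-enumeration step performed at initialization.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, -1.0]]).T       # columns are cone rays

def constrained_activation(z):
    # Nonnegative coefficients guarantee A @ x <= 0 for every output.
    return B @ np.maximum(z, 0.0)

x = constrained_activation(rng.normal(size=(4,)))
assert np.all(A @ x <= 1e-9)             # constraint holds by construction
print(x)
```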
However, I encourage the authors to revise their work on this important topic and prepare a future version including practical examples of the constraints and experiments on a variety of prediction tasks. """ 237,"""Graph Warp Module: an Auxiliary Module for Boosting the Power of Graph Neural Networks in Molecular Graph Analysis""","['Graph Neural Networks', 'molecular graph analysis', 'supernode', 'auxiliary module']","""The Graph Neural Network (GNN) is a popular architecture for the analysis of chemical molecules, and it has numerous applications in material and medicinal science. Current GNNs developed for molecular analysis, however, do not fit the training set well, and their performance does not scale well with the complexity of the network. In this paper, we propose an auxiliary module to be attached to a GNN that can boost the representation power of the model without hindering the original GNN architecture. Our auxiliary module can improve the representation power and the generalization ability of a wide variety of GNNs, including those that are used commonly in biochemical applications. ""","""This paper presents an auxiliary module to boost the representation power of GNNs. The new module consists of a virtual supernode, an attention unit, and a warp gate unit. The usefulness of each component is shown in well-organized experiments. This is a very borderline paper with split scores. While all reviewers basically agree that the empirical findings in the paper are interesting and could be valuable to the community, one reviewer raised concerns regarding the incremental novelty of the method, a point also acknowledged by the other reviewers. This impression did not change through the authors' response and the reviewer discussion, and no reviewer felt strongly enough to champion the paper. Therefore, I'd like to recommend rejection this time. """ 238,"""Depth creates no more spurious local minima in linear networks""","['local minimum', 'deep linear network']","""We show that for any convex differentiable loss, a deep linear network has no spurious local minima as long as it is true for the two-layer case. This reduction greatly simplifies the study on the existence of spurious local minima in deep linear networks. When applied to the quadratic loss, our result immediately implies the powerful result by Kawaguchi (2016). Further, with the recent work by Zhou & Liang (2018), we can remove all the assumptions in (Kawaguchi, 2016). This property holds for more general multi-tower linear networks too. Our proof builds on the work in (Laurent & von Brecht, 2018) and develops a new perturbation argument to show that any spurious local minimum must have full rank, a structural property which can be useful more generally.""","""The paper shows that the question of linear deep networks having spurious local minima under benign conditions on the loss function can be reduced to the two-layer case. This paper is motivated by and builds upon works that are proven for specific cases. Reviewers found the techniques used to prove the result not very novel in light of existing techniques. Novelty of technique is of particular importance to this area because these results have little practical value in linear networks on their own; the goal is to extend these techniques to the more interesting non-linear case.
""" 239,"""Iterative Target Augmentation for Effective Conditional Generation""","['data augmentation', 'generative models', 'self-training', 'molecular optimization', 'program synthesis']","""Many challenging prediction problems, from molecular optimization to program synthesis, involve creating complex structured objects as outputs. However, available training data may not be sufficient for a generative model to learn all possible complex transformations. By leveraging the idea that evaluation is easier than generation, we show how a simple, broadly applicable, iterative target augmentation scheme can be surprisingly effective in guiding the training and use of such models. Our scheme views the generative model as a prior distribution, and employs a separately trained filter as the likelihood. In each augmentation step, we filter the model's outputs to obtain additional prediction targets for the next training epoch. Our method is applicable in the supervised as well as semi-supervised settings. We demonstrate that our approach yields significant gains over strong baselines both in molecular optimization and program synthesis. In particular, our augmented model outperforms the previous state-of-the-art in molecular optimization by over 10% in absolute gain. ""","""This paper proposes a training scheme to enhance the optimization process where the outputs are required to meet certain constraints. The authors propose to insert an additional target augmentation phase after the regular training. For each datapoint, the algorithm samples candidate outputs until it find a valid output according the an external filter. The model is further fine-tuned on the augmented dataset. The authors provided detailed answers and responses to the reviews, which the reviewers appreciated. However, some significant concerns remained, and due to a large number of stronger papers, this paper was not accepted at this time.""" 240,"""Composition-based Multi-Relational Graph Convolutional Networks""","['Graph Convolutional Networks', 'Multi-relational Graphs', 'Knowledge Graph Embeddings', 'Link Prediction']","""Graph Convolutional Networks (GCNs) have recently been shown to be quite successful in modeling graph-structured data. However, the primary focus has been on handling simple undirected graphs. Multi-relational graphs are a more general and prevalent form of graphs where each edge has a label and direction associated with it. Most of the existing approaches to handle such graphs suffer from over-parameterization and are restricted to learning representations of nodes only. In this paper, we propose CompGCN, a novel Graph Convolutional framework which jointly embeds both nodes and relations in a relational graph. CompGCN leverages a variety of entity-relation composition operations from Knowledge Graph Embedding techniques and scales with the number of relations. It also generalizes several of the existing multi-relational GCN methods. We evaluate our proposed method on multiple tasks such as node classification, link prediction, and graph classification, and achieve demonstrably superior results. We make the source code of CompGCN available to foster reproducible research.""","""This paper proposes and evaluates a formulation of graph convolutional networks for multi-relation graphs. The paper was reviewed by three experts working in this area and received three Weak Accept decisions. 
The reviewers identified some concerns, including novelty with respect to existing work and specific details of the experimental setup and results that were unclear. The authors have addressed most of these concerns in their response, including adding a table that explicitly explains the contribution with respect to existing work and clarifying the missing details. Given the unanimous Weak Accept decision, the ACs also recommend Accept as a poster.""" 241,"""Neural Stored-program Memory""","['Memory Augmented Neural Networks', 'Universal Turing Machine', 'fast-weight']","""Neural networks powered by external memory can simulate computer behaviors. These models, which use the memory to store data for a neural controller, can learn algorithms and other complex tasks. In this paper, we introduce a new memory to store weights for the controller, analogous to the stored-program memory in modern computer architectures. The proposed model, dubbed Neural Stored-program Memory, augments current memory-augmented neural networks, creating differentiable machines that can switch programs through time, adapt to variable contexts and thus fully resemble the Universal Turing Machine. A wide range of experiments demonstrate that the resulting machines not only excel in classical algorithmic problems, but also have potential for compositional, continual, few-shot learning and question-answering tasks. ""","""This paper presents the neural stored-program memory, which is a key-value memory that is used to store weights for another neural network, analogous to having programs in computers. They provide an extensive set of experiments in various domains to show the benefit of the proposed method, including synthetic tasks and few-shot learning experiments. This is an interesting paper proposing a new idea. We discussed this submission extensively, and based on our discussion I recommend accepting it. A few final comments from reviewers for the authors: - Please try to make the paper a bit more self-contained so that it is more useful to a general audience. This can be done by either making more space in the main text (e.g., reducing the size of Figure 1, reducing space between sections, table captions and text, etc.) or adding more details in the Appendix. Importantly, your formatting is a bit off. Please use the correct style file; it will give you more space. All reviewers agree that the paper is missing some important details that would improve the paper. - Please cite the original fast weight paper by Malsburg (1981). - Regarding fast-weights using outer products, this was actually first done in the 1993 paper, not the 2016 and 2017 papers.""" 242,"""Can gradient clipping mitigate label noise?""",[],"""Gradient clipping is a widely-used technique in the training of deep networks, and is generally motivated through an optimisation lens: informally, it controls the dynamics of iterates, thus enhancing the rate of convergence to a local minimum. This intuition has been made precise in a line of recent works, which show that suitable clipping can yield significantly faster convergence than vanilla gradient descent. In this paper, we propose a new lens for studying gradient clipping, namely, robustness: informally, one expects clipping to provide robustness to noise, since one does not overly trust any single sample. Surprisingly, we prove that for the common problem of label noise in classification, standard gradient clipping does not in general provide robustness.
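For reference, a minimal sketch contrasting standard gradient-norm clipping with a loss-level alternative; the "partially Huberised" cross-entropy below is our recollection of one such variant from the literature, so treat its exact form as an assumption rather than the paper's definitive proposal:

```python
# Minimal sketch: standard clipping rescales the gradient vector, while a
# loss-level variant instead linearises the loss where it is steep, keeping
# the per-sample gradient bounded even when the true-class probability is tiny.
import numpy as np

def clip_by_norm(g, max_norm):
    norm = np.linalg.norm(g)
    return g if norm <= max_norm else g * (max_norm / norm)

def phuber_cross_entropy(p_true, tau=2.0):
    # Assumed form: linear with slope -tau for p <= 1/tau, -log p above it;
    # the two pieces match in value and slope at p = 1/tau.
    p_true = np.asarray(p_true, dtype=float)
    return np.where(p_true <= 1.0 / tau,
                    -tau * p_true + np.log(tau) + 1.0,
                    -np.log(p_true))

print(clip_by_norm(np.array([3.0, 4.0]), max_norm=1.0))   # rescaled to norm 1
print(phuber_cross_entropy([0.01, 0.9]))                  # bounded even at p ~ 0
```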
On the other hand, we show that a simple variant of gradient clipping is provably robust, and corresponds to suitably modifying the underlying loss function. This yields a simple, noise-robust alternative to the standard cross-entropy loss which performs well empirically.""","""This paper studies the effect of clipping on mitigating label noise. The authors demonstrate that standard gradient clipping does not suffice for achieving robustness to label noise. The authors suggest a noise-robust alternative. In the discussion the reviewers raised some interesting questions and technical details but mostly agreed that the paper is well-written with nice contributions. I concur with the reviewers that this is a nicely written paper with good contributions. I recommend acceptance but encourage the authors to continue improving their paper based on the reviewers' suggestions.""" 243,"""A Theoretical Analysis of Deep Q-Learning""","['reinforcement learning', 'deep Q network', 'minimax-Q learning', 'zero-sum Markov Game']","""Despite the great empirical success of deep reinforcement learning, its theoretical foundation is less well understood. In this work, we make the first attempt to theoretically understand the deep Q-network (DQN) algorithm (Mnih et al., 2015) from both algorithmic and statistical perspectives. Specifically, we focus on a slight simplification of DQN that fully captures its key features. Under mild assumptions, we establish the algorithmic and statistical rates of convergence for the action-value functions of the iterative policy sequence obtained by DQN. In particular, the statistical error characterizes the bias and variance that arise from approximating the action-value function using a deep neural network, while the algorithmic error converges to zero at a geometric rate. As a byproduct, our analysis provides justifications for the techniques of experience replay and target network, which are crucial to the empirical success of DQN. Furthermore, as a simple extension of DQN, we propose the Minimax-DQN algorithm for two-player zero-sum Markov games, which is deferred to the appendix due to space limitations.""","""The authors offer theoretical guarantees for a simplified version of the deep Q-learning algorithm. However, the majority of the reviewers agree that the simplifying assumptions are so many that the results do not capture major important aspects of deep Q-learning (e.g. understanding good exploration strategies, understanding why deep nets are better approximators, and not using neural net classes that are so large that they can capture all non-parametric functions). To justify calling the paper a theoretical analysis of deep Q-learning, some of these aspects need to be addressed, or the motivation/title of the paper needs to be redefined. """ 244,"""Learning to Reason: Distilling Hierarchy via Self-Supervision and Reinforcement Learning""","['Reinforcement learning', 'Self-supervised learning', 'unsupervised learning', 'representation learning']","""We present a hierarchical planning and control framework that enables an agent to perform various tasks and adapt to a new task flexibly. Rather than learning an individual policy for each particular task, the proposed framework, DISH, distills a hierarchical policy from a set of tasks by self-supervision and reinforcement learning. The framework is based on the idea of latent variable models that represent high-dimensional observations using low-dimensional latent variables.
The resulting policy consists of two levels of hierarchy: (i) a planning module that reasons over a sequence of latent intentions that would lead to an optimistic future, and (ii) a feedback control policy, shared across tasks, that executes the inferred intention. Because the reasoning is performed in a low-dimensional latent space, the learned policy can immediately be used to solve or adapt to new tasks without additional training. We demonstrate the proposed framework can learn compact representations (3-dimensional latent states for a 90-dimensional humanoid system) while solving a small number of imitation tasks, and the resulting policy is directly applicable to other types of tasks, e.g., navigation in cluttered environments.""","""The authors present a self-supervised framework for learning a hierarchical policy in reinforcement learning tasks that combines a high-level planner over learned latent goals with a shared low-level goal-completing control policy. The reviewers had significant concerns about both problem positioning (w.r.t. existing work) and writing clarity, as well as the fact that all comparative experiments were ablations, rather than comparisons to prior work. While the reviewers agreed that the authors reasonably resolved issues of clarity, there was not agreement that concerns about positioning w.r.t. prior work and experimental comparisons were sufficiently resolved. Thus, I recommend rejecting this paper at this time.""" 245,"""Poisoning Attacks with Generative Adversarial Nets""","['data poisoning', 'adversarial machine learning', 'generative adversarial nets']","""Machine learning algorithms are vulnerable to poisoning attacks: An adversary can inject malicious points in the training dataset to influence the learning process and degrade the algorithm's performance. Optimal poisoning attacks have already been proposed to evaluate worst-case scenarios, modelling attacks as a bi-level optimization problem. Solving these problems is computationally demanding and has limited applicability for some models such as deep networks. In this paper we introduce a novel generative model to craft systematic poisoning attacks against machine learning classifiers by generating adversarial training examples, i.e., samples that look like genuine data points but that degrade the classifier's accuracy when used for training. We propose a Generative Adversarial Net with three components: generator, discriminator, and the target classifier. This approach allows us to naturally model the detectability constraints that can be expected in realistic attacks and to identify the regions of the underlying data distribution that can be more vulnerable to data poisoning. Our experimental evaluation shows the effectiveness of our attack in compromising machine learning classifiers, including deep networks.""","""This paper proposes a GAN-based approach to producing poisons for neural networks. While the approach is interesting and appreciated by the reviewers, it is a legitimate and recurring criticism that the method is only demonstrated on very toy problems (MNIST and Fashion MNIST). During the rebuttal stage, the authors added results on CIFAR, although the results on CIFAR were not convincing enough to change the reviewer scores; the SOTA in GANs is sufficient to generate realistic images of cars and trucks (even at the ImageNet scale), while the demonstrated images are sufficiently far from the natural image distribution on CIFAR-10 that it is not clear whether the method benefits from using a GAN.
It should be noted that a range of poisoning methods exist that can effectively target CIFAR, and SOTA methods (e.g., poison polytope attacks and backdoor attacks) can even target datasets like ImageNet and CelebA.""" 246,"""Scalable Neural Methods for Reasoning With a Symbolic Knowledge Base""","['question-answering', 'knowledge base completion', 'neuro-symbolic reasoning', 'multihop reasoning']","""We describe a novel way of representing a symbolic knowledge base (KB) called a sparse-matrix reified KB. This representation enables neural modules that are fully differentiable, faithful to the original semantics of the KB, expressive enough to model multi-hop inferences, and scalable enough to use with realistically large KBs. The sparse-matrix reified KB can be distributed across multiple GPUs, can scale to tens of millions of entities and facts, and is orders of magnitude faster than naive sparse-matrix implementations. The reified KB enables very simple end-to-end architectures to obtain competitive performance on several benchmarks representing two families of tasks: KB completion, and learning semantic parsers from denotations.""","""This paper proposes an approach to representing a symbolic knowledge base as a sparse matrix, which enables the use of differentiable neural modules for inference. This approach scales to large knowledge bases and is demonstrated on several tasks. Post-discussion and rebuttal, all three reviewers are in agreement that this is an interesting and useful paper. There was initially some concern about clarity and polish, but these have been resolved upon rebuttal and discussion. Therefore I recommend acceptance. """ 247,"""Towards understanding the true loss surface of deep neural networks using random matrix theory and iterative spectral methods""","['Random Matrix theory', 'deep learning', 'deep learning theory', 'hessian eigenvalues', 'true risk']","""The geometric properties of loss surfaces, such as the local flatness of a solution, are associated with generalization in deep learning. The Hessian is often used to understand these geometric properties. We investigate the differences between the eigenvalues of the neural network Hessian evaluated over the empirical dataset, the Empirical Hessian, and the eigenvalues of the Hessian under the data generating distribution, which we term the True Hessian. Under mild assumptions, we use random matrix theory to show that the True Hessian has eigenvalues of smaller absolute value than the Empirical Hessian. We support these results for different SGD schedules on both a 110-Layer ResNet and VGG-16. To perform these experiments we propose a framework for spectral visualization, based on GPU-accelerated stochastic Lanczos quadrature. This approach is an order of magnitude faster than state-of-the-art methods for spectral visualization, and can be generically used to investigate the spectral properties of matrices in deep learning.""","""The reviewers all appreciated the importance of the topic: understanding the local geometry of loss surfaces of large models is viewed as critical to understanding generalization and designing better optimization methods. However, reviewers also pointed out the strength of the assumptions and the limitations of the empirical study. Despite the claim that these assumptions are weaker than those made in prior work, this did not convince the reviewers that the conclusion could be applied to common loss landscapes.
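A minimal sketch of the Lanczos step underlying stochastic Lanczos quadrature, given only a matrix-vector product oracle (in practice, Hessian-vector products via autodiff); this is generic textbook Lanczos without reorthogonalization, not the authors' framework:

```python
# Minimal sketch: build a small tridiagonal matrix from matrix-vector products;
# its eigenvalues (Ritz values) approximate the extremes of the spectrum.
import numpy as np

rng = np.random.default_rng(0)

def lanczos(matvec, dim, m):
    alphas, betas = [], []
    v_prev = np.zeros(dim)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    beta = 0.0
    for _ in range(m):
        w = matvec(v) - beta * v_prev
        alpha = w @ v
        w -= alpha * v
        beta = np.linalg.norm(w)           # no reorthogonalization: sketch only
        alphas.append(alpha)
        betas.append(beta)
        v_prev, v = v, w / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(T)

H = rng.normal(size=(200, 200))
H = (H + H.T) / 2                          # symmetric stand-in for a Hessian
print(lanczos(lambda x: H @ x, dim=200, m=30)[[0, -1]])  # approx. extremes
print(np.linalg.eigvalsh(H)[[0, -1]])                    # exact, for comparison
```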
I encourage the authors to address the points made by the reviewers and submit an updated version to a later conference.""" 248,"""RNNs Incrementally Evolving on an Equilibrium Manifold: A Panacea for Vanishing and Exploding Gradients?""","['novel recurrent neural architectures', 'learning representations of outputs or states']","""Recurrent neural networks (RNNs) are particularly well-suited for modeling long-term dependencies in sequential data, but are notoriously hard to train because the error backpropagated in time either vanishes or explodes at an exponential rate. While a number of works attempt to mitigate this effect through gated recurrent units, skip-connections, parametric constraints and design choices, we propose a novel incremental RNN (iRNN), where hidden state vectors keep track of incremental changes, and as such approximate state-vector increments of Rosenblatt's (1962) continuous-time RNNs. iRNN exhibits identity gradients and is able to account for long-term dependencies (LTD). We show that our method is computationally efficient, overcoming the overheads of many existing methods that attempt to improve RNN training, while suffering no performance degradation. We demonstrate the utility of our approach with extensive experiments and show competitive performance against standard LSTMs on LTD and other non-LTD tasks. ""","""In this paper, the authors propose the incremental RNN, a novel recurrent neural network architecture that resolves the exploding/vanishing gradient problem. While the reviewers initially had various concerns, the paper has been substantially improved during the discussion period and all questions by the reviewers have been resolved. The main idea of the paper is elegant, the theoretical results interesting, and the empirical evaluation extensive. The reviewers and the AC recommend acceptance of this paper to ICLR-2020.""" 249,"""Meta-Learning Deep Energy-Based Memory Models""","['associative memory', 'energy-based memory', 'meta-learning', 'compressive memory']","""We study the problem of learning an associative memory model -- a system which is able to retrieve a remembered pattern based on its distorted or incomplete version. Attractor networks provide a sound model of associative memory: patterns are stored as attractors of the network dynamics and associative retrieval is performed by running the dynamics starting from a query pattern until it converges to an attractor. In such models the dynamics are often implemented as an optimization procedure that minimizes an energy function, such as in the classical Hopfield network. In general it is difficult to derive a writing rule for a given dynamics and energy that is both compressive and fast. Thus, most research in energy-based memory has been limited either to tractable energy models not expressive enough to handle complex high-dimensional objects such as natural images, or to models that do not offer fast writing. We present a novel meta-learning approach to energy-based memory models (EBMM) that allows one to use an arbitrary neural architecture as an energy model and quickly store patterns in its weights. We demonstrate experimentally that our EBMM approach can build compressed memories for synthetic and natural data, and is capable of associative retrieval that outperforms existing memory systems in terms of the reconstruction error and compression rate.""","""Four knowledgeable reviewers recommend acceptance.
Good job!""" 250,"""Individualised Dose-Response Estimation using Generative Adversarial Nets""","['individualised dose-response estimation', 'treatment effects', 'causal inference', 'generative adversarial networks']","""The problem of estimating treatment responses from observational data is by now a well-studied one. Less well studied, though, is the problem of treatment response estimation when the treatments are accompanied by a continuous dosage parameter. In this paper, we tackle this lesser studied problem by building on a modification of the generative adversarial networks (GANs) framework that has already demonstrated effectiveness in the former problem. Our model, DRGAN, is flexible, capable of handling multiple treatments each accompanied by a dosage parameter. The key idea is to use a significantly modified GAN model to generate entire dose-response curves for each sample in the training data which will then allow us to use standard supervised methods to learn an inference model capable of estimating these curves for a new sample. Our model consists of 3 blocks: (1) a generator, (2) a discriminator, (3) an inference block. In order to address the challenge presented by the introduction of dosages, we propose novel architectures for both our generator and discriminator. We model the generator as a multi-task deep neural network. In order to address the increased complexity of the treatment space (because of the addition of dosages), we develop a hierarchical discriminator consisting of several networks: (a) a treatment discriminator, (b) a dosage discriminator for each treatment. In the experiments section, we introduce a new semi-synthetic data simulation for use in the dose-response setting and demonstrate improvements over the existing benchmark models.""","""This paper addresses the problem of estimating treatment responses involving a continuous dosage parameter. The basic idea is to learn a GAN model capable of generating synthetic dose-response curves for each training sample, which then facilitates the supervised training of an inference model that estimates these curves for new cases. For this purpose, specialized architectures are also proposed for the GAN, which involves a multi-task generator network and a hierarchical discriminator network. Empirical results demonstrate improvement over existing methods. While there is always a chance that reviewers may underappreciate certain aspects of a submission, the fact that there was a unanimous decision to reject this work indicates that the contribution must be better marketed to the ML community. For example, after the rebuttal one reviewer remained unconvinced regarding explanations for why the proposed method is likely to learn the full potential outcome distribution. Among other things, another reviewer felt that both the proposed DRGAN model, and the GANITE framework upon which it is based, were not necessarily working as advertised in the present context.""" 251,"""GENERALIZATION GUARANTEES FOR NEURAL NETS VIA HARNESSING THE LOW-RANKNESS OF JACOBIAN""","['Theory of neural nets', 'low-rank structure of Jacobian', 'optimization and generalization theory']","""Modern neural network architectures often generalize well despite containing many more parameters than the size of the training dataset. This paper explores the generalization capabilities of neural networks trained via gradient descent. 
We develop a data-dependent optimization and generalization theory which leverages the low-rank structure of the Jacobian matrix associated with the network. Our results help demystify why training and generalization are easier on clean and structured datasets and harder on noisy and unstructured datasets, as well as how the network size affects the evolution of the train and test errors during training. Specifically, we use a control knob to split the Jacobian spectrum into ""information"" and ""nuisance"" spaces associated with the large and small singular values. We show that over the information space learning is fast and one can quickly train a model with zero training loss that can also generalize well. Over the nuisance space training is slower and early stopping can help with generalization at the expense of some bias. We also show that the overall generalization capability of the network is controlled by how well the labels are aligned with the information space. A key feature of our results is that even constant width neural nets can provably generalize for sufficiently nice datasets. We conduct various numerical experiments on deep networks that corroborate our theoretical findings and demonstrate that: (i) the Jacobian of typical neural networks exhibits low-rank structure with a few large singular values and many small ones, leading to a low-dimensional information space, (ii) over the information space learning is fast and most of the labels fall on this space, and (iii) label noise falls on the nuisance space and impedes optimization/generalization.""","""This submission investigates the properties of the Jacobian matrix in the deep learning setup. Specifically, it splits the spectrum of the matrix into information (large singular values) and nuisance (small singular values) spaces. The paper shows that over the information space learning is fast and achieves zero loss. It also shows that generalization relates to how well labels are aligned with the information space. While the submission certainly has encouraging analysis/results, reviewers find these contributions limited and it is not clear how some of the claims in the paper can be extended to more general settings. For example, while the authors claim that low-rank structure is suggested by theory, the support of this claim is limited to a case study on a mixture of Gaussians. In addition, the provided analysis only studies two-layer networks. As elaborated by R4, extending these arguments to more than two layers does not seem straightforward using the tools used in the submission. While all reviewers appreciated the authors' response, they were not convinced and maintained their original ratings. """ 252,"""Invariance vs Robustness of Neural Networks""","['Invariance', 'Adversarial', 'Robustness']","""Neural networks achieve human-level accuracy on many standard datasets used in image classification. The next step is to achieve better generalization to natural (or non-adversarial) perturbations as well as known pixel-wise adversarial perturbations of inputs. Previous work has studied generalization to natural geometric transformations (e.g., rotations) as invariance, and generalization to adversarial perturbations as robustness. In this paper, we examine the interplay between invariance and robustness.
We empirically study the following two cases: (a) change in adversarial robustness as we improve only the invariance using equivariant models and training augmentation, (b) change in invariance as we improve only the adversarial robustness using adversarial training. We observe that the rotation invariance of equivariant models (StdCNNs and GCNNs) improves with training augmentation using progressively larger rotations, but while doing so, their adversarial robustness does not improve, or worse, it can even drop significantly on datasets such as MNIST. As a plausible explanation for this phenomenon we observe that the average perturbation distance of the test points to the decision boundary decreases as the model learns larger and larger rotations. On the other hand, we take adversarially trained LeNet and ResNet models which have good $\ell_\infty$ adversarial robustness on MNIST and CIFAR-10, and observe that adversarially training them with progressively larger norms keeps their rotation invariance essentially unchanged. In fact, the difference between test accuracy on unrotated test data and on randomly rotated test data up to $\theta$, for all $\theta \in [0, 180]$, remains essentially unchanged after adversarial training. As a plausible explanation for the observed phenomenon we show empirically that the principal components of adversarial perturbations and perturbations given by small rotations are nearly orthogonal.""","""This paper examines the interplay between the related ideas of invariance and robustness in deep neural network models. Invariance is the notion that small perturbations to an input image (such as rotations or translations) should not change the classification of that image. Robustness is usually taken to be the idea that small perturbations to input images (e.g. noise, whether white or adversarial) should not significantly affect the model's performance. In the context of this paper, robustness is mostly considered in terms of adversarial perturbations that are imperceptible to humans and created to intentionally disrupt a model's accuracy. The results of this investigation suggest that these ideas are mostly unrelated: equivariant models (with architectures designed to encourage the learning of invariances) that are trained with data augmentation whereby input images are given random rotations do not seem to offer any additional adversarial robustness, and similarly using adversarial training to combat adversarial noise does not seem to confer any additional help for learning rotational invariance. (In some cases, training for one type of invariance even seems to make invariance to the other type of perturbation worse.) Unfortunately, the reviewers do not believe the technical results are of sufficient interest to warrant publication at this time. """ 253,"""Implicit Rugosity Regularization via Data Augmentation""","['deep networks', 'implicit regularization', 'Hessian', 'rugosity', 'curviness', 'complexity']","""Deep (neural) networks have been applied productively in a wide range of supervised and unsupervised learning tasks. Unlike classical machine learning algorithms, deep networks typically operate in the overparameterized regime, where the number of parameters is larger than the number of training data points. Consequently, understanding the generalization properties and the role of (explicit or implicit) regularization in these networks is of great importance.
In this work, we explore how the oft-used heuristic of data augmentation imposes an implicit regularization penalty on a novel measure of rugosity, or roughness, based on the tangent Hessian of the function fit to the training data.""","""This paper aims to study the effect of data augmentation on generalization performance. The authors put forth a measure of rugosity or ""roughness"" based on the tangent Hessian of the function, reminiscent of a classic result by Donoho et al. The authors show that this measure changes in tandem with how much data augmentation helps. The reviewers and I concur that the rugosity measure is interesting. However, as the reviewers mention, the main drawback of this paper is that this measure of rugosity, when made explicit, does not improve generalization. I agree with the authors that this measure is interesting in itself. However, I think in its current form the paper is not ready for prime time and recommend rejection. That said, I believe this paper has a lot of potential and recommend that the authors rewrite the paper and carry out more careful experiments for a future submission.""" 254,"""Localizing and Amortizing: Efficient Inference for Gaussian Processes""","['Gaussian Processes', 'Variational Inference', 'Amortized Inference', 'Nearest Neighbors']","""The inference of Gaussian Processes concerns the distribution of the underlying function given observed data points. GP inference based on local ranges of data points is able to capture fine-scale correlations and allows fine-grained decomposition of the computation. Following this direction, we propose a new inference model that considers the correlations and observations of the K nearest neighbors for the inference at a data point. Compared with previous works, we also eliminate the data ordering prerequisite to simplify the inference process. Additionally, the inference task is decomposed into small subtasks via several technical innovations, making our model well suited to stochastic optimization. Since the decomposed small subtasks have the same structure, we further speed up the inference procedure with amortized inference. Our model runs efficiently and achieves good performance on several benchmark tasks.""","""This paper presents a method for speeding up Gaussian process inference by leveraging locality information through k-nearest neighbours. The key idea is intuitively well motivated; however, the way in which it is implemented seems to introduce new complications. One such issue is KNN overhead in high dimensions, but R1 outlines other potential issues too. Moreover, the method's merit is not demonstrated in a convincing way through the experiments. The authors have provided a rebuttal for those issues, but it does not seem to resolve the concerns entirely. """ 255,"""A critical analysis of self-supervision, or what we can learn from a single image""","['self-supervision', 'feature representation learning', 'CNN']","""We look critically at popular self-supervision techniques for learning deep convolutional neural networks without manual labels. We show that three different and representative methods, BiGAN, RotNet and DeepCluster, can learn the first few layers of a convolutional network from a single image as well as using millions of images and manual labels, provided that strong data augmentation is used. However, for deeper layers the gap with manual supervision cannot be closed even if millions of unlabelled images are used for training.
We conclude that (1) the weights of the early layers of deep networks contain limited information about the statistics of natural images, (2) such low-level statistics can be learned through self-supervision just as well as through strong supervision, and (3) the low-level statistics can be captured via synthetic transformations instead of using a large image dataset.""","""This paper studies the effectiveness of self-supervised approaches by characterising how much information they can extract from a given dataset of images on a per-layer basis. Based on an empirical evaluation of RotNet, BiGAN, and DeepCluster, the authors argue that the early layers of CNNs can be effectively learned from a single image coupled with strong data augmentation. Secondly, the authors also provide some empirical evidence that supervision might still be necessary to learn the deeper layers (even in the presence of millions of images for self-supervision). Overall, the reviewers agree that the paper is well written and timely given the growing popularity of self-supervised methods. Given that most of the issues raised by the reviewers were adequately addressed in the rebuttal, I recommend acceptance. We ask the authors to include additional experiments requested by the reviewers (they are valuable even if the conclusions are not perfectly aligned with the main message). """ 256,"""Improved memory in recurrent neural networks with sequential non-normal dynamics""","['recurrent neural networks', 'memory', 'non-normal dynamics']","""Training recurrent neural networks (RNNs) is a hard problem due to degeneracies in the optimization landscape, a problem also known as vanishing/exploding gradients. Short of designing new RNN architectures, previous methods for dealing with this problem usually boil down to orthogonalization of the recurrent dynamics, either at initialization or during the entire training period. The basic motivation behind these methods is that orthogonal transformations are isometries of the Euclidean space, hence they preserve (Euclidean) norms and effectively deal with vanishing/exploding gradients. However, this ignores the crucial effects of non-linearity and noise. In the presence of a non-linearity, orthogonal transformations no longer preserve norms, suggesting that alternative transformations might be better suited to non-linear networks. Moreover, in the presence of noise, norm preservation itself ceases to be the ideal objective. A more sensible objective is maximizing the signal-to-noise ratio (SNR) of the propagated signal instead. Previous work has shown that in the linear case, recurrent networks that maximize the SNR display strongly non-normal, sequential dynamics and orthogonal networks are highly suboptimal by this measure. Motivated by this finding, here we investigate the potential of non-normal RNNs, i.e. RNNs with a non-normal recurrent connectivity matrix, in sequential processing tasks. Our experimental results show that non-normal RNNs outperform their orthogonal counterparts in a diverse range of benchmarks. We also find evidence for increased non-normality and hidden chain-like feedforward motifs in trained RNNs initialized with orthogonal recurrent connectivity matrices. ""","""This paper proposes to explore non-normal matrix initialization in RNNs. Two reviewers recommended acceptance and one recommended rejection.
The reviewers recommending acceptance highlighted the utility of the approach, its potential to inspire future work, and the clarity and quality of the writing and accompanying experiments. One reviewer recommending weak acceptance expressed appreciation for the quality of the rebuttal and noted that their concerns were largely addressed. The reviewer recommending rejection was primarily concerned with the novelty of the method. Their review suggested the inclusion of an additional citation, which was included in a revised version for the rebuttal, but without a direct comparison of results. On balance, the paper has a relatively high degree of support from the reviewers, and presents an interesting and potentially useful initialization in a clear and well-motivated way.""" 257,"""Boosting Network: Learn by Growing Filters and Layers via SplitLBI""",[],"""Network structures are important for learning good representations in many tasks in the computer vision and machine learning communities. These structures are either manually designed or searched by Neural Architecture Search (NAS) in previous works, which, however, requires either expert-level effort or prohibitive computational cost. In practice, it is desirable to efficiently and simultaneously learn both the structures and parameters of a network from arbitrary classes with a budgeted computational cost. We identify this as a new learning paradigm -- Boosting Network, where one starts from simple models and progressively grows them into complex trained models. In this paper, by virtue of an iterative sparse regularization path -- Split Linearized Bregman Iteration (SplitLBI) -- we propose a simple yet effective boosting network method that can simultaneously grow and train a network by progressively adding both convolutional filters and layers. Extensive experiments with VGG and ResNets validate the effectiveness of our proposed algorithms.""","""This paper considers how to learn the structure of a deep network by beginning with a simple network and then progressively adding layers and filters as needed. The paper received three reviews by experts working in this area. R1 recommends Weak Reject due to concerns about novelty, degree of contribution, clarity of technical exposition, and experiments. R2 recommends Weak Accept and has some specific suggestions and questions. R3 recommends Weak Reject, also citing concerns with experiments and writing. The authors submitted a response that addressed many of these comments, but R1 and R3 continue to have concerns about contribution and the experiments, while R2 maintains their Weak Accept rating. Given the split decision, the AC also read the paper. While we believe the paper has significant merit, we agree with R1 and R3 on the need for additional experimentation, and believe another round of peer review would help clarify the writing and contribution. We hope the reviewer comments will help the authors prepare a revision for a future venue.""" 258,"""Improving Federated Learning Personalization via Model Agnostic Meta Learning""","['Federated Learning', 'Model Agnostic Meta Learning', 'Personalization']","""Federated Learning (FL) refers to learning a high-quality global model based on decentralized data storage, without ever copying the raw data. A natural scenario arises with data created on mobile phones by the activity of their users. Given the typical data heterogeneity in such situations, it is natural to ask how the global model can be personalized for every such device individually.
In this work, we point out that the setting of Model Agnostic Meta Learning (MAML), where one optimizes for a fast, gradient-based, few-shot adaptation to a heterogeneous distribution of tasks, has a number of similarities with the objective of personalization for FL. We present FL as a natural source of practical applications for MAML algorithms, and make the following observations. 1) The popular FL algorithm, Federated Averaging, can be interpreted as a meta learning algorithm. 2) Careful fine-tuning can yield a global model with higher accuracy, which is at the same time easier to personalize. However, solely optimizing for the global model accuracy yields a weaker personalization result. 3) A model trained using a standard datacenter optimization method is much harder to personalize, compared to one trained using Federated Averaging, supporting the first claim. These results raise new questions for FL, MAML, and broader ML research.""","""The reviewers have reached consensus that while the paper is interesting, it could use more time. We urge the authors to continue their investigations.""" 259,"""Contrastive Representation Distillation""","['Knowledge Distillation', 'Representation Learning', 'Contrastive Learning', 'Mutual Information']",""" Often we wish to transfer representational knowledge from one neural network to another. Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator. Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network. We demonstrate that this objective ignores important structural knowledge of the teacher network. This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data. We formulate this objective as contrastive learning. Experiments demonstrate that our resulting new objective outperforms knowledge distillation on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer. When combined with knowledge distillation, our method sets a state of the art in many transfer tasks, sometimes even outperforming the teacher network.""","""This paper presents a new distillation method with theoretical and empirical support. Given the reviewers' comments and the AC's reading, the novelty/significance and application scope of the paper are arguably limited. However, the authors extensively verified the proposed method and compared it with existing ones, showing significant improvements in comprehensive experiments. As distillation methods enjoy broad usage, I think the method proposed in this paper can be influential in future work. Hence, I consider this a borderline paper leaning toward acceptance.""" 260,"""Weakly-Supervised Trajectory Segmentation for Learning Reusable Skills""","['skills', 'demonstration', 'agent', 'sub-task', 'primitives', 'robot learning', 'manipulation']","""Learning useful and reusable skills, or sub-task primitives, is a long-standing problem in sensorimotor control. This is challenging because it is hard to define what constitutes a useful skill.
Instead of direct manual supervision, which is tedious and prone to bias, our goal in this work is to extract reusable skills from a collection of human demonstrations collected directly for several end-tasks. We propose a weakly-supervised approach for trajectory segmentation following the classic work on multiple instance learning. Our approach is end-to-end trainable, works directly from high-dimensional input (e.g., images), and only requires knowledge of which skill primitives are present at training time, without any need for segmentation or ordering of primitives. We evaluate our approach via rigorous experimentation across four environments ranging from simulation to real-world robots, from procedurally generated to human-collected demonstrations, and from discrete to continuous action spaces. Finally, we leverage the generated skill segmentation to demonstrate preliminary evidence of zero-shot transfer to new combinations of skills. Result videos at pseudo-url""","""The authors present a multiple instance learning-based approach that uses weak supervision (of which skills appear in any given trajectory) to automatically segment a set of skills from demonstrations. The reviewers had serious concerns about the significance and performance of the method, as well as the metrics used for analysis. Most notably, neither the original paper nor the rebuttal provided a sufficient justification or fix for the lack of analysis beyond accuracy scores (as opposed to confusion matrices, precision/recall, etc.), which leaves the contribution and claims of the paper unclear. Thus, I recommend rejection at this time.""" 261,"""Language-independent Cross-lingual Contextual Representations""","['contextual representation', 'cross-lingual', 'transfer learning']","""Contextual representation models like BERT have achieved state-of-the-art performance on a diverse range of NLP tasks. We propose a cross-lingual contextual representation model that generates language-independent contextual representations. This helps to enable zero-shot cross-lingual transfer of a wide range of NLP models, on top of contextual representation models like BERT. We provide a formulation of language-independent cross-lingual contextual representation based on mono-lingual representations. Our formulation takes three steps to align sequences of vectors: transform, extract, and reorder. We present a detailed discussion of the process of learning cross-lingual contextual representations, as well as of the performance in cross-lingual transfer learning and its implications.""","""The paper proposes a method to learn cross-lingual representations by aligning monolingual models with the help of a parallel corpus using a three-step process: transform, extract, and reorder. Experiments on XNLI show that the proposed method is able to perform zero-shot cross-lingual transfer, although its overall performance is still below that of the state-of-the-art jointly trained method XLM. All three reviewers suggested that the proposed method needs to be evaluated more thoroughly (more datasets and languages). R2 and R4 raise some concerns around the complexity of the proposed method (it could possibly be simplified further). R3 suggests a more thorough investigation of why the model saturates at 250,000 parallel sentences, among other things. The authors acknowledged the reviewers' concerns in their response and will incorporate them in future work.
I recommend rejecting this paper for ICLR.""" 262,"""Improved Training Techniques for Online Neural Machine Translation""","['Deep learning', 'natural language processing', 'Machine translation']","""Neural sequence-to-sequence models form the basis of state-of-the-art solutions for sequential prediction problems such as machine translation and speech recognition. The models typically assume that the entire input is available when starting target generation. In some applications, however, it is desirable to start the decoding process before the entire input is available, e.g. to reduce the latency in automatic speech recognition. We consider state-of-the-art wait-k decoders, which first read k tokens from the source and then alternate between reading tokens from the input and writing to the output. We investigate the sensitivity of such models to the value of k that is used during training and when deploying the model, and the effect of updating the hidden states in transformer models as new source tokens are read. We experiment with German-English translation on the IWSLT14 dataset and the larger WMT15 dataset. Our results significantly improve over earlier state-of-the-art results for German-English translation on the WMT15 dataset across different latency levels.""","""The paper proposes a method of training latency-limited (wait-k) decoders for online machine translation. The authors investigate the impact of the value of k, and of recalculating the transformer's decoder hidden states when a new source token arrives. They significantly improve over state-of-the-art results for German-English translation on the WMT15 dataset; however, there is limited novelty with respect to previous approaches. The authors responded in depth to reviews and updated the paper with improvements, for which there was no reviewer response. The paper presents interesting results, but in my opinion the approach is not novel enough to justify acceptance at ICLR.
Our mechanisms improve over the state-of-the-art methods on all measures, and scale to larger tasks with both higher utility and stronger privacy ($\epsilon \approx 0$).""","""This paper presents a differentially private mechanism, called Noisy ArgMax, for privately aggregating predictions from several teacher models. There is a consensus in the discussion that the technique of adding a large constant to the largest vote breaks differential privacy. Given this technical flaw, the paper cannot be accepted.""" 264,"""On Variational Learning of Controllable Representations for Text without Supervision""","['sequence variational autoencoders', 'unsupervised learning', 'controllable text generation', 'text style transfer']","""The variational autoencoder (VAE) has found success in modelling the manifold of natural images on certain datasets, allowing meaningful images to be generated while interpolating or extrapolating in the latent code space, but it is unclear whether similar capabilities are feasible for text considering its discrete nature. In this work, we investigate the reason why unsupervised learning of controllable representations fails for text. We find that traditional sequence VAEs can learn disentangled representations through their latent codes to some extent, but they often fail to properly decode when the latent factor is being manipulated, because the manipulated codes often land in holes or vacant regions in the aggregated posterior latent space, which the decoding network is not trained to process. Both as a validation of the explanation and as a fix to the problem, we propose to constrain the posterior mean to a learned probability simplex, and to perform manipulation within this simplex. Our proposed method mitigates the latent vacancy problem and achieves the first success in unsupervised learning of controllable representations for text. Empirically, our method significantly outperforms unsupervised baselines and is competitive with strong supervised approaches on text style transfer. Furthermore, when switching the latent factor (e.g., topic) during long sentence generation, our proposed framework can often complete the sentence in a seemingly natural way -- a capability that has never been attempted by previous methods. ""","""This paper analyzes the behavior of VAE for learning controllable text representations and uses this insight to introduce a method to constrain the posterior space by introducing a regularization term and a structured reconstruction term to the standard VAE loss. Experiments show the proposed method improves over unsupervised baselines, although it still underperforms supervised approaches in text style transfer. The paper had some issues with presentation, as pointed out by R1 and R3. In addition, it missed citations to much prior work. Some of these issues had been addressed after the rebuttal, but I still think it needs to be more self-contained (e.g., include details of evaluation protocols in the appendix, instead of citing another paper). In an internal discussion, R1 still has some concerns regarding whether the negative log likelihood is less affected by manipulations in the constrained space compared to beta-VAE. In particular, the concern is about whether the magnitude of the manipulation is comparable across models, which is also shared by R3. R1 also thinks some of the generated samples are not very convincing. This is a borderline paper with some interesting insights that tackles an important problem.
However, due to its shortcomings in its current state, I recommend rejecting the paper.""" 265,"""HOPPITY: LEARNING GRAPH TRANSFORMATIONS TO DETECT AND FIX BUGS IN PROGRAMS""","['Bug Detection', 'Program Repair', 'Graph Neural Network', 'Graph Transformation']","""We present a learning-based approach to detect and fix a broad range of bugs in JavaScript programs. We frame the problem in terms of learning a sequence of graph transformations: given a buggy program modeled by a graph structure, our model makes a sequence of predictions including the position of bug nodes and corresponding graph edits to produce a fix. Unlike previous works that use deep neural networks, our approach targets bugs that are more complex and semantic in nature (i.e.~bugs that require adding or deleting statements to fix). We have realized our approach in a tool called HOPPITY. By training on 290,715 JavaScript code change commits on GitHub, HOPPITY correctly detects and fixes bugs in 9,490 out of 36,361 programs in an end-to-end fashion. Given the bug location and type of the fix, HOPPITY also outperforms the baseline approach by a wide margin.""","""This paper presents a learning-based approach to detect and fix bugs in JavaScript programs. By modeling bug detection and fixing as a sequence of graph transformations, the proposed method achieved promising experimental results on a large JavaScript dataset crawled from GitHub. All the reviewers agree to accept the paper for its reasonable and interesting approach to solving bug detection and repair. The main concerns are about the experimental design, which has been addressed by the authors in the revision. Based on the novelty and solid experiments of the proposed method, I agree to accept the paper, as do the other reviewers. """ 266,"""Deep Coordination Graphs""","['multi-agent reinforcement learning', 'coordination graph', 'deep Q-learning', 'value factorization', 'relative overgeneralization']","""This paper introduces the deep coordination graph (DCG) for collaborative multi-agent reinforcement learning. DCG strikes a flexible trade-off between representational capacity and generalization by factorizing the joint value function of all agents according to a coordination graph into payoffs between pairs of agents. The value can be maximized by local message passing along the graph, which allows training of the value function end-to-end with Q-learning. Payoff functions are approximated with deep neural networks and parameter sharing improves generalization over the state-action space. We show that DCG can solve challenging predator-prey tasks that are vulnerable to the relative overgeneralization pathology and in which all other known value factorization approaches fail.""","""This work extends previous work (Castellini et al.) with parameter sharing and low-rank approximations, for pairwise communication between agents. However, the work as presented here is still considered too incremental, in particular when compared to Castellini et al. The advances, such as parameter sharing and low-rank approximation, are good but not enough of a contribution. The authors' efforts to address this concern did not change the reviewers' judgment. Therefore, we recommend rejection.""" 267,"""Active Learning Graph Neural Networks via Node Feature Propagation""","['Graph Learning', 'Active Learning']","""Graph Neural Networks (GNNs) for prediction tasks like node classification or edge prediction have received increasing attention in recent machine learning on graph-structured data.
However, a large quantity of labeled graphs is difficult to obtain, which significantly limits the true success of GNNs. Although active learning has been widely studied for addressing label-sparse issues with other data types like text, images, etc., how to make it effective over graphs is an open question for research. In this paper, we present an investigation of active learning with GNNs for node classification tasks. Specifically, we propose a new method, which uses node feature propagation followed by K-Medoids clustering of the nodes for instance selection in active learning. With a theoretical bound analysis, we justify the design choice of our approach. In our experiments on four benchmark datasets, the proposed method outperforms other representative baseline methods consistently and significantly.""","""The authors propose a method of selecting nodes to label in a graph neural network setting to reduce the loss as efficiently as possible. Building atop Sener & Savarese (2017), the authors propose an alternative distance metric and clustering algorithm. In comparison to the aforementioned work, they show that their upper bound is smaller than the previous art's upper bound. While one cannot conclude from this that their algorithm is better, at least empirically the method appears to have an advantage over the state of the art. However, reviewers were concerned about the assumptions necessary to prove the theorem, despite the modifications made by the authors after the initial round. The work proposes a simple estimator and shows promising results, but reviewers felt that improvements such as reducing the number of assumptions and potentially adding a lower bound could greatly strengthen the paper.""" 268,"""Towards Interpretable Evaluations: A Case Study of Named Entity Recognition""","['interpretable evaluation', 'dataset biases', 'model biases', 'NER']",""" With the proliferation of models for natural language processing (NLP) tasks, it becomes even harder to understand the differences between models and their relative merits. Simply looking at differences between holistic metrics such as accuracy, BLEU, or F1 does not tell us \emph{why} or \emph{how} a particular method is better and how dataset biases influence the choices of model design. In this paper, we present a general methodology for \emph{interpretable} evaluation of NLP systems and choose the task of named entity recognition (NER) as a case study, which is a core task of identifying people, places, or organizations in text. The proposed evaluation method enables us to interpret the \textit{model biases}, \textit{dataset biases}, and how the \emph{differences in the datasets} affect the design of the models, identifying the strengths and weaknesses of current approaches. By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive the progress in this area.""","""The authors diligently set up and conducted multiple experiments to validate their approach -- bucketizing attributes of the data and analyzing them accordingly to discover deeper insights, e.g., biases. However, reviewers pointed out that such bucketing is tailored to tasks where attributes are easily observed, such as the focus of this paper, NER. While the manuscript proposes this approach as general, reviewers failed to see this point. Another reviewer recommended that this manuscript be submitted to a journal rather than a conference, due to the length of the appendix (17 pages).
There was also some confusion around the writing, as pointed out by some reviewers. We highly recommend that the authors carefully reflect on the reviewers' comments, both the pros and cons of the paper, to improve it for a future submission. """ 269,"""Counterfactuals uncover the modular structure of deep generative models""","['generative models', 'causality', 'counterfactuals', 'representation learning', 'disentanglement', 'generalization', 'unsupervised learning']","""Deep generative models can emulate the perceptual properties of complex image datasets, providing a latent representation of the data. However, manipulating such a representation to perform meaningful and controllable transformations in the data space remains challenging without some form of supervision. While previous work has focused on exploiting statistical independence to \textit{disentangle} latent factors, we argue that this requirement can be advantageously relaxed and propose instead a non-statistical framework that relies on identifying a modular organization of the network, based on counterfactual manipulations. Our experiments show that modularity between groups of channels is achieved to a certain degree in a variety of generative models. This allowed the design of targeted interventions on complex image datasets, opening the way to applications such as computationally efficient style transfer and the automated assessment of robustness to contextual changes in pattern recognition systems.""","""This paper provides a fresh application of tools from causality theory to investigate modularity and disentanglement in learned deep generative models. It also goes one step further towards making these models more transparent by studying their internal components. While there is still margin for improving the experiments, I believe this paper is a timely contribution to the ICLR/ML community. This paper has high variance in the reviewer scores. But I believe the authors did a good job with the revision and rebuttal. I recommend acceptance.""" 270,"""Overlearning Reveals Sensitive Attributes""","['privacy', 'censoring representation', 'transfer learning']","""``Overlearning'' means that a model trained for a seemingly simple objective implicitly learns to recognize attributes and concepts that are (1) not part of the learning objective, and (2) sensitive from a privacy or bias perspective. For example, a binary gender classifier of facial images also learns to recognize races, even races that are not represented in the training data, and identities. We demonstrate overlearning in several vision and NLP models and analyze its harmful consequences. First, inference-time representations of an overlearned model reveal sensitive attributes of the input, breaking privacy protections such as model partitioning. Second, an overlearned model can be ``re-purposed'' for a different, privacy-violating task even in the absence of the original training data. We show that overlearning is intrinsic for some tasks and cannot be prevented by censoring unwanted attributes. Finally, we investigate where, when, and why overlearning happens during model training.""","""This paper introduces the problem of overlearning, which can be thought of as unintended transfer learning from a (victim) source model to a target task that the source model's creator had not intended its model to be used for. The paper raises good points about privacy legislation limitations due to the fact that overlearning makes it impossible to foresee future uses of a given dataset.
Please incorporate the revisions suggested in the reviews to add clarity to the overlearning versus censoring confusion addressed by the reviewers.""" 271,"""Unsupervised Learning of Efficient and Robust Speech Representations""",[],"""We present an unsupervised method for learning speech representations based on bidirectional contrastive predictive coding that implicitly discovers phonetic structure from large-scale corpora of unlabelled raw audio signals. The representations, which we learn from up to 8000 hours of publicly accessible speech data, are evaluated by looking at their impact on the behaviour of supervised speech recognition systems. First, across a variety of datasets, we find that the features learned from the largest and most diverse pretraining dataset result in significant improvements over standard audio features as well as over features learned from smaller amounts of pretraining data. Second, they significantly improve sample efficiency in low-data scenarios. Finally, the features confer significant robustness advantages to the resulting recognition systems: we see significant improvements in out-of-domain transfer relative to baseline feature sets, and the features likewise provide improvements in four different low-resource African language datasets.""","""The paper focuses on learning speech representations with contrastive predictive coding (CPC). As noted by the reviewers, (i) the novelty is too low for ICLR (mostly making the model bidirectional), and (ii) comparison with existing work is missing.""" 272,"""BOOSTING ENCODER-DECODER CNN FOR INVERSE PROBLEMS""","['Prediction error', 'Boosting', 'Encoder-decoder convolutional neural network', 'Inverse problem']","""Encoder-decoder convolutional neural networks (CNN) have been extensively used for various inverse problems. However, their prediction error for unseen test data is difficult to estimate a priori, since the neural networks are trained using only selected data and their architectures are largely considered black boxes. This poses a fundamental challenge in improving the performance of neural networks. Recently, it was shown that Stein's unbiased risk estimator (SURE) can be used as an unbiased estimator of the prediction error for denoising problems. However, the computation of the divergence term in SURE is difficult to implement in a neural network framework, and the condition to avoid trivial identity mapping is not well defined. In this paper, inspired by the finding that an encoder-decoder CNN can be expressed as a piecewise linear representation, we provide a closed-form expression of the unbiased estimator for the prediction error. The closed-form representation leads to a novel boosting scheme to prevent a neural network from converging to an identity mapping so that it can enhance performance. Experimental results show that the proposed algorithm provides consistent improvement in various inverse problems.""","""This paper introduces a closed-form expression for Stein's unbiased risk estimator for the prediction error, and a boosting approach based on this, with empirical evaluation. While this paper is interesting, all reviewers seem to agree that more work is required before this paper can be published at ICLR.
""" 273,"""Bounds on Over-Parameterization for Guaranteed Existence of Descent Paths in Shallow ReLU Networks""","['Spurious local minima', 'Loss landscape', 'Over-parameterization', 'Theory of deep learning', 'Optimization', 'Descent path']","""We study the landscape of squared loss in neural networks with one-hidden layer and ReLU activation functions. Let pseudo-formula and pseudo-formula be the widths of hidden and input layers, respectively. We show that there exist poor local minima with positive curvature for some training sets of size m+2d-2 By positive curvature of a local minimum, we mean that within a small neighborhood the loss function is strictly increasing in all directions. Consequently, for such training sets, there are initialization of weights from which there is no descent path to global optima. It is known that for m$, there always exist descent paths to global optima from all initial weights. In this perspective, our results provide a somewhat sharp characterization of the over-parameterization required for ""existence of descent paths"" in the loss landscape. ""","""This article investigates the optimization landscape of shallow ReLU networks, showing that for sufficiently narrow networks there are data sets for which there is no descent paths to the global minimiser. The topic and the nature of the results is very interesting. The reviewers found that this article makes important contributions in a relevant line of investigation and had generally positive ratings. The authors' responses addressed questions from the initial reviews, and the discussion helped identifying questions for future study departing from the present contribution. """ 274,"""Meta-Learning with Warped Gradient Descent""","['meta-learning', 'transfer learning']","""Learning an efficient update rule from data that promotes rapid learning of new tasks from the same distribution remains an open problem in meta-learning. Typically, previous works have approached this issue either by attempting to train a neural network that directly produces updates or by attempting to learn better initialisations or scaling factors for a gradient-based update rule. Both of these approaches pose challenges. On one hand, directly producing an update forgoes a useful inductive bias and can easily lead to non-converging behaviour. On the other hand, approaches that try to control a gradient-based update rule typically resort to computing gradients through the learning process to obtain their meta-gradients, leading to methods that can not scale beyond few-shot task adaptation. In this work, we propose Warped Gradient Descent (WarpGrad), a method that intersects these approaches to mitigate their limitations. WarpGrad meta-learns an efficiently parameterised preconditioning matrix that facilitates gradient descent across the task distribution. Preconditioning arises by interleaving non-linear layers, referred to as warp-layers, between the layers of a task-learner. Warp-layers are meta-learned without backpropagating through the task training process in a manner similar to methods that learn to directly produce updates. WarpGrad is computationally efficient, easy to implement, and can scale to arbitrarily large meta-learning problems. 
We provide a geometrical interpretation of the approach and evaluate its effectiveness in a variety of settings, including few-shot, standard supervised, continual and reinforcement learning.""","""A strong paper reporting improved approaches to meta-learning.""" 275,"""Multi-scale Attributed Node Embedding""","['network embedding', 'graph embedding', 'node embedding', 'network science', 'graph representation learning']","""We present network embedding algorithms that capture information about a node from the local distribution over node attributes around it, as observed over random walks following an approach similar to Skip-gram. Observations from neighborhoods of different sizes are either pooled (AE) or encoded distinctly in a multi-scale approach (MUSAE). Capturing attribute-neighborhood relationships over multiple scales is useful for a diverse range of applications, including latent feature identification across disconnected networks with similar attributes. We prove theoretically that matrices of node-feature pointwise mutual information are implicitly factorized by the embeddings. Experiments show that our algorithms are robust, computationally efficient and outperform comparable models on social, web and citation network datasets.""","""This paper constitutes interesting progress on an important topic; the reviewers identify certain improvements and directions for future work, and I urge the authors to continue to develop refinements and extensions.""" 276,"""Neural Non-additive Utility Aggregation""",[],"""Neural architectures for set regression problems aim at learning representations such that good predictions can be made based on the learned representations. This strategy, however, ignores the fact that meaningful intermediate results might be helpful to perform well. We study two new architectures that explicitly model latent intermediate utilities and use non-additive utility aggregation to estimate the set utility based on the latent utilities. We evaluate the new architectures with visual and textual datasets, which have non-additive set utilities due to redundancy and synergy effects. We find that the new architectures perform substantially better in this setup.""","""This paper presents two new architectures that model latent intermediate utilities and use non-additive utility aggregation to estimate the set utility based on the computed latent utilities. These two extensions are easy to understand and appear to be simple extensions of existing RNN architectures, so they can be implemented easily. However, the connection to the Choquet integral is not clear and no theory has been provided to make that connection. Hence, it is hard for the reader to understand why the integral is useful here. The reviewers have also raised objections about the evaluation, which does not seem to be fair to existing methods. These comments can be incorporated to make the paper more accessible and the results more convincing. """ 277,"""Effective and Robust Detection of Adversarial Examples via Benford-Fourier Coefficients""",[],"""Adversarial examples have been well known as a serious threat to deep neural networks (DNNs). To ensure successful and safe operation of DNNs on real-world tasks, it is urgent to equip DNNs with effective defense strategies.
In this work, we study the detection of adversarial examples, based on the assumption that the output and internal responses of one DNN model for both adversarial and benign examples follow the generalized Gaussian distribution (GGD), but with different parameters (i.e., shape factor, mean, and variance). GGD is a general distribution family that covers many popular distributions (e.g., Laplacian, Gaussian, or uniform). It is more likely to approximate the intrinsic distributions of internal responses than any specific distribution. Moreover, since the shape factor is more robust across different databases than the other two parameters, we propose to construct discriminative features via the shape factor for adversarial detection, employing the magnitude of Benford-Fourier coefficients (MBF), which can be easily estimated from the responses. Finally, a support vector machine is trained as the adversarial detector by leveraging the MBF features. Through the Kolmogorov-Smirnov (KS) test, we empirically verify that: 1) the posterior vectors of both adversarial and benign examples follow GGD; 2) the extracted MBF features of adversarial and benign examples follow different distributions. Extensive experiments on image classification demonstrate that the proposed detector is much more effective and robust at detecting adversarial examples from different crafting methods and different sources, in contrast to state-of-the-art adversarial detection methods.""","""This paper presents a new metric for adversarial example detection. The reviewers find the idea interesting, but some parts have not been clearly explained, and there are questions about the reproducibility of the experiments. """ 278,"""V-MPO: On-Policy Maximum a Posteriori Policy Optimization for Discrete and Continuous Control""","['reinforcement learning', 'policy iteration', 'multi-task learning', 'continuous control']","""Some of the most successful applications of deep reinforcement learning to challenging domains in discrete and continuous control have used policy gradient methods in the on-policy setting. However, policy gradients can suffer from large variance that may limit performance, and in practice require carefully tuned entropy regularization to prevent policy collapse. As an alternative to policy gradient algorithms, we introduce V-MPO, an on-policy adaptation of Maximum a Posteriori Policy Optimization (MPO) that performs policy iteration based on a learned state-value function. We show that V-MPO surpasses previously reported scores for both the Atari-57 and DMLab-30 benchmark suites in the multi-task setting, and does so reliably without importance weighting, entropy regularization, or population-based tuning of hyperparameters. On individual DMLab and Atari levels, the proposed algorithm can achieve scores that are substantially higher than previously reported. V-MPO is also applicable to problems with high-dimensional, continuous action spaces, which we demonstrate in the context of learning to control simulated humanoids with 22 degrees of freedom from full state observations and 56 degrees of freedom from pixel observations, as well as example OpenAI Gym tasks where V-MPO achieves substantially higher asymptotic scores than previously reported.""","""This paper proposes an extension of MPO for on-policy reinforcement learning. The proposed method achieved promising results in a relatively hyper-parameter insensitive manner.
One concern of the reviewers is the lack of comparison with previous works, such as the original MPO, which was partially addressed by the authors in the rebuttal. In addition, Blind Reviewer #3 has some concerns with the fairness of the experimental comparison, though the other reviewers accept the comparison on standardized benchmarks. Overall, the paper proposes a promising extension of MPO; thus, I recommend it for acceptance. """ 279,"""Empirical Studies on the Properties of Linear Regions in Deep Neural Networks""","['deep learning', 'linear region', 'optimization']","""A deep neural network (DNN) with piecewise linear activations can partition the input space into numerous small linear regions, where different linear functions are fitted. It is believed that the number of these regions represents the expressivity of a DNN. This paper provides a novel and meticulous perspective to look into DNNs: instead of just counting the number of linear regions, we study their local properties, such as the inspheres, the directions of the corresponding hyperplanes, the decision boundaries, and the relevance of the surrounding regions. We empirically observed that different optimization techniques lead to completely different linear regions, even though they result in similar classification accuracies. We hope our study can inspire the design of novel optimization techniques, and help discover and analyze the behaviors of DNNs.""","""This paper studies the properties of regions where a DNN with piecewise linear activations behaves linearly. The authors develop a variety of techniques to characterize these properties and show how they correlate with various parameters of the network architecture and training method. The reviewers were in consensus on the quality of the paper: the paper is well written and contains a number of insights that would be of broad interest to the deep learning community. I therefore recommend acceptance.""" 280,"""Unified recurrent network for many feature types""","['sparse', 'recurrent', 'asynchronous', 'time', 'series']","""There are time series that are amenable to recurrent neural network (RNN) solutions when treated as sequences, but some series, e.g. asynchronous time series, provide a richer variation of feature types than current RNN cells take into account. In order to address such situations, we introduce a unified RNN that handles five different feature types, each in a different manner. Our RNN framework separates sequential features into two groups depending on their frequency, which we call sparse and dense features, and which affect cell updates differently. Further, we also incorporate time features at the sequential level that relate to the time between specified events in the sequence and are used to modify the cell's memory state. We also include two types of static (whole sequence level) features, one related to time and one not, which are combined with the encoder output.
The experiments show that the proposed modeling framework does increase performance compared to standard cells.""","""Main summary: a sparse-time LSTM. Discussion: Reviewer 4 found the technical description of the proposed method insufficient; Reviewers 2 and 3 noted that the same paper was sent to ICLR 2019 and rejected. Recommendation: reject, based on all reviewers' comments.""" 281,"""S2VG: Soft Stochastic Value Gradient method""","['Model-based reinforcement learning', 'soft stochastic value gradient']","""Model-based reinforcement learning (MBRL) has shown advantages in sample efficiency over model-free reinforcement learning (MFRL). Despite the impressive results it achieves, it still faces a trade-off between the ease of data generation and model bias. In this paper, we propose a simple and elegant model-based reinforcement learning algorithm called the soft stochastic value gradient method (S2VG). S2VG combines the merits of maximum-entropy reinforcement learning and MBRL, and exploits both real and imaginary data. In particular, we embed the model in the policy training and learn pseudo-formula and pseudo-formula functions from the real (or imaginary) data set. Such embedding enables us to compute an analytic policy gradient through back-propagation rather than likelihood-ratio estimation, which can reduce the variance of the gradient estimation. We name our algorithm the Soft Stochastic Value Gradient method to indicate its connection with the well-known stochastic value gradient method in \citep{heess2015Learning}.""","""The authors consider improvements to model-based reinforcement learning to improve sample efficiency and computational speed. They propose a method which they claim is simple and elegant and embeds the model in the policy learning step; this allows them to compute analytic gradients through the model which can have lower variance than likelihood ratio gradients. They evaluate their method on MuJoCo with limited data. All of the reviewers found the presentation confusing and below the bar for an acceptable submission. Although the authors tried to explain the algorithm better to the reviewers, they did not find the presentation sufficiently improved. I agree that the paper has substantial room for improvement around clarity. Reviewers also asked that experiments be run for more time steps. I agree that this would be an important addition, as many model-based reinforcement learning approaches perform worse asymptotically than model-free approaches, and it would be interesting to see how the proposed approach does. A reviewer pointed out that equation 2 is missing a term, and indeed I believe that is true. The authors' response is not correct; they likely refer to an equation in SVG where the state is integrated out. Finally, the method does not compare to state-of-the-art model-based approaches, claiming that they use ensembles or uncertainty to improve performance. The authors would need to show that adding either of these to their approach attains similar performance to state-of-the-art approaches. At this time, this paper is below the bar for acceptance.""" 282,"""Representation Learning Through Latent Canonicalizations""","['representation learning', 'latent canonicalization', 'sim2real', 'few shot', 'disentanglement']","""We seek to learn a representation on a large annotated data source that generalizes to a target domain using limited new supervision.
Many prior approaches to this problem have focused on learning disentangled representations so that as individual factors vary in a new domain, only a portion of the representation need be updated. In this work, we seek the generalization power of disentangled representations, but relax the requirement of explicit latent disentanglement and instead encourage linearity of individual factors of variation by requiring them to be manipulable by learned linear transformations. We dub these transformations latent canonicalizers, as they aim to modify the value of a factor to a pre-determined (but arbitrary) canonical value (e.g., recoloring the image foreground to black). Assuming a source domain with access to meta-labels specifying the factors of variation within an image, we demonstrate experimentally that our method helps reduce the number of observations needed to generalize to a similar target domain when compared to a number of supervised baselines. ""","""This paper proposes a method to allow models to generalize more effectively through the use of latent linear transforms. Overall, I think this method is interesting, but both R2 and R4 were concerned with the experimental evaluation being too simplistic, and the method not being applicable to areas where a good simulator is not available. This seems like a very valid concern to me, and given the high bar for acceptance to ICLR, I would suggest that the paper is not accepted at this time. I would encourage the authors to continue with follow-up experiments that better showcase the generality of the method, and re-submit a more polished draft to a conference in the near future.""" 283,"""Improved Generalization Bound of Permutation Invariant Deep Neural Networks""","['Deep Neural Network', 'Invariance', 'Symmetry', 'Group', 'Generalization']","""We theoretically prove that the permutation invariance property of deep neural networks largely improves their generalization performance. Learning problems with data that are invariant to permutations are frequently observed in various applications, for example, point cloud data and graph neural networks. Numerous methodologies have been developed and achieve great performance; however, understanding the mechanism behind this performance is still an open problem. In this paper, we derive a theoretical generalization bound for invariant deep neural networks with a ReLU activation to clarify their mechanism. Consequently, our bound shows that the main term of their generalization gap is improved by pseudo-formula, where pseudo-formula is the number of permuting coordinates of the data. Moreover, we prove that the approximation power of invariant deep neural networks can achieve an optimal rate, even though the networks are restricted to be invariant. To achieve these results, we develop several new proof techniques, such as a correspondence with a fundamental domain and a scale-sensitive metric entropy.""","""This work proves a generalization bound for permutation invariant neural networks (with ReLU activations). While it appears the proof is technically sound and the exact result is novel, reviewers did not feel that the proof significantly improves our understanding of model generalization relative to prior work.
Because of this, the work is too incremental in its current form.""" 284,"""Robust saliency maps with distribution-preserving decoys""","['explainable machine learning', 'explainable AI', 'deep learning interpretability', 'saliency maps', 'perturbation', 'convolutional neural network']","""Saliency methods help to make deep neural network predictions more interpretable by identifying particular features, such as pixels in an image, that contribute most strongly to the network's prediction. Unfortunately, recent evidence suggests that many saliency methods perform poorly when gradients are saturated or in the presence of strong inter-feature dependence or noise injected by an adversarial attack. In this work, we propose a data-driven technique that uses distribution-preserving decoys to infer robust saliency scores in conjunction with a pre-trained convolutional neural network classifier and any off-the-shelf saliency method. We formulate the generation of decoys as an optimization problem, potentially applicable to any convolutional network architecture. We also propose a novel decoy-enhanced saliency score, which provably compensates for gradient saturation and considers joint activation patterns of pixels in a single-layer convolutional neural network. Empirical results on the ImageNet data set using three different deep neural network architectures---VGGNet, AlexNet and ResNet---show both qualitatively and quantitatively that decoy-enhanced saliency scores outperform raw scores produced by three existing saliency methods.""","""This submission proposes a method to explain deep vision models using saliency maps that are robust to certain input perturbations. Strengths: (1) the paper is clear and well-written; (2) the approach is interesting. Weaknesses: (1) the motivation and formulation of the approach (e.g., coherence vs. explanation and the use of decoys) were not convincing; (2) the validation needs additional experiments and comparisons to recent works. These weaknesses were not sufficiently addressed in the discussion phase. The AC agrees with the majority recommendation to reject.""" 285,"""Graph inference learning for semi-supervised classification""","['semi-supervised classification', 'graph inference learning']","""In this work, we address the semi-supervised classification of graph data, where the categories of unlabeled nodes are inferred from labeled nodes as well as graph structures. Recent works often solve this problem with advanced graph convolutions in a conventional supervised manner, but the performance could be heavily affected when labeled data is scarce. Here we propose a Graph Inference Learning (GIL) framework to boost the performance of node classification by learning the inference of node labels on graph topology. To bridge the connection between two nodes, we formally define a structure relation by encapsulating node attributes, between-node paths, and local topological structures together, which allows inference to be conveniently deduced from one node to another. For learning the inference process, we further introduce meta-optimization on structure relations from training nodes to validation nodes, such that the learnt graph inference capability can be better self-adapted to test nodes.
Comprehensive evaluations on four benchmark datasets (including Cora, Citeseer, Pubmed and NELL) demonstrate the superiority of our GIL when compared with other state-of-the-art methods on the semi-supervised node classification task.""","""The authors propose a graph inference learning framework to address the issue of sparse labeled data in graphs. The authors use structural information and node attributes to define a structure relation which is then used to infer unknown labels from known labels. The authors demonstrate the effectiveness of their approach on four benchmark datasets. The approach presented in the paper is sound and the empirical results are convincing. All reviewers have given a positive rating for this paper. Two reviewers had some initial concerns about the paper but after the rebuttal they acknowledged the answers given by the authors and adjusted their scores. R1 still has concerns about the motivation of the paper and I request that the authors adequately address this in their final version.""" 286,"""A Coordinate-Free Construction of Scalable Natural Gradient""","['Natural gradient', 'second-order optimization', 'K-FAC', 'parameterization invariance', 'deep learning']","""Most neural networks are trained using first-order optimization methods, which are sensitive to the parameterization of the model. Natural gradient descent is invariant to smooth reparameterizations because it is defined in a coordinate-free way, but tractable approximations are typically defined in terms of coordinate systems, and hence may lose the invariance properties. We analyze the invariance properties of the Kronecker-Factored Approximate Curvature (K-FAC) algorithm by constructing the algorithm in a coordinate-free way. We explicitly construct a Riemannian metric under which the natural gradient matches the K-FAC update; invariance to affine transformations of the activations follows immediately. We extend our framework to analyze the invariance properties of K-FAC applied to convolutional networks and recurrent neural networks, as well as metrics other than the usual Fisher metric.""","""The authors analyze the natural gradient algorithm for training a neural net from a theoretical perspective and prove connections to the K-FAC algorithm. The paper is poorly written and contains no experimental evaluation or well-established implications regarding the practical significance of the results.""" 287,"""Unsupervised Disentanglement of Pose, Appearance and Background from Images and Videos""",['unsupervised landmark discovery'],"""Unsupervised landmark learning is the task of learning semantic keypoint-like representations without the use of expensive keypoint-level annotations. A popular approach is to factorize an image into a pose and appearance data stream, then to reconstruct the image from the factorized components. The pose representation should capture a set of consistent and tightly localized landmarks in order to facilitate reconstruction of the input image. Ultimately, we wish for our learned landmarks to focus on the foreground object of interest. However, reconstructing the entire image forces the model to allocate landmarks to model the background. This work explores the effects of factorizing the reconstruction task into separate foreground and background reconstructions, conditioning only the foreground reconstruction on the unsupervised landmarks.
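As a reference point for the K-FAC discussion in record 286: a minimal sketch of the standard, coordinate-dependent K-FAC update for a single dense layer, which the paper reconstructs in a coordinate-free way. Function and variable names here are assumptions for illustration, not the authors' code.

```python
import numpy as np

# Standard K-FAC preconditioning for one dense layer with weight W of shape
# (out, in). The Fisher is approximated by a Kronecker product of A, the
# second moment of the layer inputs, and G, the second moment of the
# backpropagated pre-activation gradients, so the update needs only two
# small matrix inverses instead of one huge one.
def kfac_precondition(grad_W, acts, pre_act_grads, damping=1e-3):
    batch = acts.shape[0]
    A = acts.T @ acts / batch                    # (in, in) input second moment
    G = pre_act_grads.T @ pre_act_grads / batch  # (out, out) gradient second moment
    A_inv = np.linalg.inv(A + damping * np.eye(A.shape[0]))
    G_inv = np.linalg.inv(G + damping * np.eye(G.shape[0]))
    return G_inv @ grad_W @ A_inv                # preconditioned gradient, (out, in)
```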
Our experiments demonstrate that the proposed factorization results in landmarks that are focused on the foreground object of interest. Furthermore, the rendered background quality is also improved, as the background rendering pipeline no longer requires the ill-suited landmarks to model its pose and appearance. We demonstrate this improvement in the context of video prediction.""","""The paper proposes an approach for unsupervised learning of keypoint landmarks from images and videos by decomposing them into the foreground and static background. The technical approach builds upon related prior works such as Lorenz et al. 2019 and Jakab et al. 2018 by extending them with foreground/background separation. The proposed method works well for static backgrounds, achieving strong pose prediction results. The weaknesses of the paper are that (1) the proposed method is a fairly reasonable but incremental extension of existing techniques; (2) it relies on a strong assumption of static backgrounds; (3) video prediction results are of limited significance and scope. In particular, the proposed method may work for simple data like KTH but is very limited for modeling videos, as it is not well-suited to handle moving backgrounds, interactions between objects (e.g., robot arm in the foreground and objects in the background), and stochasticity. """ 288,"""Inductive representation learning on temporal graphs""","['temporal graph', 'inductive representation learning', 'functional time encoding', 'self-attention']","""Inductive representation learning on temporal graphs is an important step toward scalable machine learning on real-world dynamic networks. The evolving nature of temporal dynamic graphs requires handling new nodes as well as capturing temporal patterns. The node embeddings, which are now functions of time, should represent both the static node features and the evolving topological structures. Moreover, node and topological features can be temporal as well, whose patterns the node embeddings should also capture. We propose the temporal graph attention (TGAT) layer to efficiently aggregate temporal-topological neighborhood features and learn the time-feature interactions. For TGAT, we use the self-attention mechanism as its building block and develop a novel functional time encoding technique based on the classical Bochner's theorem from harmonic analysis. By stacking TGAT layers, the network recognizes the node embeddings as functions of time and is able to inductively infer embeddings for both new and observed nodes as the graph evolves. The proposed approach handles both node classification and link prediction tasks, and can be naturally extended to include temporal edge features. We evaluate our method with transductive and inductive tasks under temporal settings with two benchmark datasets and one industrial dataset. Our TGAT model compares favorably to state-of-the-art baselines as well as the previous temporal graph embedding approaches.""","""The major contribution of this paper is the use of random Fourier features as temporal (positional) encoding for dynamic graphs. The reviewers all find the proposed method interesting, and believe that this is a paper with reasonable contributions.
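To make the functional time encoding of record 288 concrete: by Bochner's theorem, a translation-invariant time kernel can be represented with sinusoidal features at sampled (or learned) frequencies. A minimal sketch in the generic random-Fourier-feature form; the exact TGAT parametrization and frequency initialization are assumptions.

```python
import numpy as np

# Map a time gap t to a vector of sinusoids with frequencies w, so that the
# inner product of two encodings approximates a kernel over time differences.
def time_encoding(t, w):
    t = np.atleast_1d(t).astype(float)[:, None]              # (n, 1)
    feats = np.concatenate([np.cos(t * w), np.sin(t * w)], axis=-1)
    return feats / np.sqrt(w.shape[0])                       # (n, 2d)

w = np.exp(np.linspace(0.0, -8.0, 16))   # assumed log-spaced frequency init
print(time_encoding([0.5, 3.0], w).shape)  # -> (2, 32)
```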
One comment pointed out that the connection between Time2Vec and harmonic analysis has been discussed in previous work, and we suggest that the authors include this discussion/comparison in the paper.""" 289,"""The Effect of Neural Net Architecture on Gradient Confusion & Training Performance""","['neural network architecture', 'speed of training', 'layer width', 'network depth']","""The goal of this paper is to study why typical neural networks train so fast, and how neural network architecture affects the speed of training. We introduce a simple concept called gradient confusion to help formally analyze this. When confusion is high, stochastic gradients produced by different data samples may be negatively correlated, slowing down convergence. But when gradient confusion is low, data samples interact harmoniously, and training proceeds quickly. Through novel theoretical and experimental results, we show how the neural net architecture affects gradient confusion, and thus the efficiency of training. We show that increasing the width of neural networks leads to lower gradient confusion, and thus easier model training. On the other hand, increasing the depth of neural networks has the opposite effect. Finally, we observe empirically that techniques like batch normalization and skip connections reduce gradient confusion, which helps reduce the training burden of very deep networks.""","""This paper introduces the concept of gradient confusion to show how the neural network architecture affects the speed of training. The reviewers' opinions on this paper vary widely, even after the discussion phase. The main disagreement is on the significance of this work, and whether the concept of gradient confusion adds something meaningful to the existing literature with respect to understanding deep networks. The strong disagreement on this paper suggests that the paper is not quite ready yet for ICLR, but that the authors should make another iteration on the paper to strengthen the case for its significance. """ 290,"""R-TRANSFORMER: RECURRENT NEURAL NETWORK ENHANCED TRANSFORMER""","['Sequence Modeling', 'Multi-head Attention', 'RNNs']","""Recurrent Neural Networks have long been the dominant choice for sequence modeling. However, they suffer from two severe issues: they struggle to capture very long-term dependencies, and their sequential computation cannot be parallelized. Therefore, many non-recurrent sequence models that are built on convolution and attention operations have been proposed recently. Notably, models with multi-head attention such as the Transformer have proven extremely effective in capturing long-term dependencies in a variety of sequence modeling tasks. Despite their success, however, these models lack the necessary components to model local structures in sequences and heavily rely on position embeddings, which have limited effects and require considerable design effort. In this paper, we propose the R-Transformer, which enjoys the advantages of both RNNs and the multi-head attention mechanism while avoiding their respective drawbacks. The proposed model can effectively capture both local structures and global long-term dependencies in sequences without any use of position embeddings.
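To illustrate the gradient confusion concept from record 289: the paper's definition bounds how negative the pairwise inner products of per-sample gradients can be. A minimal empirical estimator, with names and the exact scalar summary chosen as assumptions rather than taken from the paper:

```python
import numpy as np

# Estimate gradient confusion from a batch of per-sample gradients: the most
# negative pairwise inner product (roughly, the paper bounds <g_i, g_j> >= -eta;
# larger returned values indicate more confusion and slower training).
def gradient_confusion(per_sample_grads):
    # per_sample_grads: (n_samples, n_params) flattened gradients
    G = per_sample_grads @ per_sample_grads.T          # pairwise inner products
    off_diag = G[~np.eye(G.shape[0], dtype=bool)]
    return -off_diag.min()
```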
We evaluate R-Transformer through extensive experiments with data from a wide range of domains, and the empirical results show that R-Transformer outperforms the state-of-the-art methods by a large margin in most of the tasks.""","""The submission proposes a variant of a Transformer architecture that does not use positional embeddings to model local structural patterns but instead adds a recurrent layer before each attention layer to maintain local context. The approach is empirically verified on a number of domains. The reviewers had concerns with the paper, most notably that the architectural modification is not sufficiently novel or significant to warrant publication, that appropriate ablations and baselines were not done to convincingly show the benefit of the approach, that the speed tradeoff was not adequately discussed, and that the results were not compared to actual SOTA results. For these reasons, the recommendation is to reject the paper.""" 291,"""BatchEnsemble: an Alternative Approach to Efficient Ensemble and Lifelong Learning""","['deep learning', 'ensembles']",""" Ensembles, where multiple neural networks are trained individually and their predictions are averaged, have been shown to be widely successful for improving both the accuracy and predictive uncertainty of single neural networks. However, an ensemble's cost for both training and testing increases linearly with the number of networks, which quickly becomes untenable. In this paper, we propose BatchEnsemble, an ensemble method whose computational and memory costs are significantly lower than typical ensembles. BatchEnsemble achieves this by defining each weight matrix to be the Hadamard product of a shared weight among all ensemble members and a rank-one matrix per member. Unlike ensembles, BatchEnsemble is not only parallelizable across devices, where one device trains one member, but also parallelizable within a device, where multiple ensemble members are updated simultaneously for a given mini-batch. Across CIFAR-10, CIFAR-100, WMT14 EN-DE/EN-FR translation, and out-of-distribution tasks, BatchEnsemble yields accuracy and uncertainty estimates competitive with typical ensembles; at an ensemble size of 4, the test-time speedup is 3X and the memory reduction is 3X. We also apply BatchEnsemble to lifelong learning, where on Split-CIFAR-100, BatchEnsemble yields comparable performance to progressive neural networks while having much lower computational and memory costs. We further show that BatchEnsemble can easily scale up to lifelong learning on Split-ImageNet, which involves 100 sequential learning tasks.""","""This paper proposed an improved ensemble method called BatchEnsemble, where the weight matrix is decomposed as the element-wise product of a shared weight matrix and a rank-one matrix for each member. The effectiveness of the proposed method has been verified by experiments on a list of various tasks including image classification, machine translation, lifelong learning and uncertainty modeling. The idea is simple and easy to follow. Although some reviewers thought it lacks in-depth analysis, I would like to see it accepted so the community can benefit from it.""" 292,"""Mixed Precision DNNs: All you need is a good parametrization""","['Deep Neural Network Compression', 'Quantization', 'Straight through gradients']","""Efficient deep neural network (DNN) inference on mobile or embedded devices typically involves quantization of the network parameters and activations.
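To make the BatchEnsemble construction in record 291 concrete: each member's weight matrix is the Hadamard product of a shared matrix W and a rank-one matrix r_i s_i^T, and the algebra lets the forward pass avoid ever materializing the per-member matrices. A minimal sketch, with shapes and names as illustrative assumptions:

```python
import numpy as np

# BatchEnsemble-style dense layer: member i's weight is W * outer(r_i, s_i),
# but since x @ (W * outer(r, s)) == ((x * r) @ W) * s, we only need
# element-wise products with two small vectors per member.
def batch_ensemble_forward(x, W, r, s):
    # x: (batch, d_in); W: (d_in, d_out) shared across members
    # r: (members, d_in), s: (members, d_out) rank-one factors
    # returns (members, batch, d_out), one output per ensemble member
    return ((x[None, :, :] * r[:, None, :]) @ W) * s[:, None, :]
```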
In particular, mixed precision networks achieve better performance than networks with homogeneous bitwidth for the same size constraint. Since choosing the optimal bitwidths is not straightforward, training methods that can learn them are desirable. Differentiable quantization with straight-through gradients allows the quantizer's parameters to be learned using gradient methods. We show that a suitable parametrization of the quantizer is the key to achieving stable training and good final performance. Specifically, we propose to parametrize the quantizer with the step size and dynamic range. The bitwidth can then be inferred from them. Other parametrizations, which explicitly use the bitwidth, consistently perform worse. We confirm our findings with experiments on CIFAR-10 and ImageNet, and we obtain mixed precision DNNs with learned quantization parameters, achieving state-of-the-art performance.""","""The reviewers uniformly vote to accept this paper. Please take comments into account when revising for the camera ready. I was also very impressed by the authors' responsiveness to reviewer comments, putting in additional work after submission.""" 293,"""Role-Wise Data Augmentation for Knowledge Distillation""","['Data Augmentation', 'Knowledge Distillation']","""Knowledge Distillation (KD) is a common method for transferring the ``knowledge'' learned by one machine learning model (the teacher) into another model (the student), where typically, the teacher has a greater capacity (e.g., more parameters or higher bit-widths). To our knowledge, existing methods overlook the fact that although the student absorbs extra knowledge from the teacher, both models share the same input data -- and this data is the only medium by which the teacher's knowledge can be demonstrated. Due to the difference in model capacities, the student may not benefit fully from the same data points on which the teacher is trained. On the other hand, a human teacher may demonstrate a piece of knowledge with individualized examples adapted to a particular student, for instance, in terms of her cultural background and interests. Inspired by this behavior, we design data augmentation agents with distinct roles to facilitate knowledge distillation. Our data augmentation agents generate distinct training data for the teacher and student, respectively. We focus specifically on KD when the teacher network has greater precision (bit-width) than the student network. We find empirically that specially tailored data points enable the teacher's knowledge to be demonstrated more effectively to the student. We compare our approach with existing KD methods on training popular neural architectures and demonstrate that role-wise data augmentation improves the effectiveness of KD over strong prior approaches. The code for reproducing our results will be made publicly available.""","""This paper studies Population-Based Augmentation in the context of knowledge distillation (KD) and proposes a role-wise data augmentation scheme for improved KD. While the reviewers believe that there is some merit in the proposed approach, its incremental nature and inherent complexity require a cleaner exposition and a stronger empirical evaluation on additional data sets. I will hence recommend the rejection of this manuscript in its current state.
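To illustrate the "step size plus dynamic range" parametrization from record 292: the bitwidth is never a free parameter, it follows from the two learned scalars. A minimal sketch of an unsigned uniform quantizer under that parametrization; the exact formulas and clipping scheme in the paper may differ.

```python
import numpy as np

# Uniform quantizer parametrized by its step size and dynamic range; the
# (possibly fractional) bitwidth is implied rather than learned directly.
def quantize(x, step, q_max):
    x_clipped = np.clip(x, 0.0, q_max)
    q = np.round(x_clipped / step) * step        # snap to a grid of spacing `step`
    bitwidth = np.log2(q_max / step + 1.0)       # levels = q_max/step + 1
    return q, bitwidth
```

During training, gradients would flow through `x` with a straight-through estimator (treating `round` as the identity in the backward pass), so `step` and `q_max` can be optimized jointly with the weights.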
Nevertheless, applying PBA to KD seems to be an interesting direction, and we encourage the authors to add the missing experiments and to carefully incorporate the reviewer feedback to improve the manuscript.""" 294,"""A Theoretical Analysis of the Number of Shots in Few-Shot Learning""","['Few shot learning', 'Meta Learning', 'Performance Bounds']","""Few-shot classification is the task of predicting the category of an example from a set of few labeled examples. The number of labeled examples per category is called the number of shots (or shot number). Recent works tackle this task through meta-learning, where a meta-learner extracts information from observed tasks during meta-training to quickly adapt to new tasks during meta-testing. In this formulation, the number of shots exploited during meta-training has an impact on the recognition performance at meta-test time. Generally, the shot number used in meta-training should match the one used in meta-testing to obtain the best performance. We introduce a theoretical analysis of the impact of the shot number on Prototypical Networks, a state-of-the-art few-shot classification method. From our analysis, we propose a simple method that is robust to the choice of shot number used during meta-training, which is a crucial hyperparameter. Our model, trained with an arbitrary meta-training shot number, performs well across different values of the meta-testing shot number. We experimentally demonstrate our approach on different few-shot classification benchmarks.""","""The reviewers generally found the paper's contribution to be valuable and informative, and I believe that this paper should be accepted for publication and a poster presentation. I would strongly recommend to the authors to carefully read over the reviews and address any comments or concerns that were not yet addressed in the rebuttal.""" 295,"""CaptainGAN: Navigate Through Embedding Space For Better Text Generation""","['Generative Adversarial Network', 'Text Generation', 'Straight-Through Estimator']","""Score-function-based text generation approaches such as REINFORCE, in general, suffer from high computational complexity and training instability problems. This is mainly due to the non-differentiable nature of discrete-space sampling, and thus these methods have to treat the discriminator as a reward function and ignore the gradient information. In this paper, we propose a novel approach, CaptainGAN, which adopts the straight-through gradient estimator and introduces a re-centered gradient estimation technique to steer the generator toward better text tokens through the embedding space. Our method is stable to train and converges quickly without maximum likelihood pre-training. On multiple metrics of text quality and diversity, our method outperforms existing GAN-based methods on natural language generation.""","""This paper proposes a method to train generative adversarial nets for text generation. The paper proposes to address the challenge of discrete sequences using the straight-through estimator and gradient centering. The reviewers found that the results on COCO Image Captions and EMNLP 2017 News were interesting. However, this paper is borderline because it does not sufficiently motivate one of its key contributions: the gradient centering. The paper establishes that it provides an improvement in ablation, but more in-depth analysis would significantly improve the paper.
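For context on record 295: the straight-through trick it builds on passes hard one-hot tokens forward while letting gradients flow through the soft distribution backward. A minimal sketch of that generic trick (the paper's re-centering refinement is not shown, and all names here are assumptions):

```python
import torch
import torch.nn.functional as F

# Straight-through token selection: forward uses the hard one-hot choice,
# backward differentiates through the softmax probabilities.
def straight_through_tokens(logits, embedding):
    # logits: (batch, vocab); embedding: (vocab, emb_dim)
    probs = F.softmax(logits, dim=-1)
    hard = F.one_hot(probs.argmax(dim=-1), logits.size(-1)).float()
    tokens = hard + probs - probs.detach()      # hard in value, soft in gradient
    return tokens @ embedding                    # differentiable token embeddings
```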
I strongly encourage the authors to resubmit the paper once this has been addressed.""" 296,"""Independence-aware Advantage Estimation""","['Reinforcement Learning', 'Advantage Estimation']","""Most existing advantage function estimation methods in reinforcement learning suffer from the problem of high variance, which scales unfavorably with the time horizon. To address this challenge, we propose to identify the independence property between the current action and future states in environments, which can be further leveraged to effectively reduce the variance of the advantage estimation. In particular, the recognized independence property can be naturally utilized to construct a novel importance sampling advantage estimator with close-to-zero variance, even when the Monte-Carlo return signal yields a large variance. To further remove the risk of the high variance introduced by the new estimator, we combine it with the existing Monte-Carlo estimator via a reward decomposition model learned by minimizing the estimation variance. Experiments demonstrate that our method achieves higher sample efficiency compared with existing advantage estimation methods in complex environments. ""","""Policy gradient methods typically suffer from high variance in the advantage function estimator. The authors point out an independence property between the current action and future states which implies that certain terms from the advantage estimator can be omitted when this property holds. Based on this fact, they construct a novel importance-sampling-based advantage estimator. They evaluate their approach on simple discrete-action environments and demonstrate reduced variance and improved performance. Reviewers were generally concerned about the clarity of the technical exposition and the positioning of this work with respect to other estimators of the advantage function which use control variates. The authors clarified differences between their approach and previous approaches using control variates and clarified many of the technical questions that reviewers asked about. I am not convinced by the merits of this approach. While I think the fundamental idea is interesting, the experiments are limited to simple discrete environments and no comparison is made to other control-variate-based approaches for reducing variance. Furthermore, due to the function approximation which introduces bias, the method should be compared to actor-critic methods which directly estimate the advantage function. Finally, one of the advantages of on-policy policy gradient methods is their simplicity. This method introduces many additional steps and parameters to be learned. The authors would need to demonstrate large improvements in sample efficiency on more complex tasks to justify this added complexity. At this time, I do not recommend this paper for acceptance.""" 297,"""Conditional Learning of Fair Representations""","['algorithmic fairness', 'representation learning']","""We propose a novel algorithm for learning fair representations that can simultaneously mitigate two notions of disparity among different demographic subgroups in the classification setting. Two key components underpinning the design of our algorithm are balanced error rate and conditional alignment of representations. We show how these two components contribute to ensuring accuracy parity and equalized false-positive and false-negative rates across groups without impacting demographic parity.
Furthermore, we also demonstrate, both in theory and in two real-world experiments, that the proposed algorithm leads to a better utility-fairness trade-off on balanced datasets compared with existing algorithms on learning fair representations for classification. ""","""This paper provides a new algorithm for learning fair representations for two different fairness criteria--accuracy parity and equalized odds. The reviewers agree that the paper provides novel techniques, although the experiments may appear to be a bit weak. Overall, this paper gives new contributions to the fair representation learning literature. The authors should consider citing and discussing the relationship with the following work: A Reductions Approach to Fair Classification, ICML 2018.""" 298,"""Sparse Coding with Gated Learned ISTA""","['Sparse coding', 'deep learning', 'learned ISTA', 'convergence analysis']","""In this paper, we study the learned iterative shrinkage thresholding algorithm (LISTA) for solving sparse coding problems. Following assumptions made by prior works, we first discover that the code components in its estimates may be lower than expected, i.e., require gains, and to address this problem, a gated mechanism amenable to theoretical analysis is then introduced. The specific design of the gates is inspired by convergence analyses of the mechanism, and hence its effectiveness can be formally guaranteed. In addition to the gain gates, we further introduce overshoot gates for compensating insufficient step size in LISTA. Extensive empirical results confirm our theoretical findings and verify the effectiveness of our method.""","""The paper extends LISTA by introducing gain gates and overshoot gates, which respectively address underestimation of code components and compensate for the small step size of LISTA. The authors theoretically analyze these extensions and back up the effectiveness of their proposed algorithm with encouraging empirical results. All reviewers are highly positive about the contributions of this paper, and appreciate the rigorous theory which is further supported by convincing experiments. All three reviewers recommended acceptance. """ 299,"""RaCT: Toward Amortized Ranking-Critical Training For Collaborative Filtering ""","['Collaborative Filtering', 'Recommender Systems', 'Actor-Critic', 'Learned Metrics']","""We investigate new methods for training collaborative filtering models based on actor-critic reinforcement learning, to more directly maximize ranking-based objective functions. Specifically, we train a critic network to approximate ranking-based metrics, and then update the actor network to directly optimize against the learned metrics. In contrast to traditional learning-to-rank methods that require re-running the optimization procedure for new lists, our critic-based method amortizes the scoring process with a neural network, and can directly provide the (approximate) ranking scores for new lists. We demonstrate the actor-critic's ability to significantly improve the performance of a variety of prediction models, and achieve better or comparable performance to a variety of strong baselines on three large-scale datasets. ""","""The reviewers generally agreed that the application and method are interesting and relevant, and the paper should be accepted.
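To ground the gated LISTA idea from record 298: a plain LISTA step soft-thresholds a linear combination of the observation and the previous code, and a multiplicative gain gate can compensate for the resulting magnitude underestimation. A minimal sketch, where the specific gate form (1 + ReLU of a linear map) is an assumption, not the paper's exact design:

```python
import numpy as np

def soft_threshold(v, theta):
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# One LISTA iteration with a gain gate: z is the usual thresholded estimate,
# then each component is scaled up by a learned gain >= 1.
def gated_lista_step(x, y, W, S, theta, Ug, bg):
    # y: observation; x: previous code estimate; W, S, theta: LISTA parameters
    z = soft_threshold(W @ y + S @ x, theta)
    gain = 1.0 + np.maximum(Ug @ np.abs(z) + bg, 0.0)   # assumed gate form
    return gain * z
```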
I would encourage the authors to carefully go through the reviewers' suggestions and address them in the final version.""" 300,"""Provable robustness against all adversarial pseudo-formula -perturbations for p ≥ 1""","['adversarial robustness', 'provable guarantees']","""In recent years several adversarial attacks and defenses have been proposed. Often seemingly robust models turn out to be non-robust when more sophisticated attacks are used. One way out of this dilemma are provable robustness guarantees. While provably robust models for specific pseudo-formula -perturbation models have been developed, we show that they do not come with any guarantee against other pseudo-formula -perturbations. We propose a new regularization scheme, MMR-Universal, for ReLU networks which enforces robustness wrt pseudo-formula - \textit{and} pseudo-formula -perturbations and show how that leads to the first provably robust models wrt any pseudo-formula -norm for p ≥ 1.""","""This paper extends the degree to which ReLU networks can be provably resistant to a broader class of adversarial attacks using a MMR-Universal regularization scheme. In particular, the first provably robust model in terms of lp norm perturbations is developed, where robustness holds with respect to *any* p greater than or equal to one (as opposed to prior work that may only apply to specific lp-norm perturbations). While I support accepting this paper based on the strong reviews and significant technical contribution, one potential drawback is the lack of empirical tests with a broader cohort of representative CNN architectures (as pointed out by R1). In this regard, the rebuttal promises that additional experiments with larger models will be added in the future to the final version, but obviously such results cannot be used to evaluate performance at this time.""" 301,"""Variational Autoencoders with Normalizing Flow Decoders""",[],"""Recently proposed normalizing flow models such as Glow (Kingma & Dhariwal, 2018) have been shown to be able to generate high quality, high dimensional images with relatively fast sampling speed. Due to the inherently restrictive design of the architecture, however, such models must be excessively deep in order to achieve effective training. In this paper we propose to combine the Glow model with an underlying variational autoencoder in order to counteract this issue. We demonstrate that our proposed model is competitive with Glow in terms of image quality while requiring far less training time. Additionally, our model achieves a state-of-the-art FID score on CIFAR-10 for a likelihood-based model.""","""The paper received mixed reviews: WR (R1,R3) and WA (R2). The AC has carefully read the reviews and rebuttal and examined the paper. Unfortunately, the AC sides with R1 & R3, who are more experienced in this field than R2, and feels that the paper does not quite meet the acceptance threshold. The authors should incorporate the comments of the reviewers and resubmit to another venue. """ 302,"""MULTI-STAGE INFLUENCE FUNCTION""","['influence function', 'multistage training', 'pretrained model']","""Multi-stage training and knowledge transfer from a large-scale pretrain task to various fine-tune end tasks have revolutionized natural language processing (NLP) and computer vision (CV), with state-of-the-art performances constantly being improved. In this paper, we develop a multi-stage influence function score to track predictions from a fine-tune model all the way back to the pretrain data.
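For reference alongside record 302: the single-stage influence function of Koh & Liang (2017), which the paper generalizes, measures how upweighting one training point changes the loss on one test point via the inverse Hessian. A minimal sketch with an explicit Hessian, purely for clarity (real models use Hessian-vector products instead; names are assumptions):

```python
import numpy as np

# Influence of a training example on a test example at the trained parameters:
#   I = -grad_test^T H^{-1} grad_train
# Negative values mean upweighting the training point would reduce test loss.
def influence(grad_test, grad_train, hessian, damping=1e-3):
    # grad_test, grad_train: (n_params,) gradients; hessian: (n_params, n_params)
    H = hessian + damping * np.eye(hessian.shape[0])
    return -grad_test @ np.linalg.solve(H, grad_train)
```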
With this score, we can identify the pretrain examples in the pretrain task that contribute most to a prediction in the fine-tune task. The proposed multi-stage influence function generalizes the original influence function for a single model in Koh et al. (2017), thereby enabling influence computation through both pretrain and fine-tune models. We test our proposed method in various experiments to show its effectiveness and potential applications.""","""This paper extends the idea of influence functions (aka the implicit function theorem) to multi-stage training pipelines, and also adds an L2 penalty to approximate the effect of training for a limited number of iterations. I think this paper is borderline. I also think that R3 had the best take and questions on this paper. Pros: - The main idea makes sense, and could be used to understand real training pipelines better. - The experiments, while mostly small-scale, answer most of the immediate questions about this model. Cons: - The paper still isn't all that polished. E.g. on page 4: ""Algorithm 1 shows how to compute the influence score in (11). The pseudocode for computing the influence function in (11) is shown in Algorithm 1"" - I wish the image dataset experiments had been done with larger images and models. Ultimately, the straightforwardness of the extension and the relatively niche applications mean that although the main idea is sound, the quality and the overall impact of this paper don't quite meet the bar.""" 303,"""THE EFFECT OF ADVERSARIAL TRAINING: A THEORETICAL CHARACTERIZATION""","['adversarial training', 'robustness', 'separable data']","""It has been widely shown that adversarial training (Madry et al., 2018) is empirically effective in defending against adversarial attacks. However, the theoretical understanding of the difference between the solution of adversarial training and that of standard training is limited. In this paper, we characterize the solution of adversarial training for the linear classification problem for a full range of adversarial radii pseudo-formula. Specifically, we show that if the data themselves are pseudo-formula -strongly linearly-separable, adversarial training with radius smaller than pseudo-formula converges to the hard margin solution of SVM with a faster rate than standard training. If the data themselves are not pseudo-formula -strongly linearly-separable, we show that adversarial training with radius pseudo-formula is stable to outliers while standard training is not. Moreover, we prove that the classifier returned by adversarial training with a large radius pseudo-formula has low confidence in each data point. Experiments corroborate our theoretical findings well.""","""This paper studies adversarial training in the linear classification setting, and shows a rate of convergence for adversarial training of o(1/log T) to the hard margin SVM solution under a set of assumptions. While 2 reviewers agree that the problem and the central result are somewhat interesting (though R3 is uncertain of the applicability to deep learning, I agree that useful insights can often be gleaned from studying the linear case), reviewers were critical of the degree of clarity and rigour in the writing, including notation, symbol reuse, repetitions/redundancies, and clarity surrounding the assumptions made. No updates to the paper were made and reviewers did not feel their concerns were addressed by the rebuttals.
I therefore recommend rejection, but would encourage the authors to continue refining their paper in order to showcase their results more clearly and didactically.""" 304,"""Policy Message Passing: A New Algorithm for Probabilistic Graph Inference""","['graph inference algorithm', 'graph reasoning', 'variational inference']","""A general graph-structured neural network architecture operates on graphs through two core components: (1) sufficiently complex message functions; (2) a fixed information aggregation process. In this paper, we present the Policy Message Passing algorithm, which takes a probabilistic perspective and reformulates the whole information aggregation as a stochastic sequential process. The algorithm works on a much larger search space, utilizes reasoning history to perform inference, and is robust to noisy edges. We apply our algorithm to multiple complex graph reasoning and prediction tasks and show that our algorithm consistently outperforms state-of-the-art graph-structured models by a significant margin.""","""This paper was reviewed by 3 experts, who recommend Weak Reject, Weak Reject, and Reject. The reviewers were overall supportive of the work presented in the paper and felt it would have merit for eventual publication. However, the reviewers identified a number of serious concerns about writing quality, missing technical details, experiments, and missing connections to related work. In light of these reviews, and the fact that the authors have not submitted a response to reviews, we are not able to accept the paper. However, given the supportive nature of the reviews, we hope the authors will work to polish the paper and submit to another venue.""" 305,"""Learning Expensive Coordination: An Event-Based Deep RL Approach""","['Multi-Agent Deep Reinforcement Learning', 'Deep Reinforcement Learning', 'Leader–Follower Markov Game', 'Expensive Coordination']","""Existing works in deep Multi-Agent Reinforcement Learning (MARL) mainly focus on coordinating cooperative agents to complete certain tasks jointly. However, in many real-world settings, agents are self-interested, such as employees in a company or clubs in a league. Therefore, the leader, i.e., the manager of the company or the league, needs to provide bonuses to followers for efficient coordination, which we call expensive coordination. The main difficulties of expensive coordination are that i) the leader has to consider the long-term effect and predict the followers' behaviors when assigning bonuses and ii) the complex interactions between followers make the training process hard to converge, especially when the leader's policy changes with time. In this work, we address this problem through an event-based deep RL approach. Our main contributions are threefold. (1) We model the leader's decision-making process as a semi-Markov Decision Process and propose a novel multi-agent event-based policy gradient to learn the leader's long-term policy. (2) We exploit the leader-follower consistency scheme to design a follower-aware module and a follower-specific attention module to predict the followers' behaviors and respond accurately to them. (3) We propose an action abstraction-based policy gradient algorithm to reduce the followers' decision space and thus accelerate the training process of followers.
Experiments in resource collection, navigation, and the predator-prey game reveal that our approach outperforms the state-of-the-art methods dramatically.""","""This paper tackles the challenge of incentivising selfish agents towards a collaborative goal. In doing so, the authors propose several new modules. The reviewers commented on the experiments being extremely thorough. One reviewer commented on a lack of ablation study of the 3 contributions, which was promptly provided by the authors. The proposed method is also supported by theoretical derivations. The contributions appear to be quite novel, significantly improving performance on the studied SMGs. One reviewer mentioned the clarity being compromised by too much material being in the appendix, which has been addressed by the authors moving some main pieces of content to the main text. Two reviewers commented on the relevance being lower because the problem is not widely studied in RL. I would disagree with the reviewers on this aspect; it is great to have a new problem brought to light with fresh and novel results, rather than having yet another paper work on Atari. I also think that the authors in their rebuttal made the practical relevance of their problem setting sufficiently clear with several practical examples. """ 306,"""FEW-SHOT LEARNING ON GRAPHS VIA SUPER-CLASSES BASED ON GRAPH SPECTRAL MEASURES""","['Few shot graph classification', 'graph spectral measures', 'super-classes']","""We propose to study the problem of few-shot graph classification in graph neural networks (GNNs) to recognize unseen classes, given limited labeled graph examples. Despite several interesting GNN variants being proposed recently for node and graph classification tasks, when faced with scarce labeled examples in the few-shot setting, these GNNs exhibit a significant loss in classification performance. Here, we present an approach where a probability measure is assigned to each graph based on the spectrum of the graph's normalized Laplacian. This enables us to accordingly cluster the graph base-labels associated with each graph into super-classes, where the L^p Wasserstein distance serves as our underlying distance metric. Subsequently, a super-graph constructed based on the super-classes is then fed to our proposed GNN framework, which exploits the latent inter-class relationships made explicit by the super-graph to achieve better class label separation among the graphs. We conduct exhaustive empirical evaluations of our proposed method and show that it outperforms both the adaptation of state-of-the-art graph classification methods to the few-shot scenario and our naive baseline GNNs. Additionally, we also extend and study the behavior of our method in semi-supervised and active learning scenarios.""","""The authors propose a method for few-shot learning for graph classification. The majority of reviewers agree on the novelty of the proposed method and that the problem is interesting. The authors have addressed all major concerns.""" 307,"""Understanding and Stabilizing GANs' Training Dynamics with Control Theory""","['Generative Adversarial Nets', 'Stability Analysis', 'Control Theory']","""Generative adversarial networks~(GANs) have made significant progress on realistic image generation but often suffer from instability during the training process.
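To make the spectral measure underlying record 306 concrete: each graph can be summarized by the distribution of its normalized-Laplacian eigenvalues, and graphs compared with a Wasserstein distance over those distributions. A minimal sketch using the 1-Wasserstein distance; the paper's exact measure, p, and clustering procedure may differ.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Spectrum of the symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2};
# eigenvalues lie in [0, 2] and act as a size-agnostic graph signature.
def laplacian_spectrum(adj):
    deg = adj.sum(axis=1)
    d = np.where(deg > 0, deg, 1.0)
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(d), 0.0)
    L = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return np.sort(np.linalg.eigvalsh(L))

def spectral_distance(adj_a, adj_b):
    return wasserstein_distance(laplacian_spectrum(adj_a), laplacian_spectrum(adj_b))
```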
Most previous analyses mainly focus on the equilibrium that GANs achieve, whereas a gap exists between such theoretical analyses and practical implementations, where it is the training dynamics that plays a vital role in the convergence and stability of GANs. In this paper, we directly model the dynamics of GANs and adopt control theory to understand and stabilize them. Specifically, we interpret the training process of various GANs as certain types of dynamics from a unified perspective of control theory, which enables us to model the stability and convergence easily. Borrowing from control theory, we adopt the widely used negative feedback control to stabilize the training dynamics, which can be considered as an pseudo-formula regularization on the output of the discriminator. We empirically verify our method on both synthetic data and natural image datasets. The results demonstrate that our method can stabilize the training dynamics as well as converge better than baselines.""","""This paper suggests stabilizing the training of GANs using ideas from control theory. The reviewers all noted that the approach was well-motivated and seemed convinced that the problem was a worthwhile one. However, there were universal concerns about the comparisons with baselines and performance relative to previous works on stabilizing GAN training, and the authors were not able to properly address them.""" 308,"""Learning from Unlabelled Videos Using Contrastive Predictive Neural 3D Mapping""","['3D feature learning', 'unsupervised learning', 'inverse graphics', 'object discovery']","""Predictive coding theories suggest that the brain learns by predicting observations at various levels of abstraction. One of the most basic prediction tasks is view prediction: how would a given scene look from an alternative viewpoint? Humans excel at this task. Our ability to imagine and fill in missing information is tightly coupled with perception: we feel as if we see the world in 3 dimensions, while in fact, information from only the front surface of the world hits our retinas. This paper explores the role of view prediction in the development of 3D visual recognition. We propose neural 3D mapping networks, which take as input 2.5D (color and depth) video streams captured by a moving camera, and lift them to stable 3D feature maps of the scene, by disentangling the scene content from the motion of the camera. The model also projects its 3D feature maps to novel viewpoints, to predict and match against target views. We propose contrastive prediction losses to replace the standard color regression loss, and show that this leads to better performance on complex photorealistic data. We show that the proposed model learns visual representations useful for (1) semi-supervised learning of 3D object detectors, and (2) unsupervised learning of 3D moving object detectors, by estimating the motion of the inferred 3D feature maps in videos of dynamic scenes. To the best of our knowledge, this is the first work that empirically shows view prediction to be a scalable self-supervised task beneficial to 3D object detection. ""","""The authors propose to learn space-aware 3D feature abstractions of the world given 2.5D input, by minimizing 3D and 2D view contrastive prediction objectives. The work builds upon Tung et al. (2019) but extends it by removing some of the limitations, making it thus more general. To do so, they learn an inverse graphics network which takes as input 2.5D video and maps it to 3D feature maps of the scene.
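To illustrate the negative-feedback regularizer described in record 307: the stabilizing term can be read as a penalty on the magnitude of the discriminator's output, damping the training dynamics. A minimal sketch of one plausible placement of that term; the weighting and exact formulation are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

# Non-saturating discriminator loss plus a "negative feedback" term that
# penalizes large discriminator outputs (d_real, d_fake are pre-sigmoid logits).
def discriminator_loss_with_feedback(d_real, d_fake, lam=0.1):
    adv = (F.softplus(-d_real) + F.softplus(d_fake)).mean()     # standard GAN loss
    feedback = lam * (d_real.pow(2).mean() + d_fake.pow(2).mean())
    return adv + feedback
```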
The authors present experiments on both real and simulated datasets, and their proposed approach is tested on feature learning, 3D moving object detection, and 3D motion estimation with good performance. All reviewers agree that this is an important problem in computer vision and the paper provides a working solution. The authors have done a good job with comparisons and make a clear case for the superiority of their model (large datasets, multiple tasks). Moreover, the rebuttal period has been quite productive, with the authors incorporating reviewers' comments in the manuscript, thus resulting in a stronger submission. Based on the reviewers' comments and my own assessment, I think this paper should be accepted, as the experiments are solid, with good results that the CV audience of ICLR would find relevant. """ 309,"""Star-Convexity in Non-Negative Matrix Factorization""","['nmf', 'convexity', 'nonconvex optimization', 'average-case-analysis']","""Non-negative matrix factorization (NMF) is a highly celebrated algorithm for matrix decomposition that guarantees strictly non-negative factors. The underlying optimization problem is computationally intractable, yet in practice gradient descent based solvers often find good solutions. This gap between computational hardness and practical success mirrors recent observations in deep learning, where it has been the focus of extensive discussion and analysis. In this paper we revisit the NMF optimization problem and analyze its loss landscape in non-worst-case settings. It has recently been observed that gradients in deep networks tend to point towards the final minimizer throughout the optimization. We show that a similar property holds (with high probability) for NMF, provably in a non-worst-case model with a planted solution, and empirically across an extensive suite of real-world NMF problems. Our analysis predicts that this property becomes more likely with a growing number of parameters, and experiments suggest that a similar trend might also hold for deep neural networks --- turning increasing data sets and models into a blessing from an optimization perspective. ""","""The paper derives results for non-negative matrix factorization along the lines of recent results on SGD for DNNs, showing that the loss is star-convex towards randomized planted solutions. Overall, the paper is relatively well written and fairly clear. The reviewers agree that the theoretical contribution of the paper could be improved (tighter bounds) and that the experiments could be improved as well. In the context of other papers submitted to ICLR, I therefore recommend rejecting the paper. """ 310,"""BayesOpt Adversarial Attack""","['Black-box Adversarial Attack', 'Bayesian Optimisation', 'Gaussian Process']","""Black-box adversarial attacks require a large number of attempts before finding successful adversarial examples that are visually indistinguishable from the original input. Current approaches relying on substitute model training, gradient estimation or genetic algorithms often require an excessive number of queries. Therefore, they are not suitable for real-world systems where the maximum query number is limited due to cost. We propose a query-efficient black-box attack which uses Bayesian optimisation in combination with Bayesian model selection to optimise over the adversarial perturbation and the optimal degree of search space dimension reduction.
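To make the star-convexity property from record 309 concrete: a function is star-convex toward a minimizer if the usual convexity inequality holds along every segment ending at that minimizer. A minimal sketch of an empirical check on the NMF loss, in the spirit of (though not copied from) the paper's experiments; names are assumptions.

```python
import numpy as np

# Check f(lam * x_star + (1 - lam) * x0) <= lam * f(x_star) + (1 - lam) * f(x0)
# along the segment from a random point (W0, H0) to a solution (W_star, H_star)
# of the NMF objective f(W, H) = ||X - W H||_F^2.
def is_star_convex_along_path(X, W0, H0, W_star, H_star, n_pts=20, tol=1e-9):
    f = lambda W, H: np.linalg.norm(X - W @ H) ** 2
    f0, fs = f(W0, H0), f(W_star, H_star)
    for lam in np.linspace(0.0, 1.0, n_pts):
        W = lam * W_star + (1 - lam) * W0
        H = lam * H_star + (1 - lam) * H0
        if f(W, H) > lam * fs + (1 - lam) * f0 + tol:
            return False
    return True
```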
We demonstrate empirically that our method can achieve comparable success rates with 2-5 times fewer queries compared to previous state-of-the-art black-box attacks.""","""This paper proposes a query-efficient black-box attack that uses Bayesian optimization in combination with Bayesian model selection to optimize over the adversarial perturbation and the optimal degree of search space dimension reduction. The method can achieve comparable success rates with 2-5 times fewer queries compared to previous state-of-the-art black-box attacks. The paper should be further improved in the final version (e.g., by including more results on ImageNet data).""" 311,"""Learning Algorithmic Solutions to Symbolic Planning Tasks with a Neural Computer""",[],"""A key feature of intelligent behavior is the ability to learn abstract strategies that transfer to unfamiliar problems. Therefore, we present a novel architecture, based on memory-augmented networks, that is inspired by the von Neumann and Harvard architectures of modern computers. This architecture enables the learning of abstract algorithmic solutions via Evolution Strategies in a reinforcement learning setting. Applied to Sokoban, sliding block puzzle and robotic manipulation tasks, we show that the architecture can learn algorithmic solutions with strong generalization and abstraction: scaling to arbitrary task configurations and complexities, and being independent of both the data representation and the task domain.""","""The authors present a method that optimizes a differentiable neural computer with evolutionary search, and which can transfer abstract strategies to novel problems. The reviewers all agreed that the approach is interesting, though they were concerned about the magnitude of the contribution / novelty compared to existing work, clarity of contributions, impact of pretraining, and simplicity of examples. While the reviewers felt that the authors resolved many of their concerns in the rebuttal, there was remaining concern about the significance of the contribution. Thus, I recommend this paper for rejection at this time.""" 312,"""Blockwise Adaptivity: Faster Training and Better Generalization in Deep Learning""","['optimization', 'deep learning', 'blockwise adaptivity']","""Stochastic methods with coordinate-wise adaptive stepsizes (such as RMSprop and Adam) have been widely used in training deep neural networks. Despite their fast convergence, they can generalize worse than stochastic gradient descent. In this paper, by revisiting the design of Adagrad, we propose to split the network parameters into blocks and use a blockwise adaptive stepsize. Intuitively, blockwise adaptivity is less aggressive than adaptivity to individual coordinates, and can strike a better balance between adaptivity and generalization. We show theoretically that the proposed blockwise adaptive gradient descent has comparable regret in online convex learning and a comparable convergence rate for optimizing nonconvex objectives as its counterpart with coordinate-wise adaptive stepsize, but is better up to some constant. We also study its uniform stability and show that blockwise adaptivity can lead to lower generalization error than coordinate-wise adaptivity. Experimental results show that blockwise adaptive gradient descent converges faster and improves generalization performance over Nesterov's accelerated gradient and Adam.""","""The authors propose an adaptive block-wise coordinate descent method and claim faster convergence and lower generalization error.
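To make the blockwise adaptivity of record 312 concrete: instead of Adagrad's per-coordinate accumulator, one scalar accumulator is kept per parameter block (e.g., per layer). A minimal sketch; the block layout and any block-size normalization are assumptions, not the paper's exact scheme.

```python
import numpy as np

# One blockwise-Adagrad step: each block shares a single adaptive stepsize
# driven by the accumulated squared norm of that block's gradients.
def blockwise_adagrad_step(params, grads, accum, lr=0.1, eps=1e-8):
    # params, grads: lists of arrays, one per block; accum: list of scalars
    for i, g in enumerate(grads):
        accum[i] += float(np.sum(g * g))                 # squared block-gradient norm
        params[i] -= lr / (np.sqrt(accum[i]) + eps) * g  # shared stepsize in block i
    return params, accum
```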
While the reviewers agreed that this method may work well in practice, they had several concerns about the relevance of the theory and the strength of the empirical results. After considering the author responses, the reviewers have agreed that this paper is not yet ready for publication. """ 313,"""The Variational Bandwidth Bottleneck: Stochastic Evaluation on an Information Budget""","['Variational Information Bottleneck', 'Reinforcement learning']","""In many applications, it is desirable to extract only the relevant information from complex input data, which involves making a decision about which input features are relevant. The information bottleneck method formalizes this as an information-theoretic optimization problem by maintaining an optimal tradeoff between compression (throwing away irrelevant input information), and predicting the target. In many problem settings, including the reinforcement learning problems we consider in this work, we might prefer to compress only part of the input. This is typically the case when we have a standard conditioning input, such as a state observation, and a ``privileged'' input, which might correspond to the goal of a task, the output of a costly planning algorithm, or communication with another agent. In such cases, we might prefer to compress the privileged input, either to achieve better generalization (e.g., with respect to goals) or to minimize access to costly information (e.g., in the case of communication). Practical implementations of the information bottleneck based on variational inference require access to the privileged input in order to compute the bottleneck variable, so although they perform compression, this compression operation itself needs unrestricted, lossless access. In this work, we propose the variational bandwidth bottleneck, which, for each example, estimates the value of the privileged information before seeing it, i.e., based only on the standard input, and then stochastically chooses whether or not to access the privileged input. We formulate a tractable approximation to this framework and demonstrate in a series of reinforcement learning experiments that it can improve generalization and reduce access to computationally costly information.""","""Existing implementations of the information bottleneck need access to privileged information, which goes against the idea of compression. The authors propose the variational bandwidth bottleneck, which estimates the value of the privileged information and then stochastically decides whether to access this information or not. They provide a suitable approximation and show that their method improves generalisation in RL while reducing access to expensive information. This paper received only two reviews. However, both reviews were favourable. During discussions with the AC, the reviewers acknowledged that most of their concerns were addressed. R2 is still concerned that VBB does not result in improvement in terms of sample efficiency. I request the authors to adequately address this in the final version.
Having said that, the paper does make other interesting contributions, hence I recommend that this paper be accepted.""" 314,"""Imitation Learning of Robot Policies using Language, Vision and Motion""","['robot learning', 'imitation learning', 'natural language processing']","""In this work we propose a novel end-to-end imitation learning approach which combines natural language, vision, and motion information to produce an abstract representation of a task, which in turn can be used to synthesize specific motion controllers at run-time. This multimodal approach enables generalization to a wide variety of environmental conditions and allows an end-user to influence a robot policy through verbal communication. We empirically validate our approach with an extensive set of simulations and show that it achieves a high task success rate over a variety of conditions while remaining amenable to probabilistic interpretability.""","""The present paper addresses the problem of imitation learning in multi-modal settings, combining vision, language and motion. The proposed approach learns an abstract task representation, and the goal is to use this as a basis for generalization. This paper was subject to considerable discussion, and the authors clarified several issues that reviewers raised during the rebuttal phase. Overall, the empirical study presented in the paper remains limited, for example in terms of ablations (which components of the proposed model have what effect on performance) and placement in the context of prior work. As a result, the depth of insights is not yet sufficient for publication.""" 315,"""Regularizing Deep Multi-Task Networks using Orthogonal Gradients""","['multi-task learning', 'gradient regularization', 'orthogonal gradients']","""Deep neural networks are a promising approach towards multi-task learning because of their capability to leverage knowledge across domains and learn general purpose representations. Nevertheless, they can fail to live up to these promises as tasks often compete for a model's limited resources, potentially leading to lower overall performance. In this work we tackle the issue of interfering tasks through a comprehensive analysis of their training, derived from looking at the interaction between gradients within their shared parameters. Our empirical results show that well-performing models have low variance in the angles between task gradients and that popular regularization methods implicitly reduce this measure. Based on this observation, we propose a novel gradient regularization term that minimizes task interference by enforcing near orthogonal gradients. Updating the shared parameters using this property encourages task specific decoders to optimize different parts of the feature extractor, thus reducing competition. We evaluate our method with classification and regression tasks on the multiDigitMNIST and NYUv2 datasets, where we obtain competitive results. This work is a first step towards non-interfering multi-task optimization.""","""This paper proposes a training approach that orthogonalizes gradients to enable better learning across multiple tasks. The idea is simple and intuitive. Given that there is past work following the same kind of ideas, one would need to further: (a) expand the experimental evaluation section with comparisons to prior work and, ideally, demonstrate stronger results; (b) study in more depth the assumptions behind gradient orthogonality for transfer.
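To ground the gradient regularization idea of record 315: a penalty on the (squared) cosine between two tasks' gradients with respect to the shared parameters pushes those gradients toward orthogonality. A minimal sketch; the exact regularizer in the paper may differ, and all names are assumptions.

```python
import torch

# Squared cosine similarity between the gradients of two task losses w.r.t.
# the shared parameters; adding this (weighted) to the total loss encourages
# near-orthogonal task gradients.
def orthogonality_penalty(loss_a, loss_b, shared_params):
    ga = torch.autograd.grad(loss_a, shared_params, create_graph=True)
    gb = torch.autograd.grad(loss_b, shared_params, create_graph=True)
    ga = torch.cat([g.reshape(-1) for g in ga])
    gb = torch.cat([g.reshape(-1) for g in gb])
    cos = torch.dot(ga, gb) / (ga.norm() * gb.norm() + 1e-12)
    return cos ** 2
```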
This would increase the impact on top of past literature by explaining, beyond intuitions, why gradient orthogonality helps transfer in the first place. """ 316,"""SDGM: Sparse Bayesian Classifier Based on a Discriminative Gaussian Mixture Model""","['classification', 'sparse Bayesian learning', 'Gaussian mixture model']","""In probabilistic classification, a discriminative model based on a Gaussian mixture exhibits flexible fitting capability. Nevertheless, it is difficult to determine the number of components. We propose a sparse classifier based on a discriminative Gaussian mixture model (GMM), which is named sparse discriminative Gaussian mixture (SDGM). In the SDGM, a GMM-based discriminative model is trained by sparse Bayesian learning. This learning algorithm improves the generalization capability by obtaining a sparse solution and automatically determines the number of components by removing redundant components. The SDGM can be embedded into neural networks (NNs) such as convolutional NNs and can be trained in an end-to-end manner. Experimental results indicated that the proposed method prevented overfitting by obtaining sparsity. Furthermore, we demonstrated that the proposed method outperformed a fully connected layer with the softmax function in certain cases when it was used as the last layer of a deep NN.""","""This paper presents a method for merging a discriminative GMM with an ARD sparsity-promoting prior. This is accomplished by nesting the ARD prior update within a larger EM-based routine for handling the GMM, allowing the model to automatically remove redundant components and improve generalization. The resulting algorithm was deployed on standard benchmark data sets and compared against existing baselines such as logistic regression, RVMs, and SVMs. Overall, one potential weakness of this paper, which is admittedly somewhat subjective, is that the exhibited novelty of the proposed approach is modest. Indeed, ARD approaches are now widely used in various capacities, and even if some hurdles must be overcome to implement the specific marriage with a discriminative GMM as reported here, at least one reviewer did not feel that this was sufficient to warrant publication. Other concerns related to the experiments and comparison with existing work. For example, one reviewer mentioned comparisons with Panousis et al., ""Nonparametric Bayesian Deep Networks with Local Competition,"" ICML 2019 and requested a discussion of differences. However, the rebuttal merely deferred this consideration to future work and provided no feedback regarding similarities or differences. In the end, all reviewers recommended rejecting this paper and I did not find any sufficient reason to overrule this consensus.""" 317,"""Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery""","['reinforcement learning', 'semi-supervised learning', 'unsupervised learning', 'robotics', 'deep learning']","""Reinforcement learning requires manual specification of a reward function to learn a task. While in principle this reward function only needs to specify the task goal, in practice reinforcement learning can be very time-consuming or even infeasible unless the reward function is shaped so as to provide a smooth gradient towards a successful outcome. This shaping is difficult to specify by hand, particularly when the task is learned from raw observations, such as images.
In this paper, we study how we can automatically learn dynamical distances: a measure of the expected number of time steps to reach a given goal state from any other state. These dynamical distances can be used to provide well-shaped reward functions for reaching new goals, making it possible to learn complex tasks efficiently. We show that dynamical distances can be used in a semi-supervised regime, where unsupervised interaction with the environment is used to learn the dynamical distances, while a small amount of preference supervision is used to determine the task goal, without any manually engineered reward function or goal examples. We evaluate our method both on a real-world robot and in simulation. We show that our method can learn to turn a valve with a real-world 9-DoF hand, using raw image observations and just ten preference labels, without any other supervision. Videos of the learned skills can be found on the project website: pseudo-url""","""The authors present a method to learn the expected number of time steps to reach any given state from any other state in a reinforcement learning setting. They show that these so-called dynamical distances can be used to increase learning efficiency by helping to shape reward. After some initial discussion, the reviewers had concerns about the applicability of this method to continuing problems without a clear goal state, learning issues due to the dependence of distance estimates on policy (and vice versa), experimental thoroughness, and a variety of smaller technical issues. While some of these were resolved, the largest outstanding issue is whether the proper comparisons were made to existing work other than DIAYN. The authors appear to agree that additional baselines would benefit the paper, but are uncertain whether this can occur in time. Nonetheless, after discussion the reviewers all appeared to agree on the merit of the core idea, though I strongly encourage the authors to address as many technical and baseline issues as possible before the camera-ready deadline. In summary, I recommend this paper for acceptance.""" 318,"""Behavior Regularized Offline Reinforcement Learning""","['reinforcement learning', 'offline RL', 'batch RL']","""In reinforcement learning (RL) research, it is common to assume access to direct online interactions with the environment. However, in many real-world applications, access to the environment is limited to a fixed offline dataset of logged experience. In such settings, standard RL algorithms have been shown to diverge or otherwise yield poor performance. Accordingly, much recent work has suggested a number of remedies to these issues. In this work, we introduce a general framework, behavior regularized actor critic (BRAC), to empirically evaluate recently proposed methods as well as a number of simple baselines across a variety of offline continuous control tasks. Surprisingly, we find that many of the technical complexities introduced in recent methods are unnecessary to achieve strong performance. Additional ablations provide insights into which design choices matter most in the offline RL setting.""","""This paper is an empirical study of methods to stabilize offline (i.e., batch) RL methods, where the dataset is available up front and not collected during learning. This can be an important setting in e.g. safety-critical or production systems, where learned policies should not be applied on the real system until their performance and safety is verified.
Since learned policies can leave the region where training data is present, poor performance or divergence might result in such settings unless divergence from the reference policy is regularized. This paper studies various methods to perform such regularization. The reviewers are all very happy about the thoroughness of the empirical work. The work only studies existing methods (and combinations thereof), so the novelty is limited by design. The paper was also considered well written and easy to follow. The results were very similar between the considered regularizers, which somewhat limits the usefulness of the paper as a practical guideline (although at least now we know that perhaps we do not need to spend a lot of time choosing the best among these). Bigger differences were observed between ""value penalties"" and ""policy regularization"". This seems to correspond to theoretical observations by Neu et al. (pseudo-url, 2017), which is not cited in the manuscript. Although unpublished, I think that work is highly relevant for the current manuscript, and I'd strongly recommend that the authors consider its content. Some minor comments about the paper are given below. On balance, the strong points of the paper are the empirical thoroughness and clarity, whereas novelty, significance, and theoretical analysis are weaker points. Due to the high selectivity of ICLR, I unfortunately have to recommend rejection for this manuscript. I have some minor comments about the contents of the paper: - The manuscript contains the line: ""Under this definition, such a behavior policy b is always well-defined even if the dataset was collected by multiple, distinct behavior policies"". Wouldn't simply defining the behavior as a mixture of the underlying behavior policies (when known) work equally well? - The paper mentions several earlier works that regularize policy updates using the KL from a reference policy (or to a reference policy). The paper of Peters is cited in this context, although there the constraint is actually on the KL divergence between state-action distributions, resulting in a different type of regularization.""" 319,"""Resizable Neural Networks""",[],""" In this paper, we present a deep convolutional neural network (CNN) which performs arbitrary resize operations on intermediate feature map resolutions at the stage level. Motivated by the weight sharing mechanism in neural architecture search, where a super-network is trained and sub-networks inherit the weights from the super-network, we present a novel CNN approach. We construct a spatial super-network which consists of multiple sub-networks, where each sub-network is a single-scale network with a unique spatial configuration, and the convolutional layers are shared across all sub-networks. Such networks, named Resizable Neural Networks, are equivalent to training infinitely many single-scale networks, but incur no extra computational cost. Moreover, we present a training algorithm such that all sub-networks achieve better performance than their individually trained counterparts. On large-scale ImageNet classification, we demonstrate its effectiveness on various modern network architectures such as MobileNet, ShuffleNet, and ResNet. To go even further, we present three variants of resizable networks: 1) Resizable as Architecture Search (Resizable-NAS). On ImageNet, Resizable-NAS ResNet-50 attains 0.4% higher accuracy while being 44% smaller than the baseline model. 2) Resizable as Data Augmentation (Resizable-Aug).
When we use resizable networks as a data augmentation technique, they obtain superior performance on ImageNet classification, outperforming AutoAugment by 1.2% with ResNet-50. 3) Adaptive Resizable Network (Resizable-Adapt). We introduce adaptive resizable networks as dynamic networks, which further improve performance with less computational cost via data-dependent inference.""","""This paper offers likely novel schemes for image resizing. The performance improvement is clear. Unfortunately, two reviewers find substantial clarity issues in the manuscript after revision, and the AC concurs that this is still an issue. The paper is borderline and, given the number of higher-ranked papers in the pool, unfortunately cannot be accepted. """ 320,"""The Local Elasticity of Neural Networks""",[],"""This paper presents a phenomenon in neural networks that we refer to as local elasticity. Roughly speaking, a classifier is said to be locally elastic if its prediction at a feature vector x' is not significantly perturbed after the classifier is updated via stochastic gradient descent at a (labeled) feature vector x that is dissimilar to x' in a certain sense. This phenomenon is shown to persist for neural networks with nonlinear activation functions through extensive simulations on real-life and synthetic datasets, whereas it is not observed in linear classifiers. In addition, we offer a geometric interpretation of local elasticity using the neural tangent kernel (Jacot et al., 2018). Building on top of local elasticity, we obtain pairwise similarity measures between feature vectors, which can be used for clustering in conjunction with K-means. The effectiveness of the clustering algorithm on the MNIST and CIFAR-10 datasets in turn corroborates the hypothesis of local elasticity of neural networks on real-life data. Finally, we discuss some implications of local elasticity to shed light on several intriguing aspects of deep neural networks.""","""This paper presents a new phenomenon referred to as the ""local elasticity of neural networks"". The main argument is that the SGD update for a nonlinear network at a local input x does not change the predictions at a different input x' (see Fig. 2). This is then connected to similarity using nearest-neighbor and kernel methods. An algorithm is also presented. The reviewers find the paper intriguing and believe that this could be interesting for the community. After the rebuttal period, one of the reviewers increased their score. I do agree with the view of the reviewers, although I found that the paper's presentation can be improved. For example, Fig. 1 is not clear at all, and the related work section basically talks about many existing works but does not discuss why they are related to this work and how this work adds value to these existing works. I found Fig. 2 very clear and informative. I hope that the authors can further improve the presentation. This should help in improving the impact of the paper. With the reviewers' scores, I recommend accepting this paper, and encourage the authors to improve the presentation of the paper.""" 321,"""Deep Evidential Uncertainty""","['Evidential deep learning', 'Uncertainty estimation', 'Epistemic uncertainty']","""Deterministic neural networks (NNs) are increasingly being deployed in safety critical domains, where calibrated, robust and efficient measures of uncertainty are crucial.
While it is possible to train regression networks to output the parameters of a probability distribution by maximizing a Gaussian likelihood function, the resulting model remains oblivious to the underlying confidence of its predictions. In this paper, we propose a novel method for training deterministic NNs to not only estimate the desired target but also the associated evidence in support of that target. We accomplish this by placing evidential priors over our original Gaussian likelihood function and training our NN to infer the hyperparameters of our evidential distribution. We impose priors during training such that the model is penalized when its predicted evidence is not aligned with the correct output. Thus the model estimates not only the probabilistic mean and variance of our target but also the underlying uncertainty associated with each of those parameters. We observe that our evidential regression method learns well-calibrated measures of uncertainty on various benchmarks, scales to complex computer vision tasks, and is robust to adversarial input perturbations. ""","""This paper presents a method for providing uncertainty for deep learning regressors through assigning a notion of evidence to the predictions. This is done by putting priors on the parameters of the Gaussian outputs of the model and estimating these via an empirical Bayes-like optimization. The reviewers in general found the methodology sensible, although incremental in light of Sensoy et al. and Malinin & Gales, but found the experiments thorough. A comment on the paper pointed out that the approach was very similar to something presented in the thesis of Malinin (it seems unfair to expect the authors to have been aware of this, but the thesis should be cited and not just the paper, which is a different contribution). In discussion, one reviewer raised their score from weak reject to weak accept, but the highest-scoring reviewer explicitly was not willing to champion the paper and raise their score to accept. Thus the recommendation here is to reject. Taking the reviewer feedback into account, incorporating the proposed changes and adding a more careful treatment of related work would make this a much stronger submission to a future conference.""" 322,"""Lossless Data Compression with Transformer""","['data compression', 'transformer']","""Transformers have replaced long short-term memory and other recurrent neural network variants in sequence modeling. They achieve state-of-the-art performance on a wide range of tasks related to natural language processing, including language modeling, machine translation, and sentence representation. Lossless compression is another problem that can benefit from better sequence models. It is closely related to the problem of online learning of language models. But, despite this resemblance, it is an area where purely neural network based methods have not yet reached the compression ratio of state-of-the-art algorithms. In this paper, we propose a Transformer-based lossless compression method that matches the best compression ratio for text. Our approach is purely based on neural networks and, unlike other lossless compression algorithms, does not rely on hand-crafted features. We also provide a thorough study of the impact of the different components of the Transformer and its training on the compression ratio.""","""The paper proposes to use transformers to do lossless data compression. The idea is simple and straightforward (with the addition of n-gram inputs).
The initial submission considered one dataset; a new dataset was added in the rebuttal. Still, there are no runtime measurements in the experiments (and Transformers can take a lot of time to train). Since this is more of an experimental paper, this is crucial (and the reported improvements are very small, making it difficult to judge whether they are significant). Overall, there was a positive discussion between the authors and the reviewers. The reviewers commented that concerns had been addressed, but did not change their evaluation, which is a unanimous reject. """ 323,"""Refining the variational posterior through iterative optimization""","['uncertainty estimation', 'variational inference', 'auxiliary variables', 'Bayesian neural networks']","""Variational inference (VI) is a popular approach for approximate Bayesian inference that is particularly promising for highly parameterized models such as deep neural networks. A key challenge of variational inference is to approximate the posterior over model parameters with a distribution that is simpler and tractable yet sufficiently expressive. In this work, we propose a method for training highly flexible variational distributions by starting with a coarse approximation and iteratively refining it. Each refinement step makes cheap, local adjustments and only requires optimization of simple variational families. We demonstrate theoretically that our method always improves a bound on the approximation (the Evidence Lower BOund) and observe this empirically across a variety of benchmark tasks. In experiments, our method consistently outperforms recent variational inference methods for deep learning in terms of log-likelihood and the ELBO. We see that the gains are further amplified on larger scale models, significantly outperforming standard VI and deep ensembles on residual networks on CIFAR10.""","""In this paper a method for refining the variational approximation is proposed. The reviewers liked the contribution, but a number of reservations, such as a missing reference, made the paper drop below the acceptance threshold. The authors are encouraged to revise the paper and submit it to the next conference. Reject. """ 324,"""Adaptive Adversarial Imitation Learning""","['Imitation Learning', 'Reinforcement Learning']","""We present the ADaptive Adversarial Imitation Learning (ADAIL) algorithm for learning adaptive policies that can be transferred between environments of varying dynamics, by imitating a small number of demonstrations collected from a single source domain. This problem is important in robotic learning because in real-world scenarios: 1) reward functions are hard to obtain, 2) learned policies from one domain are difficult to deploy in another due to varying source to target domain statistics, 3) collecting expert demonstrations in multiple environments where the dynamics are known and controlled is often infeasible. We address these constraints by building upon recent advances in adversarial imitation learning; we condition our policy on a learned dynamics embedding and we employ a domain-adversarial loss to learn a dynamics-invariant discriminator. The effectiveness of our method is demonstrated on simulated control tasks with varying environment dynamics, and the learned adaptive agent outperforms several recent baselines.""","""This paper extends adversarial imitation learning to an adaptive setting where environment dynamics change frequently. The authors propose a novel approach with pragmatic design choices to address the challenges that arise in this setting.
Several questions and requests for clarification were addressed during the reviewing phase. The paper remains borderline after the rebuttal. Remaining concerns include the size of the algorithmic or conceptual contribution of the paper.""" 325,"""AutoQ: Automated Kernel-Wise Neural Network Quantization ""","['AutoML', 'Kernel-Wise Neural Networks Quantization', 'Hierarchical Deep Reinforcement Learning']","""Network quantization is one of the most hardware-friendly techniques to enable the deployment of convolutional neural networks (CNNs) on low-power mobile devices. Recent network quantization techniques quantize each weight kernel in a convolutional layer independently for higher inference accuracy, since the weight kernels in a layer exhibit different variances and hence have different amounts of redundancy. The quantization bitwidth or bit number (QBN) directly decides the inference accuracy, latency, energy and hardware overhead. To effectively reduce the redundancy and accelerate CNN inferences, various weight kernels should be quantized with different QBNs. However, prior works use only one QBN to quantize each convolutional layer or the entire CNN, because the design space of searching a QBN for each weight kernel is too large. The hand-crafted heuristics of the kernel-wise QBN search are so sophisticated that even domain experts can obtain only sub-optimal results. It is difficult even for deep reinforcement learning (DRL) DDPG-based agents to find a kernel-wise QBN configuration that can achieve reasonable inference accuracy. In this paper, we propose a hierarchical-DRL-based kernel-wise network quantization technique, AutoQ, to automatically search a QBN for each weight kernel, and choose another QBN for each activation layer. Compared to the models quantized by the state-of-the-art DRL-based schemes, on average, the same models quantized by AutoQ reduce the inference latency by 54.06%, and decrease the inference energy consumption by 50.69%, while achieving the same inference accuracy.""","""This paper proposes a network quantization method which is based on kernel-level quantization. The extension from layer-level to kernel-level is straightforward, and so the novelty is somewhat limited given its similarity with HAQ. Nevertheless, experimental results demonstrate its efficiency in real applications. The paper can be improved by clarifying some experimental details and by further discussing its relationship with HAQ.""" 326,"""Learnable Group Transform For Time-Series""","['Group Transform', 'Time-Frequency Representation', 'Wavelet Transform', 'Group Theory', 'Representation Theory', 'Time-Series']","""We propose to undertake the problem of representation learning for time-series by considering a Group Transform approach. This framework allows us to, first, generalize classical time-frequency transformations such as the Wavelet Transform, and, second, enable the learnability of the representation. While the creation of the Wavelet Transform filter-bank relies on the sampling of the affine group in order to transform the mother filter, our approach allows for non-linear transformations of the mother filter by introducing the group of strictly increasing and continuous functions. The transformations induced by such a group enable us to span a larger class of signal representations. The sampling of this group can be optimized with respect to a specific loss function and thus cast into a Deep Learning architecture.
The experiments on diverse time-series datasets demonstrate the expressivity of this framework, which competes with state-of-the-art performance.""","""This paper received two weak rejects (3) and one accept (8). In the discussion phase, the paper received significant discussion between the authors and reviewers and internally between the reviewers (which is tremendously appreciated). In particular, there was a discussion about the novelty of the contribution and ideas (AnonReviewer3 felt that the ideas presented provided an interesting new thought-provoking perspective) and the strength of the empirical results. None of the reviewers felt strongly about rejecting, and none would argue strongly against acceptance. However, AnonReviewer3 was not prepared to really champion the paper for acceptance due to a lack of confidence. Unfortunately, the paper falls just below the bar for acceptance. Taking the reviewer feedback into account and adding careful new experiments with strong results would make this a much stronger paper for a future submission.""" 327,"""Zero-Shot Policy Transfer with Disentangled Attention""","['Transfer Learning', 'Reinforcement Learning', 'Attention', 'Domain Adaptation', 'Representation Learning', 'Feature Extraction']","""Domain adaptation is an open problem in deep reinforcement learning (RL). Often, agents are asked to perform in environments where data is difficult to obtain. In such settings, agents are trained in similar environments, such as simulators, and are then transferred to the original environment. The gap between visual observations of the source and target environments often causes the agent to fail in the target environment. We present a new RL agent, SADALA (Soft Attention DisentAngled representation Learning Agent). SADALA first learns a compressed state representation. It then jointly learns to ignore distracting features and solve the task presented. SADALA's separation of important and unimportant visual features leads to robust domain transfer. SADALA outperforms both prior disentangled-representation based RL and domain randomization approaches across RL environments (Visual Cartpole and DeepMind Lab).""","""This paper proposes a new method for zero-shot policy transfer in RL. The authors propose learning the policy over a disentangled representation that is augmented with attention. Hence, the paper is a simple modification of an existing approach (DARLA). The reviewers agreed that the novelty of the proposed approach and the experimental evaluation are limited. For this reason I recommend rejection.""" 328,"""Fantastic Generalization Measures and Where to Find Them""","['Generalization', 'correlation', 'experiments']","""Generalization of deep networks has been intensely researched in recent years, resulting in a number of theoretical bounds and empirically motivated measures. However, most papers proposing such measures only study a small set of models, leaving open the question of whether these measures are truly useful in practice. We present the first large-scale study of generalization bounds and measures in deep networks. We train over two thousand CIFAR-10 networks with systematic changes in important hyper-parameters. We attempt to uncover potential causal relationships between each measure and generalization, by using the rank correlation coefficient and its modified forms.
We analyze the results and show that some of the studied measures are very promising for further research.""","""This paper provides a valuable survey, summary, and empirical comparison of many generalization quantities from throughout the literature. It is comprehensive, thorough, and will be useful to a variety of researchers (both theoretical and applied).""" 329,"""Combining graph and sequence information to learn protein representations""","['NLP', 'Protein', 'Representation Learning']","""Computational methods that infer the function of proteins are key to understanding life at the molecular level. In recent years, representation learning has emerged as a powerful paradigm to discover new patterns among entities as varied as images, words, speech, and molecules. In typical representation learning, there is only one source of data or one level of abstraction at which the learned representation occurs. However, proteins can be described by their primary, secondary, tertiary, and quaternary structure or even as nodes in protein-protein interaction networks. Given that protein function is an emergent property of all these levels of interaction, in this work we learn joint representations from both amino acid sequences and multilayer networks representing tissue-specific protein-protein interactions. Using these representations, we train machine learning models that outperform existing methods on the task of tissue-specific protein function prediction on 10 out of 13 tissues. Furthermore, we outperform existing methods by 19% on average.""","""The paper presents a linear classifier based on a concatenation of two types of features for protein function prediction. The two features are constructed using methods from previous papers, based on peptide sequence and protein-protein interactions. All the reviewers agree that the problem is an important one, but the paper as presented provides no methodological advance and only weak empirical evidence of better protein function prediction. Therefore the paper would require a major revision before being suitable for ICLR. """ 330,"""Provably Communication-efficient Data-parallel SGD via Nonuniform Quantization""",[],"""As the size and complexity of models and datasets grow, so does the need for communication-efficient variants of stochastic gradient descent that can be deployed on clusters to perform model fitting in parallel. Alistarh et al. (2017) describe two variants of data-parallel SGD that quantize and encode gradients to lessen communication costs. For the first variant, QSGD, they provide strong theoretical guarantees. For the second variant, which we call QSGDinf, they demonstrate impressive empirical gains for distributed training of large neural networks. Building on their work, we propose an alternative scheme for quantizing gradients and show that it yields stronger theoretical guarantees than exist for QSGD while matching the empirical performance of QSGDinf.""","""This paper proposes a communication-efficient data-parallel SGD with quantization. The method bridges the gap between theory and practice. The QSGD method has theoretical guarantees while QSGDinf doesn't, but the latter gives better results. This paper proves stronger results for QSGD using a different quantization scheme which matches the performance of QSGDinf. The reviewers find issues with the approach and have pointed some of them out. During the discussion period, we did discuss whether reviewers would like to raise their scores.
Unfortunately, they still have unresolved issues (see R1's comment). R1 made another comment recently that they were unable to add to their review: ""The proposed algorithm and the theoretical analysis does not include momentum. However, in the experiments, it is clearly stated that momentum (with a factor of 0.9) is used. Thus, it is unclear whether the experiments really validate the theoretical guarantees. And, it is also unclear how momentum is added for both NUQSGD and EF-SGD, since momentum is not mentioned in Algorithm 1 in this paper, or the paper of QSGD, or the paper of EF-SignSGD. (There is a version of SignSGD with momentum *without* error feedback, called SIGNUM)."" With the current score, the paper does not make the cut for ICLR, but I encourage the authors to revise the paper based on the reviewers' feedback. For now, I recommend rejecting this paper.""" 331,"""Training Deep Networks with Stochastic Gradient Normalized by Layerwise Adaptive Second Moments""","['deep learning', 'optimization', 'SGD', 'Adam', 'NovoGrad', 'large batch training']","""We propose NovoGrad, an adaptive stochastic gradient descent method with layer-wise gradient normalization and decoupled weight decay. In our experiments on neural networks for image classification, speech recognition, machine translation, and language modeling, it performs on par with or better than well-tuned SGD with momentum and Adam/AdamW. Additionally, NovoGrad (1) is robust to the choice of learning rate and weight initialization, (2) works well in a large batch setting, and (3) has a two times smaller memory footprint than Adam.""","""The paper presented an adaptive stochastic gradient descent method with layer-wise normalization and decoupled weight decay and justified it on a variety of tasks. The main concern for this paper is that the novelty is not sufficient. The method is a combination of LARS and AdamW with slight modifications. Although the paper has good empirical evaluations, a theoretical convergence proof would make the paper more convincing. """ 332,"""Adaptive Online Planning for Continual Lifelong Learning""","['reinforcement learning', 'model predictive control', 'planning', 'model based', 'model free', 'uncertainty', 'computation']","""We study learning control in an online lifelong learning scenario, where mistakes can compound catastrophically into the future and the underlying dynamics of the environment may change. Traditional model-free policy learning methods have achieved successes in difficult tasks due to their broad flexibility, and capably condense broad experiences into compact networks, but struggle in this setting, as they can activate failure modes early in their lifetimes which are difficult to recover from and face performance degradation as dynamics change. On the other hand, model-based planning methods learn and adapt quickly, but require prohibitive levels of computational resources. Under constrained computation limits, the agent must allocate its resources wisely, which requires the agent to understand both its own performance and the current state of the environment: knowing that its mastery over control in the current dynamics is poor, the agent should dedicate more time to planning. We present a new algorithm, Adaptive Online Planning (AOP), that achieves strong performance in this setting by combining model-based planning with model-free learning.
By measuring the performance of the planner and the uncertainty of the model-free components, AOP is able to call upon more extensive planning only when necessary, leading to reduced computation times. We show that AOP gracefully deals with novel situations, adapting behaviors and policies effectively in the face of unpredictable changes in the world -- challenges that a continual learning agent naturally faces over an extended lifetime -- even when traditional reinforcement learning methods fail.""","""A new setting for lifelong learning is analyzed and a new method, AOP, is introduced, which combines a model-free with a model-based approach to deal with this setting. While the idea is interesting, the main claims are insufficiently demonstrated. A theoretical justification is missing, and the experiments alone are not rigorous enough to draw strong conclusions. The three environments are rather simplistic and there are concerns about the statistical significance, for at least some of the experiments.""" 333,"""Reparameterized Variational Divergence Minimization for Stable Imitation""","['Imitation Learning', 'Reinforcement Learning', 'Adversarial Learning', 'Learning from Demonstration']","""State-of-the-art results in imitation learning are currently held by adversarial methods that iteratively estimate the divergence between student and expert policies and then minimize this divergence to bring the imitation policy closer to expert behavior. Analogous techniques for imitation learning from observations alone (without expert action labels), however, have not enjoyed the same ubiquitous successes. Recent work in adversarial methods for generative models has shown that the measure used to judge the discrepancy between real and synthetic samples is an algorithmic design choice, and that different choices can result in significant differences in model performance. Choices including the Wasserstein distance and various f-divergences have already been explored in the adversarial networks literature, while more recently the latter class has been investigated for imitation learning. Unfortunately, we find that in practice this existing imitation-learning framework for using f-divergences suffers from numerical instabilities stemming from the combination of function approximation and policy-gradient reinforcement learning. In this work, we alleviate these challenges and offer a reparameterization of adversarial imitation learning as f-divergence minimization before further extending the framework to handle the problem of imitation from observations only. Empirically, we demonstrate that our design choices for coupling imitation learning and f-divergences are critical to recovering successful imitation policies. Moreover, we find that with the appropriate choice of f-divergence, we can obtain imitation-from-observation algorithms that outperform baseline approaches and more closely match expert performance in continuous-control tasks with low-dimensional observation spaces. With high-dimensional observations, we still observe a significant gap with and without action labels, offering an interesting avenue for future work.""","""The submission performs empirical analysis on f-VIM (Ke, 2019), a method for imitation learning by f-divergence minimization. The paper especially focuses on a state-only formulation akin to GAILfO (Torabi et al., 2018b).
The main contributions are: 1) The paper identifies numerical problems with the output activations of f-VIM and suggests a scheme to choose them such that the resulting rewards are bounded. 2) A regularizer that was proposed by Mescheder et al. (2018) for GANs is tested in the adversarial imitation learning setting. 3) In order to handle state-only demonstrations, the technique of GAILfO is applied to f-VIM (then denoted f-VIMO), which inputs state/next-state pairs instead of state-action pairs to the discriminator. The reviewers found the submitted paper hard to follow, which suggests a revision might make the authors' contributions more apparent in later submissions of this work. """ 334,"""Sparsity Meets Robustness: Channel Pruning for the Feynman-Kac Formalism Principled Robust Deep Neural Nets""","['Sparse Network', 'Model Compression', 'Adversarial Training']","""Deep neural net (DNN) compression is crucial for adaptation to mobile devices. Though many successful algorithms exist to compress naturally trained DNNs, developing efficient and stable compression algorithms for robustly trained DNNs remains widely open. In this paper, we focus on a co-design of efficient DNN compression algorithms and sparse neural architectures for robust and accurate deep learning. Such a co-design enables us to advance the goal of accommodating both sparsity and robustness. With this objective in mind, we leverage the relaxed augmented Lagrangian-based algorithms to prune the weights of adversarially trained DNNs, at both structured and unstructured levels. Using Feynman-Kac formalism principled robust and sparse DNNs, we can at least double the channel sparsity of the adversarially trained ResNet20 for CIFAR10 classification while improving the natural accuracy by 8.69\% and the robust accuracy under the benchmark of 20 iterations of the IFGSM attack by 5.42\%.""","""The paper is rejected based on unanimous reviews.""" 335,"""Ternary MobileNets via Per-Layer Hybrid Filter Banks""","['Model compression', 'ternary quantization', 'energy-efficient models']","""The MobileNets family of computer vision neural networks has fueled tremendous progress in the design and organization of resource-efficient architectures in recent years. New applications with stringent real-time requirements in highly constrained devices require further compression of MobileNets-like, already compute-efficient networks. Model quantization is a widely used technique to compress and accelerate neural network inference, and prior works have quantized MobileNets to 4-6 bits, albeit with a modest to significant drop in accuracy. While quantization to sub-byte values (i.e., precision below 8 bits) has been valuable, even further quantization of MobileNets to binary or ternary values is necessary to realize significant energy savings and possibly runtime speedups on specialized hardware, such as ASICs and FPGAs. Under the key observation that convolutional filters at each layer of a deep neural network may respond differently to ternary quantization, we propose a novel quantization method that generates per-layer hybrid filter banks consisting of full-precision and ternary weight filters for MobileNets. The layer-wise hybrid filter banks essentially combine the strengths of full-precision and ternary weight filters to derive a compact, energy-efficient architecture for MobileNets.
Using this proposed quantization method, we quantized a substantial portion of the weight filters of MobileNets to ternary values, resulting in 27.98% savings in energy and a 51.07% reduction in model size, while achieving comparable accuracy and no degradation in throughput on specialized hardware in comparison to the baseline full-precision MobileNets.""","""The paper presents a quantization method that generates per-layer hybrid filter banks consisting of full-precision and ternary weight filters for MobileNets. The paper is well-written. However, it is incremental. Moreover, the empirical results are not convincing enough. Experiments are only performed on ImageNet. Comparisons on more datasets and more model architectures should be performed.""" 336,"""Neural Communication Systems with Bandwidth-limited Channel""","['variational inference', 'joint coding', 'bandwidth-limited channel', 'deep learning', 'representation learning', 'compression']","""Reliably transmitting messages despite information loss due to a noisy channel is a core problem of information theory. One of the most important aspects of real world communication is that it may happen at varying levels of information transfer. The bandwidth-limited channel models this phenomenon. In this study we consider learning joint coding with the bandwidth-limited channel. Although classical results suggest that it is asymptotically optimal to separate the sub-tasks of compression (source coding) and error correction (channel coding), it is well known that for finite block-length problems, and when there are restrictions to the computational complexity of coding, this optimality may not be achieved. Thus, we empirically compare the performance of joint and separate systems, and conclude that joint systems outperform their separate counterparts when coding is performed by flexible learnable function approximators such as neural networks. Specifically, we cast the joint communication problem as a variational learning problem. To facilitate this, we introduce a differentiable and computationally efficient version of this channel. We show that our design compensates for the loss of information by two mechanisms: (i) missing information is modelled by a prior model incorporated in the channel model, and (ii) sampling from the joint model is improved by auxiliary latent variables in the decoder. Experimental results justify the validity of our design decisions through improved distortion and FID scores.""","""There was some support for this paper, but it was on the borderline and significant concerns were raised. It did not compare to the existing related literature on communications, compression, and coding. There were significant issues with clarity.""" 337,"""The Intriguing Effects of Focal Loss on the Calibration of Deep Neural Networks""",[],"""Miscalibration -- a mismatch between a model's confidence and its correctness -- of Deep Neural Networks (DNNs) makes their predictions hard for downstream components to trust. Ideally, we want networks to be accurate, calibrated and confident. Temperature scaling, the most popular calibration approach, will calibrate a DNN without affecting its accuracy, but it will also make its correct predictions under-confident. In this paper, we show that replacing the widely used cross-entropy loss with focal loss allows us to learn models that are already very well calibrated.
When combined with temperature scaling, focal loss, whilst preserving accuracy and yielding state-of-the-art calibrated models, also preserves the confidence of the model's correct predictions, which is extremely desirable for downstream tasks. We provide a thorough analysis of the factors causing miscalibration, and use the insights we glean from this to theoretically justify the empirically excellent performance of focal loss. We perform extensive experiments on a variety of computer vision (CIFAR-10/100) and NLP (SST, 20 Newsgroup) datasets, and with a wide variety of different network architectures, and show that our approach achieves state-of-the-art accuracy and calibration in almost all cases.""","""The paper investigates the effect of focal loss on the calibration of neural nets. On one hand, the reviewers agree that this paper is well-written and the empirical results are interesting. On the other hand, the reviewers felt that there could be better evaluation of the effect of calibration on downstream tasks, and better justification for the choice of optimal gamma (e.g. on a simpler problem setup). I encourage the authors to revise the draft and resubmit to a different venue. """ 338,"""High performance RNNs with spiking neurons""","['RNNs', 'Spiking neurons', 'Neuromorphics']",""" The increasing need for compact and low-power computing solutions for machine learning applications has triggered a renaissance in the study of energy-efficient neural network accelerators. In particular, in-memory computing neuromorphic architectures have started to receive substantial attention from both academia and industry. However, most of these architectures rely on spiking neural networks, which typically perform poorly compared to their non-spiking counterparts in terms of accuracy. In this paper, we propose a new adaptive spiking neuron model that can also be abstracted as a low-pass filter. This abstraction enables faster and better training of spiking networks using back-propagation, without simulating spikes. We show that this model dramatically improves the inference performance of a recurrent neural network and validate it with three complex spatio-temporal learning tasks: the temporal addition task, the temporal copying task, and a spoken-phrase recognition task. Application of these results will lead to the development of powerful spiking models for neuromorphic hardware that solve relevant edge-computing and Internet-of-Things applications with high accuracy and ultra-low power consumption.""","""This paper presents a new mechanism to train spiking neural networks that is more suitable for neuromorphic chips. While the text is well written and the experiments provide an interesting analysis, the relevance of the proposed neuron models to the ICLR/ML community seems small at this point. My recommendation is that this paper should be submitted to a more specialised conference/workshop dedicated to hardware methods.""" 339,"""DASGrad: Double Adaptive Stochastic Gradient""","['stochastic convex optimization', 'adaptivity', 'online learning', 'transfer learning']","""Adaptive moment methods have been remarkably successful for optimization in the presence of high-dimensional or sparse gradients. In parallel, adaptive sampling probabilities for SGD have allowed optimizers to improve convergence rates by prioritizing examples to learn efficiently.
Numerous applications in the past have implicitly combined adaptive moment methods with adaptive probabilities, yet the theoretical guarantees of such procedures have not been explored. We formalize double adaptive stochastic gradient methods (DASGrad) as an optimization technique and analyze their convergence improvements in a stochastic convex optimization setting; we provide empirical validation of our findings with convex and non-convex objectives. We observe that the benefits of the method increase with the model complexity and variability of the gradients, and we explore the resulting utility in extensions to transfer learning. ""","""The reviewers were confused by several elements of the paper, as mentioned in their reviews, and, despite the authors' rebuttal, still have several areas of concern. I encourage you to read the reviews carefully and address the reviewers' concerns for a future submission.""" 340,"""Physics-as-Inverse-Graphics: Unsupervised Physical Parameter Estimation from Video""",[],"""We propose a model that is able to perform physical parameter estimation of systems from video, where the differential equations governing the scene dynamics are known, but labeled states or objects are not available. Existing physical scene understanding methods require either object state supervision, or do not integrate with differentiable physics to learn interpretable system parameters and states. We address this problem through a \textit{physics-as-inverse-graphics} approach that brings together vision-as-inverse-graphics and differentiable physics engines, where objects and explicit state and velocity representations are discovered by the model. This framework allows us to perform long term extrapolative video prediction, as well as vision-based model-predictive control. Our approach significantly outperforms related unsupervised methods in long-term future frame prediction of systems with interacting objects (such as ball-spring or 3-body gravitational systems), due to its ability to build dynamics into the model as an inductive bias. We further show the value of this tight vision-physics integration by demonstrating data-efficient learning of vision-actuated model-based control for a pendulum system. We also show that the controller's interpretability provides unique capabilities in goal-driven control and physical reasoning for zero-data adaptation.""","""The submission presents an approach to estimating physical parameters from video. The approach is sensible and is presented fairly well. The main criticism is that the approach is only demonstrated in simplistic ""toy"" settings. Nevertheless, the reviewers recommend (weakly) accepting the paper and the AC concurs.""" 341,"""Dynamics-Aware Embeddings""","['representation learning', 'reinforcement learning', 'rl']","""In this paper we consider self-supervised representation learning to improve sample efficiency in reinforcement learning (RL). We propose a forward prediction objective for simultaneously learning embeddings of states and actions. These embeddings capture the structure of the environment's dynamics, enabling efficient policy learning. We demonstrate that our action embeddings alone improve the sample efficiency and peak performance of model-free RL on control from low-dimensional states.
By combining state and action embeddings, we achieve efficient learning of high-quality policies on goal-conditioned continuous control from pixel observations in only 1-2 million environment steps.""","""This paper studies how self-supervised objectives can improve representations for efficient RL. The reviewers are generally in agreement that the method is interesting, the paper is well-written, and the results are convincing. The paper should be accepted.""" 342,"""Sampling-Free Learning of Bayesian Quantized Neural Networks""","['Bayesian neural networks', 'Quantized neural networks']","""Bayesian learning of model parameters in neural networks is important in scenarios where estimates with well-calibrated uncertainty are important. In this paper, we propose Bayesian quantized networks (BQNs), quantized neural networks (QNNs) for which we learn a posterior distribution over their discrete parameters. We provide a set of efficient algorithms for learning and prediction in BQNs without the need to sample from their parameters or activations, which not only allows for differentiable learning in quantized models but also reduces the variance in gradient estimation. We evaluate BQNs on the MNIST, Fashion-MNIST and KMNIST classification datasets, compared against a bootstrap ensemble of QNNs (E-QNN). We demonstrate that BQNs achieve both lower predictive errors and better-calibrated uncertainties than E-QNN (with less than 20% of the negative log-likelihood).""","""This paper proposes Bayesian quantized networks and efficient algorithms for learning and prediction of these networks. The reviewers generally thought that this was a novel and interesting paper. There were a few concerns about the clarity of parts of the paper and the experimental results. These concerns were addressed during the discussion phase, and the reviewers agree that the paper should be accepted.""" 343,"""Attack-Resistant Federated Learning with Residual-based Reweighting""","['robust federated learning', 'backdoor attacks']","""Federated learning has a variety of applications in multiple domains by utilizing private training data stored on different devices. However, the aggregation process in federated learning is highly vulnerable to adversarial attacks, so the global model may behave abnormally under attacks. To tackle this challenge, we present a novel aggregation algorithm with residual-based reweighting to defend federated learning. Our aggregation algorithm combines repeated median regression with the reweighting scheme in iteratively reweighted least squares. Our experiments show that our aggregation algorithm outperforms other alternative algorithms in the presence of label-flipping, backdoor, and Gaussian noise attacks. We also provide theoretical guarantees for our aggregation algorithm. ""","""The paper proposes an aggregation algorithm for federated learning that is robust against label-flipping, backdoor, and Gaussian noise attacks. The reviewers agree that the paper presents an interesting and novel method; however, the reviewers also agree that the theory was difficult to understand and that the success of the methodology may be highly dependent on design choices and difficult-to-tune hyperparameters.
""" 344,"""From Inference to Generation: End-to-end Fully Self-supervised Generation of Human Face from Speech""","['Multi-modal learning', 'Self-supervised learning', 'Voice profiling', 'Conditional GANs']","""This work seeks the possibility of generating the human face from voice solely based on the audio-visual data without any human-labeled annotations. To this end, we propose a multi-modal learning framework that links the inference stage and generation stage. First, the inference networks are trained to match the speaker identity between the two different modalities. Then the pre-trained inference networks cooperate with the generation network by giving conditional information about the voice. The proposed method exploits the recent development of GANs techniques and generates the human face directly from the speech waveform making our system fully end-to-end. We analyze the extent to which the network can naturally disentangle two latent factors that contribute to the generation of a face image one that comes directly from a speech signal and the other that is not related to it and explore whether the network can learn to generate natural human face image distribution by modeling these factors. Experimental results show that the proposed network can not only match the relationship between the human face and speech, but can also generate the high-quality human face sample conditioned on its speech. Finally, the correlation between the generated face and the corresponding speech is quantitatively measured to analyze the relationship between the two modalities.""","""The authors propose a conditional GAN-based approach for generating faces consistent with given input speech. The technical novelty is not large, as the approach is mainly putting together existing ideas, but the application is a fairly new one and the experiments and results are convincing. The approach might also have broader applicability beyond this task.""" 345,"""SoftAdam: Unifying SGD and Adam for better stochastic gradient descent""","['Optimization', 'SGD', 'Adam', 'Generalization', 'Deep Learning']","""Abstract Stochastic gradient descent (SGD) and Adam are commonly used to optimize deep neural networks, but choosing one usually means making tradeoffs between speed, accuracy and stability. Here we present an intuition for why the tradeoffs exist as well as a method for unifying the two in a continuous way. This makes it possible to control the way models are trained in much greater detail. We show that for default parameters, the new algorithm equals or outperforms SGD and Adam across a range of models for image classification tasks and outperforms SGD for language modeling tasks.""","""The reviewers all agreed that the proposed modification was minor. I encourage the authors to pursue in this direction, as they mentioned in their rebuttal, before resubmitting to another conference.""" 346,"""Learning Robust Representations via Multi-View Information Bottleneck""","['Information Bottleneck', 'Multi-View Learning', 'Representation Learning', 'Information Theory']","""The information bottleneck principle provides an information-theoretic method for representation learning, by training an encoder to retain all information which is relevant for predicting the label while minimizing the amount of other, excess information in the representation. The original formulation, however, requires labeled data to identify the superfluous information. 
In this work, we extend this ability to the multi-view unsupervised setting, where two views of the same underlying entity are provided but the label is unknown. This enables us to identify superfluous information as that which is not shared by both views. A theoretical analysis leads to the definition of a new multi-view model that produces state-of-the-art results on the Sketchy dataset and label-limited versions of the MIR-Flickr dataset. We also extend our theory to the single-view setting by taking advantage of standard data augmentation techniques, empirically showing better generalization capabilities when compared to common unsupervised approaches for representation learning.""","""This paper extends the information bottleneck method to unsupervised representation learning under the multi-view assumption. The work couples the multi-view InfoMax principle with the information bottleneck principle to derive an objective which encourages the representations to contain only the information shared by both views and thus eliminate the effect of independent factors of variation. Recent advances in estimating lower bounds on mutual information are applied to perform approximate optimisation in practice. The authors empirically validate the proposed approach in two standard multi-view settings. Overall, the reviewers found the presentation clear, and the paper well written and well motivated. The issues raised by the reviewers were addressed in the rebuttal and we feel that the work is well suited for ICLR. We ask the authors to carefully integrate the detailed comments from the reviewers into the manuscript. Finally, the work should investigate and briefly establish a connection to [1]. [1] Wang et al. ""Deep Multi-view Information Bottleneck"". International Conference on Data Mining 2019 (pseudo-url)""" 347,"""Adapting Behaviour for Learning Progress""","['adaptation', 'behaviour', 'reinforcement learning', 'modulated behaviour', 'exploration', 'deep reinforcement learning']","""Determining what experience to generate to best facilitate learning (i.e. exploration) is one of the distinguishing features and open challenges in reinforcement learning. The advent of distributed agents that interact with parallel instances of the environment has enabled larger scale and greater flexibility, but has not removed the need to tune or tailor exploration to the task, because the ideal data for the learning algorithm necessarily depends on its process of learning. We propose to dynamically adapt the data generation by using a non-stationary multi-armed bandit to optimize a proxy of the learning progress. The data distribution is controlled via modulating multiple parameters of the policy (such as stochasticity, consistency or optimism) without significant overhead. The adaptation speed of the bandit can be increased by exploiting the factored modulation structure. We demonstrate on a suite of Atari 2600 games how this unified approach produces results comparable to per-task tuning at a fraction of the cost.""","""The paper introduces a non-stationary bandit strategy for adapting the exploration rate in deep RL algorithms. The authors consider exploration algorithms with a tunable parameter (e.g. the epsilon probability in epsilon-greedy) and attempt to adjust this parameter in an online fashion using a proxy for the learning progress. The proposed approach is empirically compared with using fixed exploration parameters and with adjusting the parameter using a bandit strategy that doesn't model the learning process.
Unfortunately, the proposed approach is not theoretically grounded, and the experiments lack comparisons with strong baselines that would make them convincing. A comparison with other, provably efficient, non-stationary bandit algorithms such as exponential weight methods (Besbes et al 2014) or Thompson sampling (Raj & Kalyani 2017), which are cited in the paper, is missing. Moreover, given the whole set of results and how they are presented, the improvement due to the proposed method is not clear. In light of these concerns I recommend rejecting this paper.""" 348,"""On PAC-Bayes Bounds for Deep Neural Networks using the Loss Curvature""","['PAC-Bayes', 'Hessian', 'curvature', 'lower bound', 'Variational Inference']","""We investigate whether it's possible to tighten PAC-Bayes bounds for deep neural networks by utilizing the Hessian of the training loss at the minimum. For the case of Gaussian priors and posteriors we introduce a Hessian-based method to obtain tighter PAC-Bayes bounds that relies on closed-form solutions of layerwise subproblems. We thus avoid commonly used variational inference techniques which can be difficult to implement and time consuming for modern deep architectures. We conduct a theoretical analysis that links the random initialization, minimum, and curvature at the minimum of a deep neural network to limits on what is provable about generalization through PAC-Bayes. Through careful experiments we validate our theoretical predictions and analyze the influence of the prior mean, prior covariance, posterior mean and posterior covariance on obtaining tighter bounds. ""","""The paper computes an ""approximate"" generalization bound based on loss curvature. Several expert reviewers found a long list of issues, including missing related work and a sloppy mix of formal statements and heuristics, without proper accounting of what could be gleaned from so many heuristic steps. Ultimately, the paper needs to be rewritten and re-reviewed. """ 349,"""Encoder-Agnostic Adaptation for Conditional Language Generation""","['NLP', 'generation', 'pretraining']","""Large pretrained language models have changed the way researchers approach discriminative natural language understanding tasks, leading to the dominance of approaches that adapt a pretrained model for arbitrary downstream tasks. However, it is an open question how to use similar techniques for language generation. Early results in the encoder-agnostic setting have been mostly negative. In this work, we explore methods for adapting a pretrained language model to arbitrary conditional input. We observe that pretrained transformer models are sensitive to large parameter changes during tuning. Therefore, we propose an adaptation that directly injects arbitrary conditioning into self attention, an approach we call pseudo self attention. Through experiments on four diverse conditional text generation tasks, we show that this encoder-agnostic technique outperforms strong baselines, produces coherent generations, and is data-efficient.""","""This paper proposes a method to use a pretrained language model for language generation with arbitrary conditional input (images, text). The main idea, which is called pseudo self-attention, is to incorporate the conditioning input as a pseudo history for a pretrained transformer. Experiments on class-conditional generation, summarization, story generation, and image captioning show the benefit of the proposed approach.
While I think that the proposed approach makes sense, especially for generation from multiple modalities, it would be useful to see the following comparison in the case of conditional generation from one modality (i.e., text-to-text, such as in summarization and story generation). How does the proposed approach compare to a method that simply concatenates the input and output? In Figure 1(c), this would mean having the encoder part be pretrained as well, as opposed to randomly initialized, which is possible if the input is also text. I believe this is what R2 is suggesting as well when they mention a GPT-2 style model, and I agree this is an important baseline. This is a borderline paper. However, due to space constraints and the above issues, I recommend rejecting the paper.""" 350,"""Multigrid Neural Memory""","['multigrid architecture', 'memory network', 'convolutional neural network']","""We introduce a novel architecture that integrates a large addressable memory space into the core functionality of a deep neural network. Our design distributes both memory addressing operations and storage capacity over many network layers. Distinct from strategies that connect neural networks to external memory banks, our approach co-locates memory with computation throughout the network structure. Mirroring recent architectural innovations in convolutional networks, we organize memory into a multiresolution hierarchy, whose internal connectivity enables learning of dynamic information routing strategies and data-dependent read/write operations. This multigrid spatial layout permits parameter-efficient scaling of memory size, allowing us to experiment with memories substantially larger than those in prior work. We demonstrate this capability on synthetic exploration and mapping tasks, where the network is able to self-organize and retain long-term memory for trajectories of thousands of time steps. On tasks decoupled from any notion of spatial geometry, such as sorting or associative recall, our design functions as a truly generic memory and yields results competitive with those of the recently proposed Differentiable Neural Computer.""","""This paper investigates convolutional LSTMs with a multi-grid structure. The idea in itself has very little innovation, and the experimental results are not entirely convincing.""" 351,"""Estimating counterfactual treatment outcomes over time through adversarially balanced representations""","['treatment effects over time', 'causal inference', 'counterfactual estimation']","""Identifying when to give treatments to patients and how to select among multiple treatments over time are important medical problems with few existing solutions. In this paper, we introduce the Counterfactual Recurrent Network (CRN), a novel sequence-to-sequence model that leverages the increasingly available patient observational data to estimate treatment effects over time and answer such medical questions. To handle the bias from time-varying confounders, covariates affecting the treatment assignment policy in the observational data, CRN uses domain adversarial training to build balancing representations of the patient history. At each timestep, CRN constructs a treatment-invariant representation which removes the association between patient history and treatment assignments and thus can be reliably used for making counterfactual predictions.
On a simulated model of tumour growth, with varying degrees of time-dependent confounding, we show how our model achieves lower error in estimating counterfactuals and in choosing the correct treatment and timing of treatment than current state-of-the-art methods.""","""Reviewers uniformly suggest acceptance. Please look carefully at reviewer comments and address them in the camera-ready. Great work!""" 352,"""Flexible and Efficient Long-Range Planning Through Curious Exploration""","['Curiosity', 'Planning', 'Reinforcement Learning', 'Robotics', 'Exploration']","""Identifying algorithms that flexibly and efficiently discover temporally-extended multi-phase plans is an essential next step for the advancement of robotics and model-based reinforcement learning. The core problem of long-range planning is finding an efficient way to search through the tree of possible action sequences which, if left unchecked, grows exponentially with the length of the plan. Existing non-learned planning solutions from the Task and Motion Planning (TAMP) literature rely on the existence of logical descriptions of the effects and preconditions of actions. This constraint allows TAMP methods to efficiently reduce the tree search problem but limits their ability to generalize to unseen and complex physical environments. In contrast, deep reinforcement learning (DRL) methods use flexible neural-network-based function approximators to discover policies that generalize naturally to unseen circumstances. However, DRL methods have had trouble dealing with the very sparse reward landscapes inherent to long-range multi-step planning situations. Here, we propose the Curious Sample Planner (CSP), which fuses elements of TAMP and DRL by using a curiosity-guided sampling strategy to learn to efficiently explore the tree of action effects. We show that CSP can efficiently discover interesting and complex temporally-extended plans for solving a wide range of physically realistic 3D tasks. In contrast, standard DRL and random sampling methods often fail to solve these tasks at all or do so only with a huge and highly variable number of training samples. We explore the use of a variety of curiosity metrics with CSP and analyze the types of solutions that CSP discovers. Finally, we show that CSP supports task transfer so that the exploration policies learned during experience with one task can help improve efficiency on related tasks.""","""The authors consider planning problems with sparse rewards. They propose an algorithm that performs planning based on an auxiliary reward given by a curiosity score. They test their approach on a range of tasks in simulated robotics environments and compare to model-free baselines. The reviewers mainly criticize the lack of competitive baselines; it comes as no surprise that the baselines presented in the paper do not perform well, as they make use of strictly less information about the problem. The authors were very active in the rebuttal period; however, they eventually did not fully manage to address the points raised by the reviewers. Although the paper proposes an interesting approach, I think this paper is below the acceptance threshold.
The experimental results lack baselines. Furthermore, critical details of the algorithm are missing or hard to find.""" 353,"""Mixture-of-Experts Variational Autoencoder for clustering and generating from similarity-based representations""","['Variational Autoencoder', 'Clustering', 'Generative model']","""Clustering high-dimensional data, such as images or biological measurements, is a long-standing problem and has been studied extensively. Recently, Deep Clustering gained popularity due to the non-linearity of neural networks, which allows for flexibility in fitting the specific peculiarities of complex data. Here we introduce the Mixture-of-Experts Similarity Variational Autoencoder (MoE-Sim-VAE), a novel generative clustering model. The model can learn multi-modal distributions of high-dimensional data and use these to generate realistic data with high efficacy and efficiency. MoE-Sim-VAE is based on a Variational Autoencoder (VAE), where the decoder consists of a Mixture-of-Experts (MoE) architecture. This specific architecture allows for various modes of the data to be automatically learned by means of the experts. Additionally, we encourage the latent representation of our model to follow a Gaussian mixture distribution and to accurately represent the similarities between the data points. We assess the performance of our model on synthetic data, the MNIST benchmark data set, and a challenging real-world task of defining cell subpopulations from mass cytometry (CyTOF) measurements on hundreds of different datasets. MoE-Sim-VAE exhibits superior clustering performance on all these tasks in comparison to the baselines and we show that the MoE architecture in the decoder reduces the computational cost of sampling specific data modes with high fidelity.""","""The paper proposes a VAE with a mixture-of-experts decoder for clustering and generation of high-dimensional data. Overall, the reviewers found the paper well written and structured, but in post-rebuttal discussion questioned the overall importance and interest of the work to the community. This is genuinely a borderline submission. However, the calibrated average score currently falls below the acceptance threshold, so I'm recommending rejection, while strongly encouraging the authors to continue the work, better motivate its importance, and resubmit.""" 354,"""On summarized validation curves and generalization""","['model selection', 'deep learning', 'early stopping', 'validation curves']","""The validation curve is widely used for model selection and hyper-parameter search, with the curve usually summarized over all the training tasks. However, this summarization tends to lose the intricacies of the per-task curves, and it cannot reflect whether all the tasks are at their validation optimum even if the summarized curve might be. In this work, we explore this loss of information, how it affects the model at testing, and how to detect it using interval plots. We propose two techniques as a proof-of-concept of the potential gain in test performance when per-task validation curves are accounted for. Our experiments on three large datasets show up to a 2.5% increase (averaged over multiple trials) in the test accuracy rate when model selection uses the per-task validation maximums instead of the summarized validation maximum. This potential increase is not a result of any modification to the model but rather of the point of training from which the weights were selected.
This presents an exciting direction for new training and model selection techniques that rely on more than just averaged metrics. ""","""The reviewers reached a consensus that the paper is preliminary and has a very limited contribution. Therefore, I cannot recommend acceptance at this time.""" 355,"""HaarPooling: Graph Pooling with Compressive Haar Basis""","['graph pooling', 'graph neural networks', 'tree', 'graph classification', 'graph regression', 'deep learning', 'Haar wavelet basis', 'fast Haar transforms']","""Deep Graph Neural Networks (GNNs) are instrumental in graph classification and graph-based regression tasks. In these tasks, graph pooling is a critical ingredient by which GNNs adapt to input graphs of varying size and structure. We propose a new graph pooling operation based on compressive Haar transforms, called HaarPooling. HaarPooling is computed following a chain of sequential clusterings of the input graph. The input of each pooling layer is transformed by the compressive Haar basis of the corresponding clustering. HaarPooling operates in the frequency domain by the synthesis of nodes in the same cluster and filters out fine detail information by compressive Haar transforms. Such transforms provide an effective characterization of the data and preserve the structure information of the input graph. By the sparsity of the Haar basis, the computation of HaarPooling is of linear complexity. The GNN with HaarPooling and existing graph convolution layers achieves state-of-the-art performance on diverse graph classification problems.""","""This paper presents a new graph pooling method, called HaarPooling. Based on the hierarchical HaarPooling, graph classification problems can be solved under the graph neural network framework. One major concern of the reviewers is the experimental design. The authors added a new real-world dataset in the revision. Another concern is computational performance: the main text did not give a comprehensive analysis, and the rebuttal did not fully address these problems. Overall, this paper presents an interesting graph pooling approach for graph classification, but the presentation needs further polish. Based on the reviewers' comments, I choose to reject the paper. """ 356,"""Kernel of CycleGAN as a principal homogeneous space""","['Generative models', 'CycleGAN']","""Unpaired image-to-image translation has attracted significant interest due to the invention of CycleGAN, a method which utilizes a combination of adversarial and cycle consistency losses to avoid the need for paired data. It is known that the CycleGAN problem might admit multiple solutions, and our goal in this paper is to analyze the space of exact solutions and to give perturbation bounds for approximate solutions. We show theoretically that the exact solution space is invariant with respect to automorphisms of the underlying probability spaces, and, furthermore, that the group of automorphisms acts freely and transitively on the space of exact solutions. We examine the case of zero pure CycleGAN loss first in its generality, and, subsequently, expand our analysis to approximate solutions for an extended CycleGAN loss where an identity loss term is included. In order to demonstrate that these results are applicable, we show that under mild conditions nontrivial smooth automorphisms exist. Furthermore, we provide empirical evidence that neural networks can learn these automorphisms with unexpected and unwanted results.
We conclude that finding optimal solutions to the CycleGAN loss does not necessarily lead to the envisioned result in image-to-image translation tasks and that underlying hidden symmetries can render the result useless.""","""This paper theoretically studies one of the fundamental issues in CycleGAN (which recently gained much attention for image-to-image translation). The authors analyze the space of exact and approximate solutions under automorphisms. Reviewers mostly agree on the theoretical value of the paper. Some concerns about practical value are also raised, e.g., limited or unsurprising experimental results. Overall, I think this is a borderline paper, but I lean toward acceptance, as the theoretical contribution is solid and potentially beneficial to many future works on unpaired image-to-image translation. """ 357,"""Unsupervised Intuitive Physics from Past Experiences""","['Intuitive physics', 'Deep learning']","""We consider the problem of learning models of intuitive physics from raw, unlabelled visual input. Differently from prior work, in addition to learning general physical principles, we are also interested in learning ``on the fly'' physical properties specific to new environments, based on a small number of environment-specific experiences. We do all this in an unsupervised manner, using a meta-learning formulation where the goal is to predict videos containing demonstrations of physical phenomena, such as objects moving and colliding with a complex background. We introduce the idea of summarizing past experiences in a very compact manner, in our case using dynamic images, and show that this can be used to solve the problem well and efficiently. Empirically, we show, via extensive experiments and ablation studies, that our model learns to perform physical predictions that generalize well in time and space, as well as to a variable number of interacting physical objects.""","""While the reviewers found the paper interesting, they all raised concerns about the fairly simple experimental settings, which makes it hard to appreciate the strengths of the proposed method. During the rebuttal phase, the reviewers still felt this weakness was not sufficiently addressed.""" 358,"""Combining MixMatch and Active Learning for Better Accuracy with Fewer Labels""","['active learning', 'semi-supervised learning']","""We propose using active learning based techniques to further improve the state-of-the-art semi-supervised learning MixMatch algorithm. We provide a thorough empirical evaluation of several active-learning and baseline methods, which successfully demonstrates a significant improvement on the benchmark CIFAR-10, CIFAR-100, and SVHN datasets (as much as 1.5% in absolute accuracy). We also provide an empirical analysis of the cost trade-off between incrementally gathering more labeled versus unlabeled data. This analysis can be used to measure the relative value of labeled/unlabeled data at different points of the learning curve, where we find that although the incremental value of labeled data can be as much as 20x that of unlabeled, it quickly diminishes to less than 3x once more than 2,000 labeled examples are observed.""","""This paper extends state-of-the-art semi-supervised learning techniques (i.e., MixMatch) to collect new data adaptively and studies the benefit of getting new labels versus adding more unlabeled data. Active learning is incorporated in a natural and simple (albeit unsurprising) way, and the experiments are convincing that this approach has merit.
While the approach works, reviewers were concerned about the novelty of the combination, given that it is somewhat obvious and straightforward to accomplish. Reviewers were also concerned that the space of both semi-supervised learning algorithms and active learning algorithms was not studied exhaustively enough. As one reviewer points out, neither of these ideas is new or particular to deep learning. Due to the lack of novelty, this paper is not suited for a top-tier conference. """ 359,"""Provable Filter Pruning for Efficient Neural Networks""","['theory', 'compression', 'filter pruning', 'neural networks']","""We present a provable, sampling-based approach for generating compact Convolutional Neural Networks (CNNs) by identifying and removing redundant filters from an over-parameterized network. Our algorithm uses a small batch of input data points to assign a saliency score to each filter and constructs an importance sampling distribution where filters that highly affect the output are sampled with correspondingly high probability. In contrast to existing filter pruning approaches, our method is simultaneously data-informed, exhibits provable guarantees on the size and performance of the pruned network, and is widely applicable to varying network architectures and data sets. Our analytical bounds bridge the notions of compressibility and importance of network structures, which gives rise to a fully-automated procedure for identifying and preserving filters in layers that are essential to the network's performance. Our experimental evaluations on popular architectures and data sets show that our algorithm consistently generates sparser and more efficient models than those constructed by existing filter pruning approaches. ""","""This paper presents a sampling-based approach for generating compact CNNs by pruning redundant filters. One advantage of the proposed method is a bound on the final pruning error. One of the major concerns during review was the experimental design: the original paper lacked results on real-world datasets like ImageNet. Furthermore, the presentation was a little misleading. The authors addressed most of these problems in the revision. Model compression and pruning is a very important field for real-world applications, hence I choose to accept the paper. """ 360,"""Skew-Explore: Learn faster in continuous spaces with sparse rewards""","['reinforcement learning', 'exploration', 'sparse reward']","""In many reinforcement learning settings, rewards which are extrinsically available to the learning agent are too sparse to train a suitable policy. Besides reward shaping, which requires human expertise, utilizing better exploration strategies helps to circumvent the problem of policy training with sparse rewards. In this work, we introduce an exploration approach based on maximizing the entropy of the visited states while learning a goal-conditioned policy. The main contribution of this work is to introduce a novel reward function which, combined with a goal proposing scheme, increases the entropy of the visited states faster compared to prior work. This improves the exploration capability of the agent and therefore enhances the agent's chance of solving sparse reward problems more efficiently. Our empirical studies demonstrate the superiority of the proposed method in solving different sparse reward problems in comparison to prior work.
""","""While the reviewers generally appreciated the ideas presented in the paper and found the overall aims and motivation of the paper to be compelling, there were too many questions raised about the experiments and the soundness of the technical formulation to accept the paper at this time, and the reviewers did not feel that the authors had adequately addressed these issues in their responses. The main concerns were (1) with the correctness and rigor of the technical derivation, which the reviewers generally found to be somewhat questionable -- while the main idea seems reasonable, the details have a few too many question marks; (2) the experimental results have a number of shortcomings that make it difficult to fully understand whether the method really works, and how well.""" 361,"""Exploratory Not Explanatory: Counterfactual Analysis of Saliency Maps for Deep Reinforcement Learning""","['explainability', 'saliency maps', 'representations', 'deep reinforcement learning']","""Saliency maps are frequently used to support explanations of the behavior of deep reinforcement learning (RL) agents. However, a review of how saliency maps are used in practice indicates that the derived explanations are often unfalsifiable and can be highly subjective. We introduce an empirical approach grounded in counterfactual reasoning to test the hypotheses generated from saliency maps and assess the degree to which they correspond to the semantics of RL environments. We use Atari games, a common benchmark for deep RL, to evaluate three types of saliency maps. Our results show the extent to which existing claims about Atari games can be evaluated and suggest that saliency maps are best viewed as an exploratory tool rather than an explanatory tool.""","""This was a contentious paper, with quite a large variance in the ratings, and ultimately a lack of consensus. After reading the paper myself, I found it to be a valuable synthesis of common usage of saliency maps and a critique of their improper interpretation. Further, the demonstration of more rigorous methods of evaluating agents based on salience maps using case studies is quite illustrative and compelling. I think we as a field can agree that wed like to gain better understanding our deep RL models. This is not possible if we dont have a good understanding of the analysis tools were using. R2 rightly pointed out a need for quantitative justification for their results, in the form of statistical tests, which the authors were able to provide, leading the reviewer to revise their score to the highest value of 8. I thank them for instigating the discussion. R1 continues to feel that the lack of a methodological contribution (in the form of improving learning within an agent) is a weakness. However, I dont believe that all papers at deep learning conferences have to have the goal of empirically learning better on some benchmark task or dataset, and that theres room at ICLR for more analysis papers. Indeed, itd be nice to see more papers like this. For this reason, Im inclined to recommend accept for this paper. However this paper does have weaknesses, in that the framework proposed could be made more rigorous and formal. Currently it seems rather adhoc and on a task-by-task basis (ie we need to have access to game states or define them ourselves for the task). Its also disappointing that it doesnt work for recurrent agents, which limits its applicability for analyzing current SOTA deep RL agents. 
I wonder if the authors can comment on possible extensions that would allow for this. """ 362,"""Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data""","['Generative models', 'generating synthetic data', 'neural architecture search', 'learning to teach', 'meta-learning']","""This paper investigates the intriguing question of whether we can create learning algorithms that automatically generate training data, learning environments, and curricula in order to help AI agents rapidly learn. We show that such algorithms are possible via Generative Teaching Networks (GTNs), a general approach that is applicable to supervised, unsupervised, and reinforcement learning. GTNs are deep neural networks that generate data and/or training environments that a learner (e.g.\ a freshly initialized neural network) trains on before being tested on a target task. We then differentiate \emph{through the entire learning process} via meta-gradients to update the GTN parameters to improve performance on the target task. GTNs have the beneficial property that they can theoretically generate any type of data or training environment, making their potential impact large. This paper introduces GTNs, discusses their potential, and showcases that they can substantially accelerate learning. We also demonstrate a practical and exciting application of GTNs: accelerating the evaluation of candidate architectures for neural architecture search (NAS), which is rate-limited by such evaluations, enabling massive speed-ups in NAS. GTN-NAS improves the NAS state of the art, finding higher performing architectures when controlling for the search proposal mechanism. GTN-NAS is also competitive with the overall state-of-the-art approaches, which achieve top performance while using orders of magnitude less computation than typical NAS methods. Overall, GTNs represent a first step toward the ambitious goal of algorithms that generate their own training data and, in doing so, open a variety of interesting new research questions and directions.""","""Overview: This paper introduces a method to distill a large dataset into a smaller one that allows for faster training. The main application of this technique being studied is neural architecture search, which can be sped up by quickly evaluating architectures on the generated data rather than slowly evaluating them on the original data. Summary of discussion: During the discussion period, the authors appear to have updated the paper quite a bit, leading to the reviewers feeling more positive about it now than in the beginning. In particular, in the beginning it appears to have been unclear that the distillation is merely used as a speedup trick, not to generate additional information out of thin air. The reviewers' scores left the paper below the decision boundary, but close enough that I read it myself. My own judgement: I like the idea, which I find very novel. However, I have to push back on the authors' claims about their good performance in NAS, for several reasons: 1. In contrast to what is claimed by the authors, the comparison to graph hypernetworks (Zhang et al.) is not fair, since the authors used a different protocol: Zhang et al. sampled 800 networks and reported the performance (mean +/- std) of the 10 judged to be best by the hypernetwork. In contrast, the authors of the current paper sampled 1000 networks and reported the performance of the single one judged to be best.
They repeated this procedure 5 times to get mean +/- std. The best architecture out of 1000 is of course more likely to be strong than the average of the top 10 out of 800. 2. The comparison to random search with weight sharing (here: 3.92% error) does not appear fair. The paper cited in Table 1 is *not* the paper introducing random search + weight sharing, but the neural architecture optimization paper. The original one reported an error of 2.85% +/- 0.08% with 4.3M params. That paper also has the full source code available, so the authors could have performed a true apples-to-apples comparison. 3. The authors' method requires an additional (one-time) cost for actually creating the 'fake' training data, so their runtimes should be increased by the 8h required for that. 4. The fact that the authors achieve 2.42% error doesn't mean much; that result is just based on scaling the network up to 100M params. (The network obtained by random search also achieves 2.51%.) As it stands, I cannot judge whether the authors' approach yields strong performance for NAS. In order to allow that conclusion, the authors would have to compare to another method based on the same underlying code base and experimental protocol. Also, the authors do not make code available at this time. Their method has a lot of bells and whistles, and I do not expect that I could reproduce it. They promise code, but it is unclear what this would include: the generated training data, code for training the networks, code for the meta-approach, etc.? This would have been much easier to judge had the authors made the code available in anonymized fashion during the review. Because of these reasons, in terms of making progress on NAS, the paper does not quite clear the bar for me. The authors also evaluated their method in several other scenarios, including reinforcement learning. These results appear to be very promising, but largely preliminary due to lack of time in the rebuttal phase. Recommendation: The paper is very novel and the results appear very promising, but they are also somewhat preliminary. The reviewers' scores leave the paper just below the acceptance threshold, and my own borderline judgement is not positive enough to overrule this. I believe that some more time, and one more iteration of reorganization and review, would allow this paper to ripen into a very strong paper. For a resubmission to the next venue, I would recommend either performing an apples-to-apples comparison for NAS or reorganizing and just using NAS as one of several equally-weighted possible applications. In its current form, I believe the paper is not using its full potential.""" 363,"""Smooth Kernels Improve Adversarial Robustness and Perceptually-Aligned Gradients""","['adversarial robustness', 'computer vision', 'smoothness regularization']","""Recent research has shown that CNNs are often overly sensitive to high-frequency textural patterns. Inspired by the intuition that humans are more sensitive to lower-frequency (larger-scale) patterns, we design a regularization scheme that penalizes large differences between adjacent components within each convolutional kernel. We apply our regularization to several popular training methods, demonstrating that models with the proposed smooth kernels enjoy improved adversarial robustness. Further, building on recent work establishing connections between adversarial robustness and interpretability, we show that our method appears to give more perceptually-aligned gradients.
""","""The authors propose a regularized for convolutional kernels that seeks to improve adversarial robustness of CNNs and produce more perceptually aligned gradients. While the topic studied by the paper is interesting, reviewers pointed out several deficiencies with the empirical evaluation that call into question the validity of the claims made by the authors. In particular: 1) Adversarial evaluation protocol: There are several red flags in the way the authors perform adversarial evaluation. The authors use a pre-defined adversarial attack toolbox (Foolbox) but are unable to produce successful attacks even for large perturbation radii - this suggests that the attack is not tuned properly. Further, the authors present results over the best case performance over several attacks, which is dubious since the goal of adversarial evaluation is to reveal the worst case performance of the model. 2) Perceptual alignment: The claim of perceptually aligned gradients also does not seem sufficiently justified given the experimental results, since the improvement over the baseline is quite marginal. Here too, the authors report failure of a standard visualization technique that has been successfully used in prior work, calling into question the validity of these results. The authors did not participate in the rebuttal phase and the reviewers maintained their scores after the initial reviews. Overall, given the significant flaws in the empirical evaluation, I recommend that the paper be rejected. I encourage the authors to rerun their experiments following the feedback from reviewers 1 and 3 and resubmit the paper with a more careful empirical evaluation.""" 364,"""Filling the Soap Bubbles: Efficient Black-Box Adversarial Certification with Non-Gaussian Smoothing""","['Adversarial Certification', 'Randomized Smoothing', 'Functional Optimization']","""Randomized classifiers have been shown to provide a promising approach for achieving certified robustness against adversarial attacks in deep learning. However, most existing methods only leverage Gaussian smoothing noise and only work for pseudo-formula perturbation. We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks, from a unified functional optimization perspective. Our new framework allows us to identify a key trade-off between accuracy and robustness via designing smoothing distributions, helping to design two new families of non-Gaussian smoothing distributions that work more efficiently for pseudo-formula and pseudo-formula attacks, respectively. Our proposed methods achieve better results than previous works and provide a new perspective on randomized smoothing certification.""","""The authors extend the framework of randomized smoothing to handle non-Gaussian smoothing distribution and use this to show that they can construct smoothed models that perform well against l2 and linf adversarial attacks. They show that the resulting framework can obtain state-of-the-art certified robustness results improving upon prior work. While the paper contains several interesting ideas, the reviewers were concerned about several technical flaws and omissions from the paper: 1) A theorem on strong duality was incorrect in the initial version of the paper, though this was fixed in the rebuttal. However, the reasoning of the authors on the ""fundamental trade-off"" is specific to the particular framework they consider, and is not really a fundamental trade-off. 
2) The justification for the new family of distributions constructed by the authors is not very clear, and the experiments only show marginal improvements over prior work. Thus, the significance of this contribution is not clear. Some of the issues were clarified during the rebuttal, but the reviewers remained unconvinced on the above points. Thus, the paper cannot be accepted in its current form. """ 365,"""Learning to Rank Learning Curves""",[],"""Many automated machine learning methods, such as those for hyperparameter and neural architecture optimization, are computationally expensive because they involve training many different model configurations. In this work, we present a new method that saves computational budget by terminating poor configurations early on in the training. In contrast to existing methods, we consider this task as a ranking and transfer learning problem. We qualitatively show that by optimizing a pairwise ranking loss and leveraging learning curves from other data sets, our model is able to effectively rank learning curves without having to observe many or very long learning curves. We further demonstrate that our method can be used to accelerate a neural architecture search by a factor of up to 100 without a significant performance degradation of the discovered architecture. In further experiments we analyze the quality of ranking, the influence of different model components as well as the predictive behavior of the model.""","""The authors propose a new way of early stopping for neural architecture search. In contrast to extrapolating learning curves and then making keep-or-kill decisions between alternatives, this work learns a model directly on pairwise comparisons between learning curves. Reviewers were concerned with over-claiming of novelty, since the original version of this paper overlooked significant hyperparameter tuning work. In a revision, additional experiments were performed using some of the suggested methods, but reviewers remained skeptical that the empirical experiments provided enough justification that this work was ready for prime time. """ 366,"""Policy Optimization In the Face of Uncertainty""","['Reinforcement Learning', 'Model-based Reinforcement Learning']","""Model-based reinforcement learning has the potential to be more sample efficient than model-free approaches. However, existing model-based methods are vulnerable to model bias, which leads to poor generalization and asymptotic performance compared to model-free counterparts. In this paper, we propose a novel policy optimization framework using an uncertainty-aware objective function to handle those issues. In this framework, the agent simultaneously learns an uncertainty-aware dynamics model and optimizes the policy according to these learned models. Under this framework, the objective function can be represented end-to-end as a single computational graph, which allows seamless policy gradient computation via backpropagation through the models. In addition to being theoretically sound, our approach shows promising results on challenging continuous control benchmarks, with competitive asymptotic performance and sample complexity compared to state-of-the-art baselines.""","""The main contribution of this work is introducing uncertainty-aware value function prediction into model-based RL, which can be used to balance risk and return empirically.
The reviewers generally agree that this paper addresses an interesting problem, but some concerns remain (see reviewer comments). I also want to highlight that, in terms of empirical results, it is insufficient to present results for 3 different random seeds. To establish any kind of robustness, I suggest *at least* 10-20 different random seeds; otherwise the findings can/will be misleading. """ 367,"""Analyzing Privacy Loss in Updates of Natural Language Models""","['Language Modelling', 'Privacy']","""To continuously improve quality and reflect changes in data, machine learning-based services have to regularly re-train and update their core models. In the setting of language models, we show that a comparative analysis of model snapshots before and after an update can reveal a surprising amount of detailed information about the changes in the data used for training before and after the update. We discuss the privacy implications of our findings, propose mitigation strategies and evaluate their effect.""","""This paper reports empirical implications of privacy leaks in language models. Reviewers generally agree that the results look promising and interesting, but the paper isn't fully developed yet. A few pointed out that framing the paper to better indicate the broader implications of the observed symptoms would greatly improve it. Another suggested better placing this work in the context of related work. Overall, this paper could use another cycle of polishing and enhancing the results. """ 368,"""Realism Index: Interpolation in Generative Models With Arbitrary Prior""",[],"""In order to perform plausible interpolations in the latent space of a generative model, we need a measure that credibly reflects whether a point in an interpolation is close to the data manifold being modelled, i.e. whether it is convincing. In this paper, we introduce a realism index of a point, which can be constructed from an arbitrary prior density, or based on an FID-score approach in case a prior is not available. We propose a numerically efficient algorithm that directly maximises the realism index of an interpolation which, as we theoretically prove, leads to a search for a geodesic with respect to the corresponding Riemann structure. We show that we obtain better interpolations than the classical linear ones, in particular when either the prior density is not convex-shaped, or when the soap bubble effect appears.""","""This paper introduces a realism metric for generated covariates and then leverages this metric to produce a novel method of interpolating between two real covariates. The reviewers found the method novel and were satisfied with the response from the authors to their concerns. However, Reviewer 4 did have reservations about the response to his/her points 3 and 4. Moreover, in the discussion period it was decided that, while the method was well justified by intuition and theory, the empirical evaluation, which is what matters at the end of the day, was unconvincing. """ 369,"""Curriculum Learning for Deep Generative Models with Clustering""","['curriculum learning', 'generative adversarial network']","""Training generative models like Generative Adversarial Networks (GANs) is challenging for noisy data. A novel curriculum learning algorithm pertaining to clustering is proposed in this paper to address this issue. The curriculum construction is based on the centrality of the underlying clusters in the data points.
Data points of high centrality take priority in being fed into the generative model during training. To make our algorithm scalable to large-scale data, an active set is devised, in the sense that every round of training proceeds only on an active subset containing a small fraction of already-trained data and the incremental data of lower centrality. Moreover, a geometric analysis is presented to explain the necessity of the cluster curriculum for generative models. Experiments on cat and human-face data validate that our algorithm is able to learn the optimal generative models (e.g. ProGAN) with respect to specified quality metrics for noisy data. An interesting finding is that the optimal cluster curriculum is closely related to the critical point of the geometric percolation process formulated in the paper.""","""The paper proposes a curriculum learning approach to training generative models like GANs. The reviewers had a number of questions and concerns related to specific details in the paper and experimental results. While the authors were able to address some of these concerns, the reviewers believe that further refinement is necessary before the paper is ready for publication.""" 370,"""Neural Arithmetic Unit by reusing many small pre-trained networks""","['NALU', 'feed forward NN']","""We propose a solution for the evaluation of mathematical expressions. However, instead of designing a single end-to-end model, we propose a Lego-brick-style architecture. In this architecture, instead of training a complex end-to-end neural network, many small networks can be trained independently, each accomplishing one specific operation and acting as a single Lego brick. More difficult or complex tasks can then be solved using a combination of these smaller networks. In this work we first identify 8 fundamental operations that are commonly used in arithmetic (such as 1-digit multiplication, addition, subtraction, sign calculation, etc.). These fundamental operations are then learned using simple feed-forward neural networks. We then show that different operations can be designed simply by reusing these smaller networks. As an example, we reuse these smaller networks to develop a larger and more complex network that solves n-digit multiplication, n-digit division, and the cross product. This bottom-up strategy not only introduces reusability; we also show that it allows generalization to computations involving n digits, and we show results for up to 7-digit numbers. Unlike existing methods, our solution also generalizes to both positive and negative numbers.""","""This paper proposes to train and compose neural networks for the purposes of arithmetic operations. All reviewers agree that the motivation for such a work is unclear, and the general presentation in the paper can be significantly improved. As such, I cannot recommend this paper in its current state for publication. """ 371,"""Neural Linear Bandits: Overcoming Catastrophic Forgetting through Likelihood Matching""",[],"""We study neural-linear bandits for solving problems where both exploration and representation learning play an important role. Neural-linear bandits leverage the representation power of deep neural networks and combine it with efficient exploration mechanisms, designed for linear contextual bandits, on top of the last hidden layer. Since the representation is being optimized during learning, information regarding exploration with ""old"" features is lost.
Here, we propose the first limited-memory neural-linear bandit that is resilient to this catastrophic forgetting phenomenon. We perform simulations on a variety of real-world problems, including regression, classification, and sentiment analysis, and observe that our algorithm achieves superior performance and shows resilience to catastrophic forgetting. ""","""Reviewers found the problem statement to have merit, but found the solution not completely justifiable. Bandit algorithms often come with theoretical justification because the feedback is such that the algorithm could be performing horribly without giving any indication of performance loss. With neural networks this is obviously challenging given the lack of supervised learning guarantees, but reviewers remain skeptical and prefer not to speculate based on empirical results. """ 372,"""Perceptual Generative Autoencoders""",[],"""Modern generative models are usually designed to match target distributions directly in the data space, where the intrinsic dimensionality of data can be much lower than the ambient dimensionality. We argue that this discrepancy may contribute to the difficulties in training generative models. We therefore propose to map both the generated and target distributions to the latent space using the encoder of a standard autoencoder, and train the generator (or decoder) to match the target distribution in the latent space. The resulting method, the perceptual generative autoencoder (PGA), is then incorporated with a maximum likelihood or variational autoencoder (VAE) objective to train the generative model. With maximum likelihood, PGAs generalize the idea of reversible generative models to unrestricted neural network architectures and arbitrary latent dimensionalities. When combined with VAEs, PGAs can generate sharper samples than vanilla VAEs. Compared to other autoencoder-based generative models using simple priors, PGAs achieve state-of-the-art FID scores on CIFAR-10 and CelebA.""","""The authors present a new training procedure for generative models where the target and generated distributions are first mapped to a latent space and the divergence between them is minimised in this latent space. The authors achieve state-of-the-art results on two datasets. All reviewers agreed that the idea was very interesting and has a lot of potential. Unfortunately, in the initial version of the paper the main section (section 3) was not very clear, with confusing notation and statements. I thank the authors for taking this feedback positively and significantly revising the writeup. However, even after the revision some of the ideas are still not clear. In particular, during discussions between the AC and reviewers it was pointed out that the training procedure is still not convincing. It was not clear whether the heuristic combination of the deterministic PGA parts of the objective (3) with the likelihood/VAE-based terms (9) and (12, 13) was conceptually very sound. Unfortunately, most of the initial discussions with the authors revolved around clarity, and once we crossed the ""clarity"" barrier there wasn't enough time to discuss the other technical details of the paper. As a result, even though the paper seems interesting, the initial lack of clarity went against it. In summary, based on the reviewer comments, I recommend that the paper cannot be accepted.
""" 373,"""Towards Disentangling Non-Robust and Robust Components in Performance Metric""","['adversarial examples', 'robust machine learning']","""The vulnerability to slight input perturbations is a worrying yet intriguing property of deep neural networks (DNNs). Though some efforts have been devoted to investigating the reason behind such adversarial behavior, the relation between standard accuracy and adversarial behavior of DNNs is still little understood. In this work, we reveal such relation by first introducing a metric characterizing the standard performance of DNNs. Then we theoretically show this metric can be disentangled into an information-theoretic non-robust component that is related to adversarial behavior, and a robust component. Then, we show by experiments that DNNs under standard training rely heavily on optimizing the non-robust component in achieving decent performance. We also demonstrate current state-of-the-art adversarial training algorithms indeed try to robustify DNNs by preventing them from using the non-robust component to distinguish samples from different categories. Based on our findings, we take a step forward and point out the possible direction of simultaneously achieving decent standard generalization and adversarial robustness. It is hoped that our theory can further inspire the community to make more interesting discoveries about the relation between standard accuracy and adversarial robustness of DNNs.""","""All reviewers suggest rejection. Beyond that, the more knowledgable two have consistent questions about the motivation for using the CCKL objective. As such, the exposition of this paper, and justification of the work could use improvement, so that experienced reviewers understand the contributions of the paper.""" 374,"""Characterize and Transfer Attention in Graph Neural Networks""","['Graph Neural Networks', 'Graph Attention Networks', 'Attention', 'Transfer Learning', 'Empirical Study']","""Does attention matter and, if so, when and how? Our study on both inductive and transductive learning suggests that datasets have a strong influence on the effects of attention in graph neural networks. Independent of learning setting, task and attention variant, attention mostly degenerate to simple averaging for all three citation networks, whereas they behave strikingly different in the protein-protein interaction networks and molecular graphs: nodes attend to different neighbors per head and get more focused in deeper layers. Consequently, attention distributions become telltale features of the datasets themselves. We further explore the possibility of transferring attention for graph sparsification and show that, when applicable, attention-based sparsification retains enough information to obtain good performance while reducing computational and storage costs. Finally, we point out several possible directions for further study and transfer of attention.""","""This paper suggests that datasets have a strong influence on the effects of attention in graph neural networks and explores the possibility of transferring attention for graph sparsification, suggesting that attention-based sparsification retains enough information to obtain good performance while reducing computational and storage costs. Unfortunately I cannot recommend acceptance for this paper in its present form. 
Some concerns raised by the reviewers are: the analysis lacks theoretical insights and does not seem to be very useful in practice; the proposed method for graph sparsification lacks novelty; the experiments are not thorough enough to validate its usefulness. I encourage the authors to address these concerns in an eventual resubmission. """ 375,"""Functional Regularisation for Continual Learning with Gaussian Processes""","['Continual Learning', 'Gaussian Processes', 'Lifelong learning', 'Incremental Learning']","""We introduce a framework for Continual Learning (CL) based on Bayesian inference over the function space rather than the parameters of a deep neural network. This method, referred to as functional regularisation for Continual Learning, avoids forgetting a previous task by constructing and memorising an approximate posterior belief over the underlying task-specific function. To achieve this we rely on a Gaussian process obtained by treating the weights of the last layer of a neural network as random and Gaussian distributed. Then, the training algorithm sequentially encounters tasks and constructs posterior beliefs over the task-specific functions by using inducing point sparse Gaussian process methods. At each step a new task is first learnt and then a summary is constructed consisting of (i) inducing inputs, a fixed-size subset of the task inputs selected such that it optimally represents the task, and (ii) a posterior distribution over the function values at these inputs. This summary then regularises learning of future tasks, through Kullback-Leibler regularisation terms. Our method thus unites approaches focused on (pseudo-)rehearsal with those derived from a sequential Bayesian inference perspective in a principled way, leading to strong results on accepted benchmarks.""","""The authors introduce a framework for continual learning in neural networks based on sparse Gaussian process methods. The reviewers had a number of questions and concerns that were adequately addressed during the discussion phase. This is an interesting addition to the continual learning literature. Please be sure to update the paper based on the discussion.""" 376,"""On the Convergence of FedAvg on Non-IID Data""","['Federated Learning', 'stochastic optimization', 'Federated Averaging']","""Federated learning enables a large number of edge computing devices to jointly learn a model without data sharing. As a leading algorithm in this setting, Federated Averaging (\texttt{FedAvg}) runs Stochastic Gradient Descent (SGD) in parallel on a small subset of the total devices and averages the sequences only once in a while. Despite its simplicity, it lacks theoretical guarantees under realistic settings. In this paper, we analyze the convergence of \texttt{FedAvg} on non-iid data and establish a convergence rate of $\mathcal{O}(\frac{1}{T})$ for strongly convex and smooth problems, where $T$ is the number of SGDs. Importantly, our bound demonstrates a trade-off between communication-efficiency and convergence rate. As user devices may be disconnected from the server, we relax the assumption of full device participation to partial device participation and study different averaging schemes; low device participation rate can be achieved without severely slowing down the learning. Our results indicate that heterogeneity of data slows down the convergence, which matches empirical observations.
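The \texttt{FedAvg} scheme analyzed above is easy to simulate: each sampled device runs a few local SGD steps on its own non-iid objective, and the server averages the returned weights. A minimal sketch under stated assumptions (quadratic per-device objectives, a decaying step size, partial participation; none of this is the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n_devices, d = 10, 5
targets = rng.normal(size=(n_devices, d))   # device k minimizes ||w - targets[k]||^2 (non-iid)

w = np.zeros(d)                             # global model
for t in range(200):
    eta = 0.5 / (1 + t)                     # decaying learning rate, as the analysis requires
    chosen = rng.choice(n_devices, size=5, replace=False)  # partial device participation
    local_models = []
    for k in chosen:
        wk = w.copy()
        for _ in range(3):                  # E = 3 local SGD steps before communication
            grad = 2 * (wk - targets[k])
            wk -= eta * grad
        local_models.append(wk)
    w = np.mean(local_models, axis=0)       # server averages the local models

print(np.linalg.norm(w - targets.mean(axis=0)))  # distance to the global optimum
```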
Furthermore, we provide a necessary condition for \texttt{FedAvg} on non-iid data: the learning rate $\eta$ must decay, even if full-gradient is used; otherwise, the solution will be $\Omega(\eta)$ away from the optimal.""","""This manuscript analyzes the convergence of federated learning with stragglers, and provides convergence rates. The proof techniques involve bounding the effects of the non-identical distribution due to stragglers and related issues. The manuscript also includes a thorough empirical evaluation. Overall, the reviewers were quite positive about the manuscript, with a few details that should be improved. """ 377,"""Transition Based Dependency Parser for Amharic Language Using Deep Learning""","['Amharic dependency parsing', 'arc-eager transition', 'LSTM', 'Transition action prediction', 'Relationship type prediction']","""Research shows that attempts to apply existing dependency parsers to morphologically rich languages, including Amharic, yield poor performance. In this study, a dependency parser for the Amharic language is implemented using an arc-eager transition system and an LSTM network. The study introduces another way of building labeled dependency structures by using a separate network model to predict the dependency relation. This decreases the number of classes from 2n+2 to n, where n is the number of relationship types in the language, and increases the number of examples for each class in the data set. Evaluation of the parser model yields 91.54 and 81.4 unlabeled and labeled attachment scores, respectively. The major challenge in this study was the low labeled attachment score. This is mainly due to the size and quality of the tree-bank available for the Amharic language. Improving the tree-bank by increasing its size and by adding morphological information can improve the parser's performance.""","""The paper builds a transition-based dependency parser for Amharic, first predicting transitions and then dependency labels. The model is poorly motivated, and poorly described. The experiments have serious problems with their train/test splits and lack of baseline. The reviewers all convincingly argue for reject. The authors have not responded. """ 378,"""How can we generalise learning distributed representations of graphs?""","['graphs', 'distributed representations', 'similarity learning']","""We propose a general framework to construct unsupervised models capable of learning distributed representations of discrete structures such as graphs based on R-Convolution kernels and distributed semantics research. Our framework combines the insights and observations of Deep Graph Kernels and Graph2Vec towards a unified methodology for performing similarity learning on graphs of arbitrary size. This is exemplified by our own instance G2DR, which extends Graph2Vec from labelled graphs towards unlabelled graphs and tackles issues of diagonal dominance through pruning of the subgraph vocabulary composing graphs. These changes produce new state-of-the-art results in the downstream application of G2DR embeddings in graph classification tasks, ranging from binary classification over datasets of small labelled graphs to multi-class classification on large unlabelled graphs, using an off-the-shelf support vector machine. ""","""The paper proposed a general framework to construct unsupervised models for representation learning of discrete structures. The reviewers feel that the approach is taken directly from graph kernels, and the novelty is not high enough.
""" 379,"""The Early Phase of Neural Network Training""","['empirical', 'learning dynamics', 'lottery tickets', 'critical periods', 'early']","""Recent studies have shown that many important aspects of neural network learning take place within the very earliest iterations or epochs of training. For example, sparse, trainable sub-networks emerge (Frankle et al., 2019), gradient descent moves into a small subspace (Gur-Ari et al., 2018), and the network undergoes a critical period (Achille et al., 2019). Here we examine the changes that deep neural networks undergo during this early phase of training. We perform extensive measurements of the network state and its updates during these early iterations of training, and leverage the framework of Frankle et al. (2019) to quantitatively probe the weight distribution and its reliance on various aspects of the dataset. We find that, within this framework, deep networks are not robust to reinitializing with random weights while maintaining signs, and that weight distributions are highly non-independent even after only a few hundred iterations. Despite this, pre-training with blurred inputs or an auxiliary self-supervised task can approximate the changes in supervised networks, suggesting that these changes are label-agnostic, though labels significantly accelerate this process. Together, these results help to elucidate the network changes occurring during this pivotal initial period of learning.""","""This paper studies numerous ways in which the statistics of network weights evolve during network training. Reviewers are not entirely sure what conclusions to make from these studies, and training dynamics can be strongly impacted by arbitrary choices made in the training process. Despite these issues, the reviewers think the observed results are interesting enough to clear the bar for publication.""" 380,"""Policy Optimization with Stochastic Mirror Descent""","['reinforcement learning', 'policy gradient', 'stochastic variance reduce gradient', 'sample efficiency', 'stochastic mirror descent']","""Improving sample efficiency has been a longstanding goal in reinforcement learning. In this paper, we propose the pseudo-formula : a sample efficient policy gradient method with stochastic mirror descent. A novel variance reduced policy gradient estimator is the key of pseudo-formula to improve sample efficiency. Our pseudo-formula needs only pseudo-formula sample trajectories to achieve an pseudo-formula -approximate first-order stationary point, which matches the best-known sample complexity. We conduct extensive experiments to show our algorithm outperforms state-of-the-art policy gradient methods in various settings.""","""This paper proposes a new policy gradient method based on stochastic mirror descent and variance reduction. Both theoretical analysis and experiments are provided to demonstrate the sample efficiency of the proposed algorithm. The main concerns of this paper include: (1) unclear presentation in both the main results and the proof; and (2) missing baselines (e.g., HAPG) in the experiments. This paper has been carefully discussed but even after author response and reviewer discussion, it does not gather sufficient support. Note: the authors disclosed their identity by adding the author names in the revision during the author response. After discussion with PC chair, the openreview team helped remove that revision during the reviewer discussion to avoid desk reject. 
""" 381,"""Constant Curvature Graph Convolutional Networks""","['graph convolutional neural networks', 'hyperbolic spaces', 'gyrvector spaces', 'riemannian manifolds', 'graph embeddings']",""" Interest has been rising lately towards methods representing data in non-Euclidean spaces, e.g. hyperbolic or spherical. These geometries provide specific inductive biases useful for certain real-world data properties, e.g. scale-free or hierarchical graphs are best embedded in a hyperbolic space. However, the very popular class of graph neural networks is currently limited to model data only via Euclidean node embeddings and associated vector space operations. In this work, we bridge this gap by proposing mathematically grounded generalizations of graph convolutional networks (GCN) to (products of) constant curvature spaces. We do this by i) extending the gyro-vector space theory from hyperbolic to spherical spaces, providing a unified and smooth view of the two geometries, ii) leveraging gyro-barycentric coordinates that generalize the classic Euclidean concept of the center of mass. Our class of models gives strict generalizations in the sense that they recover their Euclidean counterparts when the curvature goes to zero from either side. Empirically, our methods outperform different types of classic Euclidean GCNs in the tasks of node classification and minimizing distortion for symbolic data exhibiting non-Euclidean behavior, according to their discrete curvature. ""","""This paper proposes using non-Euclidean spaces for GCNs, leveraging the gyrovector space formalism. The model allows products of constant curvature, both positive and negative, generalizing hyperbolic embeddings. Reviewers got mixed impressions on this paper. Whereas some found its methodology compelling and its empirical evaluation satisfactory, it was generally perceived that this paper will greatly benefit from another round of reviewing. In particular, the authors should improve readability of the main text and provide a more thorough discussion on related recent (and concurrent) work. """ 382,"""Storage Efficient and Dynamic Flexible Runtime Channel Pruning via Deep Reinforcement Learning""",[],"""In this paper, we propose a deep reinforcement learning (DRL) based framework to efficiently perform runtime channel pruning on convolutional neural networks (CNNs). Our DRL-based framework aims to learn a pruning strategy to determine how many and which channels to be pruned in each convolutional layer, depending on each specific input instance in runtime. The learned policy optimizes the performance of the network by restricting the computational resource on layers under an overall computation budget. Furthermore, unlike other runtime pruning methods which require to store all channels parameters in inference, our framework can reduce parameters storage consumption at deployment by introducing a static pruning component. Comparison experimental results with existing runtime and static pruning methods on state-of-the-art CNNs demonstrate that our proposed framework is able to provide a tradeoff between dynamic flexibility and storage efficiency in runtime channel pruning. 
""","""Main content: Proposes a deep RL unified framework to manage the trade-off between static pruning to decrease storage requirements and network flexibility for dynamic pruning to decrease runtime costs Summary of discussion: reviewer1: Reviewer likes the proposed DRL approach, but writing and algorithmic details are lacking reviewer2: Pruning methods are certainly imortant, but there are details missing wrt the algorithm in the paper. reviewer3: Presents a novel RL algorithm, showing good results on CIFAR10 and ISLVRC2012. Algorithmic details and parameters are not clearly explained. Recommendation: All reviewers liked the work but the writing/algorithmic details are lacking. I recommend Reject. """ 383,"""Optimizing Data Usage via Differentiable Rewards""","['data selection', 'multilingual neural machine translation', 'data usage optimzation', 'transfer learning', 'classification']","""To acquire a new skill, humans learn better and faster if a tutor, based on their current knowledge level, informs them of how much attention they should pay to particular content or practice problems. Similarly, a machine learning model could potentially be trained better with a scorer that adapts to its current learning state and estimates the importance of each training data instance. Training such an adaptive scorer efficiently is a challenging problem; in order to precisely quantify the effect of a data instance at a given time during the training, it is typically necessary to first complete the entire training process. To efficiently optimize data usage, we propose a reinforcement learning approach called Differentiable Data Selection (DDS). In DDS, we formulate a scorer network as a learnable function of the training data, which can be efficiently updated along with the main model being trained. Specifically, DDS updates the scorer with an intuitive reward signal: it should up-weigh the data that has a similar gradient with a dev set upon which we would finally like to perform well. Without significant computing overhead, DDS delivers strong and consistent improvements over several strong baselines on two very different tasks of machine translation and image classification.""","""The paper proposes an iterative learning method that jointly trains both a model and a scorer network that places a non-uniform weights on data points, which estimates the importance of each data point for training. This leads to significant improvement on several benchmarks. The reviewers mostly agreed that the approach is novel and that the benchmark results were impressive, especially on Imagenet. There were both clarity issues about methodology and experiments, as well as concerns about several technical issues. The reviewers felt that the rebuttal resolved the majority of minor technical issues, but did not sufficiently clarify the more significant methodological concerns. Thus, I recommend rejection at this time.""" 384,"""Energy-based models for atomic-resolution protein conformations""","['energy-based model', 'transformer', 'energy function', 'protein conformation']","""We propose an energy-based model (EBM) of protein conformations that operates at atomic scale. The model is trained solely on crystallized protein data. By contrast, existing approaches for scoring conformations use energy functions that incorporate knowledge of physical principles and features that are the complex product of several decades of research and tuning. 
To evaluate the model, we benchmark on the rotamer recovery task, the problem of predicting the conformation of a side chain from its context within a protein structure, which has been used to evaluate energy functions for protein design. The model achieves performance close to that of the Rosetta energy function, a state-of-the-art method widely used in protein structure prediction and design. An investigation of the model's outputs and hidden representations finds that it captures physicochemical properties relevant to protein energy.""","""The paper proposes a data-driven approach to learning atomic-resolution energy functions. Experimental results show that the proposed energy function is similar to the state-of-the-art method (Rosetta) based on physical principles and engineered features. The paper addresses an interesting and challenging problem. The results are very promising. It is a good showcase of how ML can be applied to solve an important application problem. For the final version, we suggest that the authors tone down some claims in the paper to fairly reflect the contribution of the work. """ 385,"""A Copula approach for hyperparameter transfer learning""","['Hyperparameter optimization', 'Bayesian Optimization', 'Gaussian Process', 'Copula', 'Transfer-learning']","""Bayesian optimization (BO) is a popular methodology to tune the hyperparameters of expensive black-box functions. Despite its success, standard BO focuses on a single task at a time and is not designed to leverage information from related functions, such as tuning performance metrics of the same algorithm across multiple datasets. In this work, we introduce a novel approach to achieve transfer learning across different datasets as well as different metrics. The main idea is to regress the mapping from hyperparameter to metric quantiles with a semi-parametric Gaussian Copula distribution, which provides robustness against different scales or outliers that can occur in different tasks. We introduce two methods to leverage this estimation: a Thompson sampling strategy as well as a Gaussian Copula process using such quantile estimate as a prior. We show that these strategies can combine the estimation of multiple metrics such as runtime and accuracy, steering the optimization toward cheaper hyperparameters for the same level of accuracy. Experiments on an extensive set of hyperparameter tuning tasks demonstrate significant improvements over state-of-the-art methods.""","""This paper tackles the problem of transferring learning between tasks when performing Bayesian hyperparameter optimization. In this setting, tasks can correspond to different datasets or different metrics. The proposed approach uses Gaussian copulas to synchronize the different scales of the considered tasks and uses Thompson Sampling from the resulting Gaussian Copula Process for selecting the next hyperparameters. The main weakness of the paper resides in the concerns raised about the experiments. First, the results are hard to interpret, which can lead to misreading of the performance. Moreover, the considered baselines may not be appropriate (they may be trivial). This might be due to a misunderstanding of the paper, which would align with the third major concern, namely the lack of clarity. These points could be addressed in a future version of the work, but it would need to be reviewed again and therefore would be too late for the current camera-ready.
Hence, I recommend rejecting this paper.""" 386,"""Using Objective Bayesian Methods to Determine the Optimal Degree of Curvature within the Loss Landscape""","['Objective Bayes', 'Information Geometry', 'Artificial Neural Networks']","""The efficacy of the width of the basin of attraction surrounding a minimum in parameter space as an indicator for the generalizability of a model parametrization is a point of contention surrounding the training of artificial neural networks, with the dominant view being that wider areas in the landscape reflect better generalizability by the trained model. In this work, however, we aim to show that this is only true for a noiseless system and in general the trend of the model towards wide areas in the landscape reflects the propensity of the model to overfit the training data. Utilizing the objective Bayesian (Jeffreys) prior, we instead propose a different determinant of the optimal width within the parameter landscape determined solely by the curvature of the landscape. In doing so we utilize the decomposition of the landscape into the dimensions of principal curvature and find the first principal curvature dimension of the parameter space to be independent of noise within the training data.""","""There has been significant discussion in the literature on the effect of the properties of the curvature of minima on generalization in deep learning. This paper aims to shed some light on that discussion through the lens of theoretical analysis and the use of a Bayesian Jeffreys prior. It seems clear that the reviewers appreciated the work and found the analysis insightful. However, a major issue cited by the reviewers is a lack of compelling empirical evidence that the claims of the paper are true. The authors run experiments on very small networks and reviewers felt that the results of these experiments were unlikely to extrapolate to large scale modern models and problems. One reviewer was concerned about the quality of the exposition in terms of the writing and language and care in terminology. Unfortunately, this paper falls below the bar for acceptance, but it seems likely that stronger empirical results and a careful treatment of the writing would make this a much stronger paper for future submission.""" 387,"""BlockSwap: Fisher-guided Block Substitution for Network Compression on a Budget""","['model compression', 'architecture search', 'efficiency', 'budget', 'convolutional neural networks']","""The desire to map neural networks to varying-capacity devices has led to the development of a wealth of compression techniques, many of which involve replacing standard convolutional blocks in a large network with cheap alternative blocks. However, not all blocks are created equal; for a required compute budget there may exist a potent combination of many different cheap blocks, though exhaustively searching for such a combination is prohibitively expensive. In this work, we develop BlockSwap: a fast algorithm for choosing networks with interleaved block types by passing a single minibatch of training data through randomly initialised networks and gauging their Fisher potential. These networks can then be used as students and distilled with the original large network as a teacher. We demonstrate the effectiveness of the chosen networks across CIFAR-10 and ImageNet for classification, and COCO for detection, and provide a comprehensive ablation study of our approach.
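A rough sketch of the single-minibatch, Fisher-style ranking that the BlockSwap abstract above describes, using total squared gradient magnitude as the scoring proxy; the stand-in linear candidates are an assumption, since the paper ranks whole networks with different interleaved block types:

```python
import numpy as np

rng = np.random.default_rng(0)

def fisher_potential(grads):
    """Single-minibatch Fisher proxy: total squared gradient magnitude.
    `grads` is a list of per-parameter gradient arrays for one candidate."""
    return sum(float((g ** 2).sum()) for g in grads)

# Rank randomly initialised candidates by Fisher potential on one minibatch.
X, y = rng.normal(size=(64, 10)), rng.normal(size=64)
candidates = [rng.normal(0, s, size=10) for s in (0.1, 0.5, 1.0)]
scores = []
for w in candidates:
    grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient on the minibatch
    scores.append(fisher_potential([grad]))
best = int(np.argmax(scores))
print(f"candidate {best} has the highest Fisher potential")
```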
BlockSwap quickly explores possible block configurations using a simple architecture ranking system, yielding highly competitive networks in orders of magnitude less time than most architecture search techniques (e.g. under 5 minutes on a single GPU for CIFAR-10).""","""Two reviewers recommend acceptance. One reviewer is negative but does not provide reasons for rejection. The AC read the paper and agrees with the positive reviewers in that the paper provides value for the community on an important topic of network compression.""" 388,"""Learning Temporal Abstraction with Information-theoretic Constraints for Hierarchical Reinforcement Learning""","['hierarchical reinforcement learning', 'temporal abstraction']","""Applying reinforcement learning (RL) to real-world problems will require reasoning about action-reward correlation over long time horizons. Hierarchical reinforcement learning (HRL) methods handle this by dividing the task into hierarchies, often with hand-tuned network structure or pre-defined subgoals. We propose a novel HRL framework, TAIC, which learns the temporal abstraction from past experience or expert demonstrations without task-specific knowledge. We formulate the temporal abstraction problem as learning latent representations of action sequences and present a novel approach to regularizing the latent space by adding information-theoretic constraints. Specifically, we maximize the mutual information between the latent variables and the state changes. A visualization of the latent space demonstrates that our algorithm learns an effective abstraction of the long action sequences. The learned abstraction allows us to learn new tasks on a higher level more efficiently. We demonstrate a significant speedup in convergence on benchmark learning problems. These results demonstrate that learning temporal abstractions is an effective technique for increasing the convergence rate and sample efficiency of RL algorithms.""","""This paper presents a novel hierarchical reinforcement learning framework, based on learning temporal abstractions from past experience or expert demonstrations using recurrent variational autoencoders and regularising the representations. This is certainly an interesting line of work, but there were two primary areas of concern in the reviews: the clarity of details of the approach, and the lack of comparison to baselines. While the former issue was largely dealt with in the rebuttals, the latter remained an issue for all reviewers. For this reason, I recommend rejection of the paper in its current form.""" 389,"""Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks""",[],"""(Frankle & Carbin, 2019) shows that there exist winning tickets (small but critical subnetworks) for dense, randomly initialized networks that can be trained alone to achieve comparable accuracies to the latter in a similar number of iterations. However, the identification of these winning tickets still requires the costly train-prune-retrain process, limiting their practical benefits. In this paper, we discover for the first time that the winning tickets can be identified at the very early training stage, which we term Early-Bird (EB) tickets, via low-cost training schemes (e.g., early stopping and low-precision training) at large learning rates. Our finding of EB tickets is consistent with recently reported observations that the key connectivity patterns of neural networks emerge early.
Furthermore, we propose a mask distance metric that can be used to identify EB tickets with low computational overhead, without needing to know the true winning tickets that emerge after the full training. Finally, we leverage the existence of EB tickets and the proposed mask distance to develop efficient training methods, which are achieved by first identifying EB tickets via low-cost schemes, and then continuing to train merely the EB tickets towards the target accuracy. Experiments based on various deep networks and datasets validate: 1) the existence of EB tickets, and the effectiveness of mask distance in efficiently identifying them; and 2) that the proposed efficient training via EB tickets can achieve up to 4.7x energy savings while maintaining comparable or even better accuracy, demonstrating a promising and easily adopted method for tackling cost-prohibitive deep network training.""","""This work studies small but critical subnetworks, called winning tickets, that have very similar performance to an entire network, even with much less training. They show how to identify these early in the training of the entire network, saving computation and time both in identifying them and overall for the prediction task as a whole. The reviewers agree this paper is well-presented and of general interest to the community. Therefore, we recommend that the paper be accepted.""" 390,"""Progressive Upsampling Audio Synthesis via Effective Adversarial Training""","['audio synthesis', 'sound effect generation', 'generative adversarial network', 'progressive training', 'raw-waveform']","""This paper proposes a novel generative model called PUGAN, which progressively synthesizes high-quality audio in a raw waveform. PUGAN leverages the recently proposed idea of progressive generation of higher-resolution images by stacking multiple encoder-decoder architectures. To effectively apply it to raw audio generation, we propose two novel modules: (1) a neural upsampling layer and (2) a sinc convolutional layer. Compared to the existing state-of-the-art model called WaveGAN, which uses a single decoder architecture, our model generates audio signals and converts them to a higher resolution in a progressive manner, while using a significantly smaller number of parameters than WaveGAN, e.g., 20x smaller for 44.1kHz output. Our experiments show that the audio signals can be generated in real-time with quality comparable to that of WaveGAN with respect to the inception scores and the human evaluation.""","""Inspired by WaveGAN, this paper proposes PUGAN to synthesize high-quality audio in a raw waveform. The paper is well motivated. But all the reviewers find that the paper lacks clarity and details, and that there are some problems in the experiments.""" 391,"""Detecting Out-of-Distribution Inputs to Deep Generative Models Using Typicality""","['Deep generative models', 'out-of-distribution detection', 'safety']","""Recent work has shown that deep generative models can assign higher likelihood to out-of-distribution data sets than to their training data [Nalisnick et al., 2019; Choi et al., 2019]. We posit that this phenomenon is caused by a mismatch between the model's typical set and its areas of high probability density. In-distribution inputs should reside in the former but not necessarily in the latter, as previous work has presumed [Bishop, 1994].
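The mask distance proposed in the Early-Bird abstract above can be sketched as the normalized Hamming distance between magnitude-pruning masks taken at consecutive checkpoints; the sparsity level, stopping threshold, and stand-in weights below are illustrative assumptions:

```python
import numpy as np

def prune_mask(weights, sparsity):
    """Binary keep-mask from magnitude pruning at the given sparsity level."""
    k = int(weights.size * (1 - sparsity))              # number of weights kept
    thresh = np.sort(np.abs(weights).ravel())[-k]
    return (np.abs(weights) >= thresh).astype(np.uint8)

def mask_distance(m1, m2):
    """Normalized Hamming distance between two pruning masks."""
    return float((m1 != m2).mean())

rng = np.random.default_rng(0)
w_epoch5 = rng.normal(size=(256, 256))
w_epoch6 = w_epoch5 + 0.01 * rng.normal(size=(256, 256))  # stand-in for one more epoch

d = mask_distance(prune_mask(w_epoch5, 0.9), prune_mask(w_epoch6, 0.9))
# An EB ticket would be drawn once consecutive distances stay below a small
# threshold (the 0.05 value here is an assumption, not from the paper).
print("draw the EB ticket" if d < 0.05 else f"keep training (distance={d:.3f})")
```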
To determine whether or not inputs reside in the typical set, we propose a statistically principled, easy-to-implement test using the empirical distribution of model likelihoods. The test is model agnostic and widely applicable, only requiring that the likelihood can be computed or closely approximated. We report experiments showing that our procedure can successfully detect the out-of-distribution sets in several of the challenging cases reported by Nalisnick et al. [2019].""","""This paper tackles the problem of detecting out of distribution (OoD) samples. To this end, the authors propose a new approach based on typical sets, i.e. sets of samples whose expected log likelihood approximates the model's entropy. The idea is then to rely on statistical testing using the empirical distribution of model likelihoods in order to determine whether samples lie in the typical set of the considered model. Experiments are provided where the proposed approach shows competitive performance on MNIST and natural image tasks. This work has major drawbacks: novelty, theoretical soundness, and robustness in settings with model misspecification. Using the typicality notion has already been explored in Choi et al. 2019 (for flow-based models), which dampens the novelty of this work. The conditions under which the typicality notion can be used are also not clear, e.g. in the small data regime. Finally, the current experiments lack a characterization of robustness to model misspecification. Given these limitations, I recommend rejecting this paper. """ 392,"""Data-Efficient Image Recognition with Contrastive Predictive Coding""","['Deep learning', 'representation learning', 'contrastive methods', 'unsupervised learning', 'self-supervised learning', 'vision', 'data-efficiency']","""Human observers can learn to recognize new categories of objects from a handful of examples, yet doing so with machine perception remains an open challenge. We hypothesize that data-efficient recognition is enabled by representations which make the variability in natural signals more predictable, as suggested by recent perceptual evidence. We therefore revisit and improve Contrastive Predictive Coding, a recently-proposed unsupervised learning framework, and arrive at a representation which enables generalization from small amounts of labeled data. When provided with only 1% of ImageNet labels (i.e. 13 per class), this model retains a strong classification performance, 73% Top-5 accuracy, outperforming supervised networks by 28% (a 65% relative improvement) and state-of-the-art semi-supervised methods by 14%. We also find this representation to serve as a useful substrate for object detection on the PASCAL-VOC 2007 dataset, approaching the performance of representations trained with a fully annotated ImageNet dataset.""","""The paper tackles the key question of achieving high prediction performances with few labels. The proposed approach builds upon Contrastive Predictive Coding (van den Oord et al. 2018). The contribution lies in i) refining CPC along several axes including model capacity, directional predictions, patch-based augmentation; ii) showing that the refined representation learned by the so-called CPC v2 supports efficient classification in a few-label regime, and can be transferred to another dataset; iii) showing that the auxiliary losses involved in the CPC are not necessarily predictive of the eventual performance of the network. This paper generated a hot discussion.
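The typicality test described above reduces to comparing a batch's average negative log-likelihood against an entropy estimate. A minimal sketch with a one-dimensional Gaussian standing in for the deep generative model; the `eps` threshold is an assumption that would in practice be set on validation data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Model: a fitted 1-D Gaussian stands in for the deep generative model.
mu, sigma = 0.0, 1.0
def nll(x):
    return 0.5 * np.log(2 * np.pi * sigma ** 2) + (x - mu) ** 2 / (2 * sigma ** 2)

# Entropy estimate: average NLL over held-in training data.
train = rng.normal(mu, sigma, size=100_000)
entropy_hat = nll(train).mean()

def is_ood(batch, eps):
    """Typicality test: flag the batch if its mean NLL deviates from the
    model entropy by more than eps, in either direction."""
    return abs(nll(batch).mean() - entropy_hat) > eps

in_batch = rng.normal(mu, sigma, size=64)
ood_batch = rng.normal(0.0, 0.3, size=64)     # high density under the model, yet atypical
print(is_ood(in_batch, eps=0.2), is_ood(ood_batch, eps=0.2))
```

Note that the `ood_batch` sits in a high-density region, so a plain density threshold would accept it; the typicality test rejects it because its average NLL is far below the entropy.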
Reviewers were not convinced that the paper's contributions are sufficiently innovative to deserve being published at ICLR. Authors argued that novelty does not have to lie in equations, and that the new ideas and evidence presented are worthwhile. The area chair thinks that the paper raises profound questions (e.g., what auxiliary losses are most conducive to learning a good representation; how to divide the computational efforts among the preliminary phase of representation learning and the later phase of classifier learning), but given the number of options and details involved, these results may support several interpretations besides the authors'. The authors might also want to leave the claim about the generality of the CPC v2 principles (e.g., regarding audio) for further work - or to bring additional evidence backing up this claim. In conclusion, this paper contains brilliant ideas and I hope to see them published with a strengthened analysis of its components. """ 393,"""Stochastic Weight Averaging in Parallel: Large-Batch Training That Generalizes Well""","['Large batch training', 'Distributed neural network training', 'Stochastic Weight Averaging']","""We propose Stochastic Weight Averaging in Parallel (SWAP), an algorithm to accelerate DNN training. Our algorithm uses large mini-batches to compute an approximate solution quickly and then refines it by averaging the weights of multiple models computed independently and in parallel. The resulting models generalize as well as those trained with small mini-batches but are produced in a substantially shorter time. We demonstrate the reduction in training time and the good generalization performance of the resulting models on the computer vision datasets CIFAR10, CIFAR100, and ImageNet.""","""The authors proposed a simple and effective approach to parallel training based on stochastic weight averaging. Moreover, the authors have carefully addressed the reviewer comments in the discussion period, particularly the relation to local SGD, to the satisfaction of reviewers. Local SGD mimics sequential SGD with noise induced by lack of synchronization, whereas SWAP averages multiple samples from a stationary distribution, and synchronizes at the end. Please clarify these points and carefully account for reviewer comments in the final version. Overall, the proposed approach will make an excellent addition to the program, both elegant and practically useful.""" 394,"""Gradient-free Neural Network Training by Multi-convex Alternating Optimization""","['neural network', 'alternating minimization', 'global convergence']","""In recent years, stochastic gradient descent (SGD) and its variants have been the dominant optimization methods for training deep neural networks. However, SGD suffers from limitations such as the lack of theoretical guarantees, vanishing gradients, excessive sensitivity to input, and difficulties solving highly non-smooth constraints and functions. To overcome these drawbacks, alternating minimization-based methods for deep neural network optimization have attracted fast-increasing attention recently. As an emerging and open domain, however, several new challenges need to be addressed, including 1) Convergence depending on the choice of hyperparameters, and 2) Lack of unified theoretical frameworks with general conditions. We, therefore, propose a novel Deep Learning Alternating Minimization (DLAM) algorithm to deal with these two challenges.
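The final step of SWAP as described above is plain weight averaging across independently refined workers; a minimal sketch, where the parameter shapes and the perturbation standing in for small-batch refinement are assumptions:

```python
import numpy as np

def swap_average(models):
    """SWAP's final step: average the weights of models refined independently
    and in parallel from a common large-batch solution."""
    return {name: np.mean([m[name] for m in models], axis=0)
            for name in models[0]}

rng = np.random.default_rng(0)
base = {"w1": rng.normal(size=(4, 4)), "b1": rng.normal(size=4)}
# Each worker refines the base solution with its own small-batch SGD noise;
# the additive noise here is a stand-in for that refinement phase.
workers = [{k: v + 0.01 * rng.normal(size=v.shape) for k, v in base.items()}
           for _ in range(8)]
averaged = swap_average(workers)
```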
Our innovative inequality-constrained formulation approximates the original problem with non-convex equality constraints arbitrarily closely, enabling our proof of global convergence of the DLAM algorithm under mild, practical conditions, regardless of the choice of hyperparameters, for a wide range of activation functions. Experiments on benchmark datasets demonstrate the effectiveness of DLAM.""","""The paper proposes a new learning algorithm for deep neural networks that first reformulates the problem as a multi-convex one and then uses alternating updates to solve it. The reviewers are concerned about the closeness to previous work, comparisons with related work like dlADMM, and the difficulty of the dataset. While the authors proposed the possibility of addressing some of these issues, the reviewers feel that without actually addressing them, the paper is not yet ready for publication. """ 395,"""EXPLOITING SEMANTIC COHERENCE TO IMPROVE PREDICTION IN SATELLITE SCENE IMAGE ANALYSIS: APPLICATION TO DISEASE DENSITY ESTIMATION""","['semantic coherence', 'satellite scene image analysis', 'convolutional neural networks', 'disease density']","""High intra-class diversity and inter-class similarity are characteristics of remote sensing scene image data sets, currently posing significant difficulty for deep learning algorithms on classification tasks. To improve accuracy, post-classification methods have been proposed for smoothing results of model predictions. However, those approaches require an additional neural network to perform the smoothing operation, which adds overhead to the task. We propose an approach that involves learning deep features directly over neighboring scene images without requiring use of a cleanup model. Our approach utilizes a siamese network to improve the discriminative power of convolutional neural networks on a pair of neighboring scene images. It then exploits semantic coherence between this pair to enrich the feature vector of the image for which we want to predict a label. Empirical results show that this approach provides a viable alternative to existing methods. For example, our model improved prediction accuracy by 1 percentage point and dropped the mean squared error value by 0.02 over the baseline, on a disease density estimation task. These performance gains are comparable with results from existing post-classification methods, moreover without implementation overheads.""","""This paper proposes a solution to the problem of disease density estimation using satellite scene images. The method combines a classification and regression task. The reviewers were unanimous in their recommendation that the submission not be accepted to ICLR. The main concern was a lack of methodological novelty. The authors responded to reviewer comments, and indicated a list of improvements that still remain to be done, indicating that the paper should at least go through another review cycle.""" 396,"""Regulatory Focus: Promotion and Prevention Inclinations in Policy Search""","['Reinforcement Learning', 'Regulatory Focus', 'Promotion and Prevention', 'Exploration']","""The estimation of advantage is crucial for a number of reinforcement learning algorithms, as it directly influences the choices of future paths. In this work, we propose a family of estimates based on the order statistics over the path ensemble, which allows one to flexibly drive the learning process in a promotion focus or prevention focus.
On top of this formulation, we systematically study the impacts of different regulatory focuses. Our findings reveal that regulatory focus, when chosen appropriately, can result in significant benefits. In particular, for environments with sparse rewards, a promotion focus leads to more efficient exploration of the policy space, while for those where individual actions can have critical impacts, a prevention focus is preferable. On various benchmarks, including MuJoCo continuous control, Terrain locomotion, Atari games, and sparse-reward environments, the proposed schemes consistently demonstrate improvement over mainstream methods, not only accelerating the learning process but also obtaining substantial performance gains.""","""The authors take inspiration from regulatory fit theory and propose a new parameter for policy gradient algorithms in RL that can manage the ""regulatory focus"" of an agent. They hypothesize that this can affect performance in a problem-specific way, especially when trading off between broad exploration and risk. The reviewers expressed concerns about the usefulness of the proposed algorithm in practice and a lack of thorough empirical comparisons or theoretical results. Unfortunately, the authors did not provide a rebuttal, so no further discussion of these issues was possible; thus, I recommend rejection.""" 397,"""Training Deep Neural Networks by optimizing over nonlocal paths in hyperparameter space""","['deep learning', 'Hyperparameter optimization', 'dropout']","""Hyperparameter optimization is both a practical issue and an interesting theoretical problem in training of deep architectures. Despite many recent advances, the most commonly used methods almost universally involve training multiple and decoupled copies of the model, in effect sampling the hyperparameter space. We show that at a negligible additional computational cost, results can be improved by sampling \emph{nonlocal paths} instead of points in hyperparameter space. To this end we interpret hyperparameters as controlling the level of correlated noise in training, which can be mapped to an effective temperature. The usually independent instances of the model are coupled and allowed to exchange their hyperparameters throughout the training using the well established parallel tempering technique of statistical physics. Each simulation then corresponds to a unique path, or history, in the joint hyperparameter/model-parameter space. We provide empirical tests of our method, in particular for dropout and learning rate optimization. We observed faster training and improved resistance to overfitting and showed a systematic decrease in the absolute validation error, improving over benchmark results.""","""This paper uses a variant of parallel tempering to tune the subset of neural net hyperparameters which control the amount of noise and/or rate of diffusion (e.g. learning rate, batch size). It's certainly an appealing idea to run multiple chains in parallel and periodically propose swaps between them. However, I'm not persuaded about the details. The argumentation in the paper is fairly informal, and it uses ideas from optimization and MCMC somewhat interchangeably. Since the individual chains aren't sampling from any known stationary distribution, it's not clear to me what MH-based swaps will achieve. The authors are upset with one of the reviews and think it misrepresents their paper. However, I find myself agreeing with most of the reviewer's points.
Furthermore, as a general principle, the availability of code doesn't by itself make a paper reproducible. One should be able to reproduce it without the code, and one shouldn't need to refer to the code for important details about the algorithm. Another limitation (pointed out by various reviewers) is that there aren't any comparisons against prior work on hyperparameter optimization. Overall, I think there are some promising and appealing ideas in this submission, but it needs to be cleaned up before it's ready for publication at ICLR. """ 398,"""Versatile Anomaly Detection with Outlier Preserving Distribution Mapping Autoencoders""","['Anomaly detection', 'outliers', 'deep learning', 'distribution mapping', 'wasserstein autoencoders']",""" State-of-the-art deep learning methods for outlier detection make the assumption that anomalies will appear far away from inlier data in the latent space produced by distribution mapping deep networks. However, this assumption fails in practice, because the divergence penalty adopted for this purpose encourages mapping outliers into the same high-probability regions as inliers. To overcome this shortcoming, we introduce a novel deep learning outlier detection method, called Outlier Preserving Distribution Mapping Autoencoder (OP-DMA), which succeeds in mapping outliers to low probability regions in the latent space of an autoencoder. For this, we leverage the insight that outliers are likely to have a higher reconstruction error than inliers. We thus achieve outlier-preserving distribution mapping through weighting the reconstruction error of individual points by the value of a multivariate Gaussian probability density function evaluated at those points. This weighting implies that outliers will result in a lower overall penalty if they are mapped to low-probability regions. We show that if the global minimum of our newly proposed loss function is achieved, then our OP-DMA maps inliers to regions with a Mahalanobis distance less than delta, and outliers to regions past this delta, delta being the inverse Chi Squared CDF evaluated at (1-alpha) with alpha the percentage of outliers in the dataset. Our experiments confirm that OP-DMA consistently outperforms the state-of-the-art methods on a rich variety of outlier detection benchmark datasets.""","""This paper proposes an outlier detection method that maps outliers to low probability regions of the latent space. The novelty is in proposing a weighted reconstruction error penalizing the mapping of outliers into high probability regions. The reviewers find the idea promising. They have also raised several questions. It seems the questions are at least partially addressed in the rebuttal, and as a result one of our expert reviewers (R5) has increased their score from WR to WA. But since we did not have a champion for this paper and its overall score is not high enough, I can only recommend rejection at this stage.""" 399,"""Discovering Motor Programs by Recomposing Demonstrations""","['Learning from Demonstration', 'Imitation Learning', 'Motor Primitives']","""In this paper, we present an approach to learn recomposable motor primitives across large-scale and diverse manipulation demonstrations. Current approaches to decomposing demonstrations into primitives often assume manually defined primitives and bypass the difficulty of discovering these primitives. On the other hand, approaches in primitive discovery put restrictive assumptions on the complexity of a primitive, which limit applicability to narrow tasks.
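The OP-DMA loss described above weights each point's reconstruction error by a multivariate Gaussian density evaluated at its latent code; a minimal sketch of that weighting, where the function names and the standard-normal prior are assumptions:

```python
import numpy as np

def gaussian_pdf(z, mean, cov):
    """Multivariate Gaussian density, evaluated at latent codes z (one per row)."""
    d = z.shape[1]
    diff = z - mean
    inv = np.linalg.inv(cov)
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return norm * np.exp(-0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff))

def opdma_loss(x, x_recon, z):
    """Reconstruction error of each point weighted by the prior density at its
    latent code: outliers (large error) incur less total penalty when the
    encoder maps them to low-density regions, which preserves them as outliers."""
    rec_err = ((x - x_recon) ** 2).sum(axis=1)
    weights = gaussian_pdf(z, mean=np.zeros(z.shape[1]), cov=np.eye(z.shape[1]))
    return float((weights * rec_err).mean())
```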
Our approach attempts to circumvent these challenges by jointly learning the underlying motor primitives and recomposing them to form the original demonstration. Through constraints on both the parsimony of primitive decomposition and the simplicity of a given primitive, we are able to learn a diverse set of motor primitives, as well as a coherent latent representation for these primitives. We demonstrate, both qualitatively and quantitatively, that our learned primitives capture semantically meaningful aspects of a demonstration. This allows us to compose these primitives in a hierarchical reinforcement learning setup to efficiently solve robotic manipulation tasks like reaching and pushing. Our results may be viewed at pseudo-url. ""","""The work presents a novel and effective solution to learning reusable motor skills. The urgency of this problem and the considerable rebuttal of the authors merit publication of this paper, which is not perfect but needs community attention.""" 400,"""Training Neural Networks for and by Interpolation""","['optimization', 'adaptive learning-rate', 'Polyak step-size', 'Newton-Raphson']","""In modern supervised learning, many deep neural networks are able to interpolate the data: the empirical loss can be driven to near zero on all samples simultaneously. In this work, we explicitly exploit this interpolation property for the design of a new optimization algorithm for deep learning. Specifically, we use it to compute an adaptive learning-rate in closed form at each iteration. This results in the Adaptive Learning-rates for Interpolation with Gradients (ALI-G) algorithm. ALI-G retains the main advantage of SGD, which is a low computational cost per iteration. But unlike SGD, the learning-rate of ALI-G uses a single constant hyper-parameter and does not require a decay schedule, which makes it considerably easier to tune. We provide convergence guarantees of ALI-G in the stochastic convex setting. Notably, all our convergence results tackle the realistic case where the interpolation property is satisfied up to some tolerance. We provide experiments on a variety of architectures and tasks: (i) learning a differentiable neural computer; (ii) training a wide residual network on the SVHN data set; (iii) training a Bi-LSTM on the SNLI data set; and (iv) training wide residual networks and densely connected networks on the CIFAR data sets. ALI-G produces state-of-the-art results among adaptive methods, and even yields comparable performance with SGD, which requires manually tuned learning-rate schedules. Furthermore, ALI-G is simple to implement in any standard deep learning framework and can be used as a drop-in replacement in existing code.""","""This paper uses the interpolation property to design a new optimization algorithm for deep learning, which computes an adaptive learning-rate in closed form at each iteration. The authors also analyzed the convergence rate of the proposed algorithm in the stochastic convex optimization setting. Experiments on several benchmark neural networks and datasets verify the effectiveness of the proposed algorithm. This is a borderline paper and has been carefully discussed. The main objections of the reviewers include: (1) The interplay between regularization and the interpolation property is not clear; and (2) the proposed algorithm is no better than SGD in any of the benchmarks except one, where SGD's learning rate is set to be a constant.
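A sketch of the closed-form ALI-G learning rate as we read the abstract above: a Polyak-style step that exploits the near-zero optimal loss under interpolation, clipped at the single maximal-learning-rate hyperparameter. The toy interpolation problem and constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def alig_step(w, loss, grad, max_lr, delta=1e-5):
    """One ALI-G style update: the learning rate is computed in closed form
    from the current loss value (interpolation makes the optimal loss ~0)
    and clipped at the single hyperparameter max_lr."""
    lr = min(loss / (np.dot(grad, grad) + delta), max_lr)
    return w - lr * grad

# Toy interpolation problem: realizable least squares (zero loss achievable).
X = rng.normal(size=(100, 5))
w_star = rng.normal(size=5)
y = X @ w_star
w = np.zeros(5)
for _ in range(500):
    residual = X @ w - y
    loss = float((residual ** 2).mean())
    grad = 2 * X.T @ residual / len(y)
    w = alig_step(w, loss, grad, max_lr=0.5)
print(float(((X @ w - y) ** 2).mean()))   # approaches 0 without a decay schedule
```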
After the author response, this paper still does not gather sufficient support. So I encourage the authors to improve this paper and resubmit it to a future conference.""" 401,"""Isolating Latent Structure with Cross-population Variational Autoencoders""","['variational autoencoder', 'latent variable model', 'probabilistic graphical model', 'machine learning', 'deep learning', 'continual learning']","""A significant body of recent work has examined variational autoencoders as a powerful approach for tasks which involve modeling the distribution of complex data such as images and text. In this work, we present a framework for modeling multiple data sets which come from differing distributions but which share some common latent structure. By incorporating architectural constraints and using a mutual information regularized form of the variational objective, our method successfully models differing data populations while explicitly encouraging the isolation of the shared and private latent factors. This enables our model to learn useful shared structure across similar tasks and to disentangle cross-population representations in a weakly supervised way. We demonstrate the utility of our method on several applications including image denoising, sub-group discovery, and continual learning.""","""The paper proposes a hierarchical Bayesian model over multiple data sets that has both data set specific as well as shared parameters. The data set specific parameters are further encouraged to only capture aspects that vary across data sets by an additional mutual information contribution to the training loss. The proposed method is compared to standard VAEs on multiple data sets. The reviewers agree that the main approach of the paper is sensible. However, concerns were raised about general novelty, about the theoretical justification for the proposed loss function and about the lack of non-trivial baselines. The authors' rebuttal did not manage to fully address these points. Based on the reviews and my own reading, I think this paper is slightly below the acceptance threshold.""" 402,"""Sharing Knowledge in Multi-Task Deep Reinforcement Learning""","['Deep Reinforcement Learning', 'Multi-Task']","""We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning. We leverage the assumption that learning from different tasks that share common properties helps generalize knowledge across them, resulting in more effective feature extraction compared to learning a single task. Intuitively, the resulting set of features offers performance benefits when used by Reinforcement Learning algorithms. We prove this by providing theoretical guarantees that highlight the conditions for which it is convenient to share representations among tasks, extending the well-known finite-time bounds of Approximate Value-Iteration to the multi-task setting. In addition, we complement our analysis by proposing multi-task extensions of three Reinforcement Learning algorithms that we empirically evaluate on widely used Reinforcement Learning benchmarks, showing significant improvements over the single-task counterparts in terms of sample efficiency and performance.""","""This paper considers the benefits of deep multi-task RL with shared representations, by deriving multi-task approximate value and policy iteration bounds.
This shows both theoretically and empirically that shared representations across multiple tasks can outperform single task performance. There were a number of minor concerns from the reviewers regarding relation to prior work and details of the analysis, but these were clarified in the discussion. This paper adds important theoretical analysis to the literature, and so I recommend it be accepted.""" 403,"""Crafting Data-free Universal Adversaries with Dilate Loss""",[],"""We introduce a method to create Universal Adversarial Perturbations (UAP) for a given CNN in a data-free manner. Data-free approaches suit scenarios where the original training data is unavailable for crafting adversaries. We show that the adversary generation with full training data can be approximated by a formulation without data. This is realized through a sequential optimization of the adversarial perturbation with the proposed dilate loss. Dilate loss basically maximizes the Euclidean norm of the output before nonlinearity at any layer. By doing so, the perturbation constrains the ReLU activation function at every layer to act roughly linearly for data points, thus eliminating the dependency on data for crafting UAPs. Extensive experiments demonstrate that our method not only has theoretical support, but also achieves a higher fooling rate than existing data-free work. Furthermore, we show improvements in limited-data cases as well.""","""This paper focuses on finding universal adversarial perturbations, that is, a single noise pattern that can be applied to any input to fool the network in many cases. Furthermore, it focuses on the data-free setting, where such a perturbation is found without having access to data (images) from the distribution that train- and test data come from. The reviewers were very conflicted about this paper. Among others, the strong experimental results and the clarity of writing and analysis were praised. However, there was also criticism of the limited novelty compared to GDUAP, of the strong assumptions needed (potentially limiting the applicability), and of some weaknesses in the theoretical analysis. In the end, the paper in its current form does not seem convincing enough for me to recommend acceptance to ICLR. """ 404,"""Rigging the Lottery: Making All Tickets Winners""","['sparse training', 'sparsity', 'pruning', 'lottery tickets', 'imagenet', 'resnet', 'mobilenet', 'efficiency', 'optimization', 'local minima']","""Sparse neural networks have been shown to yield computationally efficient networks with improved inference times. There is a large body of work on training dense networks to yield sparse networks for inference (Molchanov et al., 2017; Zhu & Gupta, 2018; Louizos et al., 2017; Li et al., 2016; Guo et al., 2016). This limits the size of the largest trainable sparse model to that of the largest trainable dense model. In this paper we introduce a method to train sparse neural networks with a fixed parameter count and a fixed computational cost throughout training, without sacrificing accuracy relative to existing dense-to-sparse training methods. Our method updates the topology of the network during training by using parameter magnitudes and infrequent gradient calculations. We show that this approach requires fewer floating-point operations (FLOPs) to achieve a given level of accuracy compared to prior techniques. We demonstrate state-of-the-art sparse training results with ResNet-50, MobileNet v1 and MobileNet v2 on the ImageNet-2012 dataset.
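One plausible reading of the topology update described in the abstract above (drop by parameter magnitude, grow by occasional dense-gradient magnitude, at fixed parameter count) is sketched below; the update fraction and flat parameter layout are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def topology_update(w, mask, dense_grad, frac=0.1):
    """One sparse-topology update: drop the smallest-magnitude active weights,
    then grow the same number of inactive connections with the largest
    dense-gradient magnitude, keeping the total parameter count fixed."""
    n_swap = int(mask.sum() * frac)
    active = np.flatnonzero(mask)
    inactive = np.flatnonzero(mask == 0)

    drop = active[np.argsort(np.abs(w[active]))[:n_swap]]                 # weakest weights
    grow = inactive[np.argsort(-np.abs(dense_grad[inactive]))[:n_swap]]   # strongest grads

    mask[drop], mask[grow] = 0, 1
    w[drop] = 0.0
    w[grow] = 0.0          # new connections start at zero, then get trained
    return w, mask

w = rng.normal(size=1000)
mask = (rng.random(1000) < 0.2).astype(np.uint8)   # ~20% density
w = w * mask
grad = rng.normal(size=1000)                        # stand-in for an infrequent dense gradient

before = int(mask.sum())
w, mask = topology_update(w, mask, grad)
assert int(mask.sum()) == before    # parameter count stays fixed throughout training
```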
Finally, we provide some insights into why allowing the topology to change during the optimization can overcome local minima encountered when the topology remains static.""","""A somewhat new approach to growing sparse networks. Experimental validation is good, focussing on ImageNet and CIFAR-10, plus experiments on language modelling. Though efficient in computation and storage size, the approach does not have a theoretical foundation. That does not agree with the intended scope of ICLR. I strongly suggest the authors submit elsewhere.""" 405,"""Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model""",[],"""Deep reinforcement learning (RL) algorithms can use high-capacity deep networks to learn directly from image observations. However, these kinds of observation spaces present a number of challenges in practice, since the policy must now solve two problems: a representation learning problem, and a task learning problem. In this paper, we aim to explicitly learn representations that can accelerate reinforcement learning from images. We propose the stochastic latent actor-critic (SLAC) algorithm: a sample-efficient and high-performing RL algorithm for learning policies for complex continuous control tasks directly from high-dimensional image inputs. SLAC learns a compact latent representation space using a stochastic sequential latent variable model, and then learns a critic model within this latent space. By learning a critic within a compact state space, SLAC can learn much more efficiently than standard RL methods. The proposed model improves performance substantially over alternative representations as well, such as variational autoencoders. In fact, our experimental evaluation demonstrates that the sample efficiency of our resulting method is comparable to that of model-based RL methods that directly use a similar type of model for control. Furthermore, our method outperforms both model-free and model-based alternatives in terms of final performance and sample efficiency, on a range of difficult image-based control tasks. Our code and videos of our results are available at our website.""","""An actor-critic method is introduced that explicitly aims to learn a good representation using a stochastic latent variable model. There is disagreement among the reviewers regarding the significance of this paper. Two of the three reviewers argue that several strong claims made in the paper are not properly backed up by evidence. In particular, it is not sufficiently clear to what degree the shown performance improvement is due to the stochastic nature of the model used, one of the key points of the paper. I recommend that the authors provide more empirical evidence to back up their claims and then resubmit.""" 406,"""Shallow VAEs with RealNVP Prior Can Perform as Well as Deep Hierarchical VAEs""","['Variational Auto-encoder', 'RealNVP', 'learnable prior']","""Using powerful posterior distributions is a popular technique in variational inference. However, recent works showed that the aggregated posterior may fail to match the unit Gaussian prior, even with expressive posteriors, so learning the prior becomes an alternative way to improve the variational lower-bound. We show that using a learned RealNVP prior and just one latent variable in a VAE, we can achieve test NLL comparable to very deep state-of-the-art hierarchical VAEs, outperforming many previous works with complex hierarchical VAE architectures.
We hypothesize that, when coupled with Gaussian posteriors, the learned prior can encourage appropriate posterior overlapping, which is likely to improve the reconstruction loss and the lower bound, supported by our experimental results. We demonstrate that, with a learned RealNVP prior, β-VAE can have a better rate-distortion curve than with a fixed Gaussian prior.""","""This paper provides an interesting insight into the fitting of variational autoencoders. While much of the recent literature focuses on training ever more expressive models, the authors demonstrate that learning a flexible prior can provide an equally strong model. Unfortunately one review is somewhat terse. Among the other reviews, one reviewer found the paper very interesting and compelling but did not feel comfortable raising their score to ""accept"" in the discussion phase, citing a lack of compelling empirical results compared to baselines. Both reviewers were concerned about novelty in light of Huang et al., in which a RealNVP prior is also learned in a VAE. AnonReviewer3 also felt that the experiments were not thorough enough to back up the claims in the paper. Unfortunately, for these reasons the recommendation is to reject. More compelling empirical results with carefully chosen baselines to back up the claims of the paper and comparison to existing literature (Huang et al) would make this paper much stronger. """ 407,"""Decoupling Hierarchical Recurrent Neural Networks With Locally Computable Losses""",[],"""Learning long-term dependencies is a key long-standing challenge of recurrent neural networks (RNNs). Hierarchical recurrent neural networks (HRNNs) have been considered a promising approach as long-term dependencies are resolved through shortcuts up and down the hierarchy. Yet, the memory requirements of Truncated Backpropagation Through Time (TBPTT) still prevent training them on very long sequences. In this paper, we empirically show that in (deep) HRNNs, propagating gradients back from higher to lower levels can be replaced by locally computable losses, without harming the learning capability of the network, over a wide range of tasks. This decoupling by local losses reduces the memory requirements of training by a factor exponential in the depth of the hierarchy in comparison to standard TBPTT.""","""All reviewers gave this paper a score of 1. The AC recommends rejection.""" 408,"""Evaluations and Methods for Explanation through Robustness Analysis""","['Interpretability', 'Explanations', 'Adversarial Robustness']","""Among multiple ways of interpreting a machine learning model, measuring the importance of a set of features tied to a prediction is probably one of the most intuitive ways to explain a model. In this paper, we establish the link between a set of features and a prediction with a new evaluation criterion, robustness analysis, which measures the minimum tolerance of adversarial perturbation. By measuring the tolerance level for an adversarial attack, we can extract a set of features that provides the most robust support for a current prediction, and also can extract a set of features that contrasts the current prediction to a target class by setting a targeted adversarial attack.
By applying this methodology to various prediction tasks across multiple domains, we observed that the derived explanations indeed capture the significant feature set, qualitatively and quantitatively.""","""The paper proposes an approach for finding an explainable subset of features by choosing features that are simultaneously most important for the prediction task and robust against adversarial perturbation. The paper provides quantitative and qualitative evidence that the proposed method works. The paper had two reviews (both borderline), and while the authors responded enthusiastically, the reviewers did not engage further during the discussion period. The paper has a promising idea, but the reviewers found the presentation and execution in its current form unconvincing. Unfortunately, the submission as it stands is not yet suitable for ICLR.""" 409,"""Deep Hierarchical-Hyperspherical Learning (DH^2L)""",[],"""Regularization is known to be an inexpensive and reasonable solution to alleviate over-fitting problems of inference models, including deep neural networks. In this paper, we propose a hierarchical regularization which preserves the semantic structure of a sample distribution. At the same time, this regularization promotes diversity by enlarging the distance between parameter vectors within semantic structures. To generate evenly distributed parameters, we constrain them to lie on \emph{hierarchical hyperspheres}. Evenly distributed parameters are considered to be less redundant. To define the hierarchical parameter space, we propose to reformulate the topology space with multiple hypersphere spaces. On each hypersphere space, the projection parameter is defined by two individual parameters. Since maximizing the groupwise pairwise distance between points on a hypersphere is nontrivial (the generalized Thomson problem), we propose a new discrete metric integrated with a continuous angle metric. In extensive experiments on publicly available datasets (CIFAR-10, CIFAR-100, CUB200-2011, and Stanford Cars), our proposed method shows improved generalization performance, especially when the number of super-classes is larger.""","""The paper proposes a hierarchical diversity promoting regularizer for neural networks. Experiments are shown with this regularizer applied to the last fully-connected layer of the network, in addition to L2 and energy regularizers on other layers. Reviewers found the paper well-motivated but had concerns about the writing/readability of the paper and that it provides only marginal improvements over existing simple regularizers such as L2. I would encourage the authors to look for scenarios where the proposed regularizer can show clear improvements and resubmit to a future venue. """ 410,"""Self-Attentional Credit Assignment for Transfer in Reinforcement Learning""","['reinforcement learning', 'transfer learning', 'credit assignment']","""The ability to transfer knowledge to novel environments and tasks is a sensible desideratum for general learning agents. Despite the apparent promises, transfer in RL is still an open and little exploited research area. In this paper, we take a brand-new perspective about transfer: we suggest that the ability to assign credit unveils structural invariants in the tasks that can be transferred to make RL more sample efficient. Our main contribution is Secret, a novel approach to transfer learning for RL that uses a backward-view credit assignment mechanism based on a self-attentive architecture.
Two aspects are key to its generality: it learns to assign credit as a separate offline supervised process and exclusively modifies the reward function. Consequently, it can be supplemented by transfer methods that do not modify the reward function and it can be plugged on top of any RL algorithm.""","""The paper introduces a novel approach to transfer learning in RL based on credit assignment. The reviewers had quite diverse opinions on this paper. The strength of the paper is that it introduces an interesting new direction for transfer learning in RL. However, there are some questions regarding design choices and whether the experiments sufficiently validate the idea (i.e., the sensitivity to hyperparameters is a question that is not sufficiently addressed). Overall, this research has great potential. However, a more extensive empirical study is necessary before it can be accepted.""" 411,"""Critical initialisation in continuous approximations of binary neural networks""",[],"""The training of stochastic neural network models with binary ( pseudo-formula ) weights and activations via continuous surrogate networks is investigated. We derive new surrogates using a novel derivation based on writing the stochastic neural network as a Markov chain. This derivation also encompasses existing variants of the surrogates presented in the literature. Following this, we theoretically study the surrogates at initialisation. We derive, using mean field theory, a set of scalar equations describing how input signals propagate through the randomly initialised networks. The equations reveal whether so-called critical initialisations exist for each surrogate network, where the network can be trained to arbitrary depth. Moreover, we predict theoretically, and confirm numerically, that common weight initialisation schemes used in standard continuous networks, when applied to the mean values of the stochastic binary weights, yield poor training performance. This study shows that, contrary to common intuition, the means of the stochastic binary weights should be initialised close to ±1 for deeper networks to be trainable.""","""The authors study neural networks with binary weights or activations, and the so-called ""differentiable surrogates"" used to train them. They present an analysis that unifies previously proposed surrogates and they study critical initialization of weights to facilitate trainability. The reviewers agree that the main topic of the paper is important (in particular initialization heuristics of neural networks), however they found the presentation of the content lacking in clarity as well as in clearly emphasizing the main contributions. The authors improved the readability of the manuscript in the rebuttal. This paper seems to be at the acceptance threshold and 2 of 3 reviewers indicated low confidence. Not being familiar with this line of work, I recommend acceptance following the average review score.""" 412,"""MMD GAN with Random-Forest Kernels""","['GANs', 'MMD', 'kernel', 'random forest', 'unbiased gradients']","""In this paper, we propose a novel kind of kernel, the random-forest kernel, to enhance the empirical performance of MMD GAN. Different from common forests with deterministic routing, a probabilistic routing variant is used in our random-forest kernel, which makes it possible to merge with CNN frameworks.
Our proposed random-forest kernel has the following advantages: From the random-forest perspective, the output of the GAN discriminator can be viewed as feature inputs to the forest, where each tree gets access to merely a fraction of the features, and thus the entire forest benefits from ensemble learning. From the kernel-method perspective, the random-forest kernel is proven to be characteristic, and therefore suitable for the MMD structure. Besides, being an asymmetric kernel, our random-forest kernel is much more flexible in capturing the differences between distributions. Sharing the advantages of CNNs, kernel methods, and ensemble learning, our random-forest kernel based MMD GAN obtains desirable empirical performance on the CIFAR-10, CelebA and LSUN bedroom data sets. Furthermore, for the sake of completeness, we also put forward a comprehensive theoretical analysis to support our experimental results.""","""Reviewers raise the serious issue that the proof of Theorem 2 is plagiarized from Theorem 1 of ""Demystifying MMD GANs"" (pseudo-url). With no response from the authors, this is a clear reject. """ 413,"""GraphSAINT: Graph Sampling Based Inductive Learning Method""","['Graph Convolutional Networks', 'Graph sampling', 'Network embedding']","""Graph Convolutional Networks (GCNs) are powerful models for learning representations of attributed graphs. To scale GCNs to large graphs, state-of-the-art methods use various layer sampling techniques to alleviate the ""neighbor explosion"" problem during minibatch training. We propose GraphSAINT, a graph sampling based inductive learning method that improves training efficiency and accuracy in a fundamentally different way. By changing perspective, GraphSAINT constructs minibatches by sampling the training graph, rather than the nodes or edges across GCN layers. In each iteration, a complete GCN is built from the properly sampled subgraph. Thus, we ensure a fixed number of well-connected nodes in all layers. We further propose a normalization technique to eliminate bias, and sampling algorithms for variance reduction. Importantly, we can decouple the sampling from the forward and backward propagation, and extend GraphSAINT with many architecture variants (e.g., graph attention, jumping connection). GraphSAINT demonstrates superior performance in both accuracy and training time on five large graphs, and achieves new state-of-the-art F1 scores for PPI (0.995) and Reddit (0.970). ""","""All three reviewers advocated acceptance. The AC agrees, feeling the paper is interesting. """ 414,"""Oblique Decision Trees from Derivatives of ReLU Networks""","['oblique decision trees', 'ReLU networks']","""We show how neural models can be used to realize piece-wise constant functions such as decision trees. The proposed architecture, which we call locally constant networks, builds on ReLU networks that are piece-wise linear and hence their associated gradients with respect to the inputs are locally constant. We formally establish the equivalence between the classes of locally constant networks and decision trees. Moreover, we highlight several advantageous properties of locally constant networks, including how they realize decision trees with parameter sharing across branching / leaves. Indeed, only pseudo-formula neurons suffice to implicitly model an oblique decision tree with pseudo-formula leaf nodes.
The neural representation also enables us to adopt many tools developed for deep networks (e.g., DropConnect (Wan et al., 2013)) while implicitly training decision trees. We demonstrate that our method outperforms alternative techniques for training oblique decision trees in the context of molecular property classification and regression tasks. ""","""This paper leverages the piecewise linearity of predictions in ReLU neural networks to encode and learn piecewise constant predictors akin to oblique decision trees. The reviewers think the paper is interesting, and the idea is clever. The paper can be further improved in experiments. This includes comparison to ensembles of traditional trees or (in some cases) simple ReLU networks. The tradeoffs other than accuracy between the method and the baselines are also interesting. """ 415,"""INFERENCE, PREDICTION, AND ENTROPY RATE OF CONTINUOUS-TIME, DISCRETE-EVENT PROCESSES""",['continuous-time prediction'],"""The inference of models, prediction of future symbols, and entropy rate estimation of discrete-time, discrete-event processes is well-worn ground. However, many time series are better conceptualized as continuous-time, discrete-event processes. Here, we provide new methods for inferring models, predicting future symbols, and estimating the entropy rate of continuous-time, discrete-event processes. The methods rely on an extension of Bayesian structural inference that takes advantage of neural networks' universal approximation power. Based on experiments with simple synthetic data, these new methods seem to be competitive with state-of-the-art methods for prediction and entropy rate estimation as long as the correct model is inferred.""","""The authors present a Bayesian model for time series which are represented as discrete events in continuous time and describe methods for doing parameter inference, future event prediction and entropy rate estimation for such processes. However, the reviewers find that the novelty of the paper is not high enough and that it lacks sufficient acknowledgement of, and comparison to, existing literature.""" 416,"""Wildly Unsupervised Domain Adaptation and Its Powerful and Efficient Solution""",[],"""In unsupervised domain adaptation (UDA), classifiers for the target domain (TD) are trained with clean labeled data from the source domain (SD) and unlabeled data from TD. However, in the wild, it is hard to acquire a large amount of perfectly clean labeled data in SD given a limited budget. Hence, we consider a new, more realistic and more challenging problem setting, where classifiers have to be trained with noisy labeled data from SD and unlabeled data from TD---we name it wildly UDA (WUDA). We show that WUDA ruins all UDA methods if no care is taken of label noise in SD, and to this end, we propose a Butterfly framework, a powerful and efficient solution to WUDA. Butterfly maintains four models (e.g., deep networks) simultaneously, where two take care of all adaptations (i.e., noisy-to-clean, labeled-to-unlabeled, and SD-to-TD-distributional) and then the other two can focus on classification in TD. As a consequence, Butterfly possesses all the conceptually necessary components for solving WUDA. Experiments demonstrate that under WUDA, Butterfly significantly outperforms existing baseline methods.""","""The authors proposed a new problem setting called Wildly UDA (WUDA) where the labels in the source domain are noisy.
They then proposed the ""butterfly"" method, combining co-teaching with pseudo labeling, and evaluated the method on a range of WUDA problem setups. In general, there is a concern that Butterfly, as a combination of co-teaching and pseudo labeling, is weak on the novelty side. In this case the value of the method can be assessed by strong empirical results. However, as pointed out by Reviewer 3, a common setup (SVHN<->MNIST) that appeared in many UDA papers was missing in the original draft. The authors added the result for SVHN<->MNIST in response to Reviewer 3; however, they only considered the UDA setting, not WUDA, hence the value of that experiment was limited. In addition, there are other UDA methods that achieve significantly better performance on SVHN<->MNIST that should be considered among the baselines. For example, DIRT-T (Shu et al 2018) has a second phase where the decision boundary on the target domain is adjusted, and that could provide some robustness against a decision boundary affected by noise. Shu et al (2018) A DIRT-T Approach to Unsupervised Domain Adaptation. ICLR 2018. pseudo-url I suggest that the authors consider performing the full experiment with WUDA using SVHN<->MNIST, and also consider the use of stronger UDA methods among the baselines. """ 417,"""CP-GAN: Towards a Better Global Landscape of GANs""","['GAN', 'global landscape', 'non-convex optimization', 'min-max optimization', 'dynamics']","""GANs have been very popular in data generation and unsupervised learning, but our understanding of GAN training is still very limited. One major reason is that GANs are often formulated as non-convex-concave min-max optimization. As a result, most recent studies have focused on the analysis in the local region around the equilibrium. In this work, we perform a global analysis of GANs from two perspectives: the global landscape of the outer-optimization problem and the global behavior of the gradient descent dynamics. We find that the original GAN has exponentially many bad strict local minima which are perceived as mode-collapse, and the training dynamics (with linear discriminators) cannot escape mode collapse. To address these issues, we propose a simple modification to the original GAN, by coupling the generated samples and the true samples. We prove that the new formulation has no bad basins, and its training dynamics (with linear discriminators) has a Lyapunov function that leads to global convergence. Our experiments on standard datasets show that this simple loss outperforms the original GAN and WGAN-GP. ""","""The paper is proposed for rejection based on the majority of reviews.""" 418,"""Towards Modular Algorithm Induction""","['algorithm induction', 'reinforcement learning', 'program synthesis', 'modular']","""We present a modular neural network architecture MAIN that learns algorithms given a set of input-output examples. MAIN consists of a neural controller that interacts with a variable-length input tape and learns to compose modules together with their corresponding argument choices. Unlike previous approaches, MAIN uses a general domain-agnostic mechanism for selection of modules and their arguments. It uses a general input tape layout together with a parallel history tape to indicate the most recently used locations. Finally, it uses a memoryless controller with a length-invariant self-attention based input tape encoding to allow for random access to tape locations.
The MAIN architecture is trained end-to-end using reinforcement learning from a set of input-output examples. We evaluate MAIN on five algorithmic tasks and show that it can learn policies that generalize perfectly to inputs of much longer lengths than the ones used for training.""","""The reviewers all agreed that although there is a sensible idea here, the method and presentation need a lot of work, especially the treatment of related methods.""" 419,"""CloudLSTM: A Recurrent Neural Model for Spatiotemporal Point-cloud Stream Forecasting""","['spatio-temporal forecasting', 'point cloud stream forecasting', 'recurrent neural network']","""This paper introduces CloudLSTM, a new branch of recurrent neural models tailored to forecasting over data streams generated by geospatial point-cloud sources. We design a Dynamic Point-cloud Convolution (D-Conv) operator as the core component of CloudLSTMs, which performs convolution directly over point-clouds and extracts local spatial features from sets of neighboring points that surround different elements of the input. This operator maintains the permutation invariance of sequence-to-sequence learning frameworks, while representing neighboring correlations at each time step -- an important aspect in spatiotemporal predictive learning. The D-Conv operator resolves the grid-structural data requirements of existing spatiotemporal forecasting models and can be easily plugged into traditional LSTM architectures with sequence-to-sequence learning and attention mechanisms. We apply our proposed architecture to two representative, practical use cases that involve point-cloud streams, i.e. mobile service traffic forecasting and air quality indicator forecasting. Our results, obtained with real-world datasets collected in diverse scenarios for each use case, show that CloudLSTM delivers accurate long-term predictions, outperforming a variety of neural network models.""","""The paper presents an approach to forecasting over temporal streams of permutation-invariant data such as point clouds. The approach is based on an operator (DConv) that is related to continuous convolution operators such as X-Conv and others. The reviews are split. After the authors' responses, concerns remain and two ratings remain ""3"". The AC agrees with the concerns and recommends against accepting the paper.""" 420,"""V4D: 4D Convolutional Neural Networks for Video-level Representation Learning""","['video-level representation learning', 'video action recognition', '4D CNNs']","""Most existing 3D CNN structures for video representation learning are clip-based methods, and do not consider video-level temporal evolution of spatio-temporal features. In this paper, we propose Video-level 4D Convolutional Neural Networks, namely V4D, to model the evolution of long-range spatio-temporal representation with 4D convolutions, as well as preserving 3D spatio-temporal representations with residual connections. We further introduce the training and inference methods for the proposed V4D. Extensive experiments are conducted on three video recognition benchmarks, where V4D achieves excellent results, surpassing recent 3D CNNs by a large margin.""","""This paper proposes video-level 4D CNNs and the corresponding training and inference methods for improved video representation learning. The proposed model achieves state-of-the-art performance on three action recognition tasks.
Reviewers agree that the idea is well motivated and interesting, but were initially concerned with positioning with respect to the related work, novelty, and computational tractability. As these issues were mostly resolved during the discussion phase, I will recommend the acceptance of this paper. We ask the authors to address the points raised during the discussion in the manuscript, with a focus on the tradeoff between the improved performance and computational cost.""" 421,"""Event Discovery for History Representation in Reinforcement Learning""","['reinforcement learning', 'self-supervision', 'POMDP']","""Environments in Reinforcement Learning (RL) are usually only partially observable. To address this problem, a possible solution is to provide the agent with information about past observations. While common methods represent this history using a Recurrent Neural Network (RNN), in this paper we propose an alternative representation which is based on the record of the past events observed in a given episode. Inspired by human memory, these events describe only important changes in the environment and, in our approach, are automatically discovered using self-supervision. We evaluate our history representation method using two challenging RL benchmarks: some games of the Atari-57 suite and the 3D environment Obstacle Tower. Using these benchmarks we show the advantage of our solution with respect to common RNN-based approaches.""","""The authors propose approaches to handle partial observability in reinforcement learning. The reviewers agree that the paper does not sufficiently justify the methods that are proposed, and even the experimental performance shows that the proposed method is not always better than baselines.""" 422,"""MissDeepCausal: causal inference from incomplete data using deep latent variable models""","['treatment effect estimation', 'missing values', 'variational autoencoders', 'importance sampling', 'double robustness']","""Inferring causal effects of a treatment, intervention or policy from observational data is central to many applications. However, state-of-the-art methods for causal inference seldom consider the possibility that covariates have missing values, which is ubiquitous in many real-world analyses. Missing data greatly complicate causal inference procedures as they require an adapted unconfoundedness hypothesis which can be difficult to justify in practice. We circumvent this issue by considering latent confounders whose distribution is learned through variational autoencoders adapted to missing values. They can be used as a pre-processing step prior to causal inference, but we also suggest embedding them in a multiple imputation strategy to take into account the variability due to missing values. Numerical experiments demonstrate the effectiveness of the proposed methodology especially for non-linear models compared to competitors.""","""This paper addresses the problem of causal inference from incomplete data. The main idea is to model latent confounders through a VAE. A multiple imputation strategy is then used to account for missing values. Reviewers have mixed responses to this paper. Initially, the scores were 8,6,3. After discussion the reviewer who rated it 8 reduced their score to 6, but at the same time the score of 3 went up to 6. The reviewers agree that the problem tackled in the paper is difficult, and also acknowledge that the rebuttal of the paper was reasonable and honest.
The authors added a simulation study which shows good results. The main argument towards rejection is that the paper does not beat the state of the art. I do think that this is still ok if the paper brings useful insights for the community even though it does not beat the state of the art. For now, with the current score, the paper does not make the cut. For this reason, I recommend rejecting the paper, but I encourage the authors to resubmit this to another venue after improving the paper.""" 423,"""How noise affects the Hessian spectrum in overparameterized neural networks""","['noise', 'optimization', 'loss landscape', 'Hessian']","""Stochastic gradient descent (SGD) forms the core optimization method for deep neural networks. While some theoretical progress has been made, it still remains unclear why SGD leads the learning dynamics in overparameterized networks to solutions that generalize well. Here we show that for overparameterized networks with a degenerate valley in their loss landscape, SGD on average decreases the trace of the Hessian of the loss. We also generalize this result to other noise structures and show that isotropic noise in the non-degenerate subspace of the Hessian decreases its determinant. In addition to explaining SGD's role in sculpting the Hessian spectrum, this opens the door to new optimization approaches that may confer better generalization performance. We test our results with experiments on toy models and deep neural networks.""","""The study of the impact of the noise on the Hessian is interesting and I commend the authors for attacking this difficult problem. After the rebuttal and discussion, the reviewers had two concerns: (1) the strength of the assumptions of the theorem, and (2) assuming the assumptions are reasonable, the conclusions to draw given the current weak link between the Hessian and generalization. I'm confident the authors will be able to address these issues for a later submission.""" 424,"""Unsupervised Few Shot Learning via Self-supervised Training""","['few shot learning', 'self-supervised learning', 'meta-learning']","""Learning from limited exemplars (few-shot learning) is a fundamental, unsolved problem that has been laboriously explored in the machine learning community. However, current few-shot learners are mostly supervised and rely heavily on a large amount of labeled examples. Unsupervised learning is a more natural procedure for cognitive mammals and has produced promising results in many machine learning tasks. In the current study, we develop a method to learn an unsupervised few-shot learner via self-supervised training (UFLST), which can effectively generalize to novel but related classes. The proposed model consists of two alternate processes, progressive clustering and episodic training. The former generates pseudo-labeled training examples for constructing episodic tasks, and the latter trains the few-shot learner using the generated episodic tasks, which further optimizes the feature representations of the data. The two processes facilitate each other, and eventually produce a high-quality few-shot learner. Using the benchmark dataset Omniglot, we show that our model outperforms other unsupervised few-shot learning methods to a large extent and approaches the performance of supervised methods.
Using the benchmark dataset Market1501, we further demonstrate the feasibility of our model in a real-world application: person re-identification.""","""This paper proposes an approach for unsupervised meta-learning for few-shot learning that iteratively combines clustering and episodic learning. The approach is interesting, and the topic is of interest to the ICLR community. Further, it is nice to see experiments on a more real-world setting with the Market1501 dataset. However, the paper lacks any meaningful comparison to prior works on unsupervised meta-learning. While it is accurate that the architecture used and/or assumptions used in this paper are somewhat different from those in prior works, it's important to find a way to compare to at least one of these prior methods in a meaningful way (e.g. by setting up a controlled comparison by running these prior methods in the experimental set-up considered in this work). Without such a comparison, it's impossible to judge the significance of this work in the context of prior papers. The paper isn't ready for publication at ICLR.""" 425,"""Where is the Information in a Deep Network?""","['Information', 'Learning Dynamics', 'PAC-Bayes', 'Deep Learning']","""Whatever information a deep neural network has gleaned from past data is encoded in its weights. How this information affects the response of the network to future data is largely an open question. In fact, even how to define and measure information in a network entails some subtleties. We measure information in the weights of a deep neural network as the optimal trade-off between accuracy of the network and complexity of the weights relative to a prior. Depending on the prior, the definition reduces to known information measures such as Shannon Mutual Information and Fisher Information, but in general it affords added flexibility that enables us to relate it to generalization, via the PAC-Bayes bound, and to invariance. For the latter, we introduce a notion of effective information in the activations, which are deterministic functions of future inputs. We relate this to the Information in the Weights, and use this result to show that models of low (information) complexity not only generalize better, but are bound to learn invariant representations of future inputs. These relations hinge not only on the architecture of the model, but also on how it is trained.""","""This paper is full of ideas. However, a logical argument is only as strong as its weakest link, and I believe the current paper has some weak links. For example, the attempt to tie the behavior of SGD to free energy minimization relies on unrealistic approximations. Second, the bounds based on limiting flat priors become trivial. The authors' in-depth response to my own review was much appreciated, especially given its last-minute appearance. Unfortunately, I was not convinced by the arguments. In part, the authors argue that the logical argument they are making is not sensitive to certain issues that I raised, but this only highlights for me that the argument being made is not very precise. I can imagine a version of this work with sharper claims, built on clearly stated assumptions/conjectures about SGD's dynamics, RATHER THAN being framed as the consequences of clearly inaccurate approximations. The behavior of diffusions can be presented as evidence that the assumptions/conjectures (that cannot be proven at the moment, but which are needed to complete the logical argument) are reasonable.
However, I am also not convinced that it is trivial to do this, and so the community must have a chance to review a major revision.""" 426,"""Picking Winning Tickets Before Training by Preserving Gradient Flow""","['neural network', 'pruning before training', 'weight pruning']","""Overparameterization has been shown to benefit both the optimization and generalization of neural networks, but large networks are resource-hungry at both training and test time. Network pruning can reduce test-time resource requirements, but is typically applied to trained networks and therefore cannot avoid the expensive training process. We aim to prune networks at initialization, thereby saving resources at training time as well. Specifically, we argue that efficient training requires preserving the gradient flow through the network. This leads to a simple but effective pruning criterion we term Gradient Signal Preservation (GraSP). We empirically investigate the effectiveness of the proposed method with extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, using VGGNet and ResNet architectures. Our method can prune 80% of the weights of a VGG-16 network on ImageNet at initialization, with only a 1.6% drop in top-1 accuracy. Moreover, our method achieves significantly better performance than the baseline at extreme sparsity levels. Our code is made public at: pseudo-url.""","""This paper proposes a method to improve the training of sparse networks by ensuring the gradient is preserved at initialization. The reviewers found that the approach was well motivated and well explained. The experimental evaluation considers challenging benchmarks such as Imagenet and includes strong baselines. """ 427,"""LAMOL: LAnguage MOdeling for Lifelong Language Learning""","['NLP', 'Deep Learning', 'Lifelong Learning']","""Most research on lifelong learning applies to images or games, but not language. We present LAMOL, a simple yet effective method for lifelong language learning (LLL) based on language modeling. LAMOL replays pseudo-samples of previous tasks while requiring no extra memory or model capacity. Specifically, LAMOL is a language model that simultaneously learns to solve the tasks and generate training samples. When the model is trained for a new task, it generates pseudo-samples of previous tasks for training alongside data for the new task. The results show that LAMOL prevents catastrophic forgetting without any sign of intransigence and can perform five very different language tasks sequentially with only one model. Overall, LAMOL outperforms previous methods by a considerable margin and is only 2-3% worse than multitasking, which is usually considered the LLL upper bound. The source code is available at pseudo-url.""","""This paper proposes a new method for lifelong learning of language using language modeling. Their training scheme is designed so as to prevent catastrophic forgetting. The reviewers found the motivation clear and that the proposed method outperforms prior related work. Reviewers raised concerns about the title and the lack of some baselines, which the authors have addressed in the rebuttal and their revision.""" 428,"""Information-Theoretic Local Minima Characterization and Regularization""","['local minima', 'generalization', 'regularization', 'deep learning theory']","""Recent advances in deep learning theory have evoked the study of generalizability across different local minima of deep neural networks (DNNs).
While current work has focused on either discovering properties of good local minima or developing regularization techniques to induce good local minima, no approach exists that can tackle both problems. We achieve these two goals successfully in a unified manner. Specifically, based on the Fisher information, we propose a metric that is both strongly indicative of the generalizability of local minima and can be effectively applied as a practical regularizer. We provide theoretical analysis including a generalization bound and empirically demonstrate the success of our approach in both capturing and improving the generalizability of DNNs. Experiments are performed on CIFAR-10 and CIFAR-100 for various network architectures.""","""This paper proposes using the Fisher information matrix to characterize local minima of deep network loss landscapes to indicate generalizability of a local minimum. While the reviewers agree that this paper contains interesting ideas and its presentation has been substantially improved during the discussion period, there are still issues that remain unanswered, in particular the gap between the main objectives/claims and the presented evidence. The paper will benefit from a revision and resubmission to another venue.""" 429,"""Temporal-difference learning for nonlinear value function approximation in the lazy training regime""","['deep reinforcement learning', 'function approximation', 'temporal-difference', 'lazy training']","""We discuss the approximation of the value function for infinite-horizon discounted Markov Reward Processes (MRP) with nonlinear functions trained with the Temporal-Difference (TD) learning algorithm. We consider this problem under a certain scaling of the approximating function, leading to a regime called lazy training. In this regime the parameters of the model vary only slightly during the learning process, a feature that has recently been observed in the training of neural networks, where the scaling we study arises naturally, implicit in the initialization of their parameters. Both in the under- and over-parametrized frameworks, we prove exponential convergence to local, respectively global minimizers of the above algorithm in the lazy training regime. We then give examples of such convergence results in the case of models that diverge if trained with non-lazy TD learning, and in the case of neural networks.""","""This paper provides convergence results for Non-linear TD under lazy training. This paper tackles the important and challenging task of improving our theoretical understanding of deep RL. We have lots of empirical evidence that Q-learning and TD can work with NNs, and even empirical work that attempts to characterize when we should expect it to fail. Such empirical work is always limited and we need theory to supplement our empirical knowledge. This paper attempts to extend recent theoretical work on the convergence of supervised training of NNs to the policy evaluation setting with TD. The main issue revolves around the presentation of the work. The reviewers found the paper difficult to read (ok for theory work). But, the paper did not clearly discuss and characterize the significance of the work: how limited is the lazy training regime, when would it be useful? Now that we have this result, do we have any more insights for algorithm design (improving nonlinear TD), or comments about when we expect NN policy evaluation to work?
This all reads like: the paper needs a better intro and discussion of the implications and limitations of the results, and indeed this is what the reviewers were looking for. Unfortunately, the author response and paper submitted were lacking in this respect. Even the strongest advocates of the work found it severely lacking explanation and discussion. They felt that the paper could be accepted, but only after extensive revision. The direction of the work is important. The work is novel, and not a small undertaking. However, to be published the authors should spend more time explaining the framework, the results, and the limitations to the reader. """ 430,"""Optimal Attacks on Reinforcement Learning Policies""",[],"""Control policies, trained using Deep Reinforcement Learning, have been recently shown to be vulnerable to adversarial attacks introducing even very small perturbations to the policy input. The attacks proposed so far have been designed using heuristics, and build on existing adversarial example crafting techniques used to dupe classifiers in supervised learning. In contrast, this paper investigates the problem of devising optimal attacks, depending on a well-defined attacker's objective, e.g., to minimize the main agent's average reward. When the policy and the system dynamics, as well as rewards, are known to the attacker, a scenario referred to as a white-box attack, designing optimal attacks amounts to solving a Markov Decision Process. For what we call black-box attacks, where neither the policy nor the system is known, optimal attacks can be trained using Reinforcement Learning techniques. Through numerical experiments, we demonstrate the efficiency of our attacks compared to existing attacks (usually based on Gradient methods). We further quantify the potential impact of attacks and establish their connection to the smoothness of the policy under attack. Smooth policies are naturally less prone to attacks (this explains why Lipschitz policies, with respect to the state, are more resilient). Finally, we show that from the main agent's perspective, the system uncertainties and the attacker can be modelled as a Partially Observable Markov Decision Process. We actually demonstrate that using Reinforcement Learning techniques tailored to POMDPs (e.g. using Recurrent Neural Networks) leads to more resilient policies. ""","""This paper studies the problem of devising optimal attacks in deep RL to minimize the main agent's average reward. In the white-box attack setting, designing optimal attacks amounts to solving a Markov Decision Process, while in black-box attacks, optimal attacks can be trained using RL techniques. Empirical efficiency of the attacks was demonstrated. It makes valuable contributions to the study of adversarial robustness in deep RL. However, the current motivation and setup need to be made clearer, so the paper is not being accepted at this time. We hope for these comments to help improve a future version.""" 431,"""How much Position Information Do Convolutional Neural Networks Encode?""","['network understanding', 'absolute position information']","""In contrast to fully connected networks, Convolutional Neural Networks (CNNs) achieve efficiency by learning weights associated with local filters with a finite spatial extent. An implication of this is that a filter may know what it is looking at, but not where it is positioned in the image.
Information concerning absolute position is inherently useful, and it is reasonable to assume that deep CNNs may implicitly learn to encode this information if there is a means to do so. In this paper, we test this hypothesis, revealing the surprising degree of absolute position information that is encoded in commonly used neural networks. A comprehensive set of experiments shows the validity of this hypothesis and sheds light on how and where this information is represented while offering clues to where positional information is derived from in deep CNNs.""","""This paper analyzes the weights associated with filters in CNNs and finds that they encode positional information (i.e. near the edges of the image). A detailed discussion and analysis is performed, which shows where this positional information comes from. The reviewers were happy with your paper and found it to be quite interesting. The reviewers felt your paper addressed an important (and surprising!) issue not previously recognized in CNNs.""" 432,"""Fully Convolutional Graph Neural Networks using Bipartite Graph Convolutions""","['Graph Neural Networks', 'Graph Convolutional Networks']","""Graph neural networks have been adopted in numerous applications ranging from learning relational representations to modeling data on irregular domains such as point clouds, social graphs, and molecular structures. Though diverse in nature, graph neural network architectures remain limited by the graph convolution operator whose input and output graphs must have the same structure. With this restriction, representational hierarchy can only be built by graph convolution operations followed by non-parameterized pooling or expansion layers. This is very much like early convolutional network architectures, which were later replaced by more effective parameterized strided and transpose convolution operations in combination with skip connections. In order to bring a similar change to graph convolutional networks, here we introduce the bipartite graph convolution operation, a parameterized transformation between different input and output graphs. Our framework is general enough to subsume conventional graph convolution and pooling as its special cases and supports multi-graph aggregation leading to a class of flexible and adaptable network architectures, termed BiGraphNet. By replacing the sequence of graph convolution and pooling in hierarchical architectures with a single parametric bipartite graph convolution, (i) we answer the question of whether graph pooling matters, and (ii) accelerate computations and lower memory requirements in hierarchical networks by eliminating pooling layers. Then, with concrete examples, we demonstrate that the general BiGraphNet formalism (iii) provides the modeling flexibility to build efficient architectures such as graph skip connections, and autoencoders.""","""All three reviewers are consistently negative on this paper. Thus a reject is recommended.""" 433,"""Generating Semantic Adversarial Examples with Differentiable Rendering""","['semantic adversarial examples', 'inverse graphics', 'differentiable rendering']","""Machine learning (ML) algorithms, especially deep neural networks, have demonstrated success in several domains. However, several types of attacks have raised concerns about deploying ML in safety-critical domains, such as autonomous driving and security. An attacker perturbs a data point slightly in the pixel space and causes the ML algorithm to misclassify (e.g.
a perturbed stop sign is classified as a yield sign). These perturbed data points are called adversarial examples, and there are numerous algorithms in the literature for constructing adversarial examples and defending against them. In this paper we explore semantic adversarial examples (SAEs) where an attacker creates perturbations in the semantic space. For example, an attacker can change the background of the image to be cloudier to cause misclassification. We present an algorithm for constructing SAEs that uses recent advances in differentiable rendering and inverse graphics. ""","""The authors present a way for generating adversarial examples using discrete perturbations, i.e., perturbations that, unlike pixel ones, carry some semantics. Thus, in order to do so, they assume the existence of an inverse graphics framework. Results are conducted on the VKITTI dataset. Overall, the main serious concern expressed by the reviewers has to do with the general applicability of this method, since it requires an inverse graphics framework, which all-in-all is not a trivial task, so it is not clear how such a method would scale to more real datasets. A secondary concern has to do with the fact that the proposed method seems to be mostly a way to perform semantic data-augmentation rather than a way to avoid malicious attacks. In the latter case, we would want to know something about the generality of this method (e.g., what happens if a model is trained against these attacks but then a pixel-based attack is applied). As such, I do not believe that this submission is ready for publication at ICLR. However, the technique is an interesting idea; it would be valuable if a later submission provided empirical evidence investigating the generality of this idea. """ 434,"""A Dynamic Approach to Accelerate Deep Learning Training""","['reduced precision', 'bfloat16', 'CNN', 'DNN', 'dynamic precision', 'mixed precision']","""Mixed-precision arithmetic combining both single- and half-precision operands in the same operation has been successfully applied to train deep neural networks. Despite the advantages of mixed-precision arithmetic in terms of reducing the need for key resources like memory bandwidth or register file size, it has a limited capacity for diminishing computing costs and requires 32 bits to represent its output operands. This paper proposes two approaches to replace mixed-precision with half-precision arithmetic during a large portion of the training. The first approach achieves accuracy ratios slightly lower than the state-of-the-art by using half-precision arithmetic during more than 99% of training. The second approach reaches the same accuracy as the state-of-the-art by dynamically switching between half- and mixed-precision arithmetic during training. It uses half-precision during more than 94% of the training process. This paper is the first to demonstrate that half-precision can be used for a very large portion of DNN training and still reach state-of-the-art accuracy.""","""The submission proposes a dynamic approach to training a neural net which switches between half and full-precision operations while maintaining the same classifier accuracy, resulting in a speed up in training time. Empirical results show the value of the approach, and the authors have added additional sensitivity analysis by sweeping over hyperparameters.
The reviewers were concerned about the novelty of the approach as well as the robustness of the claims that accuracy can be maintained even in the accelerated, dynamic regime. After discussion, there were still concerns about the sensitivity analysis and the significance of the results. The recommendation is to reject the paper at this time.""" 435,"""Curvature Graph Network""","['Deep Learning', 'Graph Convolution', 'Ricci Curvature.']","""Graph-structured data is prevalent in many domains. Despite the widely celebrated success of deep neural networks, their power in graph-structured data is yet to be fully explored. We propose a novel network architecture that incorporates advanced graph structural features. In particular, we leverage discrete graph curvature, which measures how the neighborhoods of a pair of nodes are structurally related. The curvature of an edge (x, y) defines the distance taken to travel from neighbors of x to neighbors of y, compared with the length of edge (x, y). It is a much more descriptive feature compared to previously used features that only focus on node-specific attributes or limited topological information such as degree. Our curvature graph convolution network outperforms the state of the art on various synthetic and real-world graphs, especially the larger and denser ones.""","""The paper presents a novel graph convolutional network by integrating the curvature information (based on the concept of Ricci curvature). The key idea is well motivated and the paper is clearly written. Experimental results show that the proposed curvature graph network methods outperform existing graph convolution algorithms. One potential limitation is the computational cost of computing the Ricci curvature, which is discussed in the appendix. Overall, the concept of using curvature in graph convolutional networks seems like a novel and promising idea, and I also recommend acceptance.""" 436,"""Disentanglement by Nonlinear ICA with General Incompressible-flow Networks (GIN)""","['disentanglement', 'nonlinear ICA', 'representation learning', 'feature discovery', 'theoretical justification']","""A central question of representation learning asks under which conditions it is possible to reconstruct the true latent variables of an arbitrarily complex generative process. Recent breakthrough work by Khemakhem et al. (2019) on nonlinear ICA has answered this question for a broad class of conditional generative processes. We extend this important result in a direction relevant for application to real-world data. First, we generalize the theory to the case of unknown intrinsic problem dimension and prove that in some special (but not very restrictive) cases, informative latent variables will be automatically separated from noise by an estimating model. Furthermore, the recovered informative latent variables will be in one-to-one correspondence with the true latent variables of the generating process, up to a trivial component-wise transformation. Second, we introduce a modification of the RealNVP invertible neural network architecture (Dinh et al. (2016)) which is particularly suitable for this type of problem: the General Incompressible-flow Network (GIN). Experiments on artificial data and EMNIST demonstrate that theoretical predictions are indeed verified in practice. In particular, we provide a detailed set of exactly 22 informative latent variables extracted from EMNIST.""","""This paper builds on the recent theoretical work by Khemakhem et al.
(2019) to propose a novel flow-based method for performing non-linear ICA. The paper is well written, includes theoretical justifications for the proposed approach, and provides convincing experimental results. Many of the initial minor concerns raised by the reviewers were addressed during the discussion stage, and all of the reviewers agree that this paper is an important contribution to the field and hence should be accepted. Hence, I am happy to recommend the acceptance of this paper as an oral. """ 437,"""Molecular Graph Enhanced Transformer for Retrosynthesis Prediction""",[],"""With massive possible synthetic routes in chemistry, retrosynthesis prediction is still a challenge for researchers. Recently, retrosynthesis prediction has been formulated as a Machine Translation (MT) task. Namely, since each molecule can be represented as a Simplified Molecular-Input Line-Entry System (SMILES) string, the process of synthesis is analogized to a process of language translation from reactants to products. However, the MT models applied to SMILES data usually ignore the information of natural atomic connections and the topology of molecules. In this paper, we propose a Graph Enhanced Transformer (GET) framework, which adopts both the sequential and graphical information of molecules. Four different GET designs are proposed, which fuse the SMILES representations with atom embeddings learned from our improved Graph Neural Network (GNN). Empirical results show that our model significantly outperforms the Transformer model in test accuracy.""","""Several approaches can be used to feed structured data to a neural network, such as convolutions or recurrent networks. This paper proposes to combine both routes by presenting molecular structures to the network using both their graph structure and a serialized representation (SMILES), which are processed by a framework combining the strength of Graph Neural Networks and the sequential Transformer architecture. The technical quality of the paper seems good, with R1 commenting on the performance relative to SOTA seq2seq-based methods and R3 commenting on the benefits of using more plausible constraints. The problem of using data with complex structure is highly relevant for ICLR. However, the novelty was deemed on the low side. As a very competitive conference, this is one of the key aspects necessary for successful ICLR papers. All reviewers agree that the novelty is too low for the current (high) bar of ICLR. """ 438,"""FLUID FLOW MASS TRANSPORT FOR GENERATIVE NETWORKS""","['generative network', 'optimal mass transport', 'gaussian mixture', 'model matching']","""Generative Adversarial Networks have been shown to be powerful tools for generating content, resulting in their being intensively studied in recent years. Training these networks requires maximizing a generator loss and minimizing a discriminator loss, leading to a difficult saddle point problem that is slow and difficult to converge. Motivated by techniques in the registration of point clouds and the fluid flow formulation of mass transport, we investigate a new formulation that is based on strict minimization, without the need for the maximization. This formulation views the problem as a matching problem rather than an adversarial one, and thus allows us to quickly converge and obtain meaningful metrics in the optimization path.""","""The submission is concerned with providing a transport-based formulation for generative modeling in order to avoid the standard max/min optimization challenge of GANs.
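To make the fusion idea of the GET framework (paper 437) concrete, here is a minimal sketch that pools atom embeddings from a one-layer GNN into a molecule vector and concatenates it to every SMILES token embedding. The single mean-aggregation layer and the concatenation-style fusion are simplifying assumptions; the paper proposes four fusion designs, none of which is reproduced exactly here.

    # Hypothetical sketch: fuse GNN atom embeddings with SMILES token embeddings.
    import numpy as np

    def gnn_layer(H, A, W):
        # mean aggregation over neighbors (self-loops included), then ReLU
        A_hat = A + np.eye(A.shape[0])
        H_agg = (A_hat @ H) / A_hat.sum(axis=1, keepdims=True)
        return np.maximum(H_agg @ W, 0.0)

    rng = np.random.default_rng(0)
    n_atoms, n_tokens, d = 5, 12, 16
    A = (rng.random((n_atoms, n_atoms)) < 0.4).astype(float)
    A = np.maximum(A, A.T)                      # symmetric adjacency
    np.fill_diagonal(A, 0.0)
    H_atoms = rng.normal(size=(n_atoms, d))     # initial atom features
    W = rng.normal(size=(d, d)) / np.sqrt(d)

    graph_vec = gnn_layer(H_atoms, A, W).mean(axis=0)   # pooled molecule vector
    tokens = rng.normal(size=(n_tokens, d))             # SMILES token embeddings
    fused = np.concatenate([tokens, np.tile(graph_vec, (n_tokens, 1))], axis=1)
    # 'fused' (n_tokens x 2d) would feed the Transformer encoder in place of
    # the plain token embeddings.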
The authors propose representing the divergence with a fluid flow model, the solution of which can be found by discretizing the space, resulting in an alignment of high-dimensional point clouds. The reviewers disagreed about the novelty and clarity of the work, but they did agree that the empirical and theoretical support was lacking, and that the paper could be substantially improved through better validation and better results - in particular, the approach struggles with MNIST digit generation compared to other methods. The recommendation is to not accept the submission at this time.""" 439,"""DIME: AN INFORMATION-THEORETIC DIFFICULTY MEASURE FOR AI DATASETS""","['Information Theory', 'Fano’s Inequality', 'Difficulty Measure', 'Donsker-Varadhan Representation', 'Theory']","""Evaluating the relative difficulty of widely-used benchmark datasets across time and across data modalities is important for accurately measuring progress in machine learning. To help tackle this problem, we propose DIME, an information-theoretic DIfficulty MEasure for datasets, based on conditional entropy estimation of the sample-label distribution. Theoretically, we prove a model-agnostic and modality-agnostic lower bound on the 0-1 error by extending Fano's inequality to the common supervised learning scenario where labels are discrete and features are continuous. Empirically, we estimate this lower bound using a neural network to compute DIME. DIME can be decomposed into components attributable to the data distribution and the number of samples. DIME can also compute per-class difficulty scores. Through extensive experiments on both vision and language datasets, we show that DIME is well-aligned with empirically observed performance of state-of-the-art machine learning models. We hope that DIME can aid future dataset design and model-training strategies.""","""This paper proposes a measure of the inherent difficulty of datasets. While the reviewers agree that there are good ideas in this paper that are worth pursuing, several concerns have been raised by the reviewers, most of which are acknowledged by the authors. We look forward to seeing an improved version of this paper soon! """ 440,"""Disentangling Trainability and Generalization in Deep Learning""","['NTK', 'NNGP', 'mean field theory', 'CNN', 'trainability and generalization', 'Gaussian process']","""A fundamental goal in deep learning is the characterization of trainability and generalization of neural networks as a function of their architecture and hyperparameters. In this paper, we discuss these challenging issues in the context of wide neural networks at large depths, where we will see that the situation simplifies considerably. To do this, we leverage recent advances that have separately shown: (1) that in the wide network limit, random networks before training are Gaussian Processes governed by a kernel known as the Neural Network Gaussian Process (NNGP) kernel, (2) that at large depths the spectrum of the NNGP kernel simplifies considerably and becomes ``weakly data-dependent'', and (3) that gradient descent training of wide neural networks is described by a kernel called the Neural Tangent Kernel (NTK) that is related to the NNGP. Here we show that, by combining these results, in the large-depth limit the spectrum of the NTK simplifies in much the same way as that of the NNGP kernel.
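The Fano-style bound that DIME (paper 439) builds on can be illustrated numerically: given an estimate of the conditional entropy H(Y|X) in bits and K classes, Fano's inequality H(Y|X) <= H_b(p) + p*log2(K-1) yields a lower bound on the 0-1 error p by inverting the monotone right-hand side. The bisection solver below is a hedged sketch of that computation, not the paper's neural estimator.

    # Numeric illustration of the Fano-type error lower bound.
    import numpy as np

    def binary_entropy(p):
        if p in (0.0, 1.0):
            return 0.0
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    def fano_error_lower_bound(H_cond, K, iters=60):
        if H_cond <= 0:
            return 0.0
        lo, hi = 0.0, (K - 1) / K      # RHS is increasing on this interval
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            rhs = binary_entropy(mid) + mid * np.log2(K - 1)
            lo, hi = (mid, hi) if rhs < H_cond else (lo, mid)
        return lo

    # e.g. H(Y|X) = 1.0 bit with K = 10 classes forces error >= ~0.13
    print(fano_error_lower_bound(1.0, 10))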
By analyzing this spectrum, we arrive at a precise characterization of trainability and generalization across a range of architectures including Fully Connected Networks (FCNs) and Convolutional Neural Networks (CNNs). We find that there are large regions of hyperparameter space where networks will train but will fail to generalize, in contrast with several recent results. By comparing CNNs with and without global average pooling, we show that CNNs without average pooling have very nearly identical learning dynamics to FCNs, while CNNs with pooling contain a correction that alters their generalization performance. We perform a thorough empirical investigation of these theoretical results and find excellent agreement on real datasets.""","""The paper investigates the trainability and generalization of deep networks as a function of hyperparameters/architecture, while focusing on wide nets of large depth; it aims to characterize regions of hyperparameter space where networks generalize well vs. where they do not; empirical observations are demonstrated to support theoretical results. However, all reviewers agree that, while the topic of the paper is important and interesting, more work is required to improve the readability and clarify the exposition to support the proposed theoretical results. """ 441,"""Representation Learning with Multisets""","['multisets', 'fuzzy sets', 'permutation invariant', 'representation learning', 'containment', 'partial order', 'clustering']","""We study the problem of learning permutation invariant representations that can capture containment relations. We propose training a model on a novel task: predicting the size of the symmetric difference between pairs of multisets, sets which may contain multiple copies of the same object. With motivation from fuzzy set theory, we formulate both multiset representations and how to predict symmetric difference sizes given these representations. We model multiset elements as vectors on the standard simplex and multisets as the summations of such vectors, and we predict symmetric difference as the l1-distance between multiset representations. We demonstrate that our representations more effectively predict the sizes of symmetric differences than DeepSets-based approaches with unconstrained object representations. Furthermore, we demonstrate that the model learns meaningful representations, mapping objects of different classes to different standard basis vectors.""","""While the reviewers appreciated the problem of learning a multiset representation, two reviewers found the technical contribution to be minor and the experiments limited. The rebuttal and revision addressed concerns about the motivation of the approach, but the experimental issues remain. The paper would likely substantially improve with additional experiments.""" 442,"""Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming""","['deep learning', 'object detection', 'robustness', 'neural networks', 'data augmentation', 'autonomous driving']","""The ability to detect objects regardless of image distortions or weather conditions is crucial for real-world applications of deep learning like autonomous driving. We here provide an easy-to-use benchmark to assess how object detection models perform when image quality degrades. The three resulting benchmark datasets, termed PASCAL-C, COCO-C and Cityscapes-C, contain a large variety of image corruptions.
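The multiset representation of paper 441 has a simple exact special case that is easy to check in code: if each object class is mapped to a standard basis vector and a multiset is represented by the sum of its element vectors, the l1-distance between two representations equals the size of their symmetric difference. A small self-contained check (illustrative only, with fixed rather than learned embeddings):

    # Idealized check of the multiset symmetric-difference representation.
    import numpy as np
    from collections import Counter

    n_classes = 5
    def embed(multiset):
        v = np.zeros(n_classes)
        for item in multiset:
            v[item] += 1.0          # one standard basis vector per class
        return v

    A, B = [0, 0, 1, 3], [0, 1, 1, 4, 4]
    pred = np.abs(embed(A) - embed(B)).sum()

    # ground-truth multiset symmetric difference size
    ca, cb = Counter(A), Counter(B)
    true = sum(abs(ca[k] - cb[k]) for k in set(ca) | set(cb))
    assert pred == true == 5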
We show that a range of standard object detection models suffer a severe performance loss on corrupted images (down to 30-60% of the original performance). However, a simple data augmentation trick - stylizing the training images - leads to a substantial increase in robustness across corruption type, severity and dataset. We envision our comprehensive benchmark as a way to track future progress towards building robust object detection models. Benchmark, code and data are available at: (hidden for double-blind review)""","""This paper proposes a benchmark for assessing the impact of image quality degradation (e.g. simulated fog, snow, frost) on the performance of object detection models. The authors introduce corrupted versions of popular object detection datasets, namely PASCAL-C, COCO-C and Cityscapes-C, and an evaluation protocol which reveals that current models are not robust to such corruptions (losing as much as 60% of the performance). The authors then show that a simple data augmentation scheme significantly improves robustness. The reviewers agree that the manuscript is well written and that the proposed benchmark reveals major drawbacks of current detection models. However, two critical issues with the paper remain, namely the lack of novelty in light of Geirhos et al., and how to actually use this benchmark in practice. I will hence recommend the rejection of this paper in its current state. Nevertheless, we encourage the authors to address the raised shortcomings (the new experiments reported in the rebuttal are a good starting point). """ 443,"""Learning Surrogate Losses""","['Surrogate losses', 'Non-differentiable losses']","""The minimization of loss functions is the heart and soul of Machine Learning. In this paper, we propose an off-the-shelf optimization approach that can seamlessly minimize virtually any non-differentiable and non-decomposable loss function (e.g. Misclassification Rate, AUC, F1, Jaccard Index, Matthews Correlation Coefficient, etc.). Our strategy learns smooth relaxed versions of the true losses by approximating them through a surrogate neural network. The proposed loss networks are set-wise models which are invariant to the order of mini-batch instances. Ultimately, the surrogate losses are learned jointly with the prediction model via bilevel optimization. Empirical results on multiple datasets with diverse real-life loss functions compared with state-of-the-art baselines demonstrate the efficiency of learning surrogate losses.""","""Unfortunately, this was a borderline paper that generated disagreement among the reviewers. After a high-level round of additional deliberation it was decided that this paper does not yet meet the standard for acceptance. The paper proposes a potentially interesting approach to learning surrogates for non-differentiable and non-decomposable loss functions. However, the work is a bit shallow technically, as any supporting theoretical justification is supplied by pointing to other work. The paper would be stronger with a more serious and comprehensive analysis. The reviewers criticized the lack of clarity in the technical exposition, which the authors attempted to mitigate in the rebuttal/revision process. The paper would benefit from additional clarity and systematic presentation of complete details to allow reproduction.""" 444,"""You CAN Teach an Old Dog New Tricks!
On Training Knowledge Graph Embeddings""","['knowledge graph embeddings', 'hyperparameter optimization']","""Knowledge graph embedding (KGE) models learn algebraic representations of the entities and relations in a knowledge graph. A vast number of KGE techniques for multi-relational link prediction have been proposed in the recent literature, often with state-of-the-art performance. These approaches differ along a number of dimensions, including different model architectures, different training strategies, and different approaches to hyperparameter optimization. In this paper, we take a step back and aim to summarize and quantify empirically the impact of each of these dimensions on model performance. We report on the results of an extensive experimental study with popular model architectures and training strategies across a wide range of hyperparameter settings. We found that, when trained appropriately, the relative performance differences between various model architectures often shrink and sometimes even reverse when compared to prior results. For example, RESCAL~\citep{nickel2011three}, one of the first KGE models, showed strong performance when trained with state-of-the-art techniques; it was competitive with or outperformed more recent architectures. We also found that good (and often superior to prior studies) model configurations can be found by exploring relatively few random samples from a large hyperparameter space. Our results suggest that many of the more advanced architectures and techniques proposed in the literature should be revisited to reassess their individual benefits. To foster further reproducible research, we provide all our implementations and experimental results as part of the open-source LibKGE framework.""","""The authors analyze knowledge graph embedding models for multi-relational link prediction. Three reviewers like the work and recommend acceptance. The paper further received several positive comments from the public. This is solid work and should be accepted.""" 445,"""Scalable Model Compression by Entropy Penalized Reparameterization""","['deep learning', 'model compression', 'computer vision', 'information theory']","""We describe a simple and general neural network weight compression approach, in which the network parameters (weights and biases) are represented in a latent space, amounting to a reparameterization. This space is equipped with a learned probability model, which is used to impose an entropy penalty on the parameter representation during training, and to compress the representation using a simple arithmetic coder after training. Classification accuracy and model compressibility are maximized jointly, with the bitrate-accuracy trade-off specified by a hyperparameter. We evaluate the method on the MNIST, CIFAR-10 and ImageNet classification benchmarks using six distinct model architectures. Our results show that state-of-the-art model compression can be achieved in a scalable and general way without requiring complex procedures such as multi-stage training.""","""The paper describes a simple method for neural network compression by applying Shannon-type encoding. This is a fresh and nice idea, as noted by the reviewers. A disadvantage is that the architectures on ImageNet are not the most efficient ones. Also, the related-work discussion misses several important works on low-rank factorization of weights for compression (Lebedev et al., Novikov et al.).
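For readers unfamiliar with RESCAL, mentioned in the abstract of paper 444, its scoring function is a bilinear form with one dense matrix per relation: score(s, r, o) = e_s^T R_r e_o. A minimal sketch with random parameters (the training loop is omitted, and all shapes are illustrative assumptions):

    # Minimal sketch of the RESCAL scoring function for link prediction.
    import numpy as np

    rng = np.random.default_rng(0)
    n_entities, n_relations, d = 100, 7, 32
    E = rng.normal(size=(n_entities, d))           # entity embeddings
    R = rng.normal(size=(n_relations, d, d))       # one d x d matrix per relation

    def score(s, r, o):
        return E[s] @ R[r] @ E[o]

    # rank all candidate objects for a query (s, r, ?), as in link prediction
    s, r = 3, 2
    scores = E[s] @ R[r] @ E.T          # vectorized over all candidate objects
    top10 = np.argsort(-scores)[:10]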
But overall, a good paper.""" 446,"""Better Knowledge Retention through Metric Learning""","['metric learning', 'continual learning', 'catastrophic forgetting']","""In a continual learning setting, new categories may be introduced over time, and an ideal learning system should perform well on both the original categories and the new categories. While deep neural nets have achieved resounding success in the classical setting, they are known to forget about knowledge acquired in prior episodes of learning if the examples encountered in the current episode of learning are drastically different from those encountered in prior episodes. This makes deep neural nets ill-suited to continual learning. In this paper, we propose a new model that both leverages the expressive power of deep neural nets and is resilient to forgetting when new categories are introduced. We demonstrate an improvement in terms of accuracy on original classes compared to a vanilla deep neural net.""","""Catastrophic forgetting in neural networks is a real problem, and this paper suggests a mechanism for avoiding this using a k-nearest neighbor mechanism in the final layer. The reason is that the layers below the last layer should not change significantly when very different data is introduced. While the idea is interesting, none of the reviewers is entirely convinced about the execution and empirical tests, which were partially inconclusive. The reviewers had a number of questions, which were only partially satisfactorily answered. While some of the reviewers had less familiarity with the specific research topic, the seemingly most knowledgeable reviewer does not think the paper is ready for publication. On balance, I think the paper cannot be accepted in its current state. The idea is interesting, but needs more work.""" 447,"""Dual-module Inference for Efficient Recurrent Neural Networks""","['memory-efficient RNNs', 'dynamic execution', 'computation skipping']","""Using Recurrent Neural Networks (RNNs) in sequence modeling tasks is promising for delivering high-quality results but challenging when stringent latency requirements must be met, because of the memory-bound execution pattern of RNNs. We propose a big-little dual-module inference scheme that dynamically skips unnecessary memory accesses and computation to speed up RNN inference. Leveraging the error-resilient feature of nonlinear activation functions used in RNNs, we propose to use a lightweight little module that approximates the original RNN layer, referred to as the big module, to compute activations of the insensitive region that are more error-resilient. The expensive memory access and computation of the big module can be reduced, as its results are only used in the sensitive region. Our method can reduce the overall memory access by 40% on average and achieve 1.54x to 1.75x speedup on a CPU-based server platform with negligible impact on model quality.""","""This paper presents an efficient RNN architecture that dynamically switches between big and little modules during inference. In the experiments, the authors demonstrate that the proposed method achieves a favorable speed-up compared to baselines, and that the contribution is orthogonal to weight pruning. All reviewers agree that the paper is well-written and that the proposed method is easy to understand and reasonable. However, its methodological contribution is limited because the core idea is essentially the same as distillation, and dynamically gating the modules is a common technique in general.
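A toy rendering of the big-little inference idea of paper 447, under the assumption that the little module is a low-rank approximation of the big weight matrix and that the sensitive region is where tanh is far from saturation; both choices are stand-ins for the paper's actual construction:

    # Toy sketch of big-little dual-module inference for one tanh layer.
    import numpy as np

    rng = np.random.default_rng(0)
    d, k, tau = 64, 8, 1.0
    W_big = rng.normal(size=(d, d)) / np.sqrt(d)

    # little module: rank-k approximation of W_big via truncated SVD
    U, S, Vt = np.linalg.svd(W_big)
    A, B = U[:, :k] * S[:k], Vt[:k]            # W_big ~= A @ B

    x = rng.normal(size=d)
    z_little = A @ (B @ x)                      # cheap pass, O(d*k)
    sensitive = np.abs(z_little) < tau          # where tanh is far from saturation
    z = z_little.copy()
    z[sensitive] = W_big[sensitive] @ x         # exact big-module rows only where needed
    h = np.tanh(z)
    print(f"recomputed {sensitive.mean():.0%} of the units with the big module")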
Moreover, I agree with the reviewers that the method should be compared with other state-of-the-art methods in this context. Accelerating and compressing DNNs are intensively studied topics and there are many approaches other than weight pruning, as the authors also mention in the paper. As the possible contribution of the paper is more on the empirical side, it is necessary to thoroughly compare with other possible approaches to show that the proposed method is really a good solution in practice. For these reasons, I'd like to recommend rejection. """ 448,"""SNOW: Subscribing to Knowledge via Channel Pooling for Transfer & Lifelong Learning of Convolutional Neural Networks""","['channel pooling', 'efficient training and inferencing', 'lifelong learning', 'transfer learning', 'multi task']","""SNOW is an efficient learning method to improve training/serving throughput as well as accuracy for transfer and lifelong learning of convolutional neural networks, based on knowledge subscription. SNOW selects the top-K useful intermediate feature maps for a target task from a pre-trained and frozen source model through a novel channel pooling scheme, and utilizes them in the task-specific delta model. The source model is responsible for generating a large number of generic feature maps. Meanwhile, the delta model selectively subscribes to those feature maps and fuses them with its local ones to deliver high accuracy for the target task. Since a source model takes part in both training and serving of all target tasks in an inference-only mode, one source model can serve multiple delta models, enabling significant computation sharing. The sizes of such delta models are a fraction of the source model's, so SNOW also provides model-size efficiency. Our experimental results show that SNOW offers a superior balance between accuracy and training/inference speed on various image classification tasks compared to existing transfer and lifelong learning practices.""","""This paper proposes a method, SNOW, for improving the speed of training and inference for transfer and lifelong learning by subscribing the target delta model to the knowledge of a pretrained source model via channel pooling. Reviewers and AC agree that this paper is well written, with a simple but sound technique for an important problem and with promising empirical performance. The main critique is that the approach can only tackle transfer learning while failing in the lifelong setting. The authors provided convincing feedback on this key point. Details requested by the reviewers were all well addressed in the revision. Hence I recommend acceptance.""" 449,"""Unsupervised Distillation of Syntactic Information from Contextualized Word Representations""","['dismantlement', 'contextualized word representations', 'language models', 'representation learning']","""Contextualized word representations, such as ELMo and BERT, were shown to perform well on a variety of semantic and structural (syntactic) tasks. In this work, we tackle the task of unsupervised disentanglement between semantics and structure in neural language representations: we aim to learn a transformation of the contextualized vectors that discards the lexical semantics but keeps the structural information. To this end, we automatically generate groups of sentences which are structurally similar but semantically different, and use a metric-learning approach to learn a transformation that emphasizes the structural component that is encoded in the vectors.
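A rough sketch of the channel-pooling step at the heart of SNOW (paper 448): learnable per-channel scores select the top-K frozen source feature maps, which the delta model then fuses with its own. The hard top-K and the concatenation fusion below are simplifying assumptions; the paper's differentiable channel pooling is not reproduced here.

    # Hypothetical sketch of top-K channel pooling over frozen source features.
    import numpy as np

    rng = np.random.default_rng(0)
    C, H, W, K = 256, 14, 14, 32
    source_maps = rng.normal(size=(C, H, W))   # frozen source activations
    scores = rng.normal(size=C)                # learnable channel scores

    top_k = np.argsort(-scores)[:K]            # hard top-K selection
    subscribed = source_maps[top_k]            # (K, H, W) maps fed to delta model

    delta_local = rng.normal(size=(K, H, W))   # delta model's own feature maps
    fused = np.concatenate([subscribed, delta_local], axis=0)  # (2K, H, W)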
We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics. Finally, we demonstrate the utility of our distilled representations by showing that they outperform the original contextualized representations in a few-shot parsing setting.""","""This paper aims to disentangle semantics and syntax inside of popular contextualized word embedding models. The authors use the model to generate sentences which are structurally similar but semantically different. This paper generated a lot of discussion. The reviewers do like the method for generating structurally similar sentences, and the triplet loss. They felt the evaluation methods were clever. However, one reviewer raised several issues. First, they thought the idea of syntax had not been well defined. They also thought the evaluation did not support the claims. The reviewer also argued very hard for the need to compare performance to SOTA models. The authors argued that beating SOTA is not the goal of their work; rather, it is to understand what SOTA models are doing. The reviewers also argue that nearest neighbors is not a good method for evaluating the syntactic information in the representations. I hope all of the comments of the reviewers will help improve the paper as it is revised for a future submission.""" 450,"""Accelerating Reinforcement Learning Through GPU Atari Emulation""","['GPU', 'reinforcement learning']","""We introduce CuLE (CUDA Learning Environment), a CUDA port of the Atari Learning Environment (ALE) which is used for the development of deep reinforcement learning algorithms. CuLE overcomes many limitations of existing CPU-based emulators and scales naturally to multiple GPUs. It leverages GPU parallelization to run thousands of games simultaneously and it renders frames directly on the GPU, to avoid the bottleneck arising from the limited CPU-GPU communication bandwidth. CuLE generates up to 155M frames per hour on a single GPU, a throughput previously achieved only through a cluster of CPUs. Beyond highlighting the differences between CPU and GPU emulators in the context of reinforcement learning, we show how to leverage the high throughput of CuLE by effective batching of the training data, and show accelerated convergence for A2C+V-trace. CuLE is available at [hidden URL].""","""The paper presents a detailed discussion of the implementation of a library emulating Atari games on GPU for efficient reinforcement learning. The analysis is very thoroughly done. The major concern is whether this paper is a good fit for this conference. The developed library would be useful to researchers and the discussion is interesting with respect to system design and implementation, but the technical depth seems insufficient.""" 451,"""NAS-Bench-1Shot1: Benchmarking and Dissecting One-shot Neural Architecture Search""","['Neural Architecture Search', 'Deep Learning', 'Computer Vision']","""One-shot neural architecture search (NAS) has played a crucial role in making NAS methods computationally feasible in practice. Nevertheless, there is still a lack of understanding of how these weight-sharing algorithms exactly work, due to the many factors controlling the dynamics of the process.
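The metric-learning step of paper 449 can be sketched as a triplet objective over transformed vectors: an anchor and a positive drawn from structurally similar sentences should end up closer than the anchor and a negative. The linear transform, margin, and step size below are assumptions for illustration, not the paper's exact setup:

    # Hedged sketch of a triplet loss over transformed contextualized vectors.
    import numpy as np

    rng = np.random.default_rng(0)
    d_in, d_out, margin, lr = 768, 128, 1.0, 1e-5   # small step for stability
    F = rng.normal(size=(d_in, d_out)) / np.sqrt(d_in)

    anchor, pos, neg = rng.normal(size=(3, d_in))
    for _ in range(100):
        fa, fp, fn = anchor @ F, pos @ F, neg @ F
        d_ap, d_an = fa - fp, fa - fn
        loss = max(0.0, d_ap @ d_ap - d_an @ d_an + margin)
        if loss == 0.0:
            break
        # gradient of the squared-distance triplet loss w.r.t. F
        grad = (np.outer(anchor - pos, d_ap) - np.outer(anchor - neg, d_an)) * 2
        F -= lr * grad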
In order to allow a scientific study of these components, we introduce a general framework for one-shot NAS that can be instantiated to many recently introduced variants, and introduce a general benchmarking framework that draws on the recent large-scale tabular benchmark NAS-Bench-101 for cheap anytime evaluations of one-shot NAS methods. To showcase the framework, we compare several state-of-the-art one-shot NAS methods, examine how sensitive they are to their hyperparameters and how they can be improved by tuning their hyperparameters, and compare their performance to that of blackbox optimizers for NAS-Bench-101.""","""The authors present a new benchmark for architecture search. Reviews were somewhat mixed, but also with mixed confidence scores. I recommend acceptance as a poster, and encourage the authors to also cite pseudo-url""" 452,"""Customizing Sequence Generation with Multi-Task Dynamical Systems""","['Time-series modelling', 'Dynamical systems', 'RNNs', 'Multi-task learning']","""Dynamical system models (including RNNs) often lack the ability to adapt the sequence generation or prediction to a given context, limiting their real-world application. In this paper we show that hierarchical multi-task dynamical systems (MTDSs) provide direct user control over sequence generation, via use of a latent code z that specifies the customization to the individual data sequence. This enables style transfer, interpolation and morphing within generated sequences. We show the MTDS can improve predictions via latent code interpolation, and avoid the long-term performance degradation of standard RNN approaches.""","""This work proposes a dynamical systems model to allow the user to better control sequence generation via the latent z. Reviewers all agreed that the proposed method is quite interesting. However, reviewers also felt that the current evaluations were weak and were ultimately unconvinced by the author rebuttal. I recommend the authors resubmit with a stronger set of experiments as suggested by Reviewers 2 and 3.""" 453,"""Domain Adaptive Multibranch Networks""","['Domain Adaptation', 'Computer Vision']","""We tackle unsupervised domain adaptation by accounting for the fact that different domains may need to be processed differently to arrive at a common feature representation effective for recognition. To this end, we introduce a deep learning framework where each domain undergoes a different sequence of operations, allowing some, possibly more complex, domains to go through more computations than others. This contrasts with state-of-the-art domain adaptation techniques that force all domains to be processed with the same series of operations, even when using multi-stream architectures whose parameters are not shared. As evidenced by our experiments, the greater flexibility of our method translates to higher accuracy. Furthermore, it allows us to handle any number of domains simultaneously.""","""Although some criticism of the experiments remains, I suggest accepting this paper.""" 454,"""Semi-supervised Learning by Coaching""","['semi-supervised', 'teacher', 'student', 'label propagation', 'image classification']","""Recent semi-supervised learning (SSL) methods often have a teacher to train a student in order to propagate labels from labeled data to unlabeled data. We argue that a weakness of these methods is that the teacher does not learn from the student's mistakes during the course of the student's learning.
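One way to picture the MTDS of paper 452 is as a latent code z that generates the parameters of the underlying dynamical system, so that interpolating z morphs the generated sequence. The linear dynamics and the linear parameter-generation map in this sketch are simplifying assumptions, not the paper's model:

    # Hedged sketch of a latent-code-modulated dynamical system.
    import numpy as np

    rng = np.random.default_rng(0)
    d_z, d_h = 3, 4
    H_A = rng.normal(size=(d_z, d_h * d_h)) * 0.1   # hypothetical hypernetwork weights
    A_base = 0.9 * np.eye(d_h)

    def rollout(z, T=50):
        A = A_base + (z @ H_A).reshape(d_h, d_h)     # z-dependent dynamics
        h, traj = rng.normal(size=d_h), []
        for _ in range(T):
            h = np.tanh(A @ h)
            traj.append(h.copy())
        return np.array(traj)

    z_a, z_b = rng.normal(size=d_z), rng.normal(size=d_z)
    morph = rollout(0.5 * (z_a + z_b))   # interpolation in latent space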
To address this weakness, we introduce Coaching, a framework where a teacher generates pseudo labels for unlabeled data, from which a student will learn, and the student's performance on labeled data is used as a reward to train the teacher via policy gradient. Our experiments show that Coaching significantly improves over state-of-the-art SSL baselines. For instance, on CIFAR-10, with only 4,000 labeled examples, a WideResNet-28-2 trained by Coaching achieves 96.11% accuracy, which is better than the 94.9% achieved by the same architecture trained with 45,000 labeled examples. On ImageNet with 10% labeled examples, Coaching trains a ResNet-50 to 72.94% top-1 accuracy, comfortably outperforming the existing state of the art by more than 4%. Coaching also scales successfully to the high-data regime with full ImageNet. Specifically, with an additional 9 million unlabeled images from OpenImages, Coaching trains a ResNet-50 to 82.34% top-1 accuracy, setting a new state of the art for the architecture on ImageNet without using extra labeled data.""","""The authors propose a new method of semi-supervised learning and provide empirical results. Reviewers found the presentation of the method confusing and poorly motivated. Despite the rebuttal, reviewers still did not find clarity on how or why the method works as well as it does.""" 455,"""Hierarchical Graph Matching Networks for Deep Graph Similarity Learning""","['Graph Neural Network', 'Graph Matching Network', 'Graph Similarity Learning']","""While the celebrated graph neural networks yield effective representations for individual nodes of a graph, there has been relatively less success in extending them to deep graph similarity learning. Recent work has considered either global-level graph-graph interactions or low-level node-node interactions, ignoring the rich cross-level interactions between parts of a graph and a whole graph. In this paper, we propose a Hierarchical Graph Matching Network (HGMN) for computing the graph similarity between any pair of graph-structured objects. Our model jointly learns graph representations and a graph matching metric function for computing graph similarity in an end-to-end fashion. The proposed HGMN model consists of a multi-perspective node-graph matching network for effectively learning cross-level interactions between parts of a graph and a whole graph, and a siamese graph neural network for learning global-level interactions between two graphs. Our comprehensive experiments demonstrate that our proposed HGMN consistently outperforms state-of-the-art graph matching network baselines for both classification and regression tasks. ""","""The submission proposes an architecture to learn a similarity metric for graph matching. The architecture uses node-graph information in order to learn a more expressive, multi-level similarity score. The hierarchical approach is empirically validated on a limited set of graphs for which pairwise matching information is available and is shown to outperform other methods for classification and regression tasks. The reviewers were divided in their scores for this paper, but all noted that the approach was somewhat incremental and empirically motivated, without adequate analysis, theoretical justification, or extensive benchmark validation. Although the approach has value, more work is needed to support the method fully.
The recommendation is to reject at this time.""" 456,"""Towards Effective 2-bit Quantization: Pareto-optimal Bit Allocation for Deep CNNs Compression""",[],"""State-of-the-art quantization methods can compress deep neural networks down to 4 bits without losing accuracy. However, when it comes to 2 bits, the performance drop is still noticeable. One problem in these methods is that they assign an equal bit rate to quantize the weights and activations in all layers, which is not reasonable in the case of high-rate compression (such as 2-bit quantization), as some layers in deep neural networks are sensitive to quantization and performing coarse quantization on these layers can hurt the accuracy. In this paper, we address the important problem of how to optimize the bit allocation of weights and activations for deep CNN compression. We first explore the additivity of the output error caused by quantization and find that the additivity property holds for deep neural networks which are continuously differentiable in the layers. Based on this observation, we formulate the optimal bit allocation problem of weights and activations in a joint framework and propose a very efficient method to solve the optimization problem via a Lagrangian formulation. Our method obtains excellent results on deep neural networks. It can compress the deep CNN ResNet-50 down to 2 bits with only 0.7% accuracy loss. To the best of our knowledge, this is the first paper that reports 2-bit results on deep CNNs without hurting the accuracy.""","""This work presents a method for inferring the optimal bit allocation for quantization of weights and activations in CNNs. The formulation is sound and the experiments are complete. However, the main concern is that the paper is very similar to a recent work by the authors, which is not cited.""" 457,"""Shifted Randomized Singular Value Decomposition""","['SVD', 'PCA', 'Randomized Algorithms']","""We extend the randomized singular value decomposition (SVD) algorithm (Halko et al., 2011) to estimate the SVD of a shifted data matrix without explicitly constructing the matrix in memory. With no loss in the accuracy of the original algorithm, the extended algorithm provides a more efficient way of performing matrix factorization. The algorithm facilitates the low-rank approximation and principal component analysis (PCA) of off-center data matrices. When applied to different types of data matrices, our experimental results confirm the advantages of the extensions made to the original algorithm.""","""The proposed algorithm is found to be a straightforward extension of the previous work, which is not sufficient to warrant publication in ICLR2020.""" 458,"""Graph Constrained Reinforcement Learning for Natural Language Action Spaces""","['natural language generation', 'deep reinforcement learning', 'knowledge graphs', 'interactive fiction']","""Interactive Fiction games are text-based simulations in which an agent interacts with the world purely through natural language. They are ideal environments for studying how to extend reinforcement learning agents to meet the challenges of natural language understanding, partial observability, and action generation in combinatorially-large text-based action spaces. We present KG-A2C, an agent that builds a dynamic knowledge graph while exploring and generates actions using a template-based action space.
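The shifted randomized SVD of paper 457 admits a compact sketch: fold the shift B = A - 1*mu^T into the two matrix products of the Halko et al. (2011) scheme, so the centered matrix is never materialized. A hedged numpy implementation (the oversampling amount and the absence of power iterations are simplifications):

    # Sketch of a rank-k SVD of the centered matrix without forming it.
    import numpy as np

    def shifted_randomized_svd(A, k, oversample=10, seed=0):
        rng = np.random.default_rng(seed)
        m, n = A.shape
        mu = A.mean(axis=0)                       # column means (the shift)
        Omega = rng.normal(size=(n, k + oversample))
        Y = A @ Omega - np.outer(np.ones(m), mu @ Omega)   # B @ Omega, implicitly
        Q, _ = np.linalg.qr(Y)
        Bt = Q.T @ A - np.outer(Q.T @ np.ones(m), mu)      # Q^T B, implicitly
        U_small, s, Vt = np.linalg.svd(Bt, full_matrices=False)
        return (Q @ U_small)[:, :k], s[:k], Vt[:k]

    A = np.random.default_rng(1).normal(size=(500, 80)) + 5.0
    U, s, Vt = shifted_randomized_svd(A, k=10)
    exact = np.linalg.svd(A - A.mean(axis=0), compute_uv=False)[:10]
    print(np.max(np.abs(s - exact)))   # small when the spectrum decays quickly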
We contend that the dual uses of the knowledge graph to reason about game state and to constrain natural language generation are the keys to scalable exploration of combinatorially large natural language actions. Results across a wide variety of IF games show that KG-A2C outperforms current IF agents despite the exponential increase in action space size.""","""This paper applies reinforcement learning to text adventure games by using knowledge graphs to constrain the action space. This is an exciting problem with relatively little work performed on it. Reviews agree that this is an interesting paper, well written, with good results. There are some concerns about novelty but general agreement that the paper should be accepted. I therefore recommend acceptance.""" 459,"""Accelerating First-Order Optimization Algorithms""","['Neural Networks', 'Gradient Descent', 'First order optimization']","""Several stochastic optimization algorithms are currently available. In most cases, selecting the best optimizer for a given problem is not an easy task. Therefore, instead of looking for yet another absolute best optimizer, accelerating existing ones according to the context might prove more effective. This paper presents a simple and intuitive technique to accelerate first-order optimization algorithms. When the technique is applied, the resulting algorithms converge much more quickly and achieve lower function/loss values than their traditional counterparts. The proposed solution modifies the update rule based on the variation of the direction of the gradient during training. Several tests were conducted with SGD, AdaGrad, Adam and AMSGrad on three public datasets. Results clearly show that the proposed technique has the potential to improve the performance of existing optimization algorithms.""","""All reviewers recommend rejection, and the authors have not provided a response. """ 460,"""Hamiltonian Generative Networks""","['Hamiltonian dynamics', 'normalising flows', 'generative model', 'physics']","""The Hamiltonian formalism plays a central role in classical and quantum physics. Hamiltonians are the main tool for modelling the continuous time evolution of systems with conserved quantities, and they come equipped with many useful properties, like time reversibility and smooth interpolation in time. These properties are important for many machine learning problems - from sequence prediction to reinforcement learning and density modelling - but are not typically provided out of the box by standard tools such as recurrent neural networks. In this paper, we introduce the Hamiltonian Generative Network (HGN), the first approach capable of consistently learning Hamiltonian dynamics from high-dimensional observations (such as images) without restrictive domain assumptions. Once trained, we can use HGN to sample new trajectories, perform rollouts both forward and backward in time, and even speed up or slow down the learned dynamics. We demonstrate how a simple modification of the network architecture turns HGN into a powerful normalising flow model, called Neural Hamiltonian Flow (NHF), that uses Hamiltonian dynamics to model expressive densities. Hence, we hope that our work serves as a first practical demonstration of the value that the Hamiltonian formalism can bring to machine learning. More results and video evaluations are available at: pseudo-url""","""The paper introduces a novel way of learning Hamiltonian dynamics with a generative network.
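Once a Hamiltonian is in hand, rollouts of the kind HGN (paper 460) performs can be produced with a symplectic leapfrog integrator, as used in Hamiltonian Monte Carlo. In the sketch below, an analytic harmonic-oscillator Hamiltonian stands in for the learned network; the step size and the energy check are illustrative:

    # Sketch of leapfrog rollouts for a Hamiltonian H(q, p) = p^2/2 + q^2/2.
    import numpy as np

    def grad_q(q):  # dH/dq
        return q

    def grad_p(p):  # dH/dp
        return p

    def leapfrog(q, p, steps, dt=0.1):
        traj = [(q, p)]
        for _ in range(steps):
            p = p - 0.5 * dt * grad_q(q)   # half step in momentum
            q = q + dt * grad_p(p)         # full step in position
            p = p - 0.5 * dt * grad_q(q)   # half step in momentum
            traj.append((q, p))
        return np.array(traj)

    traj = leapfrog(q=1.0, p=0.0, steps=200)
    energy = 0.5 * traj[:, 1] ** 2 + 0.5 * traj[:, 0] ** 2
    print(energy.max() - energy.min())     # ~0: energy is nearly conserved
    # running backward in time amounts to negating p and integrating again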
The Hamiltonian generative network (HGN) learns the dynamics directly from data by embedding observations in a latent space, which is then transformed into a phase space describing the system's initial (abstract) position and momentum. Using a second network, the Hamiltonian network, the position and momentum are reduced to a scalar, interpreted as the Hamiltonian of the system, which can then be used to do rollouts in the phase space using techniques known from, e.g., Hamiltonian Monte Carlo sampling. An important ingredient of the paper is the fact that no access to the derivatives of the Hamiltonian is needed. The reviewers agree that this paper is a good contribution, and I recommend acceptance.""" 461,"""Latent Normalizing Flows for Many-to-Many Cross-Domain Mappings""",[],"""Learned joint representations of images and text form the backbone of several important cross-domain tasks such as image captioning. Prior work mostly maps both domains into a common latent representation in a purely supervised fashion. This is rather restrictive, however, as the two domains follow distinct generative processes. Therefore, we propose a novel semi-supervised framework, which models shared information between domains and domain-specific information separately. The information shared between the domains is aligned with an invertible neural network. Our model integrates normalizing flow-based priors for the domain-specific information, which allows us to learn diverse many-to-many mappings between the two domains. We demonstrate the effectiveness of our model on diverse tasks, including image captioning and text-to-image synthesis.""","""This paper addresses the problem of many-to-many cross-domain mapping tasks with a double variational auto-encoder architecture, making use of normalizing flow-based priors. Reviewers and AC unanimously agree that it is a well-written paper with a solid approach to a complicated real problem, supported by good experimental results. There are still some concerns with confusing notation, and with the human study used to further validate the approach, which should be addressed in a future version. I recommend acceptance.""" 462,"""Sticking to the Facts: Confident Decoding for Faithful Data-to-Text Generation""","['Natural Language Processing', 'Text Generation', 'Data-to-Text Generation', 'Hallucination', 'Calibration', 'Variational Bayes']","""Neural conditional text generation systems have achieved significant progress in recent years, showing the ability to produce highly fluent text. However, the inherent lack of controllability in these systems allows them to hallucinate factually incorrect phrases that are unfaithful to the source, making them often unsuitable for many real-world systems that require high degrees of precision. In this work, we propose a novel confidence-oriented decoder that assigns a confidence score to each target position. This score is learned in training using a variational Bayes objective, and can be leveraged at inference time using a calibration technique to promote more faithful generation. Experiments on a structured data-to-text dataset -- WikiBio -- show that our approach is more faithful to the source than existing state-of-the-art approaches, according to both automatic metrics and human evaluation.""","""This paper proposes to improve the faithfulness of data-to-text generation models, through an attention-based confidence measure and a variational approach for learning the model. There is some reviewer disagreement on this paper.
All agree that the problem is important and the ideas interesting, while some reviewers feel that the methods are insufficiently justified and/or the results unconvincing. In addition, there is not much technical novelty here from a machine learning perspective; the contribution is to a specific task. Overall I think this paper would fit in much better at an NLP conference/journal.""" 463,"""Forecasting Deep Learning Dynamics with Applications to Hyperparameter Tuning""",[],"""Well-performing deep learning models have enormous impact, but getting them to perform well is complicated, as the model architecture must be chosen and a number of hyperparameters tuned. This requires experimentation, which is time-consuming and costly. We propose to address the problem of hyperparameter tuning by learning to forecast the training behaviour of deep learning architectures. Concretely, we introduce a forecasting model that, given a hyperparameter schedule (e.g., learning rate, weight decay) and a history of training observations (such as loss and accuracy), predicts how the training will continue. Naturally, forecasting is much faster and less expensive than running actual deep learning experiments. The main question we study is whether the forecasting model is good enough to be of use - can it indeed replace real experiments? We answer this affirmatively in two ways. For one, we show that the forecasted curves are close to real ones. On the practical side, we apply our forecaster to learn hyperparameter tuning policies. We experiment on a version of ResNet on CIFAR10 and on Transformer in a language modeling task. The policies learned using our forecaster match or exceed the ones learned in real experiments, and in one case even the default schedules discovered by researchers. We study the learning rate schedules created using the forecaster and find that they are not only effective, but also lead to interesting insights.""","""This paper trains a transformer to extrapolate learning curves, and uses this in a model-based RL framework to automatically tune hyperparameters. This might be a good approach, but it's hard to know because the experiments don't include direct comparisons against existing hyperparameter optimization/adaptation techniques (either the ones based on extrapolating training curves, or standard ones like BayesOpt or PBT). The presentation is also fairly informal, and it's not clear if a reader would be able to reproduce the results. Overall, I think there's significant cleanup and additional experiments needed before publication in ICLR. """ 464,"""CZ-GEM: A FRAMEWORK FOR DISENTANGLED REPRESENTATION LEARNING""","['disentangled representation learning', 'gan', 'generative model', 'simulator']","""Learning disentangled representations of data is one of the central themes in unsupervised learning in general and generative modelling in particular. In this work, we tackle a slightly more intricate scenario where the observations are generated from a conditional distribution of some known control variate and some latent noise variate. To this end, we present a hierarchical model and a training method (CZ-GEM) that leverages some of the recent developments in likelihood-based and likelihood-free generative models. We show that, by formulation, CZ-GEM introduces the right inductive biases that ensure the disentanglement of the control from the noise variables, while also keeping the components of the control variate disentangled.
This is achieved without compromising on the quality of the generated samples. Our approach is simple, general, and can be applied both in supervised and unsupervised settings.""","""The paper addresses the problem of learning disentangled representations in supervised and unsupervised settings. In general, the problem of representation learning is of course a core problem for ICLR. However, in the set-up described by the authors, R2 commented that the supervised set-up is a bit unnatural, as detailed labels need to be given (somewhat confusingly, the labels are called control variates in the paper). Several reviewers commented on the novelty of the paper being on the low side, with R2 commenting that the contribution is fairly small, and R3 noting similarities to StackGAN. There were also some comments on quality and clarity. On the topic of technical quality, R2 did note that the authors present extensive results, but R3 mentions that the case for the disentanglement improving is not sufficiently supported. In terms of clarity, there was some initial confusion about, e.g., the inference procedure, though the authors addressed these issues in the discussion. """ 465,"""GRASPEL: GRAPH SPECTRAL LEARNING AT SCALE""","['Spectral graph theory', 'graph learning', 'data clustering', 't-SNE visualization']","""Learning meaningful graphs from data plays an important role in many data mining and machine learning tasks, such as data representation and analysis, dimension reduction, data clustering, and visualization. In this work, we present a scalable spectral approach to graph learning from data. By limiting the precision matrix to be a graph Laplacian, our approach aims to estimate ultra-sparse weighted graphs and has a clear connection with the prior graphical Lasso method. By interleaving nearly-linear time spectral graph sparsification, coarsening and embedding procedures, ultra-sparse yet spectrally-stable graphs can be iteratively constructed in a highly scalable manner. Compared with prior graph learning approaches that do not scale to large problems, our approach is highly scalable for constructing graphs that can immediately lead to substantially improved computing efficiency and solution quality for a variety of data mining and machine learning applications, such as spectral clustering (SC) and t-Distributed Stochastic Neighbor Embedding (t-SNE). ""","""This paper proposes a scalable approach for graph learning from data. The reviewers think the approach appears heuristic and it is not clear that the algorithm is optimizing the proposed sparse graph recovery objective. """ 466,"""Biologically inspired sleep algorithm for increased generalization and adversarial robustness in deep neural networks""","['Adversarial Robustness', 'Generalization', 'Neural Computing', 'Deep Learning']","""Current artificial neural networks (ANNs) can perform and excel at a variety of tasks ranging from image classification to spam detection through training on large datasets of labeled data. While the trained network may perform well on similar testing data, inputs that differ even slightly from the training data may trigger unpredictable behavior. Due to this limitation, it is possible to design inputs with very small perturbations that can result in misclassification. These adversarial attacks present a security risk to deployed ANNs and indicate a divergence between how ANNs and humans perform classification.
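A learned ultra-sparse graph of the kind GRASPEL (paper 465) produces is typically consumed by spectral methods downstream. The sketch below shows that step on a toy planted two-cluster graph: form the normalized Laplacian, take its bottom nontrivial eigenvectors as the embedding, and read off clusters. The toy graph and the normalization choice are assumptions for illustration:

    # Hedged sketch of spectral embedding/clustering from a sparse graph.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 40
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            same = (i < n // 2) == (j < n // 2)
            if rng.random() < (0.3 if same else 0.02):
                A[i, j] = A[j, i] = 1.0

    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt     # normalized Laplacian

    vals, vecs = np.linalg.eigh(L)
    embedding = vecs[:, 1:3]        # skip the trivial bottom eigenvector
    # the sign of the Fiedler vector typically separates the planted clusters
    labels = (embedding[:, 0] > 0).astype(int)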
Humans are robust to noise and are capable of correctly classifying objects that are noisy, blurred, or otherwise distorted. It has been hypothesized that sleep promotes generalization of knowledge and improves robustness against noise in animals and humans. In this work, we utilize a biologically inspired sleep phase in ANNs and demonstrate the benefit of sleep both in defending against adversarial attacks and in increasing ANN classification robustness. We compare the sleep algorithm's performance on various robustness tasks with two previously proposed adversarial defenses - defensive distillation and fine-tuning. We report an increase in robustness after the sleep phase to adversarial attacks as well as to general image distortions for three datasets: MNIST, CUB200, and a toy dataset. Overall, these results demonstrate the potential for biologically inspired solutions to solve existing problems in ANNs and guide the development of more robust, human-like ANNs.""","""""Sleep"" is introduced as a way of increasing robustness in neural network training. To sleep, the network is converted into a spiking network and goes through phases of more and less intense activation. The results are quite good when it comes to defending against adversarial examples. Reviewers agree that the method is novel and interesting. The authors responded to the reviewers' questions (one of the reviewers had a quite extensive set of questions) satisfactorily, and improved the paper significantly in the process. I think the paper should be accepted on the grounds of novelty and good results.""" 467,"""Utilizing Edge Features in Graph Neural Networks via Variational Information Maximization""","['Graph Neural Network', 'Edge Feature', 'Mutual Information']","""Graph Neural Networks (GNNs) broadly follow the scheme that the representation vector of each node is updated recursively using the message from neighbor nodes, where the message of a neighbor is usually pre-processed with a parameterized transform matrix. To make better use of edge features, we propose the Edge Information maximized Graph Neural Network (EIGNN) that maximizes the Mutual Information (MI) between edge features and message passing channels. The MI is reformulated as a differentiable objective via a variational approach. We theoretically show that the newly introduced objective enables the model to preserve edge information, and empirically corroborate the enhanced performance of MI-maximized models across a broad range of learning tasks including regression on molecular graphs and relation prediction in knowledge graphs.""","""This paper proposes an auxiliary loss based on mutual information for graph neural networks. The loss maximizes the mutual information between the edge representation and the corresponding edge feature in the GNN message passing function. GNNs with edge features have already been proposed in the literature. Furthermore, the reviewers think the paper needs further improvement in explaining more clearly the motivation and rationale behind the method. """ 468,"""Progressive Memory Banks for Incremental Domain Adaptation""","['natural language processing', 'domain adaptation']","""This paper addresses the problem of incremental domain adaptation (IDA) in natural language processing (NLP). We assume the domains arrive one after another, and that we can only access data in the current domain. The goal of IDA is to build a unified model performing well on all the domains that we have encountered.
We adopt the recurrent neural network (RNN) widely used in NLP, but augment it with a directly parameterized memory bank, which is retrieved by an attention mechanism at each step of the RNN transition. The memory bank provides a natural way of performing IDA: when adapting our model to a new domain, we progressively add new slots to the memory bank, which increases the number of parameters, and thus the model capacity. We learn the new memory slots and fine-tune existing parameters by back-propagation. Experimental results show that our approach achieves significantly better performance than fine-tuning alone. Compared with expanding hidden states, our approach is more robust for old domains, as shown by both empirical and theoretical results. Our model also outperforms previous work on IDA, including elastic weight consolidation and progressive neural networks, in the experiments.""","""This paper introduces an RNN-based approach to incremental domain adaptation in natural language processing, where the RNN is progressively augmented with a parameterized memory bank, which is shown to be better than expanding the RNN states. Reviewers and AC acknowledge that this paper is well written with interesting ideas and practical value. Domain adaptation in the incremental setting, where domains come in a streaming way with only the current one accessible, can find some realistic application scenarios. The proposed extensible attention mechanism is solid and works well on several NLP tasks. Several concerns were raised by the reviewers regarding the comparative and ablation studies, which were well resolved in the rebuttal. The authors are encouraged to generalize their approach to application domains other than NLP to show the generality of their approach. I recommend acceptance.""" 469,"""Variable Complexity in the Univariate and Multivariate Structural Causal Model""",[],"""We show that by comparing the individual complexities of univariate cause and effect in the Structural Causal Model, one can identify the cause and the effect, without considering their interaction at all. The entropy of each variable is ineffective in measuring the complexity, and we propose to capture it by an autoencoder that operates on the list of sorted samples. Comparing the reconstruction errors of the two autoencoders, one for each variable, is shown to perform well on the accepted benchmarks of the field. In the multivariate case, where one can ensure that the complexities of the cause and effect are balanced, we propose a new method that mimics the disentangled structure of the causal model. We extend the results of~\cite{Zhang:2009:IPC:1795114.1795190} to the multidimensional case, showing that such modeling is only likely in the direction of causality. Furthermore, the learned model is shown theoretically to perform the separation into the causal component and the residual (noise) component. Our multidimensional method obtains a significantly higher accuracy than the literature methods.""","""The author response and revisions to the manuscript motivated two reviewers to increase their scores to weak accept. While these revisions increased the quality of the work, the overall assessment is just shy of the threshold for inclusion.""" 470,"""Spectral Embedding of Regularized Block Models""","['Spectral embedding', 'regularization', 'block models', 'clustering']","""Spectral embedding is a popular technique for the representation of graph data.
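A minimal sketch of the progressive memory bank of paper 468: the RNN state attends over a directly parameterized memory, and new rows are appended when a new domain arrives, growing capacity without discarding old slots. Dimensions and the dot-product attention form are assumptions for illustration:

    # Hedged sketch of attention over a progressively growing memory bank.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 32
    memory = rng.normal(size=(8, d))          # slots learned on domain 1

    def read(h, M):
        logits = M @ h / np.sqrt(d)           # dot-product attention scores
        a = np.exp(logits - logits.max())
        a /= a.sum()
        return a @ M                          # attention-weighted memory readout

    h = rng.normal(size=d)                    # current RNN hidden state
    context = read(h, memory)                 # used in the RNN transition

    # a new domain arrives: progressively append fresh slots (old ones are
    # fine-tuned rather than discarded), growing capacity without a reset
    new_slots = rng.normal(size=(4, d)) * 0.01
    memory = np.vstack([memory, new_slots])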
Several regularization techniques have been proposed to improve the quality of the embedding with respect to downstream tasks like clustering. In this paper, we explain, on a simple block model, the impact of complete graph regularization, whereby a constant is added to all entries of the adjacency matrix. Specifically, we show that the regularization forces the spectral embedding to focus on the largest blocks, making the representation less sensitive to noise or outliers. We illustrate these results on both synthetic and real data, showing how regularization improves standard clustering scores. ""","""The paper proposes a nice and easy way to regularize spectral graph embeddings, and explains the effect through a nice set of experiments. Therefore, I recommend acceptance.""" 471,"""Reweighted Proximal Pruning for Large-Scale Language Representation""","['Language Representation', 'Machine Learning', 'Deep Learning', 'Optimizer', 'Statistical Learning', 'Model Compression']","""Recently, pre-trained language representations flourish as the mainstay of the natural language understanding community, e.g., BERT. These pre-trained language representations can create state-of-the-art results on a wide range of downstream tasks. Along with continued significant performance improvements, the size and complexity of these pre-trained neural models continue to increase rapidly. Is it possible to compress these large-scale language representation models? How will the pruned language representation affect the downstream multi-task transfer learning objectives? In this paper, we propose Reweighted Proximal Pruning (RPP), a new pruning method specifically designed for a large-scale language representation model. Through experiments on SQuAD and the GLUE benchmark suite, we show that proximally pruned BERT keeps high accuracy for both the pre-training task and the downstream multiple fine-tuning tasks at a high prune ratio. RPP provides a new perspective to help us analyze what a large-scale language representation might learn. Additionally, RPP makes it possible to deploy a large state-of-the-art language representation model such as BERT on a series of distinct devices (e.g., online servers, mobile phones, and edge devices).""","""This paper proposes a novel pruning method for use with transformer text encoding models like BERT, and shows that it can dramatically reduce the number of non-zero weights in a trained model while only slightly harming performance. This is one of the hardest cases in my pile. The topic is obviously timely and worthwhile. None of the reviewers was able to give a high-confidence assessment, but the reviews were all ultimately leaning positive. However, the reviewers didn't reach a clear consensus on the main strengths of the paper, even after some private discussion, and they raised many concerns. These concerns, taken together, make me doubt that the current paper represents a substantial, sound contribution to the model compression literature in NLP. I'm voting to reject, on the basis of: - Recurring concerns about missing strong baselines, which make it less clear that the new method is an ideal choice. - Relatively weak motivations for the proposed method (pruning a pre-trained model before fine-tuning) in the proposed application domain (mobile devices).
- Recurring concerns about thin analysis.""" 472,"""Deep Imitative Models for Flexible Inference, Planning, and Control""","['imitation learning', 'planning', 'autonomous driving']","""Imitation Learning (IL) is an appealing approach to learn desirable autonomous behavior. However, directing IL to achieve arbitrary goals is difficult. In contrast, planning-based algorithms use dynamics models and reward functions to achieve goals. Yet, reward functions that evoke desirable behavior are often difficult to specify. In this paper, we propose ""Imitative Models"" to combine the benefits of IL and goal-directed planning. Imitative Models are probabilistic predictive models of desirable behavior able to plan interpretable expert-like trajectories to achieve specified goals. We derive families of flexible goal objectives, including constrained goal regions, unconstrained goal sets, and energy-based goals. We show that our method can use these objectives to successfully direct behavior. Our method substantially outperforms six IL approaches and a planning-based approach in a dynamic simulated autonomous driving task, and is efficiently learned from expert demonstrations without online data collection. We also show our approach is robust to poorly-specified goals, such as goals on the wrong side of the road.""","""This paper proposes to build an 'imitative model' to improve the performance of imitation learning. The main idea is to combine the model-based RL type of work with the imitation learning approach. The model is trained using a probabilistic method and can help the agent achieve goals that were not easy to reach with previous approaches. Reviewers 2 and 3 strongly agree that the paper should be accepted. R3 has increased their score after the rebuttal, and the authors' response helped in this case. Based on the reviewers' scores, I recommend accepting this paper.""" 473,"""Supervised learning with incomplete data via sparse representations""","['Incomplete data', 'supervised learning', 'sparse representations']","""This paper addresses the problem of training a classifier on incomplete data and its application to a complete or incomplete test dataset. A supervised learning method is developed to train a general classifier, such as a logistic regression or a deep neural network, using only a limited number of observed entries, assuming sparse representations of data vectors on an unknown dictionary. The proposed method simultaneously learns the classifier, the dictionary and the corresponding sparse representations of each input data sample. A theoretical analysis is also provided comparing this method with the standard imputation approach, which consists of performing data completion followed by training the classifier based on the reconstructions. The limitations of this ""sequential"" approach are identified, and it is described how the proposed ""simultaneous"" method can overcome the problem of indiscernible observations. Additionally, it is shown that, if it is possible to train a classifier on incomplete observations so that its reconstructions are well separated by a hyperplane, then the same classifier also correctly separates the original (unobserved) data samples.
Extensive simulation results are presented on synthetic and well-known reference datasets, demonstrating the effectiveness of the proposed method compared to traditional data imputation methods.""","""This was a difficult paper to decide, given the strong disagreement between reviewer assessments. After the discussion it became clear that the paper tackles some well-studied issues while neglecting to cite some relevant works. The significance and novelty of the contribution were directly challenged, yet I could not see a convincing case presented to mitigate these criticisms. The paper needs to do a better job of placing the work in the context of the existing literature, and establishing the significance and novelty of its main contributions.""" 474,"""Transformer-XH: Multi-Evidence Reasoning with eXtra Hop Attention""","['Transformer-XH', 'multi-hop QA', 'fact verification', 'extra hop attention', 'structured modeling']","""Transformers have achieved new heights modeling natural language as a sequence of text tokens. However, in many real-world scenarios, textual data inherently exhibits structures beyond a linear sequence such as trees and graphs; many tasks require reasoning with evidence scattered across multiple pieces of text. This paper presents Transformer-XH, which uses eXtra Hop attention to enable intrinsic modeling of structured texts in a fully data-driven way. Its new attention mechanism naturally hops across the connected text sequences in addition to attending over tokens within each sequence. Thus, Transformer-XH better conducts joint multi-evidence reasoning by propagating information between documents and constructing global contextualized representations. On multi-hop question answering, Transformer-XH leads to a simpler multi-hop QA system which outperforms previous state-of-the-art on the HotpotQA FullWiki setting. On FEVER fact verification, applying Transformer-XH provides state-of-the-art accuracy and excels on claims whose verification requires multiple evidence.""","""This work examines a problem that is of considerable interest to the community and does a good job of presenting the work. The AC recommends acceptance.""" 475,"""Optimal Strategies Against Generative Attacks""",[],"""Generative neural models have improved dramatically recently. With this progress comes the risk that such models will be used to attack systems that rely on sensor data for authentication and anomaly detection. Many such learning systems are installed worldwide, protecting critical infrastructure or private data against malfunction and cyber attacks. We formulate the scenario of such an authentication system facing generative impersonation attacks, characterize it from a theoretical perspective and explore its practical implications. In particular, we ask fundamental theoretical questions in learning, statistics and information theory: How hard is it to detect a ""fake reality""? How much data does the attacker need to collect before it can reliably generate nominally-looking artificial data? Are there optimal strategies for the attacker or the authenticator? We cast the problem as a maximin game, characterize the optimal strategy for both attacker and authenticator in the general case, and provide the optimal strategies in closed form for the case of Gaussian source distributions. Our analysis reveals the structure of the optimal attack and the relative importance of data collection for both authenticator and attacker.
Based on these insights, we design practical learning approaches and show that they result in models that are more robust to various attacks on real-world data.""","""This paper concerns the problem of defending against generative ""attacks"": that is, falsification of data for malicious purposes through the use of synthesized data based on ""leaked"" samples of real data. The paper casts the problem formally and assesses the problem of authentication in terms of the sample complexity at test time and the sample budget of the attacker. The authors prove that a Nash equilibrium exists, derive a closed form for the special case of multivariate Gaussian data, and propose an algorithm called GAN in the Middle leveraging the developed principles, showing that an implementation performs better than authentication baselines and suggesting other applications. Reviewers were overall very positive, in agreement that the problem addressed is important and the contribution made is significant. Most criticisms were superficial. This is a dense piece of work, and the presentation could still be improved. However this is clearly a significant piece of work addressing a problem of increasing importance, and is worthy of acceptance.""" 476,"""Contextual Text Style Transfer""",[],"""In this paper, we introduce a new task, Contextual Text Style Transfer, to translate a sentence within a paragraph context into the desired style (e.g., informal to formal, offensive to non-offensive). Two new datasets, Enron-Context and Reddit-Context, are introduced for this new task, focusing on formality and offensiveness, respectively. Two key challenges exist in contextual text style transfer: 1) how to preserve the semantic meaning of the target sentence and its consistency with the surrounding context when generating an alternative sentence with a specific style; 2) how to deal with the lack of labeled parallel data. To address these challenges, we propose a Context-Aware Style Transfer (CAST) model, which leverages both parallel and non-parallel data for joint model training. For parallel training data, CAST uses two separate encoders to encode each input sentence and its surrounding context, respectively. The encoded feature vector, together with the target style information, is then used to generate the target sentence. A classifier is further used to ensure contextual consistency of the generated sentence. In order to leverage massive non-parallel corpora and to enhance sentence encoder and decoder training, additional self-reconstruction and back-translation losses are introduced. Experimental results on Enron-Context and Reddit-Context demonstrate the effectiveness of the proposed model over state-of-the-art style transfer methods, across style accuracy, content preservation, and contextual consistency metrics.""","""The paper proposes a new style transfer task, contextual style transfer, which hypothesises that the document context of the sentence is important, as opposed to previous work which only looked at sentence context. A major contribution of the paper is the creation of two new crowd-sourced datasets, Enron-Context and Reddit-Context, focussed on formality and offensiveness. The reviewers are skeptical that it was context that really improved results on the style transfer tasks. The authors responded to all the reviewers but there was no further discussion. I feel that this paper has not convinced me or the reviewers of the strength of its contribution and, although interesting, I recommend that it be rejected.
""" 477,"""A Mechanism of Implicit Regularization in Deep Learning""","['Implicit Regularization', 'Generalization', 'Deep Neural Network', 'Low Complexity']","""Despite a lot of theoretical efforts, very little is known about mechanisms of implicit regularization by which the low complexity contributes to generalization in deep learning. In particular, causality between the generalization performance, implicit regularization and nonlinearity of activation functions is one of the basic mysteries of deep neural networks (DNNs). In this work, we introduce a novel technique for DNNs called random walk analysis and reveal a mechanism of the implicit regularization caused by nonlinearity of ReLU activation. Surprisingly, our theoretical results suggest that the learned DNNs interpolate almost linearly between data points, which leads to the low complexity solutions in the over-parameterized regime. As a result, we prove that stochastic gradient descent can learn a class of continuously differentiable functions with generalization bounds of the order of pseudo-formula ( pseudo-formula : the number of samples). Furthermore, our analysis is independent of the kernel methods, including neural tangent kernels.""","""This paper analyzes a mechanism of the implicit regularization caused by nonlinearity of ReLU activation, and suggests that the learned DNNs interpolate almost linearly between data points, which leads to the low complexity solutions in the over-parameterized regime. The main objections include (1) some claims in this paper are not appropriate; (2) lack of proper comparison with prior work; and many other issues in the presentation. I agree with the reviewers evaluation and encourage the authors to improve this paper and resubmit to future conference. """ 478,"""Spike-based causal inference for weight alignment""","['causal', 'inference', 'weight', 'transport', 'rdd', 'regression', 'discontinuity', 'design', 'cifar10', 'biologically', 'plausible']","""In artificial neural networks trained with gradient descent, the weights used for processing stimuli are also used during backward passes to calculate gradients. For the real brain to approximate gradients, gradient information would have to be propagated separately, such that one set of synaptic weights is used for processing and another set is used for backward passes. This produces the so-called ""weight transport problem"" for biological models of learning, where the backward weights used to calculate gradients need to mirror the forward weights used to process stimuli. This weight transport problem has been considered so hard that popular proposals for biological learning assume that the backward weights are simply random, as in the feedback alignment algorithm. However, such random weights do not appear to work well for large networks. Here we show how the discontinuity introduced in a spiking system can lead to a solution to this problem. The resulting algorithm is a special case of an estimator used for causal inference in econometrics, regression discontinuity design. We show empirically that this algorithm rapidly makes the backward weights approximate the forward weights. As the backward weights become correct, this improves learning performance over feedback alignment on tasks such as Fashion-MNIST and CIFAR-10. 
Our results demonstrate that a simple learning rule in a spiking network can allow neurons to produce the right backward connections and thus solve the weight transport problem.""","""All reviewers agree the paper is well written, and there is a good consensus on acceptance. The last reviewer was concerned about a lack of diversity in datasets, but this was addressed in the rebuttal.""" 479,"""RISE and DISE: Two Frameworks for Learning from Time Series with Missing Data""","['Time Series', 'Missing Data', 'RNN']","""Time series with missing data constitute an important setting for machine learning. The most successful prior approaches for modeling such time series are based on recurrent neural networks that learn to impute unobserved values and then treat the imputed values as observed. We start by introducing Recursive Input and State Estimation (RISE), a general framework that encompasses such prior approaches as specific instances. Since RISE instances tend to suffer from poor long-term performance as errors are amplified in feedback loops, we propose Direct Input and State Estimation (DISE), a novel framework in which input and state representations are learned from observed data only. The key to DISE is to include time information in representation learning, which enables the direct modeling of arbitrary future time steps by effectively skipping over missing values, rather than imputing them, thus overcoming the error amplification encountered by RISE methods. We benchmark instances of both frameworks on two forecasting tasks, observing that DISE achieves state-of-the-art performance on both.""","""The paper attacks the important problem of learning time series models with missing data and proposes two learning frameworks, RISE and DISE, for this problem. The reviewers had several concerns about the paper and experimental setup and agree that this paper is not yet ready for publication. Please pay careful attention to the reviewer comments and particularly address the comments related to experimental design, clarity, and references to prior work while editing the paper.""" 480,"""Efficient generation of structured objects with Constrained Adversarial Networks""","['deep generative models', 'generative adversarial networks', 'constraints']","""Despite their success, generative adversarial networks (GANs) cannot easily generate structured objects like molecules or game maps. The issue is that such objects must satisfy structural requirements (e.g., molecules must be chemically valid, game maps must guarantee reachability of the end goal) that are difficult to capture with examples alone. As a remedy, we propose constrained adversarial networks (CANs), which embed the constraints into the model during training by penalizing the generator whenever it outputs invalid structures. As in unconstrained GANs, new objects can be sampled straightforwardly from the generator, but in addition they satisfy the constraints with high probability. Our approach handles arbitrary logical constraints and leverages knowledge compilation techniques to efficiently evaluate the expected disagreement between the model and the constraints. This setup is further extended to hybrid logical-neural constraints for capturing complex requirements like graph reachability.
An extensive empirical analysis on constrained images, molecules, and video game levels shows that CANs efficiently generate valid structures that are both high-quality and novel.""","""This paper develops ideas for enabling data generation with GANs in the presence of structured constraints on the data manifold. This problem is interesting and quite relevant to the ICLR community. The reviewers raised concerns about the similarity to prior work (Xu et al '17), and missing comparisons to previous approaches that study this problem (e.g. Hu et al '18), which make it difficult to judge the significance of the work. Overall, the paper is slightly below the bar for acceptance.""" 481,"""Continuous Meta-Learning without Tasks""","['Meta-learning', 'Continual learning', 'changepoint detection', 'Bayesian learning']","""Meta-learning is a promising strategy for learning to efficiently learn within new tasks, using data gathered from a distribution of tasks. However, the meta-learning literature thus far has focused on the task-segmented setting, where at train-time, offline data is assumed to be split according to the underlying task, and at test-time, the algorithms are optimized to learn in a single task. In this work, we enable the application of generic meta-learning algorithms to settings where this task segmentation is unavailable, such as continual online learning with a time-varying task. We present meta-learning via online changepoint analysis (MOCA), an approach which augments a meta-learning algorithm with a differentiable Bayesian changepoint detection scheme. The framework allows both training and testing directly on time series data without segmenting it into discrete tasks. We demonstrate the utility of this approach on a nonlinear meta-regression benchmark as well as two meta-image-classification benchmarks.""","""In this paper the authors view meta-learning from a general, less-studied viewpoint, which does not make the typical assumption that task segmentation is provided. In this context, change-point analysis is used as a tool to complement meta-learning in this expanded domain. The expansion of meta-learning to this more general and often more practical context is significant and the paper is generally well written. However, considering this particular (non)segmentation setting is not an entirely novel idea; for example, the reviewers have already pointed out [1] (which the authors agreed to discuss), and [2] is another relevant work. The authors are highly encouraged to incorporate results, or at least a discussion, with respect to at least [2]. It seems likely that inferring boundaries could be more powerful, but it is important to better motivate this for a final paper. Moreover, the paper could be strengthened by significantly expanding the discussion about the practical usefulness of the approach. R3 provides a suggestion in this direction, that is, to explore the performance in a situation where task segmentation is truly unavailable. [1] Rahaf et al. ""Task-Free Continual Learning"". [2] Riemer et al. ""Learning to learn without forgetting by maximizing transfer and minimizing interference"". """ 482,"""Network Pruning for Low-Rank Binary Index""","['Pruning', 'Model compression', 'Index compression', 'low-rank', 'binary matrix decomposition']",""" Pruning is an efficient model compression technique to remove redundancy in the connectivity of deep neural networks (DNNs).
A critical problem in representing sparse matrices after pruning is that if fewer bits are used for quantization and the pruning rate is increased, then the amount of index data becomes relatively larger. Moreover, an irregular index form leads to low parallelism for convolutions and matrix multiplications. In this paper, we propose a new network pruning technique that generates a low-rank binary index matrix to compress index data significantly. Specifically, the proposed compression method finds a particular fine-grained pruning mask that can be decomposed into two binary matrices, while decompression of the index data is performed by simple binary matrix multiplication. We also propose a tile-based factorization technique that not only lowers memory requirements but also enhances the compression ratio. Various DNN models (including conv layers and LSTM layers) can be pruned with far fewer indices compared to previous sparse matrix formats while maintaining the same pruning rate.""","""The submission proposes a method to improve over a standard binary network pruning strategy by the inclusion of a structured matrix product to encourage network weight sparsification that can have better memory and computational properties. The idea is well motivated, but there were reviewer concerns about the quality of writing and in particular the quality of the experiments. The reviewers were unanimous that the paper is not suitable for acceptance at ICLR, and no rebuttal was provided.""" 483,"""Lift-the-flap: what, where and when for context reasoning""","['contextual reasoning', 'visual recognition', 'human behavior', 'intelligent sampling']","""Context reasoning is critical in a wide variety of applications where current inputs need to be interpreted in the light of previous experience and knowledge. Both spatial and temporal contextual information play a critical role in the domain of visual recognition. Here we investigate spatial constraints (what image features provide contextual information and where they are located), and temporal constraints (when different contextual cues matter) for visual recognition. The task is to reason about the scene context and infer what a target object hidden behind a flap is in a natural image. To tackle this problem, we first describe an online human psychophysics experiment recording active sampling via mouse clicks in lift-the-flap games and identify clicking patterns and features which are diagnostic for high contextual reasoning accuracy. As a proof of the usefulness of these clicking patterns and visual features, we extend a state-of-the-art recurrent model capable of attending to salient context regions, dynamically integrating useful information, making inferences, and predicting the class label for the target object over multiple clicks. The proposed model achieves human-level contextual reasoning accuracy, shares human-like sampling behavior and learns interpretable features for contextual reasoning.""","""The authors present the task lift-the-flap where an agent (artificial or human) is presented with a blurred image and a hidden item. The agent can de-blur parts of the image by clicking on them. The authors introduce a model for this task (ClickNet) and compare it against others. As reviewers point out, this paper presents an interesting set of experiments and analyses. Overall, this type of work can be quite influential as it gives an alternative way to improve our models by unveiling human strategies and using those as inductive biases for our models.
That being said, I find the conclusions of this paper quite narrow for the general audience of ICLR (as R2 and R3 also point out), as the authors look into an artificial task and show that ClickNet performs well. But what have we learned beyond that? How do we use these results to improve either our models or our understanding of these models? I believe these are the types of questions that are missing from the current version of the paper and that, if answered, would greatly increase its impact and relevance to the ICLR community. At the moment though, I cannot recommend this paper for acceptance. """ 484,"""Continuous Adaptation in Multi-agent Competitive Environments""","['multi-agent environment', 'continuous adaptation', 'Nash equilibrium', 'deep counterfactual regret minimization', 'reinforcement learning', 'stochastic game', 'baseball']","""In a multi-agent competitive environment, we would expect that an agent who can quickly adapt to environmental changes has a higher probability of surviving and beating other agents. In this paper, to discuss whether the adaptation capability can help a learning agent to improve its competitiveness in a multi-agent environment, we construct a simplified baseball game scenario to develop and evaluate the adaptation capability of learning agents. Our baseball game scenario is modeled as a two-player zero-sum stochastic game with only the final reward. We propose a modified Deep CFR algorithm to learn a strategy that approximates the Nash equilibrium strategy. We also form several teams, with different teams adopting different playing strategies, trying to analyze (1) whether an adaptation mechanism can help in increasing the winning percentage and (2) what kind of initial strategies can help a team to get a higher winning percentage. The experimental results show that the learned Nash-equilibrium strategy is very similar to real-life baseball game strategy. Besides, with the proposed strategy adaptation mechanism, the winning percentage can be increased for the team with a Nash-equilibrium initial strategy. Nevertheless, based on the same adaptation mechanism, those teams with deterministic initial strategies actually become less competitive.""","""This paper studies whether adopting strategy adaptation mechanisms helps players improve their performance in zero-sum stochastic games (in this case baseball). Moreover, they study two questions in particular: (1) whether adaptation techniques are helpful when faced with a small number of iterations, and (2) what the effect of different initial strategies is when both teams adopt the same adaptation technique. Reviewers expressed concerns regarding the fact that the authors' adaptation techniques improve upon initial strategies, which seems to indicate that their initial strategies were not Nash (despite the use of CFR). In the absence of a theory of why this seems to happen in the current setup (and of whether the initial strategies are indeed Nash and why they improve), stronger empirical evidence from more rigorous experiments seems necessary for recommending acceptance of this paper.""" 485,"""Effect of Activation Functions on the Training of Overparametrized Neural Nets""","['activation functions', 'deep learning theory', 'neural networks']","""It is well-known that overparametrized neural networks trained using gradient-based methods quickly achieve small training error with appropriate hyperparameter settings.
Recent papers have proved this statement theoretically for highly overparametrized networks under reasonable assumptions. These results either assume that the activation function is ReLU or they depend on the minimum eigenvalue of a certain Gram matrix. In the latter case, existing works only prove that this minimum eigenvalue is non-zero and do not provide quantitative bounds showing that this eigenvalue is large. Empirically, a number of alternative activation functions have been proposed which tend to perform better than ReLU at least in some settings, but no clear understanding has emerged. This state of affairs underscores the importance of theoretically understanding the impact of activation functions on training. In the present paper, we provide theoretical results about the effect of the activation function on the training of highly overparametrized 2-layer neural networks. A crucial property that governs the performance of an activation is whether or not it is smooth: For non-smooth activations such as ReLU, SELU, and ELU, which are not smooth because there is a point where either the first-order or second-order derivative is discontinuous, all eigenvalues of the associated Gram matrix are large under minimal assumptions on the data. For smooth activations such as tanh, swish, and polynomial, which have derivatives of all orders at all points, the situation is more complex: if the subspace spanned by the data has small dimension, then the minimum eigenvalue of the Gram matrix can be small, leading to slow training. But if the dimension is large and the data satisfies another mild condition, then the eigenvalues are large. If we allow deep networks, then the small data dimension is not a limitation provided that the depth is sufficient. We discuss a number of extensions and applications of these results.""","""The article studies the role of the activation function in the learning of two-layer overparametrized networks, presenting results on the minimum eigenvalue of the Gram matrix that appears in this type of analysis and which controls the rate of convergence. The article makes numerous observations contributing to the development of principles for the design of activation functions and a better understanding of an active area of investigation, namely convergence in overparametrized nets. The reviewers were generally positive about this article. """ 486,"""Encoding word order in complex embeddings""","['word embedding', 'complex-valued neural network', 'position embedding']","""Sequential word order is important when processing text. Currently, neural networks (NNs) address this by modeling word position using position embeddings. The problem is that position embeddings capture the position of individual words, but not the ordered relationship (e.g., adjacency or precedence) between individual word positions. We present a novel and principled solution for modeling both the global absolute positions of words and their order relationships. Our solution generalizes word embeddings, previously defined as independent vectors, to continuous word functions over a variable (position). The benefit of continuous functions over variable positions is that word representations shift smoothly with increasing positions. Hence, word representations in different positions can correlate with each other in a continuous function. The general solution of these functions can be extended to complex-valued variants.
We extend CNN, RNN and Transformer NNs to complex-valued versions to incorporate our complex embedding (we make all code available). Experiments on text classification, machine translation and language modeling show gains over both classical word embeddings and position-enriched word embeddings. To our knowledge, this is the first work in NLP to link imaginary numbers in complex-valued representations to concrete meanings (i.e., word order).""","""This paper describes a new language model that captures both the position of words and their order relationships. This redefines word embeddings (previously thought of as fixed and independent vectors) to be functions of position. This idea is implemented in several models (CNN, RNN and Transformer NNs) to show improvements on multiple tasks and datasets. One reviewer asked for additional experiments, which the authors provided, and which still supported their methodology. In the end, the reviewers agreed this paper should be accepted.""" 487,"""Deep Audio Prior""","['deep audio prior', 'blind sound separation', 'deep learning', 'audio representation']","""Deep convolutional neural networks are known to specialize in distilling compact and robust priors from a large amount of data. We are interested in applying deep networks in the absence of a training dataset. In this paper, we introduce the deep audio prior (DAP), which leverages the structure of a network and the temporal information in a single audio file. Specifically, we demonstrate that a randomly-initialized neural network can be used with a carefully designed audio prior to tackle challenging audio problems such as universal blind source separation, interactive audio editing, audio texture synthesis, and audio co-separation. To understand the robustness of the deep audio prior, we construct a benchmark dataset Universal-150 for universal sound source separation with a diverse set of sources. We show superior audio results compared to previous work on both qualitative and quantitative evaluations. We also perform a thorough ablation study to validate our design choices.""","""This paper proposes to use a CNN's prior to deal with tasks in audio processing. The motivation is weak and the presentation is not clear. The technical contribution is trivial.""" 488,"""Frequency Analysis for Graph Convolution Network""","['graph signal processing', 'frequency analysis', 'graph convolution neural network', 'simplified convolution network', 'semi-supervised vertex classification']","""In this work, we develop quantitative results on the learnability of a two-layer Graph Convolutional Network (GCN). Instead of analyzing GCN under some classes of functions, our approach provides a quantitative gap between a two-layer GCN and a two-layer MLP model. Our analysis is based on the graph signal processing (GSP) approach, which can provide much more useful insights than the message-passing computational model. Interestingly, based on our analysis, we have been able to empirically demonstrate a few cases where GCN and other state-of-the-art models cannot learn even when true vertex features are extremely low-dimensional. To demonstrate our theoretical findings and propose a solution to the aforementioned adversarial cases, we build a proof-of-concept graph neural network model with stacked filters named Graph Filters Neural Network (gfNN). ""","""This paper studies two-layer graph convolutional networks and two-layer multi-layer perceptrons and develops quantitative results on their effect in signal processing settings.
The paper received 3 reviews by experts working in this area. R1 recommends Weak Accept, indicating that the paper provides some useful insight (e.g. into when graph neural networks are or are not appropriate for particular problems) and poses some specific technical questions. In follow-up discussions after the author response, R1 and the authors agree that there are some overclaims in the paper but that these could be addressed with some toning down of claims and additional discussion. R2 recommends Weak Accept but raises several concerns about the technical contribution of the paper, indicating that some of the conclusions were already known or are unsurprising. R2 concludes ""I vote for weak accept, but I am fine if it is rejected."" R3 recommends Reject, also questioning the significance of the technical contribution and whether some of the conclusions are well-supported by experiments, as well as raising some minor concerns about clarity of writing. In their thoughtful responses, the authors acknowledge these concerns. Given the split decision, the AC also read the paper. While it is clear the paper has significant merit, the concerns about the significance of the contribution and support for conclusions (as acknowledged by the authors) are important, and the AC feels a revision of the paper and another round of peer review is really needed to flesh these issues out. """ 489,"""The Implicit Bias of Depth: How Incremental Learning Drives Generalization""","['gradient flow', 'gradient descent', 'implicit regularization', 'implicit bias', 'generalization', 'optimization', 'quadratic network', 'matrix sensing']","""A leading hypothesis for the surprising generalization of neural networks is that the dynamics of gradient descent bias the model towards simple solutions, by searching through the solution space in an incremental order of complexity. We formally define the notion of incremental learning dynamics and derive the conditions on depth and initialization for which this phenomenon arises in deep linear models. Our main theoretical contribution is a dynamical depth separation result, proving that while shallow models can exhibit incremental learning dynamics, they require the initialization to be exponentially small for these dynamics to present themselves. However, once the model becomes deeper, the dependence becomes polynomial and incremental learning can arise in more natural settings. We complement our theoretical findings by experimenting with deep matrix sensing, quadratic neural networks and with binary classification using diagonal and convolutional linear networks, showing all of these models exhibit incremental learning.""","""The paper studies the role of depth in incremental learning, defined as a favorable learning regime in which one searches through the hypothesis space in increasing order of complexity. Specifically, it establishes a dynamical depth separation result, whereby shallow models require exponentially smaller initializations than deep ones in order to operate in the incremental learning regime. Despite some concerns shared amongst reviewers about the significance of these results in explaining realistic deep models (that exhibit nonlinear behavior as well as interactions between neurons) and some remarks about the precision of some claims, the overall consensus -- also shared by the AC -- is that this paper puts forward an interesting phenomenon that will likely spark future research in this important direction. The AC thus recommends acceptance.
""" 490,"""Dropout: Explicit Forms and Capacity Control""",[],"""We investigate the capacity control provided by dropout in various machine learning problems. First, we study dropout for matrix sensing, where it induces a data-dependent regularizer that, in expectation, equals the weighted trace-norm of the product of the factors. In deep learning, we show that the data-dependent regularizer due to dropout directly controls the Rademacher complexity of the underlying class of deep neural networks. These developments enable us to give concrete generalization error bounds for the dropout algorithm in both matrix completion as well as training deep neural networks. We evaluate our theoretical findings on real-world datasets, including MovieLens, Fashion MNIST, and CIFAR-10.""","""The authors study dropout for matrix sensing and deep learning, and show that dropout induces a data-dependent regularizer in both cases. In both cases, dropout controls quantities that yield generalization bounds. Reviewers raised several concerns, and several of these were vehemently rebutted. The rhetoric of the back and forth slid into unfortunate territory, in my opinion, and I'd prefer not to see this sort of thing happen. On the one hand, I can sympathize with the reviewers trying to argue that (un)related work is not related work. On the other hand, it's best to be generous, or you run into this sort of mess. In the end, even the expert reviewers were unswayed. I suspect the next version of this paper may land more smoothly. While many of the technical issues are rebutted, one that caught my attention pertained to the empirical work. Reviewer #4 noticed that the empirical evaluations do not meet the sample complexity requirements for the bounds to be valid (nevermind loose). The response suggests this is simply a fact of making the bounds looser, but I suspect it may also change their form in this regime, potentially erasing the empirical findings. I suggest the authors carefully consider whether all assumptions are met, and relay this more carefully to readers.""" 491,"""Efficient Saliency Maps for Explainable AI""","['Saliency', 'XAI', 'Efficent', 'Information']","""We describe an explainable AI saliency map method for use with deep convolutional neural networks (CNN) that is much more efficient than popular gradient methods. It is also quantitatively similar or better in accuracy. Our technique works by measuring information at the end of each network scale. This is then combined into a single saliency map. We describe how saliency measures can be made more efficient by exploiting Saliency Map Order Equivalence. Finally, we visualize individual scale/layer contributions by using a Layer Ordered Visualization of Information. This provides an interesting comparison of scale information contributions within the network not provided by other saliency map methods. Our method is generally straight forward and should be applicable to the most commonly used CNNs. (Full source code is available at pseudo-url).""","""The paper presents an efficient approach to computer saliency measures by exploiting saliency map order equivalence (SMOE), and visualization of individual layer contribution by a layer ordered visualization of information. The authors did a good job at addressing most issues raised in the reviews. In the end, two major concerns remained not fully addressed: one is the motivation of efficiency, and the other is how much better SMOE is compared with existing statistics. 
I think these two issues also determine how significant the work is. After discussion, we agree that while the revised draft is much improved, the work itself is not groundbreaking. Given many other excellent papers on related topics, the paper cannot make the cut for ICLR. """ 492,"""MODiR: Multi-Objective Dimensionality Reduction for Joint Data Visualisation""","['dimensionality reduction', 'visualisation', 'text visualisation', 'network drawing']","""Many large text collections exhibit graph structures, either inherent to the content itself or encoded in the metadata of the individual documents. Example graphs extracted from document collections are co-author networks, citation networks, or named-entity-cooccurrence networks. Furthermore, social networks can be extracted from email corpora, tweets, or social media. When it comes to visualising these large corpora, either the textual content or the network graph is used. In this paper, we propose to incorporate both text and graph, to not only visualise the semantic information encoded in the documents' content but also the relationships expressed by the inherent network structure. To this end, we introduce a novel algorithm based on multi-objective optimisation to jointly position embedded documents and graph nodes in a two-dimensional landscape. We illustrate the effectiveness of our approach with real-world datasets and show that we can capture the semantics of large document collections better than other visualisations based on either the content or the network information.""","""There is a consensus among reviewers that the paper should not be accepted. No rebuttal was provided, so the paper is rejected. """ 493,"""On the Decision Boundaries of Deep Neural Networks: A Tropical Geometry Perspective""","['Decision boundaries', 'Neural Network', 'Tropical Geometry', 'Network Pruning', 'Adversarial Attacks', 'Lottery Ticket Hypothesis']","""This work tackles the problem of characterizing and understanding the decision boundaries of neural networks with piece-wise linear non-linear activations. We use tropical geometry, a new development in the area of algebraic geometry, to provide a characterization of the decision boundaries of a simple neural network of the form (Affine, ReLU, Affine). Specifically, we show that the decision boundaries are a subset of a tropical hypersurface, which is intimately related to a polytope formed by the convex hull of two zonotopes. The generators of the zonotopes are precise functions of the neural network parameters. We utilize this geometric characterization to shed light and provide a new perspective on three tasks. In doing so, we propose a new tropical perspective for the lottery ticket hypothesis, where we see the effect of different initializations on the tropical geometric representation of the decision boundaries. Also, we leverage this characterization as a new set of tropical regularizers, which deal directly with the decision boundaries of a network. We investigate the use of these regularizers in neural network pruning (removing network parameters that do not contribute to the tropical geometric representation of the decision boundaries) and in generating adversarial input attacks (with input perturbations explicitly perturbing the decision boundaries' geometry to change the network prediction of the input).
""","""This paper studies the decision boundaries of a certain class of neural networks (piecewise linear, non-linear activation functions) using tropical geometry, a subfield of algebraic geometry that leverages piece-wise linear structures. Building on earlier work, such piecewise linear networks are shown to be represented as a tropical rational function. This characterisation is used to explain different phenomena of neural network training, such as the 'lottery ticket hypothesis', network pruning, and adversarial attacks. This paper received mixed reviews, owing to its very specialized area. Whereas R1 championed the submission for its technical novelty, the other reviewers felt the current exposition is too inaccessible and some application areas are not properly addressed. The AC shares these concerns, recommends rejection and strongly encourages the authors to address the reviewers concerns in the next iteration. """ 494,"""Function Feature Learning of Neural Networks""",[],"""We present a Function Feature Learning (FFL) method that can measure the similarity of non-convex neural networks. The function feature representation provides crucial insights into the understanding of the relations between different local solutions of identical neural networks. Unlike existing methods that use neuron activation vectors over a given dataset as neural network representation, FFL aligns weights of neural networks and projects them into a common function feature space by introducing a chain alignment rule. We investigate the function feature representation on Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), and Recurrent Neural Network (RNN), finding that identical neural networks trained with different random initializations on different learning tasks by the Stochastic Gradient Descent (SGD) algorithm can be projected into different fixed points. This finding demonstrates the strong connection between different local solutions of identical neural networks and the equivalence of projected local solutions. With FFL, we also find that the semantics are often presented in a bottom-up way. Besides, FFL provides more insights into the structure of local solutions. Experiments on CIFAR-100, NameData, and tiny ImageNet datasets validate the effectiveness of the proposed method.""","""This paper tackles an important problem: understanding if different NN solutions are similar or different. In the current form, however, the main motivation for the approach, and what the empirical results tell us, remains unclear. I read the paper after the updates and after reading reviews and author responses, and still had difficulty understanding the goals and outcomes of the experiments (such as what exactly is being reported as test accuracy and what is meant by: ""High test accuracy means that assumptions are reasonable.""). We highly recommend that the authors revisit the description of the motivation and approach based on comments from reviewers; further explain what is reported as test accuracy in the experiments; and more clearly highlight the insights obtain from the experiments. 
""" 495,"""Multi-source Multi-view Transfer Learning in Neural Topic Modeling with Pretrained Topic and Word Embeddings""","['Neural Topic Modeling', 'Transfer Learning', 'Unsupervised learning', 'Natural Language Processing']","""Though word embeddings and topics are complementary representations, several past works have only used pretrained word embeddings in (neural) topic modeling to address data sparsity problem in short text or small collection of documents. However, no prior work has employed (pretrained latent) topics in transfer learning paradigm. In this paper, we propose a framework to perform transfer learning in neural topic modeling using (1) pretrained (latent) topics obtained from a large source corpus, and (2) pretrained word and topic embeddings jointly (i.e., multiview) in order to improve topic quality, better deal with polysemy and data sparsity issues in a target corpus. In doing so, we first accumulate topics and word representations from one or many source corpora to build respective pools of pretrained topic (i.e., TopicPool) and word embeddings (i.e., WordPool). Then, we identify one or multiple relevant source domain(s) and take advantage of corresponding topics and word features via the respective pools to guide meaningful learning in the sparse target domain. We quantify the quality of topic and document representations via generalization (perplexity), interpretability (topic coherence) and information retrieval (IR) using short-text, long-text, small and large document collections from news and medical domains. We have demonstrated the state-ofthe- art results on topic modeling with the proposed transfer learning approaches.""","""This paper presents a transfer learning framework in neural topic modeling. Authors claim and reviewers agree that this view of transfer learning in the realm of topic modeling is novel. However, after much deliberation and discussion among the reviewers, we conclude that this paper does not contribute sufficient novelty in terms of the method. Also, reviewers find the experiments and results not sufficiently convincing. I sincerely thank the authors for submitting to ICLR and hope to see a revised paper in a future venue.""" 496,"""Explain Your Move: Understanding Agent Actions Using Specific and Relevant Feature Attribution""","['Deep Reinforcement Learning', 'Saliency maps', 'Chess', 'Go', 'Atari', 'Interpretable AI', 'Explainable AI']","""As deep reinforcement learning (RL) is applied to more tasks, there is a need to visualize and understand the behavior of learned agents. Saliency maps explain agent behavior by highlighting the features of the input state that are most relevant for the agent in taking an action. Existing perturbation-based approaches to compute saliency often highlight regions of the input that are not relevant to the action taken by the agent. Our proposed approach, SARFA (Specific and Relevant Feature Attribution), generates more focused saliency maps by balancing two aspects (specificity and relevance) that capture different desiderata of saliency. The first captures the impact of perturbation on the relative expected reward of the action to be explained. The second downweighs irrelevant features that alter the relative expected rewards of actions other than the action to be explained. We compare SARFA with existing approaches on agents trained to play board games (Chess and Go) and Atari games (Breakout, Pong and Space Invaders). 
We show through illustrative examples (Chess, Atari, Go), human studies (Chess), and automated evaluation methods (Chess) that SARFA generates saliency maps that are more interpretable for humans than existing approaches. For the code release and demo videos, see: pseudo-url.""","""A new method of calculating saliency maps for deep networks trained through RL (for example to play games) is presented. The method is aimed at explaining why moves were taken by showing which salient features influenced the move, and seems to work well based on experiments with Chess, Go, and several Atari games. Reviewer 2 had a number of questions related to the performance of the method under various conditions, and these were answered satisfactorily by the authors. This is a solid paper with good reasoning and results, though perhaps not super novel, as the basic idea of explaining policies with saliency is not new. It should be accepted for poster presentation. """ 497,"""Physics-Aware Flow Data Completion Using Neural Inpainting""","['neural inpainting', 'fluid dynamics', 'flow data completion', 'physics-aware network']","""In this paper we propose a physics-aware neural network for inpainting fluid flow data. We consider that flow field data inherently follows the solution of the Navier-Stokes equations and hence our network is designed to capture physical laws. We use a DenseBlock U-Net architecture combined with a stream function formulation to inpaint missing velocity data. Our loss functions represent the relevant physical quantities: velocity, velocity Jacobian, vorticity and divergence. Obstacles are treated as known priors, and each layer of the network receives the relevant information through concatenation with the previous layer's output. Our results demonstrate the network's capability for physics-aware completion tasks, and the presented ablation studies show the effectiveness of each proposed component.""","""The authors present a physics-aware model for inpainting fluid data. In particular, the authors extend the vanilla U-net architecture and add losses that explicitly bias the network towards physically meaningful solutions. While the reviewers found the work to be interesting, they raised a few questions/objections which are summarised below: 1) Novelty: The reviewers largely found the idea to be novel. I agree that this is indeed novel and a step in the right direction. 2) Experiments: The main objection was to the experimental methodology. In particular, since most of the experiments were on simulated data, the reviewers expected simulations where the test conditions were somewhat different from the training conditions. It is not very clear whether the training and test conditions were different, and it would have been useful if the authors had clarified this in the rebuttal. The reviewers have also suggested a more thorough ablation study. 3) Organisation: The authors could have used the space more effectively by providing additional details and ablation studies. Unfortunately, the authors did not engage with the reviewers and respond to their queries. I understand that this could have been because of the poor ratings which would have made the authors believe that a discussion wouldn't help. The reviewers have asked very relevant Qs and made some interesting suggestions about the experimental setup. I strongly recommend the authors to consider these during subsequent submissions.
Based on the reviewer comments and the lack of response from the authors, I recommend that the paper not be accepted. """ 498,"""Unifying Graph Convolutional Neural Networks and Label Propagation""","['graph convolutional neural networks', 'label propagation', 'node classification']","""Label Propagation (LPA) and Graph Convolutional Neural Networks (GCN) are both message passing algorithms on graphs. Both solve the task of node classification, but LPA propagates node label information across the edges of the graph, while GCN propagates and transforms node feature information. However, while the two are conceptually similar, the theoretical relation between LPA and GCN has not yet been investigated. Here we study the relationship between LPA and GCN in terms of two aspects: (1) feature/label smoothing, where we analyze how the feature/label of one node is spread over its neighbors; and (2) feature/label influence, i.e., how much the initial feature/label of one node influences the final feature/label of another node. Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification. In our unified model, edge weights are learnable, and the LPA serves as regularization to assist the GCN in learning proper edge weights that lead to improved classification performance. Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models. In a number of experiments on real-world graphs, our model shows superiority over state-of-the-art GCN-based methods in terms of node classification accuracy. ""","""The authors study the relationship between graph convolutional networks and label propagation and propose a model that unifies them. The reviewers liked the idea but felt that more extensive experiments are needed. The impact of labels especially needs to be studied in more depth.""" 499,"""Multi-Agent Hierarchical Reinforcement Learning for Humanoid Navigation""","['Multi-Agent Reinforcement Learning', 'Reinforcement Learning', 'Hierarchical Reinforcement Learning']","""Multi-agent reinforcement learning is a particularly challenging problem. Current methods have made progress on cooperative and competitive environments with particle-based agents. Little progress has been made on solutions that could operate in the real world with interaction, dynamics, and humanoid robots. In this work, we make a significant step in multi-agent models on simulated humanoid robot navigation by combining Multi-Agent Reinforcement Learning (MARL) with Hierarchical Reinforcement Learning (HRL). We build on top of foundational prior work in learning low-level physical controllers for locomotion and add a layer to learn decentralized policies for multi-agent goal-directed collision avoidance systems. A video of our results on a multi-agent pursuit environment can be seen here ""","""This paper presents an approach combining multi-agent with hierarchical RL in a custom-made simulated humanoid robotics setting. Although it is an interesting premise and has a compelling motivation (multi-agent, real-world interaction, humanoid robotics), the reviewers had some trouble pinpointing what the significant contributions are. Partly this is due to a lack of clarity in the presentation, such as overlong sections (e.g. 5.2), unclear descriptions, mistakes in the text, etc.
Reviewers also remarked that this paper might be trying to do too much, without performing the necessary experiments/comparisons and analyses needed to interpret the contributions of each component. This work is definitely promising and has the potential to make a nice contribution, given some additional care (experiments, analyses) and rewriting/polishing. As it is, it's probably a bit premature for publication at ICLR. """ 500,"""Variational Hyper RNN for Sequence Modeling""","['variational autoencoder', 'hypernetwork', 'recurrent neural network', 'time series']","""In this work, we propose a novel probabilistic sequence model that excels at capturing high variability in time series data, both across sequences and within an individual sequence. Our method uses temporal latent variables to capture information about the underlying data pattern and dynamically decodes the latent information into modifications of the weights of the base decoder and recurrent model. The efficacy of the proposed method is demonstrated on a range of synthetic and real-world sequential data that exhibit large-scale variations, regime shifts, and complex dynamics.""","""The paper proposes a neural network architecture that uses a hypernetwork (RNN or feedforward) to generate weights for a network (variational RNN) that models sequential data. An empirical comparison of a large number of configurations on synthetic and real-world data shows the promise of this method. The authors have been very responsive during the discussion period, and generated many new results to address some reviewer concerns. Apart from one reviewer, the others did not engage in further discussion in response to the authors updating their paper. The paper provides a tweak to the hypernetwork idea for modeling sequential data. There are many strong submissions at ICLR this year on RNNs, and the submission in its current state unfortunately does not pass the threshold.""" 501,"""QGAN: Quantize Generative Adversarial Networks to Extreme low-bits""","['generative adversarial networks', 'quantization', 'extreme low bits']","""The intensive computation and memory requirements of generative adversarial networks (GANs) hinder their real-world deployment on edge devices such as smartphones. Despite the success in model reduction of convolutional neural networks (CNNs), neural network quantization methods have not yet been studied on GANs, where quantization faces two main issues: the effectiveness of quantization algorithms and the instability of GAN training. In this paper, we start with an extensive study on applying existing successful CNN quantization methods to quantize GAN models to extremely low bits. Our observations reveal that none of them generates samples of reasonable quality, because of the underrepresentation of quantized weights in the models, and that the generator and discriminator networks show different sensitivities to the quantization precision. Motivated by these observations, we develop a novel quantization method for GANs based on EM algorithms, named QGAN. We also propose a multi-precision algorithm to help find an appropriate quantization precision for GANs given image quality requirements.
Experiments on CIFAR-10 and CelebA show that QGAN can quantize the weights in GANs to even 1-bit or 2-bit representations with quality comparable to the original models.""","""main summary: method for quantizing GANs
discussion:
reviewer 1: well-written paper, but reviewer questions novelty
reviewer 2: well-written, but some details are missing in the paper, as well as comparisons to related work
reviewer 3: well-written and interesting topic; related work section and clarity of results could be improved
recommendation: all reviewers agree the paper could be improved by better comparison to related work and better clarity of presentation. Marking paper as reject.""" 502,"""A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms""","['meta-learning', 'transfer learning', 'structure learning', 'modularity', 'causality']","""We propose to use a meta-learning objective that maximizes the speed of transfer on a modified distribution to learn how to modularize acquired knowledge. In particular, we focus on how to factor a joint distribution into appropriate conditionals, consistent with the causal directions. We explain when this can work, using the assumption that the changes in distributions are localized (e.g., to one of the marginals, due to an intervention on one of the variables). We prove that under this assumption of localized changes in causal mechanisms, the correct causal graph will tend to have only a few of its parameters with non-zero gradient, i.e., that need to be adapted (those of the modified variables). We argue and observe experimentally that this leads to faster adaptation, and use this property to define a meta-learning surrogate score which, in addition to a continuous parametrization of graphs, would favour correct causal graphs. Finally, motivated by the AI agent point of view (e.g., of a robot discovering its environment autonomously), we consider how the same objective can discover the causal variables themselves, as a transformation of observed low-level variables with no causal meaning. Experiments in the two-variable case validate the proposed ideas and theoretical results.""","""This paper proposes to discover causal mechanisms through meta-learning, and suggests an approach for doing so. The reviewers raised concerns about the key hypothesis (that the right causal model implies higher expected online likelihood) not being sufficiently backed up through theory or through experiments on real data. The authors pointed to a recent paper that builds upon this work and tests on a more realistic problem setting. However, the newer paper measures not the online likelihood of adaptation, but just the training error during adaptation, suggesting that the approach in this paper may be worse. Despite the concerns, the reviewers generally agreed that the paper included novel and interesting ideas, and the authors addressed a number of the reviewers' other concerns about the clarity, references, and experiments. Hence, it makes a worthwhile contribution to ICLR.""" 503,"""SGD Learns One-Layer Networks in WGANs""","['Wasserstein GAN', 'global min-max', 'one-layer network']","""Generative adversarial networks (GANs) are a widely used framework for learning generative models. Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a minmax problem to global optimality, but in practice, are successfully trained with stochastic gradient descent-ascent.
In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution in polynomial time and sample complexity.""","""This article studies convergence of WGAN training using SGD and generators of the form pseudo-formula, with results on convergence with polynomial time and sample complexity under the assumption that the target distribution can be expressed by this type of generator. This expands previous work that considered linear generators. An important point of discussion was the choice of the discriminator as a linear or quadratic function. The authors' responses clarified some of the initial criticism, and the scores improved slightly. Following the discussion, the reviewers agreed that the problem being studied is a difficult one and that the paper makes some important contributions. However, they still found the considered settings very restrictive, maintaining that quadratic discriminators would work only for the very simple type of generators and targets under consideration. Although the article makes important advances towards understanding convergence of WGAN training with nonlinear models, the relevance of the contribution could be greatly enhanced by addressing / discussing the plausibility or implications of the analysis in a practical setting, in the best-case scenario by addressing a more practical type of neural network. """ 504,"""Stein Bridging: Enabling Mutual Reinforcement between Explicit and Implicit Generative Models""","['generative models', 'generative adversarial networks', 'energy models']","""Deep generative models are generally categorized into explicit models and implicit models. The former assumes an explicit density form whose normalizing constant is often unknown, while the latter, including generative adversarial networks (GANs), generates samples using a push-forward mapping. In spite of substantial recent advances demonstrating the power of the two classes of generative models in many applications, both of them, when used alone, suffer from respective limitations and drawbacks. To mitigate these issues, we propose Stein Bridging, a novel joint training framework that connects an explicit density estimator and an implicit sample generator with Stein discrepancy. We show that the Stein Bridge induces new regularization schemes for both explicit and implicit models. Convergence analysis and extensive experiments demonstrate that Stein Bridging i) improves the stability and sample quality of GAN training, and ii) helps the density estimator to seek more modes in the data, alleviating the mode-collapse issue. Additionally, we discuss several applications of Stein Bridging and useful tricks in practical implementation used in our experiments.""","""The paper proposes a generative model that jointly trains an implicit generative model and an explicit energy-based model using Stein's method. There are concerns about the technical correctness of the proofs, and the authors are advised to look carefully into the points raised by the reviewers. """ 505,"""Removing input features via a generative model to explain their attributions to classifier's decisions""","['attribution maps', 'generative models', 'inpainting', 'counterfactual', 'explanations', 'interpretability', 'explainability']","""Interpretability methods often measure the contribution of an input feature to an image classifier's decisions by heuristically removing it via e.g.
blurring, adding noise, or graying out, which often produce unrealistic, out-of-distribution samples. Instead, we propose to integrate a generative inpainter into three representative attribution map methods as a mechanism for removing input features. Compared to the original counterparts, our methods (1) generate more plausible counterfactual samples under the true data generating process; (2) are more robust to hyperparameter settings; and (3) localize objects more accurately. Our findings were consistent across both the ImageNet and Places365 datasets and two different pairs of classifiers and inpainters.""","""Perturbation-based methods often produce artefacts that make the perturbed samples less realistic. This paper proposes to correct this through the use of an inpainter. The authors claim that this results in more plausible perturbed samples and produces methods that are more robust to hyperparameter settings. The reviewers found the work intuitive, well-motivated, and well-written, and the experiments comprehensive. However, they also had concerns about minimal novelty and unfair experimental comparisons, as well as inconclusive results. The authors' response has not sufficiently addressed these concerns. Therefore, we recommend rejection.""" 506,"""Why Does Hierarchy (Sometimes) Work So Well in Reinforcement Learning?""","['rl', 'hierarchy', 'reinforcement learning']","""Hierarchical reinforcement learning has demonstrated significant success at solving difficult reinforcement learning (RL) tasks. Previous works have motivated the use of hierarchy by appealing to a number of intuitive benefits, including learning over temporally extended transitions, exploring over temporally extended periods, and training and exploring in a more semantically meaningful action space, among others. However, in fully observed, Markovian settings, it is not immediately clear why hierarchical RL should provide benefits over standard ""shallow"" RL architectures. In this work, we isolate and evaluate the claimed benefits of hierarchical RL on a suite of tasks encompassing locomotion, navigation, and manipulation. Surprisingly, we find that most of the observed benefits of hierarchy can be attributed to improved exploration, as opposed to easier policy learning or imposed hierarchical structures. Given this insight, we present exploration techniques inspired by hierarchy that achieve performance competitive with hierarchical RL while at the same time being much simpler to use and implement. ""","""This paper seeks to analyse the important question of why hierarchical reinforcement learning can be beneficial. The findings show that improved exploration is at the core of the improved performance. Based on these findings, the paper also proposes some simple exploration techniques which are shown to be competitive with hierarchical RL approaches. This is a really interesting paper that could serve to clarify the oft-speculated-about relation between HRL and exploration. While the findings of the paper are intuitive, it was agreed by all reviewers that the claims are too general for the evidence presented. The paper should be extended with a wider range of experiments covering more domains and algorithms, and would also benefit from some theoretical results.
As it stands, this paper should not be accepted.""" 507,"""A Data-Efficient Mutual Information Neural Estimator for Statistical Dependency Testing""","['mutual information', 'fMRI', 'inter-subject correlation', 'mutual information neural estimation', 'meta-learning', 'statistical test of dependency']","""Measuring Mutual Information (MI) between high-dimensional, continuous, random variables from observed samples has wide theoretical and practical applications. Recent works have developed accurate MI estimators through provably low-bias approximations and tight variational lower bounds assuming an abundant supply of samples, but require an unrealistic number of samples to guarantee statistical significance of the estimation. In this work, we focus on improving data efficiency and propose a Data-Efficient MINE Estimator (DEMINE) that can provide a tight lower confidence interval of MI under limited data, by adding cross-validation to the MINE lower bound (Belghazi et al., 2018). Hyperparameter search is employed, and a novel meta-learning approach with task augmentation is developed to increase robustness to hyperparameters, reduce overfitting and improve accuracy. With improved data efficiency, our DEMINE estimator enables statistical testing of dependency at practical dataset sizes. We demonstrate the effectiveness of DEMINE on synthetic benchmarks and a real-world fMRI dataset, with an application to inter-subject correlation analysis.""","""The paper deals with a mutual-information-based dependency test. The reviewers have provided extensive and constructive feedback on the paper. The authors have in turn given a detailed response with some new experiments and plans for improvement. Overall the reviewers are not convinced the paper is ready for publication. """ 508,"""On Mutual Information Maximization for Representation Learning""","['mutual information', 'representation learning', 'unsupervised learning', 'self-supervised learning']","""Many recent methods for unsupervised or self-supervised representation learning train feature extractors by maximizing an estimate of the mutual information (MI) between different views of the data. This comes with several immediate problems: for example, MI is notoriously hard to estimate, and using it as an objective for representation learning may lead to highly entangled representations due to its invariance under arbitrary invertible transformations. Nevertheless, these methods have been repeatedly shown to excel in practice. In this paper we argue, and provide empirical evidence, that the success of these methods cannot be attributed to the properties of MI alone, and that they strongly depend on the inductive bias in both the choice of feature extractor architectures and the parametrization of the employed MI estimators. Finally, we establish a connection to deep metric learning and argue that this interpretation may be a plausible explanation for the success of the recently introduced methods.""","""This paper examines the role of mutual information (MI) estimation in representation learning. Through experiments, the authors show that large MI is not predictive of downstream performance, and that the empirical success of methods like InfoMax may be attributed more to the inductive bias in the choice of discriminator architectures than to accurate MI estimation. The work is well appreciated by the reviewers. It forms a strong contribution and may motivate subsequent works in the field.
""" 509,"""Reformer: The Efficient Transformer""","['attention', 'locality sensitive hashing', 'reversible layers']","""Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O( pseudo-formula ) to O( \log L where pseudo-formula is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.""","""Transformer models have proven to be quite successful when applied to a variety of ML tasks such as NLP. However, the computational and memory requirements can at times be prohibitive, such as when dealing with long sequences. This paper proposes locality-sensitive hashing to reduce the sequence-length complexity, as well as reversible residual layers to reduce storage requirements. Experimental results confirm that the performance of Transformer models can be preserved even with these new efficiencies in place, and hence, this paper will likely have significant impact within the community. Some relatively minor points notwithstanding, all reviewers voted for acceptance which is my recommendation as well. Note that this paper was also vetted by several detailed external commenters. In all cases the authors provided reasonable feedback, and the final revision of the work will surely be even stronger.""" 510,"""The Effect of Residual Architecture on the Per-Layer Gradient of Deep Networks""",[],"""A critical part of the training process of neural networks takes place in the very first gradient steps post initialization. In this work, we study the connection between the network's architecture and initialization parameters, to the statistical properties of the gradient in random fully connected ReLU networks, through the study of the the Jacobian. We compare three types of architectures: vanilla networks, ResNets and DenseNets. The later two, as we show, preserve the variance of the gradient norm through arbitrary depths when initialized properly, which prevents exploding or decaying gradients at deeper layers. In addition, we show that the statistics of the per layer gradient norm is a function of the architecture and the layer's size, but surprisingly not the layer's depth. This depth invariant result is surprising in light of the literature results that state that the norm of the layer's activations grows exponentially with the specific layer's depth. Experimental support is given in order to validate our theoretical results and to reintroduce concatenated ReLU blocks, which, as we show, present better initialization properties than ReLU blocks in the case of fully connected networks.""","""This paper studies the statistics of activation norms and Jacobian norms for randomly-initialized ReLU networks in the presence (and absence) of various types of residual connections. Whereas the variance of the gradient norm grows with depth for vanilla networks, it can be depth-independent for residual networks when using the proper initialization. 
Reviewers were positive about the setup, but also pointed out important shortcomings in the current manuscript, especially related to the lack of significance of the measured gradient norm statistics with regard to generalisation, and to some technical aspects of the derivations. For these reasons, the AC believes this paper will strongly benefit from an extra iteration. """ 511,"""NAS evaluation is frustratingly hard""","['neural architecture search', 'nas', 'benchmark', 'reproducibility', 'harking']","""Neural Architecture Search (NAS) is an exciting new field which promises to be as much of a game-changer as Convolutional Neural Networks were in 2012. Despite many great works leading to substantial improvements on a variety of tasks, comparison between different methods is still very much an open issue. While most algorithms are tested on the same datasets, there is no shared experimental protocol followed by all. As such, and due to the under-use of ablation studies, there is a lack of clarity regarding why certain methods are more effective than others. Our first contribution is a benchmark of 8 NAS methods on 5 datasets. To overcome the hurdle of comparing methods with different search spaces, we propose using a method's relative improvement over the randomly sampled average architecture, which effectively removes advantages arising from expertly engineered search spaces or training protocols. Surprisingly, we find that many NAS techniques struggle to significantly beat the average architecture baseline. We perform further experiments with the commonly used DARTS search space in order to understand the contribution of each component in the NAS pipeline. These experiments highlight that: (i) the use of tricks in the evaluation protocol has a predominant impact on the reported performance of architectures; (ii) the cell-based search space has a very narrow accuracy range, such that the seed has a considerable impact on architecture rankings; (iii) the hand-designed macro-structure (cells) is more important than the searched micro-structure (operations); and (iv) the depth-gap is a real phenomenon, evidenced by the change in rankings between 8- and 20-cell architectures. To conclude, we suggest best practices that we hope will prove useful for the community and help mitigate current NAS pitfalls, e.g. difficulties in reproducibility and comparison of search methods. The code used is available at pseudo-url.""","""Summary: This paper provides comprehensive empirical evidence for some of the systemic issues in the NAS community, for example showing that several published NAS algorithms do not outperform random sampling on previously unseen data and that the training pipeline is more important in the DARTS space than the exact choice of neural architecture. I very much appreciate that code is available for reproducibility.
Reviewer scores and discussion: The reviewers' scores have very high variance: 2/3 reviewers gave clear acceptance scores (8,8), very much liking the paper, whereas one reviewer gave a clear rejection score (1). In the discussion between the reviewers and the AC, despite the positive comments of the other reviewers, AnonReviewer 2 defended his/her position, arguing that the novelty is too low given previous works.
The other reviewers argued against this, emphasizing that it is an important contribution to show empirical evidence for the importance of the training protocol (note that the intended contribution is *not* to introduce these training protocols; they are taken from previous work). Due to the high variance, I read the paper myself in detail. Here are my own two cents:
- It is not new to compare to a single random sample. Sciuto et al. clearly proposed this first; see Figure 1(c) in pseudo-url
- The systematic experiments showing the importance of the training pipeline are very useful, providing proper and much needed empirical evidence for the many existing suggestions that this might be the case. Figure 3 is utterly convincing.
- Throughout, it would be good to put the work into perspective a bit more. E.g., correlations have been studied by many authors before. Also, the paper cites the best practice checklist in the beginning, but does not mention it in the section on best practices (my view is that this paper is in line with that checklist and provides important evidence for several points in it; the checklist also contains other points not being discussed in this paper; it would be good to know whether this paper suggests any new points for the checklist).
Recommendation: Overall, I firmly believe that this paper is an important contribution to the NAS community. It may be viewed by some as ""just"" running some experiments, but the experiments it shows are very informative and will impact the community and help guide it in the right direction. I therefore recommend acceptance (as a poster).""" 512,"""On Weight-Sharing and Bilevel Optimization in Architecture Search""","['neural architecture search', 'weight-sharing', 'bilevel optimization', 'non-convex optimization', 'hyperparameter optimization', 'model selection']","""Weight-sharing, the simultaneous optimization of multiple neural networks using the same parameters, has emerged as a key component of state-of-the-art neural architecture search. However, its success is poorly understood and often found to be surprising. We argue that, rather than just being an optimization trick, the weight-sharing approach is induced by the relaxation of a structured hypothesis space, and introduces new algorithmic and theoretical challenges as well as applications beyond neural architecture search. Algorithmically, we show how the geometry of ERM for weight-sharing requires greater care when designing gradient-based minimization methods and apply tools from non-convex non-Euclidean optimization to give general-purpose algorithms that adapt to the underlying structure. We further analyze the learning-theoretic behavior of the bilevel optimization solved by practical weight-sharing methods. Next, using kernel configuration and NLP feature selection as case studies, we demonstrate how weight-sharing applies to the architecture search generalization of NAS and effectively optimizes the resulting bilevel objective. Finally, we use our optimization analysis to develop a simple exponentiated gradient method for NAS that aligns with the underlying optimization geometry and matches state-of-the-art approaches on CIFAR-10.""","""Since there were only two official reviews submitted, I reviewed the paper to form a third viewpoint.
I agree with reviewer 2 on the following points, which support rejection of the paper: 1) only CIFAR is evaluated, without Penn Treebank; 2) the ""faster convergence"" is not empirically justified by better final accuracy with the same amount of search cost; and 3) the advantage of the proposed ACSA over SBMD is not clearly demonstrated in the paper. The scores of the two official reviews are insufficient for acceptance, and an additional review did not overturn this view.""" 513,"""U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation""","['Image-to-Image Translation', 'Generative Attentional Networks', 'Adaptive Layer-Instance Normalization']","""We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner. The attention module guides our model to focus on more important regions distinguishing between source and target domains based on the attention map obtained by the auxiliary classifier. Unlike previous attention-based methods, which cannot handle geometric changes between domains, our model can translate both images requiring holistic changes and images requiring large shape changes. Moreover, our new AdaLIN (Adaptive Layer-Instance Normalization) function helps our attention-guided model to flexibly control the amount of change in shape and texture by learned parameters depending on datasets. Experimental results show the superiority of the proposed method compared to the existing state-of-the-art models with a fixed network architecture and hyper-parameters. ""","""The paper proposes a new architecture for unsupervised image-to-image translation. Following the revision/discussion, all reviewers agree that the proposed ideas are reasonable, well described, convincingly validated, and of clear though limited novelty. Accept.""" 514,"""Ladder Polynomial Neural Networks""",['polynomial neural networks'],"""The underlying functions of polynomial neural networks are polynomial functions. These networks have been shown by previous analyses to have nice theoretical properties, but they are hard to train when their polynomial orders are high. In this work, we devise a new type of activation and then create the Ladder Polynomial Neural Network (LPNN). This new network can be trained with generic optimization algorithms. With a feedforward structure, it can also be combined with deep learning techniques such as batch normalization and dropout. Furthermore, an LPNN provides good control of its polynomial order because its polynomial order increases by 1 with each of its hidden layers. In our empirical study, deep LPNN models achieve good performance in a series of regression and classification tasks.""","""This paper proposes a new type of polynomial NN called the Ladder Polynomial NN (LPNN), which is easy to train with general optimization algorithms and can be combined with techniques like batch normalization and dropout. Experiments show it works better than FMs on simple classification and regression tasks, but no experiments are done on more complex tasks.
All reviewers agree the paper addresses an interesting question and makes some progress, but the contribution is limited and there are still many ways to improve.""" 515,"""Generating Dialogue Responses From A Semantic Latent Space""","['dialog', 'chatbot', 'open domain conversation', 'CCA']","""Generic responses are a known issue for open-domain dialog generation. Most current approaches model this one-to-many task as a one-to-one task, hence being unable to integrate information from the multiple semantically similar valid responses to a prompt. We propose a novel dialog generation model that learns a semantic latent space, on which representations of semantically related sentences are close to each other. This latent space is learned by maximizing the correlation between the features extracted from prompts and responses. Learning the pair relationship between the prompts and responses as a regression task on the latent space, instead of classification on the vocabulary using the MLE loss, enables our model to view semantically related responses collectively. An additional autoencoder is trained to recover the full sentence from the latent space. Experimental results show that our proposed model eliminates the generic response problem, while achieving comparable or better coherence than the baselines.""","""This paper proposes a response generation approach that aims to tackle the generic response problem. The approach learns a latent semantic space by maximizing the correlation between features extracted from prompts and responses. The reviewers were concerned about the lack of comparison with previous papers tackling the same problem, and did not change their decision (i.e., were not convinced) even after the rebuttal. Hence, I suggest rejecting this paper.""" 516,"""Truth or backpropaganda? An empirical investigation of deep learning theory""","['Deep learning', 'generalization', 'loss landscape', 'robustness']","""We empirically evaluate common assumptions about neural networks that are widely held by practitioners and theorists alike. In this work, we: (1) prove the widespread existence of suboptimal local minima in the loss landscape of neural networks, and we use our theory to find examples; (2) show that small-norm parameters are not optimal for generalization; (3) demonstrate that ResNets do not conform to wide-network theories, such as the neural tangent kernel, and that the interaction between skip connections and batch normalization plays a role; (4) find that rank does not correlate with generalization or robustness in a practical setting.""","""The authors take a closer look at widely held beliefs about neural networks. Using a mix of analysis and experiment, they shed some light on the ways these assumptions break down. The paper contributes to our understanding of various phenomena and their connection to generalization, and should be a useful paper for theoreticians searching for predictive theories.""" 517,"""Generative Adversarial Networks For Data Scarcity Industrial Positron Images With Attention""",[],"""In industrial settings, positron annihilation is not affected by complex environments, and gamma-ray photons have strong penetration, so nondestructive detection of industrial parts can be realized. Due to the poor image quality caused by gamma-ray photon scattering, attenuation, and short sampling times in the positron process, we propose combining deep learning with adversarial nets to generate positron images of good quality and clear detail.
The structure of the paper is as follows: firstly, we encode medical CT images into hidden vectors based on transfer learning, and use PCA to extract positron image features. Secondly, we construct a positron image memory based on an attention mechanism as the input to the adversarial nets, which use the medical hidden variables as a query. Finally, we train the whole model jointly and update the input parameters until convergence. Experiments have demonstrated the possibility of generating rare positron images for industrial non-destructive testing using adversarial networks, and good imaging results have been achieved.""","""The paper studies Positron Emission Tomography (PET) in medical imaging. The paper focuses on the challenges created by gamma-ray photon scattering, which results in poor image quality. To tackle this problem and enhance the image quality, the paper suggests using generative adversarial networks. Unfortunately, due to poor writing and severe language issues, none of the three reviewers were able to properly assess the paper [see the reviews for multiple examples of this]. In addition, in places, some important implementation details were missing. The authors chose not to respond to the reviewers' concerns. In its current form, the submission cannot be well understood by people interested in reading the paper, so it needs to be improved and resubmitted. """ 518,"""MGP-AttTCN: An Interpretable Machine Learning Model for the Prediction of Sepsis""","['time series analysis', 'interpretability', 'Gaussian Processes', 'attention neural networks']","""With a death toll of 5.4 million lives worldwide every year and a healthcare cost of more than 16 billion dollars in the USA alone, sepsis is one of the leading causes of hospital mortality and an increasing concern in the ageing western world. Recently, medical and technological advances have helped re-define the illness criteria of this disease, which is otherwise poorly understood by the medical community. Together with the rise of widely accessible Electronic Health Records, the advances in data mining and complex nonlinear algorithms are a promising avenue for the early detection of sepsis. This work contributes to the research effort in the field of automated sepsis detection with an open-access labelling of the medical MIMIC-III data set. Moreover, we propose MGP-AttTCN: a joint multitask Gaussian Process and attention-based deep learning model to predict the occurrence of sepsis early and in an interpretable manner. We show that our model outperforms the current state of the art and present evidence that different labelling heuristics lead to discrepancies in task difficulty.""","""The problem of introducing interpretability into sepsis prediction frameworks is an important one, and I personally like the ideas presented in this paper. However, two reviewers, who have experience at the boundary of ML and HC, flag this paper as currently neither focusing enough on the technical novelty nor explaining the HC application well enough to be appreciated by the ICLR audience. As such, my recommendation is to edit the exposition so that it is more appropriate for a general ML audience, or to submit it to an ML for HC meeting. Great work, and I hope it finds the right audience/focus soon.
""" 519,"""Filter redistribution templates for iteration-lessconvolutional model reduction""","['Model reduction', 'Pruning', 'filter distribution']","""Automatic neural network discovery methods face an enormous challenge caused for the size of the search space. A common practice is to split this space at different levels and to explore only a part of it. Neural architecture search methods look for how to combine a subset of layers, which are the most promising, to create an architecture while keeping a predefined number of filters in each layer. On the other hand, pruning techniques take a well known architecture and look for the appropriate number of filters per layer. In both cases the exploration is made iteratively, training models several times during the search. Inspired by the advantages of the two previous approaches, we proposed a fast option to find models with improved characteristics. We apply a small set of templates, which are considered promising, for make a redistribution of the number of filters in an already existing neural network. When compared to the initial base models, we found that the resulting architectures, trained from scratch, surpass the original accuracy even after been reduced to fit the same amount of resources.""","""This paper examines how different distributions of the layer-wise number of CNN filters, as partitioned into a set of fixed templates, impacts the performance of various baseline deep architectures. Testing is conducting from the viewpoint of balancing accuracy with various resource metrics such as number of parameters, memory footprint, etc. In the end, reviewer scores were partitioned as two accepts and two rejects. However, the actual comments indicate that both nominal accept reviewers expressed borderline opinions regarding this work (e.g., one preferred a score of 4 or 5 if available, while the other explicitly stated that the paper was borderline acceptance-worthy). Consequently in aggregate there was no strong support for acceptance and non-dismissable sentiment towards rejection. For example, consistent with reviewer comments, a primary concern with this paper is that the novelty and technical contribution is rather limited, and hence, to warrant acceptance the empirical component should be especially compelling. However, all the experiments are limited to cifar10/cifar100 data, with the exception of a couple extra tests on tiny ImageNet added after the rebuttal. But these latter experiments are not so convincing since the base architecture has the best accuracy on VGG, and only on a single MobileNet test do we actually see clear-cut improvement. Moreover, these new results appear to be based on just a single trial per data set (this important detail is unclear), and judging from Figure 2 of the revision, MobileNet results on cifar data can have very high variance blurring the distinction between methods. It is therefore hard to draw firm conclusions at this point, and these two additional tiny ImageNet tests notwithstanding, we don't really know how to differentiate phenomena that are intrinsic to cifar data from other potentially relevant factors. Overall then, my view is that far more testing with different data types is warranted to strengthen the conclusions of this paper and compensate for the modest technical contribution. 
Note also that training with all of these different filter templates is likely no less computationally expensive than some state-of-the-art pruning or related compression methods, and therefore it would be worth comparing head-to-head with such approaches. This is especially true given that in many scenarios, test-time computational resources are more critical than marginal differences in training time, etc.""" 520,"""Scalable and Order-robust Continual Learning with Additive Parameter Decomposition""","['Continual Learning', 'Lifelong Learning', 'Catastrophic Forgetting', 'Deep Learning']","""While recent continual learning methods largely alleviate the catastrophic forgetting problem on toy-sized datasets, there are issues that remain to be tackled in order to apply them to real-world problem domains. First, a continual learning model should effectively handle catastrophic forgetting and be efficient to train even with a large number of tasks. Second, it needs to tackle the problem of order-sensitivity, where the performance of the tasks varies largely based on the order of the task arrival sequence, as this may cause serious problems in settings where fairness plays a critical role (e.g. medical diagnosis). To tackle these practical challenges, we propose a novel continual learning method that is scalable as well as order-robust, which, instead of learning a completely shared set of weights, represents the parameters for each task as a sum of task-shared and sparse task-adaptive parameters. With our Additive Parameter Decomposition (APD), the task-adaptive parameters for earlier tasks remain mostly unaffected, where we update them only to reflect the changes made to the task-shared parameters. This decomposition of parameters effectively prevents catastrophic forgetting and order-sensitivity, while being computation- and memory-efficient. Further, we can achieve even better scalability with APD using hierarchical knowledge consolidation, which clusters the task-adaptive parameters to obtain hierarchically shared parameters. We validate our network with APD, APD-Net, on multiple benchmark datasets against state-of-the-art continual learning methods, which it largely outperforms in accuracy, scalability, and order-robustness.""","""The submission addresses the problem of continual learning with large numbers of tasks and variable task ordering and proposes a parameter decomposition approach such that part of the parameters are task-adaptive and some are task-shared. The validation is on Omniglot and other benchmarks. The reviews were mixed on this paper, but most reviewers were favorably impressed with the problem setup, the scalability of the method, and the results. The baselines were limited but acceptable. The recommendation is to accept this paper, but the authors are advised to address all the points in the reviews in their final revision.""" 521,"""A Generative Model for Molecular Distance Geometry""","['graph neural networks', 'variational autoencoders', 'distance geometry', 'molecular conformation']","""Computing equilibrium states for many-body systems, such as molecules, is a long-standing challenge. In the absence of methods for generating statistically independent samples, great computational effort is invested in simulating these systems using, for example, Markov chain Monte Carlo. We present a probabilistic model that generates such samples for molecules from their graph representations.
Our model learns a low-dimensional manifold that preserves the geometry of local atomic neighborhoods through a principled learning representation that is based on Euclidean distance geometry. We create a new dataset for molecular conformation generation, with which we show experimentally that our generative model achieves state-of-the-art accuracy. Finally, we show how to use our model as a proposal distribution in an importance sampling scheme to compute molecular properties.""","""The paper presents a solution for generating molecules with three-dimensional structure by learning a low-dimensional manifold that preserves the geometry of local atomic neighborhoods based on Euclidean distance geometry. The application is interesting and the proposed solution is reasonable. The authors did a good job of addressing most concerns raised in the reviews and updating the draft. Two main concerns were left unresolved: one is the lack of novelty in the proposed model, and the other is that some arguments in the paper are not fully supported. The paper could benefit from one more round of revision before being ready for publication. """ 522,"""Imagining the Latent Space of a Variational Auto-Encoder""","['VAE', 'GAN']",""" Variational Auto-Encoders (VAEs) are designed to capture compressible information about a dataset. As a consequence, the information stored in the latent space is seldom sufficient to reconstruct a particular image. To help understand the type of information stored in the latent space, we train a GAN-style decoder constrained to produce images that the VAE encoder will map to the same region of latent space. This allows us to ''imagine'' the information captured in the latent space. We argue that this is necessary to make a VAE into a truly generative model. We use our GAN to visualise the latent space of a standard VAE and of a pseudo-formula-VAE.""","""The paper proposes a new method for improving the generative properties of the VAE model. The reviewers unanimously agree that this paper is not ready to be published, being particularly concerned about the unclear objective and potentially misleading claims of the paper. Multiple reviewers pointed out incorrect claims and statements lacking theoretical or empirical justification. The reviewers also mention that the paper does not provide new insights about the VAE model, as the MDL interpretation of VAEs is not new.""" 523,"""Influence-Based Multi-Agent Exploration""","['Multi-agent reinforcement learning', 'Exploration']","""Intrinsically motivated reinforcement learning aims to address the exploration challenge for sparse-reward tasks. However, the study of exploration methods in transition-dependent multi-agent settings is largely absent from the literature. We aim to take a step towards solving this problem. We present two exploration methods, exploration via information-theoretic influence (EITI) and exploration via decision-theoretic influence (EDTI), which exploit the role of interaction in the coordinated behaviors of agents. EITI uses mutual information to capture the interdependence between the transition dynamics of agents. EDTI uses a novel intrinsic reward, called Value of Interaction (VoI), to characterize and quantify the influence of one agent's behavior on the expected returns of other agents. By optimizing the EITI or EDTI objective as a regularizer, agents are encouraged to coordinate their exploration and learn policies to optimize team performance.
We show how to optimize these regularizers so that they can be easily integrated with policy gradient reinforcement learning. The resulting update rule draws a connection between coordinated exploration and intrinsic reward distribution. Finally, we empirically demonstrate the significant strength of our methods in a variety of multi-agent scenarios.""","""The paper presents a new take on exploration in multi-agent reinforcement learning settings, and presents two approaches, one motivated by information-theoretic, the other by decision-theoretic influence on other agents. Reviewers consider the proposed approaches ""pretty elegant, and in a sense seem fundamental"", the experimental section ""thorough"", and expect the work to ""encourage future work to explore more problems in this area"". Several questions were raised, especially regarding related work, comparison to single-agent exploration approaches, and several clarifying questions. These were largely addressed by the authors, resulting in a strong submission with valuable contributions.""" 524,"""Tensorized Embedding Layers for Efficient Model Compression""","['Embedding layers compression', 'tensor networks', 'low-rank factorization']","""The embedding layers transforming input words into real vectors are the key components of deep neural networks used in natural language processing. However, when the vocabulary is large, the corresponding weight matrices can be enormous, which precludes their deployment in a limited-resource setting. We introduce a novel way of parametrizing embedding layers based on the Tensor Train (TT) decomposition, which allows compressing the model significantly at the cost of a negligible drop or even a slight gain in performance. We evaluate our method on a wide range of benchmarks in natural language processing and analyze the trade-off between performance and compression ratios for a wide range of architectures, from MLPs to LSTMs and Transformers.""","""This paper has been reviewed by three reviewers and received scores: 6/3/8. While two reviewers were reasonably positive, they also did not provide very compelling reviews (e.g. one reviewer just reiterated the rationale behind tensor model compression and the other admitted the paper is of limited novelty). Perhaps the shortest review (and perhaps the most telling) points the authors to the fact that model compression with tensor decompositions is quite common in the literature these days. One example could be T-Net: Parametrizing Fully Convolutional Nets with a Single High-Order Tensor by Kossaifi et al. Very likely the authors will find many more recent developments on model compression with/without tensor decompositions. For a good paper on this topic, the authors should carefully consider various tensor factorizations (Tucker, TT, tensor rings, t-product and many more) and consider theoretical contributions and guarantees. Taking into account all pros and cons, this submission falls marginally short of the ICLR 2020 threshold, but the authors are encouraged to work on further developments.""" 525,"""SGD with Hardness Weighted Sampling for Distributionally Robust Deep Learning""","['distributionally robust optimization', 'distributionally robust deep learning', 'over-parameterized deep neural networks', 'deep neural networks', 'AI safety', 'hard example mining']","""Distributionally Robust Optimization (DRO) has been proposed as an alternative to Empirical Risk Minimization (ERM) in order to account for potential biases in the training data distribution.
However, its use in deep learning has been severely restricted due to the relative inefficiency of the optimizers available for DRO compared to the widespread Stochastic Gradient Descent (SGD)-based optimizers for deep learning with ERM. In this work, we demonstrate that SGD with hardness weighted sampling is a principled and efficient optimization method for DRO in machine learning and is particularly suited to the context of deep learning. Similar to a hard example mining strategy in essence and in practice, the proposed algorithm is straightforward to implement and computationally as efficient as SGD-based optimizers used for deep learning. It only requires adding a softmax layer and maintaining a history of the loss values for each training example to compute adaptive sampling probabilities. In contrast to typical ad hoc hard mining approaches, and exploiting recent theoretical results in deep learning optimization, we prove the convergence of our DRO algorithm for over-parameterized deep learning networks with ReLU activation and a finite number of layers and parameters. Preliminary results demonstrate the feasibility and usefulness of our approach.""","""This paper proposes a modification of SGD to perform distributionally robust optimization of deep networks. The main idea is sensible enough; however, the inadequate handling of baselines and the relatively toy nature of the experiments mean that this paper needs more work to be accepted.""" 526,"""To Relieve Your Headache of Training an MRF, Take AdVIL""","['Markov Random Fields', 'Undirected Graphical Models', 'Variational Inference', 'Black-box Inference']","""We propose a black-box algorithm called {\it Adversarial Variational Inference and Learning} (AdVIL) to perform inference and learning on a general Markov random field (MRF). AdVIL employs two variational distributions to approximately infer the latent variables and estimate the partition function of an MRF, respectively. The two variational distributions provide an estimate of the negative log-likelihood of the MRF as a minimax optimization problem, which is solved by stochastic gradient descent. AdVIL is proven convergent under certain conditions. On the one hand, compared with contrastive divergence, AdVIL requires a minimal assumption about the model structure and can deal with a broader family of MRFs. On the other hand, compared with existing black-box methods, AdVIL provides a tighter estimate of the log partition function and achieves much better empirical results.""","""The paper proposes a black-box algorithm for MRF training, utilizing a novel approach based on variational approximations of both the positive and negative phase terms of the log-likelihood gradient (as R2 puts it, ""a fairly creative combination of existing approaches""). Several technical and rhetorical points were raised by the reviewers, most of which seem to have been satisfactorily addressed, but all reviewers agreed that this was a good direction. The main weakness of the work is that the empirical work is very small scale, mainly due to the bottleneck imposed by an inner-loop optimization of the variational distribution q(v, h). I believe it's important to note that most truly large-scale results in the literature revolve around purely feedforward models that don't require expensive-to-compute approximations; that said, MNIST experiments would have been nice.
Nevertheless, this work seems like a promising step on a difficult problem, and it seems that the ideas herein are worth disseminating, hopefully stimulating future work on rendering this procedure less expensive and more scalable.""" 527,"""Lazy-CFR: fast and near-optimal regret minimization for extensive games with imperfect information""",[],"""Counterfactual regret minimization (CFR) methods are effective for solving two-player zero-sum extensive games with imperfect information, with state-of-the-art results. However, the vanilla CFR has to traverse the whole game tree in each round, which is time-consuming in large-scale games. In this paper, we present Lazy-CFR, a CFR algorithm that adopts a lazy update strategy to avoid traversing the whole game tree in each round. We prove that the regret of Lazy-CFR is almost the same as the regret of the vanilla CFR and that Lazy-CFR only needs to visit a small portion of the game tree. Thus, Lazy-CFR is provably faster than CFR. Empirical results consistently show that Lazy-CFR is significantly faster than the vanilla CFR.""","""The paper proposes a regret-based approach to speed up counterfactual regret minimization. The reviewers find the proposed approach interesting. However, the method requires a large amount of memory. Additional experiments, including the comparisons pointed out by reviewers and in the public comments, would help improve the paper. """ 528,"""Estimating Gradients for Discrete Random Variables by Sampling without Replacement""","['gradient', 'estimator', 'discrete', 'categorical', 'sampling', 'without replacement', 'reinforce', 'baseline', 'variance', 'gumbel', 'vae', 'structured prediction']","""We derive an unbiased estimator for expectations over discrete random variables based on sampling without replacement, which reduces variance as it avoids duplicate samples. We show that our estimator can be derived as the Rao-Blackwellization of three different estimators. Combining our estimator with REINFORCE, we obtain a policy gradient estimator and we reduce its variance using a built-in control variate which is obtained without additional model evaluations. The resulting estimator is closely related to other gradient estimators. Experiments with a toy problem, a categorical Variational Auto-Encoder and a structured prediction problem show that our estimator is the only estimator that is consistently among the best estimators in both high and low entropy settings.""","""The authors derive a novel, unbiased gradient estimator for discrete random variables based on sampling without replacement. They relate their estimator to existing multi-sample estimators and motivate why we would expect reduced variance. Finally, they evaluate their estimator across several tasks and show that it performs well in all of them. The reviewers agree that the revised paper is well-written and well-executed. There was some concern about the effectiveness of the estimator; however, the authors clarified that ""it is the only estimator that performs well across different settings (high and low entropy). Therefore it is more robust and a strict improvement to any of these estimators which only have good performance in either high or low entropy settings."" Reviewer 2 was still not convinced about the strength of the analysis of the estimator, and indeed, quantifying the variance reduction theoretically would be an improvement. Overall, the paper is a nice addition to the set of tools for computing gradients of expectations of discrete random variables. I recommend acceptance.
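Since the estimator in paper 528 above is built on drawing k distinct categories, it may help to see the standard sampling primitive involved: the Gumbel-top-k trick draws a sample without replacement by perturbing log-probabilities with Gumbel noise and keeping the k largest. A minimal sketch of that primitive only; the paper's full Rao-Blackwellized estimator and control variate are more involved:

```python
import numpy as np

def gumbel_top_k(log_probs, k, rng):
    """Draw k distinct category indices without replacement (Gumbel-top-k)."""
    gumbels = rng.gumbel(size=log_probs.shape)    # G_i ~ Gumbel(0, 1)
    return np.argsort(log_probs + gumbels)[::-1][:k]

rng = np.random.default_rng(0)
log_probs = np.log(np.array([0.5, 0.25, 0.15, 0.07, 0.03]))
sample = gumbel_top_k(log_probs, k=3, rng=rng)    # three distinct indices
```

With k = 1 this reduces to the familiar Gumbel-max trick for sampling a single category.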
""" 529,"""GQ-Net: Training Quantization-Friendly Deep Networks""","['Network quantization', 'Efficient deep learning']","""Network quantization is a model compression and acceleration technique that has become essential to neural network deployment. Most quantization methods per- form fine-tuning on a pretrained network, but this sometimes results in a large loss in accuracy compared to the original network. We introduce a new technique to train quantization-friendly networks, which can be directly converted to an accurate quantized network without the need for additional fine-tuning. Our technique allows quantizing the weights and activations of all network layers down to 4 bits, achieving high efficiency and facilitating deployment in practical settings. Com- pared to other fully quantized networks operating at 4 bits, we show substantial improvements in accuracy, for example 66.68% top-1 accuracy on ImageNet using ResNet-18, compared to the previous state-of-the-art accuracy of 61.52% Louizos et al. (2019) and a full precision reference accuracy of 69.76%. We performed a thorough set of experiments to test the efficacy of our method and also conducted ablation studies on different aspects of the method and techniques to improve training stability and accuracy. Our codebase and trained models are available on GitHub.""","""The paper propose a new quantization-friendly network training algorithm called GQ (or DQ) net. The paper is well-written, and the proposed idea is interesting. Empirical results are also good. However, the major performance improvement comes from the combination of different incremental improvements. Some of these additional steps do seem orthogonal to the proposed idea. Also, it is not clear how robust the method is to the various hyperparameters / schedules. For example, it seems that some of the suggested training options are conflicting each other. More in-depth discussions and analysis on the setting of the regularization parameter and schedule for the loss term blending parameters will be useful.""" 530,"""Gradient-Based Neural DAG Learning""","['Structure Learning', 'Causality', 'Density estimation']","""We propose a novel score-based approach to learning a directed acyclic graph (DAG) from observational data. We adapt a recently proposed continuous constrained optimization formulation to allow for nonlinear relationships between variables using neural networks. This extension allows to model complex interactions while avoiding the combinatorial nature of the problem. In addition to comparing our method to existing continuous optimization methods, we provide missing empirical comparisons to nonlinear greedy search methods. On both synthetic and real-world data sets, this new method outperforms current continuous methods on most tasks while being competitive with existing greedy search methods on important metrics for causal inference.""","""In this paper, the authors propose a novel approach for learning the structure of a directed acyclic graph from observational data that allows to flexibly model nonlinear relationships between variables using neural networks. While the reviewers initially had concerns with respect to the positioning of the paper and various questions regarding theoretical results and experiments, these concerns have been addressed satisfactorily during the discussion period. The paper is now acceptable for publication in ICLR-2020. 
""" 531,"""Differentiable learning of numerical rules in knowledge graphs""","['knowledge graphs', 'rule learning', 'differentiable neural logic']","""Rules over a knowledge graph (KG) capture interpretable patterns in data and can be used for KG cleaning and completion. Inspired by the TensorLog differentiable logic framework, which compiles rule inference into a sequence of differentiable operations, recently a method called Neural LP has been proposed for learning the parameters as well as the structure of rules. However, it is limited with respect to the treatment of numerical features like age, weight or scientific measurements. We address this limitation by extending Neural LP to learn rules with numerical values, e.g., People younger than 18 typically live with their parents. We demonstrate how dynamic programming and cumulative sum operations can be exploited to ensure efficiency of such extension. Our novel approach allows us to extract more expressive rules with aggregates, which are of higher quality and yield more accurate predictions compared to rules learned by the state-of-the-art methods, as shown by our experiments on synthetic and real-world datasets.""","""This paper presents a number of improvements on existing approaches to neural logic programming. The reviews are generally positive: two weak accepts, one weak reject. Reviewer 2 seems wholly in favour of acceptance at the end of discussion, and did not clarify why they were sticking to their score of weak accept. The main reason Reviewer 1 sticks to 6 rather than 8 is that the work extends existing work rather than offering a ""fundamental contribution"", but otherwise is very positive. I personally feel that a) most work extends existing work b) there is room in our conferences for such well executed extensions (standing on the shoulders of giants etc). Reviewer 3 is somewhat unconvinced by the nature of the evaluation. While I understand their reservations, they state that they would not be offended by the paper being accepted in spite of their reservations. Overall, I find that the review group leans more in favour of acceptance, and an happy to recommend acceptance for the paper as it makes progress in an interesting area at the intersection of differentiable programming and logic-based programming.""" 532,"""Learning DNA folding patterns with Recurrent Neural Networks ""","['Machine Learning', 'Recurrent Neural Networks', '3D chromatin structure', 'topologically associating domains', 'computational biology.']",""" The recent expansion of machine learning applications to molecular biology proved to have a significant contribution to our understanding of biological systems, and genome functioning in particular. Technological advances enabled the collection of large epigenetic datasets, including information about various DNA binding factors (ChIP-Seq) and DNA spatial structure (Hi-C). Several studies have confirmed the correlation between DNA binding factors and Topologically Associating Domains (TADs) in DNA structure. However, the information about physical proximity represented by genomic coordinate was not yet used for the improvement of the prediction models. In this research, we focus on Machine Learning methods for prediction of folding patterns of DNA in a classical model organism Drosophila melanogaster. The paper considers linear models with four types of regularization, Gradient Boosting and Recurrent Neural Networks for the prediction of chromatin folding patterns from epigenetic marks. 
The bidirectional LSTM RNN model outperformed all other models and achieved the best prediction scores. This demonstrates the utility of complex models and the importance of the memory of sequential DNA states for chromatin folding. We identify informative epigenetic features, which leads to further conclusions about their biological significance.""","""The authors consider the problem of predicting DNA folding patterns. They use a range of simple, linear models and find that a bi-LSTM architecture yields the best performance. This paper is below the acceptance threshold. Reviewers pointed out strong similarity to previously published work. Furthermore, the manuscript lacked clarity, leaving uncertain, e.g., details of the experimental setup. """ 533,"""VIMPNN: A physics informed neural network for estimating potential energies of out-of-equilibrium systems""","['neural network', 'chemical energy estimation', 'density functional theory']","""Simulation of molecular and crystal systems enables insight into interesting chemical properties that benefit processes ranging from drug discovery to material synthesis. However, these simulations can be computationally expensive and time consuming despite the approximations through Density Functional Theory (DFT). We propose the Valence Interaction Message Passing Neural Network (VIMPNN) to approximate DFT's ground-state energy calculations. VIMPNN integrates physics prior knowledge such as the existence of different interatomic bonds to estimate more accurate energies. Furthermore, while many previous machine learning methods consider only stable systems, our proposed method is demonstrated on unstable systems at different atomic distances. VIMPNN predictions can be used to determine the stable configurations of systems, i.e. the stable distance for atoms -- a necessary step for the future simulation of crystal growth for example. Our method is extensively evaluated on an augmented version of the QM9 dataset that includes unstable molecules, as well as a new dataset of infinite- and finite-size crystals, and is compared with the Message Passing Neural Network (MPNN). VIMPNN has comparable accuracy with DFT, while allowing for 5 orders of magnitude in computational speed-up compared to DFT simulations, and produces more accurate and informative potential energy curves than MPNN for estimating stable configurations.""","""The paper considers the problem of estimating the electronic structure's ground state energy of a given atomic system by means of supervised machine learning, as a fast alternative to conventional explicit methods (DFT). For this purpose, it modifies the neural message-passing architecture to account for further physical properties, and it extends the empirical validation to also include unstable molecules. Reviewers acknowledged the valuable experimental setup of this work and the significance of the results in the application domain, but were generally skeptical about the novelty of the machine learning model under study. Ultimately, and given that the main focus of this conference is on Machine Learning methodology, this AC believes this work would be more suitable for a more specialized venue in computational/quantum chemistry.
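To make the bi-LSTM setup in the DNA-folding abstract above concrete, here is an illustrative PyTorch sketch of a bidirectional LSTM that maps per-bin epigenetic marks to a per-bin folding score. All shapes, feature counts, and the class name are hypothetical choices for illustration, not details from the paper:

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Illustrative bi-LSTM: per-bin chromatin-folding score from epigenetic marks."""
    def __init__(self, n_marks=20, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_marks, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # 2*hidden: forward + backward states

    def forward(self, x):                      # x: (batch, genomic_bins, n_marks)
        out, _ = self.lstm(x)                  # (batch, bins, 2*hidden)
        return self.head(out).squeeze(-1)      # (batch, bins): score per bin

model = BiLSTMTagger()
scores = model(torch.randn(8, 100, 20))        # 8 loci, 100 bins, 20 marks each
```

The bidirectionality is what lets a prediction at one genomic bin depend on marks both upstream and downstream, which matches the paper's emphasis on memory of sequential DNA states.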
""" 534,"""Deep Reasoning Networks: Thinking Fast and Slow, for Pattern De-mixing""","['Deep Reasoning Network', 'Pattern De-mixing']","""We introduce Deep Reasoning Networks (DRNets), an end-to-end framework that combines deep learning with reasoning for solving pattern de-mixing problems, typically in an unsupervised or weakly-supervised setting. DRNets exploit problem structure and prior knowledge by tightly combining logic and constraint reasoning with stochastic-gradient-based neural network optimization. We illustrate the power of DRNets on de-mixing overlapping hand-written Sudokus (Multi-MNIST-Sudoku) and on a substantially more complex task in scientific discovery that concerns inferring crystal structures of materials from X-ray diffraction data (Crystal-Structure-Phase-Mapping). DRNets significantly outperform the state of the art and experts' capabilities on Crystal-Structure-Phase-Mapping, recovering more precise and physically meaningful crystal structures. On Multi-MNIST-Sudoku, DRNets perfectly recovered the mixed Sudokus' digits, with 100% digit accuracy, outperforming the supervised state-of-the-art MNIST de-mixing models.""","""The paper received mixed reviews of WR (R1), WR (R2) and WA (R3). AC has carefully read all the reviews/rebuttal/comments and examined the paper. AC agrees with R1 and R2's concerns, specifically around overclaiming around reasoning. Also AC was unnerved, as was R2 and R3, by the notion of continuing to train on the test set (and found the rebuttal unconvincing on this point). Overall, the AC feels this paper cannot be accepted. The authors should remove the unsupported/overly bold claims in their paper and incorporate the constructive suggestions from the reviewers in a revised version of the paper.""" 535,"""Neural Video Encoding""","['Kolmogorov complexity', 'differentiable programming', 'convolutional neural networks']","""Deep neural networks have had unprecedented success in computer vision, natural language processing, and speech largely due to the ability to search for suitable task algorithms via differentiable programming. In this paper, we borrow ideas from Kolmogorov complexity theory and normalizing flows to explore the possibilities of finding arbitrary algorithms that represent data. In particular, algorithms which encode sequences of video image frames. Ultimately, we demonstrate neural video encoded using convolutional neural networks to transform autoregressive noise processes and show that this method has surprising cryptographic analogs for information security.""","""The paper has several clarity and novelty issues.""" 536,"""Wasserstein-Bounded Generative Adversarial Networks""","['GAN', 'WGAN', 'GENERATIVE ADVERSARIAL NETWORKS']","""In the field of Generative Adversarial Networks (GANs), how to design a stable training strategy remains an open problem. Wasserstein GANs have largely promoted the stability over the original GANs by introducing Wasserstein distance, but still remain unstable and are prone to a variety of failure modes. In this paper, we present a general framework named Wasserstein-Bounded GAN (WBGAN), which improves a large family of WGAN-based approaches by simply adding an upper-bound constraint to the Wasserstein term. Furthermore, we show that WBGAN can reasonably measure the difference of distributions which almost have no intersection. 
Experiments demonstrate that WBGAN can stabilize as well as accelerate convergence in the training processes of a series of WGAN-based variants.""","""The paper presents a framework named Wasserstein-bounded GANs which generalizes WGAN. The paper shows that WBGAN can improve stability. The reviewers raised several questions about the method and the experiments, but these were not addressed. I encourage the authors to revise the draft and resubmit to a different venue.""" 537,"""Generative Cleaning Networks with Quantized Nonlinear Transform for Deep Neural Network Defense""","['Adversarial Defense', 'Adversarial Attack']","""Effective defense of deep neural networks against adversarial attacks remains a challenging problem, especially under white-box attacks. In this paper, we develop a new generative cleaning network with quantized nonlinear transform for effective defense of deep neural networks. The generative cleaning network, equipped with a trainable quantized nonlinear transform block, is able to destroy the sophisticated noise pattern of adversarial attacks and recover the original image content. The generative cleaning network and attack detector network are jointly trained using adversarial learning to minimize both perceptual loss and adversarial loss. Our extensive experimental results demonstrate that our approach outperforms the state-of-the-art methods by large margins in both white-box and black-box attacks. For example, it improves the classification accuracy for white-box attacks upon the second best method by more than 40\% on the SVHN dataset and more than 20\% on the challenging CIFAR-10 dataset. ""","""This paper presents a method to defend neural networks from adversarial attack. The proposed generative cleaning network has a trainable quantization module which is claimed to be able to eliminate adversarial noise and recover the original image. After intensive interaction with the authors and discussion, one expert reviewer (R3) admitted that the experimental procedure basically makes sense and increased the score to Weak Reject. Yet, R3 is still not satisfied with some details such as the number of BPDA iterations and, more importantly, concludes that the meaningful numbers reported in the paper show only small gains, making the claim of the paper less convincing. As the authors seem less interested in providing theoretical analysis and support, this issue is critical for the decision, and there was no objection from the other reviewers. After carefully reading the paper myself, I decided to support this opinion and therefore would like to recommend rejection. """ 538,"""Convolutional Conditional Neural Processes""","['Neural Processes', 'Deep Sets', 'Translation Equivariance']","""We introduce the Convolutional Conditional Neural Process (ConvCNP), a new member of the Neural Process family that models translation equivariance in the data. Translation equivariance is an important inductive bias for many learning problems including time series modelling, spatial data, and images. The model embeds data sets into an infinite-dimensional function space, as opposed to finite-dimensional vector spaces. To formalize this notion, we extend the theory of neural representations of sets to include functional representations, and demonstrate that any translation-equivariant embedding can be represented using a convolutional deep-set. We evaluate ConvCNPs in several settings, demonstrating that they achieve state-of-the-art performance compared to existing NPs.
We demonstrate that building in translation equivariance enables zero-shot generalization to challenging, out-of-domain tasks.""","""This paper presents the Convolutional Conditional Neural Process (ConvCNP), a new member of the neural process family that models translation equivariance. Current models must learn translation equivariance from the data, and the authors show that ConvCNP can learn this as part of the model, which is much more generalisable and efficient. They evaluate the ConvCNP on several benchmarks, including an astronomical time-series modelling experiment, a sim2real experiment, and several image completion experiments, and show excellent results. The authors wrote extensive responses to the reviewers, uploading a revised version of the paper, and there was some further discussion. This is a strong paper worthy of inclusion in ICLR and could have a large impact on many fields in ML/AI. """ 539,"""Neural tangent kernels, transportation mappings, and universal approximation""","['Neural Tangent Kernel', 'universal approximation', 'Barron', 'transport mapping']","""This paper establishes rates of universal approximation for the shallow neural tangent kernel (NTK): network weights are only allowed microscopic changes from random initialization, which entails that activations are mostly unchanged, and the network is nearly equivalent to its linearization. Concretely, the paper has two main contributions: a generic scheme to approximate functions with the NTK by sampling from transport mappings between the initial weights and their desired values, and the construction of transport mappings via Fourier transforms. Regarding the first contribution, the proof scheme provides another perspective on how the NTK regime arises from rescaling: redundancy in the weights due to resampling allows individual weights to be scaled down. Regarding the second contribution, the most notable transport mapping asserts that roughly $1/\delta^{10d}$ nodes are sufficient to approximate continuous functions, where pseudo-formula depends on the continuity properties of the target function. By contrast, nearly the same proof yields a bound of $1/\delta^{2d}$ for shallow ReLU networks; this gap suggests a tantalizing direction for future work, separating shallow ReLU networks and their linearization. ""","""The paper considers representational aspects of neural tangent kernels (NTKs). More precisely, recent literature on overparametrized neural networks has identified NTKs as a way to characterize the behavior of gradient descent on wide neural networks as fitting these types of kernels. This paper focuses on the representational aspect: namely, that functions of appropriate ""complexity"" can be written as an NTK with parameters close to initialization (comparably close to what results on gradient descent get). The reviewers agree this content is of general interest to the community, and with the proposed revisions there is general agreement that the paper has sufficient merit to recommend acceptance.""" 540,"""Scaling Laws for the Principled Design, Initialization, and Preconditioning of ReLU Networks""","['initialization', 'mlp', 'relu']","""In this work, we describe a set of rules for the design and initialization of well-conditioned neural networks, guided by the goal of naturally balancing the diagonal blocks of the Hessian at the start of training. We show how our measure of conditioning of a block relates to another natural measure of conditioning, the ratio of weight gradients to the weights.
We prove that for a ReLU-based deep multilayer perceptron, a simple initialization scheme using the geometric mean of the fan-in and fan-out satisfies our scaling rule. For more sophisticated architectures, we show how our scaling principle can be used to guide design choices to produce well-conditioned neural networks, reducing guess-work.""","""This paper proposes a new design space for initialization of neural networks motivated by balancing the singular values of the Hessian. Reviewers found the problem well motivated and agreed that the proposed method has merit; however, more rigorous experiments are required to demonstrate that the ideas in this work are significant progress over currently known techniques. As noted by Reviewer 2, there has been substantial prior work on initialization and conditioning that needs to be discussed as it relates to the proposed method. The AC notes two additional, closely related initialization schemes that should be discussed [1,2]. Comparing with stronger baselines on more recent modern architectures would improve this work significantly. [1]: pseudo-url [2]: pseudo-url.""" 541,"""Coresets for Accelerating Incremental Gradient Methods""",[],"""Many machine learning problems reduce to the problem of minimizing an expected risk. Incremental gradient (IG) methods, such as stochastic gradient descent and its variants, have been successfully used to train the largest of machine learning models. IG methods, however, are in general slow to converge and sensitive to stepsize choices. Therefore, much work has focused on speeding them up by reducing the variance of the estimated gradient or choosing better stepsizes. An alternative strategy would be to select a carefully chosen subset of training data, train only on that subset, and hence speed up optimization. However, it remains an open question how to achieve this, both theoretically and practically, while not compromising on the quality of the final model. Here we develop CRAIG, a method for selecting a weighted subset (or coreset) of training data in order to speed up IG methods. We prove that by greedily selecting a subset S of training data that minimizes the upper bound on the estimation error of the full gradient, running IG on this subset will converge to the (near-)optimal solution in the same number of epochs as running IG on the full data. But because at each epoch the gradients are computed only on the subset S, we obtain a speedup that is inversely proportional to the size of S. Our subset selection algorithm is fully general and can be applied to most IG methods. We further demonstrate the practical effectiveness of our algorithm, CRAIG, through an extensive set of experiments on several applications, including logistic regression and deep neural networks. Experiments show that CRAIG, while achieving practically the same loss, speeds up IG methods by up to 10x for convex and 3x for non-convex (deep learning) problems.""","""This paper investigates the practical and theoretical consequences of speeding up training using incremental gradient methods (such as stochastic gradient descent) by calculating the gradients with respect to a specifically chosen sparse subset of data. The reviewers were quite split on the paper. On the one hand, there was general excitement about the direction of the paper. The idea of speeding up gradient descent is of course hugely relevant to the current machine learning landscape. The approach was also considered novel, and the paper well-written.
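The greedy selection at the heart of CRAIG can be read as maximizing a facility-location objective over pairwise similarities between per-example gradient proxies, with each selected example weighted by the size of the cluster it covers. A hedged NumPy sketch under those assumptions follows; the random features stand in for gradient estimates, and this illustrates the greedy step only, not the authors' implementation:

```python
import numpy as np

def craig_greedy(sim, k):
    """Greedy facility location: grow S to maximize sum_i max_{j in S} sim[i, j].

    Returns selected indices and per-element weights (covered-cluster sizes).
    """
    n = sim.shape[0]
    selected, best = [], np.zeros(n)   # best[i]: similarity of i to its closest pick
    for _ in range(k):
        gains = np.maximum(sim - best[:, None], 0.0).sum(axis=0)
        j = int(np.argmax(gains))      # element with the largest marginal gain
        selected.append(j)
        best = np.maximum(best, sim[:, j])
    assign = np.argmax(sim[:, selected], axis=1)   # nearest selected element
    weights = np.bincount(assign, minlength=k)     # cluster sizes as IG weights
    return np.array(selected), weights

rng = np.random.default_rng(0)
feats = rng.standard_normal((200, 16))   # stand-ins for per-example gradients
sim = feats @ feats.T
sim -= sim.min()                          # facility location wants nonnegative sims
subset, w = craig_greedy(sim, k=20)
```

Running IG on `subset`, with each gradient scaled by its weight in `w`, is the sense in which training touches only the coreset rather than the full dataset.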
However, the reviewers also pointed out multiple shortcomings. The experimental section was deemed to lack clarity and baselines. The results on standard datasets were very different from what was expected, raising concerns about reliability, although this was partially addressed in additional experiments. The applicability to deep learning and large datasets, as well as the significance of the time saved by using this method, were other concerns. Unfortunately, I have to agree with the majority of the reviewers that the idea is fascinating, but that more work is required for acceptance to ICLR. """ 542,"""Deep Generative Classifier for Out-of-distribution Sample Detection""","['Out-of-distribution Detection', 'Generative Classifier', 'Deep Neural Networks', 'Multi-class Classification', 'Gaussian Discriminant Analysis']","""The capability of reliably detecting out-of-distribution samples is one of the key factors in deploying a good classifier, as the test distribution often does not match the training distribution in real-world applications. In this work, we propose a deep generative classifier which is effective at detecting out-of-distribution samples as well as classifying in-distribution samples, by integrating the concept of Gaussian discriminant analysis into deep neural networks. Unlike the discriminative (or softmax) classifier that only focuses on the decision boundary partitioning its latent space into multiple regions, our generative classifier aims to explicitly model class-conditional distributions as separable Gaussian distributions. Thereby, we can define the confidence score by the distance between a test sample and the center of each distribution. Our empirical evaluation on multi-class images and tabular data demonstrates that the generative classifier achieves the best performance in distinguishing out-of-distribution samples, and that it generalizes well to various types of deep neural networks.""","""The paper presents a training method for deep neural networks to detect out-of-distribution samples from the perspective of Gaussian discriminant analysis. Reviewers and AC agree that a similar idea appears in previous work (although that work does not focus on training), and the additional ideas in the paper are not especially novel. Furthermore, the experimental results are weak; e.g., comparisons with other deep generative classifiers are desirable, as the paper focuses on training such deep models. Hence, I recommend rejection.""" 543,"""Reject Illegal Inputs: Scaling Generative Classifiers with Supervised Deep Infomax""","['generative classifiers', 'selective classification', 'classification with rejection']","""Deep Infomax~(DIM) is an unsupervised representation learning framework that maximizes the mutual information between the inputs and the outputs of an encoder, while probabilistic constraints are imposed on the outputs. In this paper, we propose Supervised Deep InfoMax~(SDIM), which introduces supervised probabilistic constraints to the encoder outputs. The supervised probabilistic constraints are equivalent to a generative classifier on high-level data representations, where class conditional log-likelihoods of samples can be evaluated. Unlike other works building generative classifiers with conditional generative models, SDIMs scale on complex datasets, and can achieve comparable performance with discriminative counterparts. With SDIM, we could perform \emph{classification with rejection}.
Instead of always reporting a class label, SDIM only makes predictions when test samples' largest logits surpass some pre-chosen thresholds; otherwise, samples are deemed out of the data distribution and rejected. Our experiments show that SDIM with a rejection policy can effectively reject illegal inputs, including out-of-distribution samples and adversarial examples.""","""This paper combines a well-known, recently proposed unsupervised representation learning technique with a class-conditional negative log likelihood and a squared hinge loss on the class-wise conditional likelihoods, and proposes to use the resulting conditional density model for generative classification. The empirical work appears to validate the claim that their method leads to good out of distribution detection, and better performance using a rejection option. The adversarial defense results are less clear. Reporting raw logits is a strange choice, and difficult to interpret; the table is also difficult to read, and this method of reporting makes it difficult to compare against existing methods. The reviewers generally remarked on presentation issues. R1 asked about the contribution of various loss terms, a matter I feel is underexplored in this work, and the authors mainly replied with a qualitative description of loss behaviour in the joint system, which I don't believe was the question. R1 also asked about the choice of thresholds and the issues of fairness of comparison regarding model capacity, neither of which seemed adequately addressed. R3 remarked on the clarity being lacking, and also that ""Generative modeling of representations is novel, afaik."" (It is not; see, for example, the VQ-VAE line of work where PixelCNN priors are fit on top of representations, and layer-wise pre-training works of the mid 2000s, where generative models were frequently fit on greedily trained feature representations, sometimes in conjunction with a joint generative model of class labels). R2's review was very brief, and with a self-reported low confidence, but their concerns were addressed in a subsequent update. There are three weaknesses which are my grounds for recommending rejection. First, this paper does a poor job of situating itself in the wider body of literature on classification with rejection, which dates to at least the 1970s (see Bartlett & Wegkamp, 2006, and the references therein). Second, the empirical work makes little comparison to other methods in the literature; baselines on clean data are self-generated, and the paper compares to no other adversarial defense proposals. As a minor drawback, ImageNet results are also missing; given that one of the purported advantages of the method is scalability, a large-scale benchmark would have strengthened this claim. Third, no ablation study is undertaken that might give us insight into the role of each term of the loss. Given that this is a straightforward combination of well-understood techniques, a fully empirical paper ought to deliver more insight into the combination than this manuscript has.""" 544,"""Defense against Adversarial Examples by Encoder-Assisted Search in the Latent Coding Space""","['Adversarial Defense', 'Auto-encoder', 'Adversarial Attack', 'GAN']","""Deep neural networks have been shown to be vulnerable to crafted adversarial perturbations, which raises serious safety concerns.
To solve this problem, we propose pseudo-formula, a framework for purifying input images by searching for the closest natural reconstruction with little computation. We first build a reconstruction network, AE-GAN, which adapts an auto-encoder by introducing an adversarial loss to the objective function. In this way, we can enhance the generative ability of the decoder and preserve the abstraction ability of the encoder to form a self-organized latent space. At inference time, given an input, we start a search process in the latent space which aims to find the reconstruction closest to the given image under the distribution of normal data. The encoder can provide a good starting point for the search process, which saves much computation cost. Experiments show that our method is robust against various attacks and can reach comparable or even better performance than similar methods with much less computation.""","""The paper proposes a defense for adversarial attacks based on autoencoders that tries to find the closest point to the natural image in the output span of the decoder and ""purify"" the adversarial example. There were concerns about the work being too incremental over DefenseGAN and about the empirical evaluation of the defense. It is crucial to test defense methods against the best available attacks to establish their effectiveness. Authors should also discuss and consider evaluating their method against the attack proposed in pseudo-url that claims to greatly reduce the defense accuracy of DefenseGAN. """ 545,"""DiffTaichi: Differentiable Programming for Physical Simulation""","['Differentiable programming', 'robotics', 'optimal control', 'physical simulation', 'machine learning system']","""We present DiffTaichi, a new differentiable programming language tailored for building high-performance differentiable physical simulators. Based on an imperative programming language, DiffTaichi generates gradients of simulation steps using source code transformations that preserve arithmetic intensity and parallelism. A light-weight tape is used to record the whole simulation program structure and replay the gradient kernels in a reversed order, for end-to-end backpropagation. We demonstrate the performance and productivity of our language in gradient-based learning and optimization tasks on 10 different physical simulators. For example, a differentiable elastic object simulator written in our language is 4.2x shorter than the hand-engineered CUDA version yet runs as fast, and is 188x faster than the TensorFlow implementation. Using our differentiable programs, neural network controllers are typically optimized within only tens of iterations.""","""The paper provides a language for optimizing through physical simulations. The reviewers had a number of concerns related to paper organization and insufficient comparisons to related work (JAX). During the discussion phase, the authors significantly updated their paper and ran additional experiments, leading to a much stronger paper.""" 546,"""Structural Language Models for Any-Code Generation""","['Program Generation', 'Structural Language Model', 'SLM', 'Generative Model', 'Code Generation']","""We address the problem of Any-Code Generation (AnyGen) - generating code without any restriction on the vocabulary or structure. The state-of-the-art in this problem is the sequence-to-sequence (seq2seq) approach, which treats code as a sequence and does not leverage any structural information.
We introduce a new approach to AnyGen that leverages the strict syntax of programming languages to model a code snippet as a tree, an approach we call structural language modeling (SLM). SLM estimates the probability of the program's abstract syntax tree (AST) by decomposing it into a product of conditional probabilities over its nodes. We present a neural model that computes these conditional probabilities by considering all AST paths leading to a target node. Unlike previous structural techniques that have severely restricted the kinds of expressions that can be generated, our approach can generate arbitrary expressions in any programming language. Our model significantly outperforms both seq2seq and a variety of existing structured approaches in generating Java and C# code. We make our code, datasets, and models available online.""","""This paper proposes a new method for code generation based on structured language models. After viewing the paper, reviews, and author response, my assessment is that I basically agree with Reviewer 4. (Now, after revision) This work seems to be (1) a bit incremental over other works such as Brockschmidt et al. (2019), and (2) a bit of a niche topic for ICLR. At the same time it has (3) good engineering effort resulting in good scores, and (4) a relatively detailed conceptual comparison with other work in the area. Also, (5) the title of ""Structural Language Models for Code Generation"" is clearly over-claiming the contribution of the work -- as cited in the paper, there are many language models, unconditional or conditional, that have been used in code generation in the past. In order to be accurate, the title would need to be modified to something that more accurately describes the (somewhat limited) contribution of the work. In general, I found this paper borderline. ICLR, as you know, is quite competitive, so while this is a reasonably good contribution, I'm not sure whether it checks the box of either high quality or high general interest to warrant acceptance. Because of this, I'm not recommending it for acceptance at this time, but definitely encourage the authors to continue to polish it for submission to a different venue (perhaps a domain conference that would be more focused on the underlying task of code generation?)""" 547,"""Skew-Fit: State-Covering Self-Supervised Reinforcement Learning""","['deep reinforcement learning', 'goal space', 'goal conditioned reinforcement learning', 'self-supervised reinforcement learning', 'goal sampling', 'reinforcement learning']","""Autonomous agents that must exhibit flexible and broad capabilities will need to be equipped with large repertoires of skills. Defining each skill with a manually-designed reward function limits this repertoire and imposes a manual engineering burden. Self-supervised agents that set their own goals can automate this process, but designing appropriate goal setting objectives can be difficult, and often involves heuristic design decisions. In this paper, we propose a formal exploration objective for goal-reaching policies that maximizes state coverage. We show that this objective is equivalent to maximizing the entropy of the goal distribution together with goal reaching performance, where goals correspond to full state observations. To instantiate this principle, we present an algorithm called Skew-Fit for learning a maximum-entropy goal distribution. Skew-Fit enables self-supervised agents to autonomously choose and practice reaching diverse goals.
We show that, under certain regularity conditions, our method converges to a uniform distribution over the set of valid states, even when we do not know this set beforehand. Our experiments show that it can learn a variety of manipulation tasks from images, including opening a door with a real robot, entirely from scratch and without any manually-designed reward function.""","""This paper tackles the problem of exploration in RL. In order to maximize coverage of the state space, the authors introduce an approach where the agent attempts to reach self-set goals. They empirically show that agents using this method uniformly visit all valid states under certain conditions. They also show that these agents are able to learn behaviours without a manually-defined reward function. The drawback of this work is the combined lack of theoretical justification and the limited (marginal) algorithmic novelty given other existing goal-directed techniques. Although the authors highlight the performance of the proposed approach, the current experiments do not convey a good enough understanding of why this approach works where other existing goal-directed techniques do not, which would be expected from a purely empirical paper. This dampens the contribution; hence, I recommend rejecting this paper.""" 548,"""Differentiable Bayesian Neural Network Inference for Data Streams""","['Bayesian neural network', 'approximate predictive inference', 'data stream', 'histogram']","""While deep neural networks (NNs) do not provide confidence estimates for their predictions, Bayesian neural networks (BNNs) can estimate the uncertainty of their predictions. However, BNNs have not been widely used in practice due to the computational cost of predictive inference. This prohibitive computational cost is a hindrance especially when processing stream data with low latency. To address this problem, we propose a novel model which approximates BNNs for data streams. Instead of generating a separate prediction for each data sample independently, this model estimates the increments of the prediction for a new data sample from the previous predictions. The computational cost of this model is almost the same as that of non-Bayesian deep NNs. Experiments including semantic segmentation on real-world data show that this model performs significantly faster than BNNs, while estimating uncertainty comparable to the results of BNNs. ""","""The main contribution is a Bayesian neural net algorithm which saves computation at test time using a vector quantization approximation. The reviewers are on the fence about the paper. I find the exposition somewhat hard to follow. In terms of evaluation, they demonstrate similar performance to various BNN architectures which require Monte Carlo sampling. But there have been lots of BNN algorithms that don't require sampling (e.g. PBP, Bayesian dark knowledge, MacKay's delta approximation), so it seems important to compare to these. I think there may be promising ideas here, but the paper needs a bit more work before it can be published at a venue such as ICLR. """ 549,"""Contrastive Multiview Coding""","['Representation Learning', 'Unsupervised Learning', 'Self-supervised Learning', 'Multiview Learning']","""Humans view the world through many sensory channels, e.g., the long-wavelength light channel, viewed by the left eye, or the high-frequency vibrations channel, heard by the right ear.
Each view is noisy and incomplete, but important factors, such as physics, geometry, and semantics, tend to be shared between all views (e.g., a ""dog"" can be seen, heard, and felt). We hypothesize that a powerful representation is one that models view-invariant factors. Based on this hypothesis, we investigate a contrastive coding scheme, in which a representation is learned that aims to maximize mutual information between different views but is otherwise compact. Our approach scales to any number of views, and is view-agnostic. The resulting learned representations perform above the state of the art for downstream tasks such as object classification, compared to formulations based on predictive learning or single view reconstruction, and improve as more views are added. On the ImageNet linear readout benchmark, we achieve 68.4% top-1 accuracy. ""","""This paper proposes to use contrastive predictive coding for self-supervised learning. The proposed approach is shown empirically to be more effective than existing self-supervised learning algorithms. While the reviewers found the experimental results encouraging, there were some questions about the contribution as a whole, in particular the lack of theoretical justification.""" 550,"""Empowering Graph Representation Learning with Paired Training and Graph Co-Attention""","['graph neural networks', 'graph co-attention', 'paired graphs', 'molecular properties', 'drug-drug interaction']","""Through many recent advances in graph representation learning, performance achieved on tasks involving graph-structured data has substantially increased in recent years---mostly on tasks involving node-level predictions. The setup of prediction tasks over entire graphs (such as property prediction for a molecule, or side-effect prediction for a drug), however, proves to be more challenging, as the algorithm must combine evidence about several structurally relevant patches of the graph into a single prediction. Most prior work attempts to predict these graph-level properties while considering only one graph at a time---not allowing the learner to directly leverage structural similarities and motifs across graphs. Here we propose a setup in which a graph neural network receives pairs of graphs at once, and extend it with a co-attentional layer that allows node representations to easily exchange structural information across them. We first show that such a setup provides natural benefits on a pairwise graph classification task (drug-drug interaction prediction), and then expand to a more generic graph regression setup: enhancing predictions over QM9, a standard molecular prediction benchmark. Our setup is flexible, powerful and makes no assumptions about the underlying dataset properties, beyond anticipating the existence of multiple training graphs.""","""The paper proposes combining paired training with graph co-attention. The reviewers have remarked that the paper is well written and that the experiments provide some new insights into this combination. Initially, some additional experiments were proposed, which were addressed by the authors in the rebuttal and the new version of the paper.
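The contrastive coding scheme in the CMC abstract above is typically instantiated as an InfoNCE loss over paired view embeddings, where matching rows in a batch are positives and all other rows serve as negatives. A minimal PyTorch sketch of that loss; the temperature value and embedding shapes are illustrative assumptions, not details from the paper:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.07):
    """InfoNCE between two batches of view embeddings; matching rows are positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0))        # the positive pair sits on the diagonal
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(128, 64), torch.randn(128, 64))
```

For more than two views, a loss of this form can be summed over every pair of views, which is one way to read the abstract's claim that the approach scales to any number of views.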
However, ICLR is becoming a very competitive conference where novelty is an important criterion for acceptance, and unfortunately the paper was considered to lack the novelty required for presentation at ICLR.""" 551,"""Confidence Scores Make Instance-dependent Label-noise Learning Possible""","['Instance-dependent label noise', 'Deep learning']","""Learning with noisy labels has drawn a lot of attention. In this area, most recent works only consider class-conditional noise, where the label noise is independent of the input features. This noise model may not be faithful to many real-world applications. Instead, a few pioneering works have studied instance-dependent noise, but these methods are limited by strong assumptions on the noise models. To alleviate this issue, we introduce confidence-scored instance-dependent noise (CSIDN), where each instance-label pair is associated with a confidence score. The confidence scores are sufficient to estimate the noise functions of each instance with minimal assumptions. Moreover, such scores can be easily and cheaply derived during the construction of the dataset through crowdsourcing or automatic annotation. To handle CSIDN, we design a benchmark algorithm termed instance-level forward correction. Empirical results on synthetic and real-world datasets demonstrate the utility of our proposed method.""","""While two reviewers rated this paper as an accept, Reviewer 3 strongly believes there are unresolved issues with the work, as summarized in their post-rebuttal review. This work seems very promising, and while the AC will recommend rejection at this time, the authors are strongly encouraged to resubmit this work.""" 552,"""Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier""","['adversarial attack', 'transformation defenses', 'distribution classifier']","""Adversarial attacks on convolutional neural networks (CNN) have gained significant attention and there have been active research efforts on defense mechanisms. Stochastic input transformation methods have been proposed, where the idea is to recover the image from adversarial attack by random transformation, and to take the majority vote as consensus among the random samples. However, the transformation improves the accuracy on adversarial images at the expense of the accuracy on clean images. While it is intuitive that the accuracy on clean images would deteriorate, the exact mechanism by which this occurs is unclear. In this paper, we study the distribution of softmax induced by stochastic transformations. We observe that with random transformations on the clean images, although the mass of the softmax distribution could shift to the wrong class, the resulting distribution of softmax could be used to correct the prediction. Furthermore, on the adversarial counterparts, the image transformations yield softmax distributions whose shapes are similar to those from the clean images. With these observations, we propose a method to improve existing transformation-based defenses. We train a separate lightweight distribution classifier to recognize distinct features in the distributions of softmax outputs of transformed images. Our empirical studies show that our distribution classifier, by training on distributions obtained from clean images only, outperforms majority voting for both clean and adversarial images.
Our method is generic and can be integrated with existing transformation-based defenses.""","""This paper investigates tradeoffs between preserving accuracy on clean samples and increasing robustness on adversarial samples by using transformations and majority votes. Observations on the distribution of the induced softmax show that existing methods could be improved by leveraging information from that distribution to correct predictions, as confirmed by experiments. The problem space is important and the reviewers find the approach interesting. The authors have provided the necessary clarifications during the rebuttal, along with additional experiments. While some reservations remain, this paper's premise and its experimental results appear sufficiently interesting to justify an acceptance recommendation.""" 553,"""Network Deconvolution""","['convolutional networks', 'network deconvolution', 'whitening']","""Convolution is a central operation in Convolutional Neural Networks (CNNs), which applies a kernel to overlapping regions shifted across the image. However, because of the strong correlations in real-world image data, convolutional kernels are in effect re-learning redundant data. In this work, we show that this redundancy has made neural network training challenging, and propose network deconvolution, a procedure which optimally removes pixel-wise and channel-wise correlations before the data is fed into each layer. Network deconvolution can be efficiently calculated at a fraction of the computational cost of a convolution layer. We also show that the deconvolution filters in the first layer of the network resemble the center-surround structure found in biological neurons in the visual regions of the brain. Filtering with such kernels results in a sparse representation, a desired property that has been missing in the training of neural networks. Learning from the sparse representation promotes faster convergence and superior results without the use of batch normalization. We apply our network deconvolution operation to 10 modern neural network models by replacing batch normalization within each. Extensive experiments show that the network deconvolution operation is able to deliver performance improvement in all cases on the CIFAR-10, CIFAR-100, MNIST, Fashion-MNIST, Cityscapes, and ImageNet datasets.""","""This paper presents a feature normalization method for CNNs that removes channel-wise and spatial correlations simultaneously. Overall, all reviewers are positive about acceptance, and I support their opinions. The idea and implementation are relatively straightforward but well-motivated and reasonable. The experiments are well-organized and extensive, providing enough evidence to convince of its effectiveness in terms of final accuracy and convergence speed. Also, its analogy to the biological center-surround structure is thought-provoking. The novelty of the method seems somewhat incremental considering that there already exists a channel-wise decorrelation method, but I think the findings of the paper are interesting and valuable enough for the ICLR community, and I would like to recommend acceptance. Minor comments: I recommend that the authors mention zero-phase component analysis (ZCA) normalization, which has been a standard input normalization method for the CIFAR datasets. I guess it is quite similar to the proposed method, considering the 1x1 convolution. Also, comparison with other recent normalization methods (e.g., Group Norm) would be useful.
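The per-layer decorrelation that network deconvolution performs amounts, in essence, to multiplying features by an (approximate) inverse square root of their covariance, i.e., ZCA-style whitening. A NumPy sketch of that core computation on flattened features, with a small ridge term for numerical stability; this illustrates the idea, not the paper's efficient implementation:

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """Decorrelate features by multiplying with Cov(X)^(-1/2) (ZCA whitening)."""
    Xc = X - X.mean(axis=0)                       # center each feature
    cov = Xc.T @ Xc / (len(Xc) - 1)
    vals, vecs = np.linalg.eigh(cov)              # cov is symmetric
    inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return Xc @ inv_sqrt

X = np.random.default_rng(0).standard_normal((1000, 8)) @ np.diag(np.arange(1, 9))
Xw = zca_whiten(X)                                # Cov(Xw) is ~identity
```

In the convolutional setting, the "features" would be the im2col-style patches feeding each layer, so the same transform removes pixel-wise as well as channel-wise correlations.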
""" 554,"""Context-aware Attention Model for Coreference Resolution""","['Coreference resolution', 'Feature Attention']","""Coreference resolution is an important task for gaining more complete understanding about texts by artificial intelligence. The state-of-the-art end-to-end neural coreference model considers all spans in a document as potential mentions and learns to link an antecedent with each possible mention. However, for the verbatim same mentions, the model tends to get similar or even identical representations based on the features, and this leads to wrongful predictions. In this paper, we propose to improve the end-to-end system by building an attention model to reweigh features around different contexts. The proposed model substantially outperforms the state-of-the-art on the English dataset of the CoNLL 2012 Shared Task with 73.45% F1 score on development data and 72.84% F1 score on test data.""","""Main content: Blind review #2 summarizes it well: This paper extends the neural coreference resolution model in Lee et al. (2018) by 1) introducing an additional mention-level feature (grammatical numbers), and 2) letting the mention/pair scoring functions attend over multiple mention-level features. The proposed model achieves marginal improvement (0.2 avg. F1 points) over Lee et al., 2018, on the CoNLL 2012 English test set. -- Discussion: All reviewers rejected. -- Recommendation and justification: The paper must be rejected due to its violation of blind submission (the authors reveal themselves in the Acknowledgments). For information, blind review #2 also summarized well the following justifications for rejection: I recommend rejection for this paper due to the following reasons: - The technical contribution is very incremental (introducing one more features, and adding an attention layer over the feature vectors). - The experiment results aren't strong enough. And the experiments are done on only one dataset. - I am not convinced that adding the grammatical numbers features and the attention mechanism makes the model more context-aware.""" 555,"""Rethinking Curriculum Learning With Incremental Labels And Adaptive Compensation""","['Curriculum Learning', 'Incremental Label Learning', 'Label Smoothing', 'Deep Learning']","""Like humans, deep networks learn better when samples are organized and introduced in a meaningful order or curriculum. While conventional approaches to curriculum learning emphasize the difficulty of samples as the core incremental strategy, it forces networks to learn from small subsets of data while introducing pre-computation overheads. In this work, we propose Learning with Incremental Labels and Adaptive Compensation (LILAC), which introduces a novel approach to curriculum learning. LILAC emphasizes incrementally learning labels instead of incrementally learning difficult samples. It works in two distinct phases: first, in the incremental label introduction phase, we unmask ground-truth labels in fixed increments during training, to improve the starting point from which networks learn. In the adaptive compensation phase, we compensate for failed predictions by adaptively altering the target vector to a smoother distribution. We evaluate LILAC against the closest comparable methods in batch and curriculum learning and label smoothing, across three standard image benchmarks, CIFAR-10, CIFAR-100, and STL-10. 
We show that our method outperforms batch learning, with higher mean recognition accuracy as well as lower standard deviation in performance, consistently across all benchmarks. We further extend LILAC to state-of-the-art performance on CIFAR-10 using simple data augmentation, while exhibiting label-order invariance among other important properties.""","""While the reviewers appreciated the ideas presented in the paper and their novelty, there were major concerns raised about the experimental evaluation. Due to the serious doubts that the reviewers raised about the effectiveness of the proposed approach, I do not think that the paper is quite ready for publication at this time, though I would encourage the authors to revise and resubmit the work at the next opportunity.""" 556,"""Deep Orientation Uncertainty Learning based on a Bingham Loss""","['Orientation Estimation', 'Directional Statistics', 'Bingham Distribution']","""Reasoning about uncertain orientations is one of the core problems in many perception tasks such as object pose estimation or motion estimation. In these scenarios, poor illumination conditions, sensor limitations, or appearance invariance may result in highly uncertain estimates. In this work, we propose a novel learning-based representation for orientation uncertainty. By characterizing uncertainty over unit quaternions with the Bingham distribution, we formulate a loss that naturally captures the antipodal symmetry of the representation. We discuss the interpretability of the learned distribution parameters and demonstrate the feasibility of our approach on several challenging real-world pose estimation tasks involving uncertain orientations.""","""This paper considers the problem of reasoning about uncertain poses of objects in images. The reviewers agree that this is an interesting direction, and that the paper has real technical merit.""" 557,"""Attention Interpretability Across NLP Tasks""","['Attention', 'NLP', 'Interpretability']","""The attention layer in a neural network model provides insights into the model's reasoning behind its prediction, reasoning which is usually criticized for being opaque. Recently, seemingly contradictory viewpoints have emerged about the interpretability of attention weights (Jain & Wallace, 2019; Vig & Belinkov, 2019). Amid such confusion arises the need to understand the attention mechanism more systematically. In this work, we attempt to fill this gap by giving a comprehensive explanation which justifies both kinds of observations (i.e., when attention is interpretable and when it is not). Through a series of experiments on diverse NLP tasks, we validate our observations and reinforce our claim of the interpretability of attention through manual evaluation.""","""This paper investigates the degree to which we might view attention weights as explanatory across NLP tasks and architectures. Notably, the authors distinguish between single and ""pair"" sequence tasks, the latter including NLI, and generation tasks (e.g., translation). The argument here is that attention weights do not provide explanatory power for single-sequence tasks like classification, but do for NLI and generation. Another notable distinction from most (although not all; see the references below) prior work on the explainability of attention mechanisms in NLP is the inclusion of transformer/self-attentive architectures.
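To ground what the attention-interpretability study above is probing, here is a self-contained sketch of the common practice it examines: computing scaled dot-product attention weights and reading the largest weight as the "most important" token. The dimensions and vectors are invented for illustration.

    import numpy as np

    def attention_weights(query, keys):
        # Scaled dot-product attention: one query against a sequence of keys.
        scores = keys @ query / np.sqrt(query.shape[0])
        exp = np.exp(scores - scores.max())
        return exp / exp.sum()

    # Toy example: 4 tokens with 8-dimensional key vectors.
    rng = np.random.default_rng(1)
    keys = rng.normal(size=(4, 8))
    query = rng.normal(size=8)
    weights = attention_weights(query, keys)
    # Treating argmax(weights) as the explanation is exactly the practice
    # whose validity the paper tests across single-sequence, pair-sequence,
    # and generation tasks.
    print(weights, int(weights.argmax()))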
Unfortunately, the paper needs work in presentation (in particular, in Section 3) before it is ready to be published.""" 558,"""Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions""","['Adversarial Examples', 'Detection of adversarial attacks']","""Adversarial examples raise questions about whether neural network models are sensitive to the same visual features as humans. In this paper, we first detect adversarial examples or otherwise corrupted images based on a class-conditional reconstruction of the input. To specifically attack our detection mechanism, we propose the Reconstructive Attack, which seeks both to cause a misclassification and a low reconstruction error. This reconstructive attack produces undetected adversarial examples, but with a much smaller success rate. Among all these attacks, we find that CapsNets always perform better than convolutional networks. Then, we diagnose the adversarial examples for CapsNets and find that the success of the reconstructive attack is highly related to the visual similarity between the source and target class. Additionally, the resulting perturbations can cause the input image to appear visually more like the target class and hence become non-adversarial. This suggests that CapsNets use features that are more aligned with human perception and have the potential to address the central issue raised by adversarial examples.""","""This paper presents a mechanism for capsule networks to defend against adversarial examples, and a new attack, the reconstruction attack. The differing success of these attacks on capsnets and convnets is used to argue that capsnets find features that are more similar to what humans use. Reviewers generally like the paper, but took issue with the strength of the claim (about the usefulness of the examples) and argued that the paper might not be as novel as it claims. Still, this seems like a valuable contribution that should be published.""" 559,"""RGTI: Response generation via templates integration for End to End dialog""","['End-to-end dialogue systems', 'transformer', 'pointer-generate network']","""End-to-end models have achieved considerable success in the task-oriented dialogue area, but suffer from the challenges of (a) poor semantic control, and (b) little interaction with auxiliary information. In this paper, we propose a novel yet simple end-to-end model for response generation via mixed templates, which can address the above challenges. In our model, we retrieve candidate responses, which contain abundant syntactic and sequence information, using dialogue semantic information related to the dialogue history. Then, we exploit candidate-response attention to get templates which should be mentioned in the response. Our model can integrate multi-template information to guide the decoder module in generating better responses. We show that our proposed model learns useful template information, which improves the performance of ""how to say"" and ""what to say"" in response generation. Experiments on the large-scale MultiWOZ dataset demonstrate the effectiveness of our proposed model, which attains state-of-the-art performance.""","""This paper describes a method to incorporate multiple candidate templates to aid in response generation for an end-to-end dialog system. Reviewers thought the basic idea is novel and interesting.
However, they also agree that the paper is far from complete: results are missing, further experiments are needed as justification, and the presentation of the paper is not very clear. Given this feedback from the reviews, I suggest rejecting the paper.""" 560,"""Lean Images for Geo-Localization""","['Geo Localization', 'Deep Learning', 'Computer Vision', 'Camera Localization']","""Most computer vision tasks use textured images. In this paper we consider the geo-localization task - finding the pose of a camera in a large 3D scene from a single lean image, i.e. an image with no texture. We aim to experimentally explore whether texture and correlation between nearby images are necessary in a CNN-based solution for this task. Our results may give insight into the role of geometry (as opposed to textures) in a CNN-based geo-localization solution. Lean images are projections of a simple 3D model of a city. They contain solely information that relates to the geometry of the scene viewed (edges, faces, or relative depth). We find that the network is capable of estimating the camera pose from lean images for a relatively large number of locations (on the order of hundreds of thousands of images). The main contributions of this paper are: (i) demonstrating the power of CNNs for recovering camera pose using lean images; and (ii) providing insight into the role of geometry in the CNN learning process.""","""The submission studies the problem of geo-localizing a city based on geometric information encoded in so-called ""lean"" images. The reviewers were unanimous in their opinion that the submission does not meet the threshold for publication at ICLR. Concerns included the quality of the writing, novelty with respect to existing literature (in particular, see Review #2), and limited validation on one geographic area. No rebuttal was provided.""" 561,"""Sample Efficient Policy Gradient Methods with Recursive Variance Reduction""","['Policy Gradient', 'Reinforcement Learning', 'Sample Efficiency']","""Improving the sample efficiency of reinforcement learning has been a long-standing research problem. In this work, we aim to reduce the sample complexity of existing policy gradient methods. We propose a novel policy gradient algorithm called SRVR-PG, which only requires pseudo-formula \footnote{ pseudo-formula notation hides constant factors.} episodes to find an pseudo-formula -approximate stationary point of the nonconcave performance function pseudo-formula (i.e., pseudo-formula such that $\|\nabla J(\boldsymbol{\theta})\|_2^2\leq\epsilon$). This sample complexity improves the existing result pseudo-formula for stochastic variance reduced policy gradient algorithms by a factor of pseudo-formula. In addition, we propose a variant of SRVR-PG with parameter exploration, which explores the initial policy parameter from a prior probability distribution. We conduct numerical experiments on classic control problems in reinforcement learning to validate the performance of our proposed algorithms.""","""The paper introduces a policy gradient estimator that is based on a stochastic recursive gradient estimator. It provides a sample complexity result of O(eps^{-3/2}) trajectories for estimating the gradient with an accuracy of eps. This paper generated a lot of discussion among reviewers. The discussions were around the novelty of this work in relation to SARAH (Nguyen et al., ICML 2017), SPIDER (Fang et al., NeurIPS 2018) and the work of Papini et al. (ICML 2018).
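Since the discussion of SRVR-PG above turns on SARAH/SPIDER-style recursive variance reduction, the following generic numpy sketch shows that estimator on a toy finite-sum quadratic. It omits the step-wise importance sampling that the RL setting requires; the objective, step size, and epoch length q are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.normal(size=(100, 5))               # f(w) = mean_i 0.5 * (a_i . w)^2
    def grad_i(w, i):
        return A[i] * (A[i] @ w)                # per-sample gradient

    w = rng.normal(size=5)
    lr, q = 0.05, 10
    for t in range(100):
        if t % q == 0:
            # Periodically anchor with a full gradient.
            v = np.mean([grad_i(w, i) for i in range(100)], axis=0)
        else:
            i = rng.integers(100)
            # Recursive update: correct the running estimate with a
            # sampled gradient difference between consecutive iterates.
            v = grad_i(w, i) - grad_i(w_prev, i) + v
        w_prev = w.copy()
        w = w - lr * v
    print(np.linalg.norm(w))                    # shrinks toward the minimizer w = 0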
SARAH/SPIDER are stochastic variance reduced gradient estimators for convex/non-convex problems and have been studied in the optimization literature. To bring them to the RL literature, some adjustments are needed, for example the use of an importance sampling (IS) estimator. The work of Papini et al. uses IS, but does not use SARAH/SPIDER, and it does not use step-wise IS. Overall, I believe that even though the key algorithmic components of this work have been around, it is still a valuable contribution to the RL literature.""" 562,"""Certified Robustness to Adversarial Label-Flipping Attacks via Randomized Smoothing""","['Adversarial Robustness', 'Label Flipping Attack', 'Data Poisoning Attack']","""This paper considers label-flipping attacks, a type of data poisoning attack where an adversary relabels a small number of examples in a training set in order to degrade the performance of the resulting classifier. In this work, we propose a strategy to build classifiers that are certifiably robust against a strong variant of label-flipping, where the adversary can target each test example independently. In other words, for each test point, our classifier makes a prediction and includes a certification that its prediction would be the same had some number of training labels been changed adversarially. Our approach leverages randomized smoothing, a technique that has previously been used to guarantee test-time robustness to adversarial manipulation of the input to a classifier. Further, we obtain these certified bounds with no additional runtime cost over standard classification. On the Dogfish binary classification task from ImageNet, in the face of an adversary who is allowed to flip 10 labels to individually target each test point, the baseline undefended classifier achieves no more than 29.3% accuracy; we obtain a classifier that maintains 64.2% certified accuracy against the same adversary.""","""The authors develop a certified defense for label-flipping attacks (where an adversary can flip the labels of a small number of training set samples) based on the randomized smoothing technique developed for certified defenses against adversarial perturbations of the input. The framework applies to least-squares classifiers acting on pretrained features learned by a deep network. The authors show that the resulting framework can obtain significant improvements in certified accuracy against targeted label-flipping attacks for each test example. While the paper makes some interesting contributions, the reviewers had the following shared concerns regarding the paper: 1) Reality of the threat model: The threat model assumes that the adversary has access to the model and all of the training data (so as to choose which labels to flip), which is very unlikely in practice. 2) Limitation to least squares on pre-trained features: The only practical instantiation of the framework presented in the paper is on least-squares classifiers acting on pre-trained features learned by a deep network. In the rebuttal phase, the authors clarified some of the more minor concerns raised by the reviewers, but the above concerns remained.
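The certificate in the label-flipping paper above comes from randomized smoothing applied to labels: fit many copies of the classifier on independently flipped labels and report the majority vote, whose margin bounds how many adversarial flips can change the prediction. A toy stand-in (nearest-class-mean base learner, invented flip rate and data) to show the shape of the computation:

    import numpy as np

    rng = np.random.default_rng(3)
    y = np.arange(200) % 2                               # two classes
    X = rng.normal(size=(200, 2)) + np.array([2.0, 0.0]) * y[:, None]

    def fit_and_predict(labels, x_test):
        # Toy base learner: nearest class mean under the given labels.
        mu0, mu1 = X[labels == 0].mean(0), X[labels == 1].mean(0)
        return int(np.linalg.norm(x_test - mu1) < np.linalg.norm(x_test - mu0))

    votes = []
    for _ in range(100):
        noisy = np.where(rng.random(200) < 0.1, 1 - y, y)  # flip each label w.p. 0.1
        votes.append(fit_and_predict(noisy, np.array([2.0, 0.0])))
    # The vote fraction plays the role of the smoothed classifier's
    # certification margin.
    print(np.mean(votes))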
Overall, I feel that this paper is borderline: if the authors extend the applicability of the framework (for example, by relaxing the restriction to pre-trained deep features) and motivate the threat model more strongly, this could be an interesting paper.""" 563,"""Group-Transformer: Towards A Lightweight Character-level Language Model""","['Transformer', 'Lightweight model', 'Language Modeling', 'Character-level language modeling']","""Character-level language modeling is an essential but challenging task in Natural Language Processing. Prior works have focused on identifying long-term dependencies between characters and have built deeper and wider networks for better performance. However, their models require substantial computational resources, which hinders the usability of character-level language models in applications with limited resources. In this paper, we propose a lightweight model, called Group-Transformer, that reduces the resource requirements of a Transformer, a promising method for modeling sequences with long-term dependencies. Specifically, the proposed method partitions linear operations to reduce the number of parameters and the computational cost. As a result, Group-Transformer uses only 18.2\% of the parameters of the best-performing LSTM-based model, while providing better performance on two benchmark tasks, enwik8 and text8. When compared to Transformers with a comparable number of parameters and time complexity, the proposed model shows better performance. The implementation code will be available.""","""This paper proposes a lightweight alternative to Transformer self-attention, called Group-Transformer, in order to overcome difficulties in modelling long-distance dependencies in character-level language modelling. The authors take inspiration from work on group convolutions. They experiment on two large-scale character-level LM datasets, which show positive results, but experiments on word-level tasks fail to show benefits. I think that this work, though promising, is still somewhat incremental and has not been shown to be widely applicable, and therefore I recommend that it not be accepted.""" 564,"""Scaling Up Neural Architecture Search with Big Single-Stage Models""",['Single-Stage Neural Architecture Search'],"""Neural architecture search (NAS) methods have shown promising results discovering models that are both accurate and fast. For NAS, training a one-shot model has become a popular strategy to approximate the quality of multiple architectures (child models) using a single set of shared weights. To avoid performance degradation due to parameter sharing, most existing methods have a two-stage workflow where the best child model induced from the one-shot model has to be retrained or finetuned. In this work, we propose BigNAS, an approach that simplifies this workflow and scales up neural architecture search to target a wide range of model sizes simultaneously. We propose several techniques to bridge the gap between the distinct initialization and learning dynamics across small and big models with shared parameters, which enable us to train a single-stage model: a single model from which we can directly slice high-quality child models without retraining or finetuning. With BigNAS we are able to train a single set of shared weights on ImageNet and use these weights to obtain child models whose sizes range from 200 to 1000 MFLOPs.
Our discovered model family, BigNASModels, achieves top-1 accuracies ranging from 76.5% to 80.9%, surpassing all state-of-the-art models in this range, including EfficientNets.""","""This paper presents a NAS method that avoids having to retrain models from scratch and targets a range of model sizes at once. The work builds on Yu & Huang (2019) and studies a combination of many different techniques. Several baselines use a weaker training method, and no code is made available, raising doubts concerning reproducibility. The reviewers asked various questions, but for several of these questions (e.g., running experiments on MNIST and CIFAR) the authors did not answer satisfactorily. Therefore, the reviewer asking these questions also declined to change his/her rating. Overall, as AnonReviewer #1 points out, the paper is very empirical. This is not necessarily a bad thing if the experiments yield a lot of insight, but that insight also appears limited here. Therefore, I agree with the reviewers and recommend rejection.""" 565,"""GRAPHS, ENTITIES, AND STEP MIXTURE""","['Graph Neural Network', 'Random Walk', 'Attention']","""Graph neural networks have shown promising results on representing and analyzing diverse graph-structured data such as social, citation, and protein interaction networks. Existing approaches commonly suffer from the oversmoothing issue, regardless of whether edge-based or node-based policies are used for neighborhood aggregation. Most methods also focus on transductive scenarios for fixed graphs, leading to poor generalization performance on unseen graphs. To address these issues, we propose a new graph neural network model that considers both edge-based neighborhood relationships and node-based entity features, i.e., Graph Entities with Step Mixture via random walk (GESM). GESM employs a mixture of various steps through random walk to alleviate the oversmoothing problem, and attention to use node information explicitly. These two mechanisms allow for a weighted neighborhood aggregation which considers the properties of entities and relations. Through extensive experiments, we show that the proposed GESM achieves state-of-the-art or comparable performance on four benchmark graph datasets comprising transductive and inductive learning tasks. Furthermore, we empirically demonstrate the significance of considering global information. The source code will be publicly available in the near future.""","""Two reviewers are concerned about this paper while the other is slightly positive. A rejection is recommended.""" 566,"""Editable Neural Networks""","['editing', 'editable', 'meta-learning', 'maml']","""These days, deep neural networks are ubiquitously used in a wide range of tasks, from image classification and machine translation to face identification and self-driving cars. In many applications, a single model error can lead to devastating financial, reputational, and even life-threatening consequences. Therefore, it is crucially important to correct model mistakes quickly as they appear. In this work, we investigate the problem of neural network editing: how one can efficiently patch a mistake of the model on a particular sample, without influencing the model's behavior on other samples. Namely, we propose Editable Training, a model-agnostic training technique that encourages fast editing of the trained model.
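The Editable Training technique just introduced suggests a composite objective balancing the three criteria the meta-review below lists (reliability, locality, efficiency). A hypothetical sketch, assuming a task loss plus weighted penalties for edit effort (efficiency) and for prediction drift on untouched inputs (locality, via a KL term); the weights and term definitions are our illustrative choices, not the paper's exact formulation.

    import numpy as np

    def kl(p, q, eps=1e-9):
        return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

    def editable_objective(task_loss, edit_steps, probs_before, probs_after,
                           c_edit=0.01, c_loc=1.0):
        # Hypothetical composite loss: keep task performance, make future
        # patches cheap (few gradient steps), and keep behaviour on
        # untouched inputs unchanged.
        return task_loss + c_edit * edit_steps + c_loc * kl(probs_before, probs_after)

    p_before = np.array([0.7, 0.2, 0.1])
    p_after = np.array([0.65, 0.25, 0.10])
    print(editable_objective(task_loss=0.9, edit_steps=3,
                             probs_before=p_before, probs_after=p_after))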
We empirically demonstrate the effectiveness of this method on large-scale image classification and machine translation tasks.""","""This paper proposes a method which patches/edits a pre-trained neural network's predictions on problematic data points. They do this without the need to retrain the network on the entire data, using only a few steps of stochastic gradient descent and thereby avoiding influencing model behaviour on other samples. The post-patching training can encourage reliability, locality, and efficiency by using a loss function which incorporates these three criteria, weighted by hyperparameters. Experiments are done on CIFAR-10 toy experiments, large-scale image classification with adversarial examples, and machine translation. The reviews are generally positive, with a significant author response, a new improved version of the paper, and further discussion. This is a well-written paper with convincing results, and it addresses a serious problem for production models; I therefore recommend that it be accepted.""" 567,"""Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets""","['Generative Adversarial Nets', 'Adaptive Gradient Algorithms']","""Adaptive gradient algorithms perform gradient-based updates using the history of gradients and are ubiquitous in training deep neural networks. While the theory of adaptive gradient methods is well understood for minimization problems, the underlying factors driving their empirical success in min-max problems such as GANs remain unclear. In this paper, we aim at bridging this gap from both theoretical and empirical perspectives. First, we analyze a variant of Optimistic Stochastic Gradient (OSG) proposed in~\citep{daskalakis2017training} for solving a class of non-convex non-concave min-max problems and establish pseudo-formula complexity for finding an pseudo-formula -first-order stationary point, in which the algorithm only requires invoking one stochastic first-order oracle while enjoying the state-of-the-art iteration complexity achieved by the stochastic extragradient method of~\citep{iusem2017extragradient}. Then we propose an adaptive variant of OSG named Optimistic Adagrad (OAdagrad) and reveal an \emph{improved} adaptive complexity pseudo-formula~\footnote{Here pseudo-formula compresses a logarithmic factor of pseudo-formula.}, where pseudo-formula characterizes the growth rate of the cumulative stochastic gradient and $\alpha\leq 1/2$. To the best of our knowledge, this is the first work to establish adaptive complexity in non-convex non-concave min-max optimization. Empirically, our experiments show that adaptive gradient algorithms indeed outperform their non-adaptive counterparts in GAN training. Moreover, this observation can be explained by the slow growth rate of the cumulative stochastic gradient, as observed empirically.""","""This work proposes a new adaptive method for solving certain min-max problems. The reviewers all appreciated the work and most of their concerns were addressed in the rebuttal. Given the current interest in both adaptive methods and min-max problems, this work is suited for publication at ICLR.""" 568,"""DeepSphere: a graph-based spherical CNN""","['spherical cnns', 'graph neural networks', 'geometric deep learning']","""Designing a convolution for a spherical neural network requires a delicate tradeoff between efficiency and rotation equivariance.
DeepSphere, a method based on a graph representation of the discretized sphere, strikes a controllable balance between these two desiderata. Our contribution is twofold. First, we study both theoretically and empirically how equivariance is affected by the underlying graph with respect to the number of pixels and neighbors. Second, we evaluate DeepSphere on relevant problems. Experiments show state-of-the-art performance and demonstrate the efficiency and flexibility of this formulation. Perhaps surprisingly, comparison with previous work suggests that anisotropic filters might be an unnecessary price to pay. Our code is available at pseudo-url.""","""This paper proposes a novel methodology for applying convolutional networks to spherical data through a graph-based discretization. The reviewers all found the methodology sensible and the experiments convincing. A common concern of the reviewers was the amount of novelty in the approach, in that it involves the combination of established methods, but ultimately they found that the empirical performance compared to baselines outweighed this.""" 569,"""Sub-policy Adaptation for Hierarchical Reinforcement Learning""","['Hierarchical Reinforcement Learning', 'Transfer', 'Skill Discovery']","""Hierarchical reinforcement learning is a promising approach to tackle long-horizon decision-making problems with sparse rewards. Unfortunately, most methods still decouple the lower-level skill acquisition process from the training of a higher level that controls the skills in a new task. Leaving the skills fixed can lead to significant sub-optimality in the transfer setting. In this work, we propose a novel algorithm to discover a set of skills, and continuously adapt them along with the higher level even when training on a new task. Our main contributions are two-fold. First, we derive a new hierarchical policy gradient with an unbiased latent-dependent baseline, and we introduce Hierarchical Proximal Policy Optimization (HiPPO), an on-policy method to efficiently train all levels of the hierarchy jointly. Second, we propose a method of training time-abstractions that improves the robustness of the obtained skills to environment changes. Code and videos are available at pseudo-url.""","""This paper considers hierarchical reinforcement learning, and specifically the case where the learning and use of lower-level skills should not be decoupled. To this end the paper proposes Hierarchical Proximal Policy Optimization (HiPPO) to jointly learn the different layers of the hierarchy. This is compared against other hierarchical RL schemes on several MuJoCo domains. The reviewers raised three main issues with this paper. The first concerns an excluded baseline, which was then included in the rebuttal. The other issues involve the motivation for the paper (in that there exist other methods that try to learn different levels of the hierarchy together) and the justification for some design choices. These were addressed to some extent in the rebuttal, but I believe this to still be an interesting contribution to the literature, and it should be accepted.""" 570,"""Curriculum Loss: Robust Learning and Generalization against Label Corruption""","['Curriculum Learning', 'deep learning']","""Deep neural networks (DNNs) have great expressive power, and can even memorize samples with wrong labels. It is therefore vitally important to revisit robustness and generalization in DNNs against label corruption.
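DeepSphere's convolution, described above, filters signals on a graph discretization of the sphere. The generic spectral idea is a filter expressed as a polynomial of the graph Laplacian, y = sum_k c_k L^k x; below is a minimal numpy sketch on a toy ring graph standing in for a sphere sampling. The coefficients and graph are invented, and practical implementations use efficient Chebyshev-style recursions.

    import numpy as np

    def laplacian(adj):
        return np.diag(adj.sum(1)) - adj

    def graph_conv(adj, signal, coeffs):
        # y = coeffs[0] * x + coeffs[1] * L x + coeffs[2] * L^2 x + ...
        L = laplacian(adj)
        out = np.zeros_like(signal, dtype=float)
        Lx = signal.astype(float)
        for c in coeffs:
            out += c * Lx
            Lx = L @ Lx
        return out

    # Toy 4-node ring graph standing in for a discretized sphere.
    ring = np.array([[0, 1, 0, 1],
                     [1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [1, 0, 1, 0]], dtype=float)
    print(graph_conv(ring, np.array([1.0, 0.0, 0.0, 0.0]), [0.5, 0.25, 0.1]))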
To this end, this paper studies the 0-1 loss, which has a monotonic relationship with the empirical adversarial (reweighted) risk (Hu et al., 2018). Although the 0-1 loss is robust to outliers, it is also difficult to optimize. To efficiently optimize the 0-1 loss while keeping its robustness properties, we propose a very simple and efficient loss, namely the curriculum loss (CL). Our CL is a tighter upper bound on the 0-1 loss compared with conventional summation-based surrogate losses. Moreover, CL can adaptively select samples for stagewise training. As a result, our loss can be viewed as a novel curriculum sample selection strategy, which bridges curriculum learning and robust learning. Experimental results on the noisy MNIST, CIFAR10 and CIFAR100 datasets validate the robustness of the proposed loss.""","""This paper studies learning with noisy labels by integrating the idea of curriculum learning. All reviewers and the AC are happy with the novelty, clear write-up, and experimental results. I recommend acceptance.""" 571,"""Multi-Step Decentralized Domain Adaptation""","['domain adaptation', 'decentralization']","""Despite the recent breakthroughs in unsupervised domain adaptation (uDA), no prior work has studied the challenges of applying these methods in practical machine learning scenarios. In this paper, we highlight two significant bottlenecks for uDA, namely excessive centralization and poor support for distributed domain datasets. Our proposed framework, MDDA, is powered by a novel collaborator selection algorithm and an effective distributed adversarial training method, and allows uDA methods to work in a decentralized and privacy-preserving way.""","""This paper proposes a solution to the decentralized, privacy-preserving domain adaptation problem, i.e., how to adapt to a target domain without explicit data access to other existing domains. In this scenario, the authors propose MDDA, which consists of both a collaborator selection algorithm based on minimal Wasserstein distance and a technique for adapting through sharing discriminator gradients across domains. The reviewers gave split scores for this work, with two recommending weak accept and two recommending weak reject. However, both reviewers who recommended weak accept explicitly mentioned that their recommendation was borderline (an option not available for ICLR 2020). The main issues raised by the reviewers were a lack of algorithmic novelty and a lack of comparison to prior privacy-preserving work. The authors agreed that their goal was not to introduce a new domain adaptation algorithm, but rather to propose a generic solution to extend existing algorithms to the case of privacy-preserving and decentralized DA. The authors also provided extensive revisions in response to the reviewers' comments. Though the reviewers were convinced on some points (like the privacy-preserving arguments), there still remained key outstanding issues that were significant enough that the reviewers did not update their recommendations. Therefore, this paper is not recommended for acceptance in its current form. We encourage the authors to build on the revisions completed during the rebuttal phase and to address any outstanding comments from the reviewers.
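The curriculum loss in paper 570 above can be made concrete with a small sketch: the 0-1 loss counts nonpositive margins, and a selection-based surrogate that sums hinge losses only over samples the model has not yet badly violated (counting the rest as plain errors) upper-bounds that count more tightly than summing every hinge term. This is a schematic rendering of the selection idea, not the paper's exact bound.

    import numpy as np

    def selective_hinge_bound(margins):
        # 0-1 loss: number of margins <= 0. For selected samples the hinge
        # dominates the 0-1 indicator; unselected samples contribute 1 each,
        # which is still an upper bound but avoids the large hinge values
        # that make the plain summed surrogate loose under label noise.
        hinge = np.maximum(0.0, 1.0 - margins)
        selected = hinge <= 1.0
        return hinge[selected].sum() + np.sum(~selected), selected

    margins = np.array([2.0, 0.4, -0.3, -2.5])     # two correct, two wrong
    bound, selected = selective_hinge_bound(margins)
    print(bound, selected)   # 2.6, vs. 0-1 loss 2 and summed hinge 5.4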
""" 572,"""Dimensional Reweighting Graph Convolution Networks""","['graph convolutional networks', 'representation learning', 'mean field theory', 'variance reduction', 'node classification']","""In this paper, we propose a method named Dimensional reweighting Graph Convolutional Networks (DrGCNs), to tackle the problem of variance between dimensional information in the node representations of GCNs. We prove that DrGCNs can reduce the variance of the node representations by connecting our problem to the theory of the mean field. However, practically, we find that the degrees DrGCNs help vary severely on different datasets. We revisit the problem and develop a new measure K to quantify the effect. This measure guides when we should use dimensional reweighting in GCNs and how much it can help. Moreover, it offers insights to explain the improvement obtained by the proposed DrGCNs. The dimensional reweighting block is light-weighted and highly flexible to be built on most of the GCN variants. Carefully designed experiments, including several fixes on duplicates, information leaks, and wrong labels of the well-known node classification benchmark datasets, demonstrate the superior performances of DrGCNs over the existing state-of-the-art approaches. Significant improvements can also be observed on a large scale industrial dataset.""","""As Reviewer 2 pointed out in his/her response to the authors' rebuttal, this paper (at least in current state) has significant shortcomings that need to be addressed before this paper merits acceptance.""" 573,"""Learning Compact Embedding Layers via Differentiable Product Quantization""","['efficient modeling', 'compact embedding', 'embedding table compression', 'differentiable product quantization']","""Embedding layers are commonly used to map discrete symbols into continuous embedding vectors that reflect their semantic meanings. Despite their effectiveness, the number of parameters in an embedding layer increases linearly with the number of symbols and poses a critical challenge on memory and storage constraints. In this work, we propose a generic and end-to-end learnable compression framework termed differentiable product quantization (DPQ). We present two instantiations of DPQ that leverage different approximation techniques to enable differentiability in end-to-end learning. Our method can readily serve as a drop-in alternative for any existing embedding layer. Empirically, DPQ offers significant compression ratios (14-238x) at negligible or no performance cost on 10 datasets across three different language tasks.""","""The presented paper gives a differentiable product quantization framework to compress embedding and support the claim by experiments (the supporting materials are as large as the paper itself). Reviewers agreed that the idea is simple is interesting, and also nice and positive discussion appeared. However, the main limiting factor is the small novelty over Chen 2018b, and I agree with that. Also, the comparison with low rank is rather formal: of course it would be of full rank , as the authors claim in the answer, but looking at singular values is needed to make this claim. Also, one can use low-rank tensor factorization to compress embeddings, and this can be compared. 
To summarize, I think the contribution is not sufficient for acceptance.""" 574,"""Simple is Better: Training an End-to-end Contract Bridge Bidding Agent without Human Knowledge""","['Contract Bridge', 'Bidding', 'Selfplay', 'AlphaZero']","""Contract bridge is a multi-player imperfect-information game in which one partnership collaborates to compete against the other partnership. The game consists of two phases: bidding and playing. While playing is relatively easy for modern software, bidding is challenging and requires agents to learn a communication protocol to jointly reach the optimal contract, given their own private information. The agents need to exchange information with their partners, and interfere with opponents, through a sequence of actions. In this work, we train a strong agent to bid competitive bridge purely through self-play, outperforming WBridge5, a championship-winning software. Furthermore, we show that explicitly modeling belief is not necessary for boosting performance. To our knowledge, this is the first competitive bridge agent trained with no domain knowledge. It outperforms the previous state-of-the-art, which uses human replays, with 70x fewer parameters.""","""This paper proposes a new training method for an end-to-end contract bridge bidding agent. Reviewers R2 and R3 raised concerns regarding limited novelty and experimental results that are not convincing. R2's main objection is that the paper has ""strong SOTA performance with a simple model, but the empirical study is rather shallow."" Based on their recommendations, I recommend rejecting this paper.""" 575,"""Exploration in Reinforcement Learning with Deep Covering Options""","['Reinforcement learning', 'temporal abstraction', 'exploration']","""While many option discovery methods have been proposed to accelerate exploration in reinforcement learning, they are often heuristic. Recently, covering options were proposed to discover a set of options that provably reduce the upper bound of the environment's cover time, a measure of the difficulty of exploration. Covering options are computed using the eigenvectors of the graph Laplacian, but they are constrained to tabular tasks and are not applicable to tasks with large or continuous state spaces. We introduce deep covering options, an online method that extends covering options to large state spaces, automatically discovering task-agnostic options that encourage exploration. We evaluate our method on several challenging sparse-reward domains and show that our approach identifies less explored regions of the state space and successfully generates options to visit these regions, substantially improving both the exploration and the total accumulated reward.""","""This paper considers option discovery in hierarchical reinforcement learning. It extends the idea of covering options, which uses the Laplacian of the state space to discover a set of options that reduce the upper bound of the environment's cover time, to continuous and large state spaces. An online method is also included, and evaluated on several domains. The reviewers had major questions on a number of aspects of the paper, including the novelty of the work, which seemed limited; the quantitative results in the Atari environments; and problems with comparisons to other exploration methods.
These were all appropriately dealt with in the rebuttals, leaving this paper worthy of acceptance.""" 576,"""Concise Multi-head Attention Models""","['Transformers', 'Attention', 'Multihead', 'expressive power', 'embedding size']","""Attention-based Transformer architectures have enabled significant advances in the field of natural language processing. In addition to new pre-training techniques, recent improvements crucially rely on working with a relatively large embedding dimension for tokens. This leads to models that are prohibitively large to be employed in downstream tasks. In this paper we identify one of the important factors contributing to the large embedding size requirement. In particular, our analysis highlights that the scaling between the number of heads and the size of each head in existing architectures gives rise to this limitation, which we further validate with our experiments. As a solution, we propose a new way to set the projection size in attention heads that allows us to train models with a relatively smaller embedding dimension, without sacrificing the performance.""","""This paper studies tradeoffs in the design of attention-based architectures. It argues and formally establishes that the expressivity of an attention head is determined by its dimension and that, fixing the head dimension, one gains additional expressive power by using more heads. Reviewers were generally positive about the question under study here, but raised important concerns about the significance of the results and the take-home message in the current manuscript. The AC shares these concerns, and recommends rejection, while encouraging the authors to address the concerns raised during this discussion.""" 577,"""A Hierarchy of Graph Neural Networks Based on Learnable Local Features""","['Graph Neural Networks', 'Hierarchy', 'Weisfeiler-Lehman', 'Discriminative Power']","""Graph neural networks (GNNs) are a powerful tool to learn representations on graphs by iteratively aggregating features from node neighbourhoods. Many variant models have been proposed, but there is limited understanding of both how to compare different architectures and how to construct GNNs systematically. Here, we propose a hierarchy of GNNs based on their aggregation regions. We derive theoretical results about the discriminative power and feature representation capabilities of each class. Then, we show how this framework can be utilized to systematically construct arbitrarily powerful GNNs. As an example, we construct a simple architecture that exceeds the expressiveness of the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theory on both synthetic and real-world benchmarks, and demonstrate that our example's theoretical power translates to state-of-the-art results on node classification, graph classification, and graph regression tasks.""","""This paper proposes a modification to GCNs that generalizes the aggregation step to multiple levels of neighbors, so that, in theory, the new class of models has better discriminative power. The main criticism raised is that there is a lack of sufficient evidence distinguishing this work's theoretical contribution from that of Xu et al. Two reviewers also pointed out concerns around the experimental results and suggested including more recent state-of-the-art (SOTA) results.
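The head-size observation in the multi-head attention paper above is easy to make concrete: conventional Transformers tie the per-head projection size to embedding_dim / num_heads, whereas the paper argues for setting it independently. A small parameter-count sketch under the usual Q/K/V-plus-output-projection accounting (the specific sizes are illustrative):

    def attn_params(d_model, n_heads, d_head):
        # Q, K, V projections (d_model -> d_head) per head, plus the output
        # projection mapping the n_heads * d_head concatenation back.
        return 3 * n_heads * d_model * d_head + n_heads * d_head * d_model

    d_model = 512
    tied = attn_params(d_model, 8, d_model // 8)   # conventional coupling: d_head = 64
    decoupled = attn_params(d_model, 8, 96)        # fixed head size, chosen freely
    print(tied, decoupled)  # 1048576 vs. 1572864: more per-head capacity
                            # without growing the embedding dimension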
While the authors disagree that the contributions of their work are incremental, the reviewers' concerns are representative of this paper's general readership, who may likewise read this work as incremental. We highly encourage the authors to take another cycle of edits to better distinguish their work from prior work before future submissions.""" 578,"""DYNAMIC SELF-TRAINING FRAMEWORK FOR GRAPH CONVOLUTIONAL NETWORKS""","['self-training', 'semi-supervised learning', 'graph convolutional networks']","""Graph neural networks (GNNs) such as GCN, GAT, and MoNet have achieved state-of-the-art results on semi-supervised learning on graphs. However, when the number of labeled nodes is very small, the performance of GNNs degrades dramatically. Self-training has proved to be effective for resolving this issue; however, the performance of self-trained GCNs is still inferior to that of G2G and DGI in many settings. Moreover, the additional model complexity makes it more difficult to tune the hyper-parameters and do model selection. We argue that the power of self-training is still not fully explored for the node classification task. In this paper, we propose a unified end-to-end self-training framework called \emph{Dynamic Self-training}, which generalizes and simplifies prior work. A simple instantiation of the framework based on GCN is provided, and empirical results show that our framework outperforms all previous methods, including GNNs, embedding-based methods, and self-trained GCNs, by a noticeable margin. Moreover, compared with standard self-training, hyper-parameter tuning for our framework is easier.""","""The paper develops a self-training framework for graph convolutional networks in the setting of partially labeled graphs with a limited number of labeled nodes. The reviewers found the paper interesting. One reviewer notes the ability to better exploit available information and raised questions about computational cost. Another reviewer felt the difference from previous work was limited, but that the good results speak for themselves. The final reviewer raised concerns about novelty and the limited improvement in results. The authors provided detailed responses to these queries, providing additional results. The paper has improved over the course of the review, but due to a large number of stronger papers, was not accepted at this time.""" 579,"""NPTC-net: Narrow-Band Parallel Transport Convolutional Neural Network on Point Clouds""","['geometric convolution', 'point cloud', 'parallel transport']","""Convolution plays a crucial role in various applications in signal and image processing, analysis, and recognition. It is also the main building block of convolutional neural networks (CNNs). Designing appropriate convolutional neural networks on manifold-structured point clouds can inherit and empower recent advances of CNNs for analyzing and processing point cloud data. However, one of the major challenges is to define a proper way to ""sweep"" filters through the point cloud as a natural generalization of the planar convolution and to reflect the point cloud's geometry at the same time. In this paper, we consider generalizing convolution by adapting parallel transport on the point cloud. Inspired by a triangulated-surface-based method \cite{DBLP:journals/corr/abs-1805-07857}, we propose the Narrow-Band Parallel Transport Convolution (NPTC) using a specifically defined connection on a voxelized narrow-band approximation of point cloud data.
With that, we further propose a deep convolutional neural network based on NPTC (called NPTC-net) for point cloud classification and segmentation. Comprehensive experiments show that the proposed NPTC-net achieves similar or better results than current state-of-the-art methods on point cloud classification and segmentation.""","""All the reviewers recommend rejecting the paper. There is no basis for acceptance.""" 580,"""A Causal View on Robustness of Neural Networks""","['Neural Network Robustness', 'Variational autoencoder (VAE)', 'Causality', 'Deep generative model']","""We present a causal view on the robustness of neural networks against input manipulations, which applies not only to traditional classification tasks but also to general measurement data. Based on this view, we design a deep causal manipulation augmented model (deep CAMA) which explicitly models the manipulations of data as a cause of the observed effect variables. We further develop data augmentation and test-time fine-tuning methods to improve deep CAMA's robustness. When compared with discriminative deep neural networks, our proposed model shows superior robustness against unseen manipulations. As a by-product, our model achieves a disentangled representation which separates the representation of manipulations from those of other latent causes.""","""This paper attempts to present a causal view of robustness in classifiers, which is a very important area of research. However, the connection to causality in the presented model is very thin and, in fact, mathematically unnecessary. Interventions are only applied to root nodes (as pointed out by R4), so they just amount to standard conditioning on the variable ""M"". The experimental results could be obtained without any mention of causal interventions.""" 581,"""A Probabilistic Formulation of Unsupervised Text Style Transfer""","['unsupervised text style transfer', 'deep latent sequence model']","""We present a deep generative model for unsupervised text style transfer that unifies previously proposed non-generative techniques. Our probabilistic approach models non-parallel data from two domains as a partially observed parallel corpus. By hypothesizing a parallel latent sequence that generates each observed sequence, our model learns to transform sequences from one domain to another in a completely unsupervised fashion. In contrast with traditional generative sequence models (e.g., the HMM), our model makes few assumptions about the data it generates: it uses a recurrent language model as a prior and an encoder-decoder as a transduction distribution. While computation of the marginal data likelihood is intractable in this model class, we show that amortized variational inference admits a practical surrogate. Further, by drawing connections between our variational objective and other recent unsupervised style transfer and machine translation techniques, we show how our probabilistic view can unify some known non-generative objectives such as backtranslation and adversarial loss. Finally, we demonstrate the effectiveness of our method on a wide range of unsupervised style transfer tasks, including sentiment transfer, formality transfer, word decipherment, author imitation, and related-language translation. Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes.
Further, we conduct experiments on a standard unsupervised machine translation task and find that our unified approach matches the current state-of-the-art.""","""This paper proposes an unsupervised text style transfer model which combines a language model prior with an encoder-decoder transducer. The authors use a deep generative model which hypothesises a latent sequence that generates the observed sequences. It is trained on non-parallel data, and they report good results on unsupervised sentiment transfer, formality transfer, word decipherment, author imitation, and machine translation. The authors responded in depth to reviewer comments, and the reviewers took this into consideration. This is a well-written paper with an elegant model, and I would like to see it accepted at ICLR.""" 582,"""Novelty Search in representational space for sample efficient exploration""","['Reinforcement Learning', 'Exploration']","""We present a new approach for efficient exploration which leverages a low-dimensional encoding of the environment learned with a combination of model-based and model-free objectives. Our approach uses intrinsic rewards that are based on a weighted distance to nearest neighbors in the low-dimensional representational space to gauge novelty. We then leverage these intrinsic rewards for sample-efficient exploration with planning routines in representational space. One key element of our approach is that we perform more gradient steps in between environment steps in order to ensure model accuracy. We test our approach on a number of maze tasks, as well as a control problem, and show that our exploration approach is more sample-efficient compared to strong baselines.""","""The two most experienced reviewers recommended the paper be rejected. The submission lacks technical depth, which calls the significance of the contribution into question. This work would be greatly strengthened by a theoretical justification of the proposed approach. The reviewers also criticized the quality of the exposition, noting that key parts of the presentation were unclear. The experimental evaluation was not considered to be sufficiently convincing. The review comments should be able to help the authors strengthen this work.""" 583,"""Finding and Visualizing Weaknesses of Deep Reinforcement Learning Agents""","['Visualization', 'Reinforcement Learning', 'Safety']","""As deep reinforcement learning driven by visual perception becomes more widely used, there is a growing need to better understand and probe the learned agents. Understanding the decision-making process and its relationship to visual inputs can be very valuable for identifying problems in learned behavior. However, this topic has been relatively under-explored in the research community. In this work we present a method for synthesizing visual inputs of interest for a trained agent. Such inputs or states could be situations in which specific actions are necessary. Further, critical states in which a very high or a very low reward can be achieved are often interesting for understanding the situational awareness of the system, as they can correspond to risky states. To this end, we learn a generative model over the state space of the environment and use its latent space to optimize a target function for the state of interest. In our experiments we show that this method can generate insights for a variety of environments and reinforcement learning methods.
We explore results in the standard Atari benchmark games as well as in an autonomous driving simulator. Based on the efficiency with which we have been able to identify behavioural weaknesses with this technique, we believe this general approach could serve as an important tool for AI safety applications.""","""This paper proposes a tool for visualizing the behaviour of deep RL agents, for example to observe the behaviour of an agent in critical scenarios. The idea is to learn a generative model of the environment and use it to artificially generate novel states in order to induce specific agent actions. States can then be generated so as to optimize a given target function, for example states where the agent takes a specific action or states with very high or very low reward. They evaluate the proposed visualization on Atari games and on a driving simulation environment, where the authors use their approach to investigate the behaviour of different deep RL agents such as DQN. The paper is very controversial. On the one hand, as far as we know, this is the first approach that explicitly generates states that are meant to induce specific agent behaviour, although one could relate this to adversarial sample generation. Interpretability in deep RL is a known problem and this work could bring an interesting tool to the community. However, the proposed approach lacks theoretical foundations, and thus feels quite ad hoc, and results are limited to a qualitative, visual evaluation. At the same time, one could say that the approach is no more ad hoc than other gradient saliency visualization approaches, and one could argue that the lack of theoretical soundness is due to the difficulty of defining good measures of interpretability that apply well to image-based environments. Nonetheless, this paper is a step in the right direction in a field that could really benefit from it.""" 584,"""Prestopping: How Does Early Stopping Help Generalization Against Label Noise?""","['noisy label', 'label noise', 'robustness', 'deep learning', 'early stopping']","""Noisy labels are very common in real-world training data and lead to poor generalization on test data because of overfitting to the noisy labels. In this paper, we claim that such overfitting can be avoided by ""early stopping"" the training of a deep neural network before the noisy labels are severely memorized. We then resume training the early-stopped network using a ""maximal safe set,"" which maintains a collection of almost certainly true-labeled samples at each epoch since the early stop point. Putting them all together, our novel two-phase training method, called Prestopping, realizes noise-free training under any type of label noise for practical use. Extensive experiments using four image benchmark datasets verify that our method significantly outperforms four state-of-the-art methods in test error by 0.4-8.2 percentage points in the presence of real-world noise.""","""This paper focuses on avoiding overfitting in the presence of noisy labels. The authors develop a two-phase method called Prestopping based on a combination of early stopping and a maximal safe set. The reviewers raised some concern about illustrating the maximal safe set for all datasets and suggested comparisons with more baselines. The reviewers also indicated that the paper is missing key relevant publications. In the response, the authors have done a rather thorough job of addressing the reviewers' comments. I thank them for this.
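The two-phase Prestopping recipe above (early-stop, then resume training on a 'maximal safe set') can be sketched schematically. Here the safe set is approximated as the samples whose predictions agree with their labels at every epoch since the stop point; this selection rule and the toy prediction history are illustrative assumptions, not the paper's exact construction.

    import numpy as np

    def safe_set(preds_per_epoch, labels, stop_epoch):
        # Keep samples whose prediction history since the early-stop point
        # always agrees with their label: "almost certainly" clean samples.
        history = np.array(preds_per_epoch[stop_epoch:])
        safe = (history == labels).all(axis=0)
        return np.where(safe)[0]

    labels = np.array([0, 1, 1, 0, 1])
    preds_per_epoch = [[0, 1, 1, 1, 1],   # epoch 0
                       [0, 1, 1, 0, 0],   # epoch 1 (early-stop point)
                       [0, 1, 1, 0, 1]]   # epoch 2
    print(safe_set(preds_per_epoch, labels, stop_epoch=1))  # -> [0 1 2 3]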
However, given the limited time, some of the reviewers' comments regarding adding new baselines could not be addressed. As a result, I cannot recommend acceptance, because I think this is key to making a proper assessment. That said, I think this is an interesting method with good potential if it can outperform other baselines, and I would recommend that the authors revise and resubmit to a future venue.""" 585,"""Meta Learning via Learned Loss""","['Meta Learning', 'Reinforcement Learning', 'Loss Learning']","""We present a meta-learning method for learning parametric loss functions that can generalize across different tasks and model architectures. We develop a pipeline for training such loss functions, targeted at maximizing the performance of model learning with them. We observe that the loss landscape produced by our learned losses significantly improves upon the original task-specific losses in both supervised and reinforcement learning tasks. Furthermore, we show that our meta-learning framework is flexible enough to incorporate additional information at meta-train time. This information shapes the learned loss function such that the environment does not need to provide this information during meta-test time.""","""Despite the new ideas in this paper, reviewers feel that it needs to be revised for clarity, and that the experimental results are not convincing. I have down-weighted the criticisms of Reviewer 2 because I agree with the authors' rebuttal. However, there is still not enough support among the remaining reviews to justify acceptance.""" 586,"""On Concept-Based Explanations in Deep Neural Networks""","['concept-based explanations', 'interpretability']","""Deep neural networks (DNNs) build high-level intelligence on low-level raw features. Understanding this high-level intelligence can be enabled by deciphering the concepts networks base their decisions on, much as in human thinking. In this paper, we study concept-based explainability for DNNs in a systematic framework. First, we define the notion of completeness, which quantifies how sufficient a particular set of concepts is in explaining a model's prediction behavior. Based on performance and variability motivations, we propose two definitions to quantify completeness. We show that under degenerate conditions, our method is equivalent to Principal Component Analysis. Next, we propose a concept discovery method that considers two additional constraints to encourage the interpretability of the discovered concepts. We use game-theoretic notions to aggregate over sets to define an importance score for each discovered concept, which we call \emph{ConceptSHAP}. On specifically-designed synthetic datasets and real-world text and image datasets, we validate the effectiveness of our framework in finding concepts that are complete in explaining the decision, and interpretable.""","""This paper introduces an unsupervised concept learning and explanation algorithm, as well as a concept of ""completeness"" for evaluating representations in an unsupervised way. There are several valuable contributions here, and the paper improved substantially after the rebuttal. It would not be unreasonable to accept this paper. But after extensive post-review discussion, we decided that the completeness idea was the most valuable contribution, but that it was insufficiently investigated.
To quote R3, who I agree with: ""I think the paper could be strengthened considerably with a rewrite that focuses first on a shortcoming of existing methods in finding complete solutions. I also think their explanations for why PCA is not complete are somewhat speculative and I expect that studying the completeness of activation spaces in invertible networks would lead to some relevant insights"" """ 587,"""Four Things Everyone Should Know to Improve Batch Normalization""",['batch normalization'],"""A key component of most neural network architectures is the use of normalization layers, such as Batch Normalization. Despite its common use and large utility in optimizing deep architectures, it has been challenging both to generically improve upon Batch Normalization and to understand the circumstances that lend themselves to other enhancements. In this paper, we identify four improvements to the generic form of Batch Normalization and the circumstances under which they work, yielding performance gains across all batch sizes while requiring no additional computation during training. These contributions include proposing a method for reasoning about the current example in inference normalization statistics, fixing a training vs. inference discrepancy; recognizing and validating the powerful regularization effect of Ghost Batch Normalization for small and medium batch sizes; examining the effect of weight decay regularization on the scaling and shifting parameters \gamma and \beta; and identifying a new normalization algorithm for very small batch sizes by combining the strengths of Batch and Group Normalization. We validate our results empirically on six datasets: CIFAR-100, SVHN, Caltech-256, Oxford Flowers-102, CUB-2011, and ImageNet.""","""This paper proposes techniques to improve training with batch normalization. The paper establishes the benefits of these techniques experimentally using ablation studies. The reviewers found the results to be promising and of interest to the community. However, this paper is borderline, in part due to the writing (notation issues) and because it does not discuss related work enough. We encourage the authors to properly address these issues before the camera ready.""" 588,"""Weakly Supervised Clustering by Exploiting Unique Class Count""","['weakly supervised clustering', 'weakly supervised learning', 'multiple instance learning']","""A weakly supervised learning based clustering framework is proposed in this paper. As the core of this framework, we introduce a novel multiple instance learning task based on a bag-level label called unique class count (ucc), which is the number of unique classes among all instances inside the bag. In this task, no annotations on individual instances inside the bag are needed during training of the models. We mathematically prove that with a perfect ucc classifier, perfect clustering of individual instances inside the bags is possible even when no annotations on individual instances are given during training. We have constructed a neural network based ucc classifier and experimentally shown that the clustering performance of our framework with our weakly supervised ucc classifier is comparable to that of fully supervised learning models where labels for all instances are known.
Furthermore, we have tested the applicability of our framework to a real-world task of semantic segmentation of breast cancer metastases in histological lymph node sections and shown that the performance of our weakly supervised framework is comparable to the performance of a fully supervised U-Net model.""","""The paper proposes a weakly supervised learning algorithm, motivated by its application to histopathology. Similar to the multiple instance learning scenario, labels are provided for bags of instances. However, instead of a single (binary) label per bag, the paper introduces a setting where the training algorithm is provided with the number of classes in the bag (but not which ones). Careful empirical experiments on semantic segmentation of histopathology data, as well as simulated labelling from MNIST and CIFAR, demonstrate the usefulness of the method. The proposed approach is similar in spirit to works such as learning from label proportions and UU learning (both of which solve classification tasks). pseudo-url pseudo-url The reviews are widely spread, with a low-confidence reviewer rating (1). However, it seems that the high-confidence reviewers are also providing higher scores and better comments. The authors addressed many of the reviewer comments, and sought clarification for certain points, but the reviewers did not engage further during the discussion period. This paper provides a novel weakly supervised learning setting, motivated by a real-world semantic segmentation task, and provides an algorithm to learn from only the number of classes per bag, which is demonstrated to work in empirical experiments. It is a good addition to the ICLR program.""" 589,"""SCALOR: Generative World Models with Scalable Object Representations""",[],"""Scalability in terms of object density in a scene is a primary challenge in unsupervised sequential object-oriented representation learning. Most of the previous models have been shown to work only on scenes with a few objects. In this paper, we propose SCALOR, a probabilistic generative world model for learning SCALable Object-oriented Representation of a video. With the proposed spatially parallel attention and proposal-rejection mechanisms, SCALOR can deal with orders of magnitude larger numbers of objects compared to the previous state-of-the-art models. Additionally, we introduce a background module that allows SCALOR to model complex dynamic backgrounds as well as many foreground objects in the scene. We demonstrate that SCALOR can deal with crowded scenes containing up to a hundred objects while jointly modeling complex dynamic backgrounds. Importantly, SCALOR is the first unsupervised object representation model shown to work for natural scenes containing several tens of moving objects.""","""After the author response and paper revision, the reviewers all came to appreciate this paper and unanimously recommended it be accepted. The paper makes a nice contribution to generative modelling of object-oriented representations with large numbers of objects. The authors adequately addressed the main reviewer concerns with their detailed rebuttal and revision.""" 590,"""Deep Symbolic Superoptimization Without Human Knowledge""",[],"""Deep symbolic superoptimization refers to the task of applying deep learning methods to simplify symbolic expressions.
Existing approaches either perform supervised training on human-constructed datasets that define equivalent expression pairs, or apply reinforcement learning with human-defined equivalent transformation actions. In short, almost all existing methods rely on human knowledge to define equivalence, which suffers from large labeling cost and learning bias, because it is almost impossible to define a comprehensive equivalent set. We thus propose HISS, a reinforcement learning framework for symbolic superoptimization that keeps humans out of the loop. HISS introduces a tree-LSTM encoder-decoder network with attention to ensure tractable learning. Our experiments show that HISS can discover more simplification rules than existing human-dependent methods, and can learn meaningful embeddings for symbolic expressions, which are indicative of equivalence.""","""This work introduces a neural architecture and corresponding method for simplifying symbolic equations, which can be trained without requiring human input. This is an area somewhat outside most of our expertise, but the general consensus is that the paper is interesting and is an advance. The reviewers' concerns have been mostly resolved by the rebuttal, so I am recommending acceptance. """ 591,"""Are there any 'object detectors' in the hidden layers of CNNs trained to identify objects or scenes?""","['neural networks', 'localist coding', 'selectivity', 'object detectors', 'CCMAS', 'CNNs', 'activation maximisation', 'information representation', 'network dissection', 'interpretability', 'signal detection']","""Various methods of measuring unit selectivity have been developed with the aim of better understanding how neural networks work. But the different measures provide divergent estimates of selectivity, and this has led to different conclusions regarding the conditions in which selective object representations are learned and the functional relevance of these representations. In an attempt to better characterize object selectivity, we undertake a comparison of various selectivity measures on a large set of units in AlexNet, including localist selectivity, precision, class-conditional mean activity selectivity (CCMAS), network dissection, the human interpretation of activation maximization (AM) images, and standard signal-detection measures. We find that the different measures provide different estimates of object selectivity, with precision and CCMAS measures providing misleadingly high estimates. Indeed, the most selective units had a poor hit-rate or a high false-alarm rate (or both) in object classification, making them poor object detectors. We fail to find any units that are even remotely as selective as the 'grandmother cell' units reported in recurrent neural networks. In order to generalize these results, we compared selectivity measures on a few units in VGG-16 and GoogLeNet trained on the ImageNet or Places-365 datasets that have been described as 'object detectors'. Again, we find poor hit-rates and high false-alarm rates for object classification. ""","""This paper conducted a number of empirical studies to determine whether units in object-classification CNNs can be used as object detectors. The claimed conclusion is that there are no units sufficiently powerful to be considered object detectors. The three reviewers have split opinions. While reviewer #1 is positive about this work, the review is quite brief. In contrast, Reviewers #2 and #3 both rate weak reject, with similar major concerns.
That is, the conclusion seems inconclusive and not surprising either. What would be the contribution of this type of conclusion to the ICLR community? In particular, Reviewer #2 provided detailed and well-elaborated comments. The authors made efforts to respond to all reviewers' comments. However, the major concerns remain, and the ratings were not changed. The ACs concur with the major concerns and agree that the paper cannot be accepted in its current state.""" 592,"""Reanalysis of Variance Reduced Temporal Difference Learning""","['Reinforcement Learning', 'TD learning', 'Markovian sample', 'Variance Reduction']","""Temporal difference (TD) learning is a popular algorithm for policy evaluation in reinforcement learning, but vanilla TD can substantially suffer from the inherent optimization variance. A variance reduced TD (VRTD) algorithm was proposed by \cite{korda2015td}, which applies the variance reduction technique directly to online TD learning with Markovian samples. In this work, we first point out the technical errors in the analysis of VRTD in \cite{korda2015td}, and then provide a mathematically solid analysis of the non-asymptotic convergence of VRTD and its variance reduction performance. We show that VRTD is guaranteed to converge to a neighborhood of the fixed-point solution of TD at a linear convergence rate. Furthermore, the variance error (for both i.i.d.\ and Markovian sampling) and the bias error (for Markovian sampling) of VRTD are significantly reduced by the batch size of variance reduction in comparison to those of vanilla TD. As a result, the overall computational complexity of VRTD to attain a given accuracy outperforms that of TD under Markov sampling and outperforms that of TD under i.i.d.\ sampling for a sufficiently small condition number.""","""The paper studies the variance reduced TD algorithm by Korda and Prashanth (2015). The original paper provided a convergence analysis that had some technical issues. This paper provides a new convergence analysis, and shows the advantage of VRTD over vanilla TD in terms of reducing the bias and variance. Several of the five reviewers are experts in this area and all of them are positive about it. Therefore, I recommend acceptance of this work.""" 593,"""Selfish Emergent Communication""","['multi agent reinforcement learning', 'emergent communication', 'game theory']","""Current literature in machine learning holds that unaligned, self-interested agents do not learn to use an emergent communication channel. We introduce a new sender-receiver game to study emergent communication for this spectrum of partially-competitive scenarios and put special care into evaluation. We find that communication can indeed emerge in partially-competitive scenarios, and we discover three things that are tied to improving it. First, that selfish communication is proportional to cooperation, and it naturally occurs for situations that are more cooperative than competitive. Second, that stability and performance are improved by using LOLA (Foerster et al., 2018), especially in more competitive scenarios. And third, that discrete protocols lend themselves better to learning cooperative communication than continuous ones. ""","""There has been a long discussion on the paper, especially between the authors and the 2nd reviewer. While the authors' comments and paper modifications have improved the paper, the overall opinion is that it is below par in its current form.
The main issue is that the significance of the results is insufficiently clear. While the sender-receiver game introduced is interesting, a more thorough investigation would improve the paper a lot (for example, by examining whether theoretical statements can be made).""" 594,"""An Empirical and Comparative Analysis of Data Valuation with Scalable Algorithms""","['Data valuation', 'machine learning']","""This paper focuses on valuing training data for supervised learning tasks and studies the Shapley value, a data value notion that originated in cooperative game theory. The Shapley value defines a unique value distribution scheme that satisfies a set of appealing properties desired of a data value notion. However, the Shapley value requires exponential complexity to calculate exactly. Existing approximation algorithms, although achieving great improvement over the exact algorithm, rely on retraining models multiple times, and thus remain limited when applied to larger-scale learning tasks and real-world datasets. In this work, we develop a simple and efficient algorithm to estimate the Shapley value with complexity independent of the model size. The key idea is to approximate the model via a pseudo-formula-nearest neighbor (pseudo-formula NN) classifier, which has a locality structure that can lead to efficient Shapley value calculation. We evaluate the utility of the values produced by the pseudo-formula NN proxies in various settings, including label noise correction, watermark detection, data summarization, active data acquisition, and domain adaptation. Extensive experiments demonstrate that our algorithm achieves at least comparable utility to the values produced by existing algorithms while offering significant efficiency improvements. Moreover, we theoretically analyze the Shapley value and justify its advantage over the leave-one-out error as a data value measure.""","""There is insufficient support to recommend accepting this paper. The authors provided detailed responses to the reviewer comments, but the reviewers did not raise their evaluation of the significance and novelty of the contributions as a result. The feedback provided should help the authors improve their paper.""" 595,"""One-way prototypical networks""","['few-shot learning', 'one-shot learning', 'prototypical networks', 'one-class classification', 'anomaly detection', 'outlier detection', 'matching networks']","""Few-shot models have become a popular topic of research in recent years. They offer the possibility to determine class membership for unseen examples using just a handful of examples for each class. Such models are trained on a wide range of classes and their respective examples, learning a decision metric in the process. Types of few-shot models include matching networks and prototypical networks. We show a new way of training prototypical few-shot models for just a single class. These models have the ability to predict the likelihood of an unseen query belonging to a group of examples without any given counterexamples. The difficulty here lies in the fact that no relative distance to other classes can be calculated via softmax. We solve this problem by introducing a null class centered around zero, and enforcing centering with batch normalization. Trained on the commonly used Omniglot data set, we obtain a classification accuracy of .98 on the matched test set, and of .8 on unmatched MNIST data. On the more complex MiniImageNet data set, test accuracy is .8.
In addition, we propose a novel Gaussian layer for distance calculation in a prototypical network, which takes the support examples' distribution rather than just their centroid into account. This extension shows promising results when a higher number of support examples is available.""","""This paper extends prototypical networks to few-shot 1-way classification. The idea is to introduce a null class to compare against, with a null prototype. The reviewers found the idea sound and interesting. However, the response was mixed because the reviewers were not convinced of the significance of the improvements. Furthermore, there were questions raised about the motivation that were not sufficiently addressed in the rebuttal. Batch normalization layers will not necessarily lead to zero mean if the trainable offset is not disabled. The authors did not clarify whether they disable this offset. I encourage the authors to resubmit after addressing the issues raised by the reviewers.""" 596,"""ADAPTING PRETRAINED LANGUAGE MODELS FOR LONG DOCUMENT CLASSIFICATION""","['NLP', 'Deep Learning', 'Language Models', 'Long Document']","""Pretrained language models (LMs) have shown excellent results in achieving human-like performance on many language tasks. However, the most powerful LMs have one significant drawback: a fixed-sized input. With this constraint, these LMs are unable to utilize the full input of long documents. In this paper, we introduce a new framework to handle documents of arbitrary lengths. We investigate the addition of a recurrent mechanism to extend the input size and the use of attention to identify the most discriminating segment of the input. We perform extensive validation experiments on patent and Arxiv datasets, both of which have long text. We demonstrate that our method significantly outperforms state-of-the-art results reported in recent literature.""","""This paper investigates ways of using pretrained transformer models like BERT for classification tasks on documents that are longer than a standard transformer can feasibly encode. This seems like a reasonable research goal, and none of the reviewers raised any concerns that seriously questioned the claims of the paper. However, neither of the more confident reviewers was convinced by the experiments in the paper (even after some private discussion) that the methods presented here represent a useful contribution. This is not an area that I (the area chair) know well, but it seems as though there aren't any easy fixes to suggest: additional discussion of the choice of evaluation data (or new data), further ablations, and general refinement of the writing could help.""" 597,"""Meta-Learning with Network Pruning for Overfitting Reduction""","['Meta-Learning', 'Few-shot Learning', 'Network Pruning', 'Generalization Analysis']","""Meta-learning has achieved great success in few-shot learning. However, existing meta-learning models have been shown to overfit on meta-training tasks when using deeper and wider convolutional neural networks. This means that we cannot improve the meta-generalization performance by merely deepening or widening the networks. To remedy such meta-overfitting, we propose in this paper a sparsity-constrained meta-learning approach to learn from meta-training tasks a subnetwork from which first-order optimization methods can quickly converge towards the optimal network in meta-testing tasks.
Our theoretical analysis shows the benefit of sparsity for reducing the generalization gap of the learned meta-initialization network. We have implemented our approach on top of the widely applied Reptile algorithm, combined with various network pruning routines including Dense-Sparse-Dense (DSD) and Iterative Hard Thresholding (IHT). Extensive experimental results on benchmark datasets with different over-parameterized deep networks demonstrate that our method can not only effectively ease meta-overfitting but also in many cases improve the meta-generalization performance when applied to few-shot classification tasks.""","""This paper proposes a regularization scheme for reducing meta-overfitting. After the rebuttal period, the reviewers all still had concerns about the significance of the paper's contributions and the thoroughness of the empirical study. As such, this paper isn't ready for publication at ICLR. See the reviewers' comments for detailed feedback on how to improve the paper. """ 598,"""Training Data Distribution Search with Ensemble Active Learning""",[],"""Deep Neural Networks (DNNs) often rely on very large datasets for training. Given the large size of such datasets, it is conceivable that they contain certain samples that either do not contribute to, or negatively impact, the DNN's optimization. Modifying the training distribution in a way that excludes such samples could provide an effective solution to both improve performance and reduce training time. In this paper, we propose to scale up ensemble Active Learning methods to perform acquisition at a large scale (10k to 500k samples at a time). We do this with ensembles of hundreds of models, obtained at a minimal computational cost by reusing intermediate training checkpoints. This allows us to automatically and efficiently perform a training data distribution search for large labeled datasets. We observe that our approach obtains favorable subsets of training data, which can be used to train more accurate DNNs than training with the entire dataset. We perform an extensive experimental study of this phenomenon on three image classification benchmarks (CIFAR-10, CIFAR-100 and ImageNet), analyzing the impact of initialization schemes, acquisition functions and ensemble configurations. We demonstrate that data subsets identified with a lightweight ResNet-18 ensemble remain effective when used to train deep models like ResNet-101 and DenseNet-121. Our results provide strong empirical evidence that optimizing the training data distribution can provide significant benefits on large scale vision tasks.""","""This paper proposes an ensemble-based active learning approach to select a subset of training data that yields the same or better performance. The proposed method is rather heuristic and lacks the novel technical contribution that we expect at top ML conferences. No theoretical justification is provided to argue why the proposed method works. Additional studies are needed to convincingly demonstrate the benefit of the proposed method in terms of computational cost. """ 599,"""Linguistic Embeddings as a Common-Sense Knowledge Repository: Challenges and Opportunities""","['knowledge representation', 'word embeddings', 'sentence embeddings', 'common-sense knowledge']","""Many applications of linguistic embedding models rely on their value as pre-trained inputs for end-to-end tasks such as dialog modeling, machine translation, or question answering.
This position paper presents an alternate paradigm: rather than using learned embeddings as input features, we instead treat them as a common-sense knowledge repository that can be queried via simple mathematical operations within the embedding space. We show how linear offsets can be used to (a) identify an object given its description, (b) discover relations of an object given its label, and (c) map free-form text to a set of action primitives. Our experiments provide a valuable proof of concept that language-informed common-sense reasoning, or `reasoning in the linguistic domain', lies within the grasp of the research community. In order to attain this goal, however, we must reconsider the way neural embedding models are typically trained and evaluated. To that end, we also identify three empirically-motivated evaluation metrics for use in the training of future embedding models.""","""This paper presents an analysis of the kind of knowledge captured by pre-trained word embeddings. The authors show various kinds of properties, such as the relation between entities and their descriptions, the mapping of high-level commands to discrete commands, etc. The problem with the paper is that almost all of the properties shown in this work have already been established in existing literature. In fact, the methods presented here are the baseline algorithms for identifying the different properties presented in the paper. The term common-sense, which is used often in the paper, is mischaracterized. In NLP literature, common sense is something that is implicitly understood by humans but which is not really captured by language. For example, that going to a movie means you need parking is well understood by humans but is not implied by the language of going to the movie. The phenomenon described by the authors is general language processing. Towards the end, the evaluation criteria proposed for embeddings are also well-established concepts; it's just that these metrics are not yet part of the training mechanism. So if the contribution were to show how those metrics can be integrated into training the embeddings, that would be a great contribution. I agree with the reviewers' criticisms and recommend rejection as of now.""" 600,"""Set Functions for Time Series""","['Time Series', 'Set functions', 'Irregularly sampling', 'Medical Time series', 'Dynamical Systems', 'Time series classification']","""Despite the eminent successes of deep neural networks, many architectures are often hard to transfer to irregularly-sampled and asynchronous time series that occur in many real-world datasets, such as healthcare applications. This paper proposes a novel framework for classifying irregularly sampled time series with unaligned measurements, focusing on high scalability and data efficiency. Our method SeFT (Set Functions for Time Series) is based on recent advances in differentiable set function learning, is extremely parallelizable, and scales well to very large datasets and online monitoring scenarios. We extensively compare our method to competitors on multiple healthcare time series datasets and show that it performs competitively whilst significantly reducing runtime.""","""The paper investigates a new approach to the classification of irregularly sampled and unaligned multi-modal time series via set function mapping. Experimental results on healthcare datasets are reported to demonstrate the effectiveness of the proposed approach.
The idea of extending set functions to address missing values in time series is interesting and novel. The paper does a good job of motivating the method and describing the proposed solution. The authors did a good job of addressing the concerns of the reviewers. During the discussion, some reviewers were still concerned about the empirical results, which do not match well with published results (even though the authors provided an explanation for this). In addition, the proposed method is only tested on healthcare datasets, and the improvement is limited. Therefore it would be worthwhile investigating other time series datasets and, most importantly, answering the important question of which datasets/applications the proposed method works well on. The paper is one step away from being a strong publication. We hope the reviews can help improve the paper for a strong publication in the future. """ 601,"""CURSOR-BASED ADAPTIVE QUANTIZATION FOR DEEP NEURAL NETWORK""",[],"""Deep neural networks (DNNs) have rapidly found many applications in different scenarios. However, their large computational cost and memory consumption are barriers to computing-restrained applications. DNN model quantization is a widely used method to reduce the DNN storage and computation burden by decreasing the bit width. In this paper, we propose a novel cursor-based adaptive quantization method using differentiable architecture search (DAS). The multi-bit quantization mechanism is formulated as a DAS process with a continuous cursor that represents the possible quantization bit. The cursor-based DAS adaptively searches for the desired quantization bit for each layer. The DAS process can be solved via an alternative approximate optimization process, which is designed for the mixed quantization scheme of a DNN model. We further devise a new loss function in the search process to simultaneously optimize accuracy and parameter size of the model. In the quantization step, based on a new strategy, the two integers closest to the cursor are adopted as the bits to quantize the DNN together, to reduce the quantization noise and avoid the local convergence problem. Comprehensive experiments on benchmark datasets show that our cursor-based adaptive quantization approach achieves a new state of the art for multi-bit quantization and can efficiently obtain smaller models with comparable or even better classification accuracy.""","""This paper presents a method to compress DNNs by quantization. The core idea is to use NAS techniques to adaptively set quantization bits at each layer. The proposed method is shown to achieve good results on the standard benchmarks. In our final discussion, one reviewer agreed to raise their score from Reject to Weak Reject, but remained on the negative side. Another reviewer was not satisfied with the authors' rebuttal, particularly regarding the appropriateness of the training strategy and evaluation. Moreover, as reviewers pointed out, there were many unclear passages and explanations in the original manuscript. Although we acknowledge that the authors made a great effort to address the comments, the revision seems too major and would need to go through another complete round of peer review. As there was no strong opinion to push this paper, I'd like to recommend rejection.
""" 602,"""Global Concavity and Optimization in a Class of Dynamic Discrete Choice Models""","['Reinforcement learning', 'Policy Gradient', 'Global Concavity', 'Dynamic Discrete Choice Model']","""Discrete choice models with unobserved heterogeneity are commonly used Econometric models for dynamic Economic behavior which have been adopted in practice to predict behavior of individuals and firms from schooling and job choices to strategic decisions in market competition. These models feature optimizing agents who choose among a finite set of options in a sequence of periods and receive choice-specific payoffs that depend on both variables that are observed by the agent and recorded in the data and variables that are only observed by the agent but not recorded in the data. Existing work in Econometrics assumes that optimizing agents are fully rational and requires finding a functional fixed point to find the optimal policy. We show that in an important class of discrete choice models the value function is globally concave in the policy. That means that simple algorithms that do not require fixed point computation, such as the policy gradient algorithm, globally converge to the optimal policy. This finding can both be used to relax behavioral assumption regarding the optimizing agents and to facilitate Econometric analysis of dynamic behavior. In particular, we demonstrate significant computational advantages in using a simple implementation policy gradient algorithm over existing ""nested fixed point"" algorithms used in Econometrics.""","""The authors develop theoretical results showing that policy gradient methods converge to the globally optimal policy for a class of MDPs arising in econometrics. The authors show empirically that their methods perform on a standard benchmark. The paper contains interesting theoretical results. However, the reviewers were concerned about some aspects: 1) The paper does not explain to a general ML audience the significance of the models considered in the paper - where do these arise in practical applications? Further, the experiments are also limited to a small MDP - while this may be a standard benchmark in econometrics, it would be good to study the algorithm's scaling properties to larger models as is standard practice in RL. 2) The implications of the assumptions made in the paper are not explained clearly, nor are the relative improvements of the authors' work relative to prior work. In particular, one reviewer was concerned that the assumptions could be trivially satisfied and the authors' rebuttal did not clarify this sufficiently. Thus, I recommend rejection but am unsure since none of the reviewers nor I am an expert in this area.""" 603,"""Scaleable input gradient regularization for adversarial robustness""","['adversarial robustness', 'gradient regularization', 'robust certification', 'robustness bounds']","""In this work we revisit gradient regularization for adversarial robustness with some new ingredients. First, we derive new per-image theoretical robustness bounds based on local gradient information. These bounds strongly motivate input gradient regularization. Second, we implement a scaleable version of input gradient regularization which avoids double backpropagation: adversarially robust ImageNet models are trained in 33 hours on four consumer grade GPUs. Finally, we show experimentally and through theoretical certification that input gradient regularization is competitive with adversarial training. 
Moreover, we demonstrate that gradient regularization does not lead to gradient obfuscation or gradient masking.""","""(1) The authors emphasize the theoretical contribution and claim the bounds are tighter. However, they did not directly compare with any certified robustness methods, or previous bounds, to support the argument. HM, not sure, need to check this. (2) The empirical results look suboptimal. The authors did not convince me why they sampled 1000 images for testing on a small CIFAR-10 dataset. The proposed method is 10% less robust compared to Madry's in Table 1. Seems OK, I understand the authors' response. 1) The theoretical analysis is not terribly new; it is just a straightforward application of a first-order Taylor expansion. This idea can be traced back to the very first paper on adversarial examples, FGSM (Goodfellow et al., 2014). True. 2) The novelty of the paper is to replace the exact gradient (w.r.t. the input) with its finite difference and use it as a regularizer. However, there is a misalignment between the theory and the proposed algorithm. The theory only encourages input gradient regularization, regardless of how it is evaluated, and previous studies have shown that this is not a very effective way to improve robustness. According to the experiments, the main empirical improvement comes from the finite difference implementation, but the benefit of finite differences is not justified/discussed by the theory. Therefore, the empirical improvements are not supported by the theory. The authors have briefly responded to this issue in the discussion, but I believe a more rigorous analysis is needed. This seems okay based on the author response. 3) Moreover, the empirical performance does not achieve state-of-the-art results. Indeed, there is a non-negligible gap (12%) between the obtained performance and some well-known baselines. Thus the empirical contribution is also limited. Yea, for some cases.""" 604,"""Efficient Bi-Directional Verification of ReLU Networks via Quadratic Programming""",[],"""Neural networks are known to be sensitive to adversarial perturbations. To investigate this undesired behavior, we consider the problem of computing the distance to the decision boundary (DtDB) from a given sample for a deep NN classifier. In this work we present an iterative procedure where in each step we solve a convex quadratic programming (QP) task. Solving the single initial QP already results in a lower bound on the DtDB and can be used as a robustness certificate of the classifier around a given sample. In contrast to currently known approaches, our method also provides upper bounds, used as a measure of quality for the certificate. We show that our approach provides better or competitive results in comparison with a wide range of existing techniques.""","""This article is concerned with sensitivity to adversarial perturbations. It studies the computation of the distance to the decision boundary from a given sample in order to obtain robustness certificates, and presents an iterative procedure to this end. This is a very relevant line of investigation. The reviewers found that the approach is different from previous ones (even if related quadratic constraints had been formulated in previous works). However, they expressed concerns about the presentation, missing details or intuition for the upper bounds, and the small size of the networks that are tested. The reviewers also mentioned that the paper could be clearer about the strengths and weaknesses of the proposed algorithm.
The responses clarified a number of points from the initial reviews. However, some reviewers found that important aspects were still not addressed satisfactorily, specifically in relation to the justification of the approach to obtaining upper bounds (although they acknowledge that the strategy seems at least empirically validated), and reiterated concerns about the scalability of the approach. Overall, this article ranks as good, but not good enough. """ 605,"""State Alignment-based Imitation Learning""","['Imitation learning', 'Reinforcement Learning']","""Consider an imitation learning problem in which the imitator and the expert have different dynamics models. Most existing imitation learning methods fail in this setting because they focus on the imitation of actions. We propose a novel state alignment-based imitation learning method that trains the imitator to follow the state sequences in the expert demonstrations as closely as possible. The alignment of states comes from both local and global perspectives, and we combine them in a reinforcement learning framework via a regularized policy update objective. We show the superiority of our method in standard imitation learning settings as well as in the challenging settings in which the expert and the imitator have different dynamics models.""","""This paper seeks to adapt behavioural cloning to the case where demonstrator and learner have different dynamics (e.g. a human demonstrator), by designing a state-based objective. The reviewers agreed the paper makes an important and interesting contribution, but were somewhat divided about whether the experiments were sufficiently impactful. They furthermore had additional concerns regarding the clarity of the paper and the presentation of the method. Through discussion, it seems that these were sufficiently addressed that the consensus has moved towards agreeing that the paper sufficiently proves the concept to warrant publication (with one reviewer dissenting). I recommend acceptance, with the view that the authors should put a substantial amount of work into improving the presentation of the paper based on the feedback that has emerged from the discussion before the camera ready is submitted (if accepted).""" 606,"""Few-Shot One-Class Classification via Meta-Learning""","['meta-learning', 'few-shot learning', 'one-class classification', 'class-imbalance learning']","""Although few-shot learning and one-class classification have been separately well studied, their intersection remains rather unexplored. Our work addresses the few-shot one-class classification problem and presents a meta-learning approach that requires only a few data examples from only one class to adapt to unseen tasks. The proposed method builds upon the model-agnostic meta-learning (MAML) algorithm (Finn et al., 2017) and explicitly trains for few-shot class-imbalance learning, aiming to learn a model initialization that is particularly suited for learning one-class classification tasks after observing only a few examples of one class. Experimental results on datasets from the image domain and the time-series domain show that our model substantially outperforms the baselines, including MAML, and demonstrate the ability to learn new tasks from only a few majority-class samples.
Moreover, we successfully learn anomaly detectors for a real-world application involving sensor readings recorded during industrial manufacturing of workpieces with a CNC milling machine, using only a few examples from the normal class.""","""The authors present a combination of the few-shot learning and one-class classification problem settings. The authors use the existing MAML algorithm and build upon it to present a learning algorithm for the problem. As pointed out by the reviewers, the technical contributions of the paper are quite minimal, and after the author response period the reviewers have not changed their minds. However, the authors have significantly changed the paper from its initial submission, and as it stands it needs to be reviewed again. I recommend the authors resubmit their paper to another conference. As of now, I recommend rejection.""" 607,"""Learning to Guide Random Search""","['Random search', 'Derivative-free optimization', 'Learning continuous control']","""We are interested in derivative-free optimization of high-dimensional functions. The sample complexity of existing methods is high and depends on problem dimensionality, unlike the dimensionality-independent rates of first-order methods. The recent success of deep learning suggests that many datasets lie on low-dimensional manifolds that can be represented by deep nonlinear models. We therefore consider derivative-free optimization of a high-dimensional function that lies on a latent low-dimensional manifold. We develop an online learning approach that learns this manifold while performing the optimization. In other words, we jointly learn the manifold and optimize the function. Our analysis suggests that the presented method significantly reduces sample complexity. We empirically evaluate the method on continuous optimization benchmarks and high-dimensional continuous control problems. Our method achieves significantly lower sample complexity than Augmented Random Search, Bayesian optimization, covariance matrix adaptation (CMA-ES), and other derivative-free optimization algorithms.""","""This paper develops a methodology for performing global derivative-free optimization of high-dimensional functions through random search on a lower-dimensional manifold that is carefully learned with a neural network. In thorough experiments on reinforcement learning tasks and a real-world airfoil optimization task, the authors demonstrate the effectiveness of their method compared to strong baselines. The reviewers unanimously agreed that the paper was above the bar for acceptance, and thus the recommendation is to accept. An interesting direction for future work might be to combine this methodology with REMBO. REMBO seems competitive in the experiments (but maybe doesn't work as well early on, since the model needs to learn the manifold). Learning the low-dimensional manifold to optimize over and then performing a guided search through Bayesian optimization instead of a random strategy might get the best of both worlds.""" 608,"""Convolutional Tensor-Train LSTM for Long-Term Video Prediction""","['Tensor decomposition', 'Video prediction']","""Long-term video prediction is highly challenging since it entails simultaneously capturing spatial and temporal information across a long range of image frames. Standard recurrent models are ineffective since they are prone to error propagation and cannot effectively capture higher-order correlations.
A potential solution is to extend to higher-order spatio-temporal recurrent models. However, such models require a large number of parameters and operations, making them intractable to learn in practice and prone to overfitting. In this work, we propose convolutional tensor-train LSTM (Conv-TT-LSTM), which learns higher-order Convolutional LSTM (ConvLSTM) models efficiently using convolutional tensor-train decomposition (CTTD). Our proposed model naturally incorporates higher-order spatio-temporal information at a small cost in memory and computation by using efficient low-rank tensor representations. We evaluate our model on the Moving-MNIST and KTH datasets and show improvements over standard ConvLSTM and better/comparable results relative to other ConvLSTM-based approaches, but with far fewer parameters.""","""This paper proposes Conv-TT-LSTM for long-term video prediction. The proposed method saves memory and computation through low-rank tensor representations obtained via tensor decomposition, and is evaluated on the Moving MNIST and KTH datasets. All reviewers argue that the novelty of the paper does not meet the standard of ICLR. In the rebuttal, the authors polished the experimental design, which failed to change any reviewer's decision. Overall, the paper is not good enough for ICLR.""" 609,"""Agent as Scientist: Learning to Verify Hypotheses""",[],"""In this paper, we formulate hypothesis verification as a reinforcement learning problem. Specifically, we aim to build an agent that, given a hypothesis about the dynamics of the world, can take actions to generate observations which can help predict whether the hypothesis is true or false. Our first observation is that agents trained end-to-end with the reward fail to learn to solve this problem. In order to train the agents, we exploit the underlying structure in the majority of hypotheses -- they can be formulated as triplets (pre-condition, action sequence, post-condition). Once the agents have been pretrained to verify hypotheses with this structure, they can be fine-tuned to verify more general hypotheses. Our work takes a step towards a ``scientist agent'' that develops an understanding of the world by generating and testing hypotheses about its environment.""","""The authors propose an agent that can act in an RL environment to verify hypotheses about it, using hypotheses formulated as triplets of pre-condition, action sequence, and post-condition variables. Training then proceeds in multiple stages, including a pretraining phase using a reward function that encourages the agent to learn the hypothesis triplets. Strengths: Reviewers generally agreed it's an important problem and an interesting approach. Weaknesses: There were some points of convergence among reviewer comments: lack of connection to existing literature (i.e., to causal reasoning and POMDPs), and concerns about the robustness of the results (which reported only the max over seeds). Two reviewers also found that the use of natural language unnecessarily complicated the setup. Overall, clarity seemed to be an issue. Other comments concerned a lack of comparisons and analyses, and suggestions for alternate methods of rewarding the agent (to improve understandability). The authors deserve credit for their responsiveness to reviewer comments and for the considerable amount of additional work done in the rebuttal period. However, these efforts ultimately didn't satisfy the reviewers enough to change their scores.
Although I find that the additional experiments and revisions have significantly strengthened the paper, I don't believe it's currently ready for publication at ICLR. I urge the authors to focus on clearly presenting and integrating these new results in a future submission, which I look forward to. """ 610,"""A Group-Theoretic Framework for Knowledge Graph Embedding""","['group theory', 'knowledge graph embedding', 'representation learning']","""We have rigorously proved the existence of a group algebraic structure hidden in relational knowledge embedding problems, which suggests that a group-based embedding framework is essential for model design. Our theoretical analysis explores merely the intrinsic properties of the embedding problem itself, without introducing extra designs. Using the proposed framework, one could construct embedding models that naturally accommodate all possible local graph patterns, which are necessary for reproducing a complete graph from atomic knowledge triplets. We reconstruct many state-of-the-art models from the framework and re-interpret them as embeddings with different groups. Moreover, we also propose new instantiation models using simple continuous non-abelian groups.""","""This paper presents a rigorous mathematical framework for knowledge graph embedding. The paper received 3 reviews. R1 recommends Weak Reject based on concerns about the contributions of the paper; the authors, in their response, indicate that R1 may have been confused about what the contributions were meant to be. R2 initially recommended Reject, based on concerns that the paper was overselling its claims, and on the clarity and quality of the writing. After the author response, R2 raised their score to Weak Reject but still felt that their main concerns had gone unanswered, and in particular that the authors seemed unwilling to tone down their claims. R3 recommends Weak Reject, indicating that they found the paper difficult to follow, and gave some specific technical concerns. The authors, in their response, express confusion about R3's comments and suggest that R3 also did not understand the paper. However, in light of these unanimous Weak Reject reviews, we cannot recommend acceptance at this time. We understand that the authors may feel that some reviewers did not properly understand or appreciate the contribution, but all three reviewers are researchers working at highly-ranked institutions and thus are fairly representative of the attendees of ICLR; we hope that their points of confusion and concern, as reflected in their reviews, will help the authors to clarify a revision of the paper for another venue. """ 611,"""Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions""",[],"""Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images created to mislead a classifier. Recently, various kinds of adversarial attack methods have been proposed, most of which focus on adding small perturbations to input images. Despite the success of existing approaches, generating realistic adversarial images with small perturbations remains a challenging problem. In this paper, we aim to address this problem by proposing a novel adversarial method, which generates adversarial examples by imposing not only perturbations but also spatial distortions on input images, including scaling, rotation, shear, and translation.
As humans are less susceptible to small spatial distortions, the proposed approach can produce visually more realistic attacks with smaller perturbations, able to deceive classifiers without affecting human predictions. We train our method with amortized techniques using neural networks and generate adversarial examples efficiently via a forward pass of the networks. Extensive experiments on attacking different types of non-robustified classifiers and robust classifiers with defences show that our method achieves state-of-the-art performance in comparison with advanced attack methods.""","""The method proposed and explored here is to introduce small spatial distortions, with the goal of making them undetectable by humans while affecting the classification of the images. As the reviewers point out, very similar methods have been tested before. The methods are also only tested on a few low-resolution datasets. The reviewers are unanimous in their judgement that the method is not novel enough, and the authors' rebuttals have not convinced the reviewers or me of the opposite.""" 612,"""Negative Sampling in Variational Autoencoders""","['Variational Autoencoder', 'generative modelling', 'out-of-distribution detection']","""We propose negative sampling as an approach to improve the notoriously bad out-of-distribution likelihood estimates of Variational Autoencoder models. Our model pushes latent images of negative samples away from the prior. When the source of negative samples is an auxiliary dataset, such a model can vastly improve on baselines when evaluated on OOD detection tasks. Perhaps more surprisingly, we present a fully unsupervised variant that can also significantly improve detection performance: using the output of the generator as a source of negative samples results in a fully unsupervised model that can be interpreted as adversarially trained. ""","""This paper proposes to improve VAEs' modeling of out-of-distribution examples by pushing the latent representations of negative examples away from the prior. The general idea seems interesting, at least to some of the reviewers and to me. However, the paper seems premature, even after revision, as it leaves unclear some of the justification and analysis of the approach, especially in the fully unsupervised case. I think that with some more work it could be a very compelling contribution to a future conference.""" 613,"""DBA: Distributed Backdoor Attacks against Federated Learning""","['distributed backdoor attack', 'federated learning']","""Backdoor attacks aim to manipulate a subset of training data by injecting adversarial triggers such that machine learning models trained on the tampered dataset will make arbitrary (targeted) incorrect predictions on the test set when the same trigger is embedded. While federated learning (FL) is capable of aggregating information provided by different parties for training a better model, its distributed learning methodology and inherently heterogeneous data distribution across parties may bring new vulnerabilities. In addition to recent centralized backdoor attacks on FL, where each party embeds the same global trigger during training, we propose the distributed backdoor attack (DBA) --- a novel threat assessment framework developed by fully exploiting the distributed nature of FL. DBA decomposes a global trigger pattern into separate local patterns and embeds them into the training sets of different adversarial parties respectively.
Compared to standard centralized backdoors, we show that DBA is substantially more persistent and stealthy against FL on diverse datasets such as finance and image data. We conduct extensive experiments to show that the attack success rate of DBA is significantly higher than that of centralized backdoors under different settings. Moreover, we find that distributed attacks are indeed more insidious, as DBA can evade two state-of-the-art robust FL algorithms designed against centralized backdoors. We also provide explanations for the effectiveness of DBA via feature visual interpretation and feature importance ranking. To further explore the properties of DBA, we test the attack performance by varying different trigger factors, including local trigger variations (size, gap, and location), the scaling factor in FL, data distribution, and poison ratio and interval. Our proposed DBA and thorough evaluation results shed light on characterizing the robustness of FL.""","""Thanks for the discussion, all. This paper proposes an attack strategy against federated learning. Reviewers put this in the top tier, and the authors responded appropriately to their criticisms. """ 614,"""Relative Pixel Prediction For Autoregressive Image Generation""","['Image Generation', 'Autoregressive']","""In natural images, transitions between adjacent pixels tend to be smooth and gradual, a fact that has long been exploited in image compression models based on predictive coding. In contrast, existing neural autoregressive image generation models predict the absolute pixel intensities at each position, which is a more challenging problem. In this paper, we propose to predict pixels relatively, by predicting new pixels relative to previously generated pixels (or pixels from the conditioning context, when available). We show that this form of prediction fares favorably against its absolute counterpart when used independently, but that their coordination under a unified probabilistic model yields optimal performance, as the model learns to predict sharp transitions using the absolute predictor, while generating smooth transitions using the relative predictor. Experiments on multiple benchmarks for unconditional image generation, image colorization, and super-resolution indicate that our presented mechanism leads to improvements in terms of likelihood compared to the absolute prediction counterparts. ""","""All reviewers rated this submission as a weak reject and there was no author response. The AC recommends rejection.""" 615,"""InfoCNF: Efficient Conditional Continuous Normalizing Flow Using Adaptive Solvers""","['continuous normalizing flows', 'conditioning', 'adaptive solvers', 'gating networks']","""Continuous Normalizing Flows (CNFs) have emerged as promising deep generative models for a wide range of tasks thanks to their invertibility and exact likelihood estimation. However, conditioning CNFs on signals of interest for conditional image generation and downstream predictive tasks is inefficient due to the high-dimensional latent code generated by the model, which needs to be of the same size as the input data. In this paper, we propose InfoCNF, an efficient conditional CNF that partitions the latent space into a class-specific supervised code and an unsupervised code that is shared among all classes for efficient use of labeled information.
Since the partitioning strategy (slightly) increases the number of function evaluations (NFEs), InfoCNF also employs gating networks to learn the error tolerances of its ordinary differential equation (ODE) solvers for better speed and performance. We show empirically that InfoCNF improves the test accuracy over the baseline while yielding comparable likelihood scores and reducing the NFEs on CIFAR10. Furthermore, applying the same partitioning strategy in InfoCNF on time-series data helps improve extrapolation performance. ""","""This paper presents a conditional CNF based on the InfoGAN structure to improve ODE solvers. Reviewers appreciate that the approach shows improved performance over the baseline models. Reviewers all note, however, that this paper is weak in clearly defining the problem and explaining the approach and the results. While the authors have addressed some of the reviewers' concerns through their rebuttal, reviewers still remain concerned about the clarity of the paper. I thank the authors for submitting to ICLR and hope to see a revised paper at a future venue.""" 616,"""Random Bias Initialization Improving Binary Neural Network Training""","['Binarized Neural Network', 'Activation function', 'Initialization', 'Neural Network Acceleration']","""Edge intelligence, especially binary neural networks (BNNs), has attracted considerable attention from the artificial intelligence community recently. BNNs significantly reduce the computational cost, model size, and memory footprint. However, there is still a performance gap between the successful full-precision neural network with ReLU activation and BNNs. We argue that the accuracy drop of BNNs is due to their geometry. We analyze the behaviour of the full-precision neural network with ReLU activation and compare it with its binarized counterpart. This comparison suggests random bias initialization as a remedy to activation saturation in full-precision networks and leads us towards an improved BNN training. Our numerical experiments confirm our geometric intuition.""","""The article studies the behaviour of binary and full precision ReLU networks towards explaining differences in performance and suggests a random bias initialisation strategy. The reviewers agree that, while closing the gap between binary networks and full precision networks is an interesting problem, the article cannot be accepted in its current form. They point out that more extensive theoretical analysis and experiments would be important, as well as improving the writing. The authors did not provide a rebuttal or a revision. """ 617,"""A Random Matrix Perspective on Mixtures of Nonlinearities in High Dimensions""",[],"""One of the distinguishing characteristics of modern deep learning systems is that they typically employ neural network architectures that utilize enormous numbers of parameters, often in the millions and sometimes even in the billions. While this paradigm has inspired significant research on the properties of large networks, relatively little work has been devoted to the fact that these networks are often used to model large complex datasets, which may themselves contain millions or even billions of constraints. In this work, we focus on this high-dimensional regime in which both the dataset size and the number of features tend to infinity. 
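A minimal sketch of the latent partitioning in record 615 (InfoCNF), with illustrative sizes; the CNF itself and the gating networks for the solver tolerances are omitted:

import torch.nn as nn

class PartitionedLatent(nn.Module):
    # Split the flow's latent code into a small class-specific supervised part
    # and a larger unsupervised part shared among all classes.
    def __init__(self, sup_dim=64, n_classes=10):
        super().__init__()
        self.sup_dim = sup_dim
        self.classifier = nn.Linear(sup_dim, n_classes)

    def forward(self, z):  # z: (batch, latent_dim)
        z_sup, z_unsup = z[:, :self.sup_dim], z[:, self.sup_dim:]
        return self.classifier(z_sup), z_unsup  # supervise only z_sup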
We analyze the performance of a simple regression model trained on the random features pseudo-formula for a random weight matrix pseudo-formula and random bias vector pseudo-formula, obtaining an exact formula for the asymptotic training error on a noisy autoencoding task. The role of the bias can be understood as parameterizing a distribution over activation functions, and our analysis actually extends to general such distributions, even those not expressible with a traditional additive bias. Intriguingly, we find that a mixture of nonlinearities can outperform the best single nonlinearity on the noisy autoencoding task, suggesting that mixtures of nonlinearities might be useful for approximate kernel methods or neural network architecture design.""","""In this work, the authors focus on the high-dimensional regime in which both the dataset size and the number of features tend to infinity. They analyze the performance of a simple regression model trained on the random features and revealed several interesting and important observations. Unfortunately, the reviewers could not reach a consensus as to whether this paper had sufficient novelty to merit acceptance at this time. Incorporating their feedback would move the paper closer towards the acceptance threshold.""" 618,"""Building Deep Equivariant Capsule Networks""","['Capsule networks', 'equivariance']","""Capsule networks are constrained by the parameter-expensive nature of their layers, and the general lack of provable equivariance guarantees. We present a variation of capsule networks that aims to remedy this. We identify that learning all pair-wise part-whole relationships between capsules of successive layers is inefficient. Further, we also realise that the choice of prediction networks and the routing mechanism are both key to equivariance. Based on these, we propose an alternative framework for capsule networks that learns to projectively encode the manifold of pose-variations, termed the space-of-variation (SOV), for every capsule-type of each layer. This is done using a trainable, equivariant function defined over a grid of group-transformations. Thus, the prediction-phase of routing involves projection into the SOV of a deeper capsule using the corresponding function. As a specific instantiation of this idea, and also in order to reap the benefits of increased parameter-sharing, we use type-homogeneous group-equivariant convolutions of shallower capsules in this phase. We also introduce an equivariant routing mechanism based on degree-centrality. We show that this particular instance of our general model is equivariant, and hence preserves the compositional representation of an input under transformations. We conduct several experiments on standard object-classification datasets that showcase the increased transformation-robustness, as well as general performance, of our model to several capsule baselines.""","""This paper combines recent ideas from capsule networks and group-equivariant neural networks to form equivariant capsules, which is a great idea. The exposition is clear and the experiments provide a very interesting analysis and results. I believe this work will be very well received by the ICLR community.""" 619,"""Deep k-NN for Noisy Labels""",[],"""Modern machine learning models are often trained on examples with noisy labels that hurt performance and are hard to identify. 
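A runnable toy version of the random-features regression in record 617, under assumed dimensions and an assumed reading of the noisy autoencoding task (reconstruct clean inputs from noisy ones); the half-ReLU/half-tanh split illustrates a mixture of nonlinearities:

import numpy as np

rng = np.random.default_rng(0)
n, d, m = 2000, 100, 400
X = rng.standard_normal((n, d))
Xn = X + 0.3 * rng.standard_normal((n, d))  # noisy inputs, clean targets
W = rng.standard_normal((d, m)) / np.sqrt(d)
b = rng.standard_normal(m)
Z = Xn @ W + b
H = np.concatenate([np.maximum(Z[:, :m // 2], 0),    # ReLU features
                    np.tanh(Z[:, m // 2:])], axis=1)  # tanh features
B = np.linalg.solve(H.T @ H + 1e-3 * np.eye(m), H.T @ X)  # ridge regression
print('training error:', np.mean((H @ B - X) ** 2))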
In this paper, we provide an empirical study showing that a simple pseudo-formula-nearest neighbor-based filtering approach on the logit layer of a preliminary model can remove mislabeled training data and produce more accurate models than some recently proposed methods. We also provide new statistical guarantees on its efficacy.""","""The paper proposed and analyze a k-NN method for identifying corrupted labels for training deep neural networks. Although a reviewer pointed out that the noisy k-NN contribution is interesting, I think the paper can be much improved further due to the following: (a) Lack of state-of-the-art baselines to compare. (b) Lack of important recent related work, i.e., ""Robust Inference via Generative Classifiers for Handling Noisy Labels"" from ICML 2019 (see pseudo-url). The paper also runs a clustering-like algorithm for handling noisy labels, and the authors should compare and discuss why the proposed method is superior. (c) Poor write-up; e.g., the paper should address what is missing in existing methods from many different perspectives, as this is a quite well-studied, popular problem. Hence, I recommend rejection.""" 620,"""Denoising Improves Latent Space Geometry in Text Autoencoders""","['controllable text generation', 'autoencoders', 'denoising', 'latent space geometry']","""Neural language models have recently shown impressive gains in unconditional text generation, but controllable generation and manipulation of text remain challenging. In particular, controlling text via latent space operations in autoencoders has been difficult, in part due to chaotic latent space geometry. We propose to employ adversarial autoencoders together with denoising (referred to as DAAE) to drive the latent space to organize itself. Theoretically, we prove that input sentence perturbations in the denoising approach encourage similar sentences to map to similar latent representations. Empirically, we illustrate the trade-off between text-generation and autoencoder-reconstruction capabilities, and our model significantly improves over other autoencoder variants. Even from completely unsupervised training, DAAE can successfully alter the tense/sentiment of sentences via simple latent vector arithmetic.""","""This work presents a simple technique for improving the latent space geometry of text autoencoders. The strengths of the paper lie in the simplicity of the method, and results show that the technique improves over the considered baselines. However, some reviewers expressed concerns over the presented theory for why input noise helps, and the response did not resolve concerns about whether the theory was useful. The paper would be improved if Section 4 were instead rewritten to focus on providing intuition, either with empirical analysis, results on a toy task, or clear but high level discussion of why the method helps. The current theorem statements seem either unnecessary or make strong assumptions that don't hold in practice. As a result, Section 4 in its current form is not in service to the reader's understanding of why the simple method works. 
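The filtering step of record 619 admits a very short sketch; the value of k and the (optimistic) in-sample vote are assumptions of this toy version, since a careful implementation would hold each point out of its own neighbor set:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_filter(logits, labels, k=10):
    # Keep only examples whose label agrees with the k-NN vote in logit space.
    knn = KNeighborsClassifier(n_neighbors=k).fit(logits, labels)
    keep = knn.predict(logits) == labels
    return np.where(keep)[0]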
Finally, further improvements to the paper could be made with comparisons to additional baselines from prior work as suggested by reviewers.""" 621,"""Neural Clustering Processes""","['amortized inference', 'probabilistic clustering', 'mixture models', 'exchangeability', 'spike sorting']","""Mixture models, a basic building block in countless statistical models, involve latent random variables over discrete spaces, and existing posterior inference methods can be inaccurate and/or very slow. In this work we introduce a novel deep learning architecture for efficient amortized Bayesian inference over mixture models. While previous approaches to amortized clustering assumed a fixed or maximum number of mixture components and only amortized over the continuous parameters of each mixture component, our method amortizes over the local discrete labels of all the data points, and performs inference over an unbounded number of mixture components. The latter property makes our method natural for the challenging case of nonparametric Bayesian models, where the number of mixture components grows with the dataset. Our approach exploits the exchangeability of the generative models and is based on mapping distributed, permutation-invariant representations of discrete arrangements into varying-size multinomial conditional probabilities. The resulting algorithm parallelizes easily, yields i.i.d. samples from the approximate posteriors along with a normalized probability estimate of each sample (a quantity generally unavailable using Markov Chain Monte Carlo) and can easily be applied to both conjugate and non-conjugate models, as training only requires samples from the generative model. We also present an extension of the method to models of random communities (such as infinite relational or stochastic block models). As a scientific application, we present a novel approach to neural spike sorting for high-density multielectrode arrays. ""","""This paper uses neural amortized inference for clustering processes to automatically tune the number of clusters based on the observed data. The main contribution of the paper is the design of the posterior parametrization based on the DeepSet method. The reviewers feel that the paper has limited novelty since it mainly follows from existing methodologies. Also, experiments are limited and not all comparisons are made. """ 622,"""A Base Model Selection Methodology for Efficient Fine-Tuning""","['transfer learning', 'fine-tuning', 'parameter transfer']","""While the accuracy of image classification achieves significant improvement with deep Convolutional Neural Networks (CNN), training a deep CNN is a time-consuming task because it requires a large amount of labeled data and takes a long time to converge even with high performance computing resources. Fine-tuning, one of the transfer learning methods, is effective in decreasing time and the amount of data necessary for CNN training. It is known that fine-tuning can be performed efficiently if the source and the target tasks have high relatedness. However, the technique to evaluate the relatedness or transferability of trained models quantitatively from their parameters has not been established. In this paper, we propose and evaluate several metrics to estimate the transferability of pre-trained CNN models for a given target task using feature maps of the last convolutional layer. We found that some of the proposed metrics are good predictors of fine-tuned accuracy, but their effectiveness depends on the structure of the network. 
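In the spirit of record 621 (not the authors' exact architecture), a permutation-invariant assignment head might look as follows; the sum-pooled cluster summaries and the learned 'empty cluster' summary are illustrative assumptions:

import torch
import torch.nn as nn

class AssignNewPoint(nn.Module):
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.score = nn.Sequential(nn.Linear(2 * hidden, hidden),
                                   nn.ReLU(), nn.Linear(hidden, 1))
        self.empty = nn.Parameter(torch.zeros(hidden))  # stands in for a new cluster

    def forward(self, x, clusters):  # x: (dim,); clusters: list of (n_k, dim)
        hx = self.enc(x)
        sums = [self.enc(pts).sum(dim=0) for pts in clusters] + [self.empty]
        logits = torch.stack([self.score(torch.cat([hx, s])).squeeze(-1)
                              for s in sums])
        return torch.log_softmax(logits, dim=0)  # P(join cluster k | data so far)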
Therefore, we also propose to combine two metrics to get a generally applicable indicator. The experimental results reveal that one of the combined metrics is well correlated with fine-tuned accuracy in a variety of network structures and our method has a good potential to reduce the burden of CNN training.""","""This paper proposes to speed up finetuning of pretrained deep image classification networks by predicting the success rate of a zoo of pre-trained networks without completely running them on the test set. The idea is that a sensible measure from the output layer might well correlate with the performance of the network. All reviewers consider this an important problem and a good direction in which to invest effort. However, various concerns are raised and all reviewers unanimously rate weak reject. The major concerns include the unclear relationship between the metrics and the fine-tuning performance, non-comprehensive experiments, and poor writing quality. The authors responded to the reviewers' concerns but did not resolve the major ones. The ACs concur with the concerns and the paper cannot be accepted in its current state.""" 623,"""Variational Autoencoders for Opponent Modeling in Multi-Agent Systems""","['reinforcement learning', 'multi-agent systems', 'representation learning']","""Multi-agent systems exhibit complex behaviors that emanate from the interactions of multiple agents in a shared environment. In this work, we are interested in controlling one agent in a multi-agent system and successfully learning to interact with the other agents that have fixed policies. Modeling the behavior of other agents (opponents) is essential in understanding the interactions of the agents in the system. By taking advantage of recent advances in unsupervised learning, we propose modeling opponents using variational autoencoders. Additionally, many existing methods in the literature assume that the opponent models have access to opponent's observations and actions during both training and execution. To eliminate this assumption, we propose a modification that attempts to identify the underlying opponent model, using only local information of our agent, such as its observations, actions, and rewards. The experiments indicate that our opponent modeling methods achieve equal or greater episodic returns in reinforcement learning tasks against another modeling method.""","""The present work addresses the problem of opponent modeling in multi-agent learning settings, and proposes an approach based on variational auto-encoders (VAEs). Reviewers consider the approach natural, and novel empirical results are presented to show that the proposed approach can accurately model opponents in partially observable settings. Several concerns were addressed by the authors during the rebuttal phase. A key remaining concern is the size of the contribution. Reviewers suggest that a deeper conceptual development, e.g., based on empirical insights, is required.""" 624,"""Graph Neural Networks For Multi-Image Matching""","['Graph Neural Networks', 'Multi-image Matching']","""In geometric computer vision applications, multi-image feature matching gives more accurate and robust solutions compared to simple two-image matching. In this work, we formulate multi-image matching as a graph embedding problem, then use a Graph Neural Network to learn an appropriate embedding function for aligning image features. 
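Record 622 does not spell out its metrics here, so the following is only one plausible instance of a transferability score computed from pooled last-layer feature maps (a between/within class scatter ratio); the name and formula are my assumptions:

import numpy as np

def separability_score(feats, labels):
    # feats: (n, d) globally average-pooled feature maps of the target data;
    # a higher between/within scatter is assumed to predict fine-tuned accuracy.
    mu = feats.mean(axis=0)
    between = within = 0.0
    for c in np.unique(labels):
        fc = feats[labels == c]
        between += len(fc) * np.sum((fc.mean(axis=0) - mu) ** 2)
        within += np.sum((fc - fc.mean(axis=0)) ** 2)
    return between / max(within, 1e-12)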
We use cycle consistency to train our network in an unsupervised fashion, since ground truth correspondence can be difficult or expensive to acquire. Geometric consistency losses are added to aid training, though unlike optimization based methods no geometric information is necessary at inference time. To the best of our knowledge, no other works have used graph neural networks for multi-image feature matching. Our experiments show that our method is competitive with other optimization based approaches.""","""The paper proposes a method for learning multi-image matching using graph neural networks. The model is learned by making use of cycle consistency constraints and geometric consistency, and it achieves a performance that is comparable to the state of the art. While the reviewers view the proposed method as interesting in general, they raised issues regarding the evaluation, which is limited in terms of both the chosen datasets and prior methods. After rounds of discussion, the reviewers reached a consensus that the submission is not mature enough to be accepted for this venue at this time. Therefore, I recommend rejecting this submission.""" 625,"""Augmenting Transformers with KNN-Based Composite Memory""","['knn', 'memory-augmented networks', 'language generation', 'dialogue']","""Various machine learning tasks can benefit from access to external information of different modalities, such as text and images. Recent work has focused on learning architectures with large memories capable of storing this knowledge. We propose augmenting Transformer neural networks with KNN-based Information Fetching (KIF) modules. Each KIF module learns a read operation to access fixed external knowledge. We apply these modules to generative dialogue modeling, a challenging task where information must be flexibly retrieved and incorporated to maintain the topic and flow of conversation. We demonstrate the effectiveness of our approach by identifying relevant knowledge from Wikipedia, images, and human-written dialogue utterances, and show that leveraging this retrieved information improves model performance, measured by automatic and human evaluation.""","""This paper augments transformer encoder-decoder networks architecture with k nearest neighbors to fetch knowledge or information related to the previous conversation, and demonstrates improvements through manual and automated evaluation. Reviewers note the fact that the approach is simple and clean and results in significant improvements; however, the approach is incremental over previous work (including pseudo-url). Furthermore, although the authors improved the article in the light of reviewer suggestions (i.e., rushed analysis, not so clear descriptions) and some reviewers increased their scores, none of them actually marked the paper as an accept or a strong accept.""" 626,"""Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds""","['deep learning', 'active learning', 'batch active learning']","""We design a new algorithm for batch active learning with deep neural network models. Our algorithm, Batch Active learning by Diverse Gradient Embeddings (BADGE), samples groups of points that are disparate and high-magnitude when represented in a hallucinated gradient space, a strategy designed to incorporate both predictive uncertainty and sample diversity into every selected batch. Crucially, BADGE trades off between diversity and uncertainty without requiring any hand-tuned hyperparameters. 
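The retrieval core of the KIF module in record 625 can be sketched as a cosine-similarity top-k lookup over a fixed external memory; the learned read operation around it is omitted and the interface is an assumption:

import torch

def knn_fetch(query_emb, memory_keys, memory_values, k=5):
    # memory_keys: (N, d) precomputed embeddings of fixed external knowledge;
    # memory_values: the N corresponding text/image entries.
    sims = memory_keys @ query_emb / (
        memory_keys.norm(dim=1) * query_emb.norm() + 1e-8)
    idx = sims.topk(k).indices
    return [memory_values[i] for i in idx], sims[idx]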
While other approaches sometimes succeed for particular batch sizes or architectures, BADGE consistently performs as well or better, making it a useful option for real world active learning problems.""","""The paper provides a simple method of active learning for classification using deep nets. The method is motivated by choosing examples based on a computed embedding that represents the last-layer gradients, which is shown to connect to a lower bound on the model change if the example were labeled. The algorithm is simple and easy to implement. The method is justified by convincing experiments. The reviewers agree that the rebuttal and revisions cleared up any misunderstandings. This is a solid empirical work on an active learning technique that seems to have a lot of promise. Accept. """ 627,"""Controlling generative models with continuous factors of variations""","['Generative models', 'factor of variation', 'GAN', 'beta-VAE', 'interpretable representation', 'interpretability']","""Recent deep generative models can provide photo-realistic images as well as visual or textual content embeddings useful to address various tasks of computer vision and natural language processing. Their usefulness is nevertheless often limited by the lack of control over the generative process or the poor understanding of the learned representation. To overcome these major issues, very recent works have shown the value of studying the semantics of the latent space of generative models. In this paper, we propose to advance the interpretability of the latent space of generative models by introducing a new method to find meaningful directions in the latent space of any generative model along which we can move to precisely control specific properties of the generated image, like the position or scale of the object in the image. Our method is weakly supervised and particularly well suited for the search of directions encoding simple transformations of the generated image, such as translation, zoom or color variations. We demonstrate the effectiveness of our method qualitatively and quantitatively, both for GANs and variational auto-encoders.""","""Following the revision and the discussion, all three reviewers agree that the paper provides an interesting contribution to the area of generative image modeling. Accept.""" 628,"""A Theory of Usable Information under Computational Constraints""",[],"""We propose a new framework for reasoning about information in complex systems. Our foundation is based on a variational extension of Shannon's information theory that takes into account the modeling power and computational constraints of the observer. The resulting predictive V-information encompasses mutual information and other notions of informativeness such as the coefficient of determination. Unlike Shannon's mutual information and in violation of the data processing inequality, V-information can be created through computation. This is consistent with deep neural networks extracting hierarchies of progressively more informative features in representation learning. Additionally, we show that by incorporating computational constraints, V-information can be reliably estimated from data even in high dimensions with PAC-style guarantees. 
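A compact sketch of record 626 (BADGE): the 'hallucinated' gradient embedding pairs predictive uncertainty with penultimate-layer features, and a k-means++-style seeding (my reading of the sampler) picks a diverse, high-magnitude batch:

import numpy as np

def gradient_embeddings(probs, feats):
    # Last-layer gradient if the predicted class were the label:
    # (p - onehot(argmax p)) outer-product penultimate features.
    n, c = probs.shape
    onehot = np.eye(c)[probs.argmax(axis=1)]
    return ((probs - onehot)[:, :, None] * feats[:, None, :]).reshape(n, -1)

def kmeanspp_select(emb, batch_size, rng=np.random.default_rng(0)):
    chosen = [int(rng.integers(len(emb)))]
    d2 = np.sum((emb - emb[chosen[0]]) ** 2, axis=1)
    while len(chosen) < batch_size:
        idx = int(rng.choice(len(emb), p=d2 / d2.sum()))  # far points more likely
        chosen.append(idx)
        d2 = np.minimum(d2, np.sum((emb - emb[idx]) ** 2, axis=1))
    return chosen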
Empirically, we demonstrate predictive V-information is more effective than mutual information for structure learning and fair representation learning.""","""All reviewers unanimously accept the paper.""" 629,"""MaskConvNet: Training Efficient ConvNets from Scratch via Budget-constrained Filter Pruning""","['Structured Pruning', 'Sparsity Regularization', 'Budget-Aware']","""In this paper, we propose a framework, called MaskConvNet, for ConvNets filter pruning. MaskConvNet provides elegant support for training budget-aware pruned networks from scratch, by adding a simple mask module to a ConvNet architecture. MaskConvNet enjoys several advantages - (1) Flexible, the mask module can be integrated with any ConvNets in a plug-and-play manner. (2) Simple, the mask module is implemented by a hard Sigmoid function with a small number of trainable mask variables, adding negligible memory and computational overheads to the networks during training. (3) Effective, it is able to achieve a competitive pruning rate while maintaining comparable accuracy with the baseline ConvNets without pruning, regardless of the datasets and ConvNet architectures used. (4) Fast, it is observed that the number of training epochs required by MaskConvNet is close to training a baseline without pruning. (5) Budget-aware, with a sparsity budget on a target metric (e.g. model size and FLOP), MaskConvNet is able to train in a way that the optimizer can adaptively sparsify the network and automatically maintain the sparsity level, until the pruned network produces good accuracy and fulfills the budget constraint simultaneously. Results on CIFAR-10 and ImageNet with several ConvNet architectures show that MaskConvNet works competitively well compared to previous pruning methods, with budget-constraint well respected. Code is available at pseudo-url. We hope MaskConvNet, as a simple and general pruning framework, can address the gaps in existing literature and advance future studies to push the boundaries of neural network pruning.""","""This paper presents a method to learn a pruned convolutional network during conventional training. Pruning the network has advantages (in deployment) of reducing the final model size and reducing the required FLOPS for compute. The method adds a pruning mask on each layer with an additional sparsity loss on the mask variables. The method avoids the cost of a train-prune-retrain optimization process that has been used in several earlier papers. The method is evaluated on CIFAR-10 and ImageNet with three standard convolutional network architectures. The results show comparable performance to the original networks with the learned sparse networks. The reviewers made many substantial comments on the paper and most of these were addressed in the author response and subsequent discussion. For example, Reviewer1 mentioned two other papers that promote sparsity implicitly during training (Q3), and the authors acknowledged the omission and described how those methods had less flexibility on a target metric (FLOPS) that is not parameter size. Many of the author responses described changes to an updated paper that would clarify the claims and results (R1: Q2-7, R2:Q3). However, the reviewers raised many concerns for the original paper and they did not see an updated paper that contains the proposed revisions. Given the numerous concerns with the original submission, the reviewers wanted to see the revised paper to assess whether their concerns had been addressed adequately. 
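For record 628, predictive V-information can be estimated as the drop in cross-entropy when the predictor may look at X; the choice of V as logistic models and the in-sample fit are simplifying assumptions (the paper's PAC-style analysis would use proper splits):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def v_information(X, y):
    # y: integer class labels. H_V(Y): best constant predictor within V.
    p = np.bincount(y) / len(y)
    h_y = -np.sum(p * np.log(p + 1e-12))
    # H_V(Y | X): cross-entropy of the best model in V that observes X.
    model = LogisticRegression(max_iter=1000).fit(X, y)
    h_y_given_x = log_loss(y, model.predict_proba(X))
    return h_y - h_y_given_x  # in nats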
Additionally, the paper does not have a comparison experiment with state-of-the-art results, and the current results were not sufficiently convincing for the reviewers. Reviewer1 and author response to questions 13--15 suggest that the experimental results with ResNet-34 are inadequate to show the benefits of the approach, but results for the proposed method with the larger ResNet-50 (which could show benefits) are not yet ready. The current paper is not ready for publication. """ 630,"""Transfer Active Learning For Graph Neural Networks""","['Active Learning', 'Graph Neural Networks', 'Transfer Learning', 'Reinforcement Learning']","""Graph neural networks have been proved very effective for a variety of prediction tasks on graphs such as node classification. Generally, a large number of labeled data are required to train these networks. However, in reality it could be very expensive to obtain a large number of labeled data on large-scale graphs. In this paper, we studied active learning for graph neural networks, i.e., how to effectively label the nodes on a graph for training graph neural networks. We formulated the problem as a sequential decision process, which sequentially labels informative nodes, and trained a policy network to maximize the performance of graph neural networks for a specific task. Moreover, we also studied how to learn a universal policy for labeling nodes on graphs with multiple training graphs and then transfer the learned policy to unseen graphs. Experimental results on both settings of a single graph and multiple training graphs (transfer learning setting) demonstrate the effectiveness of our proposed approaches over many competitive baselines. ""","""Paper proposes a method for active learning on graphs. Reviewers found the presentation of the method confusing and somewhat lacking novelty in light of existing works (some of which were not compared to). After the rebuttal and revisions, reviewers' minds were not changed from rejection. """ 631,"""Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach""","['interpretable machine learning', 'information bottleneck principle', 'black-box']","""Interpretable machine learning has gained much attention recently. Briefness and comprehensiveness are necessary in order to provide a large amount of information concisely when explaining a black-box decision system. However, existing interpretable machine learning methods fail to consider briefness and comprehensiveness simultaneously, leading to redundant explanations. We propose the variational information bottleneck for interpretation, VIBI, a system-agnostic interpretable method that provides a brief but comprehensive explanation. VIBI adopts an information theoretic principle, information bottleneck principle, as a criterion for finding such explanations. For each instance, VIBI selects key features that are maximally compressed about an input (briefness), and informative about a decision made by a black-box system on that input (comprehensive). We evaluate VIBI on three datasets and compare with state-of-the-art interpretable machine learning methods in terms of both interpretability and fidelity evaluated by human and quantitative metrics.""","""The authors present a system-agnostic interpretable method that provides a brief (=compressed) but comprehensive (=informative) explanation. Their system is built upon the idea of VIB. 
The authors compare against 3 state-of-the-art interpretable machine learning methods and the evaluation is in terms of interpretability (=human understandable) and fidelity (=accuracy of approximating black-box model). Overall, all reviewers agreed that the topic of model interpretability is an important one and the novel connection between IB and interpretable data-summaries is a very natural one. This manuscript has generated a lot of discussion among the reviewers during the rebuttal and there are a number of concerns that are currently preventing me from recommending this paper for acceptance. The first concern relates to the lack of comparison against attention methods (I agree with the authors that this is a model-specific solution whereas they propose a model-agnostic one), however attention is currently the elephant in the room and the first thing someone thinks of when thinking of interpretability. As such, the authors should have presented such a comparison. The second concern relates to the human evaluation protocol which could be significantly improved (Why 100 samples from all models but 200 for VIBI? Given the small set of results, are these model differences significant? Similarly, assuming that we have multiple annotations per sample, what is the variance in the annotations?). This paper is currently borderline and given reviewers' concerns and the limited space in the conference program I cannot recommend acceptance of this paper. """ 632,"""Adversarially Robust Representations with Smooth Encoders""","['Adversarial Learning', 'Robust Representations', 'Variational AutoEncoder', 'Wasserstein Distance', 'Variational Inference']","""This paper studies the undesired phenomenon of over-sensitivity of representations learned by deep networks to semantically-irrelevant changes in data. We identify a cause for this shortcoming in the classical Variational Auto-encoder (VAE) objective, the evidence lower bound (ELBO). We show that the ELBO fails to control the behaviour of the encoder out of the support of the empirical data distribution and this behaviour of the VAE can lead to extreme errors in the learned representation. This is a key hurdle in the effective use of representations for data-efficient learning and transfer. To address this problem, we propose to augment the data with specifications that enforce insensitivity of the representation with respect to families of transformations. To incorporate these specifications, we propose a regularization method that is based on a selection mechanism that creates a fictive data point by explicitly perturbing an observed true data point. For certain choices of parameters, our formulation naturally leads to the minimization of the entropy regularized Wasserstein distance between representations. We illustrate our approach on standard datasets and experimentally show that significant improvements in the downstream adversarial accuracy can be achieved by learning robust representations completely in an unsupervised manner, without a reference to a particular downstream task and without a costly supervised adversarial training procedure. ""","""This paper proposes a novel way of expanding our VAE toolkit by tying it to adversarial robustness. 
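One way to instantiate the perturbation-insensitivity idea of record 632 (for the special diagonal-Gaussian case; the paper's entropy-regularized Wasserstein formulation is more general) is a closed-form 2-Wasserstein penalty between the posteriors of a point and of its perturbation:

import torch

def w2_smoothness_penalty(mu, logvar, mu_pert, logvar_pert):
    # Squared W2 distance between diagonal Gaussians:
    # ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2, averaged over the batch.
    std, std_pert = (0.5 * logvar).exp(), (0.5 * logvar_pert).exp()
    return ((mu - mu_pert) ** 2 + (std - std_pert) ** 2).sum(dim=1).mean()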
It should thus be of interest to the respective communities.""" 633,"""Encoding Musical Style with Transformer Autoencoders""","['music generation', 'sequence-to-sequence model', 'controllable generation']","""We consider the problem of learning high-level controls over the global structure of sequence generation, particularly in the context of symbolic music generation with complex language models. In this work, we present the Transformer autoencoder, which aggregates encodings of the input data across time to obtain a global representation of style from a given performance. We show it is possible to combine this global embedding with other temporally distributed embeddings, enabling improved control over the separate aspects of performance style and melody. Empirically, we demonstrate the effectiveness of our method on a variety of music generation tasks on the MAESTRO dataset and an internal, 10,000+ hour dataset of piano performances, where we achieve improvements in terms of log-likelihood and mean listening scores as compared to relevant baselines.""","""Main content: Blind review #3 summarizes it well: This paper presents a technique for encoding the high level style of pieces of symbolic music. The music is represented as a variant of the MIDI format. The main strategy is to condition a Music Transformer architecture on this global style embedding. Additionally, the Music Transformer model is also conditioned on a combination of both style and melody embeddings to try and generate music similar to the conditioning melody but in the style of the performance embedding. -- Discussion: The reviewers questioned the novelty. Blind review #2 wrote: ""Overall, I think the paper presents an interesting application and parts of it are well written, however I have concerns with the technical presentation in parts of the paper and some of the methodology. Firstly, I think the algorithmic novelty in the paper is fairly limited. The performance conditioning vector is generated by an additional encoding transformer, compared to the Music Transformer paper (Huang et. al. 2019b). However, the limited algorithmic novelty is not the main concern. The authors also mention an internal dataset of music audio and transcriptions, which can be a major contribution to the music information retrieval (MIR) community. However it is not clear if this dataset will be publicly released or is only for internal experiments."" However, after revision, the same reviewer has upgraded the review to a weak accept, as the authors wrote ""We emphasize that our goal is to provide users with more fine-grained control over the outputs generated by a seq2seq language model. Despite its simplicity, our method is able to learn a global representation of style for a Transformer, which to the best of our knowledge is a novel contribution for music generation. Additionally, we can synthesize an arbitrary melody into the style of another performance, and we demonstrate the effectiveness of our results both quantitatively (metrics) and qualitatively (interpolations, samples, and user listening studies)."" -- Recommendation and justification: This paper is borderline for the reasons above, and due to the large number of strong papers, is not accepted at this time. 
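The aggregation step of record 633 reduces, in its simplest reading, to pooling encoder states across time; mean-pooling and the additive combination with an aligned melody embedding are assumptions of this sketch:

import torch

def global_style_embedding(encoder_states):
    # (batch, time, dim) -> (batch, dim): one global style vector per performance
    return encoder_states.mean(dim=1)

def condition(decoder_inputs, style, melody_states=None):
    cond = style[:, None, :]         # broadcast the global style over time
    if melody_states is not None:    # assumed aligned to the decoder length
        cond = cond + melody_states  # temporally distributed melody codes
    return decoder_inputs + cond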
As one comment, this work might actually be more suitable for a more specialized conference like ISMIR, as its novel contribution is more to music applications than to fundamental machine learning approaches.""" 634,"""Cascade Style Transfer""","['style transfer', 'cascade', 'quality', 'flexibility', 'domain-independent', 'serial', 'parallel']","""Recent studies have made tremendous progress in style transfer for specific domains, e.g., artistic, semantic and photo-realistic. However, existing approaches have limited flexibility in extending to other domains, as different style representations are often specific to particular domains. This also limits the stylistic quality. To address these limitations, we propose Cascade Style Transfer, a simple yet effective framework that can improve the quality and flexibility of style transfer by combining multiple existing approaches directly. Our cascade framework contains two architectures, i.e., Serial Style Transfer (SST) and Parallel Style Transfer (PST). The SST takes the stylized output of one method as the input content of the others. This could help improve the stylistic quality. The PST uses a shared backbone and a loss module to optimize the loss functions of different methods in parallel. This could help improve the quality and flexibility, and guide us to find domain-independent approaches. Our experiments are conducted on three major style transfer domains: artistic, semantic and photo-realistic. In all these domains, our methods have shown superiority over the state-of-the-art methods.""","""This work combines style transfer approaches either in a serial or parallel fashion, and shows that the combination of methods is more powerful than isolated methods. The novelty in this work is extremely limited and not offset by insightful analysis or very thorough experiments, given that most results are qualitative. Authors have not provided a public response. Therefore, we recommend rejection.""" 635,"""Depth-Width Trade-offs for ReLU Networks via Sharkovsky's Theorem""","['Depth-Width trade-offs', 'ReLU networks', 'chaos theory', 'Sharkovsky Theorem', 'dynamical systems']","""Understanding the representational power of Deep Neural Networks (DNNs) and how their structural properties (e.g., depth, width, type of activation unit) affect the functions they can compute, has been an important yet challenging question in deep learning and approximation theory. In a seminal paper, Telgarsky highlighted the benefits of depth by presenting a family of functions (based on simple triangular waves) for which DNNs achieve zero classification error, whereas shallow networks with fewer than exponentially many nodes incur constant error. Even though Telgarsky's work reveals the limitations of shallow neural networks, it doesn't inform us as to why these functions are difficult to represent, and in fact he states it as a tantalizing open question to characterize those functions that cannot be well-approximated by smaller depths. In this work, we point to a new connection between DNNs expressivity and Sharkovsky's Theorem from dynamical systems, that enables us to characterize the depth-width trade-offs of ReLU networks for representing functions based on the presence of a generalized notion of fixed points, called periodic points (a fixed point is a point of period 1). 
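Record 634's serial variant (SST) is essentially function composition; this sketch treats each existing style-transfer method as a black-box callable, which is an assumption about the interface:

def serial_style_transfer(content, style, methods):
    # SST: the stylized output of one method becomes the content input of the next.
    out = content
    for method in methods:
        out = method(out, style)
    return out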
Motivated by our observation that the triangle waves used in Telgarsky's work contain points of period 3 (a period that is special in that it implies chaotic behaviour, by the celebrated result of Li and Yorke), we proceed to give general lower bounds for the width needed to represent periodic functions as a function of the depth. Technically, the crux of our approach is based on an eigenvalue analysis of the dynamical systems associated with such functions.""","""The article is concerned with depth width tradeoffs in the representation of functions with neural networks. The article presents connections between expressivity of neural networks and dynamical systems, and obtains lower bounds on the width to represent periodic functions as a function of the depth. These are relevant advances and new perspectives for the theoretical study of neural networks. The reviewers were very positive about this article. The authors' responses also addressed comments from the initial reviews. """ 636,"""A Uniform Generalization Error Bound for Generative Adversarial Networks""","['GANs', 'Uniform Generalization Bound', 'Deep Learning', 'Weight normalization']",""" This paper focuses on the theoretical investigation of the unsupervised generalization theory of generative adversarial networks (GANs). We first formulate a more reasonable definition of generalization error and generalization bounds for GANs. On top of that, we establish a bound for generalization error with a fixed generator in a general weight normalization context. Then, we obtain a width-independent bound by applying pseudo-formula and spectral norm weight normalization. To better understand the unsupervised model, GANs, we establish the generalization bound, which uniformly holds with respect to the choice of generators. Hence, we can explain how the complexity of discriminators and generators contributes to generalization error. For pseudo-formula and spectral weight normalization, we provide explicit guidance on how to design parameters to train robust generators. Our numerical simulations also verify that our generalization bound is reasonable.""","""The authors received reviews from true experts and these experts felt the paper was not up to the standards of ICLR. Reviewer 3 and Reviewer 1 disagree as to whether the new notion of generalization error is appropriate. I think both cases can be defended. I think the authors should aim to sharpen their argument in this regard. Several reviewers at one point remark that the results follow from standard techniques: shouldn't this be the case? I believe the actual criticism being made is that the value of these new results does not go above and beyond existing ones. There is also the matter of what value should be attributed to technical developments on their own. On this matter, the reviewers seem to agree that the derivations lean heavily on prior work. """ 637,"""The Dynamics of Signal Propagation in Gated Recurrent Neural Networks""","['recurrent neural networks', 'theory of deep learning']","""Training recurrent neural networks (RNNs) on long sequence tasks is plagued with difficulties arising from the exponential explosion or vanishing of signals as they propagate forward or backward through the network. Many techniques have been proposed to ameliorate these issues, including various algorithmic and architectural modifications. 
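To see why period 3 matters for record 635, one can verify a period-3 orbit of the tent map numerically; the tent map stands in here for the paper's triangle waves (an illustrative substitution, not the paper's exact construction):

def tent(x):
    # One 'triangle wave' bump; the depth-t composition tent(tent(...)) has 2^(t-1) bumps.
    return 1 - abs(1 - 2 * x)

x = 2 / 7
orbit = [x]
for _ in range(2):
    x = tent(x)
    orbit.append(x)
print(orbit)            # [2/7, 4/7, 6/7]
print(tent(orbit[-1]))  # back to 2/7: a period-3 point, hence Li-Yorke chaos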
Two of the most successful RNN architectures, the LSTM and the GRU, do exhibit modest improvements over vanilla RNN cells, but they still suffer from instabilities when trained on very long sequences. In this work, we develop a mean field theory of signal propagation in LSTMs and GRUs that enables us to calculate the time scales for signal propagation as well as the spectral properties of the state-to-state Jacobians. By optimizing these quantities in terms of the initialization hyperparameters, we derive a novel initialization scheme that eliminates or reduces training instabilities. We demonstrate the efficacy of our initialization scheme on multiple sequence tasks, on which it enables successful training while a standard initialization either fails completely or is orders of magnitude slower. We also observe a beneficial effect on generalization performance using this new initialization.""","""Using ideas from mean-field theory and statistical mechanics, this paper derives a principled way to analyze signal propagation through gated recurrent networks. This analysis then allows for the development of a novel initialization scheme capable of mitigating subsequent training instabilities. In the end, while reviewers appreciated some of the analytical insights provided, two still voted for rejection while one chose accept after the rebuttal and discussion period. And as AC for this paper, I did not find sufficient evidence to overturn the reviewer majority for two primary reasons. First, the paper claims to demonstrate the efficacy of the proposed initialization scheme on multiple sequence tasks, but the presented experiments do not really involve representative testing scenarios as pointed out by reviewers. Given that this is not a purely theoretical paper, but rather one suggesting practically-relevant initializations for RNNs, it seems important to actually demonstrate this on sequence data people in the community actually care about. In fact, even the reviewer who voted for acceptance conceded that the presented results were not too convincing (basically limited to toy situations involving CIFAR-10 and MNIST data). Secondly, all reviewers found parts of the paper difficult to digest, and while a future revision has been promised to provide clarity, no text was actually changed making updated evaluations problematic. Note that the rebuttal mentions that the paper is written in a style that is common in the physics literature, and this appears to be a large part of the problem. ICLR is an ML conference and in this respect, to the extent possible, it is important to frame relevant papers in an accessible way such that a broader segment of this community can benefit from the key message. At the very least, this will ensure that the reviewer pool is more equipped to properly appreciate the contribution. My own view is that this work can be reframed in such a way that it could be successfully submitted to another ML conference in the future.""" 638,"""GLAD: Learning Sparse Graph Recovery""","['Meta learning', 'automated algorithm design', 'learning structure recovery', 'Gaussian graphical models']","""Recovering sparse conditional independence graphs from data is a fundamental problem in machine learning with wide applications. A popular formulation of the problem is an pseudo-formula regularized maximum likelihood estimation. Many convex optimization algorithms have been designed to solve this formulation to recover the graph structure. 
Recently, there has been a surge of interest in learning algorithms directly from data; in this case, the goal is to learn to map the empirical covariance to the sparse precision matrix. However, it is a challenging task in this case, since the symmetric positive definiteness (SPD) and sparsity of the matrix are not easy to enforce in learned algorithms, and a direct mapping from data to precision matrix may contain many parameters. We propose a deep learning architecture, GLAD, which uses an Alternating Minimization (AM) algorithm as our model inductive bias, and learns the model parameters via supervised learning. We show that GLAD learns a very compact and effective model for recovering sparse graphs from data.""",""" The paper proposes a neural network architecture to address the problem of estimating a sparse precision matrix from data, which can be used for inferring conditional independence if the random variables are Gaussian. The authors propose an Alternating Minimisation procedure for solving the l1-regularized maximum likelihood, which can be unrolled and parameterized. This method is shown to converge faster at inference time than other methods and it is also far more effective in terms of training time compared to an existing data driven method. Reviewers had good initial impressions of this paper, pointing out the significance of the idea and the soundness of the setup. After a productive rebuttal phase the authors significantly improved the readability and successfully clarified the remaining concerns of the reviewers. This AC thus recommends acceptance. """ 639,"""Jacobian Adversarially Regularized Networks for Robustness""","['adversarial examples', 'robust machine learning', 'deep learning']","""Adversarial examples are crafted with imperceptible perturbations with the intent to fool neural networks. Against such attacks, adversarial training and its variants stand as the strongest defense to date. Previous studies have pointed out that robust models that have undergone adversarial training tend to produce more salient and interpretable Jacobian matrices than their non-robust counterparts. A natural question is whether a model trained with an objective to produce salient Jacobian can result in better robustness. This paper answers this question with affirmative empirical results. We propose Jacobian Adversarially Regularized Networks (JARN) as a method to optimize the saliency of a classifier's Jacobian by adversarially regularizing the model's Jacobian to resemble natural training images. Image classifiers trained with JARN show improved robust accuracy compared to standard models on the MNIST, SVHN and CIFAR-10 datasets, uncovering a new angle to boost robustness without using adversarial training.""","""This paper extends previous observations (Tsipras, Etmann, etc.) on relations between the Jacobian and robustness, and directly trains a model that improves robustness using Jacobians that look like images. The questions regarding computation time (suggested by two reviewers, including one of the most negative reviewers) are appropriately addressed by the authors (added experiments). Reviewers agree that the idea is novel, and some conjectured why the paper's idea is a very sensible one. We think this paper would be of interest to ICLR readers. Please address any remaining comments from the reviewers before the final copy. 
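For record 638, the classical estimator that GLAD's unrolled alternating-minimization network learns to accelerate is the l1-penalized Gaussian MLE; a standard off-the-shelf solver makes the target of the comparison concrete (the data and alpha here are illustrative):

import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))     # n samples of a 20-dim variable
gl = GraphicalLasso(alpha=0.1).fit(X)  # alpha = sparsity penalty
precision = gl.precision_              # sparse inverse-covariance estimate
print((np.abs(precision) > 1e-6).sum(), 'nonzero entries')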
""" 640,"""LabelFool: A Trick in the Label Space""","['Adversarial attack', 'LabelFool', 'Imperceptibility', 'Label space']","""It is widely known that well-designed perturbations can cause state-of-the-art machine learning classifiers to mis-label an image, with sufficiently small perturbations that are imperceptible to the human eyes. However, by detecting the inconsistency between the image and wrong label, the human observer would be alerted of the attack. In this paper, we aim to design attacks that not only make classifiers generate wrong labels, but also make the wrong labels imperceptible to human observers. To achieve this, we propose an algorithm called LabelFool which identifies a target label similar to the ground truth label and finds a perturbation of the image for this target label. We first find the target label for an input image by a probability model, then move the input in the feature space towards the target label. Subjective studies on ImageNet show that in the label space, our attack is much less recognizable by human observers, while objective experimental results on ImageNet show that we maintain similar performance in the image space as well as attack rates to state-of-the-art attack algorithms.""","""Thanks for the discussion with reviewers, which improved our understanding of your paper significantly. However, we concluded that this paper is still premature to be accepted to ICLR2020. We hope that the detailed comments by the reviewers help improve your paper for potential future submission.""" 641,"""Gradient Surgery for Multi-Task Learning""","['multi-task learning', 'deep learning']","""While deep learning and deep reinforcement learning systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge, particularly as these algorithms learn individual tasks from scratch. Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks to enable more efficient learning. However, the multi-task setting presents a number of optimization challenges, making it difficult to realize large efficiency gains compared to learning tasks independently. The reasons why multi-task learning is so challenging compared to single task learning are not fully understood. Motivated by the insight that gradient interference causes optimization challenges, we develop a simple and general approach for avoiding interference between gradients from different tasks, by altering the gradients through a technique we refer to as gradient surgery. We propose a form of gradient surgery that projects the gradient of a task onto the normal plane of the gradient of any other task that has a conflicting gradient. On a series of challenging multi-task supervised and multi-task reinforcement learning problems, we find that this approach leads to substantial gains in efficiency and performance. Further, it can be effectively combined with previously-proposed multi-task architectures for enhanced performance in a model-agnostic way.""","""This paper presents a method for improving optimization in multi-task learning settings by minimizing the interference of gradients belonging to different tasks. While the idea is simple and well-motivated, the reviewers felt that the problem is still not studied adequately. The proofs are useful, but there is still a gap when it comes to practicality. 
The rebuttal clarified some of the concerns, but still there is a feeling that (a) the main assumptions for the method need to be demonstrated in a more convincing way, e.g. by boosting the experiments as suggested with other MTL methods, and (b) the paper needs to be placed better in the current literature, minimizing the gap between the proofs/underlying assumptions and practical usefulness. """ 642,"""Non-linear System Identification from Partial Observations via Iterative Smoothing and Learning""","['System Identification', 'Dynamical Systems', 'Partial Observations', 'Non-linear Programming', 'Expectation Maximization', 'Neural Networks']","""System identification is the process of building a mathematical model of an unknown system from measurements of its inputs and outputs. It is a key step for model-based control, estimator design, and output prediction. This work presents an algorithm for non-linear offline system identification from partial observations, i.e. situations in which the system's full-state is not directly observable. The algorithm presented, called SISL, iteratively infers the system's full state through non-linear optimization and then updates the model parameters. We test our algorithm on a simulated system of coupled Lorenz attractors, showing our algorithm's ability to identify high-dimensional systems that prove intractable for particle-based approaches. We also use SISL to identify the dynamics of an aerobatic helicopter. By augmenting the state with unobserved fluid states, we learn a model that predicts the acceleration of the helicopter better than state-of-the-art approaches.""","""The paper is about nonlinear system identification in an EM-style learning framework. The idea is to use nonlinear programming for the E step (finding a MAP estimate) and then refine the model parameters. In flavor, this approach is similar to the work by Roweis and Ghahramani. However, this paper does not offer any new insights whatsoever and the (very short) methods section arrives at proposing to compute the maximum a posteriori estimate (eq. 5). While the motivation for this given in the paper is a bit hard to understand, it is of course a very well-known and useful estimator. Besides the maximum likelihood estimator, this is one of the most commonly used point estimators; see any textbook on statistical signal processing. There has been quite a bit of work in the signal processing community over the last 10 years, and a good overview can be found here: pseudo-url. This should give evidence that this is indeed a standard way of solving the problem and it does work really well. Given that we have such fast and good optimizers these days, it is common to solve Kalman filtering/smoothing problems via this optimization problem. The paper does not contain any analysis at all. The experiments do of course show that the method works (when there is low noise). Again, we know very well that the MAP estimate is a decent estimator for unimodal problems. The MAP estimator can also be made to work well for noisy situations. As for the comments that sequential Monte Carlo methods do not work in higher dimensions, that is indeed true. However, there are now algorithms that work in much higher dimensions than those considered by the authors of this paper, e.g., pseudo-url, which also contains an up-to-date survey on the topic. Furthermore, when it comes to particle smoothing, there are now much more efficient smoothers than 10 years ago. 
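The smoothing step that the meta-review of record 642 identifies as a MAP estimate can be sketched as one nonlinear least-squares problem over the whole state trajectory; the quadratic penalties and equal noise weighting are assumptions of this toy version:

import numpy as np
from scipy.optimize import minimize

def smooth_states(y, f, g, state_dim, lam=1.0):
    # y: list of T observations; f: dynamics model; g: observation model.
    T = len(y)
    def cost(flat):
        x = flat.reshape(T, state_dim)
        dyn = sum(np.sum((x[t + 1] - f(x[t])) ** 2) for t in range(T - 1))
        obs = sum(np.sum((y[t] - g(x[t])) ** 2) for t in range(T))
        return dyn + lam * obs
    res = minimize(cost, np.zeros(T * state_dim), method='L-BFGS-B')
    return res.x.reshape(T, state_dim)  # alternate with a model-update step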
The area of particle smoothing has also evolved rapidly over the past years. Summary: The paper makes use of the well-known MAP estimator for learning nonlinear dynamical systems (states and parameters). This is by now a standard technique in signal processing. There are several throw-away comments on SMC that are not valid and that are not grounded in the intense research of that field over the past decade. """ 643,"""MxPool: Multiplex Pooling for Hierarchical Graph Representation Learning""","['GNN', 'graph pooling', 'graph representation learning']","""Graphs are known to have complicated structures and have myriad applications. How to utilize deep learning methods for graph classification tasks has attracted considerable research attention in the past few years. Two properties of graph data have imposed significant challenges on existing graph learning techniques. (1) Diversity: each graph has a variable number of unordered nodes and diverse node/edge types. (2) Complexity: graphs have not only node/edge features but also complex topological features. These two properties motivate us to use a multiplex structure to learn graph features in a diverse way. In this paper, we propose a simple but effective approach, MxPool, which concurrently uses multiple graph convolution networks and graph pooling networks to build a hierarchical learning structure for graph representation learning tasks. Our experiments on numerous graph classification benchmarks show that our MxPool has marked superiority over other state-of-the-art graph representation learning methods. For example, MxPool achieves 92.1% accuracy on the D&D dataset while the second best method DiffPool only achieves 80.64% accuracy.""","""All three reviewers are consistently negative about this paper; thus, a rejection is recommended.""" 644,"""Pareto Optimality in No-Harm Fairness""","['Fairness', 'Fairness in Machine Learning', 'No-Harm Fairness']","""Common fairness definitions in machine learning focus on balancing various notions of disparity and utility. In this work we study fairness in the context of risk disparity among sub-populations. We introduce the framework of Pareto-optimal fairness, where the goal of reducing risk disparity gaps is secondary only to the principle of not doing unnecessary harm, a concept that is especially applicable to high-stakes domains such as healthcare. We provide analysis and methodology to obtain maximally-fair no-harm classifiers on finite datasets. We argue that even in domains where fairness at cost is required, no-harm fairness can prove to be the optimal first step. This same methodology can also be applied to any unbalanced classification task, where we want to dynamically equalize the misclassification risks across outcomes without degrading overall performance any more than strictly necessary. We test the proposed methodology on real case studies of predicting income, ICU patient mortality, classifying skin lesions from images, and assessing credit risk, demonstrating how the proposed framework compares favorably to other traditional approaches.""","""This manuscript outlines procedures to address fairness as measured by disparity in risk across groups. The manuscript is primarily motivated by methods that can achieve ""no-harm"" fairness, i.e., achieving fairness without increasing the risk in subgroups. The reviewers and AC agree that the problem studied is timely and interesting.
However, in reviews and discussion, the reviewers noted issues with the clarity of the presentation and with the justification of the results. The consensus was that the manuscript in its current state is borderline, and would have to be significantly improved in terms of the clarity of the discussion, and possibly with improved methods that yield more convincing results. """ 645,"""wMAN: WEAKLY-SUPERVISED MOMENT ALIGNMENT NETWORK FOR TEXT-BASED VIDEO SEGMENT RETRIEVAL""","['vision', 'language', 'video moment retrieval']","""Given a video and a sentence, the goal of weakly-supervised video moment retrieval is to locate the video segment which is described by the sentence without having access to temporal annotations during training. Instead, a model must learn how to identify the correct segment (i.e. moment) when only being provided with video-sentence pairs. Thus, an inherent challenge is automatically inferring the latent correspondence between visual and language representations. To facilitate this alignment, we propose our Weakly-supervised Moment Alignment Network (wMAN) which exploits a multi-level co-attention mechanism to learn richer multimodal representations. The aforementioned mechanism comprises a Frame-By-Word interaction module as well as a novel Word-Conditioned Visual Graph (WCVG). Our approach also incorporates a novel application of positional encodings, commonly used in Transformers, to learn visual-semantic representations that contain contextual information of their relative positions in the temporal sequence through iterative message-passing. Comprehensive experiments on the DiDeMo and Charades-STA datasets demonstrate the effectiveness of our learned representations: our combined wMAN model not only outperforms the state-of-the-art weakly-supervised method by a significant margin but also does better than strongly-supervised state-of-the-art methods on some metrics.""","""This paper proposes a method for aligning an input text with the frames in a video that correspond to what the text describes in a weakly supervised way. The main technical contribution of the paper is the use of co-attention at different abstraction levels. Among the four reviewers, one reviewer advocates for the paper while the others find it to be a borderline reject paper. Reviewer 3, who was initially positive about the paper, expressed during the discussion period that he/she wanted to downgrade his/her rating to weak reject after reading the other reviewers' comments and concerns. The main concern of the reviewers is that the contribution of the paper is incremental, particularly since the idea of co-attention has been used in many different areas in other contexts. The authors responded in the rebuttal that the proposed approach incorporates different components such as positional encodings and differs from prior work, and that it experimentally performs better than other co-attention approaches such as LCGN.
Although the AC understands the authors' response, the majority of the reviewers are still not fully convinced about the contribution, and their opinions remain opposed to the paper.""" 646,"""Neural Machine Translation with Universal Visual Representation""","['Neural Machine Translation', 'Visual Representation', 'Multimodal Machine Translation', 'Language Representation']","""Though visual information has been introduced for enhancing neural machine translation (NMT), its effectiveness strongly relies on the availability of large amounts of bilingual parallel sentence pairs with manual image annotations. In this paper, we present a universal visual representation learned over the monolingual corpora with image annotations, which overcomes the lack of large-scale bilingual sentence-image pairs, thereby extending image applicability in NMT. In detail, a group of images with similar topics to the source sentence will be retrieved from a light topic-image lookup table learned over the existing sentence-image pairs, and then encoded as image representations by a pre-trained ResNet. An attention layer with a gated weighting is used to fuse the visual information and text information as input to the decoder for predicting target translations. In particular, the proposed method enables the visual information to be integrated into large-scale text-only NMT in addition to the multimodal NMT. Experiments on four widely used translation datasets, including the WMT'16 English-to-Romanian, WMT'14 English-to-German, WMT'14 English-to-French, and Multi30K, show that the proposed approach achieves significant improvements over strong baselines.""","""This paper proposes incorporating visual representations, learned in a monolingual setting with image annotations, into machine translation. Their approach obviates the need to have bilingual sentences aligned with image annotations, a very restricted resource. An attention layer allows the transformer to incorporate a topic-image lookup table. Their approach achieves significant improvements over strong baselines. The reviewers and the authors engaged in substantive discussions. This is a strong paper which should be included in ICLR. """ 647,"""AHash: A Load-Balanced One Permutation Hash""","['Data Representation', 'Probabilistic Algorithms']","""Minwise Hashing (MinHash) is a fundamental method to compute set similarities and compact high-dimensional data for efficient learning and searching. The bottleneck of MinHash is computing k (usually hundreds) MinHash values. One Permutation Hashing (OPH) only requires one permutation (hash function) to get k MinHash values by dividing elements into k bins. One drawback of OPH is that the load of the bins (the number of elements in a bin) could be unbalanced, which leads to the existence of empty bins and false similarity computation. Several strategies for densification, that is, filling empty bins, have been proposed. However, densification is just a remedial strategy and cannot eliminate the error incurred by the unbalanced load. Unlike densification, which fills the empty bins after they undesirably occur, our design goal is to balance the load so as to reduce empty bins in advance. In this paper, we propose a load-balanced hashing, Amortization Hashing (AHash), which can generate as few empty bins as possible. Therefore, AHash is more load-balanced and accurate without hurting runtime efficiency compared with OPH and densification strategies. Our experiments on real datasets validate the claim.
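[Editor's note: the One Permutation Hashing baseline that AHash (paper 647, above) improves on can be sketched in a few lines; with a sparse input set, some of the k bins inevitably stay empty, which is exactly the failure mode the abstract targets. A hypothetical illustration, not the paper's released code.]

```python
import numpy as np

def one_permutation_hash(element_ids, k, dim, seed=0):
    """One Permutation Hashing: permute the universe [0, dim) once,
    split it into k equal-width bins, and keep the minimum permuted
    value per bin. Bins that receive no elements stay empty (None)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(dim)          # a single random permutation
    bins = [None] * k
    width = dim // k
    for e in element_ids:
        p = perm[e]
        b = min(p // width, k - 1)       # which bin the permuted value falls in
        if bins[b] is None or p < bins[b]:
            bins[b] = int(p)
    return bins

# a sparse set leaves most of the 8 bins empty -- the motivation for
# densification strategies and for balancing the load up front
sketch = one_permutation_hash({3, 17, 42, 99}, k=8, dim=128)
```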
All source codes and datasets have been provided as Supplementary Materials and released on GitHub anonymously.""","""This paper proposes a load-balanced hashing called AHash that balances the load of hashing bins to avoid empty bins that appear in some minwise hashing methods. Reviewers found the work interesting and well-motivated. Authors addressed some clarity issues in their rebuttal. However, the impact appeared quite limited, and the experimental validation was restricted to a few realistic experiments that did not alleviate this concern. We thus recommend rejection.""" 648,"""High Fidelity Speech Synthesis with Adversarial Networks""","['texttospeech', 'speechsynthesis', 'audiosynthesis', 'gans', 'generativeadversarialnetworks', 'implicitgenerativemodels']","""Generative adversarial networks have seen rapid development in recent years and have led to remarkable improvements in generative modelling of images. However, their application in the audio domain has received limited attention, and autoregressive models, such as WaveNet, remain the state of the art in generative modelling of audio signals such as human speech. To address this paucity, we introduce GAN-TTS, a Generative Adversarial Network for Text-to-Speech. Our architecture is composed of a conditional feed-forward generator producing raw speech audio, and an ensemble of discriminators which operate on random windows of different sizes. The discriminators analyse the audio both in terms of general realism and in terms of how well the audio corresponds to the utterance that should be pronounced. To measure the performance of GAN-TTS, we employ both subjective human evaluation (MOS - Mean Opinion Score), as well as novel quantitative metrics (Fréchet DeepSpeech Distance and Kernel DeepSpeech Distance), which we find to be well correlated with MOS. We show that GAN-TTS is capable of generating high-fidelity speech with naturalness comparable to the state-of-the-art models, and unlike autoregressive models, it is highly parallelisable thanks to an efficient feed-forward generator. Listen to GAN-TTS reading this abstract at pseudo-url""","""The authors design a GAN-based text-to-speech synthesis model that performs competitively with state-of-the-art synthesizers. The reviewers and I agree that this appears to be the first really successful effort at GAN-based synthesis. Additional positives are that the model is designed to be highly parallelisable, and that the authors also propose several automatic measures of performance in addition to reporting human mean opinion scores. The automatic measures correlate well (though far from perfectly) with human judgments, and in any case are a nice contribution to the area of evaluation of generative models. It would be even more convincing if the authors presented human A/B forced-choice test results (in addition to the mean opinion scores), which are often included in speech synthesis evaluation, but this is a minor quibble.""" 649,"""Off-policy Multi-step Q-learning""","['Multi-step Learning', 'Off-policy Learning', 'Q-learning']","""In the past few years, off-policy reinforcement learning methods have shown promising results in their application to robot control. Deep Q-learning, however, still suffers from poor data-efficiency, which is limiting with regard to real-world applications.
We follow the idea of multi-step TD-learning to enhance data-efficiency while remaining off-policy by proposing two novel Temporal-Difference formulations: (1) Truncated Q-functions, which represent the return for the first n steps of a policy rollout, and (2) Shifted Q-functions, acting as the farsighted return after this truncated rollout. We prove that the combination of these short- and long-term predictions is a representation of the full return, leading to the Composite Q-learning algorithm. We show the efficacy of Composite Q-learning in the tabular case and compare our approach in the function-approximation setting with TD3, Model-based Value Expansion and TD3(Delta), which we introduce as an off-policy variant of TD(Delta). We show on three simulated robot tasks that Composite TD3 outperforms TD3 as well as state-of-the-art off-policy multi-step approaches in terms of data-efficiency.""","""The authors propose TD updates for Truncated Q-functions and Shifted Q-functions, reflecting short- and long-term predictions, respectively. They show that these can be combined to form an estimate of the full return, leading to a Composite Q-learning algorithm. They claim to demonstrate improved data-efficiency in the tabular setting and on three simulated robot tasks. All of the reviewers found the ideas in the paper interesting; however, based on the issues raised by Reviewer 3, everyone agreed that substantial revisions to the paper are necessary to properly incorporate the new results. As a result, I am recommending rejection for this submission at this time. I encourage the authors to incorporate the feedback from the reviewers, and believe that after that is done, the paper will be a strong submission. """ 650,"""Localised Generative Flows""","['Deep generative models', 'normalizing flows', 'variational inference']","""We argue that flow-based density models based on continuous bijections are limited in their ability to learn target distributions with complicated topologies, and propose localised generative flows (LGFs) to address this problem. LGFs are composed of stacked continuous mixtures of bijections, which enables each bijection to learn a local region of the target rather than its entirety. Our method is a generalisation of existing flow-based methods, which can be used without modification as the basis for an LGF model. Unlike normalising flows, LGFs do not permit exact computation of log likelihoods, but we propose a simple variational scheme that performs well in practice. We show empirically that LGFs yield improved performance across a variety of common density estimation tasks.""","""This paper proposes to overcome some fundamental limitations of normalizing flows by introducing auxiliary continuous latent variables. While the problem this paper is trying to address is mathematically legitimate, there is no strong evidence that this is a relevant problem in practice. Moreover, the proposed solution is not entirely novel, converting the flow into a latent-variable model. Overall, I believe this paper will be of minor relevance to the ICLR community.""" 651,"""Cyclical Stochastic Gradient MCMC for Bayesian Deep Learning""",[],"""The posteriors over neural network weights are high-dimensional and multimodal. Each mode typically characterizes a meaningfully different representation of the data. We develop Cyclical Stochastic Gradient MCMC (SG-MCMC) to automatically explore such distributions.
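[Editor's note: for the Composite Q-learning entry above (paper 649), the claimed decomposition of the full return into a truncated n-step value plus a discounted shifted value can be written directly. A hedged sketch with hypothetical function handles, assuming only the decomposition stated in the abstract.]

```python
def composite_q(q_truncated, q_shifted, state, action, gamma=0.99, n=3):
    """Full-return estimate from the two learned components: the
    truncated Q-function covers the first n steps of the rollout,
    and the shifted Q-function covers the farsighted remainder,
    discounted by gamma**n."""
    return q_truncated(state, action) + gamma ** n * q_shifted(state, action)

# toy usage with constant stand-in predictors
full_return = composite_q(lambda s, a: 1.5, lambda s, a: 10.0, None, None)
```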
In particular, we propose a cyclical stepsize schedule, where larger steps discover new modes, and smaller steps characterize each mode. We prove a non-asymptotic convergence theory for our proposed algorithm. Moreover, we provide extensive experimental results, including ImageNet, to demonstrate the effectiveness of cyclical SG-MCMC in learning complex multimodal distributions, especially for fully Bayesian inference with modern deep neural networks.""","""This paper proposes a novel stochastic gradient Markov chain Monte Carlo method incorporating a cyclical step size schedule (cyclical SG-MCMC). The authors argue that this step size schedule allows the sampler to cross modes (when the step size is large) and locally explore modes (when the step size is smaller). SG-MCMC is a very promising method for Bayesian deep learning as it is both scalable and easy to incorporate into existing models. However, the stochastic setting often leads to the sampler getting stuck in a local mode due to a requirement of a small step size (which itself is often due to leaving out the Metropolis-Hastings accept / reject step). The cyclic learning rate intuitively helps the sampler escape local modes. This property is demonstrated on synthetic problems in comparison to existing SG-MCMC baselines. The authors demonstrate improved negative log likelihood on larger scale deep learning benchmarks, which is appreciated as the related literature often restricts experiments to small scale problems. The reviewers all found the paper compelling and argued for acceptance; thus, the recommendation is to accept. Some questions remain for future work. E.g. all experiments were performed using a very low temperature, which implies that the methods are not sampling from the true Bayesian posterior. Why is such a low temperature needed for reasonable performance? In any case, a very nice paper.""" 652,"""Deep symbolic regression""","['symbolic regression', 'reinforcement learning', 'automated machine learning']","""Discovering the underlying mathematical expressions describing a dataset is a core challenge for artificial intelligence. This is the problem of symbolic regression. Despite recent advances in training neural networks to solve complex tasks, deep learning approaches to symbolic regression are lacking. We propose a framework that combines deep learning with symbolic regression via a simple idea: use a large model to search the space of small models. More specifically, we use a recurrent neural network to emit a distribution over tractable mathematical expressions, and employ reinforcement learning to train the network to generate better-fitting expressions. Our algorithm significantly outperforms standard genetic programming-based symbolic regression in its ability to exactly recover symbolic expressions on a series of benchmark problems, both with and without added noise. More broadly, our contributions include a framework that can be applied to optimize hierarchical, variable-length objects under a black-box performance metric, with the ability to incorporate a priori constraints in situ.""","""This paper suggests using RNNs and policy gradient methods for improving symbolic regression. The reviewers could not reach a consensus, and due to concerns about the clarity of the paper and the extensiveness of the experimental results, the paper does not currently appear to meet the bar for publication.
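[Editor's note: a common way to realize the cyclical stepsize schedule described in the Cyclical SG-MCMC entry above (paper 651) is a cosine that restarts every cycle, so each cycle opens with large exploration steps and ends with small sampling steps. A sketch under that assumption; the paper's exact schedule may differ in details.]

```python
import math

def cyclical_stepsize(k, total_iters, n_cycles, alpha0):
    """Cosine schedule restarted every cycle: large steps early in a
    cycle help jump between posterior modes; small steps late in a
    cycle let the sampler characterize the current mode."""
    cycle_len = math.ceil(total_iters / n_cycles)
    t = (k % cycle_len) / cycle_len            # position within the cycle, in [0, 1)
    return alpha0 / 2 * (math.cos(math.pi * t) + 1)

# four cycles over 1000 iterations, each decaying from alpha0 toward 0
steps = [cyclical_stepsize(k, 1000, 4, 0.1) for k in range(1000)]
```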
Also, while not mentioned in the reviews, there appears to be some work on symbolic regression aided by deep learning (see, for example, pseudo-url, which was found by searching ""symbolic regression deep learning""). I would thus also recommend the authors do a more thorough literature search for future revisions. """ 653,"""Discrete InfoMax Codes for Meta-Learning""","['meta-learning', 'generalization', 'discrete representations']","""This paper analyzes how generalization works in meta-learning. Our core contribution is an information-theoretic generalization bound for meta-learning, which identifies the expressivity of the task-specific learner as the key factor that makes generalization to new datasets difficult. Taking inspiration from our bound, we present Discrete InfoMax Codes (DIMCO), a novel meta-learning model that trains a stochastic encoder to output discrete codes. Experiments show that DIMCO requires less memory and less time for similar performance to previous metric learning methods and that our method generalizes particularly well in a challenging small-data setting.""","""The reviewers were unanimous that this submission is not ready for publication at ICLR in its current form. Concerns raised included that the method was not sufficiently general, including in the choice of experiments reported, and that some lines of closely related work were not discussed.""" 654,"""Low-Resource Knowledge-Grounded Dialogue Generation""",[],"""Responding with knowledge has been recognized as an important capability for an intelligent conversational agent. Yet knowledge-grounded dialogues, as training data for learning such a response generation model, are difficult to obtain. Motivated by the challenge in practice, we consider knowledge-grounded dialogue generation under a natural assumption that only limited training examples are available. In such a low-resource setting, we devise a disentangled response decoder in order to isolate parameters that depend on knowledge-grounded dialogues from the entire generation model. By this means, the major part of the model can be learned from a large number of ungrounded dialogues and unstructured documents, while the remaining small set of parameters can be well fitted using the limited training examples. Evaluation results on two benchmarks indicate that with only pseudo-formula training data, our model can achieve the state-of-the-art performance and generalize well on out-of-domain knowledge. ""","""The paper considers the problem of knowledge-grounded dialogue generation with low resources. The authors propose to disentangle the model into three components that can be trained on separate data, and achieve SOTA on three datasets. The reviewers agree that this is a well-written paper with a good idea, and strong empirical results, and I happily recommend acceptance.""" 655,"""Detecting Extrapolation with Local Ensembles""","['extrapolation', 'reliability', 'influence functions', 'laplace approximation', 'ensembles', 'Rashomon set']","""We present local ensembles, a method for detecting extrapolation at test time in a pre-trained model. We focus on underdetermination as a key component of extrapolation: we aim to detect when many possible predictions are consistent with the training data and model class. Our method uses local second-order information to approximate the variance of predictions across an ensemble of models from the same class.
We compute this approximation by estimating the norm of the component of a test point's gradient that aligns with the low-curvature directions of the Hessian, and provide a tractable method for estimating this quantity. Experimentally, we show that our method is capable of detecting when a pre-trained model is extrapolating on test data, with applications to out-of-distribution detection, detecting spurious correlates, and active learning.""","""This paper presents an ensembling approach to detect underdetermination for extrapolating to test points. The problem domain is interesting and the approach is simple and useful. While reviewers were positive about the work, they raised several points for improvement. The authors are strongly encouraged to include the discussion here in the final version.""" 656,"""On the Global Convergence of Training Deep Linear ResNets""",[],"""We study the convergence of gradient descent (GD) and stochastic gradient descent (SGD) for training pseudo-formula -hidden-layer linear residual networks (ResNets). We prove that for training deep residual networks with certain linear transformations at input and output layers, which are fixed throughout training, both GD and SGD with zero initialization on all hidden weights can converge to the global minimum of the training loss. Moreover, when specializing to appropriate Gaussian random linear transformations, GD and SGD provably optimize wide enough deep linear ResNets. Compared with the global convergence result of GD for training standard deep linear networks \citep{du2019width}, our condition on the neural network width is sharper by a factor of pseudo-formula, where pseudo-formula denotes the condition number of the covariance matrix of the training data. We further propose modified identity input and output transformations, and show that a pseudo-formula -wide neural network is sufficient to guarantee the global convergence of GD/SGD, where pseudo-formula are the input and output dimensions respectively.""","""This paper provides further analysis of convergence in deep linear networks. I recommend acceptance. """ 657,"""Large Batch Optimization for Deep Learning: Training BERT in 76 minutes""","['large-batch optimization', 'distributed training', 'fast optimizer']","""Training large deep neural networks on massive datasets is computationally very challenging. There has been a recent surge of interest in using large batch stochastic optimization methods to tackle this issue. The most prominent algorithm in this line of research is LARS, which, by employing layerwise adaptive learning rates, trains ResNet on ImageNet in a few minutes. However, LARS performs poorly for attention models like BERT, indicating that its performance gains are not consistent across tasks. In this paper, we first study a principled layerwise adaptation strategy to accelerate training of deep neural networks using large mini-batches. Using this strategy, we develop a new layerwise adaptive large batch optimization technique called LAMB; we then provide convergence analysis of LAMB as well as LARS, showing convergence to a stationary point in general nonconvex settings. Our empirical results demonstrate the superior performance of LAMB across various tasks such as BERT and ResNet-50 training with very little hyperparameter tuning. In particular, for BERT training, our optimizer enables the use of very large batch sizes of 32868 without any degradation of performance.
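[Editor's note: the quantity described in the Local Ensembles entry above (paper 655), the norm of the test-point gradient component lying in low-curvature directions of the training Hessian, can be sketched via a direct eigendecomposition. This is tractable only for small models; the paper provides a scalable estimator. Names here are hypothetical.]

```python
import numpy as np

def extrapolation_score(grad_test, hessian, curvature_tol=1e-3):
    """Norm of the component of the test-point gradient that aligns
    with low-curvature (near-zero eigenvalue) directions of the loss
    Hessian: a large norm indicates the prediction is underdetermined
    by the training data, i.e. the model is extrapolating."""
    eigvals, eigvecs = np.linalg.eigh(hessian)          # Hessian is symmetric
    low_curv = eigvecs[:, np.abs(eigvals) < curvature_tol]  # flat directions
    return np.linalg.norm(low_curv.T @ grad_test)       # projected-component norm
```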
By increasing the batch size to the memory limit of a TPUv3 Pod, BERT training time can be reduced from 3 days to just 76 minutes.""","""This paper presents a range of methods for overcoming the challenges of large-batch training with transformer models. While one reviewer still questions the utility of training with such large numbers of devices, there is certainly a segment of the community that focuses on large-batch training, and the ideas in this paper will hopefully find a range of uses. """ 658,"""Keyframing the Future: Discovering Temporal Hierarchy with Keyframe-Inpainter Prediction""","['representation learning', 'variational inference', 'video generation', 'temporal hierarchy']","""To flexibly and efficiently reason about temporal sequences, abstract representations that compactly represent the important information in the sequence are needed. One way of constructing such representations is by focusing on the important events in a sequence. In this paper, we propose a model that learns both to discover such key events (or keyframes) as well as to represent the sequence in terms of them. We do so using a hierarchical Keyframe-Inpainter (KeyIn) model that first generates keyframes and their temporal placement and then inpaints the sequences between keyframes. We propose a fully differentiable formulation for efficiently learning the keyframe placement. We show that KeyIn finds informative keyframes in several datasets with diverse dynamics. When evaluated on a planning task, KeyIn outperforms other recent proposals for learning hierarchical representations.""","""The paper addresses an interesting problem in video prediction, introducing a hierarchical approach: keyframes are first predicted, then intermediate frames are generated. While it is acknowledged that the authors take a step in the right direction, several issues remain: (i) the presentation of the paper could be improved; (ii) the experiments are not convincing enough (baselines, images not realistic enough, marginal improvements) to validate the viability of the proposed approach over existing ones. """ 659,"""An Empirical Study on Post-processing Methods for Word Embeddings""","['word vectors', 'post-processing method', 'centralised kernel alignment', 'shrinkage']","""Word embeddings learnt from large corpora have been adopted in various applications in natural language processing and serve as general input representations to learning systems. Recently, a series of post-processing methods have been proposed to boost the performance of word embeddings on similarity comparison and analogy retrieval tasks, and some have been adapted to compose sentence representations. The general hypothesis behind these methods is that by enforcing the embedding space to be more isotropic, the similarity between words can be better expressed. We view these methods as an approach to shrink the covariance/gram matrix, which is estimated by learning word vectors, towards a scaled identity matrix. By optimising an objective in the semi-Riemannian manifold with Centralised Kernel Alignment (CKA), we are able to search for the optimal shrinkage parameter, and provide a post-processing method to smooth the spectrum of learnt word vectors, which yields improved performance on downstream tasks.""","""This paper explores a post-processing method for word vectors to ""smooth the spectrum,"" and shows improvements on some downstream tasks. Reviewers had some questions about the strength of the results, and the results on words of differing frequency.
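[Editor's note: the core of the layerwise adaptation in the LAMB entry above (paper 657) is rescaling each layer's update by the ratio of the weight norm to the update norm. A simplified per-layer sketch that omits bias correction and other details of the published algorithm; names are hypothetical.]

```python
import numpy as np

def lamb_layer_update(w, m_hat, v_hat, lr, weight_decay=0.01, eps=1e-6):
    """Layerwise adaptation: scale the Adam-style update by the trust
    ratio ||w|| / ||update||, so every layer moves a distance
    proportional to its own weight norm -- the ingredient that keeps
    very large batch sizes stable."""
    update = m_hat / (np.sqrt(v_hat) + eps) + weight_decay * w
    trust_ratio = np.linalg.norm(w) / (np.linalg.norm(update) + eps)
    return w - lr * trust_ratio * update
```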
The reviewers also have comments on the clarity of the paper, as well as the exposition of some of the methods. Also, for future submissions to ICLR and other such conferences, it is more typical to address the reviewers' comments in a direct response rather than to make changes to the document without summarizing and pointing reviewers to these changes. Without direction about what was changed or where to look, there is a lot of burden being placed on the reviewers to find your responses to their comments.""" 660,"""INTERNAL-CONSISTENCY CONSTRAINTS FOR EMERGENT COMMUNICATION""","['Emergent Communication', 'Speaker-Listener Models']","""When communicating, humans rely on internally-consistent language representations. That is, as speakers, we expect listeners to behave the same way we do when we listen. This work proposes several methods for encouraging such internal consistency in dialog agents in an emergent communication setting. We consider two hypotheses about the effect of internal-consistency constraints: 1) that they improve agents' ability to refer to unseen referents, and 2) that they improve agents' ability to generalize across communicative roles (e.g. performing as a speaker despite only being trained as a listener). While we do not find evidence in favor of the former, our results show significant support for the latter.""","""This work examines how internal consistency objectives can help emergent communication, namely through possibly improving ability to refer to unseen referents and to generalize across communicative roles. Experimental results support the second hypothesis but not the first. Reviewers agree that this is an exciting object of study, but had reservations about the rationale for the first hypothesis (which was ultimately disproven), and for how the second hypothesis was investigated (lack of ablations to tease apart which part was most responsible for improvement, unsatisfactory framing). These concerns were not fully addressed by the response. While the paper is very promising and the direction quite interesting, this cannot in its current form be recommended for acceptance. We encourage authors to carefully examine reviewers' suggestions to improve their work for submission to another venue.""" 661,"""Collaborative Training of Balanced Random Forests for Open Set Domain Adaptation""",[],"""In this paper, we introduce a collaborative training algorithm of balanced random forests for domain adaptation tasks which can avoid the overfitting problem. In real scenarios, most domain adaptation algorithms face the challenges from noisy, insufficient training data. Moreover, in open set categorization, unknown or misaligned source and target categories add difficulty. In such cases, conventional methods suffer from overfitting and fail to successfully transfer the knowledge of the source to the target domain. To address these issues, the following two techniques are proposed. First, we introduce an optimized decision tree construction method, in which the data at each node are split into equal sizes while maximizing the information gain. Compared to conventional random forests, it generates larger and more balanced decision trees due to the even-split constraint, which contributes to enhanced discrimination power and reduced overfitting. Second, to tackle the domain misalignment problem, we propose a domain alignment loss, which penalizes uneven splits of the source and target domain data.
By collaboratively optimizing the information gain of the labeled source data as well as the entropy of unlabeled target data distributions, the proposed CoBRF algorithm achieves significantly better performance than the state-of-the-art methods. The proposed algorithm is extensively evaluated in various experimental setups in challenging domain adaptation tasks with noisy and small training data as well as open set domain adaptation problems, for two backbone networks of AlexNet and ResNet-50.""","""This paper proposes new target objectives for training random forests for better cross-domain generalizability. As reviewers mentioned, I think the idea of using random forests for domain adaptation is novel and interesting, and the proposed method has potential, especially in noisy settings. However, I think the paper can be much improved and is not ready for publication due to the following reviewers' comments: - This paper is not well-written and has too many unclear parts in the experiments and method section. The results are not guaranteed to be reproducible given the content of the paper. Also, the organization of the paper could be improved. - The open-set domain adaptation setting requires more elaboration. More carefully designed experiments should be presented. - It remains unclear how the feature extractors can be trained or fine-tuned in the DNN + tree architecture. Applying trees to high-dimensional features sacrifices the interpretability of the tree models, hampering the practical value of the approach. Hence, I recommend rejection.""" 662,"""Continuous Control with Contexts, Provably""","['continuous control', 'learning', 'context']","""A fundamental challenge in artificial intelligence is to build an agent that generalizes and adapts to unseen environments. A common strategy is to build a decoder that takes a context of the unseen new environment and generates a policy. The current paper studies how to build a decoder for the fundamental continuous control environment, the linear quadratic regulator (LQR), which can model a wide range of real-world physical environments. We present a simple algorithm for this problem, which uses upper confidence bound (UCB) to refine the estimate of the decoder and balance the exploration-exploitation trade-off. Theoretically, our algorithm enjoys a pseudo-formula regret bound in the online setting, where pseudo-formula is the number of environments the agent has played. This also implies that after playing pseudo-formula environments, the agent is able to transfer the learned knowledge to obtain an pseudo-formula -suboptimal policy for an unseen environment. To our knowledge, this is the first provably efficient algorithm to build a decoder in the continuous control setting. While our main focus is theoretical, we also present experiments that demonstrate the effectiveness of our algorithm.""","""This work considers the popular LQR objective but with [A,B] unknown and dynamically changing. At each time a context [C,D] is observed, and it is assumed that there exists a linear map Theta from [C,D] to [A,B]. The particular problem statement is novel, but it is heavily influenced by other MDP settings and also follows previous work very closely. The algorithm seems computationally intractable (a problem shared by the previous work this paper builds on), and so in experiments a gross approximation is used. Reviewers found the work very stylized and felt that it did not adequately engage with related work.
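[Editor's note: the UCB ingredient mentioned in the Continuous Control with Contexts entry above (paper 662) is, in spirit, the usual optimism bonus from the confidence ellipsoid of a least-squares estimate. A generic linear-UCB sketch of that ingredient, not the paper's full (and, per this meta-review, computationally hard) algorithm.]

```python
import numpy as np

def ucb_width(context, V, beta):
    """Optimism bonus sqrt(beta * x^T V^{-1} x): wide for context
    directions that are poorly explored, shrinking as the
    least-squares estimate of the decoder is refined."""
    x = context.ravel()
    return np.sqrt(beta * x @ np.linalg.solve(V, x))

# V accumulates outer products of observed contexts
V = np.eye(3)
for x in np.random.default_rng(0).normal(size=(10, 3)):
    V += np.outer(x, x)
width = ucb_width(np.ones(3), V, beta=2.0)
```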
For example, little attention is paid to switching linear systems, and the recent LQR advances are relegated to a list of references with no discussion. The reviewers also questioned how the theory relates to the traditional setting of LQR regret, say, if [C,D] were identity at all times so that Theta = [A,B]. This paper received 3 reviews (a third was added late to the process) and my own opinion influenced the decision. While the problem statement is interesting, the work fails to place itself in the context of existing work, and there are some questions about the algorithmic methods. """ 663,"""Deep Learning of Determinantal Point Processes via Proper Spectral Sub-gradient""","['determinantal point processes', 'deep learning', 'optimization']","""Determinantal point processes (DPPs) are an effective tool for delivering diversity on multiple machine learning and computer vision tasks. Under the deep learning framework, DPPs are typically optimized via approximation, which is not straightforward and conflicts with the diversity requirement. We note, however, that there have been no deep learning paradigms to optimize DPPs directly, since doing so involves matrix inversion, which may result in high computational instability. This fact greatly hinders the wide use of DPPs on some specific objectives where the DPP serves as a term to measure feature diversity. In this paper, we devise a simple but effective algorithm to address this issue and optimize the DPP term directly, expressed with an L-ensemble in the spectral domain over the Gram matrix, which is more flexible than learning on parametric kernels. By further taking into account some geometric constraints, our algorithm seeks to generate valid sub-gradients of the DPP term in cases where the DPP Gram matrix is not invertible (no gradients exist in this case). In this sense, our algorithm can be easily incorporated with multiple deep learning tasks. Experiments show the effectiveness of our algorithm, indicating promising performance for practical learning problems. ""","""Most reviewers seem in favour of accepting this paper, with the borderline rejection being satisfied with acceptance if the authors take special heed of their comments to improve the clarity of the paper when preparing the final version. From examination of the reviews, the paper achieves enough to warrant publication. Accept.""" 664,"""Augmenting Self-attention with Persistent Memory""","['transformer', 'language modeling', 'self-attention']","""Transformer networks have led to important progress in language modeling and machine translation. These models include two consecutive modules, a feed-forward layer and a self-attention layer. The latter allows the network to capture long-term dependencies and is often regarded as the key ingredient in the success of Transformers. Building upon this intuition, we propose a new model that solely consists of attention layers. More precisely, we augment the self-attention layers with persistent memory vectors that play a role similar to that of the feed-forward layer. Thanks to these vectors, we can remove the feed-forward layer without degrading the performance of a transformer. Our evaluation shows the benefits brought by our model on standard character- and word-level language modeling benchmarks.""","""This paper proposes a modification to the Transformer architecture in which the self-attention and feed-forward layer are merged into a self-attention layer with ""persistent"" memory vectors.
This involves concatenating the contextual representations with global, learned memory vectors, which are attended over. Experiments show slight gains on character- and word-level language modeling benchmarks. While the proposed architectural changes are interesting, they are also rather minor and had a small impact on performance and on the number of model parameters. The motivation of the persistent memory vectors as a replacement for the FF layer is a bit tenuous, since Eqs. 5 and 9 are substantially different. Overall the contribution seems a bit thin for an ICLR paper. I suggest more analysis and possibly experimentation on other tasks in a future iteration of this paper.""" 665,"""Using Explainability to Detect Adversarial Attacks""","['adversarial', 'detection', 'explainability']","""Deep learning models are often sensitive to adversarial attacks, where carefully-designed input samples can cause the system to produce incorrect decisions. Here we focus on the problem of detecting attacks, rather than robust classification, since detecting that an attack occurs may be even more important than avoiding misclassification. We build on advances in explainability, where activity-map-like explanations are used to justify and validate decisions, by highlighting features that are involved with a classification decision. The key observation is that it is hard to create explanations for incorrect decisions. We propose EXAID, a novel attack-detection approach, which uses model explainability to identify images whose explanations are inconsistent with the predicted class. Specifically, we use SHAP, which uses Shapley values in the space of the input image, to identify which input features contribute to a class decision. Interestingly, this approach does not require modifying the attacked model, and it can be applied without modelling a specific attack. It can therefore be applied successfully to detect unfamiliar attacks that were unknown at the time the detection model was designed. We evaluate EXAID on two benchmark datasets, CIFAR-10 and SVHN, and against three leading attack techniques, FGSM, PGD and C&W. We find that EXAID improves over the SoTA detection methods by a large margin across a wide range of noise levels, improving detection from 70% to over 90% for small perturbations.""","""This paper proposes EXAID, a method to detect adversarial attacks by building on the advances in explainability (particularly SHAP), where activity-map-like explanations are used to justify and validate decisions. Though it may have some valuable ideas, the execution is not satisfying, with various issues raised in the comments. No rebuttal was provided.""" 666,"""Learning with Protection: Rejection of Suspicious Samples under Adversarial Environment""","['Learning with Rejection', 'Adversarial Examples']","""We propose a novel framework for avoiding the misclassification of data by using a framework of learning with rejection and adversarial examples. Recent developments in machine learning have opened new opportunities for industrial innovations such as self-driving cars. However, many machine learning models are vulnerable to adversarial attacks and industrial practitioners are concerned about accidents arising from misclassification. To avoid critical misclassifications, we define a sample that is likely to be mislabeled as a suspicious sample. Our main idea is to apply a framework of learning with rejection and adversarial examples to assist in the decision making for such suspicious samples.
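[Editor's note: the mechanism summarized at the start of this meta-review (paper 664), attending jointly over the context and a set of learned, input-independent vectors, is easy to sketch. A single-head, unbatched illustration with hypothetical shapes, not the authors' implementation.]

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def persistent_attention(x, persistent, wq, wk, wv):
    """Self-attention where learned 'persistent' vectors are
    concatenated to the keys/values, standing in for the role the
    feed-forward layer plays in a standard transformer block."""
    kv = np.concatenate([x, persistent], axis=0)   # context + memory slots
    q, k, v = x @ wq, kv @ wk, kv @ wv             # queries come from x only
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return attn @ v

# toy shapes: 5 tokens, 8 persistent vectors, model dimension 16
d = 16
x, mem = np.random.randn(5, d), np.random.randn(8, d)
out = persistent_attention(x, mem, *[np.random.randn(d, d) for _ in range(3)])
```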
We propose two frameworks: learning with rejection under adversarial attacks, and learning with protection. Learning with rejection under adversarial attacks is a naive extension of the learning with rejection framework for handling adversarial examples. Learning with protection is a practical application of learning with rejection under adversarial attacks. This algorithm transforms the original multi-class classification problem into a binary classification for a specific class, and we reject suspicious samples to protect a specific label. We demonstrate the effectiveness of the proposed method in experiments.""","""The paper addresses the setting of learning with rejection while incorporating the ideas from learning with adversarial examples to tackle adversarial attacks. While the reviewers acknowledged the importance of studying learning with rejection in this setting, they raised several concerns: (1) lack of technical contribution -- see R1's and R2's related references, and see R3's suggestion on designing c(x); (2) insufficient empirical evidence -- see R3's comment about the sensitivity experiment on the strength of the attack, and see R1's suggestion to compare with a baseline that learns the rejection function, such as SelectiveNet; (3) clarity of presentation -- see R2's suggestions on how to improve clarity. Among these, (3) did not have a substantial impact on the decision, but would be helpful to address in a subsequent revision. However, (1) and (2) make it very difficult to assess the benefits of the proposed approach, and were viewed by the AC as critical issues. The AC can confirm that all three reviewers have read the author responses and have revised the final ratings. The AC suggests that, in its current state, the manuscript is not ready for publication. We hope the reviews are useful for improving and revising the paper. """ 667,"""Emergence of Compositional Language with Deep Generational Transmission""","['Cultural Evolution', 'Deep Learning', 'Language Emergence']","""Recent work has studied the emergence of language among deep reinforcement learning agents that must collaborate to solve a task. Of particular interest are the factors that cause language to be compositional -- i.e., express meaning by combining words which themselves have meaning. Evolutionary linguists have found that in addition to structural priors like those already studied in deep learning, the dynamics of transmitting language from generation to generation contribute significantly to the emergence of compositionality. In this paper, we introduce these cultural evolutionary dynamics into language emergence by periodically replacing agents in a population to create a knowledge gap, implicitly inducing cultural transmission of language. We show that this implicit cultural transmission encourages the resulting languages to exhibit better compositional generalization.""","""This paper explores the emergence of language in environments that demand agents communicate, focusing on the compositionality of language, and the cultural transmission of language. Reviewer 1 has several suggestions about new experiments that are possible. The AC does think there is value in many of the suggested experiments, if not to run them, then just to acknowledge their possibility and leave them for future work. The reviewers also point to some previous work that is very similar. E.g.
""Ease-of-Teaching and Language Structure from Emergent Communication"", Funshan Li et al""" 668,"""Mogrifier LSTM""","['lstm', 'language modelling']","""Many advances in Natural Language Processing have been based upon more expressive models for how inputs interact with the context in which they occur. Recurrent networks, which have enjoyed a modicum of success, still lack the generalization and systematicity ultimately required for modelling language. In this work, we propose an extension to the venerable Long Short-Term Memory in the form of mutual gating of the current input and the previous output. This mechanism affords the modelling of a richer space of interactions between inputs and their context. Equivalently, our model can be viewed as making the transition function given by the LSTM context-dependent. Experiments demonstrate markedly improved generalization on language modelling in the range of 34 perplexity points on Penn Treebank and Wikitext-2, and 0.010.05 bpc on four character-based datasets. We establish a new state of the art on all datasets with the exception of Enwik8, where we close a large gap between the LSTM and Transformer models. ""","""This paper presents a new twist on the typical LSTM that applies several rounds of gating on the history and input, with the end result that the LSTM's transition function is effectively context-dependent. The performance of the model is illustrated on several datasets. In general, the reviews were positive, with one score being upgraded during the rebuttal period. One of the reviewers complained that the baselines were not adequate, but in the end conceded that the results were still worthy of publication. One reviewer argued very hard for the acceptance of this paper ""Papers that are as clear and informative as this one are few and far between. ... As such, I vehemently argue in favor of this paper being accepted to ICLR.""""" 669,"""AlgoNet: pseudo-formula Smooth Algorithmic Neural Networks""","['Algorithms', 'Smoothness', 'Differentiable', 'Inverse Problems', 'Adversarial Training', 'Neural Networks', 'Deep Learning', 'Differentiable Renderer', '3D Mesh', 'Turing-completeness', 'Library']","""Artificial neural networks have revolutionized many areas of computer science in recent years, providing solutions to a number of previously unsolved problems. On the other hand, for many problems, classic algorithms exist, which typically exceed the accuracy and stability of neural networks. To combine these two concepts, we present a new kind of neural networksalgorithmic neural networks (AlgoNets). These networks integrate smooth versions of classic algorithms into the topology of neural networks. A forward AlgoNet includes algorithmic layers into existing architectures to enhance performance and explainability while a backward AlgoNet enables solving inverse problems without or with only weak supervision. In addition, we present the algonet package, a PyTorch based library that includes, inter alia, a smoothly evaluated programming language, a smooth 3D mesh renderer, and smooth sorting algorithms.""","""The paper does not provide theory or experiment to justify the various proposed relaxations. In its current form, it has very limited scope.""" 670,"""CEB Improves Model Robustness""","['Information Theory', 'Adversarial Robustness']","""We demonstrate that the Conditional Entropy Bottleneck (CEB) can improve model robustness. CEB is an easy strategy to implement and works in tandem with data augmentation procedures. 
We report results of a large-scale adversarial robustness study on CIFAR-10, as well as on the IMAGENET-C Common Corruptions Benchmark.""","""This paper proposes CEB, Conditional Entropy Bottleneck, as a way to improve the robustness of a model against adversarial attacks and noisy data. The model is tested empirically using several experiments and various datasets. We thank the authors for submitting the paper to ICLR and providing detailed responses to the reviewers' comments and concerns. After the initial reviews and rebuttal, we had extensive discussions to judge whether the contributions are clear and sufficient for publication. In particular, we discussed the overlap with a previous (arXiv) paper and decided that the overlap should not be considered because it is not published at a conference or journal. Moreover, the paper makes additional contributions. However, reviewers in the end did not think the paper sufficiently explained and demonstrated why and how the model works, or whether the approach improves upon other state-of-the-art adversarial defense approaches. Again, thank you for submitting to ICLR, and I hope to see an improved version in a future publication.""" 671,"""Kernel and Rich Regimes in Overparametrized Models""","['Overparametrized', 'Implicit', 'Bias', 'Regularization', 'Kernel', 'Rich', 'Adaptive', 'Regime']","""A recent line of work studies overparametrized neural networks in the ""kernel regime,"" i.e. when the network behaves during training as a kernelized linear predictor, and thus training with gradient descent has the effect of finding the minimum RKHS norm solution. This stands in contrast to other studies which demonstrate how gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms. Building on an observation by Chizat and Bach, we show how the scale of the initialization controls the transition between the ""kernel"" (aka lazy) and ""rich"" (aka active) regimes and affects generalization properties in multilayer homogeneous models. We provide a complete and detailed analysis for a simple two-layer model that already exhibits an interesting and meaningful transition between the kernel and rich regimes, and we demonstrate the transition for more complex matrix factorization models and multilayer non-linear networks. ""","""The paper studies how the size of the initialization of neural network weights affects whether the resulting training puts the network in a ""kernel regime"" or a ""rich regime"". Using a two-layer model, they show, theoretically and empirically, the transition between kernel and rich regimes. Further experiments are provided for more complex settings. The scores of the reviewers were widely spread, with a high score (8) from a low-confidence reviewer with a very short review. While the authors responded to the reviewer comments, two of the reviewers (importantly including the one recommending rejection) did not further engage. Overall, the paper studies an important problem, and provides insight into how weight initialization size can affect the final network.
Unfortunately, there are many strong submissions to ICLR this year, and the submission in its current state is not yet suitable for publication.""" 672,"""Contrastive Learning of Structured World Models""","['state representation learning', 'graph neural networks', 'model-based reinforcement learning', 'relational learning', 'object discovery']","""A structured understanding of our world in terms of objects, relations, and hierarchies is an important component of human cognition. Learning such a structured world model from raw sensory data remains a challenge. As a step towards this goal, we introduce Contrastively-trained Structured World Models (C-SWMs). C-SWMs utilize a contrastive approach for representation learning in environments with compositional structure. We structure each state embedding as a set of object representations and their relations, modeled by a graph neural network. This allows objects to be discovered from raw pixel observations without direct supervision as part of the learning process. We evaluate C-SWMs on compositional environments involving multiple interacting objects that can be manipulated independently by an agent, simple Atari games, and a multi-object physics simulation. Our experiments demonstrate that C-SWMs can overcome limitations of models based on pixel reconstruction and outperform typical representatives of this model class in highly structured environments, while learning interpretable object-based representations.""","""This paper presents an approach to learn state representations of the scene as well as their action-conditioned transition model, applying contrastive learning on top of a graph neural network. The reviewers unanimously agree that this paper contains a solid research contribution, and the authors' response to the reviews further resolved their concerns.""" 673,"""CNAS: Channel-Level Neural Architecture Search""",['Neural architecture search'],"""There is growing interest in automating designing good neural network architectures. The NAS methods proposed recently have significantly reduced architecture search cost by sharing parameters, but there is still a challenging problem of designing the search space. We observe that a search space is typically defined by its shape and a set of operations, and propose a channel-level architecture search (CNAS) method using only a fixed type of operation. The resulting architecture is sparse in terms of channels and has a different topology at each cell. The experimental results for CIFAR-10 and ImageNet show that a fine-grained and sparse model searched by CNAS achieves very competitive performance with dense models searched by the existing methods.""","""This paper proposes a channel pruning approach based on one-shot neural architecture search (NAS). As agreed by all reviewers, it has limited novelty, and the method can be viewed as a straightforward combination of NAS and pruning. Experimental results are not convincing. The proposed method is not better than SOTA in accuracy or number of parameters. The setup is not fair, as the proposed method uses AutoAugment while the other baselines do not. The authors should also compare with related methods such as BayesNAS, and other pruning techniques.
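[Editor's note: the contrastive objective in the C-SWMs entry above (paper 672) pairs an energy pulling the predicted next embedding toward the true one with a hinge pushing random negatives away. A simplified sketch that collapses the per-object GNN structure into a single latent vector; names are hypothetical.]

```python
import numpy as np

def cswm_loss(z, action, z_next, z_neg, transition, margin=1.0):
    """Contrastive world-model loss: the transition model predicts the
    change in latent state; the true next embedding should be close,
    and a random negative state embedding at least `margin` away."""
    pred = z + transition(z, action)                        # predicted next state
    pos = np.sum((pred - z_next) ** 2)                      # pull positives together
    neg = max(0.0, margin - np.sum((z_neg - z_next) ** 2))  # push negatives apart
    return pos + neg

# toy usage with a stand-in linear transition model
d = 4
loss = cswm_loss(np.zeros(d), 0, np.ones(d) * 0.1, np.ones(d),
                 transition=lambda z, a: 0.1 * z)
```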
Finally, the paper is poorly written, and many related works are missing.""" 674,"""How the Softmax Activation Hinders the Detection of Adversarial and Out-of-Distribution Examples in Neural Networks""","['Adversarial examples', 'out-of-distribution', 'detection', 'softmax', 'logits']","""Despite achieving excellent performance on a wide variety of tasks, modern neural networks are unable to provide a prediction with a reliable confidence estimate which would allow the detection of misclassifications. This limitation is at the heart of what is known as an adversarial example, where the network provides a wrong prediction with strong confidence for a slightly modified image. Moreover, this overconfidence issue has also been observed for out-of-distribution data. We show through several experiments that the softmax activation, usually placed as the last layer of modern neural networks, is partly responsible for this behaviour. We give qualitative insights about its impact on the MNIST dataset, showing that relevant information present in the logits is lost once the softmax function is applied. The same observation is made through quantitative analysis, as we show that two out-of-distribution and adversarial example detectors obtain competitive results when using logit values as inputs, but provide considerably lower performance if they use softmax probabilities instead: from 98.0% average AUROC to 56.8% in some settings. These results provide evidence that the softmax activation hinders the detection of adversarial and out-of-distribution examples, as it masks a significant part of the relevant information present in the logits.""","""The paper investigates how the softmax activation hinders the detection of out-of-distribution examples. All the reviewers felt that the paper requires more work before it can be accepted. In particular, the reviewers raised several concerns about theoretical justification, comparison to other existing methods, discussion of connections to existing methods, and scalability to larger numbers of classes. I encourage the authors to revise the draft based on the reviewers' feedback and resubmit to a different venue. """ 675,"""Benefits of Overparameterization in Single-Layer Latent Variable Generative Models""","['overparameterization', 'unsupervised', 'parameter recovery', 'rigorous experiments']","""One of the most surprising and exciting discoveries in supervised learning was the benefit of overparameterization (i.e. training a very large model) in improving the optimization landscape of a problem, with minimal effect on statistical performance (i.e. generalization). In contrast, unsupervised settings have been under-explored, despite the fact that overparameterization has been observed to be helpful as early as Dasgupta & Schulman (2007). In this paper, we perform an exhaustive study of different aspects of overparameterization in unsupervised learning via synthetic and semi-synthetic experiments. We discuss benefits to different metrics of success (recovering the parameters of the ground-truth model, held-out log-likelihood), sensitivity to variations of the training algorithm, and behavior as the amount of overparameterization increases. We find that, when learning using methods such as variational inference, larger models can significantly increase the number of ground truth latent variables recovered.""","""This paper studies over-parameterization for unsupervised learning. The paper presents a series of empirical studies on this topic.
Among other things, the authors observe that larger models can increase the number of latent variables recovered when fitting larger variational inference models. The reviewers raised some concerns about the simplicity of the models studied and also about the lack of theoretical justification. One reviewer also suggests that more experiments and ablation studies on more general models would further help clarify the role of over-parameterized models in latent generative models. I agree with the reviewers that this paper provides a ""compelling reason for theoretical research on the interplay between overparameterization and parameter recovery in latent variable neural networks trained with gradient descent methods"". I disagree with the reviewers that a theoretical study is required, as I think a good empirical paper with clear conjectures is as important. I do agree with the reviewers, however, that for an empirical paper the studies would have to be a bit more thorough, with clearer conjectures. In summary, I think the paper is nice and raises a lot of interesting questions, but it can be improved with more thorough studies and conjectures. I would have liked to have the paper accepted, but based on the reviewer scores and other papers in my batch I cannot recommend acceptance at this time. I strongly recommend that the authors revise and resubmit. I really think this is a nice paper that has a lot of potential and can have impact with appropriate revision.""" 676,"""Towards Certified Defense for Unrestricted Adversarial Attacks""","['Adversarial Defense', 'Certified Defense', 'Adversarial Examples']","""Certified defenses against adversarial examples are very important in safety-critical applications of machine learning. However, existing certified defense strategies only safeguard against perturbation-based adversarial attacks, where the attacker is only allowed to modify normal data points by adding small perturbations. In this paper, we provide certified defenses under the more general threat model of unrestricted adversarial attacks. We allow the attacker to generate arbitrary inputs to fool the classifier, and assume the attacker knows everything except the classifier's parameters and the training dataset used to learn it. Lack of knowledge about the classifier's parameters prevents an attacker from generating adversarial examples successfully. Our defense draws inspiration from differential privacy, and is based on intentionally adding noise to the classifier's outputs to limit the attacker's knowledge about the parameters. We prove concrete bounds on the minimum number of queries required for any attacker to generate a successful adversarial attack. For a simple linear classifier, we prove that the bound is asymptotically optimal up to a constant by exhibiting an attack algorithm that achieves this lower bound. We empirically show the success of our defense strategy against strong black-box attack algorithms.""","""This paper proposes a certified defense under the more general threat model beyond additive perturbation. The proposed defense method is based on adding noise to the classifier's outputs to limit the attacker's knowledge about the parameters, which is similar to a differential privacy mechanism. The authors proved the query complexity for any attacker to generate a successful adversarial attack.
The main objections to this work are that (1) the assumption about the attacker and the definition of the query complexity (recovering the optimal classifier rather than successfully generating an adversarial example) are uncommon, (2) the claim is misleading, and (3) the experimental evaluation is not sufficient (only two attacks are evaluated). The authors only provided a brief response to address the reviewers' comments/questions, without submitting a revision. Unfortunately, none of the reviewers is in support of this paper, even after the author response. """ 677,"""Graph Convolutional Reinforcement Learning""",[],"""Learning to cooperate is crucially important in multi-agent environments. The key is to understand the mutual interplay between agents. However, multi-agent environments are highly dynamic, where agents keep moving and their neighbors change quickly. This makes it hard to learn abstract representations of the mutual interplay between agents. To tackle these difficulties, we propose graph convolutional reinforcement learning, where graph convolution adapts to the dynamics of the underlying graph of the multi-agent environment, and relation kernels capture the interplay between agents through their relation representations. Latent features produced by convolutional layers with gradually increased receptive fields are exploited to learn cooperation, and cooperation is further improved by temporal relation regularization for consistency. Empirically, we show that our method substantially outperforms existing methods in a variety of cooperative scenarios.""","""The work proposes a graph convolutional network based approach to multi-agent reinforcement learning. This approach is designed to be able to adaptively capture changing interactions between agents. Initial reviews highlighted several limitations, but these were largely addressed by the authors. The resulting paper makes a valuable contribution by proposing a well-motivated approach, and by conducting extensive empirical validation and analysis that result in novel insights. I encourage the authors to take on board any remaining reviewer suggestions as they prepare the camera-ready version of the paper.""" 678,"""Disentangling Factors of Variations Using Few Labels""",[],"""Learning disentangled representations is considered a cornerstone problem in representation learning. Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not allow one to consistently learn disentangled representations. However, in many practical settings, one might have access to a limited amount of supervision, for example through manual labeling of (some) factors of variation in a few training examples. In this paper, we investigate the impact of such supervision on state-of-the-art disentanglement methods and perform a large scale study, training over 52000 models under well-defined and reproducible experimental conditions. We observe that a small number of labeled examples (0.01--0.5% of the data set), with potentially imprecise and incomplete labels, is sufficient to perform model selection on state-of-the-art unsupervised models. Further, we investigate the benefit of incorporating supervision into the training process.
Overall, we empirically validate that with little and imprecise supervision it is possible to reliably learn disentangled representations.""","""This paper addresses the problem of learning disentangled representations and shows that the introduction of a few labels corresponding to the desired factors of variation can be used to increase the separation of the learned representation. There were mixed scores for this work. Two reviewers recommended weak acceptance while one reviewer recommended rejection. All reviewers and authors agreed that the main conclusion, that the labeled factors of variation can be used to improve disentanglement, is perhaps expected. However, reviewers 2 and 3 argue that this work presents extensive experimental evidence to support this claim, which will be of value to the community. The main concerns of R1 center around a lack of clear analysis and synthesis of the large number of experiments. Though there is a page limit, we encourage the authors to revise their manuscript with a specific focus on clarity and take-away messages from their results. After careful consideration of all reviewer comments and author rebuttals, the AC recommends acceptance of this work. The potential contribution of the extensive experimental evidence warrants presentation at ICLR. However, again, we encourage the authors to consider ways to mitigate the concerns of R1 in their final manuscript. """ 679,"""Meta-Graph: Few shot Link Prediction via Meta Learning""","['Meta Learning', 'Link Prediction', 'Graph Representation Learning', 'Graph Neural Networks']","""We consider the task of few shot link prediction, where the goal is to predict missing edges across multiple graphs using only a small sample of known edges. We show that current link prediction methods are generally ill-equipped to handle this task---as they cannot effectively transfer knowledge between graphs in a multi-graph setting and are unable to effectively learn from very sparse data. To address this challenge, we introduce a new gradient-based meta learning framework, Meta-Graph, that leverages higher-order gradients along with a learned graph signature function that conditionally generates a graph neural network initialization. Using a novel set of few shot link prediction benchmarks, we show that Meta-Graph enables not only fast adaptation but also better final convergence and can effectively learn using only a small sample of true edges.""","""This paper presents a new link prediction framework for the case of a small number of labels, using meta learning methods. The reviewers think the problem is important, and the proposed approach is an adaptation of meta learning to this setting. However, the method is not compared to other knowledge graph completion methods such as TransE, RotatE, and Neural Tensor Factorization on benchmark datasets such as FB15k and Freebase. Adding these comparisons would make the paper more convincing. """ 680,"""Neural Subgraph Isomorphism Counting""","['subgraph isomorphism', 'graph neural networks']","""In this paper, we study a new graph learning problem: learning to count subgraph isomorphisms. Although the learning based approach is inexact, we are able to generalize to count large patterns and data graphs in polynomial time, compared to the exponential time of the original NP-complete problem. Different from other traditional graph learning problems such as node classification and link prediction, subgraph isomorphism counting requires more global inference to oversee the whole graph.
To tackle this problem, we propose a dynamic intermedium attention memory network (DIAMNet) which augments different representation learning architectures and iteratively attends to pattern and target data graphs to memorize different subgraph isomorphisms for the global counting. We develop both small-graph (<= 1,024 subgraph isomorphisms each) and large-graph (<= 4,096 subgraph isomorphisms each) sets to evaluate different models. Experimental results show that learning-based subgraph isomorphism counting can help reduce the time complexity with acceptable accuracy. Our DIAMNet can further improve existing representation learning models for this more global problem.""","""This paper proposes a method called Dynamic Intermedium Attention Memory Network (DIAMNet) to learn subgraph isomorphism counting for a given pattern graph P and target graph G. However, the reviewers think the experimental comparisons are insufficient. Furthermore, the evaluation is only on synthetic datasets whose generating process is designed by the authors. If possible, evaluation on benchmark graph datasets would be more convincing, though creating the ground truth might be difficult for larger graphs. """ 681,"""Learning relevant features for statistical inference""","['unsupervised learning', 'non-parametric probabilistic model', 'singular value decomposition', 'fisher information metric', 'chi-squared distance']","""We introduce a new technique to learn correlations between two types of data. The learned representation can be used to directly compute the expectations of functions over one type of data conditioned on the other, such as Bayesian estimators and their standard deviations. Specifically, our loss function teaches two neural nets to extract features representing the probability vectors of highest singular value for the stochastic map (set of conditional probabilities) implied by the joint dataset, relative to the inner product defined by the Fisher information metrics evaluated at the marginals. We test the approach using a synthetic dataset, analytical calculations, and inference on occluded MNIST images. Surprisingly, when applied to supervised learning (one dataset consists of labels), this approach automatically provides regularization and faster convergence compared to the cross-entropy objective. We also explore using this approach to discover salient independent features of a single dataset. ""","""This manuscript proposes an approach for estimating cross-correlations between model outputs, related to deep CCA. The authors note that the procedure improves results when applied to supervised learning problems. The reviewers have pointed out the close connection to previous work on deep CCA, and the author(s) have agreed. The reviewers agree that the paper has promise if properly expanded both theoretically and empirically.""" 682,"""Policy Tree Network""",['Reinforcement Learning'],"""Decision-time planning policies with implicit dynamics models have been shown to work in discrete action spaces with Q learning. However, decision-time planning with implicit dynamics models in continuous action spaces has proven to be a difficult problem. Recent work in Reinforcement Learning has allowed for implicit model based approaches to be extended to Policy Gradient methods. In this work we propose Policy Tree Network (PTN). Policy Tree Network lies at the intersection of Model-Based Reinforcement Learning and Model-Free Reinforcement Learning.
Policy Tree Network is a novel approach which, for the first time, demonstrates how to leverage an implicit model to perform decision-time planning with Policy Gradient methods in continuous action spaces. This work is empirically justified on 8 standard MuJoCo environments so that it can easily be compared with similar work done in this area. Additionally, we offer a lower bound on the worst-case change in the mean of the policy when tree planning is used and theoretically justify our design choices.""","""The consensus amongst the reviewers is that the paper discusses an interesting idea and shows significant promise, but that the presentation of the initial submission was not of a publishable standard. While some of the issues were clarified during discussion, the reviewers agree that the paper lacks polish and is therefore not ready. While I think Reviewer #3 is overly strict in sticking to a 1, as it is the nature of ICLR to allow papers to be improved through the discussion, in the absence of any of the reviewers being ready to champion the paper, I cannot recommend acceptance. I however have no doubt that with further work on the presentation of what sounds like a potentially fascinating contribution to the field, the paper will stand a chance at acceptance at a future conference.""" 683,"""Extreme Classification via Adversarial Softmax Approximation""","['Extreme classification', 'negative sampling']","""Training a classifier over a large number of classes, known as 'extreme classification', has become a topic of major interest with applications in technology, science, and e-commerce. Traditional softmax regression induces a gradient cost proportional to the number of classes C, which often is prohibitively expensive. A popular scalable softmax approximation relies on uniform negative sampling, which suffers from slow convergence due to a poor signal-to-noise ratio. In this paper, we propose a simple training method for drastically enhancing the gradient signal by drawing negative samples from an adversarial model that mimics the data distribution. Our contributions are three-fold: (i) an adversarial sampling mechanism that produces negative samples at a cost only logarithmic in C, thus still resulting in cheap gradient updates; (ii) a mathematical proof that this adversarial sampling minimizes the gradient variance while any bias due to non-uniform sampling can be removed; (iii) experimental results on large scale data sets that show a reduction of the training time by an order of magnitude relative to several competitive baselines. ""","""The paper proposes a fast training method for extreme classification problems where the number of classes is very large. The method improves on negative sampling (a method which uses a uniform distribution to sample the negatives) by using an adversarial auxiliary model to sample negatives in a non-uniform manner. This has logarithmic computational cost and minimizes the variance in the gradients. There were some concerns about missing empirical comparisons with methods that use the sampled-softmax approach for extreme classification. While these comparisons will certainly add further value to the paper, the improvement over the widely used method of negative sampling and a formal analysis of the improvement from hard negatives are valuable contributions in themselves that will be of interest to the community.
The authors should include the experiments on small datasets quantifying the approximation gap of negative sampling relative to the full softmax, as promised.""" 684,"""Off-policy Bandits with Deficient Support""","['Recommender System', 'Search Engine', 'Counterfactual Learning']","""Off-policy training of contextual-bandit policies is attractive in online systems (e.g. search, recommendation, ad placement), since it enables the reuse of large amounts of log data from the production system. State-of-the-art methods for off-policy learning, however, are based on inverse propensity score (IPS) weighting, which requires that the logging policy chooses all actions with non-zero probability for any context (i.e., full support). In real-world systems, this condition is often violated, and we show that existing off-policy learning methods based on IPS weighting can fail catastrophically. We therefore develop new off-policy contextual-bandit methods that can controllably and robustly learn even when the logging policy has deficient support. To this effect, we explore three approaches that provide various guarantees for safe learning despite the inherent limitations of support-deficient data: restricting the action space, reward extrapolation, and restricting the policy space. We analyze the statistical and computational properties of these three approaches, and empirically evaluate their effectiveness in a series of experiments. We find that controlling the policy space is computationally efficient and robustly leads to accurate policies.""","""This paper tackles the problem of learning off-policy in the contextual bandit problem, more specifically when the available data is deficient (in the sense that it does not allow building reasonable counterfactual estimators). To address this, the authors introduce three strategies: 1) restricting the action space; 2) imputing missing rewards when lacking data; 3) restricting the policy space to policies with ""enough"" data. All three approaches are analyzed (statistical and computational properties) and evaluated empirically. Restricting the policy space appears to be particularly effective in practice. Although the problem being solved is very relevant, it is not clear how this work is positioned with respect to approaches solving similar problems in RL. For example, Batch Constrained Q-learning ([1]) restricts the action space, while Bootstrapping Error Accumulation ([2]) and SPIBB ([3]) restrict the policy class in batch RL. A comparison with these techniques in the contextual bandit setting, in addition to recent state-of-the-art off-policy bandit approaches (Liu et al. (2019), Xie et al. (2019)), is lacking. Moreover, given the newly added results (DR method by Tang et al. (2019)), it is not clear how the proposed approach improves over existing techniques. This should be clarified. I therefore recommend rejecting this paper. """ 685,"""How Does Learning Rate Decay Help Modern Neural Networks?""","['Learning rate decay', 'Optimization', 'Explainability', 'Deep learning', 'Transfer learning']","""Learning rate decay (lrDecay) is a \emph{de facto} technique for training modern neural networks. It starts with a large learning rate and then decays it multiple times. It is empirically observed to help both optimization and generalization.
Common beliefs in how lrDecay works come from the optimization analysis of (Stochastic) Gradient Descent: 1) an initially large learning rate accelerates training or helps the network escape spurious local minima; 2) decaying the learning rate helps the network converge to a local minimum and avoid oscillation. Despite the popularity of these common beliefs, experiments suggest that they are insufficient in explaining the general effectiveness of lrDecay in training modern neural networks that are deep, wide, and nonconvex. We provide another novel explanation: an initially large learning rate suppresses the network from memorizing noisy data, while decaying the learning rate improves the learning of complex patterns. The proposed explanation is validated on a carefully-constructed dataset with tractable pattern complexity. Its implication, that additional patterns learned in later stages of lrDecay are more complex and thus less transferable, is justified on real-world datasets. We believe that this alternative explanation will shed light on the design of better training strategies for modern neural networks.""","""This paper seeks to understand the effect of learning rate decay in neural net training. This is an important question in the field, and the paper also proposes to show why previous explanations were not correct. However, the reviewers found that the paper did not explain the experimental setup well enough to be reproducible. Furthermore, there are significant problems with the novelty of the work due to its overlap with works such as (Nakkiran et al., 2019), (Li et al., 2019) or (Jastrzebski et al., 2017).""" 686,"""Improving Differentially Private Models with Active Learning""","['Differential Privacy', 'Active Learning']","""Broad adoption of machine learning techniques has increased privacy concerns for models trained on sensitive data such as medical records. Existing techniques for training differentially private (DP) models give rigorous privacy guarantees, but applying these techniques to neural networks can severely degrade model performance. This performance reduction is an obstacle to deploying private models in the real world. In this work, we improve the performance of DP models by fine-tuning them through active learning on public data. We introduce two new techniques - DiversePublic and NearPrivate - for doing this fine-tuning in a privacy-aware way. For the MNIST and SVHN datasets, these techniques improve state-of-the-art accuracy for DP models while retaining privacy guarantees.""","""This paper provides an active-learning approach to improve the performance of an existing differentially private classifier with public labeled data. While the paper provides a new approach, there is a consensus among the reviewers that the paper does not provide a strong enough contribution for acceptance. The authors can potentially improve the submission by including a more comprehensive comparison with the PATE framework and improving its overall presentation.""" 687,"""Deep exploration by novelty-pursuit with maximum state entropy""","['Exploration', 'Reinforcement Learning']","""Efficient exploration is essential to reinforcement learning in huge state spaces. Recent approaches to address this issue include the intrinsically motivated goal exploration process (IMGEP) and maximum state entropy exploration (MSEE). In this paper, we disclose that goal-conditioned exploration behaviors in IMGEP can also maximize the state entropy, which bridges IMGEP and MSEE.
From this connection, we propose a maximum entropy criterion for goal selection in goal-conditioned exploration, which results in the new exploration method novelty-pursuit. Novelty-pursuit performs exploration in two stages: first, it selects a goal for the goal-conditioned exploration policy to reach the boundary of the explored region; then, it takes random actions to explore the non-explored region. We demonstrate the effectiveness of the proposed method in environments ranging from simple mazes and MuJoCo tasks to the long-horizon video game SuperMarioBros. Experimental results show that the proposed method outperforms state-of-the-art approaches that use curiosity-driven exploration.""","""There is insufficient support to recommend accepting this paper. The reviewers unanimously recommended rejection, and did not change their recommendation after the author response period. The technical depth of the paper was criticized, as was the experimental evaluation. The review comments should help the authors strengthen this work.""" 688,"""Massively Multilingual Sparse Word Representations""","['sparse word representations', 'multilinguality', 'sparse coding']","""In this paper, we introduce Mamus for constructing multilingual sparse word representations. Our algorithm operates by determining a shared set of semantic units which get reutilized across languages, providing it a competitive edge both in terms of speed and evaluation performance. We demonstrate that our proposed algorithm performs competitively with strong baselines through a series of rigorous experiments on downstream applications spanning dependency parsing, document classification and natural language inference. Additionally, our experiments relying on the QVEC-CCA evaluation score suggest that the proposed sparse word representations convey increased interpretability as opposed to alternative approaches. Finally, we are releasing our multilingual sparse word representations for the typologically diverse set of 27 languages on which we conducted our various experiments.""","""This paper describes a new method for creating word embeddings that can operate on corpora from more than one language. The algorithm is simple, but rivals more complex approaches. The reviewers were happy with this paper. They were also impressed that the authors ran the requested multi-lingual BERT experiments, even though they did not show positive results. One reviewer did think that non-contextual word embeddings were of less interest to the NLP community, but thought your arguments for the computational efficiency were convincing.""" 689,"""LEX-GAN: Layered Explainable Rumor Detector Based on Generative Adversarial Networks""","['explainable rumor detection', 'layered generative adversarial networks']","""Social media have become increasingly popular and have been used as tools for gathering and propagating information. However, the vigorous growth of social media contributes to fast-spreading and far-reaching rumors. Rumor detection has become a necessary defense. Traditional rumor detection methods based on hand-crafted feature selection are being replaced by automatic approaches that are based on Artificial Intelligence (AI). AI decision-making systems need the necessary means, such as explainability, to assure users of their trustworthiness.
Inspired by the thriving development of Generative Adversarial Networks (GANs) in text applications, we propose LEX-GAN, a GAN-based layered explainable rumor detector that improves detection quality and provides explainability. Unlike fake news detection, which needs a previously collected verified news database, LEX-GAN realizes explainable rumor detection based on only tweet-level text. LEX-GAN is trained with generated non-rumor-looking rumors. The generators produce rumors by intelligently inserting controversial information into non-rumors, and force the discriminators to detect detailed glitches and deduce exactly which parts of the sentence are problematic. The layered structures in both the generative and discriminative models contribute to the high performance. We show LEX-GAN's mutation detection ability in textual sequences by performing a gene classification and mutation detection task.""","""The paper is well-written and presents an extensive set of experiments. The architecture is a simple yet interesting attempt at learning explainable rumour detection models. Some reviewers worry about the novelty of the approach, and whether the explainability of the model is in fact properly evaluated. The authors responded to the reviews and provided detailed feedback. A major limitation of this work is that explanations are at the level of input words. This is common in interpretability (LIME, etc.), but it is not clear that explanations/interpretations are best provided at this level and not, say, at the level of training instances or at a more abstract level. It is also not clear that this approach would scale to languages that are morphologically rich and/or harder to segment into words. Since modern approaches to this problem would likely include pretrained language models, it is an interesting problem to make such architectures interpretable. """ 690,"""Stochastic Neural Physics Predictor""","['physics prediction', 'forward dynamics', 'stochastic environments', 'dropout']","""Recently, neural-network based forward dynamics models have been proposed that attempt to learn the dynamics of physical systems in a deterministic way. While near-term motion can be predicted accurately, long-term predictions suffer from accumulating input and prediction errors which can lead to plausible but different trajectories that diverge from the ground truth. A system that predicts distributions of the future physical states for long time horizons based on its uncertainty is thus a promising solution. In this work, we introduce a novel robust Monte Carlo sampling based graph-convolutional dropout method that allows us to sample multiple plausible trajectories for an initial state given a neural-network based forward dynamics predictor. By introducing a new shape preservation loss and training our dynamics model recurrently, we stabilize long-term predictions. We show that our model's long-term forward dynamics prediction errors on complicated physical interactions of rigid and deformable objects of various shapes are significantly lower than those of existing strong baselines. Lastly, we demonstrate how generating multiple trajectories with our Monte Carlo dropout method can be used to train model-free reinforcement learning agents faster and to better solutions on simple manipulation tasks.""","""The paper presents a timely method for intuitive physics simulation that expands on the HTRN model and is tested on several physical systems with rigid and deformable objects, with additional results provided later in the review.
Reviewer 3 was positive about the paper, and suggested improving the exposition to make it more self-contained. Reviewer 1 raised questions about the complexity of the tasks and concerns about the limited advancement provided by the paper. Reviewer 2 had similar concerns about limited clarity as to how the changes contribute to the results, as well as about missing baselines. The authors provided detailed responses in all cases, including some additional results and various other videos. After discussion and reviewing the additional results, questions about the role of the stochastic elements of the model and their contribution to performance remained, and the reviewers chose not to adjust their ratings. The paper is interesting, timely and addresses important questions, but questions remain. We hope the review has provided useful information for the authors' ongoing research. """ 691,"""Provenance detection through learning transformation-resilient watermarking""","['watermarking', 'provenance detection']","""Advancements in deep generative models have made it possible to synthesize images, videos and audio signals that are hard to distinguish from natural signals, creating opportunities for potential abuse of these capabilities. This motivates the problem of tracking the provenance of signals, i.e., being able to determine the original source of a signal. Watermarking the signal at the time of signal creation is a potential solution, but current techniques are brittle and watermark detection mechanisms can easily be bypassed by doing some post-processing (cropping images, shifting pitch in the audio, etc.). In this paper, we introduce ReSWAT (Resilient Signal Watermarking via Adversarial Training), a framework for learning transformation-resilient watermark detectors that are able to detect a watermark even after a signal has been through several post-processing transformations. Our detection method can be applied to domains with continuous data representations such as images, videos or sound signals. Experiments on watermarking image and audio signals show that our method can reliably detect the provenance of a synthetic signal, even if the signal has been through several post-processing transformations, and improve upon related work in this setting. Furthermore, we show that for specific kinds of transformations (perturbations bounded in the pseudo-formula norm), we can even get formal guarantees on the ability of our model to detect the watermark. We provide qualitative examples of watermarked image and audio samples in the anonymous code submission link.""","""This paper offers an interesting and potentially useful approach to robust watermarking. The reviewers are divided on the significance of the method. The most senior and experienced reviewer was the most negative. On balance, my assessment of this paper is borderline; given the number of more highly ranked papers in my pile, that means I have to assign ""reject"".""" 692,"""I Am Going MAD: Maximum Discrepancy Competition for Comparing Classifiers Adaptively""",['model comparison'],"""The learning of hierarchical representations for image classification has experienced an impressive series of successes due in part to the availability of large-scale labeled data for training. On the other hand, the trained classifiers have traditionally been evaluated on small and fixed sets of test images, which are deemed to be extremely sparsely distributed in the space of all natural images.
It is thus questionable whether recent performance improvements on the excessively re-used test sets generalize to real-world natural images with much richer content variations. Inspired by efficient stimulus selection for testing perceptual models in psychophysical and physiological studies, we present an alternative framework for comparing image classifiers, which we name the MAximum Discrepancy (MAD) competition. Rather than comparing image classifiers using fixed test images, we adaptively sample a small test set from an arbitrarily large corpus of unlabeled images so as to maximize the discrepancies between the classifiers, measured by the distance over the WordNet hierarchy. Human labeling on the resulting model-dependent image sets reveals the relative performance of the competing classifiers, and provides useful insights on potential ways to improve them. We report the MAD competition results of eleven ImageNet classifiers while noting that the framework is readily extensible, making it cost-effective to add future classifiers into the competition. Codes can be found at pseudo-url.""","""This paper proposes a new way of comparing classifiers, which does not use a fixed test set but instead adaptively samples one from an arbitrarily large corpus of unlabeled images, i.e. replacing the conventional test-set-based evaluation methods with a more flexible mechanism. The main proposal is to build a test set adaptively in a manner that captures how classifiers disagree, as measured by the WordNet tree. As noted by R2, this work has the potential to be of interest to a broad audience and can motivate many subsequent works. While the reviewers acknowledged the importance of this work, they raised several concerns: (1) the proposed approach is too immature to be considered for benchmarking yet (R1, R4); (2) the selection of k and its influence on performance need further study (R1, R3, R4); (3) the proposed approach requires data annotation, which might not be straightforward (R3, R4). The authors provided a detailed rebuttal addressing the reviewer concerns. There is reviewer disagreement on this paper. The comments from R3 were valuable for the discussion, but at the same time too brief to be adequately addressed by the authors. The comments from the emergency reviewer were helpful in making the decision. The AC decided to recommend acceptance of the paper given its valuable contributions towards re-thinking the evaluation of current SOTA models. """ 693,"""Batch-shaping for learning conditional channel gated networks""","['Conditional computation', 'channel gated networks', 'gating', 'Batch-shaping', 'distribution matching', 'image classification', 'semantic segmentation']","""We present a method that trains large capacity neural networks with significantly improved accuracy and lower dynamic computational cost. This is achieved by gating the deep-learning architecture on a fine-grained level. Individual convolutional maps are turned on/off conditionally on features in the network. To achieve this, we introduce a new residual block architecture that gates convolutional channels in a fine-grained manner. We also introduce a generally applicable tool, batch-shaping, that matches the marginal aggregate posteriors of features in a neural network to a pre-specified prior distribution. We use this novel technique to force gates to be more conditional on the data. We present results on the CIFAR-10 and ImageNet datasets for image classification, and on Cityscapes for semantic segmentation.
Our results show that our method can slim down large architectures conditionally, such that the average computational cost on the data is on par with a smaller architecture, but with higher accuracy. In particular, on ImageNet, our gated ResNet50 and ResNet34 networks obtain 74.60% and 72.55% top-1 accuracy, compared to the 69.76% accuracy of the baseline ResNet18 model, at similar complexity. We also show that the resulting networks automatically learn to use more features for difficult examples and fewer features for simple examples.""","""The paper describes a method to train a convolutional network with large capacity, where input-conditioned channel gating is implemented, so that only parts of the network are used at inference time. The paper builds on previous work, with the main contribution being a ""batch-shaping"" technique that regularizes the channel gating to follow a beta distribution, combined with L0 regularization. The paper shows that a ResNet trained with this technique can achieve higher accuracy with lower theoretical MACs. A weakness of the paper is that more engineering would be required to convert the theoretical MACs into actual running time, which would further validate the practicality of the approach. """ 694,"""Compressive Recovery Defense: A Defense Framework for \ell_2 and pseudo-formula norm attacks.""","['adversarial input', 'adversarial machine learning', 'neural networks', 'compressive sensing.']","""We provide recovery guarantees for compressible signals that have been corrupted with noise and extend the framework introduced in \cite{bafna2018thwarting} to defend neural networks against pseudo-formula , pseudo-formula , and pseudo-formula -norm attacks. In the case of pseudo-formula -norm noise, we provide recovery guarantees for Iterative Hard Thresholding (IHT) and Basis Pursuit (BP). For pseudo-formula -norm bounded noise, we provide recovery guarantees for BP, and for the case of pseudo-formula -norm bounded noise, we provide recovery guarantees for the Dantzig Selector (DS). These guarantees theoretically bolster the defense framework introduced in \cite{bafna2018thwarting} for defending neural networks against adversarial inputs. Finally, we experimentally demonstrate the effectiveness of this defense framework against an array of pseudo-formula , pseudo-formula and pseudo-formula -norm attacks. ""","""After reading the authors' response, all the reviewers still think that this paper is a simple extension of gradient masking, and cannot provide robustness for neural networks.""" 695,"""On the implicit minimization of alternative loss functions when training deep networks""","['implicit minimization', 'optimization bias', 'margin based loss functions', 'flat minima']","""Understanding the implicit bias of optimization algorithms is important in order to improve the generalization of neural networks. One approach to try to exploit such understanding would be to then make the bias explicit in the loss function. Conversely, an interesting approach to gain more insights into the implicit bias could be to study how different loss functions are being implicitly minimized when training the network. In this work, we concentrate our study on the inductive bias occurring when minimizing the cross-entropy loss with different batch sizes and learning rates. We investigate how three loss functions are being implicitly minimized during training.
These three loss functions are the Hinge loss with different margins, the cross-entropy loss with different temperatures, and a newly introduced Gcdf loss with different standard deviations. This Gcdf loss establishes a connection between a sharpness measure for the 0-1 loss and margin-based loss functions. We find that a common behavior emerges for all the loss functions considered.""","""The paper proposes an interesting setting in which the effect of different optimization parameters on the loss function is analyzed. The analysis is based on considering the cross-entropy loss with different softmax parameters, or the hinge loss with different margin parameters. The observations are interesting, but ultimately the reviewers felt that the experimental results were not sufficient to warrant publication at ICLR. The reviewers unanimously recommended rejection, and no rebuttal was provided.""" 696,"""Adaptive Generation of Unrestricted Adversarial Inputs""","['Adversarial Examples', 'Adversarial Robustness', 'Generative Adversarial Networks', 'Image Classification']","""Neural networks are vulnerable to adversarially-constructed perturbations of their inputs. Most research so far has considered perturbations of a fixed magnitude under some pseudo-formula norm. Although studying these attacks is valuable, there has been increasing interest in the construction of, and robustness to, unrestricted attacks, which are not constrained to a small and rather artificial subset of all possible adversarial inputs. We introduce a novel algorithm for generating such unrestricted adversarial inputs which, unlike prior work, is adaptive: it is able to tune its attacks to the classifier being targeted. It also offers a 400-2,000x speedup over the existing state of the art. We demonstrate our approach by generating unrestricted adversarial inputs that fool classifiers robust to perturbation-based attacks. We also show that, by virtue of being adaptive and unrestricted, our attack is able to bypass adversarial training against it.""","""This paper presents an interesting method for creating adversarial examples using a GAN. Reviewers are concerned that the ImageNet results, while successfully evading a classifier, do not appear to be natural images. Furthermore, the attacks are demonstrated on fairly weak baseline classifiers that are known to be easily broken. They attack ResNet50 (without adversarial training), for which Lp-bounded attacks empirically seem to produce more convincing images. For MNIST, they attack Wong and Kolter's ""certifiable"" defense, which is empirically much weaker than an adversarially trained network, and also weaker than more recent certifiable baselines. """ 697,"""Adaptive Data Augmentation with Deep Parallel Generative Models""",[],"""Data augmentation (DA) is a useful technique to enlarge the size of the training set and prevent overfitting for different machine learning tasks when training data is scarce. However, current data augmentation techniques rely heavily on human design and domain knowledge, and existing automated approaches are yet to fully exploit the latent features in the training dataset. In this paper we propose an adaptive DA strategy based on generative models, where the training set adaptively enriches itself with sample images automatically constructed from deep generative models trained in parallel.
We demonstrate by experiments that our data augmentation strategy, with few model-specific considerations, can be easily adapted to cross-domain deep learning/machine learning tasks such as image classification and image inpainting, while significantly improving model performance in both tasks. ""","""This paper proposes a data augmentation method based on Generative Adversarial Networks by training several GANs on subsets of the data, which are then used to synthesise new training examples in proportion to their estimated quality as measured by the Inception Score. The reviewers have raised several critical issues with the work, including motivation (it can be harder to train a generative model than a discriminative one), novelty, complexity of the proposed method, and lack of comparison to existing methods. Perhaps the most important one is the inadequate empirical evaluation. The authors didn't address any of the raised concerns in the rebuttal. I will hence recommend the rejection of this paper.""" 698,"""Reconstructing continuous distributions of 3D protein structure from cryo-EM images""","['generative models', 'proteins', '3D reconstruction', 'cryo-EM']","""Cryo-electron microscopy (cryo-EM) is a powerful technique for determining the structure of proteins and other macromolecular complexes at near-atomic resolution. In single particle cryo-EM, the central problem is to reconstruct the 3D structure of a macromolecule from pseudo-formula noisy and randomly oriented 2D projection images. However, the imaged protein complexes may exhibit structural variability, which complicates reconstruction and is typically addressed using discrete clustering approaches that fail to capture the full range of protein dynamics. Here, we introduce a novel method for cryo-EM reconstruction that extends naturally to modeling continuous generative factors of structural heterogeneity. This method encodes structures in Fourier space using coordinate-based deep neural networks, and trains these networks from unlabeled 2D cryo-EM images by combining exact inference over image orientation with variational inference for structural heterogeneity. We demonstrate that the proposed method, termed cryoDRGN, can perform ab-initio reconstruction of 3D protein complexes from simulated and real 2D cryo-EM image data. To our knowledge, cryoDRGN is the first neural network-based approach for cryo-EM reconstruction and the first end-to-end method for directly reconstructing continuous ensembles of protein structures from cryo-EM images.""","""The paper introduces a generative approach to reconstruct 3D images for cryo-electron microscopy (cryo-EM). All reviewers really liked the paper and appreciated the challenging problem tackled and the proposed solution. Acceptance is therefore recommended. """ 699,"""Weakly Supervised Disentanglement with Guarantees""","['disentanglement', 'theory of disentanglement', 'representation learning', 'generative models']","""Learning disentangled representations that correspond to factors of variation in real-world data is critical to interpretable and human-controllable machine learning. Recently, concerns about the viability of learning disentangled representations in a purely unsupervised manner have spurred a shift toward the incorporation of weak supervision. However, there is currently no formalism that identifies when and how weak supervision will guarantee disentanglement.
To address this issue, we provide a theoretical framework to assist in analyzing the disentanglement guarantees (or lack thereof) conferred by weak supervision when coupled with learning algorithms based on distribution matching. We empirically verify the guarantees and limitations of several weak supervision methods (restricted labeling, match-pairing, and rank-pairing), demonstrating the predictive power and usefulness of our theoretical framework.""","""This paper first discusses some concepts related to disentanglement. The authors propose to decompose disentanglement into two distinct concepts: consistency and restrictiveness. Then, a calculus of disentanglement is introduced to reveal the relationship between restrictiveness and consistency. The proposed concepts are applied to analyze weak supervision methods. The reviewers ultimately decided this paper is well-written and has content which is of general interest to the ICLR community.""" 700,"""Wasserstein Adversarial Regularization (WAR) on label noise""","['Label Noise', 'Adversarial regularization', 'Wasserstein']","""Noisy labels often occur in vision datasets, especially when they are obtained from crowdsourcing or Web scraping. We propose a new regularization method which enables learning robust classifiers in the presence of noisy data. To achieve this goal, we propose a new adversarial regularization scheme based on the Wasserstein distance. Using this distance allows taking into account specific relations between classes by leveraging the geometric properties of the label space. Our Wasserstein Adversarial Regularization (WAR) encodes a selective regularization, which promotes smoothness of the classifier between some classes, while preserving sufficient complexity of the decision boundary between others. We first discuss how and why adversarial regularization can be used in the context of label noise and then show the effectiveness of our method on five datasets corrupted with noisy labels: on both benchmarks and real datasets, WAR outperforms the state-of-the-art competitors.""","""This article proposes a regularisation scheme to learn classifiers that take into account the similarity of labels, and presents a series of experiments. The reviewers found the approach plausible, the paper well written, and the experiments sufficient. At the same time, they expressed concerns, mentioning that the technical contribution is limited (in particular, the Wasserstein distance has been used before in the estimation of conditional distributions and in multi-label learning), and that it would be important to put more effort into learning the metric. The author responses clarified a few points and agreed that learning the metric is an interesting problem. There were also concerns about the competitiveness of the approach, which were addressed in part in the authors' responses, albeit not fully convincing all of the reviewers. This article proposes an interesting technique for a relevant type of problem, and demonstrates that it can be competitive with extensive experiments. Although this is a reasonably good article, it is not good enough, given the very high acceptance bar for this year's ICLR. """ 701,"""Local Label Propagation for Large-Scale Semi-Supervised Learning""",[],"""A significant issue in training deep neural networks to solve supervised learning tasks is the need for large numbers of labeled datapoints.
The goal of semi-supervised learning is to leverage ubiquitous unlabeled data, together with small quantities of labeled data, to achieve high task performance. Though substantial recent progress has been made in developing semi-supervised algorithms that are effective for comparatively small datasets, many of these techniques do not scale readily to the large (unlabeled) datasets characteristic of real-world applications. In this paper we introduce a novel approach to scalable semi-supervised learning, called Local Label Propagation (LLP). Extending ideas from recent work on unsupervised embedding learning, LLP first embeds datapoints, labeled and otherwise, in a common latent space using a deep neural network. It then propagates pseudolabels from known to unknown datapoints in a manner that depends on the local geometry of the embedding, taking into account both inter-point distance and local data density as a weighting on propagation likelihood. The parameters of the deep embedding are then trained to simultaneously maximize pseudolabel categorization performance as well as a metric of the clustering of datapoints within each pseudolabel group, iteratively alternating stages of network training and label propagation. We illustrate the utility of the LLP method on the ImageNet dataset, achieving results that outperform previous state-of-the-art scalable semi-supervised learning algorithms by large margins, consistently across a wide variety of training regimes. We also show that the feature representation learned with LLP transfers well to scene recognition in the Places 205 dataset.""","""The paper introduces an approach for semi-supervised learning based on local label propagation. While reviewers appreciate learning a consistent embedding space for prediction and label propagation, a few pointed out that this paper does not make it clear how different it is from previous work (Wu et al., Iscen et al., Zhuang et al.), in addition to questions about the complexity calculation and pseudo-label accuracy. These are important points that weren't addressed to a degree that reviewers/readers can understand, and the reviewers did not seem to change their minds after the authors wrote back. This suggests the paper could use additional cycles of polishing/editing to make these points clear. We highly recommend that the authors carefully reflect on the reviewers' comments, both pros and cons, to improve the paper for a future submission. """ 702,"""Graph convolutional networks for learning with few clean and many noisy labels""",[],"""In this work we consider the problem of learning a classifier from noisy labels when a few clean labeled examples are given. The structure of clean and noisy data is modeled by a graph per class, and Graph Convolutional Networks (GCN) are used to predict the class relevance of noisy examples. For each class, the GCN is treated as a binary classifier learning to discriminate clean from noisy examples using a weighted binary cross-entropy loss function, and then the GCN-inferred ""clean"" probability is exploited as a relevance measure. Each noisy example is weighted by its relevance when learning a classifier for the end task. We evaluate our method on an extended version of a few-shot learning problem, where the few clean examples of novel classes are supplemented with additional noisy data. Experimental results show that our GCN-based cleaning process significantly improves the classification accuracy over not cleaning the noisy data and over standard few-shot classification where only a few clean examples are used.
The proposed GCN-based method outperforms the transductive approach (Douze et al., 2018) that uses the same additional data without labels.""","""The paper combines graph convolutional networks with noisy label learning. The reviewers feel that novelty in the work is limited and there is a need for further experiments and extensions. """ 703,"""Unsupervised Generative 3D Shape Learning from Natural Images""","['unsupervised', '3D', 'differentiable', 'rendering', 'disentangling', 'interpretable']","""In this paper we present, to the best of our knowledge, the first method to learn a generative model of 3D shapes from natural images in a fully unsupervised way. For example, we do not use any ground truth 3D or 2D annotations, stereo video, and ego-motion during the training. Our approach follows the general strategy of Generative Adversarial Networks, where an image generator network learns to create image samples that are realistic enough to fool a discriminator network into believing that they are natural images. In contrast, in our approach the image generation is split into 2 stages. In the first stage a generator network outputs 3D objects. In the second, a differentiable renderer produces an image of the 3D object from a random viewpoint. The key observation is that a realistic 3D object should yield a realistic rendering from any plausible viewpoint. Thus, by randomizing the choice of the viewpoint our proposed training forces the generator network to learn an interpretable 3D representation disentangled from the viewpoint. In this work, a 3D representation consists of a triangle mesh and a texture map that is used to color the triangle surface by using the UV-mapping technique. We provide analysis of our learning approach, expose its ambiguities and show how to overcome them. Experimentally, we demonstrate that our method can learn realistic 3D shapes of faces by using only the natural images of the FFHQ dataset.""","""The paper proposes a GAN approach for unsupervised learning of 3d object shapes from natural images. The key idea is a two-stage generative process where the 3d shape is first generated and then rendered to pixel-level images. While the experimental results are promising, they are mostly focused on faces (that are well aligned and share roughly similar 3d structures across the dataset). Results on other categories are preliminary and limited, so it's unclear how well the proposed method will work for more general domains. In addition, comparison to the existing baselines (e.g., HoloGAN; Pix2Scene; Rezende et al., 2016) is missing. Overall, further improvements are needed to be acceptable for ICLR. Extra note: Missing citation to a relevant work Wang and Gupta, Generative Image Modeling using Style and Structure Adversarial Networks pseudo-url""" 704,"""Unsupervised Data Augmentation for Consistency Training""","['Semi-supervised learning', 'computer vision', 'natural language processing']","""Semi-supervised learning lately has shown much promise in improving deep learning models when labeled data is scarce. Common among recent approaches is the use of consistency training on a large amount of unlabeled data to constrain model predictions to be invariant to input noise. In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically that produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
By substituting simple noising operations with advanced data augmentation methods, our method brings substantial improvements across six language and three vision tasks under the same consistency training framework. On the IMDb text classification dataset, with only 20 labeled examples, our method achieves an error rate of 4.20, outperforming the state-of-the-art model trained on 25,000 labeled examples. On a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms all previous approaches and achieves an error rate of 2.7% with only 4,000 examples, nearly matching the performance of models trained on 50,000 labeled examples. Our method also combines well with transfer learning, e.g., when finetuning from BERT, and yields improvements in the high-data regime, such as ImageNet, both when there is only 10% labeled data and when a full labeled set with 1.3M extra unlabeled examples is used.""","""The paper shows that data augmentation methods work well for consistency training on unlabeled data in semi-supervised learning. Reviewers and AC think that the reported experimental scores are interesting/strong, but the scientific reasoning for why the proposed method is valuable is limited. In particular, the authors are encouraged to justify the novelty and the hyper-parameters used in the paper. This is because I also think that it is not too surprising that data augmentations that help in supervised learning are also effective in semi-supervised learning. It can be valuable if more scientific reasoning/justification is provided. Hence, I recommend rejection.""" 705,"""Causally Correct Partial Models for Reinforcement Learning""","['causality', 'model-based reinforcement learning']","""In reinforcement learning, we can learn a model of future observations and rewards, and use it to plan the agent's next actions. However, jointly modeling future observations can be computationally expensive or even intractable if the observations are high-dimensional (e.g. images). For this reason, previous works have considered partial models, which model only part of the observation. In this paper, we show that partial models can be causally incorrect: they are confounded by the observations they don't model, and can therefore lead to incorrect planning. To address this, we introduce a general family of partial models that are provably causally correct, but avoid the need to fully model future observations.""","""The authors show that in a reinforcement learning setting, partial models can be causally incorrect, leading to improper evaluation of policies that are different from those used to collect the data for the model. They then propose a backdoor correction to this problem that allows the model to generalize properly by separating the effects of the stochasticity of the environment and the policy. The reviewers had substantial concerns about both clarity and the clear, but largely undiscussed, connection to off-policy policy evaluation (OPPE). In response, the authors made a significant number of changes for the sake of clarity, as well as further explained the differences between their approach and the OPPE setting. First, OPPE is not typically model-based.
Second, while an importance sampling solution would be technically possible, by re-training the model based on importance-weighted experiences, this would need to be done for every evaluation policy considered, whereas the authors' solution uses a fundamentally different approach of causal reasoning so that a causally correct model can be learned once and work for all policies. After much discussion, the reviewers could not come to a consensus about the validity of these arguments. Furthermore, there were lingering questions about writing clarity. Thus, in the future, it appears the paper could be significantly improved if the authors cite more of the off-policy evaluation literature, in addition to their added textual clarifications of the relation of their work to that body of work. Overall, my recommendation at this time is to reject this paper.""" 706,"""MIST: Multiple Instance Spatial Transformer Networks""",[],"""We propose a deep network that can be trained to tackle image reconstruction and classification problems that involve detection of multiple object instances, without any supervision regarding their whereabouts. The network learns to extract the most significant top-K patches, and feeds these patches to a task-specific network -- e.g., auto-encoder or classifier -- to solve a domain-specific problem. The challenge in training such a network is the non-differentiable top-K selection process. To address this issue, we lift the training optimization problem by treating the result of top-K selection as a slack variable, resulting in a simple, yet effective, multi-stage training. Our method is able to learn to detect recurrent structures in the training dataset by learning to reconstruct images. It can also learn to localize structures when only knowledge of the occurrence of the object is provided, and in doing so it outperforms the state-of-the-art.""","""Two reviewers are negative on this paper while the other one is slightly positive. Overall, this paper does not make the bar of ICLR. A reject is recommended.""" 707,"""From Variational to Deterministic Autoencoders""","['Unsupervised learning', 'Generative Models', 'Variational Autoencoders', 'Regularization']",""" Variational Autoencoders (VAEs) provide a theoretically-backed and popular framework for deep generative models. However, learning a VAE from data still poses unanswered theoretical questions and considerable practical challenges. In this work, we propose an alternative framework for generative modeling that is simpler, easier to train, and deterministic, yet has many of the advantages of the VAE. We observe that sampling a stochastic encoder in a Gaussian VAE can be interpreted as simply injecting noise into the input of a deterministic decoder. We investigate how substituting this kind of stochasticity, with other explicit and implicit regularization schemes, can lead to an equally smooth and meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism to sample new data points, we introduce an ex-post density estimation step that can be readily applied to the proposed framework as well as existing VAEs, improving their sample quality. We show, in a rigorous empirical study, that the proposed regularized deterministic autoencoders are able to generate samples that are comparable to, or better than, those of VAEs and more powerful alternatives when applied to images as well as to structured data such as molecules.
""","""This paper proposes an extension to deterministic autoencoders, namely instead of noise injection in the encoders of VAEs to use deterministic autoencoders with an explicit regularization term on the latent representations. While the reviewers agree that the paper studies an important question for the generative modeling community, the paper has been limited in terms of theoretical analysis and experimental validation. The authors, however, provided further experimental results to support the claims empirically during the discussion period and the reviewers agree that the paper is now acceptable for publication in ICLR-2020. """ 708,"""Quantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel""","['Uncertainty Estimation', 'Neural Networks', 'Gaussian Process']","""Neural Networks (NNs) have been extensively used for a wide spectrum of real-world regression tasks, where the goal is to predict a numerical outcome such as revenue, effectiveness, or a quantitative result. In many such tasks, the point prediction is not enough: the uncertainty (i.e. risk or confidence) of that prediction must also be estimated. Standard NNs, which are most often used in such tasks, do not provide uncertainty information. Existing approaches address this issue by combining Bayesian models with NNs, but these models are hard to implement, more expensive to train, and usually do not predict as accurately as standard NNs. In this paper, a new framework (RIO) is developed that makes it possible to estimate uncertainty in any pretrained standard NN. The behavior of the NN is captured by modeling its prediction residuals with a Gaussian Process, whose kernel includes both the NN's input and its output. The framework is justified theoretically and evaluated in twelve real-world datasets, where it is found to (1) provide reliable estimates of uncertainty, (2) reduce the error of the point predictions, and (3) scale well to large datasets. Given that RIO can be applied to any standard NN without modifications to model architecture or training pipeline, it provides an important ingredient for building real-world NN applications.""","""This paper presents a method to model uncertainty in deep learning regressors by applying a post-hoc procedure. Specifically, the authors model the residuals of neural networks using Gaussian processes, which provide a principled Bayesian estimate of uncertainty. The reviewers were initially mixed and a fourth reviewer was brought in for an additional perspective. The reviewers found that the paper was well written, well motivated and found the methodology sensible and experiments compelling. AnonReviewer4 raised issues with the theoretical exposition of the paper (going so far as to suggest that moving the theory into the supplementary and using the reclaimed space for additional clarifications would make the paper stronger). The reviewers found the author response compelling and as a result the reviewers have come to a consensus to accept. Thus the recommendation is to accept the paper. Please do take the reviewer feedback into account in preparing the camera ready version. In particular, please do address the remaining concerns from AnonReviewer4 regarding the theoretical portion of the paper. It seems that the methodological and empirical portions of the paper are strong enough to stand on their own (and therefore the recommendation for an accept). 
Adding theory just for the sake of having theory seems to detract from the message (particularly if it is irrelevant or incorrect as initially pointed out by the reviewer).""" 709,"""Data-Independent Neural Pruning via Coresets""","['coresets', 'neural pruning', 'network compression']","""Previous work showed empirically that large neural networks can be significantly reduced in size while preserving their accuracy. Model compression became a central research topic, as it is crucial for deployment of neural networks on devices with limited computational and memory resources. The majority of the compression methods are based on heuristics and offer no worst-case guarantees on the trade-off between the compression rate and the approximation error for an arbitrarily new sample. We propose the first efficient, data-independent neural pruning algorithm with a provable trade-off between its compression rate and the approximation error for any future test sample. Our method is based on the coreset framework, which finds a small weighted subset of points that provably approximates the original inputs. Specifically, we approximate the output of a layer of neurons by a coreset of neurons in the previous layer and discard the rest. We apply this framework in a layer-by-layer fashion from the top to the bottom. Unlike previous works, our coreset is data independent, meaning that it provably guarantees the accuracy of the function for any input in $\mathbb{R}^d$, including an adversarial one. We demonstrate the effectiveness of our method on popular network architectures. In particular, our coresets yield 90% compression of the LeNet-300-100 architecture on MNIST while improving the accuracy.""","""The rebuttal period influenced R1 to raise their rating of the paper. The most negative reviewer did not respond to the author response. This work proposes an interesting approach that will be of interest to the community. The AC recommends acceptance.""" 710,"""Exploring Cellular Protein Localization Through Semantic Image Synthesis""","['Computational biology', 'image synthesis', 'GANs', 'exploring multiplex images', 'attention', 'interpretability']","""Cell-cell interactions have an integral role in tumorigenesis as they are critical in governing immune responses. As such, investigating specific cell-cell interactions has the potential to not only expand upon the understanding of tumorigenesis, but also guide clinical management of patient responses to cancer immunotherapies. A recent imaging technique for exploring cell-cell interactions, multiplexed ion beam imaging by time-of-flight (MIBI-TOF), allows for cells to be quantified in 36 different protein markers at sub-cellular resolutions in situ as high resolution multiplexed images. To explore the MIBI images, we propose a GAN for multiplexed data with protein specific attention. By conditioning image generation on cell types, sizes, and neighborhoods through semantic segmentation maps, we are able to observe how these factors affect cell-cell interactions simultaneously in different protein channels. Furthermore, we design a set of metrics and offer the first insights towards cell spatial orientations, cell protein expressions, and cell neighborhoods. Our model, cell-cell interaction GAN (CCIGAN), outperforms or matches existing image synthesis methods on all conventional measures and significantly outperforms on biologically motivated metrics.
To our knowledge, we are the first to systematically model multiple cellular protein behaviors and interactions under simulated conditions through image synthesis.""","""This paper proposes dedicated deep models for analysis of multiplexed ion beam imaging by time-of-flight (MIBI-TOF). The reviewers appreciated the contributions of the paper but not quite enough to make the cut. Rejection is recommended. """ 711,"""Collaborative Filtering With A Synthetic Feedback Loop""",[],"""We propose a novel learning framework for recommendation systems, assisting collaborative filtering with a synthetic feedback loop. The proposed framework consists of a ``recommender'' and a ``virtual user.'' The recommender is formulated as a collaborative-filtering method, recommending items according to observed user behavior. The virtual user estimates rewards from the recommended items and generates the influence of the rewards on observed user behavior. The recommender connected with the virtual user constructs a closed loop that recommends items to users and imitates the unobserved feedback of the users to the recommended items. The synthetic feedback is used to augment observed user behavior and improve recommendation results. Such a model can be interpreted as inverse reinforcement learning, which can be learned effectively via rollout (simulation). Experimental results show that the proposed framework is able to boost the performance of existing collaborative filtering methods on multiple datasets. ""","""The paper proposes to learn a ""virtual user"" while learning a ""recommender"" model, to improve the performance of the recommender system. A reinforcement learning algorithm is used to address the problem the authors defined. Multiple reviewers raised several concerns regarding its technical details including the feedback signal F, but the authors have not responded to any of the concerns raised by the reviewers. The lack of the authors' involvement in the discussion suggests that this paper is not at the stage to be published.""" 712,"""On the Reflection of Sensitivity in the Generalization Error""","['Generalization Error', 'Sensitivity Analysis', 'Deep Neural Networks', 'Bias-variance Decomposition']","""Even though recent works have brought some insight into the performance improvement of techniques used in state-of-the-art deep-learning models, more work is needed to understand the generalization properties of over-parameterized deep neural networks. We shed light on this matter by linking the loss function to the output's sensitivity to its input. We find a rather strong empirical relation between the output sensitivity and the variance in the bias-variance decomposition of the loss function, which hints on using sensitivity as a metric for comparing generalization performance of networks, without requiring labeled data. We find that sensitivity is decreased by applying popular methods which improve the generalization performance of the model, such as (1) using a deep network rather than a wide one, (2) adding convolutional layers to baseline classifiers instead of adding fully connected layers, (3) using batch normalization, dropout and max-pooling, and (4) applying parameter initialization techniques.""","""The paper proposes a definition of the sensitivity of the output to random perturbations of the input and its link to generalization. While both reviewers appreciated the timeliness of this research, they were taken aback by the striking similarity with the work of Novak et al.
I encourage the authors to resubmit to a later conference with a lengthier analysis of the differences between the two frameworks, as they started to do in their rebuttal.""" 713,"""On the Unintended Social Bias of Training Language Generation Models with News Articles""","['Fair AI', 'latent representations', 'sequence to sequence']","""There are concerns that neural language models may preserve some of the stereotypes of the underlying societies that generate the large corpora needed to train these models. For example, gender bias is a significant problem when generating text, and its unintended memorization could impact the user experience of many applications (e.g., the smart-compose feature in Gmail). In this paper, we introduce a novel architecture that decouples the representation learning of a neural model from its memory management role. This architecture allows us to update a memory module with an equal ratio across gender types, addressing biased correlations directly in the latent space. We experimentally show that our approach can mitigate the gender bias amplification in the automatic generation of news articles while providing similar perplexity values when extending the Sequence2Sequence architecture.""","""The reviewers had a hard time fully identifying the intended contribution behind this paper, and raised concerns that suggest that the experimental results are not sufficient to justify any substantial contribution with the level of certainty that would warrant publication at a top venue. The authors have not responded, and the concerns are serious, so I have no choice but to reject this paper despite its potentially valuable topic.""" 714,"""Quantum Expectation-Maximization for Gaussian Mixture Models""","['Quantum', 'ExpectationMaximization', 'Unsupervised', 'QRAM']","""The Expectation-Maximization (EM) algorithm is a fundamental tool in unsupervised machine learning. It is often used as an efficient way to solve Maximum Likelihood (ML) and Maximum A Posteriori estimation problems, especially for models with latent variables. It is also the algorithm of choice to fit mixture models: generative models that represent unlabelled points originating from pseudo-formula different processes, as samples from pseudo-formula multivariate distributions. In this work we define and use a quantum version of EM to fit a Gaussian Mixture Model. Given quantum access to a dataset of pseudo-formula vectors of dimension pseudo-formula, our algorithm has convergence and precision guarantees similar to the classical algorithm, but the runtime is only polylogarithmic in the number of elements in the training set, and is polynomial in other parameters - such as the dimension of the feature space and the number of components in the mixture. We further generalize the algorithm to fit any mixture model of base distributions in the exponential family. We discuss the performance of the algorithm on datasets that are expected to be classified successfully by those algorithms, arguing that in those cases we can give strong guarantees on the runtime.""","""The reviewers were unanimous that this submission is not ready for publication at ICLR in its current form.
Concerns raised include a significant lack of clarity and the paper not being self-contained.""" 715,"""Semantically-Guided Representation Learning for Self-Supervised Monocular Depth""","['computer vision', 'machine learning', 'deep learning', 'monocular depth estimation', 'self-supervised learning']","""Self-supervised learning is showing great promise for monocular depth estimation, using geometry as the only source of supervision. Depth networks are indeed capable of learning representations that relate visual appearance to 3D properties by implicitly leveraging category-level patterns. In this work we investigate how to more directly leverage this semantic structure to guide geometric representation learning, while remaining in the self-supervised regime. Instead of using semantic labels and proxy losses in a multi-task approach, we propose a new architecture leveraging fixed pretrained semantic segmentation networks to guide self-supervised representation learning via pixel-adaptive convolutions. Furthermore, we propose a two-stage training process to overcome a common semantic bias on dynamic objects via resampling. Our method improves upon the state of the art for self-supervised monocular depth prediction over all pixels, fine-grained details, and per semantic category. ""","""The paper proposes using pixel-adaptive convolutions to leverage semantic labels in self-supervised monocular depth estimation. Although there were initial concerns of the reviewers regarding the technical details and limited experiments, the authors responded reasonably to the issues raised by the reviewers. Reviewer2, who gave a weak reject rating, did not provide any answer to the authors' comments. We do not see any major flaws to reject this paper.""" 716,"""Stablizing Adversarial Invariance Induction by Discriminator Matching""","['invariance induction', 'adversarial training', 'domain generalization']","""Incorporating the desired invariance into representation learning is a key challenge in many situations, e.g., for domain generalization and privacy/fairness constraints. Adversarial invariance induction (AII) shows its power for this purpose, maximizing a proxy of the conditional entropy between representations and attributes by adversarial training between an attribute discriminator and feature extractor. However, the practical behavior of AII is still unclear as the previous analysis assumes the optimality of the attribute classifier, which rarely holds in practice. This paper first analyzes the practical behavior of AII both theoretically and empirically, indicating that AII has theoretical difficulty as it maximizes a variational {\em upper} bound of the actual conditional entropy, and AII catastrophically fails to induce invariance even in simple cases as suggested by the above theoretical findings. We then argue that a simple modification to AII can significantly stabilize the adversarial induction framework and achieve better invariant representations. Our modification is based on the property of conditional entropy; it is maximized if and only if the divergence between all pairs of marginal distributions over pseudo-formula between different attributes is minimized. The proposed method, {\em invariance induction by discriminator matching}, modifies the AII objective to explicitly consider the divergence minimization requirements by defining a proxy of the divergence using the attribute discriminator.
Empirical validations on both a toy dataset and four real-world datasets (related to applications of user anonymization and domain generalization) reveal that the proposed method provides superior performance when inducing invariance for nuisance factors. ""","""The paper proposes a modification to improve adversarial invariance induction for learning representations under invariance constraints. The authors provide both a formal analysis and experimental evaluation of the method. The reviewers generally agree that the experimental evaluation is rigorous and above average, but the paper lacks clarity, making it difficult to judge its significance. Therefore, I recommend rejection, but encourage the authors to improve the presentation and resubmit.""" 717,"""Stabilizing DARTS with Amended Gradient Estimation on Architectural Parameters""","['Neural Architecture Search', 'DARTS', 'Stability']","""Differentiable neural architecture search has been a popular methodology of exploring architectures for deep learning. Despite the great advantage of search efficiency, it often suffers from weak stability, which prevents it from being applied to a large search space or being flexibly adjusted to different scenarios. This paper investigates DARTS, the currently most popular differentiable search algorithm, and points out an important factor of instability, which lies in its approximation of the gradients of architectural parameters. As it stands, the optimization algorithm can converge to a different point, resulting in dramatic inaccuracy in the re-training process. Based on this analysis, we propose an amending term for computing architectural gradients by making use of a direct property of the optimality of network parameter optimization. Our approach mathematically guarantees that gradient estimation follows a roughly correct direction, which leads the search stage to converge on reasonable architectures. In practice, our algorithm is easily implemented and added to DARTS-based approaches efficiently. Experiments on CIFAR and ImageNet demonstrate that our approach enjoys accuracy gains and, more importantly, enables DARTS-based approaches to explore much larger search spaces that have not been studied before.""","""This paper studies Differentiable Neural Architecture Search, focusing on a problem identified with the approximated gradient with respect to architectural parameters, and proposing an improved gradient estimation procedure. The authors claim that this alleviates the tendency of DARTS to collapse on degenerate architectures consisting of e.g. all skip connections, presently dealt with via early stopping. Reviewers generally liked the theoretical contribution, but found the evidence insufficient to support the claims. Requests for experiments by R1 with matched hyperparameters were granted (and several reviewers felt this strengthened the submission), though relegated to an appendix, but after a lengthy discussion reviewers still felt the evidence was insufficient. R1 also contended that the authors were overly dogmatic regarding ""AutoML"" -- that the early stopping heuristic was undesirable because of the additional human knowledge involved. I appreciate the sentiment but find this argument unconvincing -- while it is true that a great deal of human knowledge is still necessary to make architecture search work, the aim is certainly to develop fool-proof automatic methods.
As reviewers were still unsatisfied with the empirical investigation after revisions and found that the weight of the contribution was insufficient for a 10 page paper, I recommend rejection at this time, while encouraging the authors to take seriously the reviewers' requests for a systematic study of the source of the empirical gains in order to strengthen their paper for future submission.""" 718,"""Improving SAT Solver Heuristics with Graph Networks and Reinforcement Learning""","['SAT', 'reinforcement learning', 'graph neural networks', 'heuristics', 'DQN', 'boolean satisfiability']","""We present GQSAT, a branching heuristic in a Boolean SAT solver trained with value-based reinforcement learning (RL) using Graph Neural Networks for function approximation. Solvers using GQSAT are complete SAT solvers that either provide a satisfying assignment or a proof of unsatisfiability, which is required for many SAT applications. The branching heuristic commonly used in SAT solvers today suffers from bad decisions during their warm-up period, whereas GQSAT has been trained to examine the structure of the particular problem instance to make better decisions at the beginning of the search. Training GQSAT is data efficient and does not require elaborate dataset preparation or feature engineering to train. We train GQSAT on small SAT problems using RL interfacing with an existing SAT solver. We show that GQSAT is able to reduce the number of iterations required to solve SAT problems by 2-3X, and it generalizes to unsatisfiable SAT instances, as well as to problems with 5X more variables than it was trained on. We also show that, to a lesser extent, it generalizes to SAT problems from different domains by evaluating it on graph coloring. Our experiments show that augmenting SAT solvers with agents trained with RL and graph neural networks can improve performance on the SAT search problem.""","""SAT is NP-complete (Karp, 1972) due to its intractable exhaustive search. As such, heuristics are commonly used to reduce the search space. While usually these heuristics rely on some in-domain expert knowledge, the authors propose a generic method that uses RL to learn a branching heuristic. The policy is parametrized by a GNN; at each step it selects a variable to expand, and the process repeats until either a satisfying assignment has been found or the problem has been proved unsatisfiable. The main result is that the proposed heuristic requires fewer steps than VSIDS, a commonly used heuristic. All reviewers agreed that this is an interesting and well-presented submission. However, both R1 and R2 (rightly according to my judgment) point out that at the moment the paper seems to be conducting an evaluation that is not entirely fair. Specifically, VSIDS has been implemented within a framework optimized for running time rather than number of iterations, whereas the proposed heuristic is doing the opposite. Moreover, the proposed heuristic is not stress-tested against larger datasets. So, the authors take a heuristic/framework that has been optimized to operate specifically well on large datasets (where running time is what ultimately makes the difference), scale it down to a smaller dataset, and evaluate it on a metric that the proposed algorithm is optimized for.
At the same time, they do not consider evaluation on larger datasets and frame all concerns about scalability as a matter of industrial use versus answering the ML question of whether existing RL techniques can be stretched to learn a branching heuristic. This is a valid point, and not all techniques need to be super scalable from day one, but this being ML, we need to make sure that our evaluation criteria are fair and that we are comparing apples to apples in testing hypotheses. As such, I do not feel comfortable suggesting acceptance of this submission, but I do sincerely hope the authors will take the reviewers' feedback and improve the evaluation protocols of their manuscript, resulting in a stronger future submission.""" 719,"""Non-Sequential Melody Generation""","['melody generation', 'DCGAN', 'dilated convolutions']","""In this paper we present a method for algorithmic melody generation using a generative adversarial network without recurrent components. Music generation has been successfully done using recurrent neural networks, where the model learns sequence information that can help create authentic sounding melodies. Here, we use a DCGAN architecture with dilated convolutions and towers to capture sequential information as spatial image information, and learn long-range dependencies in fixed-length melody forms such as the Irish traditional reel. ""","""All the reviewers pointed out issues with the experiments, which the rebuttal did not address. The paper seems interesting, and the authors are encouraged to improve it.""" 720,"""Compositional Transfer in Hierarchical Reinforcement Learning""","['Multitask', 'Transfer Learning', 'Reinforcement Learning', 'Hierarchical Reinforcement Learning', 'Compositional', 'Off-Policy']","""The successful application of flexible, general learning algorithms to real-world robotics applications is often limited by their poor data-efficiency. To address the challenge, domains with more than one dominant task of interest encourage the sharing of information across tasks to limit required experiment time. To this end, we investigate compositional inductive biases in the form of hierarchical policies as a mechanism for knowledge transfer across tasks in reinforcement learning (RL). We demonstrate that this type of hierarchy enables positive transfer while mitigating negative interference. Furthermore, we demonstrate the benefits of additional incentives to efficiently decompose task solutions. Our experiments show that these incentives are naturally given in multitask learning and can be easily introduced for single objectives. We design an RL algorithm that enables stable and fast learning of structured policies and the effective reuse of both behavior components and transition data across tasks in an off-policy setting. Finally, we evaluate our algorithm in simulated environments as well as physical robot experiments and demonstrate substantial improvements in data-efficiency over competitive baselines.""","""This paper is concerned with improving data-efficiency in multitask reinforcement learning problems. This is achieved by taking a hierarchical approach, and learning commonalities across tasks for reuse. The authors present an off-policy actor-critic algorithm to learn and reuse these hierarchical policies. This is an interesting and promising paper, particularly with the ability to work with robots. The reviewers did however note issues with the novelty and making the contributions clear.
Additionally, it was felt that the results demonstrated the benefits of hierarchy in general rather than of this particular approach, and that further comparisons to other approaches are required. As such, this paper is a weak reject at this point.""" 721,"""Surrogate-Based Constrained Langevin Sampling With Applications to Optimal Material Configuration Design""","['Black-box Constrained Langevin sampling', 'surrogate methods', 'projected and proximal methods', 'approximation theory of gradients', 'nano-porous material configuration design']","""We consider the problem of generating configurations that satisfy physical constraints for optimal material nano-pattern design, where multiple (and often conflicting) properties need to be simultaneously satisfied. Consider, for example, the trade-off between thermal resistance, electrical conductivity, and mechanical stability needed to design a nano-porous template with optimal thermoelectric efficiency. To that end, we leverage the posterior regularization framework and show that this constraint satisfaction problem can be formulated as sampling from a Gibbs distribution. The main challenges come from the black-box nature of those physical constraints, since they are obtained via solving highly non-linear PDEs. To overcome those difficulties, we introduce Surrogate-based Constrained Langevin dynamics for black-box sampling. We explore two surrogate approaches. The first approach exploits a zero-order approximation of gradients in the Langevin sampling, and we refer to it as Zero-Order Langevin. In practice, this approach can be prohibitive since we still need to query the expensive PDE solvers often. The second approach approximates the gradients in the Langevin dynamics with deep neural networks, allowing for an efficient sampling strategy using the surrogate model. We prove the convergence of those two approaches when the target distribution is log-concave and smooth. We show the effectiveness of both approaches in designing optimal nano-porous material configurations, where the goal is to produce nano-pattern templates with low thermal conductivity and reasonable mechanical stability.""","""The paper is not overly well written and motivated. A guiding thread through the paper is often missing. Comparisons with constrained BO methods would have improved the paper, as would a more explicit link to multi-objective BO. It could have been interesting to evaluate the sensitivity w.r.t. the number of samples in the Monte Carlo estimate. What happens if the observations of the function are noisy? Is there a natural way to deal with this? Given that the paper is 10+ pages long, we expect a higher quality than an 8-page paper (reviewing and submission guidelines). """ 722,"""Meta-RCNN: Meta Learning for Few-Shot Object Detection""","['Few-shot detection', 'Meta-Learning', 'Object Detection']","""Despite significant advances in object detection in recent years, training effective detectors in a small data regime remains an open challenge. Labelling training data for object detection is extremely expensive, and there is a need to develop techniques that can generalize well from small amounts of labelled data. We investigate this problem of few-shot object detection, where a detector has access to only limited amounts of annotated data. Based on the recently evolving meta-learning principle, we propose a novel meta-learning framework for object detection named ``Meta-RCNN"", which learns the ability to perform few-shot detection via meta-learning.
Specifically, Meta-RCNN learns an object detector in an episodic learning paradigm on the (meta) training data. This learning scheme helps acquire a prior which enables Meta-RCNN to do few-shot detection on novel tasks. Built on top of the Faster RCNN model, in Meta-RCNN, both the Region Proposal Network (RPN) and the object classification branch are meta-learned. The meta-trained RPN learns to provide class-specific proposals, while the object classifier learns to do few-shot classification. With its novel loss objectives and learning strategy, Meta-RCNN can be trained in an end-to-end manner. We demonstrate the effectiveness of Meta-RCNN in addressing few-shot detection on the Pascal VOC dataset and achieve promising results. ""","""This paper develops a meta-learning approach for few-shot object detection. This paper is borderline and the reviewers are split. The problem is important, albeit somewhat specific to computer vision applications. The main concerns were that it was lacking a head-to-head comparison to RepMet and that it was missing important details (e.g. the image resolution was not clarified, nor was the paper updated to include the details). The authors suggested that the RepMet code was not available, but I was able to find the official code for RepMet via a simple Google search: pseudo-url Reviewers also brought up concerns about an ICCV 2019 paper, though this should be considered as concurrent work, as it was not publicly available at the time of submission. Overall, I think the paper is borderline. Given that many meta-learning papers compare on rather synthetic benchmarks, the study of a more realistic problem setting is refreshing. That said, it's unclear if the insights from this paper would transfer to other machine learning problem settings of interest to the ICLR community. With all of this in mind, the paper is slightly below the bar for acceptance at ICLR.""" 723,"""Gauge Equivariant Spherical CNNs""","['deep learning', 'convolutional networks', 'equivariance', 'gauge equivariance', 'symmetry', 'geometric deep learning', 'manifold convolution']","""Spherical CNNs are convolutional neural networks that can process signals on the sphere, such as global climate and weather patterns or omnidirectional images. Over the last few years, a number of spherical convolution methods have been proposed, based on generalized spherical FFTs, graph convolutions, and other ideas. However, none of these methods is simultaneously equivariant to 3D rotations, able to detect anisotropic patterns, computationally efficient, agnostic to the type of sample grid used, and able to deal with signals defined on only a part of the sphere. To address these limitations, we introduce the Gauge Equivariant Spherical CNN. Our method is based on the recently proposed theory of Gauge Equivariant CNNs, which is in principle applicable to signals on any manifold, and which can be computed on any set of local charts covering all of the manifold or only part of it. In this paper we show how this method can be implemented efficiently for the sphere, and show that the resulting method is fast, numerically accurate, and achieves good results on the widely used benchmark problems of climate pattern segmentation and omnidirectional semantic segmentation.""","""The paper extends Gauge invariant CNNs to Gauge invariant spherical CNNs. The authors significantly improved both theory and experiments during the rebuttal and the paper is well presented.
However, the topic is somewhat niche, and the bar for ICLR this year was very high, so unfortunately this paper did not make it. We encourage the authors to resubmit the work including the new results obtained during the rebuttal period.""" 724,"""An Information Theoretic Approach to Distributed Representation Learning""","['Information Bottleneck', 'Distributed Learning']","""The problem of distributed representation learning is one in which multiple sources of information $X_1, \ldots, X_K$ are processed separately so as to extract useful information about some statistically correlated ground truth $Y$. We investigate this problem from information-theoretic grounds. For both discrete memoryless (DM) and memoryless vector Gaussian models, we establish fundamental limits of learning in terms of optimal tradeoffs between accuracy and complexity. We also develop a variational bound on the optimal tradeoff that generalizes the evidence lower bound (ELBO) to the distributed setting. Furthermore, we provide a variational inference type algorithm that allows to compute this bound and in which the mappings are parametrized by neural networks and the bound approximated by Markov sampling and optimized with stochastic gradient descent. Experimental results on synthetic and real datasets are provided to support the efficiency of the approaches and algorithms which we develop in this paper.""","""The authors study generalization in distributed representation learning by describing limits in accuracy and complexity which stem from information theory. The paper has been controversial, but ultimately the reviewers who provided higher scores presented weaker and fewer arguments. By recruiting an additional reviewer it became clearer that, overall, the paper needs a little more work to reach ICLR standards. The main suggestions for improvements have to do with improving clarity in a way that makes the motivation convincing and the practicality more obvious. Boosting the experimental results is a complementary way of increasing convincingness, as argued by reviewers. """ 725,"""Progressive Compressed Records: Taking a Byte Out of Deep Learning Data""","['Deep Learning', 'Storage', 'Bandwidth', 'Compression']","""Deep learning training accesses vast amounts of data at high velocity, posing challenges for datasets retrieved over commodity networks and storage devices. We introduce a way to dynamically reduce the overhead of fetching and transporting training data with a method we term Progressive Compressed Records (PCRs). PCRs deviate from previous formats by leveraging progressive compression to split each training example into multiple examples of increasingly higher fidelity, without adding to the total data size. Training examples of similar fidelity are grouped together, which reduces both the system overhead and data bandwidth needed to train a model. We show that models can be trained on aggressively compressed representations of the training data and still retain high accuracy, and that PCRs can enable a 2x speedup on average over baseline formats using JPEG compression. Our results hold across deep learning architectures for a wide range of datasets: ImageNet, HAM10000, Stanford Cars, and CelebA-HQ.""","""Main content: Introduces Progressive Compressed Records (PCR), a new storage format for image datasets for machine learning training. Discussion: reviewer 4: Interesting application of progressive compression to reduce the disk I/O overhead. Main concern is paper could be clearer about setting.
reviewer 5 (not knowledgeable about area): well-written paper. concern is that related work could be better, including state of the art on the topic. reviewer 2: likes the topic but discusses many areas for improvement (stronger experiments, better metrics reported, etc.). this is probably the most experienced reviewer marking reject. reviewer 3: paper is well written. Main issue is that experiments are limited to image classification tasks, and it's not clear how the method works at larger scale. Recommendation: interesting idea but experiments could be stronger. I lean to Reject.""" 726,"""Certifying Neural Network Audio Classifiers""","['Adversarial Examples', 'Audio Classifier', 'Speech Recognition', 'Certified Robustness', 'Deep Learning']","""We present the first end-to-end verifier of audio classifiers. Compared to existing methods, our approach enables analysis of both the entire audio processing stage and recurrent neural network architectures (e.g., LSTM). The audio processing is verified using novel convex relaxations tailored to feature extraction operations used in audio (e.g., Fast Fourier Transform) while recurrent architectures are certified via a novel binary relaxation for the recurrent unit update. We show the verifier scales to large networks while computing significantly tighter bounds than existing methods for common audio classification benchmarks: on the challenging Google Speech Commands dataset we certify 95% more inputs than the interval approximation (the only prior scalable method), for a perturbation of -90dB.""","""The paper developed log, square, and sigmoid-tanh abstract transformers to certify robustness of neural network models for audio. The work is interesting but the scope is limited. It presented a neural network certification method for one particular type of audio classifier that uses MFCC as input features and LSTM as the neural network layers. This thus may have limited interest to general readers. The paper targets to present an end-to-end solution to audio classifiers. Investigation on one particular type of audio classifier is far from sufficient. As the reviewers pointed out, there is a large literature of work on systems using raw waveform inputs. Also, many state-of-the-art systems are HMM/DNN and attention-based encoder-decoder models. In terms of neural network models, ResNet-based models, transformer models, etc. are also important. A more thorough investigation/comparison would greatly enlarge the scope of this paper. """ 727,"""Point Process Flows""","['Temporal Point Process', 'Intensity-free Point Process']","""Event sequences can be modeled by temporal point processes (TPPs) to capture their asynchronous and probabilistic nature. We propose an intensity-free framework that directly models the point process as a non-parametric distribution by utilizing normalizing flows. This approach is capable of capturing highly complex temporal distributions and does not rely on restrictive parametric forms. Comparisons with state-of-the-art baseline models on both synthetic and challenging real-life datasets show that the proposed framework is effective at modeling the stochasticity of discrete event sequences. ""","""The paper proposed to use normalizing flows to model point processes. However, the reviewers find that the paper is incremental.
There have been several works applying deep generative models to temporal data, and the proposed method is a simple combination of well-established existing works without problem-specific adaptation. """ 728,"""Learning by shaking: Computing policy gradients by physical forward-propagation""","['Reinforcement Learning', 'Control Theory']","""Model-free and model-based reinforcement learning are two ends of a spectrum. Learning a good policy without a dynamic model can be prohibitively expensive. Learning the dynamic model of a system can reduce the cost of learning the policy, but it can also introduce bias if it is not accurate. We propose a middle ground where, instead of the transition model, the sensitivity of the trajectories with respect to the perturbation (shaking) of the parameters is learned. This allows us to predict the local behavior of the physical system around a set of nominal policies without knowing the actual model. We assay our method on a custom-built physical robot in extensive experiments and show the feasibility of the approach in practice. We investigate potential challenges when applying our method to physical systems and propose solutions to each of them.""","""While the reviewers generally appreciated the idea behind the method in the paper, there was considerable concern about the experimental evaluation, which did not provide a convincing demonstration that the method works in interesting and relevant problem settings, and did not compare adequately to alternative approaches. As such, I believe this paper is not quite ready for publication in its current form.""" 729,"""Efficient Training of Robust and Verifiable Neural Networks""",[],"""Recent works have developed several methods of defending neural networks against adversarial attacks with certified guarantees. We propose that many common certified defenses can be viewed under a unified framework of regularization. This unified framework provides a technique for comparing different certified defenses with respect to robust generalization. In addition, we develop a new regularizer that is more efficient than existing certified defenses and can be used to train networks with higher certified accuracy. Our regularizer also extends to an L0 threat model and ensemble models. Through experiments on MNIST, CIFAR-10 and GTSRB, we demonstrate improvements in training speed and certified accuracy compared to state-of-the-art certified defenses.""","""This paper studies the problem of certified robustness to adversarial examples. It first demonstrates that many existing certified defenses can be viewed under a unified framework of regularization. Then, it proposes a new double margin-based regularizer to obtain better certified robustness. Overall, it has major technical issues and the rebuttal is not satisfying.""" 730,"""The Sooner The Better: Investigating Structure of Early Winning Lottery Tickets""","['pruning', 'lottery ticket hypothesis', 'deep neural network', 'compression', 'image classification']","""The recent success of the lottery ticket hypothesis by Frankle & Carbin (2018) suggests that small, sparsified neural networks can be trained as long as the network is initialized properly. Several follow-up discussions on the initialization of the sparsified model have discovered interesting characteristics such as the necessity of rewinding (Frankle et al. (2019)), the importance of the sign of the initial weights (Zhou et al. (2019)), and the transferability of the winning lottery tickets (S. Morcos et al. (2019)).
In contrast, another essential aspect of the winning ticket, the structure of the sparsified model, has been little discussed. To find the lottery ticket, unfortunately, all the prior work still relies on computationally expensive iterative pruning. In this work, we conduct an in-depth investigation of the structure of winning lottery tickets. Interestingly, we discover that there exist many lottery tickets that can achieve equally good accuracy well before the regular training schedule even finishes. We provide insights into the structure of these early winning tickets with supporting evidence. 1) Under stochastic gradient descent optimization, a lottery ticket emerges when the weight magnitudes of a model saturate; 2) Pruning before the saturation of a model causes the loss of capability in learning complex patterns, resulting in accuracy degradation. We employ memorization capacity analysis to quantitatively confirm it, and further explain why gradual pruning can achieve better accuracy than one-shot pruning. Based on these insights, we discover the early winning tickets for various ResNet architectures on both CIFAR10 and ImageNet, achieving state-of-the-art accuracy at a high pruning rate without expensive iterative pruning. In the case of ResNet50 on ImageNet, this yields a winning ticket with 75.02% Top-1 accuracy at an 80% pruning rate in only 22% of the total epochs for iterative pruning.""","""This paper does extensive experiments to understand the lottery ticket hypothesis. The lottery ticket hypothesis is that there exist sparse sub-networks inside dense large models that achieve as good accuracy as the original model. The reviewers have issues with the novelty and significance of these experiments. They felt that it didn't shed new scientific light. They felt that the number of epochs needed for early detection was still expensive. I recommend doing further studies and submitting it to another venue.""" 731,"""CONFEDERATED MACHINE LEARNING ON HORIZONTALLY AND VERTICALLY SEPARATED MEDICAL DATA FOR LARGE-SCALE HEALTH SYSTEM INTELLIGENCE""","['Confederated learning', 'siloed medical data', 'representation joining']","""A patient's health information is generally fragmented across silos. Though it is technically feasible to unite data for analysis in a manner that underpins a rapid learning healthcare system, privacy concerns and regulatory barriers limit data centralization. Machine learning can be conducted in a federated manner on patient datasets with the same set of variables, but separated across sites of care. But federated learning cannot handle the situation where different data types for a given patient are separated vertically across different organizations. We call methods that enable machine learning model training on data separated by two or more degrees confederated machine learning. We built and evaluated a confederated machine learning model to stratify the risk of accidental falls among the elderly.""","""This manuscript proposes a strategy for fitting predictive models on data separated across nodes, with respect to both samples and features. The reviewers and AC agree that the problem studied is timely and interesting, and were impressed by the size and scope of the evaluation dataset (particularly for a medical application). However, reviewers were unconvinced about the novelty and clarity of the conceptual and empirical results.
On the conceptual end, the AC also suggests that the authors look into closely related work on split learning (pseudo-url), which has also been applied to medical data settings.""" 732,"""Implicit competitive regularization in GANs""","['GAN', 'competitive optimization', 'game theory']","""Generative adversarial networks (GANs) are capable of producing high quality samples, but they suffer from numerous issues such as instability and mode collapse during training. To combat this, we propose to model the generator and discriminator as agents acting under local information, uncertainty, and awareness of their opponent. By doing so we achieve stable convergence, even when the underlying game has no Nash equilibria. We call this mechanism \emph{implicit competitive regularization} (ICR) and show that it is present in the recently proposed \emph{competitive gradient descent} (CGD). When comparing CGD to Adam using a variety of loss functions and regularizers on CIFAR10, CGD shows a much more consistent performance, which we attribute to ICR. In our experiments, we achieve the highest inception score when using the WGAN loss (without gradient penalty or weight clipping) together with CGD. This can be interpreted as minimizing a form of integral probability metric based on ICR.""","""The paper proposes to study ""implicit competitive regularization"", a phenomenon borne of taking a more nuanced game theoretic perspective on GAN training, wherein the two competing networks are ""model[ed] ... as agents acting with limited information and in awareness of their opponent"". The meaning of this is developed through a series of examples using simpler games and didactic experiments on actual GANs. An adversary-aware variant employing a Taylor approximation to the loss is also proposed. Reviewer assessment amounted to 3 relatively light reviews, two of which reported little background in the area, and one more in-depth review, which happened to also be the most critical. R1, R2, R3 all felt the contribution was interesting and valuable. R1 felt the contribution of the paper may be on the light side given that the original competitive gradient descent paper, on which this manuscript leans heavily, included GAN training (the authors disagreed); they also felt the paper would be stronger with additional datasets in the empirical evaluation (this was not addressed). R2 felt the work suffered for lack of evidence of consistency via repeated experiments, which the authors explained was due to the resource-intensity of the experiments. R5 raised that Inception scores for both the method and the baselines were noticeably worse than those reported in the literature, a concern that was resolved in an update and seemed to center on the software implementation of the metric. R5 had several technical concerns, but was generally unhappy with the presentation and finishedness of the manuscript, in particular the degree to which details are deferred to the CGD paper. (The authors maintain that CGD is but one instantiation of a more general framework, but given that the empirical section of the paper relies on this instantiation I would concur that it is under-treated.) Minor updates were made to the paper, but R5 remains unconvinced (other reviewers did not revisit their reviews at all). In particular: experiments seem promising but not final (repeatability is a concern), and the single paragraph ""intuitive explanation"" and cartoon offered in Figure 3 were viewed as insufficiently rigorous.
A great deal of the paper is spent on simple cases, but not much is said about ICR specifically in those cases. This appears to have the makings of an important contribution, but I concur with R5 that it is not quite ready for mass consumption. As is, the narrative is locally consistent but quite difficult to follow section after section. It should also be noted that ICLR as a venue has a community that is not as steeped in the game theory literature as the authors clearly are, and the assumed technical background is quite substantial here. For a game theory novice, it is difficult to tell which turns of phrase refer to concepts from game theory and which may be more informally introduced herein. I believe the paper requires redrafting for greater clarity with a more rigorous theoretical and/or empirical characterization of ICR, perhaps involving small scale experiments which clearly demonstrate the effect. I also believe the authors have done themselves a disservice by not availing themselves of 10 pages rather than 8. I recommend rejection at this time, but hope that the authors view this feedback as valuable and continue to improve their manuscript, as I (and the reviewers) believe this line of work has the potential to be quite impactful.""" 733,"""Deep Graph Translation""","['Graph translation', 'graph generation', 'deep neural network']","""Deep graph generation models have achieved great successes recently; however, these are typically unconditioned generative models that have no control over the target graph given an input graph. In this paper, we propose a novel Graph-Translation-Generative-Adversarial-Network (GT-GAN) that transforms input graphs into their target output graphs. GT-GAN consists of a graph translator equipped with innovative graph convolution and deconvolution layers to learn the translation mapping considering both global and local features, and a new conditional graph discriminator to classify target graphs by conditioning on input graphs. Extensive experiments on multiple synthetic and real-world datasets demonstrate that our proposed GT-GAN significantly outperforms other baseline methods in terms of both effectiveness and scalability. For instance, GT-GAN achieves at least 10X and 15X faster runtimes than GraphRNN and RandomVAE, respectively, when the size of the graph is around 50.""","""This paper studies a problem of graph translation, which aims at learning a graph translator to translate an input graph to a target graph using an adversarial training framework. The reviewers think the problem is interesting. However, the paper needs to improve further in terms of novelty and writing. """ 734,"""Gradientless Descent: High-Dimensional Zeroth-Order Optimization""",['Zeroth Order Optimization'],"""Zeroth-order optimization is the process of minimizing an objective pseudo-formula, given oracle access to evaluations at adaptively chosen inputs pseudo-formula. In this paper, we present two simple yet powerful GradientLess Descent (GLD) algorithms that do not rely on an underlying gradient estimate and are numerically stable.
We analyze our algorithm from a novel geometric perspective and show that, for {\it any monotone transform} of a smooth and strongly convex objective with latent dimension \ge n, our algorithm converges within an pseudo-formula-ball of the optimum in pseudo-formula evaluations, where the input dimension is pseudo-formula, pseudo-formula is the diameter of the input space and pseudo-formula is the condition number. Our rates are the first of their kind to be both 1) poly-logarithmically dependent on dimensionality and 2) invariant under monotone transformations. We further leverage our geometric perspective to show that our analysis is optimal. Both monotone invariance and the ability to utilize a low latent dimensionality are key to the empirical success of our algorithms, as demonstrated on synthetic and MuJoCo benchmarks. ""","""The paper considers an interesting algorithm for zeroth-order optimization and contains strong theory. All the reviewers agree to accept.""" 735,"""Dynamic Scale Inference by Entropy Minimization""","['unsupervised learning', 'dynamic inference', 'equivariance', 'entropy']","""Given the variety of the visual world, there is not one true scale for recognition: objects may appear at drastically different sizes across the visual field. Rather than enumerate variations across filter channels or pyramid levels, dynamic models locally predict scale and adapt receptive fields accordingly. The degree of variation and diversity of inputs makes this a difficult task. Existing methods either learn a feedforward predictor, which is not itself totally immune to the scale variation it is meant to counter, or select scales by a fixed algorithm, which cannot learn from the given task and data. We extend dynamic scale inference from feedforward prediction to iterative optimization for further adaptivity. We propose a novel entropy minimization objective for inference and optimize over task and structure parameters to tune the model to each input. Optimization during inference improves semantic segmentation accuracy and generalizes better to extreme scale variations that cause feedforward dynamic inference to falter.""","""This paper constitutes interesting progress on an important problem. I urge the authors to continue to refine their investigations, with the help of the reviewer comments; e.g., the quantitative analysis recommended by AnonReviewer4.""" 736,"""Batch Normalization has Multiple Benefits: An Empirical Study on Residual Networks""","['batch normalization', 'residual networks', 'initialization', 'batch size', 'learning rate', 'ImageNet']","""Many state of the art models rely on two architectural innovations: skip connections and batch normalization. However, batch normalization has a number of limitations. It breaks the independence between training examples within a batch, performs poorly when the batch size is too small, and significantly increases the cost of computing a parameter update in some models. This work identifies two practical benefits of batch normalization. First, it improves the final test accuracy. Second, it enables efficient training with larger batches and larger learning rates. However, we demonstrate that the increase in the largest stable learning rate does not explain why the final test accuracy is increased under a finite epoch budget. Furthermore, we show that the gap in test accuracy between residual networks with and without batch normalization can be dramatically reduced by improving the initialization scheme.
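To make the GradientLess Descent entry above concrete, here is a minimal sketch of a ball-sampling descent loop in its spirit: candidates are drawn at radii forming a geometric series and accepted only if they strictly decrease f, so the method uses nothing but function evaluations and is therefore invariant under monotone transforms of the objective. The function name, the sphere sampling, and all hyperparameters are ours, not the authors' reference implementation.

```python
import numpy as np

def gradientless_descent(f, x0, R, r, budget):
    """Zeroth-order ball-sampling descent (a sketch of the GLD idea).

    Each iteration tries one candidate at every radius in a geometric
    series between a max radius R and a min radius r, and moves to any
    candidate that strictly decreases f.
    """
    x, fx = np.asarray(x0, dtype=float), f(x0)
    radii = [R / 2**i for i in range(int(np.log2(R / r)) + 1)]
    for _ in range(budget):
        for rad in radii:
            u = np.random.randn(*x.shape)
            # Random point at distance rad (GLD samples within a ball;
            # a sphere suffices for this sketch).
            y = x + rad * u / np.linalg.norm(u)
            fy = f(y)
            if fy < fx:
                x, fx = y, fy
    return x, fx

# Example: minimize a monotone transform of a quadratic.
f = lambda z: np.exp(np.sum((z - 1.0) ** 2))
x_best, f_best = gradientless_descent(f, np.zeros(10), R=8.0, r=1e-3, budget=200)
```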
We introduce ZeroInit, which trains a 1000-layer-deep Wide-ResNet without normalization to 94.3% test accuracy on CIFAR-10 in 200 epochs at batch size 64. This initialization scheme outperforms batch normalization when the batch size is very small, and is competitive with batch normalization for batch sizes that are not too large. We also show that ZeroInit matches the validation accuracy of batch normalization when training ResNet-50-V2 on ImageNet at batch size 1024.""","""The paper is rejected based on unanimous reviews.""" 737,"""Learning to Represent Programs with Property Signatures""",['Program Synthesis'],"""We introduce the notion of property signatures, a representation for programs and program specifications meant for consumption by machine learning algorithms. Given a function with input type _in and output type _out, a property is a function of type (_in, _out) → Bool that (informally) describes some simple property of the function under consideration. For instance, if _in and _out are both lists of the same type, one property might ask: ""is the input list the same length as the output list?"". If we have a list of such properties, we can evaluate them all for our function to get a list of outputs that we will call the property signature. Crucially, we can guess the property signature for a function given only a set of input/output pairs meant to specify that function. We discuss several potential applications of property signatures and show experimentally that they can be used to improve over a baseline synthesizer so that it emits twice as many programs in less than one-tenth of the time.""","""The authors propose improved techniques for program synthesis by introducing the idea of property signatures. Property signatures help capture the specifications of the program and the authors show that using such property signatures they can synthesise programs more efficiently. I think it is an interesting work. Unfortunately, one of the reviewers has strong reservations about the work. However, after reading the reviewer's comments and the author's rebuttal to these comments I am convinced that the initial reservations of R1 have been adequately addressed. Similarly, the authors have done a great job of addressing the concerns of the other reviewers and have significantly updated their paper (including more experiments to address some of the concerns). Unfortunately R1 did not participate in subsequent discussions and it is not clear whether he/she read the rebuttal. Given the efforts put in by the authors to address different concerns of all the reviewers and considering the positive ratings given by the other two reviewers I recommend that this paper be accepted. Authors, please include all the modifications done during the rebuttal period in your final version. Also move the comparison with DeepCoder to the main body of the paper.""" 738,"""GDP: Generalized Device Placement for Dataflow Graphs""","['device placement', 'reinforcement learning', 'graph neural networks', 'transformer']","""Runtime and scalability of large neural networks can be significantly affected by the placement of operations in their dataflow graphs on suitable devices. With increasingly complex neural network architectures and heterogeneous device characteristics, finding a reasonable placement is extremely challenging even for domain experts.
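As a small illustration of the property-signature idea in the entry above: given a list of boolean properties and a set of input/output examples, the signature records how each property behaves across the examples. The three example properties below are hypothetical stand-ins of our own, not taken from the paper.

```python
def property_signature(properties, examples):
    """Evaluate boolean properties on input/output pairs.

    `properties` are functions (inp, out) -> bool and `examples` are
    (input, output) pairs specifying the target function. Here the
    signature is the per-property conjunction over all examples; real
    systems track finer outcomes such as always-true/always-false/mixed.
    """
    return [all(p(i, o) for i, o in examples) for p in properties]

# Hypothetical properties for list-to-list functions.
props = [
    lambda i, o: len(i) == len(o),          # same length as the input?
    lambda i, o: all(x in i for x in o),    # output drawn from the input?
    lambda i, o: o == sorted(o),            # output sorted?
]
examples = [([3, 1, 2], [1, 2, 3]), ([5, 4], [4, 5])]
print(property_signature(props, examples))  # [True, True, True]
```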
Most existing automated device placement approaches are impractical due to the significant amount of compute required and their inability to generalize to new, previously held-out graphs. To address both limitations, we propose an efficient end-to-end method based on a scalable sequential attention mechanism over a graph neural network that is transferable to new graphs. On a diverse set of representative deep learning models, including Inception-v3, AmoebaNet, Transformer-XL, and WaveNet, our method on average achieves 16% improvement over human experts and 9.2% improvement over the prior art with 15 times faster convergence. To further reduce the computation cost, we pre-train the policy network on a set of dataflow graphs and use a superposition network to fine-tune it on each individual graph, achieving state-of-the-art performance on large hold-out graphs with over 50k nodes, such as an 8-layer GNMT.""","""This paper presents a new reinforcement learning based approach to device placement for operations in computational graphs and demonstrates improvements for large scale standard models. The paper is borderline, with all reviewers appreciating it, even the reviewer with the lowest score. The reviewer with the lowest score is basing the score on minor reservations regarding the lack of detail in explaining the experiments. Based upon the average score, rejection is recommended. The reviewers' comments can help improve the paper and it is definitely recommended to submit it to the next conference. """ 739,"""Mean Field Models for Neural Networks in Teacher-student Setting""","['mean field model', 'optimal transport', 'ResNet']","""Mean field models have provided a convenient framework for understanding the training dynamics for certain neural networks in the infinite width limit. The resulting mean field equation characterizes the evolution of the time-dependent empirical distribution of the network parameters. Following this line of work, this paper first focuses on the teacher-student setting. For the two-layer networks, we derive the necessary condition of the stationary distributions of the mean field equation and explain an empirical phenomenon concerning training speed differences using the Wasserstein flow description. Second, we apply this approach to two extended ResNet models and characterize the necessary condition of stationary distributions in the teacher-student setting.""","""This paper studies the evolution of the mean field dynamics of two-layer fully connected and ResNet models. The focus is on a realizable or student/teacher setting where the labels are created according to a planted network. The authors study the stationary distribution of the mean-field method and use this to explain various observations. I think this is an interesting problem to study. However, the reviewers and I concur that the paper falls short in terms of clearly putting the results in the context of existing literature and demonstrating clear novel ideas. With the current writing of the paper, it is very difficult to surmise what is novel. I do agree with the authors' response that clearly they are looking at some novel aspects not studied by previous work, but the paper was not revised during the discussion period. Therefore, I do not think this paper is ready for publication. I suggest a substantial revision by the authors and recommend submission to future ML venues.
""" 740,"""Locality and Compositionality in Zero-Shot Learning""","['Zero-shot learning', 'Compositionality', 'Locality', 'Deep Learning']","""In this work we study locality and compositionality in the context of learning representations for Zero Shot Learning (ZSL). In order to well-isolate the importance of these properties in learned representations, we impose the additional constraint that, differently from most recent work in ZSL, no pre-training on different datasets (e.g. ImageNet) is performed. The results of our experiment show how locality, in terms of small parts of the input, and compositionality, i.e. how well can the learned representations be expressed as a function of a smaller vocabulary, are both deeply related to generalization and motivate the focus on more local-aware models in future research directions for representation learning.""","""This paper investigates the role of locality (ability to encode only information specific to locations of interest) and compositionality (ability to be expressed as a combination of simpler parts) in Zero-Shot Learning (ZSL). Main contributions of the paper are (i) compared to previous ZSL frameworks, the proposed approach is that the model is not allowed to be pretrained on another dataset (ii) a thorough evaluation of existing methods. Following discussions, weaknesses are (i) the proposed method (CMDIM) isn't sufficiently different or interesting compared to existing methods (ii) the paper does not do an in-depth discussion of locality and compositionality. The empirical evaluation being extensive, the accept decision is chosen. """ 741,"""A Signal Propagation Perspective for Pruning Neural Networks at Initialization""","['neural network pruning', 'signal propagation perspective', 'sparse neural networks']","""Network pruning is a promising avenue for compressing deep neural networks. A typical approach to pruning starts by training a model and then removing redundant parameters while minimizing the impact on what is learned. Alternatively, a recent approach shows that pruning can be done at initialization prior to training, based on a saliency criterion called connection sensitivity. However, it remains unclear exactly why pruning an untrained, randomly initialized neural network is effective. In this work, by noting connection sensitivity as a form of gradient, we formally characterize initialization conditions to ensure reliable connection sensitivity measurements, which in turn yields effective pruning results. Moreover, we analyze the signal propagation properties of the resulting pruned networks and introduce a simple, data-free method to improve their trainability. Our modifications to the existing pruning at initialization method lead to improved results on all tested network models for image classification tasks. Furthermore, we empirically study the effect of supervision for pruning and demonstrate that our signal propagation perspective, combined with unsupervised pruning, can be useful in various scenarios where pruning is applied to non-standard arbitrarily-designed architectures.""","""This is a strong submission, and I recommend acceptance. The idea is an elegant one: sparsify a network at initialization using a distribution that achieves approximate orthogonality of the Jacobian for each layer. This is well motivated by dynamical isometry theory, and should imply good performance of the pruned network to the extent that the training dynamics are explainable in terms of a linearization around the initial weights. 
The paper is very well written, and all design decisions are clearly motivated. The experiments are careful, and cleanly demonstrate the effectiveness of the technique. The one shortcoming is that the experiments don't use state-of-the-art modern architectures, even though that ought to have been easy to try. The architectures differ in ways that could impact the results, so it's not clear to what extent the same principles describe SOTA neural nets. Still, this is overall a very strong submission, and will be of interest to a lot of researchers at the conference. """ 742,"""Towards More Realistic Neural Network Uncertainties""","['uncertainty', 'variational inference', 'MC dropout', 'variational autoencoder', 'evaluation']","""Statistical models are inherently uncertain. Quantifying or at least upper-bounding their uncertainties is vital for safety-critical systems. While standard neural networks do not report this information, several approaches exist to integrate uncertainty estimates into them. Assessing the quality of these uncertainty estimates is not straightforward, as no direct ground truth labels are available. Instead, implicit statistical assessments are required. For regression, we propose to evaluate uncertainty realism---a strict quality criterion---with a Mahalanobis distance-based statistical test. An empirical evaluation reveals the need for uncertainty measures that are appropriate to upper-bound heavy-tailed empirical errors. Alongside, we transfer the variational U-Net classification architecture to standard supervised image-to-image tasks. It provides two uncertainty mechanisms and significantly improves uncertainty realism compared to a plain encoder-decoder model.""","""This paper proposes two contributions to improve uncertainty in deep learning. The first is a Mahalanobis distance based statistical test and the second a model architecture. Unfortunately, the reviewers found the message of the paper somewhat confusing and particularly didn't understand the connection between these two contributions. A major question from the reviewers is why the proposed statistical test is better than using a proper scoring rule such as negative log likelihood. Some empirical justification of this should be presented.""" 743,"""MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius""","['Adversarial Robustness', 'Provable Adversarial Defense', 'Randomized Smoothing', 'Robustness Certification']","""Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly. In this paper, we propose the MACER algorithm, which learns robust models without using adversarial training but performs better than all existing provable l2-defenses. Recent work shows that randomized smoothing can be used to provide a certified l2 radius to smoothed classifiers, and our algorithm trains provably robust smoothed classifiers via MAximizing the CErtified Radius (MACER). The attack-free characteristic makes MACER faster to train and easier to optimize. In our experiments, we show that our method can be applied to modern deep neural networks on a wide range of datasets, including Cifar-10, ImageNet, MNIST, and SVHN. For all tasks, MACER spends less training time than state-of-the-art adversarial training algorithms, and the learned models achieve larger average certified radius.""","""The submission proposes a robustness certification technique for smoothed classifiers for a given l_2 attack radius.
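For context on the MACER entry above, the quantity being maximized is the certified l2 radius of a randomized-smoothing classifier. Below is a minimal sketch of that radius computation in the binary-bound form of Cohen et al. (2019); MACER's contribution is to differentiate through and maximize a radius of this kind during training, which this fragment does not show.

```python
from scipy.stats import norm

def certified_radius(sigma, p_lower):
    """Certified l2 radius of a Gaussian-smoothed classifier.

    `sigma` is the standard deviation of the Gaussian noise used for
    smoothing, and `p_lower` is a lower confidence bound on the
    probability that noisy copies of the input keep the top class.
    """
    return sigma * norm.ppf(p_lower)

# Example: with noise sigma = 0.5 and top-class probability >= 0.9,
# the smoothed prediction is certifiably constant within radius:
print(certified_radius(0.5, 0.9))  # ~0.64
```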
Strengths: -The majority opinion is that this work is a non-trivial extension of prior work to provide radius certification. -The work is more efficient than strong recent baselines and provides better performance. -It successfully achieves this while avoiding adversarial training, which is another novel aspect. Weaknesses: -There were some initial concerns about missing experiments and unfair comparisons but these were sufficiently addressed in the discussion. AC shares the majority opinion and recommends acceptance.""" 744,"""Robust anomaly detection and backdoor attack detection via differential privacy""","['outlier detection', 'novelty detection', 'backdoor attack detection', 'system log anomaly detection', 'differential privacy']","""Outlier detection and novelty detection are two important topics for anomaly detection. Supposing the majority of a dataset is drawn from a certain distribution, outlier detection and novelty detection both aim to detect data samples that do not fit that distribution. Outliers refer to data samples within this dataset, while novelties refer to new samples. Meanwhile, backdoor poisoning attacks on machine learning models are achieved through injecting poisoning samples into the training dataset, which could be regarded as outliers that are intentionally added by attackers. Differential privacy has been proposed to avoid leaking any individual's information when aggregated analysis is performed on a given dataset. It is typically achieved by adding random noise, either directly to the input dataset, or to intermediate results of the aggregation mechanism. In this paper, we demonstrate that applying differential privacy can improve the utility of outlier detection and novelty detection, with an extension to detect poisoning samples in backdoor attacks. We first present a theoretical analysis of how differential privacy helps with the detection, and then conduct extensive experiments to validate the effectiveness of differential privacy in improving outlier detection, novelty detection, and backdoor attack detection.""","""Thanks for the submission. This paper leverages the stability of differential privacy for the problems of anomaly and backdoor attack detection. The reviewers agree that this application of differential privacy is novel. The theory of the paper appears to be a bit weak (with very strong assumptions on the private learner), although it reflects the basic underlying idea of the detection technique. The paper also provides some empirical evaluation of the technique.""" 745,"""A Closer Look at Deep Policy Gradients""","['deep policy gradient methods', 'deep reinforcement learning', 'trpo', 'ppo']",""" We study how the behavior of deep policy gradient algorithms reflects the conceptual framework motivating their development. To this end, we propose a fine-grained analysis of state-of-the-art methods based on key elements of this framework: gradient estimation, value prediction, and optimization landscapes. Our results show that the behavior of deep policy gradient algorithms often deviates from what their motivating framework would predict: surrogate rewards do not match the true reward landscape, learned value estimators fail to fit the true value function, and gradient estimates poorly correlate with the ""true"" gradient.
The mismatch between predicted and empirical behavior we uncover highlights our poor understanding of current methods, and indicates the need to move beyond current benchmark-centric evaluation methods.""","""The paper empirically studies the behaviour of deep policy gradient algorithms, and reveals several unexpected observations that are not explained by the current theory. All three reviewers are excited about this work and recommend acceptance.""" 746,"""EXACT ANALYSIS OF CURVATURE CORRECTED LEARNING DYNAMICS IN DEEP LINEAR NETWORKS""",[],"""Deep neural networks exhibit complex learning dynamics due to the highly non-convex loss landscape, which causes slow convergence and vanishing gradient problems. Second order approaches, such as natural gradient descent, mitigate such problems by neutralizing the effect of potentially ill-conditioned curvature on the gradient-based updates, yet precise theoretical understanding of how such curvature correction affects the learning dynamics of deep networks has been lacking. Here, we analyze the dynamics of training deep neural networks under a generalized family of natural gradient methods that applies curvature corrections, and derive precise analytical solutions. Our analysis reveals that curvature corrected update rules preserve many features of gradient descent, such that the learning trajectory of each singular mode in natural gradient descent follows precisely the same path as gradient descent, while only accelerating the temporal dynamics along the path. We also show that layer-restricted approximations of natural gradient, which are widely used in most second order methods (e.g. K-FAC), can significantly distort the learning trajectory into highly diverging dynamics that differ from true natural gradient, which may lead to undesirable network properties. We also introduce fractional natural gradient that applies partial curvature correction, and show that it provides most of the benefit of full curvature correction in terms of convergence speed, with the additional benefit of superior numerical stability and neutralizing vanishing/exploding gradient problems, which holds true also in layer-restricted approximations.""","""This paper aims to study the effect of curvature correction techniques on training dynamics. The focus is on understanding how natural gradient based methods affect training dynamics of deep linear networks. The main conclusion of the analysis is that it does not fundamentally affect the path of convergence but rather accelerates convergence. They also show that layer correction techniques alone do not suffice. In the discussion the reviewers raised concerns about extrapolating too much based on linear networks and also the lack of a cohesive literature review. One reviewer also mentioned that there is not enough technical detail. These issues were partially addressed in the response. I think the topic of the paper is interesting and timely. However, I concur with Reviewer #2 that there is still a lot of missing detail and the connection with the nonlinear case is not clear (however the latter is not strictly necessary in my opinion if the rest of the paper is better written). As a result I think the paper in its current form is not ready for publication.
""" 747,"""Unsupervised Hierarchical Graph Representation Learning with Variational Bayes""","['Hierarchical Graph Representation', 'Unsupervised Graph Learning', 'Variational Bayes', 'Graph classification']","""Hierarchical graph representation learning is an emerging subject owing to the increasingly popular adoption of graph neural networks in machine learning and applications. Loosely speaking, work under this umbrella falls into two categories: (a) use a predefined graph hierarchy to perform pooling; and (b) learn the hierarchy for a given graph through differentiable parameterization of the coarsening process. These approaches are supervised; a predictive task with ground-truth labels is used to drive the learning. In this work, we propose an unsupervised approach, \textsc{BayesPool}, with the use of variational Bayes. It produces graph representations given a predefined hierarchy. Rather than relying on labels, the training signal comes from the evidence lower bound of encoding a graph and decoding the subsequent one in the hierarchy. Node features are treated latent in this variational machinery, so that they are produced as a byproduct and are used in downstream tasks. We demonstrate a comprehensive set of experiments to show the usefulness of the learned representation in the context of graph classification.""","""The paper presents an unsupervised method for graph representation, building upon Loukas' method for generating a sequence of gradually coarsened graphs. The contribution is an ""encoder-decoder"" architecture trained by variational inference, where the encoder produces the embedding of the nodes in the next graph of the sequence, and the decoder produces the structure of the next graph. One important merit of the approach is that this unsupervised representation can be used effectively for supervised learning, with results quite competitive to the state of the art. However the reviewers were unconvinced by the novelty and positioning of the approach. The point of whether the approach should be viewed as variational Bayesian, or simply variational approximation was much debated between the reviewers and the authors. The area chair encourages the authors to pursue this very promising research, and to clarify the paper; perhaps the use of ""encoder-decoder"" generated too much misunderstanding. Another graph NN paper you might be interested in is ""Edge Contraction Pooling for Graph NNs"", by Frederik Diehl. """ 748,"""Learning to Plan in High Dimensions via Neural Exploration-Exploitation Trees""","['learning to plan', 'representation learning', 'learning to design algorithm', 'reinforcement learning', 'meta learning']","""We propose a meta path planning algorithm named \emph{Neural Exploration-Exploitation Trees~(NEXT)} for learning from prior experience for solving new path planning problems in high dimensional continuous state and action spaces. Compared to more classical sampling-based methods like RRT, our approach achieves much better sample efficiency in high-dimensions and can benefit from prior experience of planning in similar environments. More specifically, NEXT exploits a novel neural architecture which can learn promising search directions from problem structures. The learned prior is then integrated into a UCB-type algorithm to achieve an online balance between \emph{exploration} and \emph{exploitation} when solving a new problem. 
We conduct thorough experiments to show that NEXT accomplishes new planning problems with more compact search trees and significantly outperforms state-of-the-art methods on several benchmarks.""","""All reviewers unanimously accept the paper.""" 749,"""The Variational InfoMax AutoEncoder""","['autoencoder', 'information theory', 'infomax', 'vae']","""We propose the Variational InfoMax AutoEncoder (VIMAE), an autoencoder based on a new learning principle for unsupervised models: the Capacity-Constrained InfoMax, which allows the learning of a disentangled representation while maintaining optimal generative performance. The variational capacity of an autoencoder is defined and we investigate its role. We associate the two main properties of a Variational AutoEncoder (VAE), generation quality and disentangled representation, with two different information concepts, respectively Mutual Information and network capacity. We deduce that a small capacity autoencoder tends to learn a more robust and disentangled representation than a high capacity one. This observation is confirmed by the computational experiments.""","""This paper describes a new generative model based on information theoretic principles for better representation learning. The approach is theoretically related to the InfoVAE and beta-VAE work, and is contrasted to vanilla VAEs. The reviewers have expressed strong concerns about the novelty of this work. Some of the very closely related baselines (e.g. Zhao et al., Chen et al., Alemi et al.) are not compared against, and the contributions of this work over the baselines are not clearly discussed. Furthermore, the experimental section could be made stronger with more quantitative metrics. For these reasons I recommend rejection.""" 750,"""Depth-Adaptive Transformer""","['Deep learning', 'natural language processing', 'sequence modeling']","""State of the art sequence-to-sequence models for large scale tasks perform a fixed number of computations for each input sequence regardless of whether it is easy or hard to process. In this paper, we train Transformer models which can make output predictions at different stages of the network and we investigate different ways to predict how much computation is required for a particular sequence. Unlike dynamic computation in Universal Transformers, which applies the same set of layers iteratively, we apply different layers at every step to adjust both the amount of computation as well as the model capacity. On IWSLT German-English translation our approach matches the accuracy of a well-tuned baseline Transformer while using less than a quarter of the decoder layers.""","""This paper presents an adaptive computation time method for reducing the average-case inference time of a transformer sequence-to-sequence model. The reviewers reached a rough consensus: This paper proposes a novel method for an important problem, and offers reasonably compelling evidence for that method. However, the experiments aren't *quite* sufficient to isolate the cause of the observed improvements, and the discussion of related work could be clearer.
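To illustrate the depth-adaptive decoding idea in the entry above, here is a toy confidence-based early-exit loop: a classifier attached after every decoder layer emits a prediction, and computation halts once the prediction is confident enough. All names, the linear stand-in "layers", and the fixed threshold are our simplifications; the paper also studies learned halting mechanisms.

```python
import torch

def depth_adaptive_decode(h0, layers, classifiers, threshold):
    """Run decoder layers one at a time and exit early once the
    attached classifier is confident (a toy sketch of depth-adaptive
    computation, not the paper's Transformer implementation)."""
    h = h0
    for layer, clf in zip(layers, classifiers):
        h = layer(h)
        probs = torch.softmax(clf(h), dim=-1)
        conf, token = probs.max(dim=-1)
        if conf.item() >= threshold:   # easy inputs exit early
            return token, probs
    return token, probs                # hard inputs use the full depth

# Toy usage with linear 'layers' over a 16-dim state and a 100-way vocab.
layers = [torch.nn.Linear(16, 16) for _ in range(6)]
classifiers = [torch.nn.Linear(16, 100) for _ in range(6)]
token, probs = depth_adaptive_decode(torch.randn(16), layers, classifiers, 0.5)
```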
I acknowledge that this paper is borderline (and thank R3 for an extremely thorough discussion, both in public and privately), but I lean toward acceptance: The paper doesn't have any fatal flaws, and it brings some fresh ideas to an area where further work would be valuable.""" 751,"""Deep Mining: Detecting Anomalous Patterns in Neural Network Activations with Subset Scanning""","['anomalous pattern detection', 'subset scanning', 'node activations', 'adversarial noise']","""This work views neural networks as data generating systems and applies anomalous pattern detection techniques on that data in order to detect when a network is processing a group of anomalous inputs. Detecting anomalies is a critical component for multiple machine learning problems including detecting the presence of adversarial noise added to inputs. More broadly, this work is a step towards giving neural networks the ability to detect groups of out-of-distribution samples. This work introduces ""Subset Scanning"" methods from the anomalous pattern detection domain to the task of detecting anomalous inputs to neural networks. Subset Scanning allows us to answer the question: ""Which subset of inputs have larger-than-expected activations at which subset of nodes?"" Framing the adversarial detection problem this way allows us to identify systematic patterns in the activation space that span multiple adversarially noised images. Such images are ""weird together"". Leveraging this common anomalous pattern, we show increased detection power as the proportion of noised images increases in a test set. Detection power and accuracy results are provided for targeted adversarial noise added to CIFAR-10 images on a 20-layer ResNet using the Basic Iterative Method attack. ""","""The paper investigates the use of subset scanning for the detection of anomalous patterns in the input to a neural network. The paper has received mixed reviews (one positive and two negative). The reviewers agree that the idea is interesting, has novelty, and is worth investigating. At the same time, they raise issues about the clarity and the lack of comparisons with baselines. Despite a very detailed rebuttal, both of the negative reviewers still feel that addressing their concerns through paper revision would be needed for acceptance.""" 752,"""Abductive Commonsense Reasoning""","['Abductive Reasoning', 'Commonsense Reasoning', 'Natural Language Inference', 'Natural Language Generation']","""Abductive reasoning is inference to the most plausible explanation. For example, if Jenny finds her house in a mess when she returns from work, and remembers that she left a window open, she can hypothesize that a thief broke into her house and caused the mess, as the most plausible explanation. While abduction has long been considered to be at the core of how people interpret and read between the lines in natural language (Hobbs et al., 1988), there has been relatively little research in support of abductive natural language inference and generation. We present the first study that investigates the viability of language-based abductive reasoning. We introduce a challenge dataset, ART, that consists of over 20k commonsense narrative contexts and 200k explanations. Based on this dataset, we conceptualize two new tasks: (i) Abductive NLI: a multiple-choice question answering task for choosing the more likely explanation, and (ii) Abductive NLG: a conditional generation task for explaining given observations in natural language.
On Abductive NLI, the best model achieves 68.9% accuracy, well below human performance of 91.4%. On Abductive NLG, the current best language generators struggle even more, as they lack reasoning capabilities that are trivial for humans. Our analysis leads to new insights into the types of reasoning that deep pre-trained language models fail to perform, despite their strong performance on the related but more narrowly defined task of entailment NLI, pointing to interesting avenues for future research.""","""This paper presents a dataset, created using a combination of existing resources, crowdsourcing, and model-based filtering, that aims to test models' understanding of typical progressions of events in everyday situations. The dataset represents a challenge for a range of state of the art models for NLP and commonsense reasoning, and also can be used productively as a training task in transfer learning. After some discussion, reviewers came to a consensus that this represents an interesting contribution and a potentially valuable resource. There were some concerns, not fully resolved, about the implications of using model-based filtering during data creation, but these were not so serious as to invalidate the primary contributions of the paper. While the thematic fit with ICLR is a bit weak (the primary contribution of the paper appears to be a dataset and task definition, rather than anything specific to representation learning), there are relevant secondary contributions, and I think that this work will be practically of interest to a reasonable fraction of the ICLR audience. """ 753,"""Zeno++: Robust Fully Asynchronous SGD""","['fault-tolerance', 'Byzantine-tolerance', 'security', 'SGD', 'asynchronous']","""We propose Zeno++, a new robust asynchronous Stochastic Gradient Descent (SGD) procedure which tolerates Byzantine failures of the workers. In contrast to previous work, Zeno++ removes some unrealistic restrictions on worker-server communications, allowing for fully asynchronous updates from anonymous workers, arbitrarily stale worker updates, and the possibility of an unbounded number of Byzantine workers. The key idea is to estimate the descent of the loss value after the candidate gradient is applied, where large descent values indicate that the update results in optimization progress. We prove the convergence of Zeno++ for non-convex problems under Byzantine failures. Experimental results show that Zeno++ outperforms existing approaches.""","""Main content: Blind review #2 summarizes it well: This paper investigates the security of distributed asynchronous SGD. The authors propose Zeno++, a worker-server asynchronous implementation of SGD which is robust to Byzantine failures. To ensure that the gradients sent by the workers are correct, the Zeno++ server scores each worker's gradients using a reference gradient computed on a secret validation set. If the score is under a given threshold, then the worker gradient is discarded. The authors provide a convergence guarantee for the Zeno++ optimizer for non-convex functions. In addition, they provide an empirical evaluation of Zeno++ on the CIFAR10 dataset and compare with various baselines. -- Discussion: Reviews are generally weak, citing the limited novelty of the approach compared with Zeno, but the rebuttal of the authors on Nov 15 is fair (too long to summarize here).
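The scoring rule at the heart of the Zeno++ entry above can be sketched in a few lines: the server estimates the loss descent a candidate gradient would cause, using a reference gradient from a secret validation set, penalizes large updates, and drops candidates whose score falls below a threshold. Variable names and the zero threshold below are our choices, not the paper's code.

```python
import numpy as np

def zeno_score(candidate_grad, ref_grad, lr, rho):
    """Score a worker's candidate gradient (our sketch of the Zeno++
    idea): the first-order estimate of loss descent under the
    validation-set reference gradient, minus a magnitude penalty."""
    g, v = np.asarray(candidate_grad), np.asarray(ref_grad)
    return lr * float(v @ g) - rho * float(g @ g)

# Server-side usage: apply the update only if the score clears a threshold.
ref = np.array([1.0, -2.0, 0.5])    # gradient on the secret validation batch
cand = np.array([0.9, -1.8, 0.7])   # gradient pushed by an anonymous worker
if zeno_score(cand, ref, lr=0.1, rho=0.01) >= 0.0:
    pass  # accept: apply `cand` to the model parameters
```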
-- Recommendation and justification: I do not feel strongly enough to override the weak reviews (but if there is room in the program I would support a weak accept).""" 754,"""Semi-Supervised Few-Shot Learning with Prototypical Random Walks""","['Few-Shot Learning', 'Semi-Supervised Learning', 'Random Walks']","""Learning from a few examples is a key characteristic of human intelligence that inspired machine learning researchers to build data-efficient AI models. Recent progress has shown that few-shot learning can be improved with access to unlabelled data, known as semi-supervised few-shot learning (SS-FSL). We introduce an SS-FSL approach, dubbed Prototypical Random Walk Networks (PRWN), built on top of Prototypical Networks (PN). We develop a random walk semi-supervised loss that enables the network to learn representations that are compact and well-separated. Our work is related to very recent developments in graph-based approaches for few-shot learning. However, we show that compact and well-separated class embeddings can be achieved by our prototypical random walk notion without needing additional graph-NN parameters or requiring a transductive setting where the collective test set is provided. Our model outperforms prior art in most benchmarks with significant improvements in some cases. For example, in a mini-Imagenet 5-shot classification task, we obtain 69.65% accuracy, compared to the 64.59% state-of-the-art. Our model, trained with 40% of the data as labelled, compares competitively against fully supervised prototypical networks, trained on 100% of the labels, even outperforming them in the 1-shot mini-Imagenet case with 50.89% to 49.4% accuracy. We also show that our model is resistant to distractors, unlabeled data that does not belong to any of the training classes, hence reflecting robustness to labelled/unlabelled class distribution mismatch. We also performed a challenging discriminative power test, showing a relative improvement on top of the baseline of 14% on 20 classes on mini-Imagenet and 60% on 800 classes on Omniglot.""","""This paper proposed a semi-supervised few-shot learning method, on top of Prototypical Networks, wherein a regularization term is added that involves a random walk from a prototype to unlabeled samples and back to the same prototype. SotA results were obtained in several experiments by using this method. All reviewers agreed that the novelty of the paper is not that high compared with Haeusser et al. (2017), and that the analysis and the experiments could be improved.""" 755,"""Meta-learning curiosity algorithms""","['meta-learning', 'exploration', 'curiosity']","""We hypothesize that curiosity is a mechanism found by evolution that encourages meaningful exploration early in an agent's life in order to expose it to experiences that enable it to obtain high rewards over the course of its lifetime. We formulate the problem of generating curious behavior as one of meta-learning: an outer loop will search over a space of curiosity mechanisms that dynamically adapt the agent's reward signal, and an inner loop will perform standard reinforcement learning using the adapted reward signal. However, current meta-RL methods based on transferring neural network weights have only generalized between very similar tasks. To broaden the generalization, we instead propose to meta-learn algorithms: pieces of code similar to those designed by humans in ML papers.
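Here is a sketch of the random-walk machinery behind the PRWN entry above: transition probabilities from class prototypes to unlabeled embeddings and back are formed by row-wise softmaxes over similarities, and the resulting round-trip matrix is regularized toward the identity so that walks return to the class they started from. This is our reading of the abstract (in the spirit of Haeusser et al.'s association loss), not the authors' code.

```python
import numpy as np

def prw_roundtrip(protos, unlabeled):
    """Round-trip matrix of a prototypical random walk: go from each
    prototype to the unlabeled points and back; the regularizer would
    push this K x K matrix toward the identity."""
    def softmax(z, axis):
        z = z - z.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    sim = protos @ unlabeled.T          # (K, M) similarity scores
    p_to_u = softmax(sim, axis=1)       # prototype -> unlabeled point
    u_to_p = softmax(sim.T, axis=1)     # unlabeled point -> prototype
    return p_to_u @ u_to_p              # (K, K) round-trip probabilities

K, M, d = 5, 40, 16
roundtrip = prw_roundtrip(np.random.randn(K, d), np.random.randn(M, d))
# cross-entropy(roundtrip, eye(K)) would serve as the random-walk loss
```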
Our rich language of programs combines neural networks with other building blocks such as buffers, nearest-neighbor modules and custom loss functions. We demonstrate the effectiveness of the approach empirically, finding two novel curiosity algorithms that perform on par with or better than human-designed published curiosity algorithms in domains as disparate as grid navigation with image inputs, acrobot, lunar lander, ant and hopper.""","""This paper proposes meta-learning auxiliary rewards as specified by a DSL. The approach was considered innovative and the results interesting by all reviewers. The paper is clearly of an acceptable standard, with the main concerns raised by reviewers having been addressed (admittedly at the 11th hour) by the authors during the discussion period. Accept.""" 756,"""WORD SEQUENCE PREDICTION FOR AMHARIC LANGUAGE""","['Word prediction', 'POS', 'Statistical approach']","""Word prediction is guessing what word comes after, based on some current information, and it is the main focus of this study. Even though Amharic is used by a large population, no significant work has been done on this topic. In this study, an Amharic word sequence prediction model is developed using machine learning. We used statistical methods based on Hidden Markov Models, incorporating detailed part-of-speech tags and user profiling or adaptation. One motivation for this research is to overcome the challenges posed by inflected languages. Word sequence prediction is a challenging task for inflected languages (Gustavii & Pettersson, 2003; Seyyed & Assi, 2005). These languages are morphologically rich and have enormous numbers of word forms, i.e., a single word can take many different forms. As Amharic is morphologically rich, it shares this problem (Tessema, 2014). This makes word prediction much more difficult and results in poor performance. Previous research used a dictionary approach with no consideration of context information. For this reason, storing all word forms in a dictionary will not solve the problem as it does for English and other less inflected languages. Therefore, we introduce two models, tags-and-words and linear interpolation, that use part-of-speech tag information in addition to word n-grams in order to maximize the likelihood of syntactic appropriateness of the suggestions. The statistics included in the systems vary from single word frequencies to part-of-speech tag n-grams. We describe a combined statistical and lexical word prediction system and develop bigram and trigram Amharic language models for training. The overall study followed the Design Science Research Methodology (DSRM). ""","""This paper presents a language model for Amharic using HMMs and incorporating POS tags. The paper is very short and lacks essential parts such as describing the exact model and the experimental design and results. The reviewers all rejected this paper, and there was no author rebuttal. This paper is clearly not appropriate for publication at ICLR. """ 757,"""Probabilistic View of Multi-agent Reinforcement Learning: A Unified Approach""","['multi-agent reinforcement learning', 'maximum entropy reinforcement learning']","""Formulating the reinforcement learning (RL) problem in the framework of probabilistic inference not only offers a new perspective about RL, but also yields practical algorithms that are more robust and easier to train.
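The linear-interpolation model named in the Amharic entry above combines n-gram estimates of different orders with fixed weights; a generic sketch is below. The paper additionally interpolates POS-tag n-grams, which this fragment omits, and all names and toy numbers here are ours.

```python
def interpolated_prob(w, w1, w2, unigram, bigram, trigram, lambdas):
    """P(w | w2 w1) as a weighted mix of unigram, bigram, and trigram
    relative-frequency estimates; lambdas are non-negative weights
    that sum to 1."""
    l1, l2, l3 = lambdas
    return (l1 * unigram.get(w, 0.0)
            + l2 * bigram.get((w1, w), 0.0)
            + l3 * trigram.get((w2, w1, w), 0.0))

# Toy usage with made-up estimates for predicting the next word.
uni = {"house": 0.01}
bi = {("the", "house"): 0.20}
tri = {("in", "the", "house"): 0.35}
print(interpolated_prob("house", "the", "in", uni, bi, tri, (0.2, 0.5, 0.3)))
# 0.2*0.01 + 0.5*0.20 + 0.3*0.35 = 0.207
```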
While this connection between RL and probabilistic inference has been extensively studied in the single-agent setting, it has not yet been fully understood in the multi-agent setup. In this paper, we pose the problem of multi-agent reinforcement learning as the problem of performing inference in a particular graphical model. We model the environment, as seen by each of the agents, using separate but related Markov decision processes. We derive a practical off-policy maximum-entropy actor-critic algorithm that we call Multi-agent Soft Actor-Critic (MA-SAC) for performing approximate inference in the proposed model using variational inference. MA-SAC can be employed in both cooperative and competitive settings. Through experiments, we demonstrate that MA-SAC outperforms a strong baseline on several multi-agent scenarios. While MA-SAC is one resultant multi-agent RL algorithm that can be derived from the proposed probabilistic framework, our work provides a unified view of maximum-entropy algorithms in the multi-agent setting.""","""The paper takes the perspective of ""reinforcement learning as inference"", extends it to the multi-agent setting and derives a multi-agent RL algorithm that extends Soft Actor Critic. Several reviewer questions were addressed in the rebuttal phase, including key design choices. A common concern was the limited empirical comparison, including comparisons to existing approaches. """ 758,"""Adversarial Training: embedding adversarial perturbations into the parameter space of a neural network to build a robust system""","['Adversarial Training', 'Adversarial Examples']","""Adversarial training, in which a network is trained on both adversarial and clean examples, is one of the most trusted defense methods against adversarial attacks. However, there are three major practical difficulties in implementing and deploying this method: it is expensive in terms of extra memory and computation costs; there is an accuracy trade-off between clean and adversarial examples; and the adversarial perturbations lack diversity. Classical adversarial training uses fixed, precomputed perturbations in adversarial examples (input space). In contrast, we introduce dynamic adversarial perturbations into the parameter space of the network, by adding perturbation biases to the fully connected layers of a deep convolutional neural network. During training, using only clean images, the perturbation biases are updated in the Fast Gradient Sign Direction to automatically create and store adversarial perturbations by recycling the gradient information already computed. The network learns and adjusts itself automatically to these learned adversarial perturbations. Thus, we can achieve adversarial training with negligible cost compared to requiring a training set of adversarial example images. In addition, if combined with classical adversarial training, our perturbation biases can alleviate accuracy trade-off difficulties, and diversify adversarial perturbations.""","""This paper proposes to introduce perturbation biases as a counter-measure against adversarial perturbations. The perturbation biases are additional bias terms that are trained by a variant of gradient ascent. Serious issues were raised in the comments. No rebuttal was provided.""" 759,"""NORML: Nodal Optimization for Recurrent Meta-Learning""","['meta-learning', 'learning to learn', 'few-shot classification', 'memory-based optimization']","""Meta-learning is an exciting and powerful paradigm that aims to improve the effectiveness of current learning systems.
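In its simplest reading, the update rule described in the adversarial-training entry above reduces to a fast-gradient-sign step on a dedicated bias vector; a sketch under that reading is below. The names and the epsilon value are ours; the paper's exact schedule may differ.

```python
import numpy as np

def update_perturbation_bias(bias, grad_wrt_bias, epsilon):
    """Move a perturbation bias in the Fast Gradient Sign direction of
    the loss gradient, recycling a gradient already computed during
    the backward pass on clean inputs."""
    return bias + epsilon * np.sign(grad_wrt_bias)

# Toy usage: a perturbation bias attached to a 128-unit fully connected layer.
bias = np.zeros(128)
grad = np.random.randn(128)   # stand-in for dLoss/dBias from backprop
bias = update_perturbation_bias(bias, grad, epsilon=0.01)
```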
By formulating the learning process as an optimization problem, a model can learn how to learn while requiring significantly less data or experience than traditional approaches. Gradient-based meta-learning methods aim to do just that; however, recent work has shown that the effectiveness of these approaches is primarily due to feature reuse, and very little has to do with priming the system for rapid learning (learning to make effective weight updates on unseen data distributions). This work introduces Nodal Optimization for Recurrent Meta-Learning (NORML), a novel meta-learning framework where an LSTM-based meta-learner performs neuron-wise optimization on a learner for efficient task learning. Crucially, the number of meta-learner parameters needed in NORML increases linearly with the number of learner parameters, allowing NORML to potentially scale to learner networks with very large numbers of parameters. While NORML also benefits from feature reuse, it is shown experimentally that the meta-learner LSTM learns to make effective weight updates using information from previous data-points and update steps.""","""The paper proposes an LSTM-based meta-learning approach that learns how to update each neuron in another model for best few-shot learning performance. The reviewers agreed that this is a worthwhile problem and the approach has merits, but that it is hard to judge the significance of the work, given limited or unclear novelty compared to the work of Ravi & Larochelle (2017) and a lack of fair baseline comparisons. I recommend rejecting the paper for now, but encourage the authors to take the reviewers' feedback into account and submit to another venue.""" 760,"""Amortized Nesterov's Momentum: Robust and Lightweight Momentum for Deep Learning""","['momentum', 'nesterov', 'optimization', 'deep learning', 'neural networks']","""Stochastic Gradient Descent (SGD) with Nesterov's momentum is a widely used optimizer in deep learning, which is observed to have excellent generalization performance. However, due to the large stochasticity, SGD with Nesterov's momentum is not robust, i.e., its performance may deviate significantly from the expectation. In this work, we propose Amortized Nesterov's Momentum, a special variant of Nesterov's momentum which has more robust iterates, faster convergence in the early stage and higher efficiency. Our experimental results show that this new momentum achieves similar (sometimes better) generalization performance with little-to-no tuning. In the convex case, we provide optimal convergence rates for our new methods and discuss how the theorems explain the empirical results. ""","""This paper introduces a variant of Nesterov momentum which saves computation by only periodically recomputing certain quantities, and which is claimed to be more robust in the stochastic setting. The method seems easy to use, so there's probably no harm in trying it. However, the reviewers and I don't find the benefits persuasive. While there is theoretical analysis, its role is to show that the algorithm maintains the convergence properties while having other benefits. However, the computations saved by amortization seem like a small fraction of the total cost, and I'm having trouble seeing how the increased ""robustness"" is justified. (It's possible I missed something, but clarity of exposition is another area the paper could use some improvement in.) Overall, this submission seems promising, but probably needs to be cleaned up before publication at ICLR.
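For reference alongside the Amortized Nesterov entry above, here is the standard SGD-with-Nesterov's-momentum step that the paper amortizes; the amortized variant, which refreshes the momentum-related quantities only periodically, is not reproduced here, so this is a generic sketch only.

```python
import numpy as np

def nesterov_sgd_step(x, m, grad_fn, lr, mu):
    """One step of SGD with Nesterov's momentum: evaluate the gradient
    at the look-ahead point, update the momentum buffer, then move."""
    g = grad_fn(x + mu * m)   # look-ahead gradient
    m = mu * m - lr * g
    return x + m, m

x, m = np.zeros(5), np.zeros(5)
grad = lambda z: 2 * (z - 3.0)          # gradient of ||z - 3||^2
for _ in range(100):
    x, m = nesterov_sgd_step(x, m, grad, lr=0.05, mu=0.9)
print(np.round(x, 2))                   # approaches 3.0 in every coordinate
```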
""" 761,"""Precision Gating: Improving Neural Network Efficiency with Dynamic Dual-Precision Activations""","['deep learning', 'neural network', 'dynamic quantization', 'dual precision', 'efficient gating']","""We propose precision gating (PG), an end-to-end trainable dynamic dual-precision quantization technique for deep neural networks. PG computes most features in a low precision and only a small proportion of important features in a higher precision to preserve accuracy. The proposed approach is applicable to a variety of DNN architectures and significantly reduces the computational cost of DNN execution with almost no accuracy loss. Our experiments indicate that PG achieves excellent results on CNNs, including statically compressed mobile-friendly networks such as ShuffleNet. Compared to the state-of-the-art prediction-based quantization schemes, PG achieves the same or higher accuracy with 2.4 less compute on ImageNet. PG furthermore applies to RNNs. Compared to 8-bit uniform quantization, PG obtains a 1.2% improvement in perplexity per word with 2.7 computational cost reduction on LSTM on the Penn Tree Bank dataset.""","""The submission proposes an approach to accelerate network training by modifying the precision of individual weights, allowing a substantial speed up without a decrease in model accuracy. The magnitude of the activations determines whether it will be computed at a high or low bitwidth. The reviewers agreed that the paper should be published given the strong results, though there were some salient concerns which the authors should address in their final revision, such as how the method could be implemented on GPU and what savings could be achieved. Recommendation is to accept.""" 762,"""Coordinated Exploration via Intrinsic Rewards for Multi-Agent Reinforcement Learning""","['multi-agent reinforcement learning', 'multi-agent', 'exploration', 'intrinsic motivation', 'MARL', 'coordinated exploration']","""Solving tasks with sparse rewards is one of the most important challenges in reinforcement learning. In the single-agent setting, this challenge has been addressed by introducing intrinsic rewards that motivate agents to explore unseen regions of their state spaces. Applying these techniques naively to the multi-agent setting results in agents exploring independently, without any coordination among themselves. We argue that learning in cooperative multi-agent settings can be accelerated and improved if agents coordinate with respect to what they have explored. In this paper we propose an approach for learning how to dynamically select between different types of intrinsic rewards which consider not just what an individual agent has explored, but all agents, such that the agents can coordinate their exploration and maximize extrinsic returns. Concretely, we formulate the approach as a hierarchical policy where a high-level controller selects among sets of policies trained on different types of intrinsic rewards and the low-level controllers learn the action policies of all agents under these specific rewards. We demonstrate the effectiveness of the proposed approach in a multi-agent gridworld domain with sparse rewards, and then show that our method scales up to more complex settings by evaluating on the VizDoom platform.""","""The authors present a method that utilizes intrinsic rewards to coordinate the exploration of agents in a multi-agent reinforcement learning setting. 
The reviewers agreed that the proposed approach was relatively novel and an interesting research direction for multiagent RL. However, the reviewers had substantial concerns about writing clarity, the significance of the contribution of the proposed method, and the thoroughness of evaluation (particularly the number of agents used and limited baselines). While the writing clarity and several technical points (including additional ablations) were addressed in the rebuttal, the reviewers still felt that the core contribution of the work was a bit too marginal. Thus, I recommend that this paper be rejected at this time.""" 763,"""Farkas layers: don't shift the data, fix the geometry""","['initialization', 'deep networks', 'residual networks', 'batch normalization', 'training', 'optimization']","""Successfully training deep neural networks often requires either batch normalization or appropriate weight initialization, both of which come with their own challenges. We propose an alternative, geometrically motivated method for training. Using elementary results from linear programming, we introduce Farkas layers: a method that ensures at least one neuron is active at a given layer. Focusing on residual networks with ReLU activation, we empirically demonstrate a significant improvement in training capacity in the absence of batch normalization or methods of initialization across a broad range of network sizes on benchmark datasets.""","""This paper proposes a new normalization scheme that attempts to prevent all units in a ReLU layer from being dead. The experimental results show that this normalization can effectively be used to train deep networks, though not as well as batch normalization. A significant issue is that the paper does not sufficiently establish that their explanation for the success of Farkas layers is valid. For example, do networks usually have layers with only inactive units in practice?""" 764,"""Insights on Visual Representations for Embodied Navigation Tasks""",[],"""Recent advances in deep reinforcement learning require a large amount of training data and generally result in representations that are often over-specialized to the target task. In this work, we study the underlying potential causes for this specialization by measuring the similarity between representations trained on related, but distinct tasks. We use the recently proposed projection weighted Canonical Correlation Analysis (PWCCA) to examine the task dependence of visual representations learned across different embodied navigation tasks. Surprisingly, we find that slight differences in task have no measurable effect on the visual representation for both SqueezeNet and ResNet architectures. We then empirically demonstrate that visual representations learned on one task can be effectively transferred to a different task. Interestingly, we show that if the tasks constrain the agent to spatially disjoint parts of the environment, differences in representation emerge for SqueezeNet models but less so for ResNets, suggesting that ResNets feature inductive biases which encourage more task-agnostic representations, even in the context of spatially separated tasks. We generalize our analysis to examine permutations of an environment and find, surprisingly, that permutations of an environment also do not influence the visual representation.
Our analysis provides insight into the overfitting of representations in RL and suggests how to design tasks that induce task-agnostic representations.""","""The general consensus amongst the reviewers is that this paper is not quite ready for publication, and needs to dig a little deeper in some areas. Some reviewers thought the contributions are unclear, or unsupported. I hope these reviews will help you as you work towards finding a home for this work.""" 765,"""Ordinary differential equations on graph networks""","['Graph Networks', 'Ordinary differential equation']","""Recently, various neural networks have been proposed for irregularly structured data such as graphs and manifolds. To our knowledge, all existing graph networks have discrete depth. Inspired by neural ordinary differential equations (NODE) for data in the Euclidean domain, we extend the idea of continuous-depth models to graph data, and propose the graph ordinary differential equation (GODE). The derivative of the hidden node states is parameterized with a graph neural network, and the output states are the solution to this ordinary differential equation. We demonstrate two end-to-end methods for efficient training of GODE: (1) indirect back-propagation with the adjoint method; (2) direct back-propagation through the ODE solver, which accurately computes the gradient. We demonstrate that direct backprop outperforms the adjoint method in experiments. We then introduce a family of bijective blocks, which enables pseudo-formula memory consumption. We demonstrate that GODE can be easily adapted to different existing graph neural networks and improve accuracy. We validate the performance of GODE in both semi-supervised node classification tasks and graph classification tasks. Our GODE model is continuous in time and achieves memory efficiency, accurate gradient estimation, and generalizability with different graph networks.""","""This paper introduces a few ideas to potentially improve the performance of neural ODEs on graph networks. However, the reviewers disagreed about the motivations for the proposed modifications. Specifically, it's not clear that neural ODEs provide a more advantageous parameterization in this setting than standard discrete networks. It's also not clear at all why the authors are discussing graph neural networks in particular, as all of their proposed changes would apply to all types of network. Another major problem I had with this paper was the assertion that running the original system backwards leads to large numerical error. This is a plausible claim, but it was never verified. It's extremely easy to check (e.g. by comparing the reconstructed initial state at t0 with the true original state at t0, or by comparing gradients computed by different methods). It's also not clear if the authors enforced the constraints on their dynamics function needed to ensure that a unique solution exists in the first place.""" 766,"""Leveraging Simple Model Predictions for Enhancing its Performance""","['simple models', 'interpretability', 'resource constraints']","""There has been recent interest in improving the performance of simple models for multiple reasons such as interpretability, robust learning from small data, deployment in memory-constrained settings as well as environmental considerations. In this paper, we propose a novel method SRatio that can utilize information from high-performing complex models (viz.
deep neural networks, boosted trees, random forests) to reweight a training dataset for a potentially low-performing simple model, such as a decision tree or a shallow network, enhancing its performance. Our method also leverages the per-sample hardness estimate of the simple model, which is not the case in prior works that primarily consider the complex model's confidences/predictions; our approach is thus conceptually novel. Moreover, we generalize and formalize the concept of attaching probes to intermediate layers of a neural network, which was one of the main ideas in previous work \citep{profweight}, to other commonly used classifiers and incorporate this into our method. The benefit of these contributions is witnessed in the experiments, where, on 6 UCI datasets and CIFAR-10, we outperform competitors in a majority (16 out of 27) of the cases and tie for best performance in the remaining cases. In fact, in a couple of cases, we even approach the complex model's performance. We also conduct further experiments to validate assertions and intuitively understand why our method works. Theoretically, we motivate our approach by showing that the weighted loss minimized by simple models using our weighting upper bounds the loss of the complex model.""","""The authors propose a sample reweighting scheme that helps to learn a simple model with similar performance as a more complex one. The original submission contained critical errors, and the paper seems to be lacking in terms of originality and novelty of the proposed method.""" 767,"""Synthetic vs Real: Deep Learning on Controlled Noise""","['controlled experiments', 'robust deep learning', 'corrupted label', 'real-world noisy data']","""Performing controlled experiments on noisy data is essential in thoroughly understanding deep learning across a spectrum of noise levels. Due to the lack of suitable datasets, previous research has only examined deep learning on controlled synthetic noise, and real-world noise has never been systematically studied in a controlled setting. To this end, this paper establishes a benchmark of real-world noisy labels at 10 controlled noise levels. As real-world noise possesses unique properties, to understand the difference, we conduct a large-scale study across a variety of noise levels and types, architectures, methods, and training settings. Our study shows that: (1) Deep Neural Networks (DNNs) generalize much better on real-world noise. (2) DNNs may not learn patterns first on real-world noisy data. (3) When networks are fine-tuned, ImageNet architectures generalize well on noisy data. (4) Real-world noise appears to be less harmful, yet it is more difficult for robust DNN methods to improve. (5) Robust learning methods that work well on synthetic noise may not work as well on real-world noise, and vice versa. We hope our benchmark, as well as our findings, will facilitate deep learning research on noisy data. ""","""Thanks for your detailed feedback to the reviewers, which helped us a lot to better understand your paper. However, given high competition at ICLR2020, we think the current manuscript is premature and still below the bar to be accepted to ICLR2020.
We hope that the reviewers' comments are useful for improving your manuscript for a potential future submission.""" 768,"""VL-BERT: Pre-training of Generic Visual-Linguistic Representations""","['Visual-Linguistic', 'Generic Representation', 'Pre-training']","""We introduce a new pre-trainable generic representation for visual-linguistic tasks, called Visual-Linguistic BERT (VL-BERT for short). VL-BERT adopts the simple yet powerful Transformer model as the backbone, and extends it to take both visual and linguistic embedded features as input. In it, each element of the input is either a word from the input sentence or a region-of-interest (RoI) from the input image. It is designed to fit most visual-linguistic downstream tasks. To better exploit the generic representation, we pre-train VL-BERT on the massive-scale Conceptual Captions dataset, together with a text-only corpus. Extensive empirical analysis demonstrates that the pre-training procedure can better align the visual-linguistic clues and benefit the downstream tasks, such as visual commonsense reasoning, visual question answering and referring expression comprehension. It is worth noting that VL-BERT achieved first place among single models on the leaderboard of the VCR benchmark.""","""The paper proposed a new pretrained language model which incorporates visual information into the embeddings. Experiments showed state-of-the-art results on three downstream tasks. The paper is well written and detailed comparisons with related work are given. Some concerns about clarity and novelty raised by the reviewers were answered in detail, and I think the paper is acceptable.""" 769,"""Task-Based Top-Down Modulation Network for Multi-Task-Learning Applications""","['deep learning', 'multi-task learning']","""A general problem that has received considerable recent attention is how to perform multiple tasks in the same network, maximizing both efficiency and prediction accuracy. A popular approach consists of a multi-branch architecture on top of a shared backbone, jointly trained on a weighted sum of losses. However, in many cases, the shared representation results in non-optimal performance, mainly due to interference between conflicting gradients of uncorrelated tasks. Recent approaches address this problem by a channel-wise modulation of the feature-maps along the shared backbone, with task-specific vectors, manually or dynamically tuned. Taking this approach a step further, we propose a novel architecture which modulates the recognition network channel-wise as well as spatially, with an efficient top-down image-dependent computation scheme. Our architecture uses no task-specific branches, nor task-specific modules. Instead, it uses a top-down modulation network that is shared between all of the tasks. We show the effectiveness of our scheme by achieving on-par or better results than alternative approaches on both correlated and uncorrelated sets of tasks. We also demonstrate our advantages in terms of model size, the addition of novel tasks and interpretability. Code will be released.""","""The paper is interested in multi-task learning. It introduces a new architecture which conditions the model in a particular manner: image features and task-ID features are fed to a top-down network which generates task-specific weights, which are then used in a bottom-up network to produce final labels. The paper is experimental, and the contribution rather incremental, considering existing work in the area.
The experimental section is currently not convincing enough, given marginal improvements over existing approaches; multiple runs as well as confidence intervals would help in that respect. """ 770,"""An Exponential Learning Rate Schedule for Deep Learning""","['batch normalization', 'weight decay', 'learning rate', 'deep learning theory']","""Intriguing empirical evidence exists that deep learning can work well with exotic schedules for varying the learning rate. This paper suggests that the phenomenon may be due to Batch Normalization or BN (Ioffe & Szegedy, 2015), which is ubiquitous and provides benefits in optimization and generalization across all standard architectures. The following new results are shown about BN with weight decay and momentum (in other words, the typical use case which was not considered in earlier theoretical analyses of stand-alone BN (Ioffe & Szegedy, 2015; Santurkar et al., 2018; Arora et al., 2018)): Training can be done using SGD with momentum and an exponentially increasing learning rate schedule, i.e., the learning rate increases by some (1 + α) factor in every epoch for some α > 0. (Precise statement in the paper.) To the best of our knowledge, this is the first time such a rate schedule has been successfully used, let alone for highly successful architectures. As expected, such training rapidly blows up network weights, but the net stays well-behaved due to normalization. Mathematical explanation of the success of the above rate schedule: a rigorous proof that it is equivalent to the standard setting of BN + SGD + Standard Rate Tuning + Weight Decay + Momentum. This equivalence holds for other normalization layers as well, e.g., Group Normalization (Wu & He, 2018), Layer Normalization (Ba et al., 2016), Instance Norm (Ulyanov et al., 2016), etc. A worked-out toy example illustrating the above linkage of hyperparameters. Using either weight decay or BN alone reaches the global minimum, but convergence fails when both are used.""","""After the revision, the reviewers agree on acceptance of this paper. Let's do it.""" 771,"""Adapting to Label Shift with Bias-Corrected Calibration""","['calibration', 'label shift', 'domain adaptation', 'temperature scaling', 'em', 'bbse']","""Label shift refers to the phenomenon where the marginal probability p(y) of observing a particular class changes between the training and test distributions, while the conditional probability p(x|y) stays fixed. This is relevant in settings such as medical diagnosis, where a classifier trained to predict disease based on observed symptoms may need to be adapted to a different distribution where the baseline frequency of the disease is higher. Given estimates of p(y|x) from a predictive model, one can apply domain adaptation procedures including Expectation Maximization (EM) and Black-Box Shift Estimation (BBSE) to efficiently correct for the difference in class proportions between the training and test distributions. Unfortunately, modern neural networks typically fail to produce well-calibrated estimates of p(y|x), reducing the effectiveness of these approaches. In recent years, Temperature Scaling has emerged as an efficient approach to combat miscalibration. However, the effectiveness of Temperature Scaling in the context of adaptation to label shift has not been explored. In this work, we study the impact of various calibration approaches on shift estimates produced by EM or BBSE.
In experiments with image classification and diabetic retinopathy detection, we find that calibration consistently tends to improve shift estimation. In particular, calibration approaches that include class-specific bias parameters are significantly better than approaches that lack class-specific bias parameters, suggesting that reducing systematic bias in the calibrated probabilities is especially important for domain adaptation.""","""This was a borderline paper, but in the end two of the reviewers remain unconvinced by this paper in its current form, and the last reviewer is not willing to argue for acceptance. The first reviewer's comments were taken seriously in making a decision on this paper. As such, it is my suggestion that the authors revise the paper and resubmit, addressing some of the first reviewer's comments, such as discussing the utility of the methodology, and improving the exposition such that less knowledgeable reviewers understand the material presented better. The comments that the first reviewer makes about lack of motivation for parts of the presented methodology are reflected in the other reviewers' comments, and I'm convinced that the authors can address this issue and make this a really awesome submission at a future conference. On a different note, I think the authors should be congratulated on making their results reproducible. That is definitely something the field needs to see more of.""" 772,"""SCELMo: Source Code Embeddings from Language Models""","['Transfer Learning', 'Pretraining', 'Program Repair']","""Continuous embeddings of tokens in computer programs have been used to support a variety of software development tools, including readability, code search, and program repair. Contextual embeddings are common in natural language processing but have not been previously applied in software engineering. We introduce a new set of deep contextualized word representations for computer programs based on language models. We train a set of embeddings using the ELMo (embeddings from language models) framework of Peters et al. (2018). We investigate whether these embeddings are effective when fine-tuned for the downstream task of bug detection. We show that even a low-dimensional embedding trained on a relatively small corpus of programs can improve a state-of-the-art machine learning system for bug detection.""","""This paper improves DeepBugs by borrowing the NLP method ELMo as new representations. The effectiveness of the embedding is investigated using the downstream task of bug detection. Two reviewers reject the paper for two main concerns: (1) the novelty of the paper is not strong enough for ICLR, as it mainly uses a standard context embedding technique from NLP; (2) the experimental results are not convincing enough, and a more comprehensive evaluation is needed. Overall, the novelty of this paper does not meet the standard of ICLR. """ 773,"""Model-based Saliency for the Detection of Adversarial Examples""","['Adversarial Examples', 'Defense', 'Model-based Saliency']","""Adversarial perturbations cause a shift in the salient features of an image, which may result in a misclassification. We demonstrate that gradient-based saliency approaches are unable to capture this shift, and develop a new defense which detects adversarial examples based on learnt saliency models instead.
We study two approaches: a CNN trained to distinguish between natural and adversarial images using the saliency masks produced by our learnt saliency model, and a CNN trained on the salient pixels themselves as its input. On MNIST, CIFAR-10 and ASSIRA, our defenses are able to detect various adversarial attacks, including strong attacks such as C&W and DeepFool, unlike gradient-based saliency methods and detectors that rely on the input image. The latter are unable to detect adversarial images when the L_2- and L_infinity-norms of the perturbations are too small. Lastly, we find that the salient-pixel-based detector improves on saliency-map-based detectors as it is more robust to white-box attacks.""","""This submission proposes a method for detecting adversarial attacks using saliency maps. Strengths: -The experimental results are encouraging. Weaknesses: -The novelty is minor. -Experimental validation of some claims (e.g. robustness to white-box attacks) is lacking. These weaknesses were not sufficiently addressed in the discussion phase. AC agrees with the majority recommendation to reject. """ 774,"""Causal Discovery with Reinforcement Learning""","['causal discovery', 'structure learning', 'reinforcement learning', 'directed acyclic graph']","""Discovering causal structure among a set of variables is a fundamental problem in many empirical sciences. Traditional score-based causal discovery methods rely on various local heuristics to search for a Directed Acyclic Graph (DAG) according to a predefined score function. While these methods, e.g., greedy equivalence search, may have attractive results with infinite samples and certain model assumptions, they are less satisfactory in practice due to finite data and possible violation of assumptions. Motivated by recent advances in neural combinatorial optimization, we propose to use Reinforcement Learning (RL) to search for the DAG with the best score. Our encoder-decoder model takes observable data as input and generates graph adjacency matrices that are used to compute rewards. The reward incorporates both the predefined score function and two penalty terms for enforcing acyclicity. In contrast with typical RL applications where the goal is to learn a policy, we use RL as a search strategy and our final output is the graph, among all graphs generated during training, that achieves the best reward. We conduct experiments on both synthetic and real datasets, and show that the proposed approach not only has an improved search ability but also allows for a flexible score function under the acyclicity constraint. ""","""This paper proposes an RL-based structure search method for causal discovery. The reviewers and AC think that the idea of applying reinforcement learning to causal structure discovery is novel and intriguing. While there were initially some concerns regarding presentation of the results, these have been taken care of during the discussion period. The reviewers agree that this is a very good submission, which merits acceptance to ICLR-2020.""" 775,"""A Mean-Field Theory for Kernel Alignment with Random Features in Generative Adverserial Networks""","['Kernel Learning', 'Generative Adversarial Networks', 'Mean Field Theory']","""We propose a novel supervised learning method to optimize the kernel in maximum mean discrepancy generative adversarial networks (MMD GANs).
Specifically, we characterize a distributionally robust optimization problem to compute a good distribution for the random feature model of Rahimi and Recht in order to approximate a good kernel function. Due to the fact that the distributional optimization is infinite-dimensional, we consider a Monte-Carlo sample average approximation (SAA) to obtain a more tractable finite-dimensional optimization problem. We subsequently leverage a particle stochastic gradient descent (SGD) method to solve the finite-dimensional optimization problems. Based on a mean-field analysis, we then prove that the empirical distribution of the interacting particle system at each iteration of the SGD follows the path of the gradient descent flow on the Wasserstein manifold. We also establish the non-asymptotic consistency of the finite-sample estimator. Our empirical evaluation on a synthetic dataset as well as the MNIST and CIFAR-10 benchmark datasets indicates that our proposed MMD GAN model with kernel learning indeed attains higher inception scores as well as better Fréchet inception distances and generates better images compared to the generative moment matching network (GMMN) and MMD GAN with untrained kernels.""","""This paper was assessed by three reviewers who scored it as 6/1/6. The main criticisms included somewhat weak experiments due to the manual tuning of bandwidth, the use of old (and perhaps mostly solved/not challenging) datasets such as MNIST and CIFAR-10, and a lack of ablation studies. The other issue voiced in the review is that the proposed method is very close to an MMD-GAN with a kernel plus random features. Taking into account all positives and negatives, we regret to conclude that this submission falls short of the quality required by ICLR2020, thus it cannot be accepted at this time. """ 776,"""Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation""",[],"""Few-shot classification aims to recognize novel categories with only a few labeled images in each class. Existing metric-based few-shot classification algorithms predict categories by comparing the feature embeddings of query images with those from a few labeled images (support examples) using a learned metric function. While promising performance has been demonstrated, these methods often fail to generalize to unseen domains due to the large discrepancy of the feature distributions across domains. In this work, we address the problem of few-shot classification under domain shifts for metric-based methods. Our core idea is to use feature-wise transformation layers for augmenting the image features using affine transforms to simulate various feature distributions under different domains in the training stage. To capture variations of the feature distributions under different domains, we further apply a learning-to-learn approach to search for the hyper-parameters of the feature-wise transformation layers. We conduct extensive experiments and ablation studies under the domain generalization setting using five few-shot classification datasets: mini-ImageNet, CUB, Cars, Places, and Plantae. Experimental results demonstrate that the proposed feature-wise transformation layer is applicable to various metric-based models, and provides consistent improvements on the few-shot classification performance under domain shift.""","""This submission addresses the problem of few-shot classification.
The proposed solution centers around metric-based models with a core argument that prior work may lead to learned embeddings which are overfit to the few labeled examples available during learning. Thus, when measuring cross-domain performance, the specialization of the original classifier to the initial domain will be apparent through degraded test-time (new-domain) performance. The authors therefore study the problem of domain generalization in the few-shot learning scenario. The main algorithmic contribution is the introduction of a feature-wise transformation layer. All reviewers suggest accepting this paper. Reviewer 3 says this problem statement is especially novel. Reviewers 1 and 2 had concerns over the lack of comparisons with recent state-of-the-art methods. The authors responded with some additional results during the rebuttal phase, which should be included in the final draft. Overall, the AC recommends acceptance, based on the positive comments and the fact that this paper addresses a sufficiently new problem statement. """ 777,"""Anomaly Detection Based on Unsupervised Disentangled Representation Learning in Combination with Manifold Learning""","['anomaly detection', 'disentangled representation learning', 'manifold learning']","""Identifying anomalous samples from highly complex and unstructured data is a crucial but challenging task in a variety of intelligent systems. In this paper, we present a novel deep anomaly detection framework named AnoDM (standing for Anomaly detection based on unsupervised Disentangled representation learning and Manifold learning). The disentanglement learning is implemented with beta-VAE for automatically discovering interpretable factorized latent representations in a completely unsupervised manner. The manifold learning is realized by t-SNE for projecting the latent representations to a 2D map. We define a new anomaly score function by combining beta-VAE's reconstruction error in the raw feature space and local density estimation in the t-SNE space. AnoDM was evaluated on both image and time-series data and achieved better results than models that use just one of the two measures and other deep learning methods.""","""The paper presents AnoDM (Anomaly detection based on unsupervised Disentangled representation learning and Manifold learning), which combines beta-VAE and t-SNE for anomaly detection. Experimental results on both image and time series data are shown to demonstrate the effectiveness of the proposed solution. The paper aims to attack a challenging problem. The proposed solution is reasonable. The authors did a good job of addressing some of the concerns raised in the reviews. However, two major concerns remain: (1) the novelty in the proposed model (a combination of two existing models) is not clear; (2) the experimental results are not fully convincing. While theoretical analysis is not a must for all models, it would be useful to conduct thorough experiments to fully understand how the model works, which is missing in the current version. Given the two reasons above, the paper did not attract enough enthusiasm from the reviewers during the discussion. We hope the reviews can help improve the paper for future publication.
""" 778,"""SAFE-DNN: A Deep Neural Network with Spike Assisted Feature Extraction for Noise Robust Inference""","['Noise robust', 'deep learning', 'DNN', 'image classification']","""We present a Deep Neural Network with Spike Assisted Feature Extraction (SAFE-DNN) to improve robustness of classification under stochastic perturbation of inputs. The proposed network augments a DNN with unsupervised learning of low-level features using spiking neuron network (SNN) with Spike-Time-Dependent-Plasticity (STDP). The complete network learns to ignore local perturbation while performing global feature detection and classification. The experimental results on CIFAR-10 and ImageNet subset demonstrate improved noise robustness for multiple DNN architectures without sacrificing accuracy on clean images.""",""" The paper proposes to improve noise robustness of the network learned features, by augmenting deep networks with Spike-Time-Dependent-Plasticity (STDP). The new network show improved noise robustness with better classification accuracy on Cifar10 and ImageNet subset when input data have noise. While this paper is well written, a number of concerns are raised by the reviewers. They include that the proposed method would not be favored from computer vision perspective, it is not convincing why spiking nets are more robust to random noises, and the method fails to address works in adversarial perturbations and adversarial training. Also, Reviewer #2 pointed out the low level of methodological novelty. The authors provided response to the questions, but did not change the rating of the reviewers. Given the various concerns raised, the ACs recommend reject.""" 779,"""Differentially Private Mixed-Type Data Generation For Unsupervised Learning""","['Differential privacy', 'synthetic data', 'private data generation', 'mixed-type', 'unsupervised learning', 'autoencoder', 'GAN', 'private deep learning']","""In this work we introduce the DP-auto-GAN framework for synthetic data generation, which combines the low dimensional representation of autoencoders with the flexibility of GANs. This framework can be used to take in raw sensitive data, and privately train a model for generating synthetic data that should satisfy the same statistical properties as the original data. This learned model can be used to generate arbitrary amounts of publicly available synthetic data, which can then be freely shared due to the post-processing guarantees of differential privacy. Our framework is applicable to unlabled \emph{mixed-type data}, that may include binary, categorical, and real-valued data. We implement this framework on both unlabeled binary data (MIMIC-III) and unlabeled mixed-type data (ADULT). We also introduce new metrics for evaluating the quality of synthetic mixed-type data, particularly in unsupervised settings.""","""This provides a new method, called DPAutoGAN, for the problem of differentially private synthetic generation. The method uses private auto-encoder to reduce the dimension of the data, and apply private GAN on the latent space. The reviewers think that there is not sufficient justification for why this is a good approach for synthetic generation. They also think that the presentation is not ready for publication.""" 780,"""Training Provably Robust Models by Polyhedral Envelope Regularization""","['deep learning', 'adversarial attack', 'robust certification']","""Training certifiable neural networks enables one to obtain models with robustness guarantees against adversarial attacks. 
In this work, we use a linear approximation to bound a model's output given an input adversarial budget. This allows us to bound the adversary-free region in the data neighborhood by a polyhedral envelope and yields finer-grained certified robustness than existing methods. We further exploit this certifier to introduce a framework called polyhedral envelope regularization (PER), which encourages larger polyhedral envelopes and thus improves the provable robustness of the models. We demonstrate the flexibility and effectiveness of our framework on standard benchmarks; it applies to networks with general activation functions and obtains comparable or better robustness guarantees than state-of-the-art methods, with very little cost in clean accuracy, i.e., without over-regularizing the model.""","""The authors develop a new technique for training neural networks to be provably robust to adversarial attacks. The technique relies on constructing a polyhedral envelope on the feasible set of activations and using this to derive a lower bound on the maximum certified radius. By training with this as a regularizer, the authors are able to train neural networks that achieve strong provable robustness to adversarial attacks. The paper makes a number of interesting contributions that the reviewers appreciated. However, two of the reviewers had some concerns with the significance of the contributions made: 1) The contributions of the paper are not clearly defined relative to prior work on bound propagation (Fast-Lin/KW/CROWN). In particular, the authors simply use the linear approximation derived in these prior works to obtain a bound on the radius to be certified. The authors claim faster convergence based on this, but this does not seem like a very significant contribution. 2) The improvements on the state of the art are marginal. These were discussed in detail during the rebuttal phase and the two reviewers with concerns about the paper decided to maintain their scores after reading the rebuttals, as the fundamental issues above were not resolved. Given these concerns, I believe this paper is borderline - it has some interesting contributions, but the overall novelty on the technical side and strength of empirical results is not very high.""" 781,"""Learning Latent Dynamics for Partially-Observed Chaotic Systems""","['Dynamical systems', 'Neural networks', 'Embedding', 'Partially observed systems', 'Forecasting', 'chaos']","""This paper addresses the data-driven identification of latent representations of partially-observed dynamical systems, i.e., dynamical systems some of whose components are never observed, with an emphasis on forecasting applications and long-term asymptotic patterns. Whereas state-of-the-art data-driven approaches rely on delay embeddings and linear decompositions of the underlying operators, we introduce a framework based on the data-driven identification of an augmented state-space model using a neural-network-based representation. For a given training dataset, it amounts to jointly reconstructing the latent states and learning an ODE (Ordinary Differential Equation) representation in this space. Through numerical experiments, we demonstrate the relevance of the proposed framework w.r.t. state-of-the-art approaches in terms of short-term forecasting errors and long-term behaviour.
We further discuss how the proposed framework relates to Koopman operator theory and Takens' embedding theorem.""","""This paper presents an ODE-based latent variable model, argues that extra unobserved dimensions are necessary in general, and that deterministic encodings are also insufficient in general. Instead, they optimize the latent representation during training. They include small-scale experiments showing that their framework beats alternatives. In my mind, the argument about fixed mappings being inadequate is a fair one, but it misses the fact that the variational inference framework already has several ways to address this shortcoming: 1) The recognition network outputs a distribution over latent values, which in itself does not address this issue, but provides regularization benefits. 2) The recognition network is just a strategy for speeding up inference. There's no reason you can't just do variational inference or MCMC for inference instead (which is similar to your approach), or do semi-amortized variational inference. Basically, this paper could have been somewhat convincing as a general exploration of approximate inference strategies in the latent ODE model. Instead, it provides a lot of philosophical arguments and a small amount of empirical evidence that a particular encoder is insufficient when doing MAP inference. It also seems like a problem that hyperparameters were copied from Chen et al. (2018), but are used in a MAP setting instead of a VAE setting. Finally, it's not clear how hyperparameters such as the size of the latent dimensions were chosen.""" 782,"""LEARNING EXECUTION THROUGH NEURAL CODE FUSION""","['code understanding', 'graph neural networks', 'learning program execution', 'execution traces', 'program performance']","""As the performance of computer systems stagnates due to the end of Moore's Law, there is a need for new models that can understand and optimize the execution of general-purpose code. While there is a growing body of work on using Graph Neural Networks (GNNs) to learn static representations of source code, these representations do not understand how code executes at runtime. In this work, we propose a new approach using GNNs to learn fused representations of general source code and its execution. Our approach defines a multi-task GNN over low-level representations of source code and program state (i.e., assembly code and dynamic memory states), converting complex source code constructs and data structures into a simpler, more uniform format. We show that this leads to improved performance over similar methods that do not use execution, and it opens the door to applying GNN models to new tasks that would not be feasible from static code alone. As an illustration of this, we apply the new model to challenging dynamic tasks (branch prediction and prefetching) from the SPEC CPU benchmark suite, outperforming the state-of-the-art by 26% and 45%, respectively. Moreover, we use the learned fused graph embeddings to demonstrate transfer learning with high performance on an indirectly related algorithm classification task.""","""This paper presents a method to learn representations of programs via code and execution. The paper presents an interesting method, and results on branch prediction and address pre-fetching are conclusive.
The main critiques associated with this paper seemed to be (1) potential lack of interest to the ICLR community, and (2) lack of comparison to other methods that similarly improve performance using other varieties of information. I am satisfied by the authors' responses to these concerns, and believe the paper warrants acceptance.""" 783,"""Under what circumstances do local codes emerge in feed-forward neural networks""","['localist coding', 'emergence', 'contructionist science', 'neural networks', 'feed-forward', 'learning representation', 'distributed coding', 'generalisation', 'memorisation', 'biological plausibility', 'deep-NNs', 'training conditions']","""Localist coding schemes are more easily interpretable than distributed schemes but are generally believed to be biologically implausible. Recent results have found highly selective units and object detectors in NNs that are indicative of local codes (LCs). Here we undertake a constructionist study on feed-forward NNs and find LCs emerging in response to invariant features, and this finding is robust until the invariant feature is perturbed by 40%. Decreasing the amount of input data, increasing the relative weight of the invariant features, and large values of dropout all increase the number of LCs. Longer training times increase the number of LCs and the turning point of the LC-epoch curve correlates well with the point at which NNs reach 90-100% on both test and training accuracy. Pseudo-deep networks (2 hidden layers) which have many LCs lose them when common aspects of deep-NN research are applied (large training data, ReLU activations, early stopping on training accuracy and softmax), suggesting that LCs may not be found in deep-NNs. Switching to more biologically feasible constraints (sigmoidal activation functions, longer training times, dropout, activation noise) increases the number of LCs. If LCs are not found in the feed-forward classification layers of modern deep-CNNs, these data suggest this could either be caused by a lack of (moderately) invariant features being passed to the fully connected layers or due to the choice of training conditions and architecture. Should the interpretability and resilience to noise of LCs be required, this work suggests how to tune a NN so they emerge. ""","""This paper studies when hidden units provide local codes by analyzing the hidden units of trained fully connected classification networks under various architectures and regularizers. The reviewers and the AC believe that the paper in its current form is not ready for acceptance to ICLR-2020. Further work and experiments are needed in order to identify an explanation for the emergence of local codes. This would significantly strengthen the paper.""" 784,"""DEEP GRAPH SPECTRAL EVOLUTION NETWORKS FOR GRAPH TOPOLOGICAL TRANSFORMATION""","['deep graph learning', 'graph transformation', 'brain network']","""Characterizing the underlying mechanism of graph topological evolution from a source graph to a target graph has attracted fast-increasing attention in the deep graph learning domain. However, there is a lack of expressive and efficient methods that can handle global and local evolution patterns between source and target graphs. On the other hand, graph topological evolution has been investigated in the graph signal processing domain historically, but it involves intensive labor to manually determine suitable prescribed spectral models, and it is prohibitively difficult to fit their potential combinations and compositions.
To address these challenges, this paper proposes the deep Graph Spectral Evolution Network (GSEN) for modeling the graph topology evolution problem by the composition of newly-developed generalized graph kernels. GSEN can effectively fit a wide range of existing graph kernels and their combinations and compositions with theoretical guarantees and experimental verification. GSEN has outstanding efficiency in terms of time complexity (pseudo-formula) and parameter complexity (pseudo-formula), where pseudo-formula is the number of nodes of the graph. Extensive experiments on multiple synthetic and real-world datasets have demonstrated outstanding performance.""","""The reviewers kept their scores after the author response period, pointing to continued concerns with methodology, needing increased exposition in parts, and not being able to verify theoretical results. As such, my recommendation is to improve the clarity around the methodological and theoretical contributions in a revision.""" 785,"""Self-supervised Training of Proposal-based Segmentation via Background Prediction""",[],"""While supervised object detection and segmentation methods achieve impressive accuracy, they generalize poorly to images whose appearance significantly differs from the data they have been trained on. To address this in scenarios where annotating data is prohibitively expensive, we introduce a self-supervised approach to detection and segmentation, able to work with monocular images captured with a moving camera. At the heart of our approach lie the observations that object segmentation and background reconstruction are linked tasks, and that, for structured scenes, background regions can be re-synthesized from their surroundings, whereas regions depicting the object cannot. We encode this intuition as a self-supervised loss function that we exploit to train a proposal-based segmentation network. To account for the discrete nature of the proposals, we develop a Monte Carlo-based training strategy that allows the algorithm to explore the large space of object proposals. We apply our method to human detection and segmentation in images that visually depart from those of standard benchmarks, achieving competitive results compared to the few existing self-supervised methods and approaching the accuracy of supervised ones that exploit large annotated datasets.""","""This work proposes a self-supervised segmentation method: building upon Crawford and Pineau (2019), this work adds a Monte-Carlo based training strategy to explore object proposals. Reviewers found the method interesting and clever, but shared concerns about the lack of a better comparison to Crawford and Pineau, as well as generally a lack of care in comparisons to others, which were not satisfactorily addressed by the authors' response. For these reasons, we recommend rejection.""" 786,"""Goal-Conditioned Video Prediction""","['predictive models', 'video prediction', 'latent variable models']","""Many processes can be concisely represented as a sequence of events leading from a starting state to an end state. Given raw ingredients and a finished cake, an experienced chef can surmise the recipe. Building upon this intuition, we propose a new class of visual generative models: goal-conditioned predictors (GCP). Prior work on video generation largely focuses on prediction models that only observe frames from the beginning of the video.
GCP instead treats videos as start-goal transformations, making video generation easier by conditioning on the more informative context provided by the first and final frames. Not only do existing forward prediction approaches synthesize better and longer videos when modified to become goal-conditioned, but GCP models can also utilize structures that are not linear in time to accomplish hierarchical prediction. To this end, we study both auto-regressive GCP models and novel tree-structured GCP models that generate frames recursively, splitting the video iteratively into finer and finer segments delineated by subgoals. In experiments across simulated and real datasets, our GCP methods generate high-quality sequences over long horizons. Tree-structured GCPs are also substantially easier to parallelize than auto-regressive GCPs, making training and inference very efficient, and allowing the model to train on sequences that are thousands of frames in length. Finally, we demonstrate the utility of GCP approaches for imitation learning in the setting without access to expert actions. Videos are on the supplementary website: pseudo-url""","""The paper addresses a video generation setting where both initial and goal state are provided as a basis for long-term prediction. The authors propose two types of models, sequential and hierarchical, and obtain interesting insights into the performance of these two models. Reviewers raised concerns about evaluation metrics, empirical comparisons, and the relationship of the proposed model to prior work. While many of the initial concerns have been addressed by the authors, reviewers remain concerned about two issues in particular. First, the proposed model is similar to previous approaches with sequential latent variable models, and it is unclear how such existing models would compare if applied in this setting. Second, there are remaining concerns on whether the model may learn degenerate solutions. I quote from the discussion here, as I am not sure this will be visible to authors [about Figure 12]: ""now the two examples with two samples they show have the same door in the middle frame which makes me doubt the method learn[s] anything meaningful in terms of the agent walking through the door but just go to the middle of the screen every time.""""" 787,"""Deceptive Opponent Modeling with Proactive Network Interdiction for Stochastic Goal Recognition Control""",[],"""Goal recognition based on observations of behaviors collected online has been used to model some potential applications. The newly formulated problem of goal recognition design aims at facilitating the online goal recognition process by performing offline redesign of the underlying environment with hard action removal. In this paper, we propose the stochastic goal recognition control (S-GRC) problem with two main stages: (1) deceptive opponent modeling based on maximum entropy regularized Markov decision processes (MDPs) and (2) goal recognition control under proactively static interdiction. For the purpose of evaluation, we propose to use the worst-case distinctiveness (wcd) as a measure of the non-distinctive path without revealing the true goals; the task of S-GRC is to interdict a set of actions that improve or reduce the wcd.
We empirically demonstrate that our proposed approach controls the goal recognition process based on the opponent's deceptive behavior.""","""This paper has been withdrawn by the authors.""" 788,"""Manifold Modeling in Embedded Space: A Perspective for Interpreting ""Deep Image Prior""""","['Deep image prior', 'Manifold model', 'Auto-encoder', 'Convolutional neural network', 'Delay-embedding', 'Hankelization', 'Tensor completion', 'Image inpainting', 'Supperresolution']","""Deep image prior (DIP), which utilizes a deep convolutional network (ConvNet) structure itself as an image prior, has attracted huge attention in the computer vision community. It empirically shows the effectiveness of the ConvNet structure for various image restoration applications. However, why the DIP works so well is still unknown, and why the convolution operation is essential for image reconstruction or enhancement is not very clear. In this study, we tackle these questions. The proposed approach divides the convolution into ``delay-embedding'' and ``transformation (\ie encoder-decoder)'' and proposes a simple, but essential, image/tensor modeling method which is closely related to dynamical systems and self-similarity. The proposed method, named manifold modeling in embedded space (MMES), is implemented using a novel denoising auto-encoder in combination with a multi-way delay-embedding transform. In spite of its simplicity, the image/tensor completion and super-resolution results of MMES are quite similar, even competitive, to those of DIP in our extensive experiments, and these results help in reinterpreting/characterizing the DIP from the perspective of a ``low-dimensional patch-manifold prior''.""","""The paper proposes a combination of a delay embedding as well as an autoencoder to perform representation learning. The proposed algorithm shows competitive performance with deep image prior, which is a convnet structure. The paper claims that the new approach is interpretable and provides explainable insight into image priors. The discussion period was used constructively, with the authors addressing reviewer comments, and the reviewers acknowledging this and updating their scores. Overall, the proposed architecture is good, but the structure and presentation of the paper are still not up to the standards of ICLR. The current presentation seems to over-claim interpretability, without sufficient theoretical or empirical evidence.""" 789,"""FSNet: Compression of Deep Convolutional Neural Networks by Filter Summary""","['Compression of Convolutional Neural Networks', 'Filter Summary CNNs', 'Weight Sharing']","""We present a novel method of compression of deep Convolutional Neural Networks (CNNs) by weight sharing through a new representation of convolutional filters. The proposed method reduces the number of parameters of each convolutional layer by learning a pseudo-formula D vector termed Filter Summary (FS). The convolutional filters are located in FS as overlapping pseudo-formula D segments, and nearby filters in FS share weights in their overlapping regions in a natural way. The resultant neural network based on such a weight-sharing scheme, termed Filter Summary CNNs or FSNet, has an FS in each convolution layer instead of a set of independent filters in the conventional convolution layer. FSNet has the same architecture as that of the baseline CNN to be compressed, and each convolution layer of FSNet has the same number of filters from FS as that of the baseline CNN in the forward process.
With a compelling computational acceleration ratio, the parameter space of FSNet is much smaller than that of the baseline CNN. In addition, FSNet is quantization friendly. FSNet with weight quantization leads to an even higher compression ratio without noticeable performance loss. We further propose Differentiable FSNet, where the way filters share weights is learned in a differentiable and end-to-end manner. Experiments demonstrate the effectiveness of FSNet in compression of CNNs for computer vision tasks including image classification and object detection, and the effectiveness of DFSNet is evidenced by the task of Neural Architecture Search.""","""The paper proposes to compress convolutional neural networks via weight sharing across filters of each convolution layer. A fast convolution algorithm is also designed for the convolution layer with this approach. Experimental results show (i) effectiveness in CNN compression, (ii) acceleration on the tasks of image classification, object detection and neural architecture search. While the authors addressed most of the reviewers' concerns, the weakness that remains is that no wall-clock runtime numbers (only FLOPS) are reported, so the efficiency of the approach in practice is uncertain. """ 790,"""Random Matrix Theory Proves that Deep Learning Representations of GAN-data Behave as Gaussian Mixtures""","['Random Matrix Theory', 'Deep Learning Representations', 'GANs']","""This paper shows that deep learning (DL) representations of data produced by generative adversarial nets (GANs) are random vectors which fall within the class of so-called concentrated random vectors. Further exploiting the fact that Gram matrices, of the type G = X'X with X = [x_1, . . ., x_n] ∈ R^{p×n} and x_i independent concentrated random vectors from a mixture model, behave asymptotically (as n, p → ∞) as if the x_i were drawn from a Gaussian mixture, suggests that DL representations of GAN-data can be fully described by their first two statistical moments for a wide range of standard classifiers. Our theoretical findings are validated by generating images with the BigGAN model and across different popular deep representation networks.""","""The paper theoretically shows that the data (embedded by representations learned by GANs) are essentially the same as a high dimensional Gaussian mixture. The result is based on a recent result from random matrix theory on the covariance matrix of data, which the authors extend to a theorem on the Gram matrix of the data. The authors also provide a small experiment comparing the spectrum and principal 2D subspace of BigGAN and Gaussian mixtures, demonstrating that their theorem applies in practice. Two of the reviews (with confident reviewers) were quite negative about the contributions of the paper, and the reviewers unfortunately did not participate in the discussion period. Overall, the paper seems solid, but the reviews indicate that improvements are needed in the structure and presentation of the theoretical results.
Given the large number of submissions at ICLR this year, the paper in its current form does not pass the quality threshold for acceptance.""" 791,"""Wyner VAE: A Variational Autoencoder with Succinct Common Representation Learning""","[""Wyner's common information"", 'information theoretic regularization', 'information bottleneck', 'representation learning', 'generative models', 'conditional generation', 'joint generation', 'style transfer', 'variational autoencoders']","""A new variational autoencoder (VAE) model is proposed that learns a succinct common representation of two correlated data variables for conditional and joint generation tasks. The proposed Wyner VAE model is based on two information theoretic problems---distributed simulation and channel synthesis---in which Wyner's common information arises as the fundamental limit of the succinctness of the common representation. The Wyner VAE decomposes a pair of correlated data variables into their common representation (e.g., a shared concept) and local representations that capture the remaining randomness (e.g., texture and style) in respective data variables by imposing the mutual information between the data variables and the common representation as a regularization term. The utility of the proposed approach is demonstrated through experiments for joint and conditional generation with and without style control using synthetic data and real images. Experimental results show that learning a succinct common representation achieves better generative performance and that the proposed model outperforms existing VAE variants and the variational information bottleneck method.""","""This paper adds a new model to the literature on representation learning from correlated variables with some common and some ""private"" dimensions, and takes a variational approach based on Wyner's common information. The literature in this area includes models where both of the correlated variables are assumed to be available as input at all times, as well as models where only one of the two may be available; the proposed approach falls into the first category. Pros: The reviewers generally agree, as do I, that the motivation is very interesting and the resulting model is reasonable and produces solid results. Cons: The model is somewhat complex and the paper is lacking a careful ablation study on the components. In addition, the results are not a clear ""win"" for the proposed model. The authors have started to do an ablation study, and I think eventually an interesting story is likely to come out of that. But at the moment the paper feels a bit too preliminary/inconclusive for publication.""" 792,"""On the Pareto Efficiency of Quantized CNN""","['convolutional neural networks quantization', 'model compression', 'efficient neural network']","""Weight Quantization for deep convolutional neural networks (CNNs) has shown promising results in compressing and accelerating CNN-powered applications such as semantic segmentation, gesture recognition, and scene understanding. Prior art has shown that different datasets, tasks, and network architectures admit different iso-accurate precision values, which increase the complexity of efficient quantized neural network implementations from both hardware and software perspectives. In this work, we show that when the number of channels is allowed to vary in an iso-model size scenario, lower precision values Pareto dominate higher precision ones (in accuracy vs. model size) for networks with standard convolutions. 
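The iso-model-size scenario in the sentence just above invites a quick back-of-the-envelope check: for standard convolutions, parameters scale roughly quadratically with a width multiplier, so halving the bit-width permits about √2 more channels at equal storage. The layer sizes below are illustrative, not from the paper:

```python
import math

# Params of a k x k standard conv scale as (w*c_in)*(w*c_out)*k^2 for width
# multiplier w, so storage is ~ w^2 * base_params * bits. Fixing storage to
# the 8-bit budget gives w = sqrt(8 / bits).
base_params = 64 * 128 * 3 * 3            # c_in * c_out * k * k at width 1.0
for bits in (8, 4, 2):
    w = math.sqrt(8 / bits)               # width multiplier at equal storage
    size_mb = (w * 64) * (w * 128) * 9 * bits / 8 / 1e6
    print(f"{bits}-bit: width x{w:.2f}, {size_mb:.3f} MB")
# All three rows occupy identical storage; the paper's finding is that the
# wider, lower-precision variants tend to Pareto-dominate on accuracy.
```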
Relying on comprehensive empirical analyses, we find that the Pareto optimal precision value of a convolution layer depends on the number of input channels per output filter and provide theoretical insights for it. To this end, we develop a simple algorithm to select the precision values for CNNs that outperforms corresponding 8-bit quantized networks by 0.9% and 2.2% in top-1 accuracy on ImageNet for ResNet50 and MobileNetV2, respectively. ""","""This paper studies the trade-off between the model size and quantization levels in quantized CNNs by varying different channel width multipliers. The paper is well motivated and draws interesting observations but can be improved in terms of evaluation. It is a borderline case, and rejection is recommended due to the high competition. """ 793,"""Geometric Analysis of Nonconvex Optimization Landscapes for Overcomplete Learning""","['dictionary learning', 'sparse representations', 'nonconvex optimization']","""Learning overcomplete representations finds many applications in machine learning and data analytics. In the past decade, despite the empirical success of heuristic methods, theoretical understandings and explanations of these algorithms are still far from satisfactory. In this work, we provide new theoretical insights for several important representation learning problems: learning (i) sparsely used overcomplete dictionaries and (ii) convolutional dictionaries. We formulate these problems as pseudo-formula -norm optimization problems over the sphere and study the geometric properties of their nonconvex optimization landscapes. For both problems, we show the nonconvex objective has benign (global) geometric structures, which enable the development of efficient optimization methods finding the target solutions. Finally, our theoretical results are justified by numerical simulations. ""","""This paper investigates the use of non-convex optimization for two dictionary learning problems, i.e., over-complete dictionary learning and convolutional dictionary learning. The paper provides theoretical results, supported by empirical experiments, showing that formulating the problem as an l4 optimization gives rise to a landscape whose saddle points are strict and, as such, can be escaped via negative curvature. As a result, descent methods can be used for learning with provable guarantees. All reviewers found the work extremely interesting, highlighting the importance of the results that constitute ""a solid improvement over the prior understandings on over-complete DL"" and ""extends our understanding of provable methods for dictionary learning"". This is an interesting submission on non-convex optimization, and as such of interest to the ML community at ICLR. I'm recommending this work for acceptance.""" 794,"""Latent Question Reformulation and Information Accumulation for Multi-Hop Machine Reading""","['question-answering', 'machine comprehension', 'deep learning']","""Multi-hop text-based question-answering is a current challenge in machine comprehension. This task requires sequentially integrating facts from multiple passages to answer complex natural language questions. In this paper, we propose a novel architecture, called the Latent Question Reformulation Network (LQR-net), a multi-hop and parallel attentive network designed for question-answering tasks that require reasoning capabilities. LQR-net is composed of an association of \textbf{reading modules} and \textbf{reformulation modules}.
The purpose of the reading module is to produce a question-aware representation of the document. From this document representation, the reformulation module extracts essential elements to calculate an updated representation of the question. This updated question is then passed to the following hop. We evaluate our architecture on the HotpotQA question-answering dataset, designed to assess multi-hop reasoning capabilities. Our model achieves competitive results on the public leaderboard and outperforms the best current \textit{published} models in terms of Exact Match (EM) and pseudo-formula score. Finally, we show that an analysis of the sequential reformulations can provide interpretable reasoning paths.""","""This paper proposes a novel approach, the Latent Question Reformulation Network (LQR-net), a multi-hop and parallel attentive network designed for question-answering tasks that require multi-hop reasoning capabilities. Experimental results on the HotPotQA dataset achieve competitive results and outperform the top system in terms of exact match and F1 scores. However, reviewers note that the experiments are limited to the unrealistic, closed-domain setting of this dataset and suggested experimenting with other data (such as Complex WebQuestions). Reviewers were also concerned about the scalability of the system due to the significant amount of computation. They also noted several previous studies were not included in the paper. Authors acknowledged and made changes according to these suggestions. They also included experiments only on the open-domain subset of HotPotQA in their rebuttal; unfortunately, the results are not as good as before. Hence, I suggest rejecting this paper.""" 795,"""MetaPix: Few-Shot Video Retargeting""","['Meta-learning', 'Few-shot Learning', 'Generative Adversarial Networks', 'Video Retargeting']","""We address the task of unsupervised retargeting of human actions from one video to another. We consider the challenging setting where only a few frames of the target are available. The core of our approach is a conditional generative model that can transcode input skeletal poses (automatically extracted with an off-the-shelf pose estimator) to output target frames. However, it is challenging to build a universal transcoder because humans can appear wildly different due to clothing and background scene geometry. Instead, we learn to adapt or personalize a universal generator to the particular human and background in the target. To do so, we make use of meta-learning to discover effective strategies for on-the-fly personalization. One significant benefit of meta-learning is that the personalized transcoder naturally enforces temporal coherence across its generated frames; all frames contain consistent clothing and background geometry of the target. We experiment on in-the-wild internet videos and images and show our approach improves over widely-used baselines for the task. ""","""Three reviewers have assessed this paper and they have scored it 6/6/6 after rebuttal. Nonetheless, the reviewers have raised a number of criticisms and the authors are encouraged to resolve them for the camera-ready submission.""" 796,"""Identity Crisis: Memorization and Generalization Under Extreme Overparameterization""","['Generalization', 'Memorization', 'Understanding', 'Inductive Bias']","""We study the interplay between memorization and generalization of overparameterized networks in the extreme case of a single training example and an identity-mapping task.
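The single-example identity task just described is simple to reproduce in miniature. A sketch of the setup, with an illustrative architecture and training budget rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

# Train a small CNN on ONE image to reconstruct itself, then probe whether it
# learned the identity function or memorized the constant function.
net = nn.Sequential(*[nn.Conv2d(1, 1, 3, padding=1) for _ in range(5)])
x_train = torch.rand(1, 1, 16, 16)              # the lone training example

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = ((net(x_train) - x_train) ** 2).mean()
    loss.backward()
    opt.step()

x_probe = torch.rand(1, 1, 16, 16)              # an unseen input
out = net(x_probe)
print(((out - x_probe) ** 2).mean().item())     # small -> identity (generalization)
print(((out - x_train) ** 2).mean().item())     # small -> constant (memorization)
```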
We examine fully-connected and convolutional networks (FCN and CNN), both linear and nonlinear, initialized randomly and then trained to minimize the reconstruction error. The trained networks stereotypically take one of two forms: the constant function (memorization) and the identity function (generalization). We formally characterize generalization in single-layer FCNs and CNNs. We show empirically that different architectures exhibit strikingly different inductive biases. For example, CNNs of up to 10 layers are able to generalize from a single example, whereas FCNs cannot learn the identity function reliably from 60k examples. Deeper CNNs often fail, but nonetheless do astonishing work to memorize the training output: because CNN biases are location invariant, the model must progressively grow an output pattern from the image boundaries via the coordination of many layers. Our work helps to quantify and visualize the sensitivity of inductive biases to architectural choices such as depth, kernel width, and number of channels. ""","""The paper studies the effect of various hyperparameters of neural networks, including architecture, width, depth, initialization, and optimizer, on generalization and memorization. The paper carries out a rather thorough empirical study of these phenomena. The authors also train a model to mimic the identity function, which allows rich visualization and easy evaluation. The reviewers were mostly positive but expressed concern about the general picture. One reviewer also has concerns about ""generality of the observed phenomenon in this paper"". The authors had a thorough response which addressed many of these concerns. My view of the paper is positive. I think the authors do a great job of carrying out careful experiments. As a result I think this is a good addition to ICLR and recommend acceptance.""" 797,"""Meta-Learning Runge-Kutta""",[],"""Initial value problems, i.e. differential equations with specific initial conditions, represent a classic problem within the field of ordinary differential equations (ODEs). While the simplest types of ODEs may have closed-form solutions, most interesting cases typically rely on iterative schemes for numerical integration such as the family of Runge-Kutta methods. They are, however, sensitive to the strategy by which the step size is adapted during integration, which has to be chosen by the experimenter. In this paper, we show how the design of a step size controller can be cast as a learning problem, allowing deep networks to learn to exploit structure in the initial value problem at hand in an automatic way. The key ingredients for the resulting Meta-Learning Runge-Kutta (MLRK) are the development of a good performance measure and the identification of suitable input features. Traditional approaches suggest the local error estimates as input to the controller. However, by studying the characteristics of the local error function we show that including the partial derivatives of the initial value problem is favorable. Our experiments demonstrate considerable benefits over traditional approaches. In particular, MLRK is able to mitigate sudden spikes in the local error function by a faster adaptation of the step size. More importantly, the additional information in the form of partial derivatives and function values leads to a substantial improvement in performance. The source code can be found at pseudo-url""","""Summary: This paper casts the problem of step-size tuning in the Runge-Kutta method as a meta learning problem.
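For reference, the "traditional approaches" the abstract mentions are accept/reject controllers driven by the embedded pair's local error estimate. A minimal sketch of the classical proportional controller that MLRK replaces with a learned policy (tolerance, order, and safety factor are conventional textbook values, not the paper's):

```python
# Classical step-size control for an embedded Runge-Kutta pair: rescale h by
# (tol / err)^(1/(order + 1)) with a safety factor, clipping large jumps.
def classic_controller(h, err, tol=1e-6, order=4, safety=0.9,
                       min_fac=0.2, max_fac=5.0):
    fac = safety * (tol / max(err, 1e-16)) ** (1.0 / (order + 1))
    fac = min(max(fac, min_fac), max_fac)   # damp sudden step-size changes
    accepted = err <= tol                   # reject the step if error too large
    return accepted, h * fac                # new step size either way

accepted, h_new = classic_controller(h=0.1, err=3e-6)
print(accepted, h_new)   # step rejected; retried with a smaller h
```

MLRK's move is to replace this fixed formula with a learned mapping whose inputs include not just the error estimate but also function values and partial derivatives of the initial value problem.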
The paper gives a review of the existing approaches to step size control in the RK method. Drawing on these approaches, the paper reasons about appropriate features and loss functions to use in the meta learning update. The paper shows that the proposed approach is able to generalize well enough to obtain better performance than a baseline. The paper lacked advocates for its merits, and it needs better comparisons with other baselines before it is ready to be published.""" 798,"""Ellipsoidal Trust Region Methods for Neural Network Training""","['non-convex', 'optimization', 'neural networks', 'trust-region']","""We investigate the use of ellipsoidal trust region constraints for second-order optimization of neural networks. This approach can be seen as a higher-order counterpart of adaptive gradient methods, which we here show to be interpretable as first-order trust region methods with ellipsoidal constraints. In particular, we show that the preconditioning matrix used in RMSProp and Adam satisfies the necessary conditions for provable convergence of second-order trust region methods with standard worst-case complexities. Furthermore, we run experiments across different neural architectures and datasets to find that the ellipsoidal constraints consistently outperform their spherical counterpart both in terms of number of backpropagations and asymptotic loss value. Finally, we find comparable performance to state-of-the-art first-order methods in terms of backpropagations, but further advances in hardware are needed to render Newton methods competitive in terms of time.""","""This paper interprets adaptive gradient methods as trust region methods, and then extends the trust regions to axis-aligned ellipsoids determined by the approximate curvature. It's fairly natural to try to extend the algorithms in this way, but the paper doesn't show much evidence that this is actually effective. (The experiments show an improvement only in terms of iterations, which doesn't account for the computational cost or the increased batch size; there doesn't seem to be an improvement in terms of epochs.) I suspect the second-order version might also lose some of the online convex optimization guarantees of the original methods, raising the question of whether the trust-region interpretation really captures the benefits of the original methods. The reviewers recommend rejection (even after discussion) because they are unsatisfied with the experiments; I agree with their assessment. """ 799,"""Learning Hierarchical Discrete Linguistic Units from Visually-Grounded Speech""","['visually-grounded speech', 'self-supervised learning', 'discrete representation learning', 'vision and language', 'vision and speech', 'hierarchical representation learning']","""In this paper, we present a method for learning discrete linguistic units by incorporating vector quantization layers into neural models of visually grounded speech. We show that our method is capable of capturing both word-level and sub-word units, depending on how it is configured. What differentiates this paper from prior work on speech unit learning is the choice of training objective. Rather than using a reconstruction-based loss, we use a discriminative, multimodal grounding objective which forces the learned units to be useful for semantic image retrieval.
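The vector quantization layers named at the top of this abstract follow a standard recipe: snap each frame's embedding to its nearest codebook entry and pass gradients straight through. A minimal sketch (codebook size and dimensions are illustrative choices, not the paper's):

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    # Nearest-neighbor quantization with straight-through gradients, the kind
    # of module inserted between layers of the speech model.
    def __init__(self, num_codes=256, dim=64):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codes, dim))

    def forward(self, z):                        # z: (batch, time, dim)
        d = torch.cdist(z, self.codebook.unsqueeze(0).expand(z.size(0), -1, -1))
        idx = d.argmin(-1)                       # nearest code per frame
        q = self.codebook[idx]
        z_q = z + (q - z).detach()               # straight-through estimator
        commit = ((q.detach() - z) ** 2).mean()  # commitment loss term
        return z_q, idx, commit

vq = VectorQuantizer()
z_q, idx, commit = vq(torch.randn(2, 50, 64))    # idx holds the discrete units
```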
We evaluate the sub-word units on the ZeroSpeech 2019 challenge, achieving a 27.3% reduction in ABX error rate over the top-performing submission, while keeping the bitrate approximately the same. We also present experiments demonstrating the noise robustness of these units. Finally, we show that a model with multiple quantizers can simultaneously learn phone-like detectors at a lower layer and word-like detectors at a higher layer. We show that these detectors are highly accurate, discovering 279 words with an F1 score of greater than 0.5.""","""The paper is extremely well-written with a clear motivation (Section 1). The approach is novel. But I think the paper's biggest strength is in its very thorough experimental investigation. Their approach is compared to other very recent speech discretization methods on the same data using the same (ABX) evaluation metric. But the work goes further in that it systematically attempts to actually understand what types of structures are captured in the intermediate discrete layers, and it is able to answer this question convincingly. Finally, very good results on standard benchmarks are achieved. To authors: Please do include the additional discussions and results in the final paper. """ 800,"""Hierarchical Disentangle Network for Object Representation Learning""",[],"""An object can be described as the combination of primary visual attributes. Disentangling such underlying primitives is a long-standing objective of representation learning. It is observed that categories have natural multi-granularity or hierarchical characteristics, i.e. any two objects can share some common primitives in a particular category granularity while they may possess their unique ones in another granularity. However, previous works usually operate in a flat manner (i.e. in a particular granularity) to disentangle the representations of objects. Though they may obtain the primitives that constitute objects as the categories in that granularity, their results are neither efficient nor complete. In this paper, we propose the hierarchical disentangle network (HDN) to exploit the rich hierarchical characteristics among categories to divide the disentangling process in a coarse-to-fine manner, such that each level only focuses on learning the specific representations in its granularity, and finally the common and unique representations in all granularities jointly constitute the raw object. Specifically, HDN is designed based on an encoder-decoder architecture. To simultaneously ensure the disentanglement and interpretability of the encoded representations, a novel hierarchical generative adversarial network (GAN) is elaborately designed. Quantitative and qualitative evaluations on four object datasets validate the effectiveness of our method.""","""The authors propose a new method for learning hierarchically disentangled representations. One reviewer is positive, one is between weak accept and borderline, and two reviewers recommend rejection; they kept their assessments after the rebuttal and a discussion. The main criticism is the lack of disentanglement metrics and comparisons. After reading the paper and the discussion, the AC tends to agree with the negative reviewers. Authors are encouraged to strengthen their work and resubmit to a future venue.""" 801,"""ConQUR: Mitigating Delusional Bias in Deep Q-Learning""","['reinforcement learning', 'q-learning', 'deep reinforcement learning', 'Atari']","""Delusional bias is a fundamental source of error in approximate Q-learning.
To date, the only techniques that explicitly address delusion require comprehensive search using tabular value estimates. In this paper, we develop efficient methods to mitigate delusional bias by training Q-approximators with labels that are ""consistent"" with the underlying greedy policy class. We introduce a simple penalization scheme that encourages Q-labels used across training batches to remain (jointly) consistent with the expressible policy class. We also propose a search framework that allows multiple Q-approximators to be generated and tracked, thus mitigating the effect of premature (implicit) policy commitments. Experimental results demonstrate that these methods can improve the performance of Q-learning in a variety of Atari games, sometimes dramatically.""","""While there was some support for the ideas presented, the majority of reviewers felt that this submission is not ready for publication at ICLR in its present form. Concerns raised included the need for better motivation of the practicality of the approach, versus its computational cost. The need for improved evaluations was also raised.""" 802,"""Defending Against Adversarial Examples by Regularized Deep Embedding""",[],"""Recent studies have demonstrated the vulnerability of deep convolutional neural networks against adversarial examples. Inspired by the observations that the intrinsic dimension of image data is much smaller than its pixel space dimension and that the vulnerability of neural networks grows with the input dimension, we propose to embed high-dimensional input images into a low-dimensional space to perform classification. However, arbitrarily projecting the input images to a low-dimensional space without regularization will not improve the robustness of deep neural networks. We propose a new framework, Embedding Regularized Classifier (ER-Classifier), which improves the adversarial robustness of the classifier through embedding regularization. Experimental results on several benchmark datasets show that our proposed framework achieves state-of-the-art performance against strong adversarial attack methods.""","""The paper suggests a new way to defend against adversarial attacks on neural networks. Two of the reviewers were negative, one of them (the most experienced in the subarea) strongly negative. One reviewer is weakly positive. The main two concerns of the reviewers are insufficient comparisons with SOTA and lack of clarity. The authors' response, though detailed, has not convinced the reviewers and has not alleviated their concerns. """ 803,"""Prediction, Consistency, Curvature: Representation Learning for Locally-Linear Control""","['Embed-to-Control', 'Representation Learning', 'Stochastic Optimal Control', 'VAE', 'iLQR']","""Many real-world sequential decision-making problems can be formulated as optimal control with high-dimensional observations and unknown dynamics. A promising approach is to embed the high-dimensional observations into a lower-dimensional latent representation space, estimate the latent dynamics model, then utilize this model for control in the latent space. An important open question is how to learn a representation that is amenable to existing control algorithms. In this paper, we focus on learning representations for locally-linear control algorithms, such as iterative LQR (iLQR).
By formulating and analyzing the representation learning problem from an optimal control perspective, we establish three underlying principles that the learned representation should satisfy: 1) accurate prediction in the observation space, 2) consistency between latent and observation space dynamics, and 3) low curvature in the latent space transitions. These principles naturally correspond to a loss function that consists of three terms: prediction, consistency, and curvature (PCC). Crucially, to make PCC tractable, we derive an amortized variational bound for the PCC loss function. Extensive experiments on benchmark domains demonstrate that the new variational-PCC learning algorithm benefits from significantly more stable and reproducible training, and leads to superior control performance. Further ablation studies give support to the importance of all three PCC components for learning a good latent space for control.""","""This paper studies optimal control with low-dimensional representation. The paper presents interesting progress, although I urge the authors to address all issues raised by reviewers in their revisions.""" 804,"""Semantic Hierarchy Emerges in the Deep Generative Representations for Scene Synthesis""","['Feature visualization', 'feature interpretation', 'generative models']","""Despite the success of Generative Adversarial Networks (GANs) in image synthesis, there is insufficient understanding of what networks have learned inside the deep generative representations and how photo-realistic images can be composed from random noise. In this work, we show that a highly-structured semantic hierarchy emerges from the generative representations as the variation factors for synthesizing scenes. By probing the layer-wise representations with a broad set of visual concepts at different abstraction levels, we are able to quantify the causality between the activations and the semantics occurring in the output image. Such a quantification identifies the human-understandable variation factors learned by GANs to compose scenes. The qualitative and quantitative results suggest that the generative representations learned by GANs are specialized to synthesize different hierarchical semantics: the early layers tend to determine the spatial layout and configuration, the middle layers control the categorical objects, and the later layers finally render the scene attributes as well as the color scheme. Identifying such a set of manipulable latent semantics facilitates semantic scene manipulation.""","""The paper proposes to study what information is encoded in different layers of StyleGAN. The authors do so by training classifiers for different layers of latent codes and investigating whether changing the latent code changes the generated output in the expected fashion. The paper received borderline reviews with two weak accepts and one weak reject. Initially, the reviewers were more negative (with one reject, one weak reject, and one weak accept). After the rebuttal, the authors addressed most of the reviewer questions/concerns. Overall, the reviewers thought the results were interesting and appreciated the care the authors took in their investigations. The main concern of the reviewers is that the analysis is limited to only StyleGAN. It would be more interesting and informative if the authors applied their methodology to different GANs. Then they can analyze whether the methodology and findings hold for other types of GANs as well.
R1 notes that given the wide interest in StyleGAN-like models, the work may be of interest to the community despite the limited investigation. The reviewers also point out the writing can be improved to be more precise. The AC agrees that the paper is mostly well written and well presented. However, there are limitations in what is achieved in the paper and it would be of limited interest to the community. The AC recommends that the authors consider improving their work, potentially broadening their investigation to other GAN architectures, and resubmit to an appropriate venue.""" 805,"""Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks""","['Trustworthy Machine Learning', 'Adversarial Robustness', 'Inference Principle', 'Mixup']","""It has been widely recognized that adversarial examples can be easily crafted to fool deep networks, a vulnerability that mainly stems from the locally non-linear behavior near input examples. Applying mixup in training provides an effective mechanism to improve generalization performance and model robustness against adversarial perturbations, which introduces globally linear behavior in-between training examples. However, in previous work, the mixup-trained models only passively defend adversarial attacks in inference by directly classifying the inputs, where the induced global linearity is not well exploited. Namely, given the locality of the adversarial perturbations, it is more efficient to actively break the locality via the globality of the model predictions. Inspired by simple geometric intuition, we develop an inference principle, named mixup inference (MI), for mixup-trained models. MI mixes up the input with other random clean samples, which can shrink and transfer the equivalent perturbation if the input is adversarial. Our experiments on CIFAR-10 and CIFAR-100 demonstrate that MI can further improve the adversarial robustness for the models trained by mixup and its variants.""","""This paper proposed a mixup inference (MI) method, for mixup-trained models, to better defend against adversarial attacks. The idea is novel and is shown to be effective on CIFAR-10 and CIFAR-100. All reviewers and the AC agree to accept the paper.""" 806,"""GRAPH ANALYSIS AND GRAPH POOLING IN THE SPATIAL DOMAIN""","['Graph Neural Network', 'Graph Classification', 'Graph Pooling', 'Graph Embedding']","""The spatial convolution layer, which is widely used in Graph Neural Networks (GNNs), aggregates the feature vector of each node with the feature vectors of its neighboring nodes. The GNN is not aware of the locations of the nodes in the global structure of the graph, and when the local structures corresponding to different nodes are similar to each other, the convolution layer maps all those nodes to similar or identical feature vectors in the continuous feature space. Therefore, the GNN cannot distinguish two graphs if their difference is not in their local structures. In addition, when the nodes are not labeled/attributed, the convolution layers can fail to distinguish even different local structures. In this paper, we propose an effective solution to address this problem of the GNNs. The proposed approach leverages a spatial representation of the graph which makes the neural network aware of the differences between the nodes and also their locations in the graph. The spatial representation, which is equivalent to a point-cloud representation of the graph, is obtained by a graph embedding method.
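One standard way to obtain such a point-cloud representation is to use the leading non-trivial eigenvectors of the graph Laplacian as node coordinates; the paper's specific embedding method may differ, so the sketch below is only illustrative:

```python
import numpy as np

def spectral_coordinates(adj, k=2):
    # Coordinates from the first non-trivial eigenvectors of the unnormalized
    # graph Laplacian (Laplacian eigenmaps); one common graph embedding choice.
    lap = np.diag(adj.sum(1)) - adj
    vals, vecs = np.linalg.eigh(lap)
    return vecs[:, 1:k + 1]              # skip the constant eigenvector

adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)   # a 4-node path graph
feats = np.ones((4, 3))                       # unlabeled nodes: constant features
# Nodes with identical local structure now receive distinct inputs:
x = np.concatenate([feats, spectral_coordinates(adj)], axis=1)
print(x.shape)                                # (4, 5)
```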
Using the proposed approach, the local feature extractor of the GNN distinguishes similar local structures in different locations of the graph, and the GNN infers the topological structure of the graph from the spatial distribution of the locally extracted feature vectors. Moreover, the spatial representation is utilized to simplify the graph down-sampling problem. A new graph pooling method is proposed, and it is shown that the proposed pooling method achieves competitive or better results in comparison with the state-of-the-art methods. ""","""The authors identify a limitation of aggregating GNNs, which is that global structure can be mostly lost. They propose a method which combines a graph embedding with the spatial convolution GNN and show that the resulting GNN can better distinguish between similar local structures. The reviewers were mixed in their scores. The proposed approach is clearly motivated and justified and may be relevant for some graphnet researchers, but the approach is only applicable in some circumstances - in other cases it may be desirable to ignore global structure. This, plus the high computational complexity of the proposed approach, means that the significance is weaker. Overall the reviewers felt that the contribution was not significant enough and that the results were not statistically convincing. Decision is to reject.""" 807,"""SQIL: Imitation Learning via Reinforcement Learning with Sparse Rewards""","['Imitation Learning', 'Reinforcement Learning']","""Learning to imitate expert behavior from demonstrations can be challenging, especially in environments with high-dimensional, continuous observations and unknown dynamics. Supervised learning methods based on behavioral cloning (BC) suffer from distribution shift: because the agent greedily imitates demonstrated actions, it can drift away from demonstrated states due to error accumulation. Recent methods based on reinforcement learning (RL), such as inverse RL and generative adversarial imitation learning (GAIL), overcome this issue by training an RL agent to match the demonstrations over a long horizon. Since the true reward function for the task is unknown, these methods learn a reward function from the demonstrations, often using complex and brittle approximation techniques that involve adversarial training. We propose a simple alternative that still uses RL, but does not require learning a reward function. The key idea is to provide the agent with an incentive to match the demonstrations over a long horizon, by encouraging it to return to demonstrated states upon encountering new, out-of-distribution states. We accomplish this by giving the agent a constant reward of r=+1 for matching the demonstrated action in a demonstrated state, and a constant reward of r=0 for all other behavior. Our method, which we call soft Q imitation learning (SQIL), can be implemented with a handful of minor modifications to any standard Q-learning or off-policy actor-critic algorithm. Theoretically, we show that SQIL can be interpreted as a regularized variant of BC that uses a sparsity prior to encourage long-horizon imitation. Empirically, we show that SQIL outperforms BC and achieves competitive results compared to GAIL, on a variety of image-based and low-dimensional tasks in Box2D, Atari, and MuJoCo.
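The constant-reward relabeling at SQIL's core, as described above, amounts to a one-line change to the replay buffer. A toy sketch (buffer layout and data are illustrative; the commented-out update hook stands in for any off-the-shelf soft Q-learning step, not the paper's code):

```python
import random

# Demonstration transitions carry r = +1; the agent's own transitions r = 0.
expert_transitions = [((0,), 1, (1,)), ((1,), 1, (2,))]   # (s, a, s') triples
demo_buffer = [(s, a, 1.0, s2) for (s, a, s2) in expert_transitions]
agent_buffer = []

def store_agent_step(s, a, s2):
    agent_buffer.append((s, a, 0.0, s2))    # the true env reward is discarded

def sample_batch(n=2):
    half = n // 2                           # balanced demo/agent sampling
    return random.sample(demo_buffer, half) + random.sample(agent_buffer, half)

store_agent_step((0,), 0, (0,))
batch = sample_batch()
# for each training step: soft_q_update(q_network, batch)  # any soft Q-learner
print(batch)
```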
This paper is a proof of concept that illustrates how a simple imitation method based on RL with constant rewards can be as effective as more complex methods that use learned rewards.""","""The authors present a simple alternative to adversarial imitation learning methods like GAIL that is potentially less brittle, and can skip learning a reward function, instead learning an imitation policy directly. Their method has a close relationship with behavioral cloning, but overcomes some of the disadvantages of BC by encouraging the agent via reward to return to demonstration states if it goes out of distribution. The reviewers agree that overcoming the difficulties of both BC and adversarial imitation is an important contribution. Additionally, the authors reasonably addressed the majority of the minor concerns that the reviewers had. Therefore, I recommend that this paper be accepted.""" 808,"""Improved Training Speed, Accuracy, and Data Utilization via Loss Function Optimization""","['metalearning', 'evolutionary computation', 'loss functions', 'optimization', 'genetic programming']","""As the complexity of neural network models has grown, it has become increasingly important to optimize their design automatically through metalearning. Methods for discovering hyperparameters, topologies, and learning rate schedules have led to significant increases in performance. This paper shows that loss functions can be optimized with metalearning as well, resulting in similar improvements. The method, Genetic Loss-function Optimization (GLO), discovers loss functions de novo, and optimizes them for a target task. Leveraging techniques from genetic programming, GLO builds loss functions hierarchically from a set of operators and leaf nodes. These functions are repeatedly recombined and mutated to find an optimal structure, and then a covariance-matrix adaptation evolutionary strategy (CMA-ES) is used to find optimal coefficients. Networks trained with GLO loss functions are found to outperform the standard cross-entropy loss on standard image classification tasks. Training with these new loss functions requires fewer steps, results in lower test error, and allows for smaller datasets to be used. Loss function optimization thus provides a new dimension of metalearning, and constitutes an important step towards AutoML.""","""This paper proposes a GA-based method for optimizing the loss function a model is trained on to produce better models (in terms of final performance). The general consensus from the reviewers is that the paper, while interesting, dedicates too much of its content to analyzing one such discovered loss (the Baikal loss), and that the experimental setting (MNIST and Cifar10) is too basic to be conclusive. It seems this paper can be so significantly improved with some further and larger scale experiments that it would be wrong to prematurely recommend acceptance. My recommendation is that the authors consider the reviewer feedback, run the suggested further experiments, and are hopefully in the position to submit a significantly stronger version of this paper to a future conference.""" 809,"""Distribution-Guided Local Explanation for Black-Box Classifiers""","['explanation', 'cnn', 'saliency map']","""Existing local explanation methods provide an explanation for each decision of black-box classifiers, in the form of relevance scores of features according to their contributions.
To obtain satisfactory explainability, many methods introduce ad hoc constraints into the classification loss to regularize these relevance scores. However, the large information gap between the classification loss and these constraints increases the difficulty of tuning hyper-parameters. To bridge this gap, in this paper we present a simple but effective mask predictor. Specifically, we model the above constraints with a distribution controller, and integrate it with a neural network to directly guide the distribution of relevance scores. The benefit of this strategy is to facilitate the setting of involved hyper-parameters, and to enable discriminative scores over supporting features. The experimental results demonstrate that our method outperforms others in terms of faithfulness and explainability. Meanwhile, it also provides effective saliency maps for explaining each decision. ""","""This paper proposed a method to estimate the instance-wise saliency map for image classification, for the purpose of improving the faithfulness of the explainer. Based on the U-net, two modifications are proposed in this work. While reviewer #3 is overall positive about this work, both Reviewer #1 and #2 rated weak reject and raised a number of concerns. The major concerns include that the modifications either already exist or suffer from potential issues. Reviewer #2 considered that the contributions are not enough for ICLR, and the performance improvement is marginal. The authors provided detailed responses to the reviewers' concerns, which helped to make the paper stronger, but did not change the rating. Given the concerns raised by the reviewers, the ACs agree that this paper cannot be accepted in its current state.""" 810,"""Encoder-decoder Network as Loss Function for Summarization""","['encoder-decoder', 'summarization', 'loss functions']","""We present a new approach to defining a sequence loss function to train a summarizer by using a secondary encoder-decoder as a loss function, alleviating a shortcoming of word level training for sequence outputs. The technique is based on the intuition that if a summary is a good one, it should contain the most essential information from the original article, and therefore should itself be a good input sequence, in lieu of the original, from which a summary can be generated. We present experimental results where we apply this additional loss function to a general abstractive summarizer on a news summarization dataset. The result is an improvement in the ROUGE metric and an especially large improvement in human evaluations, suggesting enhanced performance that is competitive with specialized state-of-the-art models.""","""This paper presents an encoder-decoder based architecture to generate summaries. The real contribution of the paper is to use a recoder which takes the output from an existing encoder-decoder network and tries to generate the reference summary again. The output here is basically the softmax layer produced by the first encoder-decoder network, which then goes through a feed-forward layer before being fed as embeddings into the recoder. So, since there is no discretization, the whole model can be trained jointly. (The original loss of the first encoder-decoder model is used as well.) I agree with the reviewers here that this whole model can in fact be viewed as a large encoder-decoder model; it's not really clear where the improvements come from.
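The differentiable coupling described above is the key trick: the summarizer's softmax output is converted into soft embeddings by a matrix product, so the recoder's loss backpropagates into the summarizer without any discrete sampling. A minimal sketch with illustrative dimensions and a GRU standing in for the secondary encoder-decoder:

```python
import torch
import torch.nn as nn

V, d = 1000, 64                                   # vocab size, embedding dim
summary_logits = torch.randn(1, 20, V, requires_grad=True)  # summarizer output
probs = summary_logits.softmax(-1)

embed = nn.Embedding(V, d)
soft_embeddings = probs @ embed.weight            # (1, 20, d): expected embedding

recoder = nn.GRU(d, d, batch_first=True)          # stand-in encoder-decoder
hidden, _ = recoder(soft_embeddings)
recoder_loss = hidden.pow(2).mean()               # placeholder for the NLL of the
recoder_loss.backward()                           # reference summary under the recoder
print(summary_logits.grad is not None)            # True: gradients reach the summarizer
```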
Can you just increase the number of parameters of the original encoder-decoder model and see if it performs as well as the encoder-decoder + recoder? The paper also does not achieve SOTA on the task, as there are other RL based papers which have been shown to perform better, so the choice of the recoder model is also not empirically justified. I recommend rejection of the paper in its current form.""" 811,"""Effects of Linguistic Labels on Learned Visual Representations in Convolutional Neural Networks: Labels matter!""","['category learning', 'visual representation', 'linguistic labels', 'human behavior prediction']","""We investigated the changes in visual representations learnt by CNNs when using different linguistic labels (e.g., trained with basic-level labels only, superordinate-level only, or both at the same time) and how they compare to human behavior when asked to select which of three images is most different. We compared CNNs with identical architecture and input, differing only in what labels were used to supervise the training. The results showed that in the absence of labels, the models learn very little of the categorical structure that is often assumed to be in the input. Models trained with superordinate labels (vehicle, tool, etc.) are most helpful in allowing the models to match human categorization, implying that human representations used in odd-one-out tasks are highly modulated by semantic information not obviously present in the visual input.""","""This paper explores training CNNs with labels of differing granularity, and finds that the types of information learned by the method depend intimately on the structure of the labels provided. Though the reviewers found value in the paper, they felt there were some issues with clarity, and didn't think the analyses were as thorough as they could be. I thank the authors for making changes to their paper in light of the reviews, and hope that they feel their paper is stronger because of the review process.""" 812,"""Distributed Training Across the World""","['Distributed Training', 'Bandwidth']","""Traditional synchronous distributed training is performed inside a cluster, since it requires a high-bandwidth, low-latency network (e.g. 25Gb Ethernet or InfiniBand). However, in many application scenarios, training data are often distributed across many geographic locations, where physical distance is long and latency is high. Traditional synchronous distributed training cannot scale well under such limited network conditions. In this work, we aim to scale distributed learning under high-latency networks. To achieve this, we propose delayed and temporally sparse (DTS) updates that enable synchronous training to tolerate extreme network conditions without compromising accuracy. We benchmark our algorithms on servers deployed across three continents in the world: London (Europe), Tokyo (Asia), Oregon (North America) and Ohio (North America). Under such challenging settings, DTS achieves a 90× speedup over traditional methods without loss of accuracy on ImageNet.""","""The paper introduces a distributed algorithm for training deep nets in clusters with high-latency (i.e. very remote) nodes. While the motivation and clarity are the strengths of the paper, the reviewers have some concerns regarding novelty and insufficient theoretical analysis.
""" 813,"""Zero-shot task adaptation by homoiconic meta-mapping""","['Meta-mapping', 'zero-shot', 'task adaptation', 'task representation', 'meta-learning']","""How can deep learning systems flexibly reuse their knowledge? Toward this goal, we propose a new class of challenges, and a class of architectures that can solve them. The challenges are meta-mappings, which involve systematically transforming task behaviors to adapt to new tasks zero-shot. We suggest that the key to achieving these challenges is representing the task being performed in such a way that this task representation is itself transformable. We therefore draw inspiration from functional programming and recent work in meta-learning to propose a class of Homoiconic Meta-Mapping (HoMM) approaches that represent data points and tasks in a shared latent space, and learn to infer transformations of that space. HoMM approaches can be applied to any type of machine learning task, including supervised learning and reinforcement learning. We demonstrate the utility of this perspective by exhibiting zero-shot remapping of behavior to adapt to new tasks.""","""The authors presents a method for adapting models to new tasks in a zero shot manner using learned meta-mappings. The reviewers largely agreed that this is an interesting and creative research direction. However, there was also agreement that the writing was unclear in many sections, that the appropriate metalearning baselines were not compared to, and that the power of the method was unclear due to overly simplistic domains. While the baseline issue was mostly cleared up in rebuttal and discussion, the other issues remain. Thus, I recommend rejection at this time.""" 814,"""MLModelScope: A Distributed Platform for ML Model Evaluation and Benchmarking at Scale""","['Evaluation', 'Scalable', 'Repeatable', 'Fair', 'System']","""Machine Learning (ML) and Deep Learning (DL) innovations are being introduced at such a rapid pace that researchers are hard-pressed to analyze and study them. The complicated procedures for evaluating innovations, along with the lack of standard and efficient ways of specifying and provisioning ML/DL evaluation, is a major ""pain point"" for the community. This paper proposes MLModelScope, an open-source, framework/hardware agnostic, extensible and customizable design that enables repeatable, fair, and scalable model evaluation and benchmarking. We implement the distributed design with support for all major frameworks and hardware, and equip it with web, command-line, and library interfaces. To demonstrate MLModelScope's capabilities we perform parallel evaluation and show how subtle changes to model evaluation pipeline affects the accuracy and HW/SW stack choices affect performance.""","""The paper proposes a platform for benchmarking, and in particular hardware-agnostic evaluation of machine learning models. This is an important problem as our field strives for more reproducibility. This was a very confusing paper to discuss and review, since most of the reviewers (and myself) do not know much about the area. Two of the reviewers found the paper contributions sufficient to be (weakly) accepted. The third reviewer had many issues with the work and engaged in a lengthy debate with the authors, but there was strong disagreement regarding their understanding of the scope of the paper as a Tools/Systems submission. 
Given the lack of consensus, I must recommend rejection at this time, but highly encourage the authors to take the feedback into account and resubmit to a future venue.""" 815,"""Learning to Recognize the Unseen Visual Predicates""","['Visual Relationship Detection', 'Scene Graph Generation', 'Knowledge', 'Zero-shot Learning']","""Visual relationship recognition models are limited in the ability to generalize from finite seen predicates to unseen ones. We propose a new problem setting named predicate zero-shot learning (PZSL): learning to recognize the predicates without training data. It is unlike the previous zero-shot learning problem on visual relationship recognition, which learns to recognize unseen relationship triplets but requires all components (subject, predicate, and object) to be seen in the training set. For the PZSL problem, however, the models are expected to recognize diverse, even unseen, predicates, which is meaningful for many downstream high-level tasks, like visual question answering, to handle complex scenes and open questions. PZSL is a very challenging task since the predicates are very abstract and follow an extreme long-tail distribution. To address the PZSL problem, we present a model that performs compatibility learning leveraging the linguistic priors from the corpus and knowledge base. An unbalanced sampled-softmax is further developed to tackle the extreme long-tail distribution of predicates. Finally, experiments are conducted to analyze the problem and verify the effectiveness of our methods. The dataset and source code will be released for further study. ""","""The paper proposes a new problem setting of predicate zero-shot learning for visual relation recognition, in the setting where some of the predicates are missing, and a model that is able to address it. All reviewers agreed that the problem setting is interesting and important, but had reservations about the proposed model. In particular, the reviewers were concerned that it is too simple a step from existing methods. One reviewer also pointed towards potential comparisons with other zero-shot methods. Following that discussion, I recommend rejection at this time but highly encourage the authors to take the feedback into account and resubmit to another venue.""" 816,"""SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models""",[],"""Standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest. We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series. If parameterized by an encoder-decoder architecture, the parameters of the encoder can be optimized to minimize the variance of this estimator. We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost. This estimator also allows use of latent variable models for tasks where unbiased estimators, rather than marginal likelihood lower bounds, are preferred, such as minimizing reverse KL divergences and estimating score functions.""","""The paper proposes a new way to train latent variable models. The standard way of training using the ELBO produces biased estimates for many quantities of interest. The authors introduce an unbiased estimate for the log marginal probability and its derivative to address this.
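The randomized-truncation device behind the estimator is worth a worked toy example: write the target as an infinite series, cut it at a random index, and reweight surviving terms by inverse tail probabilities so the expectation is unchanged. A generic sketch of this "Russian roulette" trick (not the exact SUMO objective, which applies it to differences of importance-weighted bounds):

```python
import random

def roulette_estimate(delta, p=0.5):
    # Unbiased estimate of S = sum_k delta(k): draw K ~ Geometric(p) on {1,2,...}
    # and reweight each kept term by 1 / P(K >= k) = 1 / (1 - p)^(k - 1).
    K = 1
    while random.random() > p:
        K += 1
    return sum(delta(k) / (1 - p) ** (k - 1) for k in range(1, K + 1))

# Toy series: sum_k 2^-k = 1. Single-sample estimates are noisy but unbiased.
delta = lambda k: 2.0 ** -k
est = sum(roulette_estimate(delta) for _ in range(100000)) / 100000
print(est)   # close to 1.0
```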
The new estimator is based on the importance weighted autoencoder, correcting the remaining bias using Russian roulette sampling. The model is empirically shown to give better test set likelihood, and can be used in tasks where unbiased estimates are needed. All reviewers are positive about the paper. Support for the main claims is provided through empirical and theoretical results. The reviewers had some minor comments, especially about the theory, which the authors addressed with additional clarification; this was appreciated by the reviewers. The paper was deemed to be well organized. There were some unclarities about variance issues and bias from gradient clipping, which the authors addressed with additional explanation as well as an additional plot. The approach is novel and addresses a very relevant problem for the ICLR community: optimizing latent variable models, especially in situations where unbiased estimates are required. The method results in marginally better optimization compared to IWAE with a much smaller average number of samples. The method was deemed by the reviewers to open up new possibilities such as entropy minimization. """ 817,"""Enhancing the Transformer with explicit relational encoding for math problem solving""","['Tensor Product Representation', 'Transformer', 'Mathematics Dataset', 'Attention']","""We incorporate Tensor-Product Representations within the Transformer in order to better support the explicit representation of relation structure. Our Tensor-Product Transformer (TP-Transformer) sets a new state of the art on the recently-introduced Mathematics Dataset containing 56 categories of free-form math word-problems. The essential component of the model is a novel attention mechanism, called TP-Attention, which explicitly encodes the relations between each Transformer cell and the other cells from which values have been retrieved by attention. TP-Attention goes beyond linear combination of retrieved values, strengthening representation-building and resolving ambiguities introduced by multiple layers of regular attention. The TP-Transformer's attention maps give better insights into how it is capable of solving the Mathematics Dataset's challenging problems. Pretrained models and code will be made available after publication.""","""This paper proposes a change in the attention mechanism of Transformers yielding the so-called ""Tensor-Product Transformer"" (TP-Transformer). The main idea is to capture filler-role relationships by incorporating a Hadamard product of each value vector representation (after attention) with a relation vector, for every attention head at every layer. The resulting model achieves SOTA on the Mathematics Dataset. Attention maps are shown in the analysis to give insights into how the TP-Transformer is capable of solving the Mathematics Dataset's challenging problems. While the modified attention mechanism is interesting and the analysis is insightful (and improved with the addition of an experiment in NMT after the rebuttal), the reviewers expressed some concerns in the discussion stage: 1. The comparison to the baseline is not fair (not to mention the 8.24% claim in the conclusion). The proposed approach adds 5 million parameters to a normal transformer (table 1, 5M is a lot!), but in terms of interpolation, it only improves 3% (extrapolation improves 0.5%) at 700k steps. The rebuttal claimed that it is fair as long as the hidden size is comparable, but I don't think that's a fair argument.
I suspect that increasing the feedforward hidden size (d_ff) of a normal transformer to match the number of parameters (and increasing the number of training steps to match) might change the conclusion. 2. The new experiment on WMT further convinces me that the theoretical motivation does not hold in practice. Even with the few million added parameters, it only improved BLEU by 0.05 (we usually consider >0.5 as significant or non-random). This might be because the feedforward and non-linearity can disambiguate as well. I also found the name TP-Transformer a bit misleading, since what is proposed and tested here is the Hadamard product (i.e. only the diagonal part of the tensor product). I recommend resubmitting an improved version of this paper with stronger empirical evidence that it outperforms regular Transformers with a comparable number of parameters.""" 818,"""Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks""","['small data', 'neural tangent kernel', 'UCI database', 'few-shot learning', 'kernel SVMs', 'deep learning theory', 'kernel design']","""Recent research shows that the following two models are equivalent: (a) infinitely wide neural networks (NNs) trained under l2 loss by gradient descent with infinitesimally small learning rate, and (b) kernel regression with respect to so-called Neural Tangent Kernels (NTKs) (Jacot et al., 2018). An efficient algorithm to compute the NTK, as well as its convolutional counterparts, appears in Arora et al. (2019a), which allowed studying the performance of infinitely wide nets on datasets like CIFAR-10. However, the super-quadratic running time of kernel methods makes them best suited for small-data tasks. We report results suggesting neural tangent kernels perform strongly on low-data tasks. 1. On a standard testbed of classification/regression tasks from the UCI database, NTK SVM beats the previous gold standard, Random Forests (RF), and also the corresponding finite nets. 2. On CIFAR-10 with 10-640 training samples, Convolutional NTK consistently beats ResNet-34 by 1%-3%. 3. On the VOC07 testbed for few-shot image classification tasks on ImageNet with transfer learning (Goyal et al., 2019), replacing the linear SVM currently used with a Convolutional NTK SVM consistently improves performance. 4. Comparing the performance of NTK with the finite-width net it was derived from, NTK behavior starts at lower net widths than suggested by theoretical analysis (Arora et al., 2019a). NTK's efficacy may trace to lower variance of output.""","""This paper carries out extensive experiments on the Neural Tangent Kernel (NTK), i.e., kernel methods based on infinitely wide neural nets, on small-data tasks. I recommend acceptance.""" 819,"""Combining Q-Learning and Search with Amortized Value Estimates""","['model-based RL', 'Q-learning', 'MCTS', 'search']","""We introduce ""Search with Amortized Value Estimates"" (SAVE), an approach for combining model-free Q-learning with model-based Monte-Carlo Tree Search (MCTS). In SAVE, a learned prior over state-action values is used to guide MCTS, which estimates an improved set of state-action values. The new Q-estimates are then used in combination with real experience to update the prior. This effectively amortizes the value computation performed by MCTS, resulting in a cooperative relationship between model-free learning and model-based search. SAVE can be implemented on top of any Q-learning agent with access to a model, which we demonstrate by incorporating it into agents that perform challenging physical reasoning tasks and Atari.
SAVE consistently achieves higher rewards with fewer training steps, and---in contrast to typical model-based search approaches---yields strong performance with very small search budgets. By combining real experience with information computed during search, SAVE demonstrates that it is possible to improve on both the performance of model-free learning and the computational cost of planning.""","""This paper proposes Search with Amortized Value Estimates (SAVE), which combines Q-learning and MCTS. SAVE uses the estimated Q-values obtained by MCTS at the root node to update the value network, and uses the learned value function to guide MCTS. The rebuttal addressed the reviewers' concerns, and they are now all positive about the paper. I recommend acceptance.""" 820,"""Exploration Based Language Learning for Text-Based Games""","['Text-Based Games', 'Exploration', 'Language Learning']","""This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games. Text-based computer games describe their world to the player through natural language and expect the player to interact with the game using text. These games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents. Moreover, they provide a learning environment in which these skills can be acquired through interactions with an environment rather than using fixed corpora. One aspect that makes these games particularly challenging for learning agents is the combinatorially large action space. Existing methods for solving text-based games are limited to games that are either very simple or have an action space restricted to a predetermined set of admissible actions. In this work, we propose to use the exploration approach of Go-Explore (Ecoffet et al., 2019) for solving text-based games. More specifically, in an initial exploration phase, we first extract trajectories with high rewards, after which we train a policy to solve the game by imitating these trajectories. Our experiments show that this approach outperforms existing solutions in solving text-based games, and it is more sample efficient in terms of the number of interactions with the environment. Moreover, we show that the learned policy can generalize better than existing solutions to unseen games without using any restriction on the action space.""","""The paper applies the Go-Explore algorithm to text-based games and shows that it is able to solve text-based games with better sample efficiency and generalization than some alternatives. The Go-Explore algorithm is used to extract high-reward trajectories that can be used to train a policy using a seq2seq model that maps observations to actions. The paper received 1 weak accept and 2 weak rejects. Initially the paper received three weak rejects, with the author response and revision convincing one reviewer to increase their score to a weak accept. Overall, the reviewers liked the paper and thought that it was well-written with good experiments. However, there is concern that the paper lacks technical novelty and would not be of interest to the broader ICLR community (beyond those interested in text-based games). Another concern reviewers expressed was that the proposed method was only compared against baselines with simple exploration strategies and that baselines with more advanced exploration strategies should be included.
The AC agrees with the above concerns and encourages the authors to improve their paper based on the reviewer feedback, and to consider resubmitting to a venue that is more focused on text-based games (perhaps an NLP conference).""" 821,"""PNAT: Non-autoregressive Transformer by Position Learning""",['Text Generation'],"""Non-autoregressive generation is a new paradigm for text generation. Previous work has hardly considered explicitly modeling the positions of generated words. However, position modeling of output words is an essential problem in non-autoregressive text generation. In this paper, we propose PNAT, which explicitly models positions of output words as latent variables in text generation. The proposed PNAT is simple yet effective. Experimental results show that PNAT gives very promising results in machine translation and paraphrase generation tasks, outperforming many strong baselines.""","""This paper presents a non-autoregressive NMT model which predicts the positions of the words to be produced as a latent variable in addition to predicting the words. This is a novel idea in a field where several other papers are attempting similar things, and it obtains good results on benchmark tasks. The major concern is the lack of systematic comparison with the FlowSeq paper, which seems to have been published before the ICLR submission deadline. The reviewers are still not convinced by the empirical performance comparisons as well as the speed comparisons. With some more work this could be a good contribution. As of now, I am recommending rejection.""" 822,"""Provable Representation Learning for Imitation Learning via Bi-level Optimization""","['imitation learning', 'representation learning', 'multitask learning', 'theory', 'behavioral cloning', 'imitation from observations alone', 'reinforcement learning']","""A common strategy in modern learning systems is to learn a representation which is useful for many tasks, a.k.a. representation learning. We study this strategy in the imitation learning setting where multiple experts' trajectories are available. We formulate representation learning as a bi-level optimization problem where the ""outer"" optimization tries to learn the joint representation and the ""inner"" optimization encodes the imitation learning setup and tries to learn task-specific parameters. We instantiate this framework for the cases where the imitation setting is behavior cloning or learning from observations alone. Theoretically, we show using our framework that representation learning can provably reduce the sample complexity of imitation learning in both settings. We also provide proof-of-concept experiments to verify our theoretical findings.""","""This paper proposes a methodology for learning a representation given multiple demonstrations, by optimizing the representation as well as the learned policy parameters. The paper includes some theoretical results showing that this is a sensible thing to do, and an empirical evaluation. Post-discussion, the reviewers (and me!) agreed that this is an interesting approach that has a lot of promise. But there was still concern about the empirical evaluation and the writing.
Hence I am recommending rejection.""" 823,"""XD: Cross-lingual Knowledge Distillation for Polyglot Sentence Embeddings""","['cross-lingual transfer', 'sentence embeddings', 'polyglot language models', 'knowledge distillation', 'natural language inference', 'embedding alignment', 'embedding mapping']","""Current state-of-the-art results in multilingual natural language inference (NLI) are based on tuning XLM (a pre-trained polyglot language model) separately for each language involved, resulting in multiple models. We reach significantly higher NLI results with a single model for all languages via multilingual tuning. Furthermore, we introduce cross-lingual knowledge distillation (XD), where the same polyglot model is used both as teacher and student across languages to improve its sentence representations without using the end-task labels. When used alone, XD beats multilingual tuning for some languages and the combination of them both results in a new state-of-the-art of 79.2% on the XNLI dataset, surpassing the previous result by absolute 2.5%. The models and code for reproducing our experiments will be made publicly available after de-anonymization.""","""This paper proposes a method for transferring an NLP model trained on one language to a new language, without using labeled data in the new language. Reviewers were split on their recommendations, but the reviews collectively raised a number of concerns which, together, make me uncomfortable accepting the paper. Reviewers were not convinced by the value of the experimental setting described in the paper; at least in the experiments conducted here, the claim that the model is distinctively effective depends on ruling out a large class of models arbitrarily. It would likely be valuable to find a concrete task/dataset/language combination that more closely aligns with the motivations for this work, and to evaluate whether the proposed method is genuinely the most effective practical option in that setting. Further, the reviewers raise a number of points involving baseline implementations, language families, and other issues that collectively make me doubt that the paper is fully sound in its current form.""" 824,"""Distributed Bandit Learning: Near-Optimal Regret with Efficient Communication""","['Theory', 'Bandit Algorithms', 'Communication Efficiency']","""We study the problem of regret minimization for distributed bandit learning, in which multiple agents work collaboratively to minimize their total regret under the coordination of a central server. Our goal is to design communication protocols with near-optimal regret and little communication cost, which is measured by the total amount of transmitted data. For distributed multi-armed bandits, we propose a protocol with near-optimal regret whose communication cost is independent of the time horizon $T$, has only logarithmic dependence on the number of arms, and matches the lower bound except for a logarithmic factor. For distributed $d$-dimensional linear bandits, we propose a protocol that achieves near-optimal regret and has communication cost with only logarithmic dependence on $T$.""","""This paper tackles the problem of regret minimization in a multi-agent bandit problem, where distributed bandit learning algorithms collaborate in order to minimize their total regret.
More specifically, the work focuses on efficient communication protocols, where the communication cost is measured by the total amount of transmitted data. The goal is therefore to design protocols with little communication cost. The authors first establish lower bounds on the communication cost, and then introduce an algorithm with provable near-optimal regret. The only concern with the paper is that ICLR may not be the appropriate venue given that this work lacks representation learning contributions. However, since all reviewers are otherwise positive about the quality and contributions of this work, I would recommend acceptance.""" 825,"""Meta-Learning Acquisition Functions for Transfer Learning in Bayesian Optimization""","['Transfer Learning', 'Meta Learning', 'Bayesian Optimization', 'Reinforcement Learning']","""Transferring knowledge across tasks to improve data-efficiency is one of the open key challenges in the field of global black-box optimization. Readily available algorithms are typically designed to be universal optimizers and, therefore, often suboptimal for specific tasks. We propose a novel transfer learning method to obtain customized optimizers within the well-established framework of Bayesian optimization, allowing our algorithm to utilize the proven generalization capabilities of Gaussian processes. Using reinforcement learning to meta-train an acquisition function (AF) on a set of related tasks, the proposed method learns to extract implicit structural information and to exploit it for improved data-efficiency. We present experiments on a simulation-to-real transfer task as well as on several synthetic functions and on two hyperparameter search problems. The results show that our algorithm (1) automatically identifies structural properties of objective functions from available source tasks or simulations, (2) performs favourably in settings with both scarce and abundant source data, and (3) falls back to the performance level of general AFs if no particular structure is present.""","""This paper explores the idea of using meta-learning for acquisition functions. It is an interesting and novel research direction with promising results. The paper could be strengthened by adding more insights about the new acquisition function and performing more comparisons, e.g., to Chen et al. (2017). But in any case, the current form of the paper should already be of high interest to the community """ 826,"""Effective Mechanism to Mitigate Injuries During NFL Plays ""","['Concussion', 'American football', 'Predictive modelling', 'Injuries', 'NFL Plays', 'Optimization']","""The NFL (American football), which is regarded as the premier sports icon of America, has been severely criticized in recent years over dangerous injuries, a growing crisis as players' lives are increasingly put at risk. Concussions, the serious brain traumas experienced during NFL play, have risen dramatically in recent seasons, reaching an alarming rate in 2017/18. Acknowledging the potential risk, the NFL has been trying to fight back via its NeuroIntel AI mechanism, as well as by modifying existing game rules and risky play practices, to reduce the rate of concussions.
As a remedy, we suggest an effective mechanism to extensively analyse potential concussion risks: predictive analysis to project an injury-risk percentage for each play, and positional impact analysis to suggest safer team formation pairs that lessen injuries, together offering a comprehensive study of NFL injury analysis. The proposed data-analytical approach differentiates itself from other similar approaches, which focused only on descriptive analysis, by aiming at a bigger context with predictive modelling and formation-pair mining that would assist in modifying existing rules to tackle injury concerns. The predictive model, which works with real-time inputs from a Kafka stream processor, and the identification of risky formation pairs by designing an FP-Matrix, make this a far-reaching solution for analysing injury data on various grounds wherever applicable.""","""All reviewers recommend reject, and there is no rebuttal.""" 827,"""Iterative Deep Graph Learning for Graph Neural Networks""","['deep learning', 'graph neural networks', 'graph learning']","""In this paper, we propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL), for jointly learning the graph structure and graph embeddings. We first cast the graph structure learning problem as a similarity metric learning problem and leverage an adapted graph regularization for controlling the smoothness, connectivity and sparsity of the generated graph. We further propose a novel iterative method for searching for a hidden graph structure that augments the initial graph structure. Our iterative method dynamically stops when the learned graph structure is sufficiently close to the ground-truth graph. Our extensive experiments demonstrate that the proposed IDGL model can consistently outperform or match state-of-the-art baselines in terms of both classification accuracy and computational time. The proposed approach can cope with both transductive training and inductive training. ""","""The submission proposes a method for learning a graph structure and node embeddings through an iterative process. Smoothness and sparsity are both optimized in this approach. The iterative method has a stopping mechanism based on distance from a ground truth. The concerns of the reviewers were about scalability and novelty. Since other methods have used the same costs for optimization, as well as other aspects of this approach, there is little contribution other than the iterative process. The improvement over LDS, the most similar approach, is relatively minor. Although the paper is promising, more work is required to establish the contributions of the method. Recommendation is for rejection. """ 828,"""Well-Read Students Learn Better: On the Importance of Pre-training Compact Models""","['NLP', 'self-supervised learning', 'language model pre-training', 'knowledge distillation', 'BERT', 'compact models']","""Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training. Due to the cost of applying such models to downstream tasks, several model compression techniques on pre-trained language representations have been proposed (Sun et al., 2019; Sanh, 2019). However, surprisingly, the simple baseline of just pre-training and fine-tuning compact models has been overlooked.
In this paper, we first show that pre-training remains important in the context of smaller architectures, and that fine-tuning pre-trained compact models can be competitive with more elaborate methods proposed in concurrent work. Starting with pre-trained compact models, we then explore transferring task knowledge from large fine-tuned models through standard knowledge distillation. The resulting simple, yet effective and general algorithm, Pre-trained Distillation, brings further improvements. Through extensive experiments, we more generally explore the interaction between pre-training and distillation under two variables that have been under-studied: model size and properties of unlabeled task data. One surprising observation is that they have a compound effect even when sequentially applied on the same data. To accelerate future research, we will make our 24 pre-trained miniature BERT models publicly available.""",""" Though the reviewers thought the ideas in this paper were interesting, they questioned the importance and magnitude of the contribution. Though it is important to share empirical results, the reviewers were not sure that there was enough here for this paper to be accepted.""" 829,"""Representing Model Uncertainty of Neural Networks in Sparse Information Form""","['Model Uncertainty', 'Neural Networks', 'Sparse representation']","""This paper addresses the problem of representing a system's belief using multi-variate normal distributions (MNDs) where the underlying model is based on a deep neural network (DNN). The major challenge with DNNs is the computational complexity that is needed to obtain model uncertainty using MNDs. To achieve a scalable method, we propose a novel approach that expresses the parameter posterior in sparse information form. Our inference algorithm is based on a novel Laplace Approximation scheme, which involves a diagonal correction of the Kronecker-factored eigenbasis. As this makes the inversion of the information matrix intractable (an operation that is required for full Bayesian analysis), we devise a low-rank approximation of this eigenbasis and a memory-efficient sampling scheme. We provide both a theoretical analysis and an empirical evaluation on various benchmark data sets, showing the superiority of our approach over existing methods.""","""This paper presents a variant of recently developed Kronecker-factored approximations to BNN posteriors. It corrects the diagonal entries of the approximate Hessian, and in order to make this scalable, approximates the Kronecker factors as low-rank. The approach seems reasonable, and is a natural thing to try. The novelty is fairly limited, however, and the calculations are mostly routine. In terms of the experiments: it seems like the method reduced the Frobenius norm of the error, though it's not clear to me that this would be a good measure of practical effectiveness. On the toy regression experiment, it's hard for me to tell the difference from the other variational methods. It looks like it helped a bit in the quantitative comparisons, though the improvement over K-FAC doesn't seem significant enough to justify acceptance purely based on the results. Reviewers felt like there was a potentially useful idea here and didn't spot any serious red flags, but didn't feel like the novelty or the experimental results were enough to justify acceptance. I tend to agree with this assessment.
""" 830,"""Revisiting Gradient Episodic Memory for Continual Learning""",[],"""Gradient Episodic Memory (GEM) is an effective model for continual learning, where each gradient update for the current task is formulated as a quadratic program problem with inequality constraints that alleviate catastrophic forgetting of previous tasks. However, practical use of GEM is impeded by several limitations: (1) the data examples stored in the episodic memory may not be representative of past tasks; (2) the inequality constraints appear to be rather restrictive for competing or conflicting tasks; (3) the inequality constraints can only avoid catastrophic forgetting but can not assure positive backward transfer. To address these issues, in this paper we aim at improving the original GEM model via three handy techniques without extra computational cost. Experiments on MNIST Permutations and incremental CIFAR100 datasets demonstrate that our techniques enhance the performance of GEM remarkably. On CIFAR100 the average accuracy is improved from 66.48% to 68.76%, along with the backward (knowledge) transfer growing from 1.38% to 4.03%.""","""This paper proposes an extension of Gradient Episodic Memory (GEM) namely support examples, soft gradient constraints, and positive backward transfer. The authors argue that experiments on MNIST and CIFAR show that the proposed method consistently improves over the original GEM. All three reviewers are not convinced with experiments in the paper. R1 and R3 mentioned that the improvements over GEM appear to be small. R2 and R3 also have some concerns without results with multiple runs. R3 has questions about hyperparameter tuning. The authors also appears to be missing recent developments in this area (e.g., A-GEM). The authors did not provide a rebuttal to these concerns. I agree with the reviewers and recommend rejecting this paper.""" 831,"""Learning to Prove Theorems by Learning to Generate Theorems""",[],"""We consider the task of automated theorem proving, a key AI task. Deep learning has shown promise for training theorem provers, but there are limited human-written theorems and proofs available for supervised learning. To address this limitation, we propose to learn a neural generator that automatically synthesizes theorems and proofs for the purpose of training a theorem prover. Experiments on real-world tasks demonstrate that synthetic data from our approach significantly improves the theorem prover and advances the state of the art of automated theorem proving in Metamath.""","""This paper proposes to augment training data for theorem provers by learning a deep neural generator that generates data to train a prover, resulting in an improvement over the Holophrasm baseline prover. The results were restricted to one particular mathematical formalism -- MetaMath, a limitation raised one by reviewer. All reviewers agree that it's an interesting method for addressing an important problem. However there were some concerns about the strength of the experimental results from R4 and R1. R4 in particular wanted to see results on more datasets, an assessment with which I agree. Although the authors argued vigorously against using other datasets, I am not convinced. For instance, they claim that other datasets do not afford the opportunity to generate new theorems, or the human proofs provided cannot be understood by an automatic prover. 
In their words, ""The idea of theorem generation can be applied to other systems beyond Metamath, but realizing it on another system is highly nontrivial. It can even involve new research challenges. In particular, due to large differences in logic foundations, grammar, inference rules, and benchmarking environments, the generation process, which is a key component of our approach, would be almost completely different for a new system. And the entire pipeline essentially needs to be re-designed and re-coded from scratch for a new formal system, which can require an unreasonable amount of engineering."" It sounds like they've essentially tailored their approach for this one dataset, which limits the generality of their approach, a limitation that was not discussed in the paper. There is also only one baseline considered, which renders their experimental findings rather weak. For these reasons, I think this work is not quite ready for publication at ICLR 2020, although future versions with stronger baselines and experiments could be quite impactful. """ 832,"""The Geometry of Sign Gradient Descent""","['Sign gradient descent', 'signSGD', 'steepest descent', 'Adam']","""Sign gradient descent has become popular in machine learning due to its favorable communication cost in distributed optimization and its good performance in neural network training. However, we currently do not have a good understanding of which geometrical properties of the objective function determine the relative speed of sign gradient descent compared to standard gradient descent. In this work, we frame sign gradient descent as steepest descent with respect to the maximum norm. We review the steepest descent framework and the related concept of smoothness with respect to arbitrary norms. By studying the smoothness constant resulting from the $\ell_\infty$-geometry, we isolate properties of the objective which favor sign gradient descent relative to gradient descent. In short, we find two requirements on its Hessian: (i) some degree of ``diagonal dominance'' and (ii) the maximal eigenvalue being much larger than the average eigenvalue. We also clarify the meaning of a certain separable smoothness assumption used in previous analyses of sign gradient descent. Experiments verify the developed theory.""","""The paper is rejected based on unanimous reviews.""" 833,"""Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network""","['Generalization error', 'compression based bound', 'local Rademacher complexity']","""One of the biggest issues in deep learning theory is the generalization ability of networks with huge model size. Classical learning theory suggests that overparameterized models cause overfitting. However, practically used large deep models avoid overfitting, which is not well explained by the classical approaches. To resolve this issue, several attempts have been made. Among them, the compression based bound is one of the promising approaches. However, the compression based bound can be applied only to a compressed network, and it is not applicable to the non-compressed original network. In this paper, we give a unified framework that can convert compression based bounds to those for non-compressed original networks. The bound gives an even better rate than the one for the compressed network by improving the bias term.
By establishing the unified framework, we can obtain a data-dependent generalization error bound which gives a tighter evaluation than the data-independent ones. ""","""This paper has a few interesting contributions: (a) a bound for un-compressed networks in terms of the compressed network (this is in contrast to some prior work, which only gives bounds on the compressed network); (b) the use of local Rademacher complexity to try to squeeze as much as possible out of the connection; (c) an application of the bound to a specific interesting favorable condition, namely low-rank structure. As a minor suggestion, I'd like to recommend that the authors go ahead and use their allowed 10th body page!""" 834,"""Closed loop deep Bayesian inversion: Uncertainty driven acquisition for fast MRI""","['Deep Bayesian Inversion', 'accelerated MRI', 'uncertainty quantification', 'sampling mask design']","""This work proposes a closed-loop, uncertainty-driven adaptive sampling framework (CLUDAS) for accelerating magnetic resonance imaging (MRI) via deep Bayesian inversion. By closed-loop, we mean that our samples adapt in real-time to the incoming data. To our knowledge, we demonstrate the first generative adversarial network (GAN) based framework for posterior estimation over a continuum of sampling rates of an inverse problem. We use this estimator to drive the sampling for accelerated MRI. Our numerical evidence demonstrates that the variance estimate strongly correlates with the expected MSE improvement for different acceleration rates even with few posterior samples. Moreover, the resulting masks bring improvements to the state-of-the-art fixed and active mask design approaches across MSE, posterior variance and SSIM on real undersampled MRI scans.""","""The author responses and notes to the AC are acknowledged. A fourth review was requested because this seemed like a tricky paper to review, given both the technical contribution and the application area. Overall, the reviewers were all in agreement in terms of score that the paper was just below borderline for acceptance. They found that the methodology seemed sensible and the application potentially impactful. However, a common thread was that the paper was hard to follow for non-experts on MRI, and the reviewers weren't entirely convinced by the experiments (asking for additional experiments and a comparison to Zhang et al.). The authors' comment on the challenge of implementing Zhang et al. is acknowledged, and it's unfortunate that cluster issues prevented additional experimental results. While ICLR certainly accepts application papers, and particularly ones with interesting technical contributions in machine learning, given that the reviewers struggled to follow the paper through the application-specific language, it does seem like this isn't the right venue for the paper as written. Thus the recommendation is to reject. Perhaps a more application-specific venue would be a better fit for this work. Otherwise, making the paper more accessible to the ML audience and providing experiments to justify the methodology beyond the application would make the paper much stronger.""" 835,"""Acutum: When Generalization Meets Adaptability""","['optimization', 'momentum', 'adaptive gradient methods']","""In spite of its slow convergence, stochastic gradient descent (SGD) is still the most practical optimization method due to its outstanding generalization ability and simplicity.
On the other hand, adaptive methods have attracted much attention from the optimization and machine learning communities, both for their leverage of life-long information and for their deep and fundamental mathematical theory. Taking the best of both worlds is the most exciting and challenging question in the field of optimization for machine learning. In this paper, we take a small step towards this ultimate goal. We revisit existing adaptive methods from a novel point of view, which reveals a fresh understanding of momentum. Our new intuition empowers us to remove the second moments in Adam without loss of performance. Based on our view, we propose a new method, named acute adaptive momentum (Acutum). To the best of our knowledge, Acutum is the first adaptive gradient method without second moments. Experimentally, we demonstrate that our method has a faster convergence rate than Adam/AMSGrad, and generalizes as well as SGD with momentum. We also provide a convergence analysis of our proposed method to complement our intuition. ""","""The paper addresses the important problem of finding a good trade-off between the generalization and convergence speed of stochastic gradient methods for training deep nets. However, there is a consensus among the reviewers, even after the rebuttals provided by the authors, that the contribution is somewhat limited and the paper may require additional work before it is ready to be published.""" 836,"""DropEdge: Towards Deep Graph Convolutional Networks on Node Classification""","['graph neural network', 'over-smoothing', 'over-fitting', 'dropedge', 'graph convolutional networks']","""Over-fitting and over-smoothing are two main obstacles to developing deep Graph Convolutional Networks (GCNs) for node classification. In particular, over-fitting weakens the generalization ability on small datasets, while over-smoothing impedes model training by isolating output representations from the input features with the increase in network depth. This paper proposes DropEdge, a novel and flexible technique to alleviate both issues. At its core, DropEdge randomly removes a certain number of edges from the input graph at each training epoch, acting like a data augmenter and also a message passing reducer. Furthermore, we theoretically demonstrate that DropEdge either reduces the convergence speed of over-smoothing or relieves the information loss caused by it. More importantly, our DropEdge is a general technique that can be equipped with many other backbone models (e.g. GCN, ResGCN, GraphSAGE, and JKNet) for enhanced performance. Extensive experiments on several benchmarks verify that DropEdge consistently improves the performance on a variety of both shallow and deep GCNs. The effect of DropEdge on preventing over-smoothing is empirically visualized and validated as well. Code is released at pseudo-url.""","""The paper proposes a very simple but thoroughly evaluated and investigated idea for improving generalization in GCNs. Though the reviews are mixed, and in the post-rebuttal discussion the two negative reviewers stuck to their ratings, the area chair feels that there are no strong grounds for rejection in the negative reviews. Accept.""" 837,"""Symmetry and Systematicity""","['symmetry', 'systematicity', 'convolution', 'symbols', 'generalisation']","""We argue that symmetry is an important consideration in addressing the problem of systematicity and investigate two forms of symmetry relevant to symbolic processes.
We implement this approach in terms of convolution and show that it can be used to achieve effective generalisation in three toy problems: rule learning, composition and grammar learning.""","""Thanks for clarifying several issues raised by the reviewers, which helped us understand the paper. Ultimately, we decided not to accept this paper due to the weakness of its contribution. I hope the updated comments by the reviewers help you strengthen your paper for a potential future submission.""" 838,"""Multiplicative Interactions and Where to Find Them""","['multiplicative interactions', 'hypernetworks', 'attention']","""We explore the role of multiplicative interaction as a unifying framework to describe a range of classical and modern neural network architectural motifs, such as gating, attention layers, hypernetworks, and dynamic convolutions amongst others. Multiplicative interaction layers as primitive operations have a long-established presence in the literature, though this is often not emphasized and thus under-appreciated. We begin by showing that such layers strictly enrich the representable function classes of neural networks. We conjecture that multiplicative interactions offer a particularly powerful inductive bias when fusing multiple streams of information or when conditional computation is required. We therefore argue that they should be considered in many situations where multiple compute or information paths need to be combined, in place of the simple and oft-used concatenation operation. Finally, we back up our claims and demonstrate the potential of multiplicative interactions by applying them in large-scale complex RL and sequence modelling tasks, where their use allows us to deliver state-of-the-art results, and thereby provides new evidence in support of multiplicative interactions playing a more prominent role when designing new neural network architectures.""","""This paper provides a unifying perspective regarding a variety of popular DNN architectures in terms of the inclusion of multiplicative interaction layers. Such layers increase the representational power of conventional linear layers, which the paper argues can induce a useful inductive bias in practical scenarios such as when multiple streams of information are fused. Empirical support is provided to validate these claims and showcase the potential of multiplicative interactions in occupying broader practical roles. All reviewers agreed to accept this paper, although some concerns were raised in terms of novelty, clarity, and the relationship with state-of-the-art models. However, the author rebuttal and updated revision are adequate, and I believe that this paper should be accepted.""" 839,"""Constrained Markov Decision Processes via Backward Value Functions""","['Reinforcement Learning', 'Constrained Markov Decision Processes', 'Deep Reinforcement Learning']","""Although Reinforcement Learning (RL) algorithms have found tremendous success in simulated domains, they often cannot directly be applied to physical systems, especially in cases where there are hard constraints to satisfy (e.g. on safety or resources). In standard RL, the agent is incentivized to explore any behavior as long as it maximizes rewards, but in the real world undesired behavior can damage either the system or the agent in a way that breaks the learning process itself. In this work, we model the problem of learning with constraints as a Constrained Markov Decision Process, and provide a new on-policy formulation for solving it.
A key contribution of our approach is to translate cumulative cost constraints into state-based constraints. Through this, we define a safe policy improvement method which maximizes returns while ensuring that the constraints are satisfied at every step. We provide theoretical guarantees under which the agent converges while ensuring safety over the course of training. We also highlight the computational advantages of this approach. The effectiveness of our approach is demonstrated on safe navigation tasks and in safety-constrained versions of MuJoCo environments, with deep neural networks.""","""The paper considers the setting of constrained MDPs and proposes using backward value functions to keep track of the constraints. All reviewers agreed that the idea of backward value functions is interesting, but there were a few technical concerns raised, and the reviewers remained unconvinced after the rebuttal. In particular, there were doubts about whether the method actually makes sense for the considered problem (the backward VF averages constraints over all trajectories, instead of only considering the current one), and a concern about insufficient baseline comparisons. I recommend rejection at this time, but encourage the authors to take the feedback into account, make the paper more crisp, and resubmit to a future venue.""" 840,"""SELF: Learning to Filter Noisy Labels with Self-Ensembling""","['Ensemble Learning', 'Robust Learning', 'Noisy Labels', 'Labels Filtering']","""Deep neural networks (DNNs) have been shown to over-fit a dataset when being trained with noisy labels for a long enough time. To overcome this problem, we present a simple and effective method, self-ensemble label filtering (SELF), to progressively filter out the wrong labels during training. Our method improves the task performance by gradually allowing supervision only from the potentially non-noisy (clean) labels and stops learning on the filtered noisy labels. For the filtering, we form running averages of predictions over the entire training dataset using the network output at different training epochs. We show that these ensemble estimates yield more accurate identification of inconsistent predictions throughout training than the single estimates of the network at the most recent training epoch. While filtered samples are removed entirely from the supervised training loss, we dynamically leverage them via semi-supervised learning in the unsupervised loss. We demonstrate the positive effect of such an approach on various image classification tasks under both symmetric and asymmetric label noise and at different noise ratios. It substantially outperforms all previous works on noise-aware learning across different datasets and can be applied to a broad set of network architectures.""","""The authors addressed the issues raised by the reviewers; I suggest accepting this paper.""" 841,"""Learning Space Partitions for Nearest Neighbor Search""","['space partition', 'lsh', 'locality sensitive hashing', 'nearest neighbor search']","""Space partitions of $\mathbb{R}^d$ underlie a vast and important class of fast nearest neighbor search (NNS) algorithms. Inspired by recent theoretical work on NNS for general metric spaces (Andoni et al. 2018b,c), we develop a new framework for building space partitions, reducing the problem to balanced graph partitioning followed by supervised classification.
We instantiate this general approach with the KaHIP graph partitioner (Sanders and Schulz 2013) and neural networks, respectively, to obtain a new partitioning procedure called Neural Locality-Sensitive Hashing (Neural LSH). On several standard benchmarks for NNS (Aumuller et al. 2017), our experiments show that the partitions obtained by Neural LSH consistently outperform partitions found by quantization-based and tree-based methods as well as classic, data-oblivious LSH.""","""This paper proposes a new framework for improved nearest neighbor search by learning a space partition of the data, allowing for better scalability in distributed settings and overall better performance over existing benchmarks. The two reviewers who were most confident were both positive about the contributions and the revisions. The one reviewer who recommended reject was concerned about the metric used and whether the comparison with baselines was fair. In my opinion, the authors seem to have been very receptive to reviewer comments and answered these issues to my satisfaction. After author and reviewer engagement, both R1 and myself are satisfied with the addition of the new baselines and think the authors have sufficiently addressed the major concerns. For the final version of the paper, I'd urge the authors to take seriously R4's comment regarding clarity and add algorithmic details as per their suggestion. """ 842,"""Salient Explanation for Fine-grained Classification""","['Visual explanation', 'XAI', 'Convolutional Neural Network']","""Explaining the predictions of deep models has gained increasing attention in order to increase their applicability, even extending to life-affecting decisions. However, there has been no attempt to pinpoint only the most discriminative features contributing specifically to separating different classes in a fine-grained classification task. This paper introduces a novel notion of salient explanation and proposes a simple yet effective salient explanation method called Gaussian light and shadow (GLAS), which estimates the spatial impact of deep models by feature perturbation inspired by light and shadow in nature. GLAS provides useful coarse-to-fine control benefiting from the scalability of the Gaussian mask. We also devised the ability to identify multiple instances through recursive GLAS. We demonstrate the effectiveness of GLAS for fine-grained classification using a fine-grained classification dataset. To show the general applicability, we also illustrate that GLAS has state-of-the-art performance at high speed (about 0.5 sec per 224 × 224 image) via the ImageNet Large Scale Visual Recognition Challenge. ""","""This paper is interested in finding salient areas in a deep learning image classification setting. The introduced method relies on masking images using Gaussian light and shadow (GLAS) and estimating their impact on the output. As noted by all reviewers, the paper is too weak for publication in its current form: - Novelty is very low. - The experimental section is not convincing enough; in particular, some metrics are missing. - The writing should be improved.""" 843,"""GraphAF: a Flow-based Autoregressive Model for Molecular Graph Generation""","['Molecular graph generation', 'deep generative models', 'normalizing flows', 'autoregressive models']","""Molecular graph generation is a fundamental problem for drug discovery and has been attracting growing attention.
The problem is challenging since it requires not only generating chemically valid molecular structures but also optimizing their chemical properties at the same time. Inspired by the recent progress in deep generative models, in this paper we propose a flow-based autoregressive model for graph generation called GraphAF. GraphAF combines the advantages of both autoregressive and flow-based approaches and enjoys: (1) high model flexibility for data density estimation; (2) efficient parallel computation for training; (3) an iterative sampling process, which allows leveraging chemical domain knowledge for valency checking. Experimental results show that GraphAF is able to generate 68\% chemically valid molecules even without chemical knowledge rules and 100\% valid molecules with chemical rules. The training process of GraphAF is two times faster than the existing state-of-the-art approach GCPN. After fine-tuning the model for goal-directed property optimization with reinforcement learning, GraphAF achieves state-of-the-art performance on both chemical property optimization and constrained property optimization. ""","""All reviewers agreed that this paper is essentially a combination of existing ideas, making it a bit incremental, but that it is well-executed and a good contribution. Specifically, to quote R1: ""This paper proposes a generative model architecture for molecular graph generation based on autoregressive flows. The main contribution of this paper is to combine existing techniques (auto-regressive BFS-ordered generation of graphs, normalizing flows, dequantization by Gaussian noise, fine-tuning based on reinforcement learning for molecular property optimization, and validity constrained sampling). Most of these techniques are well-established either for data generation with normalizing flows or for molecular graph generation and the novelty lies in the combination of these building blocks into a framework. ... Overall, the paper is very well written, nicely structured and addresses an important problem. The framework in its entirety is novel, but the building blocks of the proposed framework are established in prior work and the idea of using normalizing flows for graph generation has been proposed in earlier work. Nonetheless, I find the paper relevant for an ICLR audience and the quality of execution and presentation of the paper is good.""""" 844,"""Low Bias Gradient Estimates for Very Deep Boolean Stochastic Networks""",[],"""Stochastic neural networks with discrete random variables are an important class of models for their expressivity and interpretability. Since direct differentiation and backpropagation are not possible, Monte Carlo gradient estimation techniques have been widely employed for training such models. Efficient stochastic gradient estimators, such as Straight-Through and Gumbel-Softmax, work well for shallow models with one or two stochastic layers. Their performance, however, suffers with increasing model complexity. In this work we focus on stochastic networks with multiple layers of Boolean latent variables. To analyze such networks, we employ the framework of harmonic analysis for Boolean functions. We use it to derive an analytic formulation for the source of bias in the biased Straight-Through estimator. Based on the analysis, we propose \emph{FouST}, a simple gradient estimation algorithm that relies on three simple bias reduction steps.
Extensive experiments show that FouST performs favorably compared to state-of-the-art biased estimators, while being much faster than unbiased ones. To the best of our knowledge, FouST is the first gradient estimator able to train very deep stochastic neural networks, with up to 80 deterministic and 11 stochastic layers. ""","""Straight-Through is a popular, yet not theoretically well-understood, biased gradient estimator for Bernoulli random variables. The low variance of this estimator makes it a highly useful tool for training large-scale models with binary latents. However, the bias of this estimator may cause divergence in training, which is a significant practical issue. The paper develops a Fourier analysis of the Straight-Through estimator and provides an expression for the bias of the estimator in terms of the Fourier coefficients of the considered function. The paper in its current form is not good enough for publication, and the reviewers believe that the paper contains significant mistakes when deriving the estimator. Furthermore, the Fourier analysis seems unnecessary. """ 845,"""Autoencoder-based Initialization for Recurrent Neural Networks with a Linear Memory""","['recurrent neural networks', 'autoencoders', 'orthogonal RNNs']","""Orthogonal recurrent neural networks address the vanishing gradient problem by parameterizing the recurrent connections using an orthogonal matrix. This class of models is particularly effective at solving tasks that require the memorization of long sequences. We propose an alternative solution based on explicit memorization using linear autoencoders for sequences. We show how a recently proposed recurrent architecture, the Linear Memory Network, composed of a nonlinear feedforward layer and a separate linear recurrence, can be used to solve hard memorization tasks. We propose an initialization schema that sets the weights of a recurrent architecture to approximate a linear autoencoder of the input sequences, which can be found with a closed-form solution. The initialization schema can be easily adapted to any recurrent architecture. We argue that this approach is superior to a random orthogonal initialization due to the autoencoder, which allows the memorization of long sequences even before training. The empirical analysis shows that our approach achieves competitive results against alternative orthogonal models, and the LSTM, on sequential MNIST, permuted MNIST and TIMIT.""","""The paper explores an initialization scheme for the recently introduced linear memory network (LMN) (Bacciu et al., 2019) that is better than random initialization, and the approach is tested on various MNIST and TIMIT data sets with positive results. Reviewer 3 raised concerns about the breadth of experiments and novelty. Reviewer 2 recognized that the model performs well on its MNIST baselines and had concerns about applicability to larger settings. Reviewer 1 acknowledges a very well written paper, but again raises concerns about the thoroughness of the experiments. The authors responded to all three reviewers, explaining that the tasks were chosen to match existing work and that the approach is complementary to LSTMs for solving different tasks. Overall the reviewers did not re-adjust their ratings. There remain questions on scalability and generality, which make the paper not yet ready for acceptance. We hope that the reviews support the authors' further research.""" 846,"""Why do These Match?
Explaining the Behavior of Image Similarity Models""","['explainable artificial intelligence', 'image similarity', 'artificial intelligence for fashion']","""Explaining a deep learning model can help users understand its behavior and allow researchers to discern its shortcomings. Recent work has primarily focused on explaining models for tasks like image classification or visual question answering. In this paper, we introduce an explanation approach for image similarity models, where a model's output is a score measuring the similarity of two inputs rather than a classification. In this task, an explanation depends on both of the input images, so standard methods do not apply. We propose an explanation method that pairs a saliency map identifying important image regions with an attribute that best explains the match. We find that our explanations provide additional information not typically captured by saliency maps alone, and can also improve performance on the classic task of attribute recognition. Our approach's ability to generalize is demonstrated on two datasets from diverse domains, Polyvore Outfits and Animals with Attributes 2.""","""This submission proposes an explainability method for deep visual representation models that have been trained to compute image similarity. Strengths: -The paper tackles an important and overlooked problem. -The proposed approach is novel and interesting. Weaknesses: -The evaluation is not convincing. In particular (i) the evaluation is performed only on ground-truth pairs, rather than on ground-truth pairs and predicted pairs; (ii) the user study doesn't disambiguate whether users find the SANE explanations better than the saliency map explanations or whether users tend to find text more understandable in general than heat maps. The user study should have compared their predicted attributes to the attribute prediction baseline; (iii) the explanation of Figure 4 is not convincing: the attribute is not only being removed. A new attribute is also being inserted (i.e. a new color). Therefore it's not clear whether the similarity score should have increased or decreased; (iv) the proposed metric in section 4.2 is flawed: it matters whether similarity increases or decreases with insertion or deletion. The proposed metric doesn't reflect that. -Some key details, such as how the attribute insertion process was performed, haven't been explained. The reviewer ratings were borderline after discussion, with some important concerns still not having been addressed after the author feedback period. Given the remaining shortcomings, the AC recommends rejection.""" 847,"""CAN ALTQ LEARN FASTER: EXPERIMENTS AND THEORY""","['Reinforcement Learning', 'Q-Learning', 'Adam', 'Restart', 'Convergence Analysis']","""Unlike the popular Deep Q-Network (DQN) learning, Alternating Q-learning (AltQ) does not fully fit a target Q-function at each iteration, and is generally known to be unstable and inefficient. Limited applications of AltQ mostly rely on substantially altering the algorithm architecture in order to improve its performance. Although Adam appears to be a natural solution, its performance in AltQ has rarely been studied before. In this paper, we first provide a solid exploration of how well AltQ performs with Adam. We then take a further step to improve the implementation by adopting the technique of parameter restart. More specifically, the proposed algorithms are tested on a batch of Atari 2600 games and exhibit superior performance to the DQN learning method.
The convergence rate of a slightly modified version of the proposed algorithms is characterized under linear function approximation. To the best of our knowledge, this is the first theoretical study of Adam-type algorithms in Q-learning. ""","""The reviewers attempted to provide a fair assessment of this work, albeit with varying qualifications. Nevertheless, the depth and significance of the technical contribution was unanimously questioned, and the experimental evaluation was not considered to be convincing by any of the assessors. The criticisms are sufficient to ask the authors to further strengthen this work before it can be considered for a top conference.""" 848,"""Verification of Generative-Model-Based Visual Transformations""","['robustness certification', 'formal verification', 'robustness analysis', 'latent space interpolations']","""Generative networks are promising models for specifying visual transformations. Unfortunately, certification of generative models is challenging, as one needs to capture sufficient non-convexity so as to produce precise bounds on the output. Existing verification methods either fail to scale to generative networks or do not capture enough non-convexity. In this work, we present a new verifier, called ApproxLine, that can certify non-trivial properties of generative networks. ApproxLine performs both deterministic and probabilistic abstract interpretation and captures infinite sets of outputs of generative networks. We show that ApproxLine can verify interesting interpolations in the network's latent space.""","""The goal of verification of properties of generative models is very interesting and the contributions of this work seem to make some progress in this context. However, the current state of the paper (particularly, its presentation) makes it difficult to recommend its acceptance.""" 849,"""InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization""","['graph-level representation learning', 'mutual information maximization']","""This paper studies learning the representations of whole graphs in both unsupervised and semi-supervised scenarios. Graph-level representations are critical in a variety of real-world applications such as predicting the properties of molecules and community analysis in social networks. Traditional graph-kernel-based methods are simple yet effective for obtaining fixed-length representations of graphs, but they suffer from poor generalization due to hand-crafted designs. There are also some recent methods based on language models (e.g. graph2vec), but they tend to only consider certain substructures (e.g. subtrees) as graph representatives. Inspired by recent progress in unsupervised representation learning, in this paper we propose a novel method called InfoGraph for learning graph-level representations. We maximize the mutual information between the graph-level representation and the representations of substructures of different scales (e.g., nodes, edges, triangles). By doing so, the graph-level representations encode aspects of the data that are shared across different scales of substructures. We further propose InfoGraph*, an extension of InfoGraph for semi-supervised scenarios. InfoGraph* maximizes the mutual information between unsupervised graph representations learned by InfoGraph and the representations learned by existing supervised methods. 
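As a rough illustration of the local-global mutual information objective that record 849 describes, the sketch below assumes a simple bilinear discriminator and the Jensen-Shannon-style estimator common in this line of work; the function and variable names are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def local_global_mi_loss(node_emb, graph_emb, batch_index):
    # node_emb: (N, d) local (node) embeddings for all nodes in the batch
    # graph_emb: (G, d) global embeddings for the G graphs in the batch
    # batch_index: (N,) long tensor, id in [0, G) of each node's graph
    scores = node_emb @ graph_emb.t()  # (N, G) bilinear discriminator scores
    pos_mask = F.one_hot(batch_index, num_classes=graph_emb.size(0)).bool()
    pos, neg = scores[pos_mask], scores[~pos_mask]
    # Jensen-Shannon MI bound: pull (node, own graph) pairs together,
    # push (node, other graph) pairs apart
    return F.softplus(-pos).mean() + F.softplus(neg).mean()
```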
As a result, the supervised encoder learns from unlabeled data while preserving the latent semantic space favored by the current supervised task. Experimental results on the tasks of graph classification and molecular property prediction show that InfoGraph is superior to state-of-the-art baselines and InfoGraph* can achieve performance competitive with state-of-the-art semi-supervised models.""","""This paper proposes a graph embedding method for the whole graph under both unsupervised and semi-supervised settings. It can extract a fixed-length graph-level representation with good generalization capability. All reviewers provided a unanimous rating of weak accept. The reviewers praise the paper as well written and of value to different fields dealing with graph learning. There was some discussion of the novelty of the approach, which was better clarified after the authors' response. Overall this paper presents a new effort in the active topic of graph representation learning with potentially large impact on multiple fields. Therefore, the ACs recommend it to be an oral paper.""" 850,"""Learning to Control PDEs with Differentiable Physics""","['Differentiable physics', 'Optimal control', 'Deep learning']","""Predicting outcomes and planning interactions with the physical world are long-standing goals for machine learning. A variety of such tasks involves continuous physical systems, which can be described by partial differential equations (PDEs) with many degrees of freedom. Existing methods that aim to control the dynamics of such systems are typically limited to relatively short time frames or a small number of interaction parameters. We present a novel hierarchical predictor-corrector scheme which enables neural networks to learn to understand and control complex nonlinear physical systems over long time frames. We propose to split the problem into two distinct tasks: planning and control. To this end, we introduce a predictor network that plans optimal trajectories and a control network that infers the corresponding control parameters. Both stages are trained end-to-end using a differentiable PDE solver. We demonstrate that our method successfully develops an understanding of complex physical systems and learns to control them for tasks involving PDEs such as the incompressible Navier-Stokes equations.""","""The paper proposes a method to control dynamical systems described by partial differential equations (PDEs). The method uses a hierarchical predictor-corrector scheme that divides the problem into smaller and simpler temporal subproblems. They illustrate the performance of their method on the 1D Burgers PDE and 2D incompressible flow. The reviewers are all positive about this paper and find it well-written and potentially impactful. Hence, I recommend acceptance of this paper.""" 851,"""Faster Neural Network Training with Data Echoing""","['systems', 'faster training', 'large scale']","""In the twilight of Moore's law, GPUs and other specialized hardware accelerators have dramatically sped up neural network training. However, earlier stages of the training pipeline, such as disk I/O and data preprocessing, do not run on accelerators. As accelerators continue to improve, these earlier stages will increasingly become the bottleneck. In this paper, we introduce data echoing, which reduces the total computation used by earlier pipeline stages and speeds up training whenever computation upstream from accelerators dominates the training time.
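The data echoing idea just introduced admits a very small sketch; the version below echoes whole batches, one of several insertion points the paper investigates, and `train_loader`/`train_step` are placeholder names.

```python
def echoed_batches(pipeline, echo_factor):
    """Yield each upstream batch `echo_factor` times, so the accelerator
    takes several optimization steps per expensive read/preprocess."""
    for batch in pipeline:            # slow stage: disk I/O, augmentation
        for _ in range(echo_factor):  # cheap stage: reuse downstream
            yield batch

# usage sketch:
# for x, y in echoed_batches(train_loader, echo_factor=3):
#     train_step(x, y)
```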
Data echoing reuses (or echoes) intermediate outputs from earlier pipeline stages in order to reclaim idle capacity. We investigate the behavior of different data echoing algorithms on various workloads, for various amounts of echoing, and for various batch sizes. We find that in all settings, at least one data echoing algorithm can match the baseline's predictive performance using less upstream computation. We measured a factor-of-3.25 decrease in wall-clock time for ResNet-50 on ImageNet when reading training data over a network.""","""This paper presents a simple trick of taking multiple SGD steps on the same data to improve distributed processing of data and reclaim idle capacity. The underlying idea seems interesting enough, but the reviewers had several concerns. 1. The method is a simple trick (R2). I don't think this is a good reason to reject the paper, as R3 also noted, so I think this is fine. 2. There are not clear application cases (R3). The authors have given a reasonable response to this, indicating that this method is likely more useful for prototyping than for well-developed applications. This makes sense to me, but both R3 and I felt that this was insufficiently discussed in the paper, despite seeming quite important to arguing the main point. 3. The results look magical, or too good to be true without additional analysis (R1 and R3). This concerns me the most, and I'm not sure that this point has been addressed by the rebuttal. In addition, it seems that extensive hyperparameter tuning has been performed, which also somewhat goes against the idea that ""this is good for prototyping"". If it's good for prototyping, then ideally it should be a method where hyperparameter tuning is not very necessary. 4. The connections with theoretical understanding of SGD are not well elucidated (R1). I also agree this is a problem, but perhaps not a fatal one -- very often simple heuristics prove effective, and then are analyzed later in follow-up papers. Honestly, this paper is somewhat borderline, but given the large number of good papers that have been submitted to ICLR this year, I'm recommending that this not be accepted at this time, but certainly hope that the authors continue to improve the paper towards a final publication at a different venue. """ 852,"""Quantum Semi-Supervised Kernel Learning""","['quantum machine learning', 'semi-supervised learning', 'support vector machines']","""Quantum machine learning methods have the potential to facilitate learning using extremely large datasets. While the availability of data for training machine learning models is steadily increasing, oftentimes it is much easier to collect feature vectors than to obtain the corresponding labels. One of the approaches for addressing this issue is to use semi-supervised learning, which leverages not only the labeled samples, but also unlabeled feature vectors. Here, we present a quantum machine learning algorithm for training Semi-Supervised Kernel Support Vector Machines. The algorithm uses recent advances in quantum sample-based Hamiltonian simulation to extend the existing Quantum LS-SVM algorithm to handle the semi-supervised term in the loss, while maintaining the same quantum speedup as the Quantum LS-SVM.""","""Three reviewers have assessed this paper and they scored it 6/6/6 after the rebuttal, with one reviewer hesitating about the appropriateness of this submission to ML venues. 
The reviewers have raised a number of criticisms, such as the incremental nature of the paper (HHL and LMR algorithms) and the main contributions lying more within the field of quantum computing than ML. The paper was discussed with the reviewers, the buddy AC, and the chairs. On balance, it was concluded that this paper falls just below the acceptance threshold. We encourage the authors to consider all criticism, improve the paper, and resubmit to another venue, as there is some merit to the proposed idea. """ 853,"""A Non-asymptotic comparison of SVRG and SGD: tradeoffs between compute and speed""","['variance reduction', 'non-asymptotic analysis', 'trade-off', 'computational cost', 'convergence speed']","""Stochastic gradient descent (SGD), which trades off noisy gradient updates for computational efficiency, is the de facto optimization algorithm to solve large-scale machine learning problems. SGD can make rapid learning progress by performing updates using subsampled training data, but the noisy updates also lead to slow asymptotic convergence. Several variance reduction algorithms, such as SVRG, introduce control variates to obtain a lower variance gradient estimate and faster convergence. Despite their appealing asymptotic guarantees, SVRG-like algorithms have not been widely adopted in deep learning. The traditional asymptotic analysis in stochastic optimization provides limited insight into training deep learning models under a fixed number of epochs. In this paper, we present a non-asymptotic analysis of SVRG under a noisy least squares regression problem. Our primary focus is to compare the exact loss of SVRG to that of SGD at each iteration t. We show that the learning dynamics of our regression model closely matches that of neural networks on MNIST and CIFAR-10 for both the underparameterized and the overparameterized models. Our analysis and experimental results suggest there is a trade-off between the computational cost and the convergence speed in underparameterized neural networks. SVRG outperforms SGD after a few epochs in this regime. However, SGD is shown to always outperform SVRG in the overparameterized regime.""","""Two reviewers as well as the AC are confused by the paper, perhaps because its readability should be improved. It is clear that the page limitations of conferences are problematic; with 7 pages of appendix (not part of the review), the authors may consider another venue for publication. In its current form, the usefulness for the ICLR community seems limited.""" 854,"""Meta-Learning Initializations for Image Segmentation""","['meta-learning', 'image segmentation']","""While meta-learning approaches that utilize neural network representations have made progress in few-shot image classification, reinforcement learning, and, more recently, image semantic segmentation, the training algorithms and model architectures have become increasingly specialized to the few-shot domain. A natural question that arises is how to develop learning systems that scale from few-shot to many-shot settings while yielding human-level performance in both. One potentially scalable approach, which requires neither ensembling many models nor the computational costs of relation networks, is to meta-learn an initialization. In this work, we study first-order meta-learning of initializations for deep neural networks that must produce dense, structured predictions given an arbitrary amount of training data for a new task.
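For record 854, the first-order Reptile update it builds on can be sketched as follows; the cross-entropy loss and SGD inner loop are concrete stand-ins, not the paper's exact training setup.

```python
import copy
import torch
import torch.nn.functional as F

def reptile_step(model, task_batches, inner_lr, inner_steps, meta_lr):
    """One Reptile outer update: adapt a copy of the model on a task,
    then move the shared initialization toward the adapted weights."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _, (x, y) in zip(range(inner_steps), task_batches):
        opt.zero_grad()
        F.cross_entropy(adapted(x), y).backward()  # works for dense outputs too
        opt.step()
    with torch.no_grad():
        for p, q in zip(model.parameters(), adapted.parameters()):
            p += meta_lr * (q - p)  # theta <- theta + eps * (phi_task - theta)
```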
Our primary contributions include (1) an extension and experimental analysis of first-order model-agnostic meta-learning algorithms (including FOMAML and Reptile) to image segmentation, (2) a formalization of the generalization error of episodic meta-learning algorithms, which we leverage to decrease error on unseen tasks, (3) a novel neural network architecture built for parameter efficiency, which we call EfficientLab, and (4) an empirical study of how meta-learned initializations compare to ImageNet initializations as the training set size increases. We show that meta-learned initializations for image segmentation smoothly transition from canonical few-shot learning problems to larger datasets, outperforming random and ImageNet-trained initializations. Finally, we show both theoretically and empirically that a key limitation of MAML-type algorithms is that when adapting to new tasks, a single update procedure is used that is not conditioned on the data. We find that our network, with an empirically estimated optimal update procedure, yields state-of-the-art results on the FSS-1000 dataset, while only requiring one forward pass through a single model at evaluation time.""","""The reviewers reached a consensus that the paper was not ready to be accepted in its current form. The main concerns were in regard to clarity, relatively limited novelty, and a relatively unsatisfying experimental evaluation. Although some of the clarity concerns were addressed during the response period, the other issues still remained, and the reviewers generally agreed that the paper should be rejected.""" 855,"""Do recent advancements in model-based deep reinforcement learning really improve data efficiency?""","['deep learning', 'reinforcement learning', 'data efficiency', 'DQN', 'Rainbow', 'SimPLe']","""Reinforcement learning (RL) has seen great advancements in the past few years. Nevertheless, the consensus among the RL community is that currently used model-free methods, despite all their benefits, suffer from extreme data inefficiency. To circumvent this problem, novel model-based approaches were introduced that often claim to be much more efficient than their model-free counterparts. In this paper, however, we demonstrate that the state-of-the-art model-free Rainbow DQN algorithm can be trained using a much smaller number of samples than is commonly reported. By simply allowing the algorithm to execute network updates more frequently, we manage to reach similar or better results than existing model-based techniques, at a fraction of the complexity and computational cost. Furthermore, based on the outcomes of the study, we argue that an agent similar to the modified Rainbow DQN presented in this paper should be used as a baseline for any future work aimed at improving the sample efficiency of deep reinforcement learning.""","""The paper makes broad claims, but the experiments are limited to a narrow combination of algorithms.""" 856,"""PCMC-Net: Feature-based Pairwise Choice Markov Chains""","['choice modeling', 'pairwise choice Markov chains', 'deep learning', 'amortized inference', 'automatic differentiation', 'airline itinerary choice modeling']","""Pairwise Choice Markov Chains (PCMC) have been recently introduced to overcome limitations of choice models based on traditional axioms, which are unable to express empirical observations from modern behavioral economics, such as the context effects that occur when a choice between two options is altered by adding a third alternative. 
The inference approach, which estimates the transition rates between each possible pair of alternatives via maximum likelihood, suffers when the examples of each alternative are scarce and is inappropriate when new alternatives can be observed at test time. In this work, we propose an amortized inference approach for PCMC by embedding its definition into a neural network that represents transition rates as a function of the alternatives' and individual's features. We apply our construction to the complex case of airline itinerary booking, where singletons are common (due to varying prices and individual-specific itineraries), and context effects and behaviors strongly dependent on market segments are observed. Experiments show our network significantly outperforming, in terms of prediction accuracy and logarithmic loss, feature-engineered standard and latent-class Multinomial Logit models as well as recent machine learning approaches.""","""This submission proposes to use neural networks in combination with pairwise choice Markov chain models for choice modeling. The deep network is used to parametrize the PCMC and in so doing improves generalization and inference. Strengths: The formulation and theoretical justifications are convincing. The improvements are non-trivial and the approach is novel. Weaknesses: The text was not always easy to follow. The experimental validation was initially too limited; this was addressed during the discussion by adding an additional experiment. All reviewers recommend acceptance. """ 857,"""Kaleidoscope: An Efficient, Learnable Representation For All Structured Linear Maps""","['structured matrices', 'efficient ML', 'algorithms', 'butterfly matrices', 'arithmetic circuits']","""Modern neural network architectures use structured linear transformations, such as low-rank matrices, sparse matrices, permutations, and the Fourier transform, to improve inference speed and reduce memory usage compared to general linear maps. However, choosing which of the myriad structured transformations to use (and its associated parameterization) is a laborious task that requires trading off speed, space, and accuracy. We consider a different approach: we introduce a family of matrices called kaleidoscope matrices (K-matrices) that provably capture any structured matrix with near-optimal space (parameter) and time (arithmetic operation) complexity. We empirically validate that K-matrices can be automatically learned within end-to-end pipelines to replace hand-crafted procedures, in order to improve model quality. For example, replacing channel shuffles in ShuffleNet improves classification accuracy on ImageNet by up to 5%. K-matrices can also simplify hand-engineered pipelines: we replace filter bank feature computation in speech data preprocessing with a learnable kaleidoscope layer, resulting in only 0.4% loss in accuracy on the TIMIT speech recognition task. In addition, K-matrices can capture latent structure in models: for a challenging permuted image classification task, adding a K-matrix to a standard convolutional architecture can enable learning the latent permutation and improve accuracy by over 8 points. We provide a practically efficient implementation of our approach, and use K-matrices in a Transformer network to attain 36% faster end-to-end inference speed on a language translation task.""","""The paper generalizes several existing results for structured linear transformations in the form of K-matrices. 
This is an excellent paper, and all reviewers confirmed that.""" 858,"""PopSGD: Decentralized Stochastic Gradient Descent in the Population Model""","['Distributed machine learning', 'distributed optimization', 'decentralized parallel SGD', 'population protocols']","""The population model is a standard way to represent large-scale decentralized distributed systems, in which agents with limited computational power interact in randomly chosen pairs, in order to collectively solve global computational tasks. In contrast with synchronous gossip models, nodes are anonymous, lack a common notion of time, and have no control over their scheduling. In this paper, we examine whether large-scale distributed optimization can be performed in this extremely restrictive setting. We introduce and analyze a natural decentralized variant of stochastic gradient descent (SGD), called PopSGD, in which every node maintains a local parameter, and is able to compute stochastic gradients with respect to this parameter. Every pair-wise node interaction performs a stochastic gradient step at each agent, followed by averaging of the two models. We prove that, under standard assumptions, SGD can converge even in this extremely loose, decentralized setting, for both convex and non-convex objectives. Moreover, surprisingly, in the former case, the algorithm can achieve linear speedup in the number of nodes n. Our analysis leverages a new technical connection between decentralized SGD and randomized load balancing, which enables us to tightly bound the concentration of node parameters. We validate our analysis through experiments, showing that PopSGD can achieve convergence and speedup for large-scale distributed learning tasks in a supercomputing environment.""","""This manuscript studies scaling distributed stochastic gradient descent to a large number of nodes. Specifically, it proposes to use algorithms based on population analysis (relevant for large numbers of distributed nodes) to implement distributed training of deep neural networks. In reviews and discussions, the reviewers and AC note missing or inadequate comparisons to previous work on asynchronous SGD, and a possible lack of novelty relative to prior work. The reviewers also noted the incomplete empirical comparison to closely related work. Regarding the writing, reviewers mentioned that the conciseness of the manuscript could be improved. """ 859,"""Coherent Gradients: An Approach to Understanding Generalization in Gradient Descent-based Optimization""","['generalization', 'deep learning']","""An open question in the Deep Learning community is why neural networks trained with Gradient Descent generalize well on real datasets even though they are capable of fitting random data. We propose an approach to answering this question based on a hypothesis about the dynamics of gradient descent that we call Coherent Gradients: gradients from similar examples are similar, and so the overall gradient is stronger in certain directions where these reinforce each other. Thus changes to the network parameters during training are biased towards those that (locally) simultaneously benefit many examples when such similarity exists. We support this hypothesis with heuristic arguments and perturbative experiments and outline how this can explain several common empirical observations about Deep Learning. Furthermore, our analysis is not just descriptive, but prescriptive. 
It suggests a natural modification to gradient descent that can greatly reduce overfitting.""","""The paper proposes an intuitive causal explanation for the generalization properties of GD methods. The reviewers appreciated the insights, with one reviewer claiming that there was significant overlap with existing work. I ultimately decided to accept this paper as I believe intuitive explanations are critical to the propagation of ideas. That being said, there is a tendency in this community to erase past (especially theoretical) work, precisely because theoretical work is less popular. Hence, I want to make it clear that the acceptance of this paper is based on the premise that the authors will incorporate all of reviewer 3's comments and give enough credit to all relevant work (namely, all the papers cited by the reviewer) with a proper discussion of the links between these.""" 860,"""AssembleNet: Searching for Multi-Stream Neural Connectivity in Video Architectures""","['video representation learning', 'video understanding', 'activity recognition', 'neural architecture search']","""Learning to represent videos is a very challenging task both algorithmically and computationally. Standard video CNN architectures have been designed by directly extending architectures devised for image understanding to include the time dimension, using modules such as 3D convolutions, or by using a two-stream design to capture both appearance and motion in videos. We interpret a video CNN as a collection of multi-stream convolutional blocks connected to each other, and propose the approach of automatically finding neural architectures with better connectivity and spatio-temporal interactions for video understanding. This is done by evolving a population of overly-connected architectures guided by connection weight learning. Architectures combining representations that abstract different input types (i.e., RGB and optical flow) at multiple temporal resolutions are searched for, allowing different types or sources of information to interact with each other. Our method, referred to as AssembleNet, outperforms prior approaches on public video datasets, in some cases by a great margin. We obtain 58.6% mAP on Charades and 34.27% accuracy on Moments-in-Time.""","""The submission applies architecture search to find effective architectures for video classification. The work is not terribly innovative, but the results are good. All reviewers recommend accepting the paper.""" 861,"""The Frechet Distance of training and test distribution predicts the generalization gap""","['Generalization', 'Transfer learning', 'Frechet distance', 'Optimal transport', 'Domain adaptation', 'Distribution shift', 'Invariance']","""Learning theory tells us that more data is better when minimizing the generalization error of identically distributed training and test sets. However, when training and test distribution differ, this distribution shift can have a significant effect. With a novel perspective on function transfer learning, we are able to lower-bound the change in performance when transferring from training to test set with the Wasserstein distance between the embedded training and test set distributions. We find that there is a trade-off affecting performance between how invariant a function is to changes in training and test distribution and how large this shift in distribution is. 
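The Frechet distance used in record 861 is the same quantity that underlies FID; for two sets of embedded samples it can be computed as in the sketch below (assuming `emb_train` and `emb_test` are (n, d) NumPy arrays).

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(emb_train, emb_test):
    """FD between Gaussians fitted to the two embedded samples:
    ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^(1/2))."""
    mu1, mu2 = emb_train.mean(axis=0), emb_test.mean(axis=0)
    c1 = np.cov(emb_train, rowvar=False)
    c2 = np.cov(emb_test, rowvar=False)
    covmean = sqrtm(c1 @ c2).real  # drop tiny imaginary parts from rounding
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2.0 * covmean))
```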
Empirically, across several data domains, we substantiate this viewpoint by showing that test performance correlates strongly with the distance in data distributions between the training and test sets. Complementary to the popular belief that more data is always better, our results highlight the utility of also choosing a training data distribution that is close to the test data distribution when the learned function is not invariant to such changes.""","""The authors discuss how to predict generalization gaps. Reviews are mixed, putting the submission in the lower half of this year's submissions. I also would have liked to see a comparison with other divergence metrics, for example, L1, MMD, H-distance, discrepancy distance, and learned representations (e.g., BERT, Laser, etc., for language). Without this, the empirical evaluation of FD is a bit weak. Also, the obvious next step would be trying to minimize FD in the context of domain adaptation, and the question is whether this shouldn't already be part of the paper. Suggestions: The Amazon reviews are time-stamped, enabling you to run experiments with drift over time. See [0] for an example. [0] pseudo-url""" 862,"""Siamese Attention Networks""",[],"""Attention operators have been widely applied to data of various orders and dimensions, such as texts, images, and videos. One challenge of applying attention operators is the excessive use of computational resources. This is due to the use of the dot product and softmax operators when computing similarity scores. In this work, we propose the Siamese similarity function, which uses a feed-forward network to compute similarity scores. This results in the Siamese attention operator (SAO). In particular, SAO leads to a dramatic reduction in required computational resources. Experimental results show that our SAO can save 94% of memory usage and speed up the computation by a factor of 58 compared to the regular attention operator. The computational advantage of SAO is even larger on higher-order and higher-dimensional data. Results on image classification and restoration tasks demonstrate that networks with SAOs are as effective as models with the regular attention operator, while significantly outperforming those without attention operators.""","""The submission presents a Siamese attention operator that lowers the computational costs of attention operators for applications such as image recognition. The reviews are split. R1 posted significant concerns with the content of the submission. The concerns remain after the authors' responses and revision. One of the concerns is the apparent dual submission with ""Kronecker Attention Networks"". The AC agrees with these concerns and recommends rejecting the submission.""" 863,"""ASYNCHRONOUS MULTI-AGENT GENERATIVE ADVERSARIAL IMITATION LEARNING""","['Multi-agent', 'Imitation Learning', 'Inverse Reinforcement Learning']","""Imitation learning aims to inversely learn a policy from expert demonstrations, which has been extensively studied in the literature for both the single-agent setting with a Markov decision process (MDP) model and the multi-agent setting with a Markov game (MG) model. However, existing approaches for general multi-agent Markov games are not applicable to multi-agent extensive Markov games, where agents make asynchronous decisions following a certain order, rather than simultaneous decisions. 
We propose a novel framework for asynchronous multi-agent generative adversarial imitation learning (AMAGAIL) under general extensive Markov game settings, and the learned expert policies are proven to guarantee subgame perfect equilibrium (SPE), a more general and stronger equilibrium than Nash equilibrium (NE). The experimental results demonstrate that, compared to state-of-the-art baselines, our AMAGAIL model can better infer the policy of each expert agent using their demonstration data collected from asynchronous decision-making scenarios (i.e., extensive Markov games).""","""This paper extends multi-agent imitation learning to extensive-form games. There is a long discussion between reviewer #3 and the authors on the difference between Markov Games (MGs) and Extensive-Form Games (EFGs). The core of the discussion is on whether methods developed under the MG formalism (where agents take actions simultaneously) can naturally be applied to the EFG problem setting (where agents can take actions asynchronously). Despite the long discussion, the authors and reviewer did not come to an agreement on this point. Given that it is a crucial point for determining the significance of the contribution, my decision is to decline the paper. I suggest that the authors add a detailed discussion on why MG methods cannot be applied to EFGs in the way suggested by reviewer #3 in the next version of this work and then resubmit.""" 864,"""ROS-HPL: Robotic Object Search with Hierarchical Policy Learning and Intrinsic-Extrinsic Modeling""","['Robotic Object Search', 'Hierarchical Reinforcement Learning']","""Despite significant progress in Robotic Object Search (ROS) in recent years with deep-reinforcement-learning-based approaches, the sparse reward setting and the lack of interpretability of previous ROS approaches leave much to be desired. We present a novel policy learning approach for ROS, based on hierarchical and interpretable modeling with an intrinsic/extrinsic reward setting, to tackle these two challenges. More specifically, we train the low-level policy by deliberating between an action that achieves an immediate sub-goal and one that is better suited for achieving the final goal. We also introduce a new evaluation metric, namely the extrinsic reward, as a harmonic measure of the object search success rate and the average steps taken. Experiments conducted with multiple settings in the House3D environment show that an agent trained with our model achieves better object search performance (a higher success rate with fewer average steps, measured by SPL: Success weighted by inverse Path Length). In addition, we conduct studies w.r.t. the parameter that controls the weighted overall reward from intrinsic and extrinsic components. The results suggest it is critical to devise a proper trade-off strategy to perform the object search well.""","""This paper introduces a two-level hierarchical reinforcement learning approach, applied to the problem of a robot searching for an object specified by an image. The system incorporates a human-specified subgoal space, and learns low-level policies that balance the intrinsic and extrinsic rewards. The method is tested in simulations against several baselines. The reviewer discussion highlighted strengths and weaknesses of the paper. One strength is the extensive comparisons with alternative approaches on this task. 
The main weakness is that the paper did not adequately distinguish which aspects of the system are generic to HRL and which are particular to robot object search. The paper was not general enough to be understood as a generic HRL method. It also ignores much relevant background knowledge (robot mapping and navigation) if it is intended to be primarily about robot object search. The paper did not convince the reviewers that the proposed method was desirable for either hierarchical reinforcement learning or robot object search. This paper is not ready for publication, as the contribution was not sufficiently clear to the readers. """ 865,"""Weight-space symmetry in neural network loss landscapes revisited""","['Weight-space symmetry', 'neural network landscapes']","""Neural network training depends on the structure of the underlying loss landscape, i.e. local minima, saddle points, flat plateaus, and loss barriers. In relation to the structure of the landscape, we study the permutation symmetry of neurons in each layer of a deep neural network, which gives rise not only to multiple equivalent global minima of the loss function but also to critical points in between partner minima. In a network of $d$ hidden layers with $n$ neurons in layers $l = 1, \ldots, d$, we construct continuous paths between equivalent global minima that lead through a `permutation point' where the input and output weight vectors of two neurons in the same hidden layer $l$ collide and interchange. We show that such permutation points are critical points which lie inside high-dimensional subspaces of equal loss, contributing to the global flatness of the landscape. We also find that a permutation point for the exchange of neurons $i$ and $j$ transits into a flat high-dimensional plateau that enables all $n!$ permutations of neurons in a given layer $l$ at the same loss value. Moreover, we introduce higher-order permutation points by exploiting the hierarchical structure in the loss landscapes of neural networks, and find that the number of $k$-th order permutation points is much larger than the (already huge) number of equivalent global minima -- at least by a polynomial factor of order $k$. In two tasks, we demonstrate numerically with our path-finding method that continuous paths between partner minima exist: first, in a toy network with a single hidden layer on a function approximation task and, second, in a multilayer network on the MNIST task. Our geometric approach yields a lower bound on the number of critical points generated by weight-space symmetries and provides a simple intuitive link between previous theoretical results and numerical observations.""","""After communicating with each reviewer about the rebuttal, there seems to be a consensus that the paper contains a number of interesting ideas, but the motivation for the paper and the relationship to the literature need to be expanded. The reviewers have not changed their scores, and so there is not currently enough support to accept this paper.""" 866,"""Distribution Matching Prototypical Network for Unsupervised Domain Adaptation""","['Deep Learning', 'Unsupervised Domain Adaptation', 'Distribution Modeling']","""State-of-the-art Unsupervised Domain Adaptation (UDA) methods learn transferable features by minimizing the feature distribution discrepancy between the source and target domains. 
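The weight-space permutation symmetry analyzed in record 865 can be verified numerically in a few lines: permuting the hidden units of a one-hidden-layer ReLU network (incoming columns, biases, and outgoing rows together) leaves the computed function, and hence the loss, unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))                    # a batch of inputs
W1, b1 = rng.standard_normal((8, 16)), rng.standard_normal(16)
W2 = rng.standard_normal((16, 3))

def forward(W1, b1, W2):
    return np.maximum(x @ W1 + b1, 0.0) @ W2       # one ReLU hidden layer

perm = rng.permutation(16)                         # relabel hidden neurons
permuted = forward(W1[:, perm], b1[perm], W2[perm, :])
assert np.allclose(forward(W1, b1, W2), permuted)  # identical function
```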
Unlike these methods, which do not model the feature distributions explicitly, in this paper we explore explicit feature distribution modeling for UDA. In particular, we propose the Distribution Matching Prototypical Network (DMPN) to model the deep features from each domain as Gaussian mixture distributions. With explicit feature distribution modeling, we can easily measure the discrepancy between the two domains. In DMPN, we propose two new domain discrepancy losses with probabilistic interpretations. The first one minimizes the distances between the corresponding Gaussian component means of the source and target data. The second one minimizes the pseudo negative log-likelihood of generating the target features from the source feature distribution. To learn both discriminative and domain-invariant features, DMPN is trained by minimizing the classification loss on the labeled source data and the domain discrepancy losses together. Extensive experiments are conducted over two UDA tasks. Our approach outperforms state-of-the-art approaches by a large margin on the Digits Image transfer task. More remarkably, DMPN obtains a mean accuracy of 81.4% on the VisDA 2017 dataset. The hyper-parameter sensitivity analysis shows that our approach is robust w.r.t. hyper-parameter changes.""","""This paper addresses the problem of unsupervised domain adaptation and proposes explicit modeling of the source and target feature distributions to aid in cross-domain alignment. The reviewers all recommended rejection of this work. Though they all understood the paper's position of explicit feature distribution modeling, there was a lack of understanding as to why this explicit modeling should be superior to the common implicit modeling done in the related literature. Since some reviewers raised the concern that the empirical performance of the proposed approach was only marginally better than competing methods, this experimental evidence alone was not sufficient justification for the explicit modeling. There was also a secondary concern about whether the two proposed loss functions were simultaneously necessary. Overall, after reading the reviewers' and authors' comments, the AC recommends this paper not be accepted. """ 867,"""Beyond GANs: Transforming without a Target Distribution""","['GAN', 'domain transfer', 'computational biology', 'latent space manipulations']","""While generative neural networks can learn to transform a specific input dataset into a specific target dataset, they require having just such a paired set of input/output datasets. For instance, to fool the discriminator, a generative adversarial network (GAN) exclusively trained to transform images of black-haired *men* to blond-haired *men* would need to change gender-related characteristics as well as hair color when given images of black-haired *women* as input. This is problematic, as often it is possible to obtain *a* pair of (source, target) distributions but then have a second source distribution where the target distribution is unknown. The computational challenge is that generative models are good at generation within the manifold of the data that they are trained on. However, generating new samples outside of the manifold or extrapolating ""out-of-sample"" is a much harder problem that has been less well studied. To address this, we introduce a technique called *neuron editing* that learns how neurons encode an edit for a particular transformation in a latent space. 
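Record 866's first discrepancy loss (matching corresponding Gaussian component means) can be sketched as below; this assumes hard pseudo-labels on the target and that every class occurs in both mini-batches, whereas the paper's exact assignment may be soft or differ in other details.

```python
import torch

def component_mean_loss(src_feat, src_labels, tgt_feat, tgt_pseudo, n_classes):
    """Squared distance between corresponding class (Gaussian component)
    means of source and target features; a sketch, not the authors' code."""
    loss = src_feat.new_zeros(())
    for c in range(n_classes):
        mu_s = src_feat[src_labels == c].mean(dim=0)   # source class mean
        mu_t = tgt_feat[tgt_pseudo == c].mean(dim=0)   # target class mean
        loss = loss + ((mu_s - mu_t) ** 2).sum()
    return loss
```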
We use an autoencoder to decompose the variation within the dataset into activations of different neurons and generate transformed data by defining an editing transformation on those neurons. By performing the transformation in a trained latent space, we encode fairly complex and non-linear transformations of the data with much simpler distribution shifts in the neurons' activations. Our technique is general and works on a wide variety of data domains and applications. We first demonstrate it on image transformations and then move to our two main biological applications: removal of batch artifacts representing unwanted noise and modeling the effect of drug treatments to predict synergy between drugs.""","""This paper presents a new generative modeling approach to transform between data domains via a neuron editing technique. The authors address the scenario of source-to-target domain translation that can be applied to a new source domain. While the reviewers acknowledged that the idea of neuron editing is interesting, they have raised several concerns that were viewed by the AC as critical issues: (1) given the progress that has been made in the field, an empirical comparison with SOTA GAN models is required to assess the benefits/competitiveness of the proposed approach -- see R1's comments, also [StarGAN by Choi et al., CVPR 2018], (2) the literature review is incomplete and requires a major revision -- see R1's and R3's suggestions, also [CYCADA by Hoffman et al., ICML 2018], (3) presentation clarity -- see R1's and R2's comments. The AC suggests that, in its current state, the manuscript is not ready for publication. We hope the detailed reviews are useful for improving and revising the paper. """ 868,"""Detecting Noisy Training Data with Loss Curves""","['Deep learning', 'noisy data', 'robust training']","""This paper introduces a new method to discover mislabeled training samples and to mitigate their impact on the training process of deep networks. At the heart of our algorithm lies the Area Under the Loss (AUL) statistic, which can be easily computed for each sample in the training set. We show that the AUL can use training dynamics to differentiate between (clean) samples that benefit from generalization and (mislabeled) samples that need to be memorized. We demonstrate that the estimated AUL score conditioned on clean vs. noisy is approximately Gaussian distributed and can be well estimated with a simple Gaussian Mixture Model (GMM). The resulting GMM provides us with mixing coefficients that reveal the percentage of mislabeled samples in a data set as well as probability estimates that each individual training sample is mislabeled. We show that these probability estimates can be used to down-weight suspicious training samples and successfully alleviate the damaging impact of label noise. We demonstrate on the CIFAR10/100 datasets that our proposed approach is significantly more accurate and consistent across model architectures than all prior work.""","""The paper proposes a new, stable metric, called Area Under the Loss (AUL), to recognize mislabeled samples in a dataset based on the different behavior of their loss function over time. The paper builds on earlier observations (e.g. by Shen & Sanghavi) to propose this new metric as a concrete solution to the mislabeling problem. Although the reviewers remarked that this is an interesting approach to a relevant problem, they expressed several concerns regarding this paper. 
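The AUL statistic of record 868 is straightforward to compute from recorded training dynamics; a minimal sketch, assuming the per-sample loss is logged after every epoch:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mislabel_probabilities(per_epoch_losses):
    """per_epoch_losses: (n_samples, n_epochs) array of each sample's
    training loss after every epoch. AUL = area under the loss curve;
    a 2-component GMM on AUL then yields P(mislabeled) per sample."""
    aul = per_epoch_losses.sum(axis=1, keepdims=True)   # (n, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(aul)
    noisy = int(np.argmax(gmm.means_.ravel()))          # high-AUL component
    return gmm.predict_proba(aul)[:, noisy]
```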
Two of them are whether the hardness of a sample would also result in high AUL scores, and whether the results hold up under realistic mislabelings rather than artificial label swapping/replacing. The authors did anecdotally suggest that neither of these effects has a major impact on the results. Still, I think a precise analysis of these effects would be critically important to have in the paper, especially since there might be a complex interaction between the 'hardness' of samples and mislabelings (an MNIST 1 that looks like a 7 might be mislabeled sooner than a 1 that doesn't look like a 7). The authors show some examples of 'real' mislabeled sentences recognized by the model, but it is still unclear whether downweighting these helped final test set performance in this case. Because of these issues, I cannot recommend acceptance of the paper in its current state. However, based on the relevance of the problem tackled and the potential for significant impact, I do think this could be a great paper in a next iteration. """ 869,"""BETANAS: Balanced Training and selective drop for Neural Architecture Search""","['neural architecture search', 'weight sharing', 'auto machine learning', 'deep learning', 'CNN']","""Automatic neural architecture search techniques are becoming increasingly important in the machine learning area. In particular, weight-sharing methods have shown remarkable potential for finding good network architectures with few computational resources. However, existing weight-sharing methods mainly suffer from limitations in their search strategies: these methods either uniformly train all network paths to convergence, which introduces conflicts between branches and wastes a large amount of computation on unpromising candidates, or selectively train branches with different frequencies, which leads to unfair evaluation and comparison among paths. To address these issues, we propose a novel neural architecture search method with a balanced training strategy to ensure fair comparisons and a selective drop mechanism to reduce conflicts among candidate paths. The experimental results show that our proposed method can achieve a leading performance of 79.0% on ImageNet under mobile settings, which outperforms other state-of-the-art methods in both accuracy and efficiency.""","""This paper proposes a neural architecture search method that uses balanced sampling of architectures from the one-shot model and drops operators whose importance drops below a certain weight. The reviewers agreed that the paper's approach is intuitive, but the main points of criticism were: - Lack of good baselines - Potentially unfair comparison, not using the same training pipeline - Lack of available code and thus of reproducibility. (The authors promised code in response, which is much appreciated. If the open-sourcing process is completed in time for the next version of the paper, I encourage the authors to include an anonymized version of the code in the submission to avoid this criticism.) The reviewers appreciated the authors' rebuttal, but it did not suffice for them to change their ratings. I agree with the reviewers that this work may be a solid contribution, but that additional evaluation is needed to demonstrate this. 
I therefore recommend rejection and encourage resubmission to a different venue after addressing the issues pointed out by the reviewers.""" 870,"""Dynamic Model Pruning with Feedback""","['network pruning', 'dynamic reparameterization', 'model compression']","""Deep neural networks often have millions of parameters. This can hinder their deployment to low-end devices, not only due to high memory requirements but also because of increased latency at inference. We propose a novel model compression method that generates a sparse trained model without additional overhead: by (i) allowing dynamic allocation of the sparsity pattern and (ii) incorporating a feedback signal to reactivate prematurely pruned weights, we obtain a performant sparse model in a single training pass (retraining is not needed, but can further improve the performance). We evaluate the method on CIFAR-10 and ImageNet, and show that the obtained sparse models can reach the state-of-the-art performance of dense models and, further, that their performance surpasses all previously proposed pruning schemes (that come without feedback mechanisms).""","""The paper proposes a new, simple method for sparsifying deep neural networks. It uses a temporary pruned model to improve pruning masks via SGD, eventually applying the SGD steps to the dense model. The paper is well written and shows SOTA results compared to prior work. The reviewers unanimously recommend accepting this work, based on the simplicity of the proposed method and the experimental results. I recommend accepting this paper; it makes a simple yet effective contribution to compressing large-scale models. """ 871,"""Differentially Private Meta-Learning""","['Differential Privacy', 'Meta-Learning', 'Federated Learning']","""Parameter-transfer is a well-known and versatile approach for meta-learning, with applications including few-shot learning, federated learning with personalization, and reinforcement learning. However, parameter-transfer algorithms often require sharing models that have been trained on the samples from specific tasks, thus leaving the task-owners susceptible to breaches of privacy. We conduct the first formal study of privacy in this setting and formalize the notion of task-global differential privacy as a practical relaxation of more commonly studied threat models. We then propose a new differentially private algorithm for gradient-based parameter transfer that not only satisfies this privacy requirement but also retains provable transfer learning guarantees in convex settings. Empirically, we apply our analysis to the problems of federated learning with personalization and few-shot classification, showing that allowing the relaxation to task-global privacy from the more commonly studied notion of local privacy leads to dramatically increased performance in recurrent neural language modeling and image classification.""","""Thanks to the authors for the submission. This paper studies differentially private meta-learning, where the algorithm needs to use information across several learning tasks while protecting the privacy of the data set from each task. The reviewers agree that this is a natural problem and the paper presents a solution that is essentially an adaptation of differentially private SGD. There are several places the paper can improve. For the experimental evaluation, the authors should include a wider range of epsilon values in order to investigate the accuracy-privacy trade-off. 
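The feedback mechanism of record 870 (gradients computed through the pruned model, updates applied to the dense weights, mask recomputed so pruned weights can return) can be sketched roughly as follows; `dpf_step` and its arguments are illustrative names, not the authors' implementation.

```python
import torch

def dpf_step(dense_params, grads, lr, sparsity):
    """grads are assumed to come from a forward/backward pass through the
    masked (pruned) model; the SGD step is applied to the dense weights,
    and the magnitude mask is then recomputed, so prematurely pruned
    weights can be reactivated."""
    masks = []
    with torch.no_grad():
        for w, g in zip(dense_params, grads):
            w -= lr * g                               # update the dense copy
            k = int(w.numel() * sparsity)             # weights to prune
            thresh = w.abs().flatten().kthvalue(k).values if k > 0 else -1.0
            masks.append((w.abs() > thresh).float())  # keep largest magnitudes
    return masks  # the next forward pass uses w * mask
```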
The authors should also consider expanding the existing experiments with other datasets. """ 872,"""Learn to Explain Efficiently via Neural Logic Inductive Learning""","['inductive logic programming', 'interpretability', 'attention']","""The capability of making interpretable and self-explanatory decisions is essential for developing responsible machine learning systems. In this work, we study the learning-to-explain problem in the scope of inductive logic programming (ILP). We propose Neural Logic Inductive Learning (NLIL), an efficient differentiable ILP framework that learns first-order logic rules that can explain the patterns in the data. In experiments, compared with the state-of-the-art models, we find NLIL is able to search for rules that are 10x longer while remaining 3x faster. We also show that NLIL can scale to large image datasets, i.e., Visual Genome, with 1M entities.""","""This paper proposes a differentiable inductive logic programming method in the vein of recent work on the topic, with efficiency-focussed improvements. Thanks to the very detailed comments and discussion with the reviewers, my view is that the paper is acceptable to ICLR. I am mindful of the reasons for reluctance from reviewer #3; while these are not enough to reject the paper, I would strongly, *STRONGLY* advise the authors to consider adding a short section providing a comparison to traditional ILP methods and NLM in their camera ready.""" 873,"""Generating Robust Audio Adversarial Examples using Iterative Proportional Clipping""","['audio adversarial examples', 'attack', 'machine learning']","""Audio adversarial examples, imperceptible to humans, have been constructed to attack automatic speech recognition (ASR) systems. However, the adversarial examples generated by existing approaches usually involve notable noise, especially during periods of silence and pauses, which may lead to the detection of such attacks. This paper proposes a new approach to generating adversarial audio using Iterative Proportional Clipping (IPC), which exploits temporal dependency in the original audio to significantly limit human-perceptible noise. Specifically, in every iteration of optimization, we use a backpropagation model to learn the raw perturbation on the original audio to construct our clipping. We then impose a constraint on the perturbation at the positions with lower sound intensity across the time domain to eliminate the perceptible noise during the silent periods or pauses. IPC preserves the linear proportionality between the original audio and the perturbed one to maintain the temporal dependency. We show that the proposed approach can successfully attack the latest state-of-the-art ASR model Wav2letter+, and only requires a few minutes to generate an audio adversarial example. Experimental results also demonstrate that our approach succeeds in preserving temporal dependency and can bypass temporal-dependency-based defense mechanisms.""","""This paper proposes a method called iterative proportional clipping (IPC) for generating adversarial audio examples that are imperceptible to humans. The efficiency of the method is demonstrated by generating adversarial examples to attack the Wav2letter+ model. 
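The proportional clipping step at the core of record 873 can be sketched in a few lines; `ratio` is an illustrative hyperparameter bounding the perturbation relative to the local signal magnitude, not a value taken from the paper.

```python
import numpy as np

def proportional_clip(audio, perturbation, ratio=0.05):
    """Clip the learned perturbation to a fraction of the local signal
    magnitude: near-silent samples admit near-zero perturbation, keeping
    the perturbed audio roughly proportional to the original."""
    bound = ratio * np.abs(audio)
    return audio + np.clip(perturbation, -bound, bound)
```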
Overall, the reviewers found the work interesting but somewhat incremental, with the analysis of the method and generated samples incomplete, and I'm thus recommending rejection.""" 874,"""Avoiding Negative Side-Effects and Promoting Safe Exploration with Imaginative Planning""","['Reinforcement Learning', 'AI-Safety', 'Model-Based Reinforcement Learning', 'Safe-Exploration']",""" With the recent proliferation of reinforcement learning (RL) agents for solving real-world tasks, safety emerges as a necessary ingredient for their successful application. In this paper, we focus on ensuring the safety of the agent while making sure that the agent does not cause any unnecessary disruptions to its environment. The current approaches to this problem, such as manually constraining the agent or adding a safety penalty to the reward function, can introduce bad incentives. In complex domains, these approaches are simply intractable, as they require knowing a priori all the possible unsafe scenarios an agent could encounter. We propose a model-based approach to safety that allows the agent to look into the future and be aware of the future consequences of its actions. We learn the transition dynamics of the environment and generate a directed graph called the imaginative module. This graph encapsulates all possible trajectories that can be followed by the agent, allowing the agent to efficiently traverse through the imagined environment without ever taking any action in reality. A baseline state, which can either represent a safe or an unsafe state (based on whichever is easier to define), is taken as a human input, and the imaginative module is used to predict whether the current actions of the agent can cause it to end up in dangerous states in the future. Our imaginative module can be seen as a ``plug-and-play'' approach to ensuring safety, as it is compatible with any existing RL algorithm and any task with a discrete action space. Our method induces the agent to act safely while learning to solve the task. We experimentally validate our proposal on two gridworld environments and a self-driving car simulator, demonstrating that our approach to safety visits unsafe states significantly less frequently than a baseline.""","""This paper tackles the problem of safe exploration in RL. The proposed approach uses an imaginative module to construct a connectivity graph between all states using forward predictions. The idea then consists in using this graph to plan a trajectory which avoids states labelled as ""unsafe"". Several concerns were raised, and the authors did not provide any rebuttal. A major point is the assumption that the approach has access to which states are unsafe, which is either unreasonable in practice or makes the problem much simpler. Another major point is the uniform data collection about every state-action pair. This can be really unsafe and defeats the purpose of safe exploration following this phase. These questions may be due to a misunderstanding, indicating that the paper should be clarified, as requested by the reviewers. Finally, the experiments would benefit from additional details in order to be correctly understood. All reviewers agree that this paper should be rejected. Hence, I recommend reject.""" 875,"""Permutation Equivariant Models for Compositional Generalization in Language""","['Compositionality', 'Permutation Equivariance', 'Language Processing']","""Humans understand novel sentences by composing meanings and roles of core language components. 
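One way to read record 874's use of its imaginative module is as a bounded reachability query over the learned transition graph; a minimal sketch, assuming the graph is stored as a successor dictionary, is below.

```python
from collections import deque

def can_reach_unsafe(successors, start, unsafe, horizon):
    """Breadth-first search over a learned transition graph: is any unsafe
    state reachable from `start` within `horizon` steps?"""
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        state, depth = frontier.popleft()
        if state in unsafe:
            return True
        if depth < horizon:
            for nxt in successors.get(state, ()):   # successors under any action
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
    return False
```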
In contrast, neural network models for natural language modeling fail when such compositional generalization is required. The main contribution of this paper is to hypothesize that language compositionality is a form of group equivariance. Based on this hypothesis, we propose a set of tools for constructing equivariant sequence-to-sequence models. Through a variety of experiments on the SCAN tasks, we analyze the behavior of existing models under the lens of equivariance, and demonstrate that our equivariant architecture is able to achieve the type of compositional generalization required in human language understanding.""","""This paper proposes an equivariant sequence-to-sequence model for dealing with compositionality of language. They show these models are better at SCAN tasks. Reviewers expressed two major concerns: 1) limited clarity of Section 4, which makes the paper difficult to understand; 2) whether this could generalize to more complex types of compositionality. The authors responded by revising Section 4 and answering the question of generalization. While the reviewers are not 100% satisfied, they agree there is enough novel contribution in this paper. I thank the authors for submitting and look forward to seeing a clearer revision in the conference.""" 876,"""A shallow feature extraction network with a large receptive field for stereo matching tasks""","['stereo matching', 'feature extraction network', 'convolution neural network', 'receptive field']","""Stereo matching is one of the important basic tasks in the computer vision field. In recent years, stereo matching algorithms based on deep learning have achieved excellent performance and become the mainstream research direction. Existing algorithms generally use deep convolutional neural networks (DCNNs) to extract more abstract semantic information, but we believe that the detailed information of the spatial structure is more important for stereo matching tasks. Based on this point of view, this paper proposes a shallow feature extraction network with a large receptive field. The network consists of three parts: a primary feature extraction module, an atrous spatial pyramid pooling (ASPP) module and a feature fusion module. The primary feature extraction network contains only three convolution layers. This network utilizes the basic feature extraction ability of the shallow network to extract and retain the detailed information of the spatial structure. In this paper, the dilated convolution and atrous spatial pyramid pooling (ASPP) module is introduced to increase the size of the receptive field. In addition, a feature fusion module is designed, which integrates the feature maps with multiscale receptive fields and mutually complements the feature information of different scales. We replaced the feature extraction part of the existing stereo matching algorithms with our shallow feature extraction network, and achieved state-of-the-art performance on the KITTI 2015 dataset. Compared with the reference network, the number of parameters is reduced by 42%, and the matching accuracy is improved by 1.9%.""","""The paper proposed the use of shallow layers with large receptive fields for feature extraction in stereo matching tasks. It showed on the KITTI 2015 dataset that this method leads to a large reduction in model size while maintaining comparable performance.
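The record-876 abstract pins down a concrete architecture (three plain convolutions, ASPP-style dilated branches, and a fusion module), so a small PyTorch sketch may help; the channel counts and dilation rates below are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class ShallowASPPExtractor(nn.Module):
    """Sketch of record 876's design: a three-layer primary extractor,
    parallel dilated (atrous) branches, and a 1x1 fusion convolution."""
    def __init__(self, in_ch=3, ch=32, rates=(1, 6, 12)):
        super().__init__()
        self.primary = nn.Sequential(                       # only three conv layers
            nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.aspp = nn.ModuleList(
            [nn.Conv2d(ch, ch, 3, padding=r, dilation=r) for r in rates]
        )
        self.fuse = nn.Conv2d(ch * len(rates), ch, 1)       # feature fusion module

    def forward(self, x):
        f = self.primary(x)
        branches = [branch(f) for branch in self.aspp]      # multiscale receptive fields
        return self.fuse(torch.cat(branches, dim=1))
```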
The main concern with this paper is the lack of technical contributions: * The task of stereo matching is a very specialized one; simply presenting the model size reduction and performance is not interesting to general readers. Adding more analysis that helps explain why the proposed method helps in this particular task, and for what kinds of tasks a shallow feature extractor is preferred over a deeper one, would let the paper address a much wider audience. * The discussion of related work is not thorough enough, lacking an analysis of the pros and cons of the different methods.""" 877,"""Training individually fair ML models with sensitive subspace robustness""","['fairness', 'adversarial robustness']","""We consider training machine learning models that are fair in the sense that their performance is invariant under certain sensitive perturbations to the inputs. For example, the performance of a resume screening system should be invariant under changes to the gender and/or ethnicity of the applicant. We formalize this notion of algorithmic fairness as a variant of individual fairness and develop a distributionally robust optimization approach to enforce it during training. We also demonstrate the effectiveness of the approach on two ML tasks that are susceptible to gender and racial biases. ""","""The paper addresses the individual fairness scenario (treating similar users similarly) and proposes a new definition of algorithmic fairness that is based on the idea of robustness, i.e. by perturbing the inputs (while keeping them close with respect to the distance function), the loss of the model cannot be significantly increased. All reviewers and the AC agree that this work is clearly of interest to ICLR; however, the reviewers have noted the following potential weaknesses: (1) presentation clarity -- see R3's detailed suggestions, e.g. the comparison to Dwork et al.; see R2's comments on how to improve; (2) empirical evaluations -- see R1's question about using more complex models; see R3's question on the usefulness of the word embeddings. Pleased to report that, based on the authors' response with extra experiments and explanations, R3 has raised the score to weak accept. All reviewers and the AC agree that the most crucial concerns have been addressed in the rebuttal, and the paper could be accepted - congratulations to the authors! The authors are strongly urged to improve presentation clarity and to include the supporting empirical evidence when preparing the final revision.""" 878,"""Sparse Transformer: Concentrated Attention Through Explicit Selection""","['Attention', 'Transformer', 'Machine Translation', 'Natural Language Processing', 'Sparse', 'Sequence to sequence learning']","""The self-attention-based Transformer has demonstrated state-of-the-art performance in a number of natural language processing tasks. Self-attention is able to model long-term dependencies, but it may suffer from the extraction of irrelevant information in the context. To tackle the problem, we propose a novel model called Sparse Transformer. Sparse Transformer is able to improve the concentration of attention on the global context through an explicit selection of the most relevant segments. Extensive experimental results on a series of natural language processing tasks, including neural machine translation, image captioning, and language modeling, all demonstrate the advantages of Sparse Transformer in model performance.
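The "explicit selection of the most relevant segments" in the record-878 abstract above is typically implemented as a top-k mask on the attention logits; a hedged PyTorch sketch follows (the value of k and the masking details are assumptions):

```python
import torch

def topk_attention(scores, k):
    """Sparse attention in the spirit of record 878: keep only the k largest
    logits per query and mask the rest to -inf before the softmax."""
    kth = scores.topk(k, dim=-1).values[..., -1:]             # k-th largest per row
    masked = scores.masked_fill(scores < kth, float("-inf"))  # drop irrelevant segments
    return torch.softmax(masked, dim=-1)                      # attention over k segments
```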
Sparse Transformer reaches state-of-the-art performance in IWSLT 2015 English-to-Vietnamese translation and IWSLT 2014 German-to-English translation. In addition, we conduct qualitative analysis to account for Sparse Transformer's superior performance. ""","""The paper proposes a variant of Sparse Transformer where only the top-K activations are kept in the softmax. The resulting transformer model is applied to NMT, image caption generation and language modeling, where it outperformed a vanilla Transformer. While the proposed idea is simple, easy to implement, and does not add additional computational or memory cost, the reviewers raised several concerns in the discussion phase, including: several baselines missing from the tables; incomplete experimental details; incorrect/misleading selection of the best performing model in tables of results (e.g. in Table 1, the authors boldface their results on En-De (29.4) and De-En (35.6), but in fact the best performance on these is achieved by competing models, 29.7 and 35.7 respectively. The caption claims their model ""achieves the state-of-the-art performances in En-Vi and De-En"" but this is not true for De-En (albeit by 0.1). In Table 3, they boldface their result of 1.05 but the best result is 1.02; the text says their model beats the Transf-XL ""with an advantage"" (of 0.01) but does not point out that the advantage of Adaptive-span over their model is 3 times as large (0.03)). This prevents me from recommending acceptance of this paper in its current form. I strongly encourage the authors to address these concerns in a future submission.""" 879,"""Simultaneous Classification and Out-of-Distribution Detection Using Deep Neural Networks""","['Out-of-Distribution Detection', 'OOD detection', 'Outlier Exposure', 'Classification', 'Open-World Classification', 'Anomaly Detection', 'Novelty Detection', 'Calibration', 'Neural Networks']","""Deep neural networks have achieved great success in classification tasks in recent years. However, one major problem on the path towards artificial intelligence is the inability of neural networks to accurately detect samples from novel class distributions, and therefore most existing classification algorithms assume that all classes are known prior to the training stage. In this work, we propose a methodology for training a neural network that allows it to efficiently detect out-of-distribution (OOD) examples without compromising much of its classification accuracy on the test examples from known classes. Based on the Outlier Exposure (OE) technique, we propose a novel loss function that achieves state-of-the-art results in out-of-distribution detection with OE on both image and text classification tasks. Additionally, the way this method was constructed makes it suitable for training any classification algorithm that is based on Maximum Likelihood methods.""","""The paper proposes a method for out-of-distribution (OOD) detection for neural network classifiers. The reviewers raised several concerns about novelty, choice of baselines and the experimental evaluation. While the author rebuttal addressed some of these concerns, I think the paper is still not ready for acceptance as is.
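The record-879 abstract builds on Outlier Exposure but does not spell out its loss; as a hedged sketch, the standard OE-style objective it starts from can be written as cross-entropy on known classes plus a uniformity term on outliers (the weight lam and the exact uniformity term are assumptions, and the paper's novel loss differs):

```python
import torch
import torch.nn.functional as F

def oe_style_loss(logits_in, labels_in, logits_out, lam=0.5):
    """Outlier-Exposure-style objective in the spirit of record 879:
    standard cross-entropy on in-distribution data, plus a term pushing
    predictions on outlier data toward the uniform distribution."""
    ce = F.cross_entropy(logits_in, labels_in)
    # cross-entropy to the uniform distribution, up to an additive constant
    uniformity = -F.log_softmax(logits_out, dim=-1).mean()
    return ce + lam * uniformity
```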
I encourage the authors to revise the paper and resubmit to a different venue.""" 880,"""ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks""","['Compact Network Training', 'Linear Expansion', 'Over-parameterization', 'Knowledge Transfer']","""In this paper, we introduce a novel approach to training a given compact network. To this end, we build upon over-parameterization, which typically improves both optimization and generalization in neural network training, while being unnecessary at inference time. We propose to expand each linear layer of the compact network into multiple linear layers, without adding any nonlinearity. As such, the resulting expanded network can benefit from over-parameterization during training but can be compressed back to the compact one algebraically at inference. As evidenced by our experiments, this consistently outperforms training the compact network from scratch and knowledge distillation using a teacher. In this context, we introduce several expansion strategies, together with an initialization scheme, and demonstrate the benefits of our ExpandNets on several tasks, including image classification, object detection, and semantic segmentation. ""","""The paper develops linear over-parameterization methods to improve training of small neural network models. This is compared to training from scratch and other knowledge distillation methods. Reviewer 1 found the paper to be clear with good analysis, and raised concerns on the generality and extensiveness of the experimental work. Reviewer 2 raised concerns about the correctness of the approach and laid out several other possibilities. The authors conducted several other experiments and responded to all the feedback from the reviewers, although there was no final consensus on the scores. The review process has made this a better paper and it is of interest to the community. The paper demonstrates all the features of a good paper, but due to a large number of strong papers, was not accepted at this time.""" 881,"""Unsupervised Learning of Automotive 3D Crash Simulations using LSTMs""","['LSTM', 'surface data', 'geometric deep learning', 'numerical simulation']","""Long short-term memory (LSTM) networks can exhibit temporal dynamic behavior through feedback connections and seem a natural choice for learning sequences of 3D meshes. We introduce an approach for dynamic mesh representations as used for numerical simulations of car crashes. To bypass the complication of using 3D meshes, we transform the surface mesh sequences into spectral descriptors that efficiently encode the shape. A two-branch LSTM-based network architecture is chosen to learn the representations and dynamics of the crash during the simulation. The architecture is based on unsupervised video prediction by an LSTM without any convolutional layer. It uses an encoder LSTM to map an input sequence into a fixed-length vector representation. On this representation, one decoder LSTM performs the reconstruction of the input sequence, while the other decoder LSTM predicts the future behavior by receiving initial steps of the sequence as seed. The spatio-temporal error behavior of the model is analysed to study how well the model can extrapolate the learned spectral descriptors into the future, that is, how well it has learned to represent the underlying dynamical structural mechanics.
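The key algebraic fact behind record 880's ExpandNets (stacked linear layers collapse to one layer at inference) is easy to state in code; this NumPy sketch uses fully connected layers for simplicity, whereas the paper also covers convolutional expansions:

```python
import numpy as np

def collapse_linear(w1, b1, w2, b2):
    """Two stacked linear layers y = W2 (W1 x + b1) + b2 collapse into a
    single equivalent layer (record 880's compression-at-inference step).

    Shapes: w1 (h, d), b1 (h,), w2 (o, h), b2 (o,)."""
    w = w2 @ w1            # combined weight, shape (o, d)
    b = w2 @ b1 + b2       # combined bias, shape (o,)
    return w, b

# sanity check of the equivalence on any input x:
# w, b = collapse_linear(w1, b1, w2, b2)
# assert np.allclose(w2 @ (w1 @ x + b1) + b2, w @ x + b)
```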
Considering that only a few training examples are available, which is the typical case for numerical simulations, the network performs very well.""",""" The paper proposes to train LSTMs to encode car crashes (a temporal sequence of 3D mesh representations). Decoder LSTMs can then be used to 1) reconstruct the input or 2) predict the future sequence of structural geometry. The authors propose to use a spectral feature representation based on prior work as input into the encoding LSTM. The main contribution of the paper (based on the author response) is the introduction of this spectral feature representation to the ML community. The authors used a single 3D truck model to generate 205 simulations, of which 105 were used for training and 100 for testing. The authors presented reconstruction errors and a t-SNE visualization of the LSTM's reconstruction weights. Discussion Summary: The paper got three weak rejects. The response provided by the authors failed to convince any of the reviewers to adjust their scores. The authors did not provide a revision based on the reviewer comments. Overall, the reviewers found the problem statement to be interesting. However, they had concerns about the following: 1. It is unclear what the main technical contribution of the work is. Several of the reviewers pointed out the lack of technical novelty. From the writing, it is unclear if the proposed spectral feature representation is taken directly from prior work or whether there was some additional innovation in this submission. Based on the author response, it seems the proposed feature representation is taken directly from prior work, as the authors themselves acknowledge that the submission is taking two known ideas and combining them. This can be made more explicit in the paper itself. 2. Lack of comparison with existing work and experimental analysis. There is no comparison against existing work on predicting 3D structure deformation over time. While the proposed representation is interesting, there is no comparison with other methods or other alternative representations. Without any comparisons it is difficult to judge how the reconstruction error corresponds to actual reconstruction quality. How much error is acceptable? The submission also fails to elucidate when the proposed representation should be used. Is it better than alternative representations (use the 3D mesh directly? use point clouds? use alternate basis functions?) 3. What is being learned by the model? R3 pointed out that the authors mention that the model is trained in just half an hour, and questioned whether the dynamics function is trivial to learn and noted that only two parts of the 3D structure are analyzed. The authors responded that the ""coarse"" dynamics are easier to learn than the ""fine""-scale dynamics. Is what is learned by the model sufficient? How well would a model that just modeled the car as a rigid object and predicted the position do? The lack of comparison against baselines and alternative methods/representations makes it difficult to judge the usefulness of the representation/approach that is presented. 4. The paper also has minor typos. Page 5: ""treat the for beams"" --> ""treat the four beams"" Page 7: ""marrked"" --> ""marked"" Overall, the paper addresses an interesting problem domain and introduces an interesting representation to the ML community, but fails to provide a proper experimental analysis showing how the representation compares to alternatives.
Since the paper does not claim the novelty of the representation as its contribution, it is essential that it performs a thorough investigation of the task and performs empirical studies comparing the proposed representation/method against baselines and alternatives.""" 882,"""Hypermodels for Exploration""","['exploration', 'hypermodel', 'reinforcement learning']","""We study the use of hypermodels to represent epistemic uncertainty and guide exploration. This generalizes and extends the use of ensembles to approximate Thompson sampling. The computational cost of training an ensemble grows with its size, and as such, prior work has typically been limited to ensembles with tens of elements. We show that alternative hypermodels can enjoy dramatic efficiency gains, enabling behavior that would otherwise require hundreds or thousands of elements, and even succeed in situations where ensemble methods fail to learn regardless of size. This allows more accurate approximation of Thompson sampling as well as use of more sophisticated exploration schemes. In particular, we consider an approximate form of information-directed sampling and demonstrate performance gains relative to Thompson sampling. As alternatives to ensembles, we consider linear and neural network hypermodels, also known as hypernetworks. We prove that, with neural network base models, a linear hypermodel can represent essentially any distribution over functions, and as such, hypernetworks do not extend what can be represented.""","""This paper considers ensembles of deep learning models in order to quantify their epistemic uncertainty and use this for exploration in RL. The authors first show that limiting the ensemble to a small number of models, which is typically done for computational reasons, can severely limit the approximation of the posterior, which can translate into poor learning behaviours (e.g. over-exploitation). Instead, they propose a general approach based on hypermodels which can achieve the benefits of a large ensemble of models without the computational issues. They perform experiments in the bandit setting supporting their claim. They also provide a theoretical contribution, proving that an arbitrary distribution over functions can be represented by a linear hypermodel. The decision boundary for this paper is unclear given the confidence of the reviewers and their scores. However, the tackled problem is important, and the proposed approach is sound and backed up by experiments. Most of the reviewers' concerns seemed to be addressed by the rebuttal, with the exception of a few missing references, which the authors should really consider adding. I would therefore recommend acceptance.""" 883,"""Compressive Transformers for Long-Range Sequence Modelling""","['memory', 'language modeling', 'transformer', 'compression']","""We present the Compressive Transformer, an attentive sequence model which compresses past memories for long-range sequence learning. We find the Compressive Transformer obtains state-of-the-art language modelling results in the WikiText-103 and Enwik8 benchmarks, achieving 17.1 ppl and 0.97 bpc respectively. We also find it can model high-frequency speech effectively and can be used as a memory mechanism for RL, demonstrated on an object matching task.
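To make record 882's central object concrete, here is a hedged PyTorch sketch of a linear hypermodel: base-model parameters are an affine function of an index z, so sampling z approximates sampling from a posterior over base models (Thompson-sampling style); all sizes, names, and the initialization scale are assumptions:

```python
import torch
import torch.nn as nn

class LinearHypermodel(nn.Module):
    """Sketch of the linear hypermodel idea from record 882: theta = A z + b,
    where z is drawn from a fixed reference distribution (e.g. standard
    normal) and theta parameterizes the base model."""
    def __init__(self, z_dim, n_params):
        super().__init__()
        self.A = nn.Parameter(torch.randn(n_params, z_dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(n_params))

    def forward(self, z):
        return self.A @ z + self.b   # one sampled base-model parameter vector

# usage sketch: theta = hyper(torch.randn(z_dim)) gives one "ensemble member"
```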
To promote the domain of long-range sequence learning, we propose a new open-vocabulary language modelling benchmark derived from books, PG-19.""","""The paper proposes a ""compressive transformer"", an extension of the transformer, that keeps a compressed long-term memory in addition to the fixed-size memory. Both memories can be queried using attention weights. Unlike Transformer-XL, which discards the oldest memories, the authors propose to ""compress"" those memories. The main contribution of this work is that it introduces a model that can handle extremely long sequences. The authors also introduce a new language modeling dataset based on text from Project Gutenberg that has much longer sequences of words than existing datasets. They provide comprehensive experiments comparing different compression strategies and compare against previous methods, showing that this method is able to result in lower word-level perplexity. In addition, the authors also present evaluations on speech and on image sequences for RL. Initially, the paper received weakly positive responses from the reviewers. The reviewers pointed out some clarity issues with details of the method and figures and some questions about design decisions. After rebuttal, all of the reviewers expressed that they were very satisfied with the authors' responses and increased their scores (for a final of 2 accepts and 1 weak accept). The authors have provided a thorough and well-written paper, with comprehensive and convincing experiments. In addition, the ability to model long-range sequences and dependencies is an important problem and the AC agrees that this paper makes a solid contribution in tackling that problem. Thus, acceptance is recommended.""" 884,"""Mint: Matrix-Interleaving for Multi-Task Learning""",['multi-task learning'],"""Deep learning enables training of large and flexible function approximators from scratch at the cost of large amounts of data. Applications of neural networks often consider learning in the context of a single task. However, in many scenarios what we hope to learn is not just a single task, but a model that can be used to solve multiple different tasks. Such multi-task learning settings have the potential to improve data efficiency and generalization by sharing data and representations across tasks. However, in some challenging multi-task learning settings, particularly in reinforcement learning, it is very difficult to learn a single model that can solve all the tasks while realizing data efficiency and performance benefits. Learning each of the tasks independently from scratch can actually perform better in such settings, but it does not benefit from the representation sharing that multi-task learning can potentially provide. In this work, we develop an approach that endows a single model with the ability to represent both extremes: joint training and independent training. To this end, we introduce matrix-interleaving (Mint), a modification to standard neural network models that projects the activations for each task into a different learned subspace, represented by a per-task and per-layer matrix. By learning these matrices jointly with the other model parameters, the optimizer itself can decide how much to share representations between tasks.
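A minimal PyTorch sketch of the per-task, per-layer matrix that the Mint abstract (record 884) describes; the placement of the matrix, the identity initialization, and the layer sizes are assumptions rather than the paper's exact design:

```python
import torch
import torch.nn as nn

class MintLayer(nn.Module):
    """Sketch of matrix interleaving (record 884): a shared linear layer whose
    activations are projected through a learned per-task matrix."""
    def __init__(self, dim, n_tasks):
        super().__init__()
        self.shared = nn.Linear(dim, dim)
        # identity init keeps the model close to plain joint training at start
        self.task_mats = nn.Parameter(
            torch.stack([torch.eye(dim) for _ in range(n_tasks)]))

    def forward(self, x, task_id):
        h = torch.relu(self.shared(x))
        return h @ self.task_mats[task_id]   # task-specific subspace projection
```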
On three challenging multi-task supervised learning and reinforcement learning problems with varying degrees of shared task structure, we find that this model consistently matches or outperforms joint training and independent training, combining the best elements of both.""","""Reviewers put this paper in the lower half and question the theoretical motivation and the experimental design. On the other hand, this seems like an alternative general framework for solving large-scale multi-task learning problems. In the future, I would encourage the authors to evaluate on multi-task benchmarks such as SuperGLUE, decaNLP and C4. Note: It seems there are more similarities with Ruder et al. (2019) [0] than the paper suggests. [0] pseudo-url""" 885,"""Vid2Game: Controllable Characters Extracted from Real-World Videos""",[],"""We extract a controllable model from a video of a person performing a certain activity. The model generates novel image sequences of that person, according to user-defined control signals, typically marking the displacement of the moving body. The generated video can have an arbitrary background, and effectively capture both the dynamics and appearance of the person. The method is based on two networks. The first maps a current pose and a single-instance control signal to the next pose. The second maps the current pose, the new pose, and a given background, to an output frame. Both networks include multiple novelties that enable high-quality performance. This is demonstrated on multiple characters extracted from various videos of dancers and athletes.""","""This paper proposes to extract a character from a video, manually control the character, and render it into the background in real time. The rendered video can have an arbitrary background and capture both the dynamics and appearance of the person. All three reviewers praise the visual quality of the synthesized video, and the paper is well written with extensive details. Some concerns are raised. For example, despite an excellent engineering effort, there are few things the reader would scientifically learn from this paper. Additional ablation studies on each component would also aid understanding of the approach. Given the level of effort, the quality of the results, and the reviewers' comments, the ACs recommend acceptance as a poster.""" 886,"""STABILITY AND CONVERGENCE THEORY FOR LEARNING RESNET: A FULL CHARACTERIZATION""","['ResNet', 'stability', 'convergence theory', 'over-parameterization']","""The ResNet structure has achieved great success since its debut. In this paper, we study the stability of learning ResNet. Specifically, we consider the ResNet block $h_l = \phi(h_{l-1}+\tau\cdot g(h_{l-1}))$, where $\phi$ is the ReLU activation and $\tau$ is a scalar. We show that for the standard initialization used in practice, $\tau = 1/\Omega(\sqrt{L})$ is a sharp value in characterizing the stability of the forward/backward process of ResNet, where $L$ is the number of residual blocks. Specifically, stability is guaranteed for $\tau \le 1/\Omega(\sqrt{L})$, while conversely the forward process explodes when $\tau \ge c/\sqrt{L}$ for a positive constant $c$. Moreover, if ResNet is properly over-parameterized, we show that for $\tau \le 1/\tilde{\Omega}(\sqrt{L})$ gradient descent is guaranteed to find the global minima (we use $\tilde{\Omega}$ to hide logarithmic factors), which significantly enlarges the range of $\tau$ that admits global convergence compared with $\tau \le 1/\tilde{\Omega}(L)$ in previous work.
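The scaled residual block analyzed in record 886 is compact enough to write out; a hedged PyTorch sketch with $\tau = 1/\sqrt{L}$ follows (the choice of g as a two-layer MLP is an assumption for illustration):

```python
import torch.nn as nn

class TauResBlock(nn.Module):
    """Record 886's block h_l = phi(h_{l-1} + tau * g(h_{l-1})), with
    tau = 1/sqrt(L) in the stable regime the abstract describes."""
    def __init__(self, dim, num_blocks):
        super().__init__()
        self.tau = num_blocks ** -0.5        # tau = 1/sqrt(L)
        self.g = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.phi = nn.ReLU()

    def forward(self, h):
        return self.phi(h + self.tau * self.g(h))
```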
We also demonstrate that the over-parameterization requirement of ResNet only weakly depends on the depth, which corroborates the advantage of ResNet over vanilla feedforward networks. Empirically, with such $\tau$, deep ResNet can be easily trained even without a normalization layer. Moreover, adding $\tau$ can also improve the performance of ResNet with normalization layers.""","""The article studies the stability of ResNets in relation to initialisation and depth. The reviewers found that this is an interesting article with important theoretical and experimental results. However, they also pointed out that the results, while good, are based on adaptations of previous work and hence might not be particularly impactful. The reviewers found that the revision made important improvements, but that it did not quite meet the bar for acceptance, pointing out that the presentation and details in the proofs could still be improved. """ 887,"""Style-based Encoder Pre-training for Multi-modal Image Synthesis""","['image-to_image translation', 'representation learning', 'multi-modal image synthesis', 'GANs']","""Image-to-image (I2I) translation aims to translate images from one domain to another. To tackle the multi-modal version of I2I translation, where input and output domains have a one-to-many relation, an extra latent input is provided to the generator to specify a particular output. Recent works propose involved training objectives to learn a latent embedding, jointly with the generator, that models the distribution of possible outputs. Alternatively, we study a simple, yet powerful pre-training strategy for multi-modal I2I translation. We first pre-train an encoder, using a proxy task, to encode the style of an image, such as color and texture, into a low-dimensional latent style vector. Then we train a generator to transform an input image along with a style code to the output domain. Our generator achieves state-of-the-art results on several benchmarks with a training objective that includes just a GAN loss and a reconstruction loss, which simplifies and speeds up the training significantly compared to competing approaches. We further study the contribution of different loss terms to learning the task of multi-modal I2I translation, and finally we show that the learned style embedding is not dependent on the target domain and generalizes well to other domains.""","""The submission describes a new two-stage training scheme for multi-modal image-to-image translation. The new scheme is compared to a single-stage end-to-end baseline, and the advantage of the new scheme is demonstrated empirically. All three reviewers appreciate the proposed contribution and the quality improvement it brings over the baseline. At the same time, the reviewers see the contribution as incremental and not sufficient for an ICLR paper. The author response and paper adjustment have not changed the opinion of the reviewers, so the overall recommendation is to reject.""" 888,"""PairNorm: Tackling Oversmoothing in GNNs""","['Graph Neural Network', 'oversmoothing', 'normalization']","""The performance of graph neural nets (GNNs) is known to gradually decrease with an increasing number of layers. This decay is partly attributed to oversmoothing, where repeated graph convolutions eventually make node embeddings indistinguishable. We take a closer look at two different interpretations, aiming to quantify oversmoothing.
Our main contribution is PairNorm, a novel normalization layer that is based on a careful analysis of the graph convolution operator, which prevents all node embeddings from becoming too similar. What is more, PairNorm is fast, easy to implement without any change to the network architecture or any additional parameters, and is broadly applicable to any GNN. Experiments on real-world graphs demonstrate that PairNorm makes deeper GCN, GAT, and SGC models more robust against oversmoothing, and significantly boosts performance for a new problem setting that benefits from deeper GNNs. Code is available at pseudo-url.""","""The paper proposes a way to tackle oversmoothing in Graph Neural Networks. The authors do a good job of motivating their approach, which is straightforward and works well. The paper is well written and the experiments are informative and well carried out. Therefore, I recommend acceptance. Please make sure the final version reflects the discussion during the rebuttal.""" 889,"""A GOODNESS OF FIT MEASURE FOR GENERATIVE NETWORKS""","['generative adversarial networks', 'goodness of fit', 'inception score', 'empirical approximation error', 'validation metric', 'frechet inception score']","""We define a goodness of fit measure for generative networks which captures how well the network can generate the training data, which is necessary to learn the true data distribution. We demonstrate how our measure can be leveraged to understand mode collapse in generative adversarial networks and provide practitioners with a novel way to perform model comparison and early stopping without having to access another trained model as with Frechet Inception Distance or Inception Score. This measure shows that several successful, popular generative models, such as DCGAN and WGAN, fall very short of learning the data distribution. We identify this issue in generative models and empirically show that overparameterization via subsampling data and using a mixture of models improves performance in terms of goodness of fit.""","""This paper proposes to measure the distance of the generator manifold to the training data. The proposed approach bears significant similarity to past studies that also sought to analyze the behavior of generative models that define a low-dimensional manifold (e.g. Webster 2019, and in particular, Xiang 2017). I recommend that the authors perform a broader literature search to better contextualize the claims and experiments put forth in the paper. The proposed method also suffers from some limitations that are not made clear in the paper. First, the measure depends only on the support of the generator, but not the density. For models that have support everywhere (exact likelihood models tend to have this property by construction), the measure is no longer meaningful. Even for VAEs, the measure is only easily applicable if the decoder is non-autoregressive so that the procedure can be applied only to the mean decoding. In its current state, I do not recommend the paper for acceptance. Xiang (2017). On the Effects of Batch and Weight Normalization in Generative Adversarial Networks. Webster (2019).
Detecting Overfitting of Deep Generative Networks via Latent Recovery. """ 890,"""FINBERT: FINANCIAL SENTIMENT ANALYSIS WITH PRE-TRAINED LANGUAGE MODELS""","['Financial sentiment analysis', 'financial text classification', 'transfer learning', 'pre-trained language models', 'BERT', 'NLP']","""While many sentiment classification solutions report high accuracy scores on product or movie review datasets, the performance of the methods in niche domains such as finance still largely falls behind. The reason for this gap is the domain-specific language, which decreases the applicability of existing models, and the lack of quality labeled data to learn the new context of positive and negative in the specific domain. Transfer learning has been shown to be successful in adapting to new domains without large training data sets. In this paper, we explore the effectiveness of NLP transfer learning in financial sentiment classification. We introduce FinBERT, a language model based on BERT, which improved the state-of-the-art performance by 14 percentage points for a financial sentiment classification task on the Financial PhraseBank dataset.""","""This paper presents FinBERT, a BERT-based model that is further trained on a financial corpus and evaluated on Financial PhraseBank and Financial QA. The authors show that FinBERT slightly outperforms baseline methods on both tasks. The reviewers agree that the novelty is limited and this seems to be an application of BERT to a financial dataset. There are many cases when it is okay to not present something entirely novel in terms of the model, as long as a paper still provides new insights elsewhere. Unfortunately, the new experiments in this paper are also not convincing. The improvements are very minor on small evaluation datasets, which makes the main contributions of the paper not enough for a venue such as ICLR. The authors did not respond to any of the reviewers' concerns. I recommend rejecting this paper.""" 891,"""CRNet: Image Super-Resolution Using A Convolutional Sparse Coding Inspired Network""","['Convolutional sparse coding', 'LISTA', 'image super-resolution']","""Convolutional Sparse Coding (CSC) has been attracting more and more attention in recent years, for making full use of global image correlation to improve performance on various computer vision applications. However, very few studies focus on solving the CSC-based image Super-Resolution (SR) problem. As a consequence, there has been no significant progress in this area for some time. In this paper, we exploit the natural connection between CSC and Convolutional Neural Networks (CNN) to address CSC-based image SR. Specifically, the Convolutional Iterative Soft Thresholding Algorithm (CISTA) is introduced to solve the CSC problem, and it can be implemented using CNN architectures. Then we develop a novel CSC-based SR framework analogous to traditional SC-based SR methods. Two models inspired by this framework are proposed for pre-/post-upsampling SR, respectively. Compared with recent state-of-the-art SR methods, both of our proposed models show superior performance in terms of both quantitative and qualitative measurements.""","""All three reviewers agreed that the paper should not be accepted.
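For concreteness, one iteration of the convolutional ISTA scheme named in the record-891 abstract (CISTA) can be sketched as a gradient step on the reconstruction error followed by soft thresholding; the step size, threshold, and dictionary shape below are assumptions:

```python
import torch
import torch.nn.functional as F

def cista_step(z, x, dict_w, step=0.1, theta=0.01):
    """One convolutional ISTA iteration in the spirit of record 891.

    z:      sparse codes, shape (N, n_filters, H, W)
    x:      target image, shape (N, img_ch, H, W)
    dict_w: convolutional dictionary, shape (n_filters, img_ch, k, k), odd k
    """
    pad = dict_w.shape[-1] // 2
    recon = F.conv_transpose2d(z, dict_w, padding=pad)   # synthesize image from codes
    grad = F.conv2d(recon - x, dict_w, padding=pad)      # adjoint applied to residual
    z = z - step * grad
    return torch.sign(z) * torch.clamp(z.abs() - theta, min=0.0)  # soft threshold
```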
No rebuttal was offered; thus the paper is rejected.""" 892,"""Model Ensemble-Based Intrinsic Reward for Sparse Reward Reinforcement Learning""","['Reinforcement Learning', 'Intrinsic Reward', 'Dynamics Model', 'Ensemble']","""In this paper, a new intrinsic reward generation method for sparse-reward reinforcement learning is proposed based on an ensemble of dynamics models. In the proposed method, the mixture of multiple dynamics models is used to approximate the true unknown transition probability, and the intrinsic reward is designed as the minimum of the surprise seen from each dynamics model to the mixture of the dynamics models. In order to show the effectiveness of the proposed intrinsic reward generation method, a working algorithm is constructed by combining it with the proximal policy optimization (PPO) algorithm. Numerical results show that for representative locomotion tasks, the proposed model-ensemble-based intrinsic reward generation method outperforms the previous methods based on a single dynamics model.""","""This paper considers the challenge of sparse reward reinforcement learning through intrinsic reward generation based on the deviation in predictions of an ensemble of dynamics models. This is combined with PPO and evaluated in some Mujoco domains. The main issue here was with the way the sparse rewards were provided in the experiments, which was artificial and could lead to a number of problems with the reward structure and partial observability. The work was also considered incremental in its novelty. These concerns were not adequately rebutted, and so as it stands this paper should be rejected.""" 893,"""End to End Trainable Active Contours via Differentiable Rendering""",[],"""We present an image segmentation method that iteratively evolves a polygon. At each iteration, the vertices of the polygon are displaced based on the local value of a 2D shift map that is inferred from the input image via an encoder-decoder architecture. The main training loss that is used is the difference between the polygon shape and the ground truth segmentation mask. The network employs a neural renderer to create the polygon from its vertices, making the process fully differentiable. We demonstrate that our method outperforms the state of the art segmentation networks and deep active contour solutions in a variety of benchmarks, including medical imaging and aerial images.""","""The submission presents a differentiable take on classic active contour methods, which used to be popular in computer vision. The method is sensible and the results are strong. After the revision, all reviewers recommend accepting the paper.""" 894,"""Adversarial AutoAugment""","['Automatic Data Augmentation', 'Adversarial Learning', 'Reinforcement Learning']","""Data augmentation (DA) has been widely utilized to improve generalization in training deep neural networks. Recently, human-designed data augmentation has been gradually replaced by automatically learned augmentation policies. Through finding the best policy in a well-designed search space of data augmentation, AutoAugment (Cubuk et al., 2019) can significantly improve validation accuracy on image classification tasks. However, this approach is not computationally practical for large-scale problems.
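To make record 892's intrinsic reward concrete, here is a hedged NumPy sketch of a minimum-over-the-ensemble surprise measure; the Gaussian form of each model's prediction and the exact surprise definition are assumptions for illustration:

```python
import numpy as np

def min_surprise_reward(next_state, preds):
    """Ensemble intrinsic reward in the spirit of record 892: compute the
    surprise of the observed transition under each dynamics model and take
    the minimum over the ensemble.

    preds: list of (mean, var) arrays, one pair per ensemble member."""
    surprises = []
    for mean, var in preds:
        # Gaussian negative log-likelihood as a surprise proxy
        nll = 0.5 * np.sum((next_state - mean) ** 2 / var + np.log(2 * np.pi * var))
        surprises.append(nll)
    return min(surprises)   # conservative: reward only jointly surprising states
```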
In this paper, we develop an adversarial method to arrive at a computationally affordable solution called Adversarial AutoAugment, which can simultaneously optimize the target-related objective and the augmentation policy search loss. The augmentation policy network attempts to increase the training loss of a target network by generating adversarial augmentation policies, while the target network can learn more robust features from harder examples to improve the generalization. In contrast to prior work, we reuse the computation in target network training for policy evaluation, and dispense with the retraining of the target network. Compared to AutoAugment, this leads to about a 12x reduction in computing cost and an 11x reduction in time overhead on ImageNet. We show experimental results of our approach on CIFAR-10/CIFAR-100 and ImageNet, and demonstrate significant performance improvements over the state of the art. On CIFAR-10, we achieve a top-1 test error of 1.36%, which is currently the best-performing single model. On ImageNet, we achieve a leading top-1 accuracy of 79.40% on ResNet-50 and 80.00% on ResNet-50-D without extra data.""","""This paper proposes a method to learn data augmentation policies using an adversarial loss. In contrast to AutoAugment, where an augmentation policy generator is trained by RL (computationally expensive), the authors propose to train a policy generator and the target classifier simultaneously. This is done in an adversarial fashion by computing augmentation policies which increase the loss of the classifier. The authors show that this approach leads to roughly an order of magnitude improvement in computational cost over AutoAugment, while improving the test performance. The reviewers agree that the presentation is clear and that the proposed method is sound, and that there is a significant practical benefit of using such a technique. As most of the concerns were addressed in the discussion phase, I will recommend acceptance of this paper. We ask the authors to update the manuscript to address the remaining (minor) concerns. """ 895,"""Towards Simplicity in Deep Reinforcement Learning: Streamlined Off-Policy Learning""","['Deep Reinforcement Learning', 'Sample Efficiency', 'Off-Policy Algorithms']","""The field of Deep Reinforcement Learning (DRL) has recently seen a surge in the popularity of maximum entropy reinforcement learning algorithms. Their popularity stems from the intuitive interpretation of the maximum entropy objective and their superior sample efficiency on standard benchmarks. In this paper, we seek to understand the primary contribution of the entropy term to the performance of maximum entropy algorithms. For the Mujoco benchmark, we demonstrate that the entropy term in Soft Actor Critic (SAC) principally addresses the bounded nature of the action spaces. With this insight, we propose a simple normalization scheme which allows a streamlined algorithm without entropy maximization to match the performance of SAC. Our experimental results demonstrate a need to revisit the benefits of entropy regularization in DRL. We also propose a simple non-uniform sampling method for selecting transitions from the replay buffer during training.
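The adversarial training scheme in record 894 can be summarized as a min-max objective; the following formulation is our reading of the abstract, not the paper's exact notation ($f_\theta$ is the target network, $p_\phi$ the augmentation policy distribution, $\tau$ a sampled augmentation):

```latex
\min_{\theta} \; \max_{\phi} \;
  \mathbb{E}_{\tau \sim p_{\phi}} \,
  \mathbb{E}_{(x,y) \sim \mathcal{D}}
  \left[ \mathcal{L}\big(f_{\theta}(\tau(x)),\, y\big) \right]
```

The target network descends on this loss while the policy network ascends on it, so the policy keeps proposing harder augmentations as training progresses.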
We further show that the streamlined algorithm with the simple non-uniform sampling scheme outperforms SAC and achieves state-of-the-art performance on challenging continuous control tasks.""","""The paper studies the role of entropy in maximum entropy RL, particularly in soft actor-critic, and proposes an action normalization scheme that leads to a new algorithm, called Streamlined Off-Policy (SOP), that does not maximize entropy, but retains or exceeds the performance of SAC. Independently from SOP, the paper also introduces Emphasizing Recent Experience (ERE), which samples minibatches from the replay buffer by prioritizing the most recent samples. After rounds of discussion and a revised version with added experiments, the reviewers viewed ERE as the main contribution, but had doubts regarding the claimed benefits of SOP. However, the paper is currently structured around SOP, and the effectiveness of ERE, which can be applied to any off-policy algorithm, is not properly studied. Therefore, I recommend rejection, but encourage the authors to revisit the work with an emphasis on ERE.""" 896,"""RTFM: Generalising to New Environment Dynamics via Reading""","['reinforcement learning', 'policy learning', 'reading comprehension', 'generalisation']","""Obtaining policies that can generalise to new environments in reinforcement learning is challenging. In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environments. We propose a grounded policy learning problem, Read to Fight Monsters (RTFM), in which the agent must jointly reason over a language goal, relevant dynamics described in a document, and environment observations. We procedurally generate environment dynamics and corresponding language descriptions of the dynamics, such that agents must read to understand new environment dynamics instead of memorising any particular information. In addition, we propose txt2, a model that captures three-way interactions between the goal, document, and observations. On RTFM, txt2 generalises to new environments with dynamics not seen during training via reading. Furthermore, our model outperforms baselines such as FiLM and language-conditioned CNNs on RTFM. Through curriculum learning, txt2 produces policies that excel on complex RTFM tasks requiring several reasoning and coreference steps.""","""This paper proposes RTFM, a new model in the field of language-conditioned policy learning. This approach is promising and important in reinforcement learning because of the difficulty of learning policies in new environments. Reviewers appreciate the importance of the problem and the effective approach. After the author response, which addressed some of the major concerns, reviewers feel more positive about the paper. They comment, though, that the presentation could be clearer, and the limitations of using synthetic data should be discussed in depth. I thank the authors for submitting this paper.""" 897,"""Disentangling Style and Content in Anime Illustrations""","['Adversarial Training', 'Generative Models', 'Style Transfer', 'Anime']","""Existing methods for AI-generated artworks still struggle with generating high-quality stylized content, where high-level semantics are preserved, or separating fine-grained styles from various artists.
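Record 895's "simple normalization scheme" for bounded action spaces can be sketched in a few lines; the exact rule below (rescale the pre-squashing means when their average magnitude exceeds 1) is one natural reading of the abstract, not a verified reproduction of the paper:

```python
import torch

def normalize_action_means(mu, eps=1e-6):
    """SOP-style output normalization (record 895, hedged sketch): keep the
    pre-tanh action means from growing so large that the squashing
    saturates, which is what the entropy term in SAC implicitly prevents."""
    g = mu.abs().mean(dim=-1, keepdim=True)      # average magnitude per action vector
    return torch.where(g > 1.0, mu / (g + eps), mu)
```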
We propose a novel Generative Adversarial Disentanglement Network which can disentangle two complementary factors of variation when only one of them is labelled in general, and fully decompose complex anime illustrations into style and content in particular. Training such a model is challenging since, given a style, various content data may exist but not the other way round. Our approach is divided into two stages, one that encodes an input image into style-independent content, and one based on a dual-conditional generator. We demonstrate the ability to generate high-fidelity anime portraits with a fixed content and a large variety of styles from over a thousand artists, and vice versa, using a single end-to-end network and with applications in style transfer. We show this unique capability as well as superior output to the current state-of-the-art.""","""This paper proposes a two-stage adversarial training approach for learning a disentangled representation of style and content of anime images. Unlike previous style transfer work, here style is defined as the identity of a particular anime artist, rather than a set of uninterpretable style features. This allows the trained network to generate new anime images which have a particular content and are drawn in the style of a particular artist. While the approach works well, the reviewers voiced concerns about the method (overly complicated and somewhat incremental) and the quality of the experimental section (lack of good baselines and quantitative comparisons, at least in terms of the disentanglement quality). It was also mentioned that releasing the code and the dataset would strengthen the appeal of the paper. While the authors have addressed some of the reviewers' concerns, unfortunately it was not enough to persuade the reviewers to change their marks. Hence, I have to recommend a rejection.""" 898,"""Feature Partitioning for Efficient Multi-Task Architectures""","['multi-task learning', 'neural architecture search', 'multi-task architecture search']","""Multi-task learning promises to use less data, parameters, and time than training separate single-task models. But realizing these benefits in practice is challenging. In particular, it is difficult to define a suitable architecture that has enough capacity to support many tasks while not requiring excessive compute for each individual task. There are difficult trade-offs when deciding how to allocate parameters and layers across a large set of tasks. To address this, we propose a method for automatically searching over multi-task architectures that accounts for resource constraints. We define a parameterization of feature sharing strategies for effective coverage and sampling of architectures. We also present a method for quick evaluation of such architectures with feature distillation. Together these contributions allow us to quickly optimize for parameter-efficient multi-task models. We benchmark on Visual Decathlon, demonstrating that we can automatically search for and identify architectures that effectively make trade-offs between task resource requirements while maintaining a high level of final performance.""","""This paper considers how to create efficient architectures for multi-task neural networks. R1 recommends Weak Reject, identifying concerns about the clarity of writing, unsupported claims, and missing or unclear technical details. R2 recommends Weak Accept but calls this a ""borderline"" case, and has concerns about experiments and comparisons to baselines.
R3 also has concerns about experiments and baselines, and feels the approach is somewhat ad hoc. The authors submitted a response that addressed some of these issues, but the reviewers chose to maintain their decisions. The AC feels the paper has merit but, given these slightly negative to borderline reviews, we cannot recommend acceptance at this time. We hope the reviewer comments help the authors to prepare a revision for another venue.""" 899,"""A Fine-Grained Spectral Perspective on Neural Networks""","['Neural Tangent Kernel', 'Neural Network Gaussian Process', 'Spectral theory', 'Eigenvalues', 'Harmonic analysis']","""Are neural networks biased toward simple functions? Does depth always help learn more complex features? Is training the last layer of a network as good as training all layers? These questions seem unrelated at face value, but in this work we give all of them a common treatment from the spectral perspective. We will study the spectra of the *Conjugate Kernel, CK* (also called the *Neural Network-Gaussian Process Kernel*), and the *Neural Tangent Kernel, NTK*. Roughly, the CK and the NTK tell us respectively ""what a network looks like at initialization"" and ""what a network looks like during and after training."" Their spectra then encode valuable information about the initial distribution and the training and generalization properties of neural networks. By analyzing the eigenvalues, we lend novel insights into the questions put forth at the beginning, and we verify these insights by extensive experiments on neural networks. We believe the computational tools we develop here for analyzing the spectra of CK and NTK serve as a solid foundation for future studies of deep neural networks. We have open-sourced the code for it and for generating the plots in this paper at pseudo-url.""","""The authors develop a spectral analysis on the boolean cube for the neural ""conjugate kernel"" (CK) and ""tangent kernel"" (NTK). The analysis sheds light on inductive biases of neural networks, such as whether they are biased toward simple functions. This work contains rigorous analysis and theory which is useful for further discussions. However, the theory and insights do not feel complete. One important drawback is that the analysis is limited by the boolean cube setting; this also means that it is more difficult to link theory to practical scenarios. This has been discussed a lot during the rebuttal and among reviewers. Empirical validation has attempted to deal with these concerns, but it would be useful to have this validation coming from theory, or at least have further relevant theoretical insights. This could happen by further building on the theorem provided in the rebuttal for eigenvalue behavior when d is large.""" 900,"""Fast Neural Network Adaptation via Parameter Remapping and Architecture Search""",[],"""Deep neural networks achieve remarkable performance in many computer vision tasks. Most state-of-the-art (SOTA) semantic segmentation and object detection approaches reuse neural network architectures designed for image classification as the backbone, commonly pre-trained on ImageNet. However, performance gains can be achieved by designing network architectures specifically for detection and segmentation, as shown by recent neural architecture search (NAS) research for detection and segmentation. One major challenge, though, is that ImageNet pre-training of the search space representation (a.k.a. super network) or the searched networks incurs huge computational cost.
In this paper, we propose a Fast Neural Network Adaptation (FNA) method, which can adapt both the architecture and parameters of a seed network (e.g. a high-performing manually designed backbone) to become a network with different depth, width, or kernels via a Parameter Remapping technique, making it possible to utilize NAS for detection/segmentation tasks a lot more efficiently. In our experiments, we conduct FNA on MobileNetV2 to obtain new networks for both segmentation and detection that clearly outperform existing networks designed both manually and by NAS. The total computation cost of FNA is significantly less than that of SOTA segmentation/detection NAS approaches: 1737x less than DPC, 6.8x less than Auto-DeepLab and 7.4x less than DetNAS. The code is available at pseudo-url.""","""Main content: The paper proposes a fast network adaptation (FNA) method, which takes a pre-trained image classification network and produces a network for the task of object detection/semantic segmentation. Summary of discussion: reviewer 1: interesting paper with good results, specifically without the need to do pre-training on ImageNet; cons are the need for better comparisons to existing methods and runs on more datasets. reviewer 2: interesting idea on adapting a source network via parameter remapping that offers good results in both performance and training time. reviewer 3: novel method overall, though some concerns on the concrete parameter remapping scheme; results are impressive. Recommendation: Interesting idea and good results. The paper could be improved with better comparison to existing techniques. Overall, recommend weak accept.""" 901,"""Bootstrapping the Expressivity with Model-based Planning""","['reinforcement learning theory', 'model-based reinforcement learning', 'planning', 'expressivity', 'approximation theory', 'deep reinforcement learning theory']","""We compare model-free reinforcement learning with model-based approaches through the lens of the expressive power of neural networks for policies, Q-functions, and dynamics. We show, theoretically and empirically, that even for one-dimensional continuous state space, there are many MDPs whose optimal Q-functions and policies are much more complex than the dynamics. We hypothesize many real-world MDPs also have a similar property. For these MDPs, model-based planning is a favorable algorithm, because the resulting policies can approximate the optimal policy significantly better than a neural network parameterization can, and model-free or model-based policy optimization rely on policy parameterization. Motivated by the theory, we apply a simple multi-step model-based bootstrapping planner (BOOTS) to bootstrap a weak Q-function into a stronger policy. Empirical results show that applying BOOTS on top of model-based or model-free policy optimization algorithms at test time improves the performance on MuJoCo benchmark tasks. ""","""The paper provides some insight into why model-based RL might be more efficient than model-free methods. It provides an example in which, even though the dynamics are simple, the value function is quite complicated (it is a fractal). Even though the particular example might be novel and the construction interesting, this relation between dynamics and value function is not surprising, and perhaps part of the folklore. The paper also suggests a model-based RL method and provides some empirical results.
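A minimal sketch of a width-level parameter remapping in the spirit of record 900's FNA (the tiling rule, function name, and tensor layout are assumptions; the paper's exact remapping is not reproduced here):

```python
import torch

def remap_conv_width(w, out_ch, in_ch):
    """Adapt seed conv weights of shape (o, i, kh, kw) to a new width
    (out_ch, in_ch, kh, kw) by reusing/tiling the seed channels, so a
    super network of a different width can start from pre-trained values."""
    o, i, _, _ = w.shape
    out_idx = torch.arange(out_ch) % o   # tile seed output channels
    in_idx = torch.arange(in_ch) % i     # tile seed input channels
    return w[out_idx][:, in_idx]
```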
The reviewers found the paper interesting, but they expressed several concerns about the relevance of the particular example, the relation of the theory to empirical results, etc. The authors provided a rebuttal, but the reviewers were not convinced. Given that we have two Weak Rejects and the reviewer who is Weak Accept is not completely convinced, unfortunately I can only recommend rejection of this paper at this stage.""" 902,"""Measuring Calibration in Deep Learning""","['Deep Learning', 'Multiclass Classification', 'Classification', 'Uncertainty Estimation', 'Calibration']","""Overconfidence and underconfidence in machine learning classifiers are measured by calibration: the degree to which the probabilities predicted for each class match the accuracy of the classifier on that prediction. We propose two new measures for calibration, the Static Calibration Error (SCE) and the Adaptive Calibration Error (ACE). These measures take into account every prediction made by a model, in contrast to the popular Expected Calibration Error.""","""The authors propose two measures of calibration that don't simply rely on the top prediction. The reviewers gave a lot of useful feedback. Unfortunately, the authors didn't respond.""" 903,"""Reinforcement Learning without Ground-Truth State""","['Self-supervised', 'goal-conditioned reinforcement learning']","""To perform robot manipulation tasks, a low-dimensional state of the environment typically needs to be estimated. However, designing a state estimator can sometimes be difficult, especially in environments with deformable objects. An alternative is to learn an end-to-end policy that maps directly from high-dimensional sensor inputs to actions. However, if this policy is trained with reinforcement learning, then without a state estimator, it is hard to specify a reward function based on high-dimensional observations. To meet this challenge, we propose a simple indicator reward function for goal-conditioned reinforcement learning: we only give a positive reward when the robot's observation exactly matches a target goal observation. We show that by relabeling the original goal with the achieved goal to obtain positive rewards (Andrychowicz et al., 2017), we can learn with the indicator reward function even in continuous state spaces. We propose two methods to further speed up convergence with indicator rewards: reward balancing and reward filtering. We show comparable performance between our method and an oracle which uses the ground-truth state for computing rewards. We show that our method can perform complex tasks in continuous state spaces such as rope manipulation from RGB-D images, without knowledge of the ground-truth state.""","""This paper considers the problem of reinforcement learning with goal-conditioned agents where the agents do not have access to the ground truth state. The paper builds on the ideas in hindsight experience replay (HER), a method that relabels past trajectories with a goal set in hindsight. This hindsight mechanism enables indicator reward functions to be useful even with image inputs. Two technical contributions are reward balancing (balancing positive and negative experience) and reward filtering (a heuristic for removing false negatives). The method is tested on multiple tasks including a novel RopePush task in simulation. The reviewers discussed strengths and limitations of the paper. One strength was that the writing was clear for the reviewers.
One limitation was the paper's novelty, as most of these ideas are already present in HER, with the exception of reward filtering. Another major concern was that the experiments were not sufficiently informative. The simulation tasks did not adequately distinguish the proposed method from the baseline (in two of the three tasks), and the third task (RopePush) was simplified substantially (using invisible robot arms). The real-world task did not require the pixel observations. The analysis of the method was also found to be somewhat limited by the reviewers, though this was partially addressed by the authors. This paper is not yet ready for publication since the proposed method has insufficient supporting evidence. A more thorough experiment could provide stronger evidence by showing a regime where the proposed method performs better than alternatives.""" 904,"""Self-labelling via simultaneous clustering and representation learning""","['self-supervision', 'feature representation learning', 'clustering']","""Combining clustering and representation learning is one of the most promising approaches for unsupervised learning of deep neural networks. However, doing so naively leads to ill-posed learning problems with degenerate solutions. In this paper, we propose a novel and principled learning formulation that addresses these issues. The method is obtained by maximizing the information between labels and input data indices. We show that this criterion extends standard cross-entropy minimization to an optimal transport problem, which we solve efficiently for millions of input images and thousands of labels using a fast variant of the Sinkhorn-Knopp algorithm. The resulting method is able to self-label visual data so as to train highly competitive image representations without manual labels. Our method achieves state-of-the-art representation learning performance for AlexNet and ResNet-50 on SVHN, CIFAR-10, CIFAR-100 and ImageNet and yields the first self-supervised AlexNet that outperforms the supervised Pascal VOC detection baseline. ""","""The paper focuses on supervised and self-supervised learning. The originality is to formulate the self-supervised criterion in terms of optimal transport, where the trained representation is required to induce pseudo-formula equidistributed clusters. The formulation is well founded; in practice, the approach proceeds by alternately optimizing the cross-entropy loss (SGD) and the pseudo-loss, through a fast version of the Sinkhorn-Knopp algorithm, and scales up to millions of samples and thousands of classes. Some concerns about the robustness w.r.t. imbalanced classes, the ability to deliver SOTA supervised performance, and the computational complexity have been answered by the rebuttal and handled through new experiments. The convergence toward a local minimum is shown; however, increasing the number of pseudo-label optimization rounds might degrade the results. Overall, I recommend accepting the paper as an oral presentation. A fancier title would do better justice to this very nice paper (""Self-labelling learning via optimal transport""?). """ 905,"""Ridge Regression: Structure, Cross-Validation, and Sketching""","['ridge regression', 'sketching', 'random matrix theory', 'cross-validation', 'high-dimensional asymptotics']","""We study the following three fundamental problems about ridge regression: (1) what is the structure of the estimator? (2) how to correctly use cross-validation to choose the regularization parameter?
and (3) how to accelerate computation without losing too much accuracy? We consider the three problems in a unified large-data linear model. We give a precise representation of ridge regression as a covariance matrix-dependent linear combination of the true parameter and the noise. We study the bias of pseudo-formula -fold cross-validation for choosing the regularization parameter, and propose a simple bias-correction. We analyze the accuracy of primal and dual sketching for ridge regression, showing they are surprisingly accurate. Our results are illustrated by simulations and by analyzing empirical data.""","""The paper studies theoretical properties of ridge regression, and in particular how to correct for the bias of the estimator. The reviewers appreciated the contribution and the fact that you updated the manuscript to make it clearer. However, I advise the authors to think about the best way to maximize impact for the ICLR audience, perhaps by providing relevant examples from the ML literature.""" 906,"""SlowMo: Improving Communication-Efficient Distributed SGD with Slow Momentum""","['distributed optimization', 'decentralized training methods', 'communication-efficient distributed training with momentum', 'large-scale parallel SGD']","""Distributed optimization is essential for training large models on large datasets. Multiple approaches have been proposed to reduce the communication overhead in distributed training, such as synchronizing only after performing multiple local SGD steps, and decentralized methods (e.g., using gossip algorithms) to decouple communications among workers. Although these methods run faster than AllReduce-based methods, which use blocking communication before every update, the resulting models may be less accurate after the same number of updates. Inspired by the BMUF method of Chen & Huo (2016), we propose a slow momentum (SlowMo) framework, where workers periodically synchronize and perform a momentum update, after multiple iterations of a base optimization algorithm. Experiments on image classification and machine translation tasks demonstrate that SlowMo consistently yields improvements in optimization and generalization performance relative to the base optimizer, even when the additional overhead is amortized over many updates so that the SlowMo runtime is on par with that of the base optimizer. We provide theoretical convergence guarantees showing that SlowMo converges to a stationary point of smooth non-convex losses. Since BMUF can be expressed through the SlowMo framework, our results also correspond to the first theoretical convergence guarantees for BMUF.""","""This paper presents a new approach, SlowMo, to improve communication-efficient distributed training with SGD. The main method is based on the BMUF approach and relies on workers periodically synchronizing and performing a momentum update. This works well in practice, as shown in the empirical results. Reviewers had a couple of concerns regarding the significance of the contributions. After the rebuttal period, some of their doubts were clarified. Even though they find that the solutions of the paper are an incremental extension of existing work, they believe this is a useful extension.
For this reason, I recommend accepting this paper.""" 907,"""MANIFOLD FORESTS: CLOSING THE GAP ON NEURAL NETWORKS""","['machine learning', 'structured learning', 'projections', 'structured data', 'images', 'classification']","""Decision forests (DF), in particular random forests and gradient boosting trees, have demonstrated state-of-the-art accuracy compared to other methods in many supervised learning scenarios. In particular, DFs dominate other methods on tabular data, that is, when the feature space is unstructured, so that the signal is invariant to permuting feature indices. However, on structured data lying on a manifold---such as images, text, and speech---neural nets (NN) tend to outperform DFs. We conjecture that at least part of the reason for this is that the input to a NN is not simply the feature magnitudes, but also their indices (for example, the convolution operation uses ""feature locality""). In contrast, naive DF implementations fail to explicitly consider feature indices. A recently proposed DF approach demonstrates that DFs, for each node, implicitly sample a random matrix from some specific distribution. Here, we build on that to show that one can choose distributions in a manifold-aware fashion. For example, for image classification, rather than randomly selecting pixels, one can randomly select contiguous patches. We demonstrate the empirical performance on data living on three different manifolds: images, time-series, and a torus. In all three cases, our Manifold Forest (Mf) algorithm empirically dominates other state-of-the-art approaches that ignore feature space structure, achieving a lower classification error on all sample sizes. This dominance extends to the MNIST data set as well. Moreover, both training and test time are significantly faster for manifold forests as compared to deep nets. This approach, therefore, has promise to enable DFs and other machine learning methods to close the gap with deep nets on manifold-valued data. ""","""This work explores how to leverage the structure of the input in decision trees, as is done, for example, in convolutional networks. All reviewers agree that the experimental validation of the method as presented is extremely weak. The authors have not provided a response to answer the many concerns raised by reviewers. Therefore, we recommend rejection.""" 908,"""Learning a Behavioral Repertoire from Demonstrations""","['Behavioral Repertoires', 'Imitation Learning', 'Deep Learning', 'Adaptation', 'StarCraft 2']","""Imitation Learning (IL) is a machine learning approach to learn a policy from a set of demonstrations. IL can be useful to kick-start learning before applying reinforcement learning (RL) but it can also be useful on its own, e.g. to learn to imitate human players in video games. However, a major limitation of current IL approaches is that they learn only a single ""average"" policy based on a dataset that possibly contains demonstrations of numerous different types of behaviors. In this paper, we present a new approach called Behavioral Repertoire Imitation Learning (BRIL) that instead learns a repertoire of behaviors from a set of demonstrations by augmenting the state-action pairs with behavioral descriptions. The outcome of this approach is a single neural network policy conditioned on a behavior description that can be precisely modulated. We apply this approach to train a policy on 7,777 human demonstrations for the build-order planning task in StarCraft II.
Dimensionality reduction techniques are applied to construct a low-dimensional behavioral space from the high-dimensional army unit composition of each demonstration. The results demonstrate that the learned policy can be effectively manipulated to express distinct behaviors. Additionally, by applying the UCB1 algorithm, the policy can adapt its behavior in between games to reach a performance beyond that of the traditional IL baseline approach.""","""This paper proposes a way to learn context-dependent policies from demonstrations, where the context represents behavior labels obtained by annotating demonstrations with differences in behavior across dimensions, which are then reduced to 2 dimensions. Experiments are conducted in the domain of StarCraft. The main concerns from the reviewers related to the paper's novelty (as pointed out by R2) and experiments (particularly the lack of comparison with other methods and the evaluation of only 4 out of the 62 behaviour clusters, as pointed out by R3). As such, I cannot recommend acceptance, as the current results do not provide strong empirical evidence of the method's superiority over other alternatives.""" 909,"""Learning to Generate Grounded Visual Captions without Localization Supervision""","['image captioning', 'video captioning', 'self-supervised learning', 'visual grounding']","""When automatically generating a sentence description for an image or video, it often remains unclear how well the generated caption is grounded, or if the model hallucinates based on priors in the dataset and/or the language model. The most common way of relating image regions with words in caption models is through an attention mechanism over the regions that are used as input to predict the next word. The model must therefore learn to predict the attentional weights without knowing the word it should localize. This is difficult to train without grounding supervision since recurrent models can propagate past information and there is no explicit signal to force the captioning model to properly ground the individual decoded words. In this work, we help the model to achieve this via a novel cyclical training regimen that forces the model to localize each word in the image after the sentence decoder generates it, and then reconstruct the sentence from the localized image region(s) to match the ground-truth. Our proposed framework only requires learning one extra fully-connected layer (the localizer), a layer that can be removed at test time. We show that our model significantly improves grounding accuracy without relying on grounding supervision or introducing extra computation during inference, for both image and video captioning tasks.""","""This paper proposes a cyclical training scheme for grounded visual captioning, where a localization model is trained to identify the regions in the image referred to by caption words, and a reconstruction step is added conditioned on this information. This extends prior work which required grounding supervision. While the proposed approach is sensible and grounding of generated captions is an important requirement, some reviewers (me included) pointed out concerns about the relevance of this paper's contributions.
I found the authors' explanation, that the objective is not to improve the captioning accuracy but to refine its grounding performance without any localization supervision, a bit unconvincing -- I would expect that better grounding would be reflected in overall better captioning performance, which seems to have happened with the supervised model of Zhou et al. (2019). In fact, even the localization gains seem rather small: the attention accuracy for the localizer is 20.4%, which is only slightly higher than the 19.3% from the decoder at the end of training. Overall, the proposed model is an incremental change to the training of an image captioning system, by adding a localizer component, which is not used at test time. The authors' claim that ""The network is implicitly regularized to update its attention mechanism to match with the localized image regions"" is also unclear to me -- there is nothing in the loss function that penalizes the difference between these two attentions, as the gradient doesn't backprop from one component to another. Sharing the LSTM and Language LSTM doesn't imply this, as the localizer is just providing guidance to the decoder, but there is no reason this will help the attention of the original model. Other natural questions left unanswered by this paper are: - What happens if we use the localizer also at test time (calling the decoder twice)? Will the captions improve? This experiment would be needed to assess the potential of this method to help image captioning. - Can we keep refining this iteratively? - Can we add a loss term on the disagreement of the two attentions to actually achieve the said regularisation effect? Finally, the paper [1] (cited by the authors) seems to employ a similar strategy (encoder-decoder with reconstructor) with shown benefits in video captioning. [1] Bairui Wang, Lin Ma, Wei Zhang, and Wei Liu. Reconstruction network for video captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7622-7631, 2018. I suggest addressing some of these concerns in a revised version of the paper.""" 910,"""Neural networks with motivation""","['neuroscience', 'brain', 'motivation', 'learning', 'reinforcement learning', 'recurrent neural network', 'deep learning']","""How can animals behave effectively in conditions involving different motivational contexts? Here, we propose how reinforcement learning neural networks can learn optimal behavior for dynamically changing motivational salience vectors. First, we show that Q-learning neural networks with motivation can navigate in an environment with dynamic rewards. Second, we show that such networks can learn complex behaviors simultaneously directed towards several goals distributed in an environment. Finally, we show that in a Pavlovian conditioning task, the responses of the neurons in our model resemble the firing patterns of neurons in the ventral pallidum (VP), a basal ganglia structure involved in motivated behaviors. We show that, similarly to real neurons, recurrent networks with motivation are composed of two oppositely-tuned classes of neurons, responding to positive and negative rewards. Our model generates predictions for the VP connectivity. We conclude that networks with motivation can rapidly adapt their behavior to varying conditions without changes in synaptic strength when the expected reward is modulated by motivation.
Such networks may also provide a mechanism for how hierarchical reinforcement learning is implemented in the brain.""","""This paper proposes a deep RL framework that incorporates motivation as input features, and is tested on 3 simplified domains, including one which is presented to rodents. While R2 found the paper well-written and interesting to read, a common theme among reviewer comments is that it's not clear what the main contribution is, as it seems to simultaneously be claiming an ML contribution (motivation as a feature input helps with certain tasks) as well as a neuroscientific contribution (their agent exhibited representations that clustered similarly to those in animals). In trying to do both, it's perhaps doing both a disservice. I think it's commendable to try to bridge the fields of deep RL and neuroscience, and this is indeed an intriguing paper. However, any such paper still needs to have a clear contribution. It seems that the ML contributions are too slight to be of general practical use, while the neuroscientific contributions are muddled somewhat. The authors several times mentioned the space constraints limiting their explanations. Perhaps this is an indication that they are trying to cover too much within one paper. I urge the authors to consider splitting it up into two separate works in order to give both the needed focus. I also have some concerns about the results themselves. R1 and R3 both mentioned that the comparison between the non-motivated agent and the motivated agent wasn't quite fair, since one is essentially only given partial information. It's therefore not clear how we should be interpreting the performance difference. Second, why was the non-motivated agent not analyzed in the same way as the motivated agent for the Pavlovian task? Isn't this a crucial comparison to make, if one wanted to argue that the motivational salience is key to reproducing the representational similarities of the animals? (The new experiment with the random fixed weights is interesting; I would have liked to see those results.) For these reasons and the ones laid out in the extensive comments of the reviewers, I'm afraid I have to recommend rejection. """ 911,"""Episodic Reinforcement Learning with Associative Memory""","['Deep Reinforcement Learning', 'Episodic Control', 'Episodic Memory', 'Associative Memory', 'Non-Parametric Method', 'Sample Efficiency']","""Sample efficiency has been one of the major challenges for deep reinforcement learning. Non-parametric episodic control has been proposed to speed up parametric reinforcement learning by rapidly latching onto previously successful policies. However, previous work on episodic reinforcement learning neglects the relationship between states and only stores the experiences as unrelated items. To improve the sample efficiency of reinforcement learning, we propose a novel framework, called Episodic Reinforcement Learning with Associative Memory (ERLAM), which associates related experience trajectories to enable reasoning about effective strategies. We build a graph on top of states in memory based on state transitions and develop a reverse-trajectory propagation strategy to allow rapid value propagation through the graph. We use the non-parametric associative memory as early guidance for a parametric reinforcement learning model.
Results on a navigation domain and Atari games show our framework achieves significantly higher sample efficiency than state-of-the-art episodic reinforcement learning models.""","""The submission tackles the problem of data efficiency in RL by building a graph on top of the replay memory and propagating values based on this representation of states and transitions. The method is evaluated on Atari games and is shown to outperform other episodic RL methods. The reviews were mixed initially but have been brought up by the revisions to the paper and the authors' rebuttal. In particular, there was a concern about theoretical support, and the authors added a proof of convergence. They have also added additional experiments and explanations. Given the positive reviews and discussion, the recommendation is to accept this paper.""" 912,"""Adaptive Correlated Monte Carlo for Contextual Categorical Sequence Generation""","['binary softmax', 'discrete variables', 'policy gradient', 'pseudo actions', 'reinforcement learning', 'variance reduction']","""Sequence generation models are commonly refined with reinforcement learning over user-defined metrics. However, high gradient variance hinders the practical use of this method. To stabilize this method, we adapt a policy gradient estimator, which evaluates a set of correlated Monte Carlo (MC) rollouts for variance control, to the contextual generation of categorical sequences. Due to the correlation, the number of unique rollouts is random and adaptive to model uncertainty; those rollouts naturally become baselines for each other, and hence are combined to effectively reduce gradient variance. We also demonstrate the use of correlated MC rollouts for binary-tree softmax models, which reduce the high generation cost in large vocabulary scenarios by decomposing each categorical action into a sequence of binary actions. We evaluate our methods on both neural program synthesis and image captioning. The proposed methods yield lower gradient variance and consistent improvement over related baselines. ""","""The paper presents a novel reinforcement learning-based algorithm for contextual sequence generation. Specifically, the paper presents experimental results on the application of the gradient ARSM estimator of Yin et al. (2019) to challenging structured prediction problems (neural program synthesis and image captioning). The method consists in performing correlated Monte Carlo rollouts starting from each token in the generated sequence, and using the multiple rollouts to reduce gradient variance. Numerical experiments are presented with promising performance. Reviewers were in agreement that this is a non-trivial extension of previous work with broad potential application. Some concerns about better framing of contributions were mostly resolved during the author rebuttal phase. Therefore, the AC recommends publication. """ 913,"""Lagrangian Fluid Simulation with Continuous Convolutions""","['particle-based physics', 'fluid mechanics', 'continuous convolutions', 'material estimation']","""We present an approach to Lagrangian fluid simulation with a new type of convolutional network. Our networks process sets of moving particles, which describe fluids in space and time. Unlike previous approaches, we do not build an explicit graph structure to connect the particles but use spatial convolutions as the main differentiable operation that relates particles to their neighbors.
To this end, we present a simple, novel, and effective extension of N-D convolutions to the continuous domain. We show that our network architecture can simulate different materials, generalize to arbitrary collision geometries, and be used for inverse problems. In addition, we demonstrate that our continuous convolutions outperform prior formulations in terms of accuracy and speed. ""","""The paper proposes an approach for N-D continuous convolution on unordered particle sets and applies it to Lagrangian fluid simulation. All reviewers found the paper to be a novel and useful contribution towards the problem of N-D continuous convolution on unordered particles. I recommend acceptance. """ 914,"""Multi-Agent Interactions Modeling with Correlated Policies""","['Multi-agent reinforcement learning', 'Imitation learning']","""In multi-agent systems, complex interacting behaviors arise due to the high correlations among agents. However, previous work on modeling multi-agent interactions from demonstrations is primarily constrained by assuming the independence among policies and their reward structures. In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents' policies, recovering agents' policies that can regenerate similar interactions. Consequently, we develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL), which allows for decentralized training and execution. Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators and outperforms state-of-the-art multi-agent imitation learning methods. Our code is available at \url{pseudo-url}.""","""The paper proposes an extension to the popular Generative Adversarial Imitation Learning framework that considers multi-agent settings with ""correlated policies"", i.e., where agents' actions influence each other. The proposed approach learns opponent models to consider possible opponent actions during learning. Several questions were raised during the review phase, including clarifying questions about key components of the proposed approach and theoretical contributions, as well as concerns about related work. These were addressed by the authors, and the reviewers are satisfied that the resulting paper provides a valuable contribution. I encourage the authors to continue to use the reviewers' feedback to improve the clarity of their manuscript in time for the camera-ready submission.""" 915,"""Leveraging inductive bias of neural networks for learning without explicit human annotations""","['dataset construction', 'deep learning', 'candidate examples']","""Classification problems today are typically solved by first collecting examples along with candidate labels, second obtaining clean labels from workers, and third training a large, overparameterized deep neural network on the clean examples. The second step, labeling, is often the most expensive one, as it requires manually going through all examples. In this paper we skip the labeling step entirely and propose to directly train the deep neural network on the noisy raw labels and early stop the training to avoid overfitting.
With this procedure, we exploit an intriguing property of large overparameterized neural networks: while they are capable of perfectly fitting the noisy data, gradient descent fits clean labels much faster than the noisy ones, so early stopping resembles training on the clean labels. Our results show that early stopping the training of standard deep networks such as ResNet-18 on part of the Tiny Images dataset, which does not involve any human-labeled data, and of which only about half of the labels are correct, gives a significantly higher test performance than training on the clean CIFAR-10 training dataset, which is a labeled version of the Tiny Images dataset, for the same classification problem. In addition, our results show that the noise generated through the label collection process is not nearly as adversarial for learning as the noise generated by randomly flipping labels, which is the noise most prevalent in works demonstrating the noise robustness of neural networks.""","""The authors present an approach to learning from noisy labels. The reviews were mixed and several issues remain unresolved. I do not accept the following as a valid response: ""We fully agree that noisily collected labels are common for many problems other than image classification. However, the focus of our paper is image classification, and we thus concentrate on classification problems related to the widely popular CIFAR-10 and ImageNet classification problems."" ICLR is a conference on theoretical and applied ML, and the fact that a technique has not been used for image classification before does not mean you bring something to the table by doing so. The NLP literature abounds with interesting work on label noise and should obviously be considered related work. That said, there are also missing references directly related to the connection between early stopping/regularization and label bias correction, including: [0] pseudo-url, [1] pseudo-url, [2] pseudo-url. See also this paper submitted to this conference: pseudo-url""" 916,"""AtomNAS: Fine-Grained End-to-End Neural Architecture Search""","['Neural Architecture Search', 'Image Classification']","""Search space design is critical to neural architecture search (NAS) algorithms. We propose a fine-grained search space comprised of atomic blocks, a minimal search unit that is much smaller than the ones used in recent NAS algorithms. This search space allows a mix of operations by composing different types of atomic blocks, while the search space in previous methods only allows homogeneous operations. Based on this search space, we propose a resource-aware architecture search framework which automatically assigns the computational resources (e.g., output channel numbers) for each operation by jointly considering the performance and the computational cost. In addition, to accelerate the search process, we propose a dynamic network shrinkage technique which prunes the atomic blocks with negligible influence on outputs on the fly. Instead of a search-and-retrain two-stage paradigm, our method simultaneously searches and trains the target architecture. Our method achieves state-of-the-art performance under several FLOPs configurations on ImageNet with a small searching cost. We open-source our entire codebase at pseudo-url.""","""Reviewer #1 noted that he wishes to change his review to weak accept post rebuttal, but did not change his score in the system. Presuming his score is weak accept, all reviewers are unanimous in favor of acceptance.
I have reviewed the paper and find the results to be clear, but the magnitude of the improvement is modest. I concur with the weak accept recommendation. """ 917,"""AMRL: Aggregated Memory For Reinforcement Learning""","['deep learning', 'reinforcement learning', 'rl', 'memory', 'noise', 'machine learning']","""In many partially observable scenarios, Reinforcement Learning (RL) agents must rely on long-term memory in order to learn an optimal policy. We demonstrate that using techniques from NLP and supervised learning fails at RL tasks due to stochasticity from the environment and from exploration. Utilizing our insights on the limitations of traditional memory methods in RL, we propose AMRL, a class of models that can learn better policies with greater sample efficiency and are resilient to noisy inputs. Specifically, our models use a standard memory module to summarize short-term context, and then aggregate all prior states from the standard model without respect to order. We show that this provides advantages both in terms of gradient decay and signal-to-noise ratio over time. Evaluating in Minecraft and maze environments that test long-term memory, we find that our model improves average return by 19% over a baseline that has the same number of parameters and by 9% over a stronger baseline that has far more parameters.""","""This paper introduces a way to augment memory in recurrent neural networks with order-independent aggregators. In noisy environments, this results in an increase in training speed and stability. The reviewers considered this to be a strong paper with potential for impact, and were satisfied with the author response to their questions and concerns.""" 918,"""Representation Quality Explain Adversarial Attacks""","['Representation Metrics', 'Adversarial Machine Learning', 'One-Pixel Attack', 'DeepFool', 'CapsNet']","""Neural networks have been shown to be vulnerable to adversarial samples. Slightly perturbed input images are able to change the classification of accurate models, showing that the representation learned is not as good as previously thought. To aid the development of better neural networks, it would be important to evaluate to what extent current neural networks' representations capture the existing features. Here we propose a way to evaluate the representation quality of neural networks using a novel type of zero-shot test, entitled Raw Zero-Shot. The main idea lies in the fact that some features are present in unknown classes and that unknown classes can be defined as a combination of previously learned features without representation bias (a bias towards representations that map only the current set of input-outputs and their boundary). To evaluate the soft-labels of unknown classes, two metrics are proposed. One is based on clustering validation techniques (Davies-Bouldin Index) and the other is based on the soft-label distance of a given correct soft-label. Experiments show that such metrics are in accordance with the robustness to adversarial attacks and might serve as guidance to build better models as well as be used in loss functions to create new types of neural networks. Interestingly, the results suggest that dynamic routing networks such as CapsNet have better representations, while current deeper DNNs are trading off representation quality for accuracy.""","""The reviewers found the aim of the paper interesting (to connect representation quality with adversarial examples).
However, the reviewers consistently pointed out writing issues, such as inaccurate or unsubstantiated claims, which are not appropriate for a scientific venue. The reviewers also found the experiments, which are on simple datasets, unconvincing.""" 919,"""Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model""",[],""" Recent breakthroughs of pretrained language models have shown the effectiveness of self-supervised learning for a wide range of natural language processing (NLP) tasks. In addition to standard syntactic and semantic NLP tasks, pretrained models achieve strong improvements on tasks that involve real-world knowledge, suggesting that large-scale language modeling could be an implicit method to capture knowledge. In this work, we further investigate the extent to which pretrained models such as BERT capture knowledge using a zero-shot fact completion task. Moreover, we propose a simple yet effective weakly supervised pretraining objective, which explicitly forces the model to incorporate knowledge about real-world entities. Models trained with our new objective yield significant improvements on the fact completion task. When applied to downstream tasks, our model consistently outperforms BERT on four entity-related question answering datasets (i.e., WebQuestions, TriviaQA, SearchQA and Quasar-T) with an average 2.7 F1 improvement and on a standard fine-grained entity typing dataset (i.e., FIGER) with a 5.7 accuracy gain.""","""This submission proposes a secondary objective when learning language models like BERT that improves the ability of such models to learn entity-centric information. This additional objective involves predicting whether an entity has been replaced. Replacement entities are mined using Wikidata. Strengths: -The proposed method is simple and shows significant performance improvements for various tasks including fact completion and question answering. Weaknesses: -The experimental settings and data splits were not always clear. This was sufficiently addressed in a revised version. -The paper could have probed performance on tasks involving less common entities. The reviewer consensus was to accept this submission. """ 920,"""On Bonus Based Exploration Methods In The Arcade Learning Environment""","['exploration', 'arcade learning environment', 'bonus-based methods']","""Research on exploration in reinforcement learning, as applied to Atari 2600 game-playing, has emphasized tackling difficult exploration problems such as Montezuma's Revenge (Bellemare et al., 2016). Recently, bonus-based exploration methods, which explore by augmenting the environment reward, have reached above-human average performance on such domains. In this paper we reassess popular bonus-based exploration methods within a common evaluation framework. We combine Rainbow (Hessel et al., 2018) with different exploration bonuses and evaluate its performance on Montezuma's Revenge, Bellemare et al.'s set of hard exploration games with sparse rewards, and the whole Atari 2600 suite. We find that while exploration bonuses lead to higher scores on Montezuma's Revenge, they do not provide meaningful gains over the simpler epsilon-greedy scheme. In fact, we find that methods that perform best on that game often underperform epsilon-greedy on easy exploration Atari 2600 games. We find that our conclusions remain valid even when hyperparameters are tuned for these easy-exploration games.
Finally, we find that none of the methods surveyed benefit from additional training samples (1 billion frames, versus Rainbow's 200 million) on Bellemare et al.'s hard exploration games. Our results suggest that recent gains in Montezuma's Revenge may be better attributed to architecture changes rather than better exploration schemes; and that the real pace of progress in exploration research for Atari 2600 games may have been obfuscated by good results on a single domain.""","""This paper presents a detailed comparison of different bonus-based exploration methods on a common evaluation framework (Rainbow) when used with the ATARI game suite. They find that while these bonuses help on Montezuma's Revenge (MR), they underperform relative to epsilon-greedy on other games. This suggests that architectural changes may be a more important factor than bonus-based exploration in recent advances on MR. The reviewers commented that this paper makes no effort to present new techniques, and that the insights discovered could be expanded on. Despite this, it is an interesting paper that is generally well argued and would be a useful contribution to the field. I recommend acceptance.""" 921,"""Learning to Coordinate Manipulation Skills via Skill Behavior Diversification""","['reinforcement learning', 'hierarchical reinforcement learning', 'modular framework', 'skill coordination', 'bimanual manipulation']","""When mastering a complex manipulation task, humans often decompose the task into sub-skills of their body parts, practice the sub-skills independently, and then execute the sub-skills together. Similarly, a robot with multiple end-effectors can perform complex tasks by coordinating sub-skills of each end-effector. To realize temporal and behavioral coordination of skills, we propose a modular framework that first individually trains sub-skills of each end-effector with skill behavior diversification, and then learns to coordinate end-effectors using diverse behaviors of the skills. We demonstrate that our proposed framework is able to efficiently coordinate skills to solve challenging collaborative control tasks such as picking up a long bar, placing a block inside a container while pushing the container with two robot arms, and pushing a box with two ant agents. Videos and code are available at pseudo-url""","""This paper deals with multi-agent hierarchical reinforcement learning. A discrete set of pre-specified low-level skills is modulated by a conditioning vector and trained in a fashion reminiscent of Diversity Is All You Need, and then combined via a meta-policy which coordinates multiple agents in pursuit of a goal. The idea is that fine control over primitive skills is beneficial for achieving coordinated high-level behaviour. The paper improved considerably in its completeness and in the addition of baselines, notably DIAYN without discrete, mutually exclusive skills. Reviewers agreed that the problem is interesting and that the method, despite involving a degree of hand-crafting, showed promise for informing future directions.
On the basis that this work addresses an interesting problem setting with a compelling set of experiments, I recommend acceptance.""" 922,"""Tranquil Clouds: Neural Networks for Learning Temporally Coherent Features in Point Clouds""","['point clouds', 'spatio-temporal representations', 'Lagrangian data', 'temporal coherence', 'super-resolution', 'denoising']","""Point clouds, as a form of Lagrangian representation, allow for powerful and flexible applications in a large number of computational disciplines. We propose a novel deep-learning method to learn stable and temporally coherent feature spaces for point clouds that change over time. We identify a set of inherent problems with these approaches: without knowledge of the time dimension, the inferred solutions can exhibit strong flickering, and easy solutions to suppress this flickering can result in undesirable local minima that manifest themselves as halo structures. We propose a novel temporal loss function that takes into account higher time derivatives of the point positions, and encourages mingling, i.e., it prevents the aforementioned halos. We combine these techniques in a super-resolution method with a truncation approach to flexibly adapt the size of the generated positions. We show that our method works for large, deforming point sets from different sources to demonstrate the flexibility of our approach.""","""This paper provides an improved method for deep learning on point clouds. Reviewers are unanimous that this paper is acceptable, and the AC concurs. """ 923,"""On the Linguistic Capacity of Real-time Counter Automata""","['formal language theory', 'counter automata', 'natural language processing', 'deep learning']","""While counter machines have received little attention in theoretical computer science since the 1960s, they have recently achieved a newfound relevance to the field of natural language processing (NLP). Recent work has suggested that some strong-performing recurrent neural networks utilize their memory as counters. Thus, one potential way to understand the success of these networks is to revisit the theory of counter computation. Therefore, we choose to study the abilities of real-time counter machines as formal grammars. We first show that several variants of the counter machine converge to express the same class of formal languages. We also prove that counter languages are closed under complement, union, intersection, and many other common set operations. Next, we show that counter machines cannot evaluate boolean expressions, even though they can weakly validate their syntax. This has implications for the interpretability and evaluation of neural network systems: successfully matching syntactic patterns does not guarantee that a counter-like model accurately represents underlying semantic structures. Finally, we consider the question of whether counter languages are semilinear. This work makes general contributions to the theory of formal languages that are of particular interest for the interpretability of recurrent neural networks.""","""This paper presents an analysis of the languages that can be accepted by a counter machine, motivated by recent work that suggests that counter machines might be a good formal model from which to approach the analysis of LSTM representations. This is one of the trickiest papers in my batch. Reviewers agree that it represents an interesting and provocative direction, and I suspect that it could yield valuable discussion at the conference.
However, reviewers were not convinced that the claims made (or implied) _about LSTMs_ are motivated, given the imperfect analogy between them and counter machines. The authors promise some empirical evidence that might mitigate these concerns to some extent, but the paper has not yet been updated, so I cannot take that into account. As a very secondary point, which is only relevant because this paper is borderline, LSTMs are no longer widely used for language tasks, so discussion about the capacity of LSTMs _for language_ seems like an imperfect fit for a machine learning conference with a fairly applied bent.""" 924,"""Subjective Reinforcement Learning for Open Complex Environments""","['reinforcement learning theory', 'subjective learning']","""Solving tasks in open environments has been one of the long-time pursuits of reinforcement learning research. We propose that data confusion is the core underlying problem. Although there exist methods that implicitly alleviate it from different perspectives, we argue that their solutions are based on task-specific prior knowledge that is constrained to certain kinds of tasks and lacks theoretical guarantees. In this paper, the Subjective Reinforcement Learning framework is proposed to state the problem from a broader and more systematic view, and the subjective policy is proposed to represent existing related algorithms in general. Theoretical analysis is given of the conditions for the superiority of a subjective policy, and of the relationship between model complexity and the overall performance. The results are further applied as guidance for algorithm design without task-specific prior knowledge about the tasks. ""","""The authors propose a learning framework to reframe non-stationary MDPs as smaller stationary MDPs, thus hopefully addressing problems with contradictory or continually changing environments. A policy is learned for each sub-MDP, and the authors present theoretical guarantees that the reframing does not inhibit agent performance. The reviewers discussed the paper and the authors' rebuttal. They were mainly concerned that the submission offered no practical implementation or demonstration of feasibility, and secondarily concerned that the paper was unclearly written and motivated. The authors' rebuttal did not resolve these issues. My recommendation is to reject the submission and encourage the authors to develop an empirical validation of their method before resubmitting.""" 925,"""On the Variance of the Adaptive Learning Rate and Beyond""","['warmup', 'adam', 'adaptive learning rate', 'variance']","""The learning rate warmup heuristic achieves remarkable success in stabilizing training, accelerating convergence and improving generalization for adaptive stochastic optimization algorithms like RMSprop and Adam. Pursuing the theory behind warmup, we identify a problem of the adaptive learning rate -- its variance is problematically large in the early stage -- and presume that warmup works as a variance reduction technique. We provide both empirical and theoretical evidence to verify our hypothesis. We further propose Rectified Adam (RAdam), a novel variant of Adam, by introducing a term to rectify the variance of the adaptive learning rate. Experimental results on image classification, language modeling, and neural machine translation verify our intuition and demonstrate the efficacy and robustness of RAdam. ""","""The paper considers the important topic of warmup in deep learning, and investigates the problem of the adaptive learning rate.
While the paper is somewhat borderline, the reviewers agree that it might be useful to present it to the ICLR community.""" 926,"""Dynamic Time Lag Regression: Predicting What & When""","['Dynamic Time-Lag Regression', 'Time Delay', 'Regression', 'Time Series']","""This paper tackles a new regression problem, called Dynamic Time-Lag Regression (DTLR), where a cause signal drives an effect signal with an unknown time delay. The motivating application, pertaining to space weather modelling, aims to predict the near-Earth solar wind speed based on estimates of the Sun's coronal magnetic field. DTLR differs from mainstream regression and from sequence-to-sequence learning in two respects: firstly, no ground truth (e.g., pairs of associated sub-sequences) is available; secondly, the cause signal contains much information irrelevant to the effect signal (the solar magnetic field governs the solar wind propagation in the heliosphere, of which the Earth's magnetosphere is but a minuscule region). A Bayesian approach is presented to tackle the specifics of the DTLR problem, with theoretical justifications based on linear stability analysis. A proof of concept on synthetic problems is presented. Finally, the empirical results on the solar wind modelling task improve on the state of the art in solar wind forecasting.""","""The paper proposes a Bayesian approach for time-series regression when the explanatory time-series influences the response time-series with a time lag. The time lag is unknown and allowed to be a non-stationary process. Reviewers have appreciated the significance of the problem and the novelty of the proposed method, and also highlighted the importance of the application domain considered by the paper. """ 927,"""The intriguing role of module criticality in the generalization of deep networks""","['Module Criticality Phenomenon', 'Complexity Measure', 'Deep Learning']","""We study the phenomenon that some modules of deep neural networks (DNNs) are more critical than others, meaning that rewinding their parameter values back to initialization, while keeping other modules fixed at the trained parameters, results in a large drop in the network's performance. Our analysis reveals interesting properties of the loss landscape, which leads us to propose a complexity measure, called module criticality, based on the shape of the valleys that connect the initial and final values of the module parameters. We formulate how generalization relates to module criticality, and show that this measure is able to explain the superior generalization performance of some architectures over others, whereas earlier measures fail to do so.""","""The paper analyses the importance of different DNN modules for generalization performance, explaining why certain architectures may perform much better than others. All reviewers agree that this is an interesting paper with a novel and important contribution. """ 928,"""Semi-Implicit Back Propagation""","['Optimization', 'Neural Network', 'Proximal mapping', 'Back propagation', 'Implicit']","""Neural networks have attracted great attention for a long time, and many researchers are devoted to improving the effectiveness of neural network training algorithms. Though stochastic gradient descent (SGD) and other explicit gradient-based methods are widely adopted, there are still many challenges such as gradient vanishing and small step sizes, which lead to slow convergence and instability of SGD algorithms.
Motivated by error back propagation (BP) and proximal methods, we propose a semi-implicit back propagation method for neural network training. Similar to BP, the differences at the neurons are propagated in a backward fashion and the parameters are updated with proximal mapping. The implicit update for both hidden neurons and parameters allows choosing large step sizes in the training algorithm. Finally, we also show that any fixed point of convergent sequences produced by this algorithm is a stationary point of the objective loss function. The experiments on both MNIST and CIFAR-10 demonstrate that the proposed semi-implicit BP algorithm leads to better performance in terms of both loss decrease and training/validation accuracy, compared to SGD and a similar algorithm, ProxBP.""","""The reviewers unequivocally reject the paper, which is mostly experimental and the results of which are limited. The authors did not respond to the reviewers' comments.""" 929,"""Learning to Remember from a Multi-Task Teacher""","['Meta-learning', 'sequential learning', 'catastrophic forgetting']","""Recent studies on catastrophic forgetting during sequential learning typically focus on fixing the accuracy of the predictions for a previously learned task. In this paper we argue that the outputs of neural networks are subject to rapid changes when learning a new data distribution, and networks that appear to ""forget"" everything still contain useful representations of previous tasks. We thus propose that, rather than enforcing the output accuracy to stay the same, we should aim to reduce the effect of catastrophic forgetting at the representation level, as the output layer can be quickly recovered later with a small number of examples. Towards this goal, we propose an experimental setup that measures the amount of representational forgetting, and develop a novel meta-learning algorithm to overcome this issue. The proposed meta-learner produces weight updates of a sequential learning network, mimicking a multi-task teacher network's representation. We show that our meta-learner can improve its learned representations on new tasks, while maintaining a good representation for old tasks.""","""The paper addresses the setting of continual learning. Instead of focusing on catastrophic forgetting measured in terms of the output performance of the previous tasks, the authors tackle forgetting that happens at the level of the feature representation via a meta-learning approach. As rightly acknowledged by R2, from a meta-learning perspective the work is quite interesting and demonstrates a number of promising results. However, the reviewers have raised several important concerns that placed this work below the acceptance bar: (1) the current manuscript lacks convincing empirical evaluations that clearly show the benefits of the proposed approach over SOTA continual learning methods; specifically, the generalization of the proposed strategy to more than two sequential tasks is essential; also see R1's detailed suggestions that would strengthen the contributions of this approach in light of continual learning; (2) training a meta-learner to predict the weight updates with supervision from a multi-task teacher network as an oracle, albeit nicely motivated, is unrealistic in the continual learning setting -- see R1's detailed comments on this issue. (3) R2 and R3 expressed concerns regarding i) stronger baselines that are tuned to take advantage of the meta-learning data and ii) transferability to the different new tasks, i.e.
dissimilarity of the meta-train and meta-test settings. The AC is pleased to report that the authors showed and discussed in their response some initial qualitative results regarding these issues. An analysis of the performance of the proposed method when the meta-training and testing datasets are made progressively dissimilar would strengthen the evaluation of the proposed meta-learning approach. There is reviewer disagreement on this paper. The AC can confirm that all three reviewers have read the rebuttal and have contributed to a long discussion. Among the aforementioned concerns, (3) did not have a decisive impact on the decision, but would be helpful to address in a subsequent revision. However, (1) and (2) make it very difficult to assess the benefits of the proposed approach, and were viewed by the AC as critical issues. The AC suggests that, in its current state, the manuscript is not ready for publication and needs a major revision before submitting for another round of reviews. We hope the reviews are useful for improving and revising the paper. """ 930,"""Learning RNNs with Commutative State Transitions""",[],"""Many machine learning tasks involve analysis of set-valued inputs, and thus the learned functions are expected to be permutation invariant. Recent works (e.g., Deep Sets) have sought to characterize the neural architectures which result in permutation invariance. These typically correspond to applying the same pointwise function to all set components, followed by sum aggregation. Here we take a different approach to such architectures and focus on recursive architectures such as RNNs, which are not permutation invariant in general, but can implement permutation invariant functions in a very compact manner. We first show that commutativity and associativity of the state transition function result in permutation invariance. Next, we derive a regularizer that minimizes the degree of non-commutativity in the transitions. Finally, we demonstrate that the resulting method outperforms other methods for learning permutation invariant models, due to its use of recursive computation.""","""This paper examines learning problems where the network outputs are intended to be invariant to permutations of the network inputs. Some past approaches for this problem setting have enforced permutation-invariance by construction. This paper takes a different approach, using a recurrent neural network that passes over the data. The paper proves the network will be permutation invariant when the internal state transition function is associative and commutative. The paper then focuses on the commutative property by describing a regularization objective that pushes the recurrent network towards becoming commutative. Experimental results with this regularizer show potentially better performance than DeepSet, another architecture that is designed for permutation invariance. The subsequent discussion of the paper raised several concerns with the current version of the paper. The theoretical contributions for full permutation-invariance follow quickly from the prior DeepSet results. The paper's focus on commutative regularization in the absence of associative regularization is not compelling if the objective is really for permutation invariance. The experimental results were limited in scope. These results lacked error bars and an examination of the relevance of associativity.
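To make the commutativity regularizer of paper 930 concrete: below is a minimal sketch, assuming a generic transition callable with signature (state, input) -> state and a squared-error penalty; both are illustrative choices, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn

def commutativity_penalty(cell, h, x1, x2):
    # A commutative transition gives the same state regardless of input order:
    # cell(cell(h, x1), x2) == cell(cell(h, x2), x1).
    h12 = cell(cell(h, x1), x2)
    h21 = cell(cell(h, x2), x1)
    return ((h12 - h21) ** 2).mean()

# Hypothetical usage with a GRU cell (wrapped so the signature is (state, input)):
gru = nn.GRUCell(input_size=8, hidden_size=16)
cell = lambda h, x: gru(x, h)
h0 = torch.zeros(4, 16)                        # batch of 4 initial states
x1, x2 = torch.randn(4, 8), torch.randn(4, 8)  # two set elements
reg = commutativity_penalty(cell, h0, x1, x2)  # added to the task loss with a weight
```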
The reviewers also identified several related lines of work, missing from the paper, which could provide additional context for the results. This paper is not ready for publication due to the multiple concerns raised by the reviewers. The paper would become stronger by addressing these concerns, particularly the associativity of the transition function, empirical results, and related work. """ 931,"""Bandlimiting Neural Networks Against Adversarial Attacks""","['adversarial examples', 'adversarial attack defense', 'neural network', 'Fourier analysis']","""In this paper, we study the adversarial attack and defence problem in deep learning from the perspective of Fourier analysis. We first explicitly compute the Fourier transform of deep ReLU neural networks and show that there exist decaying but non-zero high frequency components in the Fourier spectrum of neural networks. We then demonstrate that the vulnerability of neural networks towards adversarial samples can be attributed to these insignificant but non-zero high frequency components. Based on this analysis, we propose to use a simple post-averaging technique to smooth out these high frequency components to improve the robustness of neural networks against adversarial attacks. Experimental results on the ImageNet and the CIFAR-10 datasets have shown that our proposed method is universally effective in defending against many existing adversarial attack methods proposed in the literature, including FGSM, PGD, DeepFool and C&W attacks. Our post-averaging method is simple since it does not require any re-training, and meanwhile it can successfully defend against over 80-96% of the adversarial samples generated by these methods without introducing significant performance degradation (less than 2%) on the original clean images.""","""The reviewers recommend rejection due to various concerns about novelty and experimental validation. The authors have not provided a response.""" 932,"""Evolutionary Population Curriculum for Scaling Multi-Agent Reinforcement Learning""","['multi-agent reinforcement learning', 'evolutionary learning', 'curriculum learning']","""In multi-agent games, the complexity of the environment can grow exponentially as the number of agents increases, so it is particularly challenging to learn good policies when the agent population is large. In this paper, we introduce Evolutionary Population Curriculum (EPC), a curriculum learning paradigm that scales up Multi-Agent Reinforcement Learning (MARL) by progressively increasing the population of training agents in a stage-wise manner. Furthermore, EPC uses an evolutionary approach to fix an objective misalignment issue throughout the curriculum: agents successfully trained in an early stage with a small population are not necessarily the best candidates for adapting to later stages with scaled populations. Concretely, EPC maintains multiple sets of agents in each stage, performs mix-and-match and fine-tuning over these sets, and promotes the sets of agents with the best adaptability to the next stage. We implement EPC on a popular MARL algorithm, MADDPG, and empirically show that our approach consistently outperforms baselines by a large margin as the number of agents grows exponentially. The source code and videos can be found at pseudo-url.""","""The paper proposes a curriculum approach to increasing the number of agents (and hence complexity) in MARL. The reviewers mostly agreed that this is a simple and useful idea for the MARL community.
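As a rough illustration of the stage-wise loop the EPC abstract describes, here is a hedged sketch of one curriculum stage; `train`, `mix_and_match`, and `fitness` are hypothetical callables standing in for MADDPG fine-tuning at the scaled population size and for adaptability evaluation.

```python
def epc_stage(candidate_sets, train, mix_and_match, fitness, n_keep=3):
    """One EPC stage (sketch): combine candidate agent sets to scale up the
    population, fine-tune each combined set, and promote the most adaptable
    sets to the next stage."""
    scaled = [mix_and_match(a, b) for a in candidate_sets for b in candidate_sets]
    scaled = [train(agents) for agents in scaled]   # fine-tune at the new scale
    scaled.sort(key=fitness, reverse=True)          # evolutionary selection
    return scaled[:n_keep]
```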
There was some initial disagreement about relationships with other RL + evolution approaches, but this was resolved in the rebuttal. Another concern was the slight differences in the environments considered by the paper compared to the literature, but the authors added an experiment with the unmodified version. Given the positive assessment and the successful rebuttal, I recommend acceptance.""" 933,"""On Generalization Error Bounds of Noisy Gradient Methods for Non-Convex Learning""","['learning theory', 'generalization', 'nonconvex learning', 'stochastic gradient descent', 'Langevin dynamics']","""Generalization error (also known as the out-of-sample error) measures how well the hypothesis learned from training data generalizes to previously unseen data. Proving tight generalization error bounds is a central question in statistical learning theory. In this paper, we obtain generalization error bounds for learning general non-convex objectives, which has attracted significant attention in recent years. We develop a new framework, termed Bayes-Stability, for proving algorithm-dependent generalization error bounds. The new framework combines ideas from both the PAC-Bayesian theory and the notion of algorithmic stability. Applying the Bayes-Stability method, we obtain new data-dependent generalization bounds for stochastic gradient Langevin dynamics (SGLD) and several other noisy gradient methods (e.g., with momentum, mini-batch and acceleration, Entropy-SGD). Our result recovers (and is typically tighter than) a recent result in Mou et al. (2018) and improves upon the results in Pensia et al. (2018). Our experiments demonstrate that our data-dependent bounds can distinguish randomly labelled data from normal data, which provides an explanation for the intriguing phenomena observed in Zhang et al. (2017a). We also study the setting where the total loss is the sum of a bounded loss and an additional $\ell_2$ regularization term. We obtain new generalization bounds for the continuous Langevin dynamics in this setting by developing a new Log-Sobolev inequality for the parameter distribution at any time. Our new bounds are more desirable when the noise level of the process is not very small, and do not become vacuous even when T tends to infinity.""","""The authors provide bounds on the expected generalization error for noisy gradient methods (such as SGLD). They do so using the information theoretic framework initiated by Russo and Zou, where the expected generalization error is controlled by the mutual information between the weights and the training data. The work builds on the approach pioneered by Pensia, Jog, and Loh, who proposed to bound the mutual information for noisy gradient methods in a step-wise fashion. The main innovation of this work is that they do not implicitly condition on the minibatch sequence when bounding the mutual information. Instead, this uncertainty manifests as a mixture of Gaussians. Essentially, they avoid the looseness implied by an application of Jensen's inequality that they have shown was unnecessary. I think this is an interesting contribution and worth publishing. It contributes to a rapidly progressing literature on generalization bounds for SGLD that are becoming increasingly tight. I have two strong requests that I will make of the authors, and I'll be quite disappointed if they are not executed faithfully. 1. The stepsize constraint and its violation in the experimental work are currently buried in the appendix.
This fact must be brought into the main paper and made transparent to readers, otherwise it will pervert empirical comparisons and mask progress. 2. In fact, I would like the authors to re-run their experiments in a way that guarantees that the bounds are applicable. One approach is outlined by the authors: the Lipschitz constant can be replaced by a max_i bound on the running squared gradient norms, and then gradient clipping can be used to guarantee that the step-size constraint is met. The authors might compare step sizes, allowing them to use less severe gradient clipping. The point of this exercise is to verify that the learning dynamics don't change when the bound conditions are met. If they change, it may upset the empirical phenomena they are trying to study. If this change does upset the empirical findings, then the authors should present both, and clearly explain that the bound is not, strictly speaking, known to be valid in one of the cases. It will be a good open problem. """ 934,"""Dual Graph Representation Learning""",[],"""Graph representation learning embeds nodes in large graphs as low-dimensional vectors and benefits many downstream applications. Most embedding frameworks, however, are inherently transductive and unable to generalize to unseen nodes or learn representations across different graphs. Inductive approaches, such as GraphSAGE, neglect different contexts of nodes and cannot learn node embeddings dually. In this paper, we present an unsupervised dual encoding framework, \textbf{CADE}, to generate context-aware representations of nodes by combining real-time neighborhood structure with neighbor-attentioned representation, and preserving extra memory of known nodes. Experimentally, we show that our approach is effective by comparing it to state-of-the-art methods.""","""This work proposes context-aware representation of graph nodes leveraging attention over neighbors (as already done in previous work). Reviewers' concerns about the lack of novelty, the lack of clarity of the paper, and the lack of comparison to state-of-the-art methods were not addressed at all. We recommend rejection.""" 935,"""Recurrent Hierarchical Topic-Guided Neural Language Models""","['Bayesian deep learning', 'recurrent gamma belief net', 'larger-context language model', 'variational inference', 'sentence generation', 'paragraph generation']","""To simultaneously capture syntax and semantics from a text corpus, we propose a new larger-context language model that extracts recurrent hierarchical semantic structure via a dynamic deep topic model to guide natural language generation. Moving beyond a conventional language model that ignores long-range word dependencies and sentence order, the proposed model captures not only intra-sentence word dependencies, but also temporal transitions between sentences and inter-sentence topic dependencies. For inference, we develop a hybrid of stochastic-gradient MCMC and recurrent autoencoding variational Bayes. Experimental results on a variety of real-world text corpora demonstrate that the proposed model not only outperforms state-of-the-art larger-context language models, but also learns interpretable recurrent multilayer topics and generates diverse sentences and paragraphs that are syntactically correct and semantically coherent.""","""This paper was a very difficult case. All three original reviewers of the paper had never published in the area, and all of them advocated for acceptance of the paper.
I, on the other hand, am an expert in the area who has published many papers, and I thought that while the paper is well-written and the experimental evaluation is not incorrect, the method was perhaps less relevant given current state-of-the-art models. In addition, the somewhat non-standard evaluation was perhaps causing this fact to be masked. I asked the original reviewers to consider my comments multiple times both during the rebuttal period and after, and unfortunately none of them replied. Because of this, I elicited two additional reviews from people I knew were experts in the field. The reviews are below. I sent the PDF to the reviewers directly, and asked them to not look at the existing reviews (or my comments) when doing their review in order to make sure that they were making a fair assessment. Long story short, Reviewer 4 essentially agreed with my concerns and pointed out a few additional clarity issues. Reviewer 5 pointed out a number of clarity issues and was also concerned with the fact that d_j has access to all other sentences (including those following the current sentence). I know that at the end of Section 2 it is noted that at test time d_j only refers to previous sentences, but if so there is also a training-testing disconnect, and it seems that this would hurt the model results. Based on this, I have decided to favor the opinions of three experts (me and the two additional reviewers) over the opinions of the original three reviewers, and not recommend the paper for acceptance at this time. In order to improve the paper I would suggest the following: (1) an acknowledgement of standard methods to incorporate context by processing sequences consisting of multiple sentences simultaneously, (2) a more thorough comparison with state-of-the-art models that consider cross-sentential context on standard datasets such as WikiText or PTB. I would encourage the authors to consider this as they revise their paper. Finally, I would like to apologize to the authors that they did not get a chance to reply to the second set of reviews. As I noted above, I did try my best to encourage discussion during the rebuttal period.""" 936,"""Monte Carlo Deep Neural Network Arithmetic""","['deep learning', 'quantization', 'floating point', 'monte carlo methods']","""Quantization is a crucial technique for achieving low-power, low-latency and high-throughput hardware implementations of Deep Neural Networks. Quantized floating point representations have received recent interest due to their hardware efficiency benefits and ability to represent a higher dynamic range than fixed point representations, leading to improvements in accuracy. We present a novel technique, Monte Carlo Deep Neural Network Arithmetic (MCA), for determining the sensitivity of Deep Neural Networks to quantization in floating point arithmetic. We do this by applying Monte Carlo Arithmetic to the inference computation and analyzing the relative standard deviation of the neural network loss. The method makes no assumptions regarding the underlying parameter distributions. We evaluate our method on pre-trained image classification models on the CIFAR10 and ImageNet datasets. For the same network topology and dataset, we demonstrate the ability to gain the equivalent of bits of precision by simply choosing weight parameter sets which demonstrate a lower loss of significance from the Monte Carlo trials.
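To illustrate the flavor of this technique, here is a minimal sketch that emulates Monte Carlo Arithmetic at the level of the weights: inject small random relative perturbations (standing in for rounding error) over repeated trials and report the relative standard deviation of the loss. The noise model and the `loss_fn` interface are assumptions, not the paper's exact procedure.

```python
import numpy as np

def mca_sensitivity(loss_fn, weights, n_trials=30, rel_noise=1e-7, seed=0):
    """Relative standard deviation of the loss under random relative
    perturbations of the weights; higher values suggest greater sensitivity
    to finite-precision effects."""
    rng = np.random.default_rng(seed)
    losses = []
    for _ in range(n_trials):
        noisy = [w * (1.0 + rel_noise * rng.standard_normal(w.shape)) for w in weights]
        losses.append(loss_fn(noisy))
    losses = np.asarray(losses)
    return losses.std() / abs(losses.mean())
```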
Additionally, we can apply MCA to compare the sensitivity of different network topologies to quantization effects.""","""The paper studies the impact of rounding errors on deep neural networks. The authors apply Monte Carlo arithmetic to standard DNN operations. Their results indeed show catastrophic cancellation in DNNs and that the resulting loss of significance in the number representation correlates with a decrease in validation performance, indicating that DNN performances are sensitive to rounding errors. Although recognizing that the paper addresses an important problem (quantized / finite precision neural networks), the reviewers point out that the contribution of the paper is somewhat incremental. During the rebuttal, the authors made an effort to improve the manuscript based on reviewer suggestions; however, review scores did not increase. The paper is slightly below the acceptance threshold, based on the reviews and my own reading, as the method is mostly restricted to diagnostics and cannot yet be used to help train low-precision neural networks.""" 937,"""Amata: An Annealing Mechanism for Adversarial Training Acceleration""",[],"""Despite their empirical success in various domains, deep neural networks have been revealed to be vulnerable to maliciously perturbed input data that can severely degrade their performance. Such perturbations are known as adversarial attacks. To counter adversarial attacks, adversarial training formulated as a form of robust optimization has been demonstrated to be effective. However, conducting adversarial training incurs substantial computational overhead compared with standard training. In order to reduce the computational cost, we propose a simple yet effective modification to the commonly used projected gradient descent (PGD) adversarial training by increasing the number of adversarial training steps and decreasing the adversarial training step size gradually as training proceeds. We analyze the optimality of this annealing mechanism through the lens of optimal control theory, and we also prove the convergence of our proposed algorithm. Numerical experiments on standard datasets, such as MNIST and CIFAR10, show that our method can achieve similar or even better robustness with around 1/3 to 1/2 of the computation time compared with PGD.""","""The paper proposes a modification for adversarial training in order to improve the robustness of the algorithm by developing an annealing mechanism for PGD adversarial training. This mechanism gradually reduces the step size and increases the number of iterations of PGD maximization. One reviewer found the paper to be clear and competitive with existing work, but raised concerns about novelty and significance. Another reviewer noted the significant improvements in training times but had concerns about small-scale datasets. The final reviewer liked the optimal control formulation, and requested further details. The authors provided detailed answers and responses to the reviews, although some of these concerns remain. The paper has improved over the course of the review, but due to a large number of stronger papers, was not accepted at this time.""" 938,"""The advantage of using Student's t-priors in variational autoencoders""","['Variational Autoencoders', 'DLVMs', 'Posterior Collapse']","""Is it optimal to use the standard Gaussian prior in variational autoencoders? With Gaussian distributions, which are not weakly informative priors, variational autoencoders struggle to reconstruct the actual data.
We provide numerical evidence that encourages using Student's t-distributions as default priors in variational autoencoders, and we challenge the usual setup for the variational autoencoder structure by comparing Gaussian and Student's t-distribution priors with different forms of the covariance matrix.""","""The consensus among all reviewers was to reject this paper, and the authors did not provide a rebuttal.""" 939,"""Leveraging Adversarial Examples to Obtain Robust Second-Order Representations""","['Second-order representation', 'adversarial examples', 'robustness', 'gradients']","""Deep neural networks represent data as projections on trained weights in a high dimensional manifold. This is a first-order based absolute representation that is widely used due to its interpretable nature and simple mathematical functionality. However, in the application of visual recognition, first-order representations trained on pristine images have shown a vulnerability to distortions. Visual distortions including image acquisition errors and challenging environmental conditions like blur, exposure, snow and frost cause incorrect classification in first-order neural nets. To eliminate vulnerabilities under such distortions, we propose representing data points by their relative positioning in a high dimensional manifold instead of their absolute positions. Such a positioning scheme is based on a data point's second-order property. We obtain a data point's second-order representation by creating adversarial examples to all possible decision boundaries and tracking the movement of corresponding boundaries. We compare our representation against first-order methods and show that there is an increase of more than 14% under severe distortions for ResNet-18. We test the generalizability of the proposed representation on larger networks and on 19 complex and real-world distortions from CIFAR-10-C. Furthermore, we show how our proposed representation can be used as a plug-in approach on top of any network. We also provide methodologies to scale our proposed representation to larger datasets.""","""The authors propose a method to train a neural network that is robust to visual distortions of the input image. The reviewers agree that the paper lacks justification of the proposed method and experimental evidence of its performance.""" 940,"""LARGE SCALE REPRESENTATION LEARNING FROM TRIPLET COMPARISONS""","['representation learning', 'triplet comparison', 'contrastive learning', 'ordinal embedding']","""In this paper, we discuss the fundamental problem of representation learning from a new perspective. It has been observed in many supervised/unsupervised DNNs that the final layer of the network often provides an informative representation for many tasks, even though the network has been trained to perform a particular task. The common ingredient in all previous studies is a low-level feature representation for items, for example, RGB values of images in the image context. In the present work, we assume that no meaningful representation of the items is given. Instead, we are provided with the answers to some triplet comparisons of the following form: Is item A more similar to item B or item C? We provide a fast algorithm based on DNNs that constructs a Euclidean representation for the items, using solely the answers to the above-mentioned triplet comparisons. This problem has been studied in a sub-community of machine learning by the name ""Ordinal Embedding"".
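A minimal sketch of the ordinal-embedding setup just described: since the items carry no features, a learnable embedding table stands in for their representation, trained with a triplet hinge loss. The table and the hinge form are illustrative assumptions; the paper builds its own DNN-based construction.

```python
import torch
import torch.nn as nn

class OrdinalEmbedder(nn.Module):
    """Learn Euclidean item representations from answers to
    'is item i more similar to item j or to item k?'."""
    def __init__(self, n_items, dim):
        super().__init__()
        self.emb = nn.Embedding(n_items, dim)

    def forward(self, i, j, k):
        # i, j, k: LongTensors of item indices for answered triplets.
        xi, xj, xk = self.emb(i), self.emb(j), self.emb(k)
        d_ij = (xi - xj).pow(2).sum(-1)   # i was judged closer to j...
        d_ik = (xi - xk).pow(2).sum(-1)   # ...than to k
        return torch.relu(1.0 + d_ij - d_ik).mean()  # hinge on the margin
```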
Previous approaches to the problem are painfully slow and cannot scale to larger datasets. We demonstrate that our proposed approach is significantly faster than available methods, and can scale to real-world large datasets. Thereby, we also draw attention to the less explored idea of using neural networks to directly, approximately solve non-convex, NP-hard optimization problems that arise naturally in unsupervised learning problems.""","""The authors demonstrate how neural networks can be used to learn vectorial representations of a set of items given only triplet comparisons among those items. The reviewers had some concerns regarding the scale of the experiments and the strength of the conclusions: empirically, it seemed like there should be more truly large-scale experiments considering that this is a selling point; there should have been more analysis and/or discussion of why/how the neural networks help; and the claim that deep networks are approximately solving an NP-hard problem seemed unimportant as they are routinely used for this purpose in ML problems. With a combination of improved experiments and revised discussion/analysis, I believe a revised version of this paper could make a good submission to a future conference.""" 941,"""Lattice Representation Learning""","['lattices', 'representation learning', 'coding theory', 'lossy source coding', 'information theory']","""We introduce the notion of \emph{lattice representation learning}, in which the representation for some object of interest (e.g. a sentence or an image) is a lattice point in a Euclidean space. Our main contribution is a result for replacing an objective function which employs lattice quantization with an objective function in which quantization is absent, thus allowing optimization techniques based on gradient descent to apply; we call the resulting algorithms \emph{dithered stochastic gradient descent} algorithms as they are designed explicitly to allow for an optimization procedure where only local information is employed. We also argue that a technique commonly used in Variational Auto-Encoders (Gaussian priors and Gaussian approximate posteriors) is tightly connected with the idea of lattice representations, as the quantization error in good high dimensional lattices can be modeled as a Gaussian distribution. We use a traditional encoder/decoder architecture to explore the idea of lattice-valued representations, and provide experimental evidence of the potential of using lattice representations by modifying the \texttt{OpenNMT-py} generic \texttt{seq2seq} architecture so that it can implement not only Gaussian dithering of representations, but also the well-known straight-through estimator and its application to vector quantization. ""","""This paper presents a new view of latent variable learning as learning lattice representations. Overall, the reviewers thought the underlying ideas were interesting, but both the description and the experimentation in the paper were not quite sufficient at this time. I'd encourage the authors to continue on this path and take into account the extensive review feedback in improving the paper!""" 942,"""LAVAE: Disentangling Location and Appearance""","['structured scene representations', 'compositional representations', 'generative models', 'unsupervised learning']","""We propose a probabilistic generative model for unsupervised learning of structured, interpretable, object-based representations of visual scenes.
We use amortized variational inference to train the generative model end-to-end. The learned representations of object location and appearance are fully disentangled, and objects are represented independently of each other in the latent space. Unlike previous approaches that disentangle location and appearance, ours generalizes seamlessly to scenes with many more objects than encountered in the training regime. We evaluate the proposed model on multi-MNIST and multi-dSprites data sets.""","""This paper presents a VAE approach where the model learns representations while disentangling the location and appearance information. The reviewers found issues with the experimental evaluation of the paper, and gave much useful feedback. None of the reviewers were willing to change their score during the discussion period. With the current score, the paper does not make the cut for ICLR, and I recommend rejecting this paper. """ 943,"""Trajectory growth through random deep ReLU networks""","['Deep networks', 'expressivity', 'trajectory growth', 'sparse neural networks']","""This paper considers the growth in the length of one-dimensional trajectories as they are passed through deep ReLU neural networks, which, among other things, is one measure of the expressivity of deep networks. We generalise existing results, providing an alternative, simpler method for lower bounding expected trajectory growth through random networks, for a more general class of weight distributions, including sparsely connected networks. We illustrate this approach by deriving bounds for sparse-Gaussian, sparse-uniform, and sparse-discrete-valued random nets. We prove that trajectory growth can remain exponential in depth with these new distributions, including their sparse variants, with the sparsity parameter appearing in the base of the exponent.""","""This article studies the length of one-dimensional trajectories as they are mapped through the layers of a ReLU network, simplifying proof methods and generalising previous results on networks with random weights to cover different classes of weight distributions, including sparse ones. It is observed that the behaviour is similar for different distributions, suggesting a type of universality. The reviewers found that the paper is well written and appreciated the clear description of the places where the proofs deviate from previous works. However, they found that the results, although adding interesting observations in the sparse setting, are qualitatively very close to previous works and possibly not substantial enough for publication in ICLR. The revision includes some experiments with trained networks and updates the title to better reflect the contribution. However, the reviewers did not find this convincing enough. The article would benefit from a deeper theory clarifying the observations that have been made so far, and more extensive experiments connecting to practice. """ 944,"""Hyperparameter Tuning and Implicit Regularization in Minibatch SGD""","['SGD', 'momentum', 'batch size', 'learning rate', 'noise', 'temperature', 'implicit regularization', 'optimization', 'generalization']","""This paper makes two contributions towards understanding how the hyperparameters of stochastic gradient descent affect the final training loss and test accuracy of neural networks.
First, we argue that stochastic gradient descent exhibits two regimes with different behaviours: a noise-dominated regime which typically arises for small or moderate batch sizes, and a curvature-dominated regime which typically arises when the batch size is large. In the noise-dominated regime, the optimal learning rate increases as the batch size rises, and the training loss and test accuracy are independent of batch size under a constant epoch budget. In the curvature-dominated regime, the optimal learning rate is independent of batch size, and the training loss and test accuracy degrade as the batch size rises. We support these claims with experiments on a range of architectures including ResNets, LSTMs and autoencoders. We always perform a grid search over learning rates at all batch sizes. Second, we demonstrate that small or moderately large batch sizes continue to outperform very large batches on the test set, even when both models are trained for the same number of steps and reach similar training losses. Furthermore, when training Wide-ResNets on CIFAR-10 with a constant batch size of 64, the optimal learning rate to maximize the test accuracy only decays by a factor of 2 when the epoch budget is increased by a factor of 128, while the optimal learning rate to minimize the training loss decays by a factor of 16. These results confirm that the noise in stochastic gradients can introduce beneficial implicit regularization.""","""The authors provide an empirical evaluation of batch size and learning rate selection and its effect on training and generalization performance. As the authors and reviewers note, this is an active area of research, with many results closely related to the contributions of this paper already existing in the literature. In light of this work, reviewers felt that this paper did not clearly place itself in the appropriate context to make its contributions clear. Following the rebuttal, reviewers' minds remained unchanged. """ 945,"""Feature Map Transform Coding for Energy-Efficient CNN Inference""","['compression', 'efficient inference', 'quantization', 'memory bandwidth', 'entropy']",""" Convolutional neural networks (CNNs) achieve state-of-the-art accuracy in a variety of tasks in computer vision and beyond. One of the major obstacles hindering the ubiquitous use of CNNs for inference on low-power edge devices is their high computational complexity and memory bandwidth requirements. The latter often dominates the energy footprint on modern hardware. In this paper, we introduce a lossy transform coding approach, inspired by image and video compression, designed to reduce the memory bandwidth due to the storage of intermediate activation calculation results. Our method does not require fine-tuning the network weights and halves the data transfer volumes to the main memory by compressing feature maps, which are highly correlated, with variable length coding. Our method outperforms previous approaches in terms of the number of bits per value with minor accuracy degradation on ResNet-34 and MobileNetV2. We analyze the performance of our approach on a variety of CNN architectures and demonstrate that an FPGA implementation of ResNet-18 with our approach results in a reduction of around 40% in the memory energy footprint, compared to a quantized network, with negligible impact on accuracy. When allowing accuracy degradation of up to 2%, a reduction of 60% is achieved.
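To illustrate the kind of pipeline this abstract describes, here is a hedged sketch that uniformly quantizes a feature map and estimates the entropy of the resulting symbol stream, which lower-bounds the bits per value achievable with variable-length coding; the decorrelating transform and quantizer details of the actual method are omitted, and the interface is an assumption.

```python
import numpy as np

def quantize_and_estimate_bits(fmap, n_bits=4):
    """Uniformly quantize an activation tensor and estimate the entropy
    (expected bits per value) of the quantized symbols."""
    lo, hi = float(fmap.min()), float(fmap.max())
    levels = 2 ** n_bits
    q = np.round((fmap - lo) / (hi - lo + 1e-12) * (levels - 1)).astype(np.int64)
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    entropy = float(-(p * np.log2(p)).sum())         # bits per value (lower bound)
    dequantized = q / (levels - 1) * (hi - lo) + lo  # lossy reconstruction
    return dequantized, entropy
```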
A reference implementation accompanies the paper.""","""The paper proposed the use of a lossy transform coding approach to reduce the memory bandwidth required by the storage of intermediate activations. The experiments show that the proposed method reduces memory usage while maintaining accuracy. The main concern on this paper is the limited novelty. The lossy transform coding is borrowed from other domains and only its use on CNN intermediate activations is new, which seems insufficient. """ 946,"""In Search for a SAT-friendly Binarized Neural Network Architecture""","['verification', 'Boolean satisfiability', 'Binarized Neural Networks']","""Analyzing the behavior of neural networks is one of the most pressing challenges in deep learning. Binarized Neural Networks are an important class of networks that allow equivalent representation in Boolean logic and can be analyzed formally with logic-based reasoning tools like SAT solvers. Such tools can be used to answer existential and probabilistic queries about the network, perform explanation generation, etc. However, the main bottleneck for all methods is their ability to reason about large BNNs efficiently. In this work, we analyze architectural design choices of BNNs and discuss how they affect the performance of logic-based reasoners. We propose changes to the BNN architecture and the training procedure to get a simpler network for SAT solvers without sacrificing accuracy on the primary task. Our experimental results demonstrate that our approach scales to larger deep neural networks compared to existing work for existential and probabilistic queries, leading to significant speed-ups on all tested datasets. ""","""This paper studies how the architecture and training procedure of binarized neural networks can be changed in order to make it easier for SAT solvers to verify certain properties of them. All of the reviewers were positive about the paper, and their questions were addressed to their satisfaction, so all reviewers are in favor of accepting the paper. I therefore recommend acceptance.""" 947,"""Network Randomization: A Simple Technique for Generalization in Deep Reinforcement Learning""","['Deep reinforcement learning', 'Generalization in visual domains']","""Deep reinforcement learning (RL) agents often fail to generalize to unseen environments (even those semantically similar to the training environments), particularly when they are trained on high-dimensional state spaces, such as images. In this paper, we propose a simple technique to improve the generalization ability of deep RL agents by introducing a randomized (convolutional) neural network that randomly perturbs input observations. It enables trained agents to adapt to new domains by learning robust features invariant across varied and randomized environments. Furthermore, we consider an inference method based on the Monte Carlo approximation to reduce the variance induced by this randomization. We demonstrate the superiority of our method across 2D CoinRun, 3D DeepMind Lab exploration and 3D robotics control tasks: it significantly outperforms various regularization and data augmentation methods for the same purpose.""","""This submission proposes an RL method for learning policies that generalize better in novel visual environments. The authors propose to introduce some noise in the feature space rather than in the input space as is typically done for visual inputs. They also propose an alignment loss term to enforce invariance to the random perturbation.
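A minimal sketch of this randomization-plus-alignment idea, assuming image observations, a single random convolutional layer re-drawn per call, and an MSE alignment term; shapes, initialization, and the exact loss form are illustrative, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def randomize_observation(obs, channels=3):
    """Perturb observations with a freshly initialized (never trained)
    random convolutional layer."""
    rand_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
    for p in rand_conv.parameters():
        p.requires_grad_(False)
    return rand_conv(obs)

def alignment_loss(encoder, obs):
    """Encourage features that are invariant to the random perturbation;
    this term would be added to the RL loss with a weight."""
    h_clean = encoder(obs)
    h_rand = encoder(randomize_observation(obs))
    return F.mse_loss(h_rand, h_clean.detach())
```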
Reviewers agreed that the experimental results were extensive and that the proposed method is novel and works well. One reviewer felt that the experiments didn't sufficiently demonstrate invariance to additional potential domain shifts. AC believes that additional experiments to probe this would indeed be interesting, but that the demonstrated improvements when compared to existing image perturbation methods and existing regularization methods are sufficient experimental justification of the usefulness of the approach. Two reviewers felt that the method should be more extensively compared to data augmentation methods for computer vision tasks. AC believes that the proposed method is not only a data augmentation method, given that the added loss tries to enforce representation invariance to perturbations as well. As such, comparisons to feature adaptation techniques to tackle domain shift would be appropriate, but it is reasonable to consider this line of comparison beyond the scope of this particular work. AC agrees with the majority opinion that the submission should be accepted.""" 948,"""Translation Between Waves, wave2wave""","['sequence to sequence model', 'signal to signal', 'deep learning', 'RNN', 'encoder-decoder model']","""The understanding of sensor data has been greatly improved by advanced deep learning methods with big data. However, available sensor data in the real world are still limited, which is called the opportunistic sensor problem. This paper proposes a new variant of neural machine translation seq2seq to deal with continuous signal waves, by introducing a window-based (inverse-) representation to adaptively represent partial shapes of waves, and an iterative back-translation model for high-dimensional data. Experimental results are shown for two real-life datasets: earthquake and activity translation. The performance improvement on one-dimensional data was about 46% in test loss, and that on high-dimensional data was about 1625% in perplexity, relative to the original seq2seq. ""","""The paper considers the task of sequence to sequence modelling with multivariate, real-valued time series. The authors propose an encoder-decoder based architecture that operates on fixed windows of the original signals. The reviewers unanimously criticise the lack of novelty in this paper and the lack of comparison to existing baselines. While Rev #1 positively highlights the human evaluation contained in the experiments, they nevertheless do not think this paper is good enough for publication as is. The authors did not submit a rebuttal. I therefore recommend rejecting the paper.""" 949,"""Maximum Likelihood Constraint Inference for Inverse Reinforcement Learning""","['learning from demonstration', 'inverse reinforcement learning', 'constraint inference']","""While most approaches to the problem of Inverse Reinforcement Learning (IRL) focus on estimating a reward function that best explains an expert agent's policy or demonstrated behavior on a control task, it is often the case that such behavior is more succinctly represented by a simple reward combined with a set of hard constraints. In this setting, the agent is attempting to maximize cumulative rewards subject to these given constraints on their behavior. We reformulate the problem of IRL on Markov Decision Processes (MDPs) such that, given a nominal model of the environment and a nominal reward function, we seek to estimate state, action, and feature constraints in the environment that motivate an agent's behavior.
Our approach is based on the Maximum Entropy IRL framework, which allows us to reason about the likelihood of an expert agent's demonstrations given our knowledge of an MDP. Using our method, we can infer which constraints can be added to the MDP to most increase the likelihood of observing these demonstrations. We present an algorithm which iteratively infers the Maximum Likelihood Constraint to best explain observed behavior, and we evaluate its efficacy using both simulated behavior and recorded data of humans navigating around an obstacle.""","""The paper introduces a novel way of doing IRL based on learning constraints. The topic of IRL is an important one in RL, and the approach introduced is interesting and forms a fundamental contribution that could lead to relevant follow-up work.""" 950,"""Count-guided Weakly Supervised Localization Based on Density Map""","['Semi-supervised Learning', 'Weakly Supervised Localization', 'Variational Autoencoder', 'Density Map', 'Counting']","""Weakly supervised localization (WSL) aims at training a model to find the positions of objects by providing it with only abstract labels. For most of the existing WSL methods, the labels are the class of the main object in an image. In this paper, we generalize WSL to counting machines that apply convolutional neural networks (CNN) and density maps for counting. We show that given only ground-truth count numbers, the density map as a hidden layer can be trained for localizing objects and detecting features. Convolution and pooling are the two major building blocks of CNNs. This paper discusses their impacts on an end-to-end WSL network. The learned features in a density map present in the form of dots. In order to make these features interpretable for human beings, this paper proposes a Gini impurity penalty to regularize the density map. Furthermore, it will be shown that this regularization is similar to the variational term of the pseudo-formula-variational autoencoder. The details of this algorithm are demonstrated through a simple bubble counting task. Finally, the proposed methods are applied to the widely used crowd counting dataset, the Mall, to learn discriminative features of human figures.""","""This work proposes a new regularization method for weakly supervised localization based on counting. Reviewers agree that this is an interesting topic, but the experimental validation is weak (qualitative, lack of baselines), and the contribution too incremental. Therefore, we recommend rejection.""" 951,"""Selective sampling for accelerating training of deep neural networks""",[],"""We present a selective sampling method designed to accelerate the training of deep neural networks. To this end, we introduce a novel measurement, the {\it minimal margin score} (MMS), which measures the minimal amount of displacement an input should take until its predicted classification is switched. For multi-class linear classification, the MMS measure is a natural generalization of the margin-based selection criterion, which was thoroughly studied in the binary classification setting. In addition, the MMS measure provides an interesting insight into the progress of the training process and can be useful for designing and monitoring new training regimes. Empirically, we demonstrate a substantial acceleration when training commonly used deep neural network architectures for popular image classification tasks.
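For the multi-class linear case mentioned in the MMS abstract above, the minimal margin score has a closed form: the smallest distance from the input to any decision boundary of the top-scoring class. A small sketch follows; array shapes and the surrounding selection loop are illustrative assumptions.

```python
import numpy as np

def minimal_margin_score(W, b, x):
    """MMS for a linear classifier with scores W @ x + b, where W is
    (n_classes, d) and b is (n_classes,): the smallest displacement of x
    that would flip the predicted class. Samples with small MMS would be
    prioritized for training."""
    scores = W @ x + b
    y = int(scores.argmax())
    margins = [
        (scores[y] - scores[c]) / (np.linalg.norm(W[y] - W[c]) + 1e-12)
        for c in range(len(b)) if c != y
    ]
    return min(margins)
```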
The efficiency of our method is compared against the standard training procedures, and against commonly used selective sampling alternatives: hard negative mining selection and entropy-based selection. Finally, we demonstrate an additional speedup when we adopt a more aggressive learning-drop regime while using the MMS selective sampling method.""","""The paper proposes a method to speed up training of deep nets by re-weighting samples based on their distance to the decision boundary. However, the paper seems hastily written and the method is not backed by sufficient experimental evidence.""" 952,"""Interpretable Complex-Valued Neural Networks for Privacy Protection""","['Deep Learning', 'Privacy Protection', 'Complex-Valued Neural Networks']","""Previous studies have found that an adversarial attacker can often infer unintended input information from intermediate-layer features. We study the possibility of preventing such adversarial inference, yet without too much accuracy degradation. We propose a generic method to revise the neural network to increase the difficulty of inferring input attributes from features, while maintaining highly accurate outputs. In particular, the method transforms real-valued features into complex-valued ones, in which the input is hidden in a randomized phase of the transformed features. The knowledge of the phase acts like a key, with which any party can easily recover the output from the processing result, but without which the party can neither recover the output nor distinguish the original input. Preliminary experiments on various datasets and network structures have shown that our method significantly diminishes the adversary's ability to infer the input while largely preserving the resulting accuracy.""","""The reviewers are unanimous in their opinion that this paper offers a novel approach to secure edge learning. I concur. Reviewers mention clarity, but I find the latest paper clear enough.""" 953,"""Neural Text Generation With Unlikelihood Training""","['language modeling', 'machine learning']","""Neural text generation is a key tool in natural language applications, but it is well known that there are major problems at its core. In particular, standard likelihood training and decoding leads to dull and repetitive outputs. While some post-hoc fixes have been proposed, in particular top-k and nucleus sampling, they do not address the fact that the token-level probabilities predicted by the model are poor. In this paper we show that the likelihood objective itself is at fault, resulting in a model that assigns too much probability to sequences containing repeats and frequent words, unlike those from the human training distribution. We propose a new objective, unlikelihood training, which forces unlikely generations to be assigned lower probability by the model. We show that both token and sequence level unlikelihood training give less repetitive, less dull text while maintaining perplexity, giving superior generations using standard greedy or beam search. According to human evaluations, our approach with standard beam search also outperforms the currently popular decoding methods of nucleus sampling or beam blocking, thus providing a strong alternative to existing techniques.""","""This paper introduces a new objective for text generation with neural nets. The main insight is that the standard likelihood objective assigns excessive probability to sequences containing repeated and frequent words.
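The token-level form of the unlikelihood objective is easy to state: the usual negative log-likelihood plus a term pushing down the probability of negative candidates, such as tokens already present in the context. A sketch follows; tensor shapes and the clamping detail are implementation assumptions.

```python
import torch
import torch.nn.functional as F

def token_unlikelihood_loss(logits, target, neg_candidates, alpha=1.0):
    """logits: (vocab,) scores for the next token; target: gold token id;
    neg_candidates: LongTensor of token ids to make less likely."""
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs[target]
    p_neg = log_probs[neg_candidates].exp().clamp(max=1.0 - 1e-6)
    unlikelihood = -torch.log1p(-p_neg).sum()   # -sum_c log(1 - p(c))
    return nll + alpha * unlikelihood
```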
The paper proposes an objective that penalizes these patterns. This technique yields better text generation than alternative methods according to human evaluations. The reviewers found the paper to be written clearly. They found the problem to be relevant and found the proposed solution method to be both novel and simple. The experiments were carefully designed and the results were convincing. The reviewers raised several concerns about particular details of the method. These concerns were largely addressed by the authors in their response. Overall, the reviewers did not find the weaknesses of the paper to be serious flaws. This paper should be published. The paper provides a clearly presented solution for a relevant problem, along with careful experiments. """ 954,"""Amharic Text Normalization with Sequence-to-Sequence Models""","['Text Normalization', 'Sequence-to-Sequence Model', 'Encoder-Decoder']","""All areas of language and speech technology, directly or indirectly, require handling of real text. In addition to ordinary words and names, real text contains non-standard words (NSWs), including numbers, abbreviations, dates, currency amounts, and acronyms. Typically, one cannot find NSWs in a dictionary, nor can one find their pronunciation by an application of ordinary letter-to-sound rules. It is desirable in several NLP applications to normalize text by replacing such non-standard words with a consistently formatted and contextually appropriate variant. To address this challenge, in this paper, we model the problem as character-level sequence-to-sequence learning, where we map a sequence of input characters to a sequence of output words. The model consists of two neural networks, the encoder network and the decoder network. The encoder maps the input characters to a fixed-dimensional vector and the decoder generates the output words. We have achieved an accuracy of 94.8%, which is promising given the resources we use.""","""The paper proposes a text normalisation model for Amharic text. The model uses word classification, followed by a character-based GRU attentive encoder-decoder model. The paper is very short and does not present reproducible experiments. It also does not conform to the style guidelines of the conference. There has been no discussion of this paper beyond the initial reviews, all of which reject it with a score of 1. It is not ready for publication, and the authors should consider a more NLP-focussed venue for future research of this kind. """ 955,"""SemanticAdv: Generating Adversarial Examples via Attribute-Conditional Image Editing""","['adversarial examples', 'semantic attack']","""Deep neural networks (DNNs) have achieved great success in various applications due to their strong expressive power. However, recent studies have shown that DNNs are vulnerable to adversarial examples, which are manipulated instances designed to mislead DNNs into making incorrect predictions. Currently, most such adversarial examples try to guarantee ""subtle perturbation"" by limiting the Lp norm of the perturbation. In this paper, we aim to explore the impact of semantic manipulation on DNNs' predictions by manipulating the semantic attributes of images and generating ""unrestricted adversarial examples"". Such semantic-based perturbation is more practical compared with the Lp-bounded perturbation.
In particular, we propose an algorithm, SemanticAdv, which leverages disentangled semantic factors to generate adversarial perturbations by altering controlled semantic attributes to fool the learner towards various ""adversarial"" targets. We conduct extensive experiments to show that the semantic-based adversarial examples can not only fool different learning tasks such as face verification and landmark detection, but also achieve a high targeted attack success rate against real-world black-box services such as the Azure face verification service, based on transferability. To further demonstrate the applicability of SemanticAdv beyond the face recognition domain, we also generate semantic perturbations on street-view images. Such adversarial examples with controlled semantic manipulation can shed light on further understanding of vulnerabilities of DNNs as well as potential defensive approaches.""","""I had a little bit of difficulty with my recommendation here, but in the end I don't feel confident in recommending this paper for acceptance, with my concerns largely boiling down to the lack of a clear description of the overall motivation. Standard adversarial attacks are meant to be *imperceptible* changes that do not change the underlying semantics of the input to the human eye. In other words, the goal of the current work, generating ""semantically meaningful"" perturbations, goes against the standard definition of adversarial attacks. This left me with two questions: 1. Under the definition of semantic adversarial attacks, what is to prevent someone from swapping out the current image with an entirely different image? From what I saw in the evaluation measures utilized in the paper, such a method would be judged as having performed a successful attack, and given no constraints there is nothing stopping this. 2. In what situation would such an attack method be practically useful? Even the reviewers who reviewed the paper favorably were not able to provide answers to these questions, and I was not able to resolve this from my reading of the paper either. I do understand that there is a challenge on this by Google. In my opinion, even this contest is somewhat ill-defined, but it also features extensive human evaluation to evaluate the validity of the perturbations, which is not featured in the experimental evaluation here. While I think this work is potentially interesting, it seems that there are too many open questions that are not resolved yet to recommend acceptance at this time, but I would encourage the authors to tighten up the argumentation/evaluation in this regard and revise the paper to be better accordingly!""" 956,"""Subgraph Attention for Node Classification and Hierarchical Graph Pooling""","['Graph Neural Network', 'Graph Attention', 'Graph Pooling', 'Node Classification', 'Graph Classification', 'Network Representation Learning']","""Graph neural networks have gained significant interest from the research community for both node classification within a graph and graph classification within a set of graphs. An attention mechanism applied to the neighborhood of a node improves the performance of graph neural networks. Typically, it helps to identify a neighbor node which plays a more important role in determining the label of the node under consideration. But in real-world scenarios, a particular subset of nodes together, but not the individual nodes in the subset, may be important to determine the label of a node.
To address this problem, we introduce the concept of subgraph attention for graphs. To demonstrate its effectiveness, we use subgraph attention with graph convolution for node classification. We further use subgraph attention for entire-graph classification by proposing a novel hierarchical neural graph pooling architecture. Along with attention over the subgraphs, our pooling architecture also uses attention to determine the important nodes within a level graph and attention to determine the important levels in the whole hierarchy. Competitive performance against the state of the art for both node and graph classification shows the effectiveness of the algorithms proposed in this paper.""","""Initially, two reviewers gave high scores to this paper while they both admitted that they know little about this field. The other reviewer raised significant concerns about novelty while claiming high confidence. During discussions, one of the high-scoring reviewers lowered his/her score. Thus, a reject is recommended.""" 957,"""Adapt-to-Learn: Policy Transfer in Reinforcement Learning""","['Transfer Learning', 'Reinforcement Learning', 'Adaptation']","""Efficient and robust policy transfer remains a key challenge in reinforcement learning. Policy transfer through warm initialization, imitation, or interacting over a large set of agents with randomized instances has been commonly applied to solve a variety of Reinforcement Learning (RL) tasks. However, this is far from how behavior transfer happens in the biological world: Humans and animals are able to quickly adapt the learned behaviors between similar tasks and learn new skills when presented with new situations. Here we seek to answer the question: Will learning to combine adaptation reward with environmental reward lead to a more efficient transfer of policies between domains? We introduce a principled mechanism that can \textbf{""Adapt-to-Learn""}, that is, adapt the source policy to learn to solve a target task with significant transition differences and uncertainties. We show through theory and experiments that our method leads to a significantly reduced sample complexity of transferring the policies between the tasks.""","""This paper considers inter-domain policy transfer in reinforcement learning. The proposed approach involves adapting existing policies from a source task to a target task by adding a cost related to the difference between the dynamics and trajectory likelihoods of the two tasks. There are three major problems with this paper as it stands, as pointed out by the reviewers. Firstly, the ""KL divergence"" is not a real KL divergence and seems to be only empirically motivated. Then, there are issues with the derivative of the policy gradient. Finally, the theory is not well connected to the proposed algorithm. The rebuttals not only failed to convince the reviewer that raised these issues, but another reviewer lowered their score as a result of these raised points. This is a really interesting idea with compelling experiments, but it must be rejected at this point for the aforementioned reasons.""" 958,"""Match prediction from group comparison data using neural networks""","['Neural networks', 'Group comparison', 'Match prediction', 'Rank aggregation']","""We explore the match prediction problem, where one seeks to estimate the likelihood of a group of M items being preferred over another, based on partial group comparison data. Challenges arise in practice.
As existing state-of-the-art algorithms are tailored to certain statistical models, we have different best algorithms across distinct scenarios. Worse yet, we have no prior knowledge of the underlying model for a given scenario. These call for a unified approach that can be universally applied to a wide range of scenarios and achieve consistently high performance. To this end, we incorporate deep learning architectures so as to reflect the key structural features that most state-of-the-art algorithms, some of which are optimal in certain settings, share in common. This enables us to infer hidden models underlying a given dataset, which govern in-group interactions and statistical patterns of comparisons, and hence to devise the best algorithm tailored to the dataset at hand. Through extensive experiments on synthetic and real-world datasets, we evaluate our framework in comparison to state-of-the-art algorithms. It turns out that our framework consistently leads to the best performance across all datasets in terms of cross entropy loss and prediction accuracy, while the state-of-the-art algorithms suffer from inconsistent performance across different datasets. Furthermore, we show that it can be easily extended to attain satisfactory performance in rank aggregation tasks, suggesting that it is adaptable to other tasks as well.""","""This paper investigates neural networks for group comparison -- i.e., deciding if one group of objects would be preferred over another. The paper received 4 reviews (we requested an emergency review because of a late review that eventually did arrive). R1 recommends Weak Reject, based primarily on unclear presentation, missing details, and concerns about experiments. R2 recommends Reject, also based on concerns about writing, unclear notation, weak baselines, and unclear technical details. In a short review, R3 recommends Weak Accept and suggests some additional experiments, but also indicates that their familiarity with this area is not strong. R4 also recommends Weak Accept and suggests some clarifications in the writing (e.g., additional motivation, future work). The authors submitted a response and revision that address many of these concerns. Given the split decision, the AC also read the paper; while we see that it has significant merit, we agree with R1 and R2's concerns, and feel the paper needs another round of peer review to address the remaining concerns.""" 959,"""Novelty Detection Via Blurring""","['novelty', 'anomaly', 'uncertainty']",""" Conventional out-of-distribution (OOD) detection schemes based on variational autoencoders or Random Network Distillation (RND) are known to assign lower uncertainty to the OOD data than to the target distribution. In this work, we discover that such conventional novelty detection schemes are also vulnerable to blurred images. Based on this observation, we construct a novel RND-based OOD detector, SVD-RND, that utilizes blurred images during training. Our detector is simple, efficient at test time, and outperforms baseline OOD detectors in various domains. Further results show that SVD-RND learns a better target distribution representation than the baselines. Finally, SVD-RND combined with geometric transforms achieves near-perfect detection accuracy in the CelebA domain.""","""The paper proposes a new method for out-of-distribution detection by combining random network distillation (RND) and blurring (via SVD).
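[Editorial sketch] The "blurring via SVD" that the SVD-RND entry above refers to can be illustrated by truncating an image's singular values; the rank k is an assumed knob, and the wiring into RND training is omitted:

import numpy as np

def svd_blur(img, k):
    # img: (H, W) grayscale array; returns its rank-k approximation.
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    s[k:] = 0.0                          # discard high-order components
    return (U * s) @ Vt

rng = np.random.default_rng(0)
img = rng.random((32, 32))
blurred = svd_blur(img, k=4)
print(np.linalg.norm(img - blurred))     # error grows as k shrinks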
The proposed idea is very simple but achieves strong empirical performance, outperforming baseline methods in several OOD detection benchmarks. The reviewers raised many detailed questions, but they were mostly resolved; all reviewers recommend acceptance, and this AC agrees that it is an interesting and effective method worth presenting at ICLR. """ 960,"""Harnessing Structures for Value-Based Planning and Reinforcement Learning""","['Deep reinforcement learning', 'value-based reinforcement learning']","""Value-based methods constitute a fundamental methodology in planning and deep reinforcement learning (RL). In this paper, we propose to exploit the underlying structures of the state-action value function, i.e., the Q function, for both planning and deep RL. In particular, if the underlying system dynamics lead to some global structures of the Q function, one should be capable of inferring the function better by leveraging such structures. Specifically, we investigate the low-rank structure, which widely exists for big data matrices. We verify empirically the existence of low-rank Q functions in the context of control and deep RL tasks. As our key contribution, by leveraging Matrix Estimation (ME) techniques, we propose a general framework to exploit the underlying low-rank structure in Q functions. This leads to a more efficient planning procedure for classical control, and additionally, a simple scheme that can be applied to value-based RL techniques to consistently achieve better performance on ""low-rank"" tasks. Extensive experiments on control tasks and Atari games confirm the efficacy of our approach.""","""The paper shows empirical evidence that the optimal action-value function Q* often has a low-rank structure. It uses ideas from the matrix estimation/completion literature to provide a modification of value iteration that benefits from such a low-rank structure. The reviewers are all positive about this paper. They find the idea novel and the writing clear. There have been some questions about the relation of this concept of rank to other definitions and usage of rank in the RL literature. The authors' rebuttal seems to be satisfactory to the reviewers. Given these, I recommend acceptance of this paper.""" 961,"""Learning Similarity Metrics for Numerical Simulations""","['metric learning', 'CNNs', 'PDEs', 'numerical simulation', 'perceptual evaluation', 'physics simulation']","""We propose a novel approach to compute a stable and generalizing metric (LNSM) with convolutional neural networks (CNN) to compare field data from a variety of numerical simulation sources. Our method employs a Siamese network architecture that is motivated by the mathematical properties of a metric and is known to work well for finding similarities of other data modalities. We leverage a controllable data generation setup with partial differential equation (PDE) solvers to create increasingly different outputs from a reference simulation. In addition, the data generation allows for adjusting the difficulty of the resulting learning task. A central component of our learned metric is a specialized loss function that introduces knowledge about the correlation between single data samples into the training process. To demonstrate that the proposed approach outperforms existing simple metrics for vector spaces and other learned, image-based metrics, we evaluate the different methods on a large range of test data.
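[Editorial sketch] The matrix-estimation step proposed in the ""Harnessing Structures"" entry above can be illustrated by completing a partially observed Q table (states x actions) with iterated rank-r truncated SVD; the paper uses more sophisticated ME solvers, so this hard-thresholding variant is an editorial assumption:

import numpy as np

def complete_q(q_obs, mask, r, iters=50):
    q = np.where(mask, q_obs, q_obs[mask].mean())   # init missing entries
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(q, full_matrices=False)
        q = (U[:, :r] * s[:r]) @ Vt[:r]             # project onto rank r
        q[mask] = q_obs[mask]                       # keep observed entries
    return q

rng = np.random.default_rng(0)
true_q = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 10))  # rank-2 truth
mask = rng.random(true_q.shape) < 0.5                         # observe half
print(np.abs(complete_q(true_q, mask, r=2) - true_q).max())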
Additionally, we analyze the generalization benefits of using the proposed correlation loss and the impact of an adjustable training data difficulty.""","""The authors present a Siamese neural net architecture for learning similarities among field data generated by numerical simulations of partial differential equations. The goal would be to find which two field data are more similar to each other. One use case mentioned is the debugging of new numerical simulators, by comparing them with existing ones. The reviewers had mixed opinions on the paper. I agree with a negative comment of all three reviewers that the paper is somewhat lacking in the originality of the technique and the justification of the new loss proposed, as well as the fact that no strong explicit real-world use case was given. I find this problematic especially given that similarity of solutions to PDEs is not a mainstream topic of the conference. Hence, a good real-world example use of the method would be more convincing.""" 962,"""A Graph Neural Network Assisted Monte Carlo Tree Search Approach to Traveling Salesman Problem""","['Traveling Salesman Problem', 'Graph Neural Network', 'Monte Carlo Tree Search']","""We present a graph neural network assisted Monte Carlo Tree Search approach for the classical traveling salesman problem (TSP). We adopt a greedy algorithm framework to construct the optimal solution to TSP by adding nodes successively. A graph neural network (GNN) is trained to capture the local and global graph structure and give the prior probability of selecting each vertex at every step. The prior probability provides a heuristic for MCTS, and the MCTS output is an improved probability for selecting the successive vertex, as it fuses the prior with feedback from the scouting procedure. Experimental results on TSP up to 100 nodes demonstrate that the proposed method obtains shorter tours than other learning-based methods.""","""The paper is a contribution to the recently emerging literature on learning-based approaches to combinatorial optimization. The authors propose to pre-train a policy network to imitate SOTA solvers for TSPs. At test time, this policy is then improved, in an AlphaGo-like manner, with MCTS, using beam-search rollouts to estimate bootstrap values. The main concerns raised by the reviewers are a lack of novelty (the proposed algorithm is a straightforward application of graph NNs to MCTS) as well as the experimental results. Although comparing well to other learning-based methods, the algorithm is far away from the performance of SOTA solvers. Although well written, the paper is below the acceptance threshold. The methodological novelty is low. The reported results are an order of magnitude away from SOTA solvers, while previous work has already reported the general feasibility of learned solvers for TSPs. Furthermore, the overall contribution is somewhat unclear as the policy relies on pre-training with solutions from existing solvers. """ 963,"""Asynchronous Stochastic Subgradient Methods for General Nonsmooth Nonconvex Optimization""","['optimization', 'stochastic optimization', 'asynchronous parallel architecture', 'deep neural networks']","""Asynchronous distributed methods are a popular way to reduce the communication and synchronization costs of large-scale optimization.
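[Editorial sketch] The metric motivation behind the Siamese architecture in the LNSM entry above is that d(x, y) = ||f(x) - f(y)||_2, with a shared encoder f, is symmetric and satisfies the triangle inequality by construction. The encoder here is a random-projection stand-in, not the paper's CNN:

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))            # assumed toy encoder weights

def encode(field):
    return np.tanh(W @ field.ravel())    # shared weights for both inputs

def learned_distance(x, y):
    return np.linalg.norm(encode(x) - encode(y))

a, b = rng.random((8, 8)), rng.random((8, 8))
print(learned_distance(a, b), learned_distance(b, a))  # symmetric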
Yet, for all their success, little is known about their convergence guarantees in the challenging case of general non-smooth, non-convex objectives, beyond cases where closed-form proximal operator solutions are available. This is all the more surprising since these objectives are the ones appearing in the training of deep neural networks. In this paper, we introduce the first convergence analysis covering asynchronous methods in the case of general non-smooth, non-convex objectives. Our analysis applies to stochastic sub-gradient descent methods both with and without block variable partitioning, and both with and without momentum. It is phrased in the context of a general probabilistic model of asynchronous scheduling accurately adapted to modern hardware properties. We validate our analysis experimentally in the context of training deep neural network architectures. We show their overall successful asymptotic convergence and explore how momentum, synchronization, and partitioning all affect performance.""","""This paper considers an interesting theoretical question. However, it would add to the strength of the paper if it were able to meaningfully connect the considered model as well as the derived methodology to the challenges and performance that arise in practice. """ 964,"""Is Deep Reinforcement Learning Really Superhuman on Atari? Leveling the playing field""","['Reinforcement Learning', 'Deep Learning', 'Atari benchmark', 'Reproducibility']","""Consistent and reproducible evaluation of Deep Reinforcement Learning (DRL) is not straightforward. In the Arcade Learning Environment (ALE), small changes in environment parameters such as stochasticity or the maximum allowed play time can lead to very different performance. In this work, we discuss the difficulties of comparing different agents trained on ALE. In order to take a step further towards reproducible and comparable DRL, we introduce SABER, a Standardized Atari BEnchmark for general Reinforcement learning algorithms. Our methodology extends previous recommendations and contains a complete set of environment parameters as well as train and test procedures. We then use SABER to evaluate the current state of the art, Rainbow. Furthermore, we introduce a human world records baseline, and argue that previous claims of expert or superhuman performance of DRL might not be accurate. Finally, we propose Rainbow-IQN by extending Rainbow with Implicit Quantile Networks (IQN) leading to new state-of-the-art performance. Source code is available for reproducibility.""","""This paper proposes a new benchmark that compares performance of deep reinforcement learning algorithms on the Atari Learning Environment to the best human players. The paper identifies limitations of past evaluations of deep RL agents on Atari. The human baseline scores commonly used in deep RL are not the highest known human scores. To enable learning agents to reach these high scores, the paper recommends allowing the learning agents to play without a time limit. The time limit in Atari is not always consistent across papers, and removing the time limit requires additional software fixes due to some bugs in the game software. These ideas form the core of the paper's proposed new benchmark (SABER). The paper also proposes a new deep RL algorithm that combines earlier ideas. The reviews and the discussion with the authors brought out several strengths and weaknesses of the proposal. One strength was identifying the best known human performance in these Atari games.
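[Editorial sketch] The synchronous core of the method analyzed in the asynchronous-subgradient entry above is a stochastic subgradient step with momentum; f(w) = ||w||_1 is non-smooth with subgradient sign(w). The asynchronous scheduling model and block partitioning are omitted, and the diminishing step size is an assumption:

import numpy as np

w, v = np.array([3.0, -2.0]), np.zeros(2)
beta = 0.5
for t in range(500):
    g = np.sign(w)                        # a subgradient of ||w||_1 at w
    v = beta * v + g                      # momentum buffer
    w = w - (0.1 / np.sqrt(t + 1)) * v    # diminishing step size
print(w)                                  # ends near the minimizer at 0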
However, the reviewers were not convinced that this new benchmark is useful. The reviewers raised concerns about using clipped rewards, using games that received substantially different amounts of human effort, comparing learning algorithms to human baselines instead of other learning algorithms, and also the continued use of the Atari environment. Given the many concerns about the new benchmark, the newly proposed algorithm was viewed as a distraction. This paper is not ready for publication. The new benchmark proposed for deep reinforcement learning on Atari was not convincing to the reviewers. The paper requires further refinement of the benchmark or further justification for the new benchmark.""" 965,"""Enhancing Adversarial Defense by k-Winners-Take-All""","['adversarial defense', 'activation function', 'winner takes all']","""We propose a simple change to existing neural network structures for better defending against gradient-based adversarial attacks. Instead of using popular activation functions (such as ReLU), we advocate the use of k-Winners-Take-All (k-WTA) activation, a C0 discontinuous function that purposely invalidates the neural network model's gradient at densely distributed input data points. The proposed k-WTA activation can be readily used in nearly all existing networks and training methods with no significant overhead. Our proposal is theoretically rationalized. We analyze why the discontinuities in k-WTA networks can largely prevent gradient-based search of adversarial examples and why, at the same time, they remain innocuous to network training. This understanding is also empirically backed. We test k-WTA activation on various network structures optimized by a training method, be it adversarial training or not. In all cases, the robustness of k-WTA networks outperforms that of traditional networks under white-box attacks.""","""This paper presents a new non-linearity function which specially affects regions of the model which are densely valued. The non-linearity is simple: it retains only the top-k highest units from the input, while truncating the rest to zero. This also makes the models more robust to adversarial attacks which depend on the gradients. The non-linearity function is shown to deliver better adversarial robustness on the CIFAR-10 and SVHN datasets. The paper also presents a theoretical analysis of why the non-linearity is a good function. The authors have already incorporated major suggestions by the reviewers and the paper can make a significant impact on the community. Thus, I recommend its acceptance.""" 966,"""Domain-Agnostic Few-Shot Classification by Learning Disparate Modulators""","['Meta-learning', 'few-shot learning', 'multi-domain']","""Although few-shot learning research has advanced rapidly with the help of meta-learning, its practical usefulness is still limited because most of the research assumed that all meta-training and meta-testing examples came from a single domain. We propose a simple but effective way for few-shot classification in which a task distribution spans multiple domains including previously unseen ones during meta-training. The key idea is to build a pool of embedding models which have their own metric spaces and to learn to select the best one for a particular task through multi-domain meta-learning. This simplifies task-specific adaptation over a complex task distribution into a simple selection problem, rather than modifying the model with a number of parameters at meta-testing time.
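[Editorial sketch] The k-WTA activation described in the entry above keeps the k largest activations in a layer and zeroes the rest, a C0 discontinuous alternative to ReLU:

import numpy as np

def kwta(x, k):
    # x: (n,) pre-activations; all but the top-k entries are set to 0.
    out = np.zeros_like(x)
    idx = np.argpartition(x, -k)[-k:]     # indices of the k largest entries
    out[idx] = x[idx]
    return out

print(kwta(np.array([0.3, -1.2, 2.5, 0.9, 0.1]), k=2))  # [0. 0. 2.5 0.9 0.]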
Inspired by common multi-task learning techniques, we let all models in the pool share a base network and add a separate modulator to each model to refine the base network in its own way. This architecture allows the pool to maintain representational diversity and each model to have a domain-invariant representation as well. Experiments show that our selection scheme outperforms other few-shot classification algorithms when target tasks can come from many different domains. They also reveal that aggregating outputs from all constituent models is effective for tasks from unseen domains, showing the effectiveness of our framework.""","""This paper addresses the problem of few-shot classification across multiple domains. The main algorithmic contribution consists of a selection criterion to choose the best source domain embedding for a given task using a multi-domain modulator. All reviewers were in agreement that this paper is not ready for publication. Some key concerns were the lack of scalability (though the authors argue that this may not be a concern, as all models are only stored during meta-training, it may still become challenging if many training settings are incorporated) and low algorithmic novelty. The issue with novelty is that there is inconclusive experimental evidence to justify the selection criterion over simple methods like averaging, especially when considering novel test-time domains. The authors argue that since their approach chooses the single best training domain, it may not be best suited to generalize to a novel test-time domain. Based on the reviews and discussions, the AC does not recommend acceptance. The authors should consider revisions for clarity and to further polish their claims, providing additional experiments to justify them where appropriate. """ 967,"""Constant Time Graph Neural Networks""","['graph neural networks', 'constant time algorithm']","""The recent advancements in graph neural networks (GNNs) have led to state-of-the-art performances in various applications, including chemo-informatics, question-answering systems, and recommender systems. However, scaling up these methods to huge graphs such as social network graphs and web graphs still remains a challenge. In particular, the existing methods for accelerating GNNs are either not theoretically guaranteed in terms of approximation error, or they require at least linear-time computation cost. In this study, we analyze the neighbor sampling technique to obtain a constant-time approximation algorithm for GraphSAGE, the graph attention networks (GAT), and the graph convolutional networks (GCN). The proposed approximation algorithm can theoretically guarantee the precision of approximation. The key advantage of the proposed approximation algorithm is that the complexity is completely independent of the numbers of the nodes, edges, and neighbors of the input and depends only on the error tolerance and confidence probability. To the best of our knowledge, this is the first constant-time approximation algorithm for GNNs with a theoretical guarantee. Through experiments using synthetic and real-world datasets, we demonstrate the speed and precision of the proposed approximation algorithm and validate our theoretical results.""","""There was some interest in the ideas presented, but this paper was on the borderline and ultimately not able to be accepted for publication at ICLR. The primary reviewer concern was about the level of novelty and significance of the contribution.
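[Editorial sketch] The neighbor sampling analyzed in the constant-time GNN entry above replaces a mean over all neighbors by a mean over s uniformly sampled neighbors, so the per-node cost depends on s (set by the error tolerance), not on the degree:

import numpy as np

def sampled_mean_aggregate(neighbor_feats, s, rng):
    # neighbor_feats: (deg, d); returns an unbiased estimate of the mean.
    pick = rng.integers(0, len(neighbor_feats), size=s)  # with replacement
    return neighbor_feats[pick].mean(axis=0)

rng = np.random.default_rng(0)
feats = rng.normal(size=(10_000, 16))          # a very high-degree node
approx = sampled_mean_aggregate(feats, s=64, rng=rng)
print(np.linalg.norm(approx - feats.mean(axis=0)))  # small, degree-free cost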
This was not sufficiently demonstrated.""" 968,"""Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering""","['Multi-hop Open-domain Question Answering', 'Graph-based Retrieval', 'Multi-step Retrieval']","""Answering questions that require multi-hop reasoning at web-scale necessitates retrieving multiple evidence documents, one of which often has little lexical or semantic relationship to the question. This paper introduces a new graph-based recurrent retrieval approach that learns to retrieve reasoning paths over the Wikipedia graph to answer multi-hop open-domain questions. Our retriever model trains a recurrent neural network that learns to sequentially retrieve evidence paragraphs in the reasoning path by conditioning on the previously retrieved documents. Our reader model ranks the reasoning paths and extracts the answer span included in the best reasoning path. Experimental results show state-of-the-art results in three open-domain QA datasets, showcasing the effectiveness and robustness of our method. Notably, our method achieves significant improvement in HotpotQA, outperforming the previous best model by more than 14 points.""","""The paper proposes a multi-hop machine reading method for the HotpotQA and SQuAD-Open datasets. The reviewers agreed that it is very interesting to learn to retrieve, and the paper presents an interesting solution. Some additional experiments as suggested by the reviewers would help improve the paper further. """ 969,"""Word embedding re-examined: is the symmetrical factorization optimal?""","['word embedding', 'matrix factorization', 'linear transformation', 'neighborhood structure']","""As observed in previous works, many word embedding methods exhibit two interesting properties: (1) words having similar semantic meanings are embedded closely; (2) analogy structure exists in the embedding space, such that ``\emph{Paris} is to \emph{France} as \emph{Berlin} is to \emph{Germany}''. We theoretically analyze the inner mechanism leading to these nice properties. Specifically, the embedding can be viewed as a linear transformation from the word-context co-occurrence space to the embedding space. We reveal how the relative distances between nodes change during this transforming process. Such a linear transformation results in these good properties. Based on the analysis, we also provide the answer to the question of whether the symmetrical factorization (e.g., \texttt{word2vec}) is better than the traditional SVD method. We propose a method to improve the embedding further. The experiments on real datasets verify our analysis.""","""The paper studies word embeddings using the matrix factorization framework introduced by Levy et al. (2015). The authors provide a theoretical explanation for how the hyperparameter alpha controls the distance between words in the embedding and a method to estimate the optimal alpha. The authors also provide experiments showing the alpha found using their method is close to the alpha that gives the highest performance on the word-similarity task on several datasets. The paper received 2 weak rejects and 1 weak accept. The reviews were unchanged after the rebuttal, with even the review for weak accept (R2) indicating that they felt the submission to be of low quality. Initially, reviewers commented that while the work seemed solid and provided insights into the problem of learning word embeddings, the paper needed to improve its positioning with respect to prior work on word embeddings and add missing citations.
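[Editorial sketch] The factorization view in the word-embedding entry above, and the hyperparameter alpha the meta-review discusses, can be illustrated with SVD embeddings whose singular values are raised to the power alpha (alpha = 0.5 is commonly taken as the symmetric split). The "PPMI" matrix here is random stand-in data:

import numpy as np

def svd_embeddings(ppmi, dim, alpha):
    U, s, _ = np.linalg.svd(ppmi, full_matrices=False)
    return U[:, :dim] * (s[:dim] ** alpha)    # one row per word

rng = np.random.default_rng(0)
ppmi = np.maximum(rng.normal(size=(100, 100)), 0)   # toy sparse-ish PPMI
for alpha in (0.0, 0.5, 1.0):
    W = svd_embeddings(ppmi, dim=20, alpha=alpha)
    print(alpha, np.linalg.norm(W[0] - W[1]))       # alpha rescales distances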
In the revision, the authors improved the related work, but removed the conclusion. The current version of the paper is still low quality and has the following issues: 1. The paper's exposition still needs improvement and would benefit from another review pass. Following R3's suggestions, the authors have made various improvements to the paper, including modifying the terminology and contextualizing the work. However, as R3 suggests, the paper still needs more rewriting to clearly articulate the contribution and how it relates to prior work throughout the paper. In addition, the conclusion was removed and the paper still needs an editing pass as there are still many language/grammar issues. Page 5: ""inherites"" -> ""inherits"" Page 5: ""top knn"" -> ""top k"" 2. More experimental evaluation is needed. For instance, R1 suggested that the authors perform additional experiments on other tasks (e.g., NER, POS tagging). The authors indicated that this was not a focus of their work as other works have already looked at the impact of alpha on other tasks. While prior work has looked at the correlation of alpha vs. performance on the task, it has not looked at whether the alpha estimated by the method proposed by the authors will give good performance on these tasks as well. Including such an analysis would make this a stronger paper. Overall, there are some promising elements in the paper but the quality of the paper needs to be improved. The authors are encouraged to improve the paper by adding more experimental evaluation on other tasks, improving the writing, and incorporating other reviewer comments, and to resubmit to an appropriate venue. """ 970,"""Analysis and Interpretation of Deep CNN Representations as Perceptual Quality Features""","['interpretation', 'perceptual quality', 'perceptual loss', 'image-restoration']","""Pre-trained Deep Convolutional Neural Network (CNN) features have popularly been used as full-reference perceptual quality features for CNN-based image quality assessment, super-resolution, image restoration and a variety of image-to-image translation problems. In this paper, to get more insight, we link basic human visual perception to characteristics of learned deep CNN representations as a novel and first attempt to interpret them. We characterize the frequency and orientation tuning of channels in trained object detection deep CNNs (e.g., VGG-16) by applying grating stimuli of different spatial frequencies and orientations as input. We observe that the behavior of CNN channels as spatial frequency and orientation selective filters can be used to link basic human visual perception models to their characteristics. In doing so, we develop a theory to get more insight into deep CNN representations as perceptual quality features. We conclude that sensitivity to spatial frequencies that have lower contrast masking thresholds in human visual perception and a definite and strong orientation selectivity are important attributes of deep CNN channels that deliver better perceptual quality features. ""","""This paper aims to analyze CNN representations in terms of how well they measure the perceptual severity of image distortions. In particular, (a) sensitivity to changes in visual frequency and (b) orientation selectivity were used.
Although the reviewers agree that this paper presents some interesting initial findings with a promising direction, the majority of the reviewers (three out of four) find that the paper is incomplete, raising concerns in terms of experimental settings and results. Multiple reviewers explicitly asked for additional experiments to confirm whether the presented empirical results can be used to improve the results of image generation. Responding to the reviews, the authors added a super-resolution experiment in the appendix, which the reviewers believe is the right direction but is still preliminary. Overall, we believe the paper reports interesting findings, but it will require substantial additional work to make it ready for publication.""" 971,"""Pretraining boosts out-of-domain robustness for pose estimation""","['pose estimation', 'robustness', 'out-of-domain', 'transfer learning']","""Deep neural networks are highly effective tools for human and animal pose estimation. However, robustness to out-of-domain data remains a challenge. Here, we probe the transfer and generalization ability for pose estimation with two architecture classes (MobileNetV2s and ResNets) pretrained on ImageNet. We generated a novel dataset of 30 horses that allowed for both within-domain and out-of-domain (unseen horse) testing. We find that pretraining on ImageNet strongly improves out-of-domain performance. Moreover, we show that for both pretrained networks and networks trained from scratch, better ImageNet-performing architectures perform better for pose estimation, with a substantial improvement on out-of-domain data when pretrained. Collectively, our results demonstrate that transfer learning is particularly beneficial for out-of-domain robustness.""","""The paper presents a new dataset, containing around 8k pictures of 30 horses in different poses. This is used to study the benefits of pretraining for in- and out-of-domain images. The paper is somewhat lacking in novelty. Others have studied the same type of pre-training in the past using other datasets, which makes the dataset the main novelty. But reviewers raised many questions about the dataset, in particular about how many of the frames of the same horse might be similar, and about how few horses there are: few enough that the results may not be statistically meaningful. The authors replied to these questions more by appealing to standards in other fields than by explaining why this is a good choice. Apart from these crucial weaknesses, however, the research appears good. This is a pretty clear reject based on lack of novelty and oddities with the dataset.""" 972,"""Topological Autoencoders""","['Topology', 'Deep Learning', 'Autoencoders', 'Persistent Homology', 'Representation Learning', 'Dimensionality Reduction', 'Topological Machine Learning', 'Topological Data Analysis']","""We propose a novel approach for preserving topological structures of the input space in latent representations of autoencoders. Using persistent homology, a technique from topological data analysis, we calculate topological signatures of both the input and latent space to derive a topological loss term. Under weak theoretical assumptions, we can construct this loss in a differentiable manner, such that the encoding learns to retain multi-scale connectivity information.
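[Editorial sketch] The grating stimuli used in the perceptual-quality entry above to probe CNN channels are sinusoidal patterns parameterized by spatial frequency and orientation; a minimal generator:

import numpy as np

def grating(size, cycles, theta):
    # size: image side; cycles: cycles per image; theta: orientation (rad).
    coords = np.arange(size) / size
    x, y = np.meshgrid(coords, coords)
    u = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate frame
    return 0.5 + 0.5 * np.sin(2 * np.pi * cycles * u)

stim = grating(size=64, cycles=8, theta=np.pi / 4)  # 8 cycles at 45 degrees
print(stim.shape, round(float(stim.min()), 3), round(float(stim.max()), 3))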
We show that our approach is theoretically well-founded and that it exhibits favourable latent representations on a synthetic manifold as well as on real-world image data sets, while preserving low reconstruction errors.""","""This paper introduces a new variant of autoencoders with a topological loss term. The reviewers appreciated parts of the paper, and it is borderline. However, there are enough reservations to argue that it would be better for the paper to be updated and submitted to the next conference. Rejection is recommended. """ 973,"""Learn Interpretable Word Embeddings Efficiently with von Mises-Fisher Distribution""","['word embedding', 'natural language processing']","""Word embedding plays a key role in various tasks of natural language processing. However, the dominant word embedding models don't explain what information is carried with the resulting embeddings. To generate interpretable word embeddings, we propose to replace the word vector with a probability density distribution. The insight here is that if we regularize the mixture distribution of all words to be uniform, then we can prove that the inner product between word embeddings represents the point-wise mutual information between words. Moreover, our model can also handle polysemy. Each word's probability density distribution will generate different vectors for its various meanings. We have evaluated our model in several word similarity tasks. Results show that our model can outperform the dominant models consistently in these tasks.""","""The paper presents an approach to learning interpretable word embeddings. The reviewers put this in the lower half of the submissions. One reason seems to be the size of the training corpora used in the experiments, as well as the limited number of experiments; another that the claim of interpretability seems over-stated. There's also a lack of comparison to related work. I also think it would be interesting to move beyond the standard benchmarks - and either use word embeddings downstream or learn word embeddings for multiple languages [you should do this, regardless] and use Procrustes analysis or the like to learn a mapping: A good embedding algorithm should induce more linearly alignable embedding spaces. NB: While the authors cite other work by these authors, [0] seems relevant, too. Other related work: [1-4]. [0] pseudo-url [1] pseudo-url [2] pseudo-url [3] pseudo-url [4] pseudo-url""" 974,"""Learning to Learn Kernels with Variational Random Features""","['Meta-learning', 'few-shot learning', 'Random Fourier Feature', 'Kernel learning']","""Meta-learning for few-shot learning involves a meta-learner that acquires shared knowledge from a set of prior tasks to improve the performance of a base-learner on new tasks with a small amount of data. Kernels are commonly used in machine learning due to their strong nonlinear learning capacity, but they have not yet been fully investigated in the meta-learning scenario for few-shot learning. In this work, we explore kernel approximation with random Fourier features in the meta-learning framework for few-shot learning. We propose learning adaptive kernels by meta variational random features (MetaVRF), which is formulated as a variational inference problem.
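[Editorial sketch] The random Fourier feature approximation that MetaVRF (above) builds on: with frequencies drawn from a Gaussian, z(x)·z(y) approximates the RBF kernel exp(-||x - y||^2 / 2). MetaVRF infers the frequencies per task; here they are simply sampled:

import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 2000                         # input dim, number of random features
W = rng.normal(size=(D, d))            # spectral samples for the RBF kernel
b = rng.uniform(0, 2 * np.pi, size=D)

def z(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
print(z(x) @ z(y), np.exp(-np.linalg.norm(x - y) ** 2 / 2))  # close values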
To explore shared knowledge across diverse tasks, our MetaVRF deploys an LSTM inference network to generate informative features, which can establish kernels of high representational power with low spectral sampling rates, while also being able to quickly adapt to specific tasks for improved performance. We evaluate MetaVRF on a variety of few-shot learning tasks for both regression and classification. Experimental results demonstrate that our MetaVRF can deliver performance much better than or competitive with recent meta-learning algorithms.""","""The paper looks at meta learning using random Fourier features for kernel approximations. The idea is to learn adaptive kernels by inferring Fourier bases from related tasks that can be used for the new task. A key insight of the paper is to use an LSTM to share knowledge across tasks. The paper tackles an interesting problem, and the idea to use a meta learning setting for transfer learning within a kernel setting is quite interesting. It may be worthwhile relating this work to this paper by Titsias et al. (pseudo-url), which looks at a slightly different setting (continual learning with Gaussian processes, where information is shared through inducing variables). Having read the paper, I have some comments/questions: 1. log-likelihood should be called log-marginal likelihood (wherever the ELBO shows up). 2. The derivation of the ELBO confuses me (section 3.1). First, I don't know whether this ELBO is at training time or at test time. If it is at training time, then I agree with Reviewer #1 in the sense that pseudo-formula should not depend on either pseudo-formula or {S}. If it is at test time, the log-likelihood term should not depend on pseudo-formula (which is the training set), because S is taken care of by S). However, critically, S) should not depend on pseudo-formula. I agree with Reviewer #1 that this part is confusing, and the authors' response has not helped me to dispel this confusion (e.g., priors should not be conditioned on any data). 3. The tasks are indirectly represented by a set of basis functions, which are represented by pseudo-formula for task pseudo-formula. In the paper, these tasks are then inferred using variational inference and an LSTM. It may be worthwhile relating this to the latent-variable approach by Saemundsson et al. (pseudo-url) for meta learning. 4. The expression ""meta ELBO"" is inappropriate. This is a simple ELBO, nothing meta about it. If we think of the tasks as latent variables (which the paper also states), this ELBO in equation (9) is a vanilla ELBO that is used in variational inference. 5. For the LSTM, does it make a difference how the tasks are ordered? 6. Experiments: Figure 3 clearly needs error bars, and MSEs need to be reported with error bars as well; 6a) Figures 4 and 5 need error bars. 6b) Error bars should also be based on different random initializations of the learning procedure to evaluate the robustness of the methods (use at least 20 random seeds). I don't think any of the results is based on more than one random seed (at least I could not find any statement regarding this). 7. Tables 1 and 2: The highlighting in bold is unclear. If it is supposed to highlight the best methods, then the highlighting is dishonest in the sense that methods which perform similarly are not highlighted. For example, in Table 1, VERSA or MetaVRF (w/o LSTM) could be highlighted for all tasks because the error bars are so huge (similar in Table 2). 8.
One of the things I'm missing completely is a discussion about computational demand: How efficiently can we train the model, and how long does it take to make predictions? It would be great to have some discussion about this in the paper and relate this to other approaches. 9. The paper also evaluates the effect of having an LSTM that correlates tasks in the posterior. The analysis shows that there are some marginal gains, but none of them is statistically significant. I would have liked to see much more analysis of the effect/benefit of the LSTM. Summary: The paper addresses an interesting problem. However, I have reservations regarding some theoretical bits and regarding the quality of the evaluation. Given that this paper also exceeds the 8-page (default) limit, we are supposed to ask for higher acceptance standards than for an 8-page paper. Hence, putting everything together, I recommend rejecting this paper.""" 975,"""Efficacy of Pixel-Level OOD Detection for Semantic Segmentation""","['Out-of-Distribution Detection', 'Semantic Segmentation', 'Deep Learning']","""The detection of out-of-distribution samples for image classification has been widely researched. Safety critical applications, such as autonomous driving, would benefit from the ability to localise the unusual objects causing the image to be out of distribution. This paper adapts state-of-the-art methods for detecting out-of-distribution images for image classification to the new task of detecting out-of-distribution pixels, which can localise the unusual objects. It further experimentally compares the adapted methods on two new datasets derived from existing semantic segmentation datasets using PSPNet and DeeplabV3+ architectures, and proposes a new metric for the task. The evaluation shows that the performance ranking of the compared methods does not transfer to the new task and every method performs significantly worse than its image-level counterpart.""","""This paper studies the problem of out-of-distribution (OOD) detection for semantic segmentation. Reviewers and AC agree that the problem might be important and interesting, but the paper is not ready for publication in various respects, e.g., incremental contribution and insufficiently motivated/convincing experimental setups and results. Hence, I recommend rejection.""" 976,"""Finding Deep Local Optima Using Network Pruning""","['network pruning', 'non-convex optimization']","""Artificial neural networks (ANNs) are very popular nowadays and offer reliable solutions to many classification problems. However, training deep neural networks (DNNs) is time-consuming due to the large number of parameters. Recent research indicates that these DNNs might be over-parameterized, and different solutions have been proposed to reduce the complexity both in the number of parameters and in the training time of the neural networks. Furthermore, some researchers argue that after reducing the neural network complexity via connection pruning, the remaining weights are irrelevant and retraining the sub-network would obtain accuracy comparable to the original one. This may hold true in most vision problems, where we always enjoy a large number of training samples and research indicates that most local optima of convolutional neural networks may be equivalent. However, for non-vision sparse datasets, especially those with many irrelevant features where a standard neural network would overfit, this might not be the case, and there might be many non-equivalent local optima.
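[Editorial sketch] The simplest pixel-level adaptation in the spirit of the OOD-segmentation entry above scores every pixel by one minus its maximum softmax probability, so uncertain pixels (potential OOD objects) score high. The logits are random stand-ins for a segmentation network's output:

import numpy as np

def pixel_ood_scores(logits):
    # logits: (C, H, W); returns an (H, W) map of per-pixel OOD scores.
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    softmax = e / e.sum(axis=0, keepdims=True)
    return 1.0 - softmax.max(axis=0)   # high where no class is confident

rng = np.random.default_rng(0)
scores = pixel_ood_scores(rng.normal(size=(19, 32, 32)))  # 19 classes
print(scores.shape, float(scores.mean()))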
This paper presents empirical evidence for these statements and an empirical study of the learnability of neural networks (NNs) on some challenging non-linear real and simulated data with irrelevant variables. Our simulation experiments indicate that the cross-entropy loss function on XOR-like data has many local optima, and the number of local optima grows exponentially with the number of irrelevant variables. We also introduce a connection pruning method to improve the capability of NNs to find a deep local minimum even when there are irrelevant variables. Furthermore, the performance of the discovered sparse sub-network degrades considerably when retrained either from scratch or from the corresponding original initialization, due to the existence of many bad optima nearby. Finally, we show that the performance of neural networks for real-world experiments on sparse datasets can be recovered or even improved by discovering a good sub-network architecture via connection pruning.""","""This paper provides empirical evidence on synthetic examples with a focus on understanding the relationship between the number of good local minima and the number of irrelevant features. The reviewers find the problem discussed to be important. One of the reviewers has pointed out that the paper does not present deep insights and is more suitable for workshops. The authors did not provide a rebuttal, and it appears that the reviewers' opinion has not changed. The current score is clearly not sufficient to accept this paper in its current form. For this reason, I recommend rejecting this paper. """ 977,"""Regularly varying representation for sentence embedding""","['extreme value theory', 'classification', 'supervised learning', 'data augmentation', 'representation learning']","""The dominant approaches to sentence representation in natural language rely on learning embeddings on massive corpuses. The obtained embeddings have desirable properties such as compositionality and distance preservation (sentences with similar meanings have similar representations). In this paper, we develop a novel method for learning an embedding enjoying a dilation invariance property. We propose two algorithms: Orthrus, a classification algorithm, constrains the distribution of the embedded variable to be regularly varying, i.e., multivariate heavy-tailed, and uses Extreme Value Theory (EVT) to tackle the classification task on two separate regions: the tail and the bulk. Hydra, a text generation algorithm for dataset augmentation, leverages the invariance property of the embedding learnt by Orthrus to generate coherent sentences with a controllable attribute, e.g., positive or negative sentiment. Numerical experiments on synthetic and real text data demonstrate the relevance of the proposed framework. ""","""Three reviewers recommend rejection. After a good rebuttal, the first reviewer is more positive about the paper yet still feels the paper is not ready for publication. The authors are encouraged to strengthen their work and resubmit to a future venue.""" 978,"""BREAKING CERTIFIED DEFENSES: SEMANTIC ADVERSARIAL EXAMPLES WITH SPOOFED ROBUSTNESS CERTIFICATES""",[],"""Defenses against adversarial attacks can be classified into certified and non-certified. Certifiable defenses make networks robust within a certain pseudo-formula -bounded radius, so that it is impossible for the adversary to make adversarial examples within the certificate bound.
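[Editorial sketch] Connection pruning as used in the entry above can be illustrated by zeroing the smallest-magnitude weights and keeping a mask so pruned connections stay dead during further training; the global magnitude criterion is an assumed simple variant:

import numpy as np

def prune_by_magnitude(w, sparsity):
    # w: weight array; sparsity: fraction of connections to remove.
    thresh = np.quantile(np.abs(w), sparsity)
    mask = np.abs(w) >= thresh
    return w * mask, mask              # re-apply mask after each update

rng = np.random.default_rng(0)
w_pruned, mask = prune_by_magnitude(rng.normal(size=(64, 32)), sparsity=0.9)
print(mask.mean())                     # ~0.1 of the connections survive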
We present an attack that maintains the imperceptibility property of adversarial examples while being outside of the certified radius. Furthermore, the proposed ""Shadow Attack"" can fool certifiably robust networks by producing an imperceptible adversarial example that gets misclassified and produces a strong ``spoofed'' certificate.""","""This work presents a ""shadow attack"" that fools certifiably robust networks by producing imperceptible adversarial examples found by searching outside of the certified radius. The reviewers are generally positive about the novelty and contribution of the work. """ 979,"""At Your Fingertips: Automatic Piano Fingering Detection""","['piano', 'fingering', 'dataset']","""Automatic Piano Fingering is a hard task which computers can learn using data. As data collection is hard and expensive, we propose to automate this process by automatically extracting fingerings from public videos and MIDI files, using computer-vision techniques. Running this process on 90 videos results in the largest dataset for piano fingering, with more than 150K notes. We show that when running a previously proposed model for automatic piano fingering on our dataset and then fine-tuning it on manually labeled piano fingering data, we achieve state-of-the-art results. In addition to the fingering extraction method, we also introduce a novel method for transferring deep-learning computer-vision models to work on out-of-domain data, by fine-tuning them on out-of-domain augmentations produced by a Generative Adversarial Network (GAN). For demonstration, we anonymously release a visualization of the output of our process for a single video on pseudo-url""","""The paper presents an automatic piano fingering algorithm. The idea is good, but the reviewers find that the novelty is limited and the work is incremental. All the reviewers agree to reject.""" 980,"""Context Based Machine Translation With Recurrent Neural Network For English-Amharic Translation ""","['Context based machine translation', 'machine translation', 'Neural network machine translation', 'English to Amharic machine translation']","""The current approaches for machine translation usually require a large parallel corpus in order to achieve fluency, as in the case of neural machine translation (NMT), statistical machine translation (SMT) and example-based machine translation (EBMT). The context awareness of phrase-based machine translation (PBMT) approaches is also questionable. This research develops a system that translates English text to Amharic text using a combination of context-based machine translation (CBMT) and recurrent neural network machine translation (RNNMT). We built a bilingual dictionary for the CBMT system to use along with a large target corpus. The RNNMT model has then been provided with the output of the CBMT and a parallel corpus for training. Our combinational approach on the English-Amharic language pair yields a performance improvement over simple neural machine translation (NMT).""","""The authors propose a model which combines a neural machine translation system and a context-based machine translation model, which combines some aspects of rule- and example-based MT. This paper presents work based on obsolete techniques, has relatively low novelty, has a problematic experimental design and lacks compelling performance improvements. The authors rebutted some of the reviewers' claims, but did not convince them to change their scores.
""" 981,"""From English to Foreign Languages: Transferring Pre-trained Language Models""","['pretrained language model', 'zero-shot transfer', 'parsing', 'natural language inference']","""Pre-trained models have demonstrated their effectiveness in many downstream natural language processing (NLP) tasks. The availability of multilingual pre-trained models enables zero-shot transfer of NLP tasks from high resource languages to low resource ones. However, recent research in improving pre-trained models focuses heavily on English. While it is possible to train the latest neural architectures for other languages from scratch, it is undesirable due to the required amount of compute. In this work, we tackle the problem of transferring an existing pre-trained model from English to other languages under a limited computational budget. With a single GPU, our approach can obtain a foreign BERT-base model within a day and a foreign BERT-large within two days. Furthermore, evaluating our models on six languages, we demonstrate that our models are better than multilingual BERT on two zero-shot tasks: natural language inference and dependency parsing.""","""This paper proposes a method to transfer a pretrained language model in one language (English) to a new language. The method first learns word embeddings for the new language while keeping the the body of the English model fixed, and further refines it in a fine-tuning procedure as a bilingual model. Experiments on XNLI and dependency parsing demonstrate the benefit of the proposed approach. R3 pointed out that the paper is missing an important baseline, which is a bilingual BERT model. The authors acknowledged this in their rebuttal and ran a preliminary experiment to obtain a first set of results. However, since the main claim of the paper depends on this new experiment, which was not finished by the end of the rebuttal period, it is difficult to accept the paper in its current state. In an internal discussion, R1 also agreed that this baseline is critical to support the paper. As a result, I recommend to reject this paper for ICLR. I encourage the authors to update their paper with the new experiment for submission to future conferences (given consistent results).""" 982,"""N-BEATS: Neural basis expansion analysis for interpretable time series forecasting""","['time series forecasting', 'deep learning']","""We focus on solving the univariate times series point forecasting problem using deep learning. We propose a deep neural architecture based on backward and forward residual links and a very deep stack of fully-connected layers. The architecture has a number of desirable properties, being interpretable, applicable without modification to a wide array of target domains, and fast to train. We test the proposed architecture on several well-known datasets, including M3, M4 and TOURISM competition datasets containing time series from diverse domains. We demonstrate state-of-the-art performance for two configurations of N-BEATS for all the datasets, improving forecast accuracy by 11% over a statistical benchmark and by 3% over last year's winner of the M4 competition, a domain-adjusted hand-crafted hybrid between neural network and statistical time series models. 
The first configuration of our model does not employ any time-series-specific components, and its performance on heterogeneous datasets strongly suggests that, contrary to received wisdom, deep learning primitives such as residual blocks are by themselves sufficient to solve a wide range of forecasting problems. Finally, we demonstrate how the proposed architecture can be augmented to provide outputs that are interpretable without considerable loss in accuracy.""","""The paper received positive recommendations from all reviewers. Accept.""" 983,"""Support-guided Adversarial Imitation Learning""","['Adversarial Imitation Learning', 'Reinforcement Learning', 'Learning from Demonstrations']","""We propose Support-guided Adversarial Imitation Learning (SAIL), a generic imitation learning framework that unifies support estimation of the expert policy with the family of Adversarial Imitation Learning (AIL) algorithms. SAIL addresses two important challenges of AIL, namely the implicit reward bias and potential training instability. We also show that SAIL is at least as efficient as standard AIL. In an extensive evaluation, we demonstrate that the proposed method effectively handles the reward bias and achieves better performance and training stability than other baseline methods on a wide range of benchmark control tasks.""","""The submission proposes a method for adversarial imitation learning that combines two previous approaches - GAIL and RED - by simply multiplying their reward functions. The claim is that this adaptation allows for better learning - both handling reward bias and improving training stability. The reviewers were divided in their assessment of the paper, criticizing the empirical results and the claims made by the authors. In particular, the primary claims of handling reward bias and reducing variance seem to be not well justified, including results which show that training stability only substantially improves when SAIL-b, which uses reward clipping, is used. Although the paper is promising, the recommendation is for a reject at this time. The authors are encouraged to clarify their claims and supporting experiments and to validate their method on more challenging domains.""" 984,"""Understanding and Robustifying Differentiable Architecture Search""","['Neural Architecture Search', 'AutoML', 'AutoDL', 'Deep Learning', 'Computer Vision']","""Differentiable Architecture Search (DARTS) has attracted a lot of attention due to its simplicity and small search costs achieved by a continuous relaxation and an approximation of the resulting bi-level optimization problem. However, DARTS does not work robustly for new problems: we identify a wide range of search spaces for which DARTS yields degenerate architectures with very poor test performance.
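[Editorial sketch] The meta-review above describes SAIL as multiplying GAIL's and RED's reward functions; a sketch of that combination, where both component rewards use assumed common functional forms (a discriminator output and a support-estimation error) rather than the paper's exact definitions:

import numpy as np

def sail_reward(d_out, red_error, sigma=1.0):
    # d_out: discriminator probability in (0, 1);
    # red_error: support-estimation error (small on expert-like states).
    r_gail = -np.log(np.clip(1.0 - d_out, 1e-8, 1.0))  # a common GAIL form
    r_red = np.exp(-sigma * red_error)                 # near 1 on the support
    return r_red * r_gail                              # SAIL: the product

print(sail_reward(d_out=0.9, red_error=0.05))   # expert-like: large reward
print(sail_reward(d_out=0.9, red_error=10.0))   # off-support: damped reward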
Our observations are robust across five search spaces on three image classification tasks and also hold for the very different domains of disparity estimation (a dense regression task) and language modelling.""","""This paper studies the properties of Differentiable Architecture Search, and in particular when it fails, and then proposes modifications that improve its performance for several tasks. The reviews were all very supportive with three Accept opinions, and the authors have addressed their comments and suggestions. Given the unanimous reviews, this appears to be a clear Accept. """ 985,"""Universal Adversarial Attack Using Very Few Test Examples""","['universal', 'adversarial', 'SVD']","""Adversarial attacks such as gradient-based attacks, the Fast Gradient Sign Method (FGSM) by Goodfellow et al. (2015) and DeepFool by Moosavi-Dezfooli et al. (2016) are input-dependent, small pixel-wise perturbations of images which fool state of the art neural networks into misclassifying images but are unlikely to fool any human. On the other hand, a universal adversarial attack is an input-agnostic perturbation. The same perturbation is applied to all inputs and yet the neural network is fooled on a large fraction of the inputs. In this paper, we show that multiple known input-dependent pixel-wise perturbations share a common spectral property. Using this spectral property, we show that the top singular vector of input-dependent adversarial attack directions can be used as a very simple universal adversarial attack on neural networks. We evaluate the error rates and fooling rates of three universal attacks, SVD-Gradient, SVD-DeepFool and SVD-FGSM, on state of the art neural networks. We show that these universal attack vectors can be computed using a small sample of test inputs. We establish our results both theoretically and empirically. On VGG19 and VGG16, the fooling rate of SVD-DeepFool and SVD-Gradient perturbations constructed from observing less than 0.2% of the validation set of ImageNet is as good as the universal attack of Moosavi-Dezfooli et al. (2017a). To prove our theoretical results, we use matrix concentration inequalities and spectral perturbation bounds. For completeness, we also discuss another recent approach to universal adversarial perturbations based on (p, q)-singular vectors, proposed independently by Khrulkov & Oseledets (2018), and point out the simplicity and efficiency of our universal attack as the key difference.""","""The paper proposes to get universal adversarial examples using few test samples. The approach is very close to that of Khrulkov & Oseledets, and the abstract for some reason claims that it was proposed independently, which looks like a very strange claim. Overall, all reviewers recommend rejection, and I agree with them.""" 986,"""Learning from Rules Generalizing Labeled Exemplars""","['Learning from Rules', 'Learning from limited labeled data', 'Weakly Supervised Learning']","""In many applications, labeled data is not readily available and needs to be collected via painstaking human supervision. We propose a rule-exemplar method for collecting human supervision to combine the efficiency of rules with the quality of instance labels. The supervision is coupled such that it is both natural for humans and synergistic for learning. We propose a training algorithm that jointly denoises rules via latent coverage variables, and trains the model through a soft implication loss over the coverage and label variables.
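[Editorial sketch] The SVD universal attack described in the entry above stacks per-example attack directions (e.g., FGSM signs) as rows, takes the top right singular vector, and rescales it to the perturbation budget; the gradients below are random stand-ins for real input gradients:

import numpy as np

def svd_universal(directions, eps):
    # directions: (n, d) matrix of per-example attack directions.
    _, _, Vt = np.linalg.svd(directions, full_matrices=False)
    v = Vt[0]                                   # top right singular vector
    return eps * v / np.abs(v).max()            # scale to an L-inf budget

rng = np.random.default_rng(0)
fgsm_dirs = np.sign(rng.normal(size=(200, 3072)))  # 200 CIFAR-sized inputs
print(svd_universal(fgsm_dirs, eps=8 / 255).shape)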
The denoised rules and trained model are used jointly for inference. Empirical evaluation on five different tasks shows that (1) our algorithm is more accurate than several existing methods of learning from a mix of clean and noisy supervision, and (2) the coupled rule-exemplar supervision is effective in denoising rules.""","""The paper addresses the problem of costly human supervision for training supervised learning methods. The authors propose a joint approach for more effectively collecting supervision data from humans, by extracting rules and their exemplars, and a model for training on this data. They demonstrate the effectiveness of their approach on multiple datasets by comparing to a range of baselines. Based on the reviews and my own reading, I recommend accepting this paper. The approach makes a lot of intuitive sense and is well explained. The experimental results are convincing. """ 987,"""Geom-GCN: Geometric Graph Convolutional Networks""","['Deep Learning', 'Graph Convolutional Network', 'Network Geometry']","""Message-passing neural networks (MPNNs) have been successfully applied in a wide variety of real-world applications. However, two fundamental weaknesses of MPNNs' aggregators limit their ability to represent graph-structured data: losing the structural information of nodes in neighborhoods and lacking the ability to capture long-range dependencies in disassortative graphs. Few studies have noticed these weaknesses from different perspectives. From observations on classical neural networks and network geometry, we propose a novel geometric aggregation scheme for graph neural networks to overcome the two weaknesses. The basic idea behind it is that aggregation on a graph can benefit from a continuous space underlying the graph. The proposed aggregation scheme is permutation-invariant and consists of three modules: node embedding, structural neighborhood, and bi-level aggregation. We also present an implementation of the scheme in graph convolutional networks, termed Geom-GCN, to perform transductive learning on graphs. Experimental results show that the proposed Geom-GCN achieves state-of-the-art performance on a wide range of open graph datasets.""","""This paper is consistently supported by all three reviewers and thus an accept is recommended.""" 988,"""Out-of-Distribution Image Detection Using the Normalized Compression Distance""","['Out-of-Distribution Detection', 'Normalized Compression Distance', 'Convolutional Neural Networks']","""To detect out-of-distribution images, whose underlying distribution is different from that of the training dataset, we tackle the problem of applying out-of-distribution detection methods to already-deployed convolutional neural networks. Most recent approaches have to utilize out-of-distribution samples for validation or retrain the model, which makes them less practical for real-world applications. We propose a novel out-of-distribution detection method, MALCOM, which neither uses any out-of-distribution samples nor retrains the model. Inspired by methods that use global average pooling on the feature maps of convolutional neural networks, the goal of our method is to extract informative sequential patterns from the feature maps. To this end, we introduce a similarity metric which focuses on the shared patterns between two sequences. In short, MALCOM uses both the global average and the spatial pattern of the feature maps to accurately identify out-of-distribution samples. 
""","""This paper proposes an out-of-distribution detection (OOD) method without assuming OOD in validation. As reviewers mentioned, I think the idea is interesting and the proposed method has potential. However, I think the paper can be much improved and is not ready to publish due to the followings given reviewers' comments: (a) The prior work also has some experiments without OOD in validation, i.e., use adversarial examples (AE) instead in validation. Hence, the main motivation of this paper becomes weak unless the authors justify enough why AE is dangerous to use in validation. (b) The performance of their replication of the prior method is far lower than reported. I understand that sometimes it is not easy to reproduce the prior results. In this case, one can put the numbers in the original paper. Or, one can provide detailed analysis why the prior method should fail in some cases. (c) The authors follow exactly same experimental settings in the prior works. But, the reported score of the prior method is already very high in the settings, and the gain can be marginal. Namely, the considered settings are more or less ""easy problems"". Hence, additional harder interesting OOD settings, e.g., motivated by autonomous driving, would strength the paper. Hence, I recommend rejection.""" 989,"""Simple and Effective Stochastic Neural Networks""","['stochastic neural networks', 'pruning', 'adversarial defence', 'label noise']","""Stochastic neural networks (SNNs) are currently topical, with several paradigms being actively investigated including dropout, Bayesian neural networks, variational information bottleneck (VIB) and noise regularized learning. These neural network variants impact several major considerations, including generalization, network compression, and robustness against adversarial attack and label noise. However, many existing networks are complicated and expensive to train, and/or only address one or two of these practical considerations. In this paper we propose a simple and effective stochastic neural network (SE-SNN) architecture for discriminative learning by directly modeling activation uncertainty and encouraging high activation variability. Compared to existing SNNs, our SE-SNN is simpler to implement and faster to train, and produces state of the art results on network compression by pruning, adversarial defense and learning with label noise.""","""This paper proposes to use stacked layers of Gaussian latent variables with a maxent objective function as a regulariser. I agree with the reviewers that there is very little novelty and the experiments are not very convincing.""" 990,"""Meta Reinforcement Learning with Autonomous Inference of Subtask Dependencies""","['Meta reinforcement learning', 'subtask graph']","""We propose and address a novel few-shot RL problem, where a task is characterized by a subtask graph which describes a set of subtasks and their dependencies that are unknown to the agent. The agent needs to quickly adapt to the task over few episodes during adaptation phase to maximize the return in the test phase. Instead of directly learning a meta-policy, we develop a Meta-learner with Subtask Graph Inference (MSGI), which infers the latent parameter of the task by interacting with the environment and maximizes the return given the latent parameter. To facilitate learning, we adopt an intrinsic reward inspired by upper confidence bound (UCB) that encourages efficient exploration. 
Our experimental results on two grid-world domains and StarCraft II environments show that the proposed method is able to accurately infer the latent task parameter, and to adapt more efficiently than existing meta-RL and hierarchical RL methods.""","""This work formulates and tackles a few-shot RL problem called subtask graph inference, where hierarchical tasks are characterized by a graph describing all subtasks and their dependencies. In other words, each task consists of multiple subtasks and completing a subtask provides a reward. The authors propose a meta-RL approach to meta-train a policy that infers the subtask graph from any new task data in a few shots. Experiments are performed on different domains, including StarCraft II, highlighting the efficiency and scalability of the proposed approach. Most of the reviewers' concerns were addressed in the rebuttal. The main remaining concerns about this work are that it is mainly an extension of Sohn et al. (2018), making the contribution somewhat incremental, and that its applicability is limited to problems where subtasks are provided. However, with all reviewers positive about this paper, I would still recommend acceptance. """ 991,"""Adversarial Robustness as a Prior for Learned Representations""","['adversarial robustness', 'adversarial examples', 'robust optimization', 'representation learning', 'feature visualization']","""An important goal in deep learning is to learn versatile, high-level feature representations of input data. However, standard networks' representations seem to possess shortcomings that, as we illustrate, prevent them from fully realizing this goal. In this work, we show that robust optimization can be re-cast as a tool for enforcing priors on the features learned by deep neural networks. It turns out that representations learned by robust models address the aforementioned shortcomings and make significant progress towards learning a high-level encoding of inputs. In particular, these representations are approximately invertible, while allowing for direct visualization and manipulation of salient input features. More broadly, our results indicate adversarial robustness as a promising avenue for improving learned representations.""","""The paper proposes recasting robust optimization as a regularizer for learning representations with neural networks, resulting, e.g., in more semantically meaningful representations. The reviewers found that the claimed contributions were well supported by the experimental evidence. The reviewers noted a few minor points regarding clarity that seem to have been addressed. The problems addressed are very relevant to the ICLR community (representation learning and adversarial robustness). However, the reviewers were not convinced by the novelty of the paper. A big part of the discussion focused on prior work by the authors that is to be published at NeurIPS. That paper was not referenced in the manuscript but does reduce the novelty of the present submission. In contrast to the current submission, that paper focuses on manipulating the learned representations to solve image generation tasks, whereas the current paper focuses on the underlying properties of the representation. Since the underlying phenomenon had been described in the earlier paper and the current submission does not introduce a new approach / algorithm, the paper was deemed to lack the novelty required for acceptance to ICLR. 
""" 992,"""iWGAN: an Autoencoder WGAN for Inference""","['Generative model', 'Autoencoder', 'Inference']","""Generative Adversarial Networks (GANs) have been impactful on many problems and applications but suffer from unstable training. Wasserstein GAN (WGAN) leverages the Wasserstein distance to avoid the caveats in the minmax two-player training of GANs but has other defects such as mode collapse and lack of metric to detect the convergence. We introduce a novel inference WGAN (iWGAN) model, which is a principled framework to fuse auto-encoders and WGANs. The iWGAN jointly learns an encoder network and a generative network using an iterative primal dual optimization process. We establish the generalization error bound of iWGANs. We further provide a rigorous probabilistic interpretation of our model under the framework of maximum likelihood estimation. The iWGAN, with a clear stopping criteria, has many advantages over other autoencoder GANs. The empirical experiments show that our model greatly mitigates the symptom of mode collapse, speeds up the convergence, and is able to provide a measurement of quality check for each individual sample. We illustrate the ability of iWGANs by obtaining a competitive and stable performance with state-of-the-art for benchmark datasets.""","""This paper proposes a new way to stabilise GAN training. The reviews were very mixed but taken together below acceptance threshold. Rejection is recommended with strong motivation to work on the paper for next conference. This is potentially an important contribution. """ 993,"""Visual Imitation with Reinforcement Learning using Recurrent Siamese Networks""","['imitation learning', 'reinforcement learning', 'imitation from video']","""It would be desirable for a reinforcement learning (RL) based agent to learn behaviour by merely watching a demonstration. However, defining rewards that facilitate this goal within the RL paradigm remains a challenge. Here we address this problem with Siamese networks, trained to compute distances between observed behaviours and the agents behaviours. Given a desired motion such Siamese networks can be used to provide a reward signal to an RL agent via the distance between the desired motion and the agents motion. We experiment with an RNN-based comparator model that can compute distances in space and time between motion clips while training an RL policy to minimize this distance. Through experimentation, we have had also found that the inclusion of multi-task data and an additional image encoding loss helps enforce the temporal consistency. These two components appear to balance reward for matching a specific instance of a behaviour versus that behaviour in general. Furthermore, we focus here on a particularly challenging form of this problem where only a single demonstration is provided for a given task the one-shot learning setting. We demonstrate our approach on humanoid agents in both 2D with 10 degrees of freedom (DoF) and 3D with 38 DoF.""","""The main concern raised by reviewers is limited novelty, poor presentation, and limited experiments. All the reviewers appreciate the difficulty and importance of the problem. 
The rebuttal helped clarify novelty, but the other concerns remain.""" 994,"""Enforcing Physical Constraints in Neural Neural Networks through Differentiable PDE Layer""","['PDE', 'Hard Constraints', 'Turbulence', 'Super-Resolution', 'Spectral Methods']","""Recent studies at the intersection of physics and deep learning have illustrated successes in the application of deep neural networks to partially or fully replace costly physics simulations. Enforcing physical constraints on solutions generated by neural networks remains a challenge, yet it is essential to the accuracy and trustworthiness of such model predictions. Many systems in the physical sciences are governed by Partial Differential Equations (PDEs). Enforcing these as hard constraints, we show, is inefficient in conventional frameworks due to the high dimensionality of the generated fields. To this end, we propose a novel differentiable spectral projection layer for neural networks that efficiently enforces spatial PDE constraints using spectral methods, yet is fully differentiable, allowing for its use as a layer in neural networks that supports end-to-end training. We show that its computational cost is lower than that of a regular convolution layer. We apply it to an important class of physical systems: incompressible turbulent flows, where the divergence-free PDE constraint is required. We train a 3D Conditional Generative Adversarial Network (CGAN) for turbulent flow super-resolution efficiently, whilst guaranteeing the spatial PDE constraint of zero divergence. Furthermore, our empirical results show that the model produces realistic flow fields with more accurate flow statistics when trained with hard constraints imposed via the proposed differentiable spectral projection layer, as compared to soft-constrained and unconstrained counterparts.""","""This paper introduces an FFT-based loss function to enforce physical constraints in a CNN-based PDE solver. The proposed idea seems sensible, but the reviewers agreed that not enough attention was paid to baseline alternatives, and that a single example problem was not enough to understand the pros and cons of this method.""" 995,"""Music Source Separation in the Waveform Domain""","['source separation', 'audio synthesis', 'deep learning']","""Source separation for music is the task of isolating contributions, or stems, from different instruments recorded individually and arranged together to form a song. Such components include voice, bass, drums and any other accompaniments. While end-to-end models that directly generate the waveform are state-of-the-art in many audio synthesis problems, the best multi-instrument source separation models generate masks on the magnitude spectrum and achieve performance far above current end-to-end, waveform-to-waveform models. We present an in-depth analysis of a new architecture, which we will refer to as Demucs, based on a (transposed) convolutional autoencoder, with a bidirectional LSTM at the bottleneck layer and skip-connections as in U-Networks (Ronneberger et al., 2015). Compared to the state-of-the-art waveform-to-waveform model, Wave-U-Net (Stoller et al., 2018), the main features of our approach, in addition to the bi-LSTM, are the use of transposed convolution layers instead of upsampling-convolution blocks, the use of gated linear units, exponentially growing the number of channels with depth, and a new careful initialization of the weights. 
Results on the MusDB dataset show that our architecture achieves a signal-to-distortion ratio (SDR) nearly 2.2 points higher than the best waveform-to-waveform competitor (from 3.2 to 5.4 SDR). This makes our model match state-of-the-art performance on this dataset, bridging the performance gap between models that operate on the spectrogram and end-to-end approaches.""","""The paper proposes a waveform-to-waveform music source separation system. Experiments show that the proposed model achieves the best SDR among all existing waveform-to-waveform models and obtains performance similar to spectrogram-based ones. The paper is clearly written, and the experimental evaluation and ablation study are thorough. However, the main concern is limited novelty: the model is an improvement over the existing Wave-U-Net that adds some changes to the architecture to better model waveform data and compares masking vs. synthesis for music source separation. """ 996,"""Convolutional Bipartite Attractor Networks""","['attractor network', 'recurrent network', 'energy function', 'convolutional network', 'image completion', 'super-resolution']","""In human perception and cognition, a fundamental operation that brains perform is interpretation: constructing coherent neural states from noisy, incomplete, and intrinsically ambiguous evidence. The problem of interpretation is well matched to an early and often overlooked architecture, the attractor network---a recurrent neural net that performs constraint satisfaction, imputation of missing features, and clean-up of noisy data via energy minimization dynamics. We revisit attractor nets in light of modern deep learning methods and propose a convolutional bipartite architecture with a novel training loss, activation function, and connectivity constraints. We tackle larger problems than have been previously explored with attractor nets and demonstrate their potential for image completion and super-resolution. We argue that this architecture is better motivated than ever-deeper feedforward models and is a viable alternative to more costly sampling-based generative methods on a range of supervised and unsupervised tasks.""","""This paper proposes to reintroduce bipartite attractor networks and update them using ideas from modern deep net architectures. After some discussion, all three reviewers felt that the paper did not meet the ICLR bar, in part because of an insufficiency of quantitative results, and in part because the extension was considered fairly straightforward and the results unsurprising, and hence it did not meet the novelty bar. I therefore recommend rejection. """ 997,"""Understanding the functional and structural differences across excitatory and inhibitory neurons""",['Neuroscience'],"""One of the most fundamental organizational principles of the brain is the separation of excitatory (E) and inhibitory (I) neurons. In addition to their opposing effects on post-synaptic neurons, E and I cells tend to differ in their selectivity and connectivity. Although many such differences have been characterized experimentally, it is not clear why they exist in the first place. We studied this question in deep networks equipped with E and I cells. We found that salient distinctions between E and I neurons emerge across various deep convolutional recurrent networks trained to perform standard object classification tasks. 
We explored the necessary conditions for the networks to develop distinct selectivity and connectivity across cell types. We found that neurons that project to higher-order areas will have greater stimulus selectivity, regardless of whether they are excitatory or not. Sparser connectivity is required for higher selectivity, but only when the recurrent connections are excitatory. These findings demonstrate that the functional and structural differences observed across E and I neurons are not independent, and can be explained using a smaller number of factors.""","""This paper explores the role of excitatory and inhibitory neurons, and how their properties might differ, based on simulations. A few issues were raised during the review period, and I commend the authors for stepping up to address these comments and run additional experiments. It seems, though, that the reviewers' worries were borne out in the results of the additional experiments: ""1. The object classification task is not really relevant to elicit the observed behavior and 2. Inhibitory neurons are not essential (at least when training with batch norm)."" I hope the authors can make improvements in light of these observations, and discuss their implications in a future version of this paper. """ 998,"""Behaviour Suite for Reinforcement Learning""","['reinforcement learning', 'benchmark', 'core issues', 'scalability', 'reproducibility']","""This paper introduces the Behaviour Suite for Reinforcement Learning, or bsuite for short. bsuite is a collection of carefully designed experiments that investigate core capabilities of reinforcement learning (RL) agents with two objectives. First, to collect clear, informative and scalable problems that capture key issues in the design of general and efficient learning algorithms. Second, to study agents' behaviour through their performance on these shared benchmarks. To complement this effort, we open source pseudo-url, which automates evaluation and analysis of any agent on bsuite. This library facilitates reproducible and accessible research on the core issues in RL, and ultimately the design of superior learning algorithms. Our code is in Python, and easy to use within existing projects. We include examples with OpenAI Baselines and Dopamine, as well as new reference implementations. Going forward, we hope to incorporate more excellent experiments from the research community, and commit to a periodic review of bsuite by a committee of prominent researchers.""","""This paper proposes a platform for benchmarking and evaluating reinforcement learning algorithms. While reviewers had some concerns about whether such a tool was necessary given existing tools, reviewers who interacted with the tool found it easy to use and useful. Making such tools is often an engineering task and rarely aligned with typical research value systems, despite potentially acting as a public good. The success or failure of similar tools relies on community acceptance, and it is my belief that this tool surpasses the bar to be promoted to the community at a top-tier venue. """ 999,"""Regularizing Trajectories to Mitigate Catastrophic Forgetting""","['Continual Learning', 'Regularization', 'Adaptation', 'Natural Gradient']","""Regularization-based continual learning approaches generally prevent catastrophic forgetting by augmenting the training loss with an auxiliary objective. 
However, in most practical optimization scenarios with noisy data and/or gradients, it is possible that stochastic gradient descent can inadvertently change critical parameters. In this paper, we argue for the importance of regularizing optimization trajectories directly. We derive a new co-natural gradient update rule for continual learning whereby the new task gradients are preconditioned with the empirical Fisher information of previously learnt tasks. We show that using the co-natural gradient systematically reduces forgetting in continual learning. Moreover, it helps combat overfitting when learning a new task in a low-resource scenario.""","""The submission proposes a 'co-natural' gradient update rule to precondition the optimization trajectory using a Fisher information estimate acquired from previous experience. This results in reduced sensitivity and forgetting when new tasks are learned. The reviews were mixed on this paper, and unfortunately not all reviewers had enough expertise in the field. After reading the paper carefully, I believe that the paper has significance and relevance to the field of continual learning; however, it will benefit from more careful positioning with respect to other work as well as more empirical support. The application to the low-data regime is interesting and could be expanded and refined in a future submission. The recommendation is for rejection.""" 1000,"""Implicit -Jeffreys Autoencoders: Taking the Best of Both Worlds""","['Variational Inference', 'Generative Adversarial Networks']","""We propose a new form of autoencoding model which incorporates the best properties of variational autoencoders (VAE) and generative adversarial networks (GAN). It is known that GAN can produce very realistic samples, while VAE does not suffer from the mode collapse problem. Our model optimizes the -Jeffreys divergence between the model distribution and the true data distribution. We show that it takes the best properties of the VAE and GAN objectives. It consists of two parts. One of these parts can be optimized using standard adversarial training, and the second one is the very objective of the VAE model. However, the straightforward way of substituting the VAE loss does not work well if we use an explicit likelihood such as Gaussian or Laplace, which have limited flexibility in high dimensions and are unnatural for modelling images in the space of pixels. To tackle this problem, we propose a novel approach to train the VAE model with an implicit likelihood by an adversarially trained discriminator. In an extensive set of experiments on the CIFAR-10 and TinyImageNet datasets, we show that our model achieves state-of-the-art generation and reconstruction quality and demonstrate how we can balance between mode-seeking and mode-covering behaviour of our model by adjusting the weight in our objective. ""","""The paper received Weak Reject scores from all three reviewers. The AC has read the reviews and lengthy discussions and examined the paper. The AC feels that there is a consensus that the paper does not quite meet the acceptance threshold and thus cannot be accepted. Hopefully the authors can use the feedback to improve their paper and resubmit to another venue.""" 1001,"""iSparse: Output Informed Sparsification of Neural Networks""","['dropout', 'dropconnect', 'sparsification', 'deep learning', 'neural network']","""Deep neural networks have demonstrated unprecedented success in various knowledge management applications. 
However, the networks created are often very complex, with large numbers of trainable edges that require extensive computational resources. We note that many successful networks nevertheless often contain large numbers of redundant edges. Moreover, many of these edges may have negligible contributions towards the overall network performance. In this paper, we propose a novel iSparse framework and experimentally show that we can sparsify the network by 30-50% without impacting network performance. iSparse leverages a novel edge significance score, E, to determine the importance of an edge with respect to the final network output. Furthermore, iSparse can be applied either while training a model or on top of a pre-trained model, making it a retraining-free approach with minimal computational overhead. Comparisons of iSparse against PFEC, NISP, DropConnect, and Retraining-Free on benchmark datasets show that iSparse leads to effective network sparsification.""","""Thank you very much for your responses to the reviewers, which helped us a lot in better understanding your paper. However, the paper is still premature to be accepted to ICLR2020. We hope that the detailed reviewers' comments help you improve your paper for a potential future submission. """ 1002,"""Proactive Sequence Generator via Knowledge Acquisition""","['neural machine translation', 'knowledge distillation', 'exposure bias', 'reinforcement learning']","""Sequence-to-sequence models such as transformers, which are now being used in a wide variety of NLP tasks, typically need to have very high capacity in order to perform well. Unfortunately, in production, memory size and inference speed are both strictly constrained. To address this problem, Knowledge Distillation (KD), a technique to train small models to mimic larger pre-trained models, has drawn lots of attention. The KD approach basically attempts to maximize recall, i.e., to rank the Top-k tokens of the teacher model as high as possible, whereas precision is more important for sequence generation because of exposure bias. Motivated by this, we develop Knowledge Acquisition (KA), where student models receive log q(y_t|y_{