{"forum": "B1e-kxSKDH", "submission_url": "https://openreview.net/forum?id=B1e-kxSKDH", "submission_content": {"authorids": ["kossen@stud.uni-heidelberg.de", "stelzner@cs.tu-darmstadt.de", "marcel.hussing@stud.tu-darmstadt.de", "c.voelcker@stud.tu-darmstadt.de", "kersting@cs.tu-darmstadt.de"], "title": "Structured Object-Aware Physics Prediction for Video Modeling and Planning", "authors": ["Jannik Kossen", "Karl Stelzner", "Marcel Hussing", "Claas Voelcker", "Kristian Kersting"], "pdf": "/pdf/76907f6dbb663f21a53e495b4d4d46aa1cb59af6.pdf", "TL;DR": "We propose a structured object-aware video prediction model, which explicitly reasons about objects and demonstrate that it provides high-quality long term video predictions for planning.", "abstract": "When humans observe a physical system, they can easily locate components, understand their interactions, and anticipate future behavior, even in settings with complicated and previously unseen interactions. For computers, however, learning such models from videos in an unsupervised fashion is an unsolved research problem.  In this paper, we present STOVE, a novel state-space model for  videos, which explicitly reasons about objects and their positions, velocities, and interactions. It is constructed by combining an image model and a dynamics model in compositional manner and improves on previous work by reusing the dynamics model for inference, accelerating and regularizing training. STOVE predicts videos with convincing physical behavior over hundreds of timesteps, outperforms previous unsupervised models, and even approaches the performance of supervised baselines. We further demonstrate the strength of our model as a simulator for sample efficient model-based control, in a task with heavily interacting objects.\n", "code": "https://github.com/ICLR20/STOVE", "keywords": ["self-supervised learning", "probabilistic deep learning", "structured models", "video prediction", "physics prediction", "planning", "variational auteoncoders", "model-based reinforcement learning", "VAEs", "unsupervised", "variational", "graph neural networks", "tractable probabilistic models", "attend-infer-repeat", "relational learning", "AIR", "sum-product networks", "object-oriented", "object-centric", "object-aware", "MCTS"], "paperhash": "kossen|structured_objectaware_physics_prediction_for_video_modeling_and_planning", "_bibtex": "@inproceedings{\nKossen2020Structured,\ntitle={Structured Object-Aware Physics Prediction for Video Modeling and Planning},\nauthor={Jannik Kossen and Karl Stelzner and Marcel Hussing and Claas Voelcker and Kristian Kersting},\nbooktitle={International Conference on Learning Representations},\nyear={2020},\nurl={https://openreview.net/forum?id=B1e-kxSKDH}\n}", "full_presentation_video": "", "original_pdf": "/attachment/c2c4c247cd3fb0bbb300b5b4ef023d39c0000f4b.pdf", "appendix": "", "poster": "", "spotlight_video": "", "slides": ""}, "submission_cdate": 1569439704542, "submission_tcdate": 1569439704542, "submission_tmdate": 1583912034199, "submission_ddate": null, "review_id": ["S1gNkPJnYr", "rylOjQdDuS", "BJgAmQbAFB"], "review_url": ["https://openreview.net/forum?id=B1e-kxSKDH&noteId=S1gNkPJnYr", "https://openreview.net/forum?id=B1e-kxSKDH&noteId=rylOjQdDuS", "https://openreview.net/forum?id=B1e-kxSKDH&noteId=BJgAmQbAFB"], "review_cdate": [1571710683789, 1570370463812, 1571848998489], "review_tcdate": [1571710683789, 1570370463812, 1571848998489], "review_tmdate": [1573703330899, 1573551788900, 1572972389562], "review_readers": [["everyone"], 
["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2020/Conference/Paper2050/AnonReviewer3"], ["ICLR.cc/2020/Conference/Paper2050/AnonReviewer1"], ["ICLR.cc/2020/Conference/Paper2050/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1e-kxSKDH", "B1e-kxSKDH", "B1e-kxSKDH"], "review_content": [{"experience_assessment": "I have published in this field for several years.", "rating": "6: Weak Accept", "review_assessment:_checking_correctness_of_experiments": "I carefully checked the experiments.", "review_assessment:_thoroughness_in_paper_reading": "I read the paper thoroughly.", "title": "Official Blind Review #3", "review": "In this paper the authors present a graph neural network for modeling the dynamics of objects in simple environments from video. The intuition of the presented system is that it first identifies the different objects from the image using Sum-Product Attend-Infer-Repeat (SuPAIR), which gives the objects positions and sizes. The system uses a \u201csimple matching procedure\u201d to map objects between frames, which allows for the system to extra the object\u2019s velocities. Then a graph neural network is employed to model the dynamics of the particular environment (whether objects bounce, whether there are other forces at play like gravity, etc.). The authors present two environments (Billiards and Gravity) and two evaluations, one focused on predicting future states, and the second focused on using these predictions to play the game. \n\nI think that this paper presents an interesting approach and I agree with the authors of the importance of developing approaches that allow AI to make good predictions of future environments. However, I\u2019m not convinced of many of the technical details in the paper. \n\nI am not certain whether I would classify this work as unsupervised learning. While it\u2019s certainly true that there are no labels in the raw video, the object-finding can be understood as a preprocessing step after which the data is in fact in a fairly standard supervised learning framework. The authors use the term \u201cself-supervised\u201d in the first section, which I believe describes the work more clearly. \n\nThe primary technical contributions of the work appear to be the graph network, the experiments, and their results. While I would have preferred more detail on the graph network in an appendix, it\u2019s acceptable to instead have access to the code. However, the experiments seem set up primarily to evaluate the system as a whole. For example, the inclusion of a supervised learning version of the system where the object\u2019s positions are given exactly sheds light on the quality of SuPAIR. However, SuPAIR is taken from prior work. I would have thought that an entirely different approach, like that used by Ha and Schmidhuber in their World Models paper would have been more appropriate as a comparison as it represents an alternate approach entirely. \n\nThere is a repeated claim made in the paper that the system presents output that is \u201cconvincing\u201d and \u201crealistic\u201d over hundreds of time steps. There is no clear definition given for what this means. Figure 1 only presents pixel and positional error for 80 frames, and the error appears to go pretty large (~15%) after only forty frames. The results presented in Figure 4 suggests a much larger timescale, but it\u2019s unclear the quality of the output predictions from it. 
Some clarity on this or scaling back the claims would improve the paper. \n\nIn terms of related work Guzdial and Riedl\u2019s 2017 \u201cGame Engine Learning from Gameplay Video\u201d appear to use a very similar approach (but with OpenCV instead of SuPAIR and search instead of a graph network) as does Ersen and Sariel\u2019s 2015 \u201cLearning behaviors of and interactions among objects through spatio\u2013temporal reasoning\u201d. These approaches also function over much more complex environments with variable numbers of objects. It would be helpful for the authors to continue adding some discussion of this and related papers. \n\n---\n\nEdit: In response to the author's changes I have increased my rating to a weak accept. This is in large part due to Figure 4, which provides a great deal of additional support to the author's claims and clarity on the technical value of the results. ", "review_assessment:_checking_correctness_of_derivations_and_theory": "N/A"}, {"experience_assessment": "I have published one or two papers in this area.", "rating": "6: Weak Accept", "review_assessment:_checking_correctness_of_experiments": "I assessed the sensibility of the experiments.", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "title": "Official Blind Review #1", "review": "This paper introduces a structured deep generative model for video frame prediction, with an object recognition model based on the Attend, Infer, Repeat (AIR) model by Eslami et al. (2016) and a graph neural network as a latent dynamics model. The model is evaluated on two synthetic physics simulation datasets (N-body gravitational systems and bouncing billiard balls) for next frame prediction and on a control task in the billiard domain. The model can produce accurate predictions for several time steps into the future and beats a variational RNN and SQAIR (sequential AIR variant) baseline, and is more sample-efficient than a model-free PPO agent in the control task.\n\nOverall, the paper is well-structured, nicely written and addresses an interesting and challenging problem. The experiments use simple domains/problems, but give good insights into how the model performs.\n\nRelated work is covered to a satisfactory degree, but a discussion of some of the following closely related papers could improve the paper:\n* Chang et al., A Compositional Object-Based Approach To Learning Physical Dynamics, ICLR 2017\n* Greff et al., Neural Expectation Maximization, NeurIPS 2017\n* Kipf et al., Neural Relational Inference for Interacting Systems, ICML 2018\n* Greff et al., Multi-object representation learning with iterative variational inference, ICML 2019\n* Sun et al., Actor-centric relation network, ECCV 2018\n* Sun et al., Relational Action Forecasting, CVPR 2019\n* Wang et al., NerveNet: Learning structured policy with graph neural networks, ICLR 2018\n* Xu et al., Unsupervised discovery of parts, structure and dynamics, ICLR 2019\n* Erhardt et al., Unsupervised intuitive physics from visual observations, ACCV 2018\n\nIn terms of clarity, the paper could be improved by making the used model architecture more explicit, e.g., by adding a model figure, and by providing an introduction to the SuPAIR model (Stelzner et al., 2019) \u2014 the authors assume that the reader is more or less familiar with this particular model. 
It is further unclear how exactly the input data is provided to the model; Figure 2 makes it seem that inputs are colored frames, section 3.1 mentions that inputs are grayscale videos (do all objects have the same appearance or different shades of gray?), which is in conflict with the statement on page 5 that the model is provided with mean values of input color channels. Please clarify.\n\nIn terms of novelty, the proposed modification of SQAIR (separating object detection and latent dynamics prediction) is novel and likely leads to a speed-up in training and evaluation. Using a Graph Neural Network for modeling latent physics is reasonable and has been shown to work on related problems before (see referenced work above and related work mentioned in the paper). Similarly, using such a model for planning/control is interesting and adds to the value of the paper, but has in related settings been explored before (e.g. Wang et al. (ICLR 2018) and Sanchez-Gonzalez (ICML 2018)).\n\nExperimentally, it would be good to provide ablation studies (e.g. a different object detection module like AIR instead of SuPAIR, not splitting the latent variables into position, velocity, size etc.) and run-time comparisons (wall-clock time), as one of the main contributions of the paper is that the proposed model is claimed to be faster than SQAIR. The overall model predictions are (to my surprise) somewhat inaccurate, when looking at e.g. the billiard ball example in Figure 2. In Steenkiste et al. (ICLR 2018), roll-outs appear to be more accurate. Maybe a quantitative experimental comparison could help?\n\nWhy does the proposed model perform worse than a model-free PPO baseline when trained to convergence on the control task? What is missing to close this gap?\n\nDo all objects have the same appearance (color/greyscale values) or are they unique in appearance? In the second case, a simpler encoder architecture could be used such as in Jaques et al. (2019) or Xu et al. (ICLR 2019).\n\nOverall, I think that this paper addresses an important issue and is potentially of high interest to the community. Nonetheless I think that this paper needs a bit more work and at this point I recommend a weak reject.\n\nOther comments:\n* This sentence is unclear to me: \u201cAn additional benefit of this approach is that the information learned by the dynamics model is reused for inference \u2014 [\u2026]\u201d\n* What are the failure modes of the model? Where does it break down? \n* How does the model deal with partial occlusion?\n\n---------------------\nUPDATE (after reading the author response and the revised manuscript): My questions and comments are addressed and the additional ablation studies and experimental results on energy conservation are convincing and insightful. 
I think the revised version of the paper meets the bar for acceptance at ICLR.\n", "review_assessment:_checking_correctness_of_derivations_and_theory": "I did not assess the derivations or theory."}, {"experience_assessment": "I have published one or two papers in this area.", "rating": "6: Weak Accept", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "review_assessment:_checking_correctness_of_experiments": "I assessed the sensibility of the experiments.", "title": "Official Blind Review #2", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory.", "review": "This paper presents STOVE, an object-centric structured model for predicting the dynamics of interacting objects. It extends SuPAIR, a probabilistic deep model based on Sum-Product Networks, towards modeling multi-object interactions in video sequences. Compared to prior work, the model uses graph neural networks for learning the transition dynamics and reuses the dynamics model for the state-space inference model, further regularising the learning process. The approach has been tested on simple multi-body physics tasks and performs well compared to other unsupervised and supervised baselines. Additionally, an action-conditional version of STOVE was tested on a visual MPC task (using MCTS for planning) and was shown to learn significantly faster compared to model-free baselines.\n\nThe paper is well written and clearly motivated but comes across as an incremental improvement on top of prior work. Here are a few comments:\n1. The idea of reusing the dynamics model for inference is neat, as it helps to regularise the learning process and remove the costly double recurrence, potentially speeding up learning. It would be great if this could be evaluated experimentally via an ablation study \u2014 this can be done by using two separate instances of the transition model with separate weights. \n2. A key step that allows reconciling the transition model and the object detection network is the matching process. Currently, this is done by choosing the pair with the least position and velocity difference between subsequent time steps. This could give erroneous results in the case of object interactions when objects are fairly close to each other (or colliding). A potentially better way could be to additionally use the content/latent codes for this matching process \u2014 as long as the object\u2019s appearance stays similar, these can provide a good signal that disambiguates different objects.\n3. The experiments presented in the paper are quite simplistic visually \u2014 it is not clear if this approach can generalise to more complicated visual settings. Additionally, it would be good to see further comparisons and ablations that quantify the effect of the different components \u2014 e.g. comparing to a combination of image model + black-box MLP dynamics model can quantify the effect of the graph neural network. These results can add further strength to the paper. \n\nOverall, the approach presented in the paper is a bit incremental and the experiments are somewhat simplistic. Further comparisons and ablation experiments can significantly strengthen the paper. 
I would suggest a borderline accept."}], "comment_id": ["r1lMMgfssr", "H1lg7JLcsB", "HkgCxPNusB", "rJlPFpxdiH", "HylUcrfQsr", "ryg8vXzXjr", "rJlOFefQjr", "SJxYFRbQoS"], "comment_cdate": [1573752842483, 1573703448322, 1573566197941, 1573551487065, 1573229965856, 1573229406437, 1573228672493, 1573228161233], "comment_tcdate": [1573752842483, 1573703448322, 1573566197941, 1573551487065, 1573229965856, 1573229406437, 1573228672493, 1573228161233], "comment_tmdate": [1573752842483, 1573703448322, 1573566197941, 1573551487065, 1573229965856, 1573229406437, 1573228672493, 1573228161233], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2020/Conference/Paper2050/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper2050/AnonReviewer3", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper2050/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper2050/AnonReviewer1", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper2050/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper2050/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper2050/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper2050/Authors", "ICLR.cc/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Response to updated review #3", "comment": "Thank you for updating your evaluation. We are glad that the revision addressed your concerns."}, {"title": "Reviewer #3 response to author response", "comment": "Thank you for the clear response and updated document. I have updated my review in response as I now believe that the paper should be accepted."}, {"title": "Response to updated review #1", "comment": "Thank you for updating your review. We will make sure to stick to the term \"graph neural network\" in the camera-ready version."}, {"title": "Reviewer (#1) response to author response", "comment": "Thank you for your detailed response. My questions and comments are addressed and I think the revised version of the paper meets the bar for acceptance at ICLR.\n\nOne minor note: In the revised version of the paper, you use \u201cgraph network\u201d and \u201cgraph neural network\u201d interchangeably \u2014 maybe you could consider consistently just using either one of the two terms to avoid potential confusion."}, {"title": "Response to Reviewer 1", "comment": "Dear Reviewer 1,\n\nthank you for your valuable feedback.\nBelow, we give a detailed response to your questions and comments.\nPlease also see the changes to the manuscript outlined in our top level comment.\n\n[Added a \"Model Figure\"]\nWe have revised Figure 1 to include a visualisation of the latent space and the corresponding recognition distributions. We hope this clarifies the model structure.\n\n[Introduction to SuPAIR]\nWe chose to omit details on SuPAIR as they are not required for understanding STOVE - in principle, any image model delivering a likelihood p(x | z_where) based on location information z_where could be used in its stead, including AIR. As said, we mainly chose SuPAIR due to its fast training times. If you have specific suggestions for what should be clarified about SuPAIR, we will be glad to do so.\n\n[Color vs. 
Grayscale]\nFor the video modeling task, we use grayscale images in which all objects are the same shade of white. Color has been added to Figure 2 to make it more readable. For the RL task, we use colored images such that the models may recognize the object which is controlled by the agent. The mean values per color channel are added to each object's state, as a simple encoding of appearance. We clarified this in the revision.\n\n[RL experiments]\nThe main motivation of our RL experiments is to demonstrate planning based on an object-aware dynamics model learned on purely visual input, which to our knowledge has not been done in prior work. Wang et al. use GNNs very differently from us, by employing them in a model-free policy network. Sanchez-Gonzalez et al., like us, use GNNs as a dynamics model for planning, but assume access to the ground truth states as opposed to inferring them from images.\n\n[Realistic Rollouts]\nWe find that STOVE significantly improves upon prior work in that it predicts physical behavior across long timeframes, instead of stopping or teleporting objects. We quantify this in the revision by plotting the conservation of kinetic energy in the rollouts, which STOVE achieves up to at least 100,000 steps, while DDPAE and SQAIR break down after less than 100. See (1) of our top level comment and the animated GIFs in our anonymized GitHub [1].\n\n[Ablations]\nIn the revision, we provide results for three ablations (see (3) in our general comment and Table 1), including two with an ablated state representation. We did not explore AIR as an alternative object detector, since we chose SuPAIR for its faster training times. We do not claim, or even expect, that AIR would perform worse.\n\n[Steenkiste et al.]\nFor a visual evaluation, please compare our animated rollouts [1] with the ones presented by Steenkiste et al. [2, very bottom]. We find that STOVE more accurately captures object permanence and energy conservation. We decided against a quantitative comparison due to qualitative differences:\n(a) R-NEM requires around 10 given observations before the iterative inference procedure converges to a good segmentation,\n(b) it does not explicitly model object positions, and\n(c) it requires noisy input to avoid local minima.\nWe have instead added DDPAE as a baseline. See (2) in our top level comment.\n\n[1] https://github.com/ICLR20/STOVE\n[2] https://sites.google.com/view/r-nem-gifs/\n\n[RL Performance]\nThe performance of MCTS+STOVE was very close to the performance of MCTS on the ground truth environment. This indicates that the weak point of the agent was not the model (STOVE), but rather the planner, and that more thorough planning would allow it to match PPO's performance. Since the goal of our RL experiments was to highlight the applicability and sample efficiency of our model in the RL domain, we opted for an off-the-shelf planner instead of tuning for final performance. \n\n[Suggested Related Work]\nThank you for the references; we have added them.\n\n[Reuse of Dynamics Model]\nPrevious models, such as SQAIR and DDPAE, use an inference distribution $q(z_t | x_t, z_{t-1})$ which is entirely separate from the generative dynamics model $p(z_t | z_{t-1})$. We argue that this is wasteful, as much of the knowledge captured by the generative dynamics model is also relevant for the inference network. We therefore reuse it in our formulation of the inference network (Eq. 2), saving model parameters and regularizing training. 
We explore the benefits of this in one of the new ablations (\"double dynamics\").\n\n[Failure Modes]\nThe main failure mode is that the inductive bias in the image model is insufficient to reliably detect objects. See Stelzner et al. for a discussion of noisy backgrounds in SuPAIR. In addition, our matching procedure assumes that objects move continuously.\n\n[Occlusion]\nOcclusion is explicitly modelled in SuPAIR: If objects overlap, the hidden parts of the occluded object are treated as unobserved, and therefore marginalized during the evaluation of the object appearances' likelihood.\n\nWe hope that the changes made will address your concerns and look forward to further discussion."}, {"title": "Response to Reviewer 3", "comment": "Dear Reviewer 3,\n\nthank you for your valuable feedback.\nBelow, we give a detailed response to your questions and comments.\n\n[Realistic Rollouts]\nWe have quantified the notion of realistic rollouts by adding a plot of the kinetic energy in the billiards ball system across prediction timesteps. This energy should be conserved, as collisions are fully elastic and energies thus remain constant in the training data. For STOVE, the mean energy remains constant even over extremely long timeframes (we checked up to 100,000 steps), whereas for the baselines, it quickly diverges (after less than 100 steps). While in chaotic systems like the billiards environment, model predictions will necessarily differ from the ground truth after a number of timesteps, it is a desirable property of STOVE to continue to exhibit physical behavior. In contrast, all baselines predict overlapping, stopping, or teleporting objects after a short period. This can be observed visually in our animated GIFs [1].\n\n[1] https://github.com/ICLR20/STOVE\n\n[Unsupervised Learning]\nWe agree that 'self-supervised' is a good term for STOVE. However, we do not view the end-to-end learning approach of STOVE as equivalent to decomposing the task into two distinct steps, one for feature extraction and one for supervised prediction. Kosiorek et al. (SQAIR) have shown that training dynamics and recognition models jointly can significantly improve object detection performance through the incorporation of a temporal consistency bias. We therefore believe that maintaining this coupling is a valuable feature of STOVE. In any case, the successive training of SuPAIR and dynamics model is more brittle and raises the need for additional auxiliary losses (as in Watters et al. (2017)), such as a carefully tuned discounted rollout error.\n\n[Contribution]\nAs requested, we have added detailed information on the graph neural network and other components of STOVE to the appendix. We disagree with the assessment that our paper's main contribution is the graph network architecture. The benefits of relational architectures for multi-object dynamics tasks have previously been demonstrated, e.g. by Battaglia et al. (2016) and Watters et al. (2017). What has not been done before is to employ them in a setting in which state information is entirely latent, and only raw video is available. Our main contributions are to show how to do this (structured latent space, reuse of the dynamics model, joint variational inference), and to demonstrate that this enables predictions of comparable quality to the supervised setting with observed states. 
This comparison does not merely evaluate SuPAIR, but rather the techniques we proposed for connecting image and dynamics models.\n\n[Ha & Schmidhuber]\nWe compare to VRNN, which belongs to the same class of model as the one Ha & Schmidhuber propose. Both encode input images via a VAE, and model the dynamics of the latent state via an RNN. It has been repeatedly demonstrated in the literature that models with object-factorized state representations such as STOVE outperform models with unstructured states, and our results support this, too. See e.g. the papers on SQAIR (Kosiorek et al., (2018)), and DDPAE (Hsieh et al., (2018)). We therefore deem a comparison to VRNN as a representative of unstructured models sufficient.\n\n[Diverse Number of Objects]\nEven though we did not explore this in this paper, one of the main appeals of both GNNs and AIR-based models is the ability to handle a variable number of objects. This is enabled by the GNNs focus on pairwise interactions. STOVE can thus be easily extended to handle a variable number of objects. As an ad-hoc demonstration, we provide an animated rollout with 6 objects on our GitHub [1].\n\n[Game Engine Learning]\nBoth Ersen & Sariel and Guzdial & Riedl share our motivation of learning the rules of games from video, we have therefore added the references. However, they explore a very different setting, since they assume access to a curated set of sprites to handle object detection, and use logical rules instead of continuous dynamics to model interactions. We find it misleading to credit these works with being able to handle more complex visual environments, as the a-priori knowledge of pixel-perfect object appearances trivializes the detection task. The goal of the field of representation learning, including AIR and all of its derivatives, is to extract meaningful, potentially discrete information from noisy and continuous input data without relying on domain specific knowledge. While hand-engineered approaches to object detection would certainly work on the domains we considered here, the techniques we present in this paper generalize to different image models and different environments. It is our hope that models like ours will make it possible to apply logical reasoning to domains where it was previously impossible, because of their continuous and noisy nature, and the absence of domain-specific knowledge.\n"}, {"title": "Response to Reviewer 2", "comment": "Dear Reviewer 2,\n\nthank you for your valuable feedback.\nBelow, we give a detailed response to your questions and comments.\nPlease also see the changes to the manuscript outlined in our top level comment.\n\n[Ablations]\nWe have added results for three different ablations of STOVE, including the suggested one in which two separate dynamics nets are used for generation and inference, demonstrating the value of reusing the dynamics net. Please see (3) in our general comment and Table 1 in the manuscript. We have chosen not to explore black-box MLPs as dynamics models, as the benefits of graph neural networks for multi-object dynamics tasks are well documented in the literature, see e.g. Battaglia et al. (2016) and Watters et al. (2017). We therefore do not believe this to be a crucial baseline.\n\n[Appearance-Based Matching]\nWe agree, and have tried matching procedures which involve object appearance encodings. However, one of the main features of SuPAIR in contrast to AIR is that it does not necessitate a latent encoding of the object appearance. 
This means that an encoder network would have to be 'tacked on' to the model in order to allow for appearance-based matching, as mentioned in Section 2.4. We did not find this necessary, since for the settings we considered, STOVE precisely inferred object centers with a mean error of less than 1/3 of a pixel, which suffices even during collisions or in scenarios with partial overlap. We therefore leave the exploration of appearance-based matching to future work.\n\n[Visual Complexity]\nThe visual complexity of scenes and the robustness of SuPAIR with respect to visual noise have been explored by Stelzner et al. We expect that these results translate to STOVE, i.e., that STOVE is able to handle background noise better than AIR (and, by extension, DDPAE and SQAIR). Figure 5 in the appendix shows that we are able to model scenes of differently shaped object sprites. However, we did not focus on this in this paper, as its main contributions are the techniques presented to combine image and dynamics models, as opposed to the performance of the specific image model used. Due to the compositional nature of STOVE, more sophisticated image models may easily be plugged in in place of SuPAIR. Finally, we note that the complexity of the experiments is in line with previous work (DDPAE, R-NEM). We chose to extend them by exploring the RL domain, which brings additional challenges, such as dynamics depending on object identities and actions.\n\n[Meaningful Improvement]\nPlease see (1) of our top level comment, as we believe the energy conservation plot clearly demonstrates the stark performance improvements achieved with STOVE over prior work. While previous approaches break down after less than 100 frames of rollout, STOVE predicts trajectories with constant mean energy for 100,000 frames or more. Additionally, DDPAE and SQAIR predict overlapping, stopping, or teleporting objects after a short period. Apart from the added conservation plot, this is also apparent from the animated GIFs in our anonymized GitHub [1].\n\n[1] https://github.com/ICLR20/STOVE"}, {"title": "Revision", "comment": "We thank the reviewers for their valuable feedback and have revised the manuscript accordingly.\n\nThe main changes are:\n\n1) We quantify the notion of 'realistic' rollouts by plotting the kinetic energy of rollouts on the billiards task (Fig. 4). This energy should be conserved, as collisions are fully elastic and no friction is applied. We find that the energy in STOVE's rollouts remains constant over very long timeframes (we checked up to 100,000 steps), whereas it quickly diverges for the baselines (SQAIR and DDPAE) after less than 100 steps. Additionally, all baselines predict overlapping, stopping, or teleporting objects after a short period. This stark difference in quality can be observed visually in our animated GIFs [1]. \n\n2) We have added DDPAE as a baseline for the video prediction tasks. According to its authors, DDPAE is capable of handling complex interactions on the billiards task. In our experiments, it performs better than SQAIR, but significantly worse than STOVE.\n\n3) We have added results for three of the suggested ablations to Table 1. 
They are:\n   a) Double Dynamics Networks (two separate dynamics nets for inference and generation),\n   b) No Velocity (no explicit modelling of the velocity within the state),\n   c) No Latents (no unstructured latent variables in the dynamics state, only positions and velocities).\nWe find that they all perform consistently worse than full STOVE, demonstrating each component's value.\n\n4) We fixed a bug in our RL environment and adjusted the results accordingly. PPO now converges slightly faster (4M instead of 5M steps), but all high-level observations remain the same.\n\n5) We have improved the clarity of the writing.\n\n6) We have added a detailed description of the model architectures, hyperparameters, and baselines to the appendix.\n\n7) We have included the suggested additional references.\n\nWe hope these changes address the concerns expressed by the reviewers and look forward to further discussion.\n\n[1] github.com/ICLR20/STOVE\n"}], "comment_replyto": ["H1lg7JLcsB", "HylUcrfQsr", "rJlPFpxdiH", "HylUcrfQsr", "rylOjQdDuS", "S1gNkPJnYr", "BJgAmQbAFB", "B1e-kxSKDH"], "comment_url": ["https://openreview.net/forum?id=B1e-kxSKDH&noteId=r1lMMgfssr", "https://openreview.net/forum?id=B1e-kxSKDH&noteId=H1lg7JLcsB", "https://openreview.net/forum?id=B1e-kxSKDH&noteId=HkgCxPNusB", "https://openreview.net/forum?id=B1e-kxSKDH&noteId=rJlPFpxdiH", "https://openreview.net/forum?id=B1e-kxSKDH&noteId=HylUcrfQsr", "https://openreview.net/forum?id=B1e-kxSKDH&noteId=ryg8vXzXjr", "https://openreview.net/forum?id=B1e-kxSKDH&noteId=rJlOFefQjr", "https://openreview.net/forum?id=B1e-kxSKDH&noteId=SJxYFRbQoS"], "meta_review_cdate": 1576798739185, "meta_review_tcdate": 1576798739185, "meta_review_tmdate": 1576800897133, "meta_review_ddate ": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "The paper presents a method for modeling videos with object-centric structured representations. The paper is well written and clearly motivated. Using a Graph Neural Network for modeling latent physics is a sensible idea and can be beneficial for planning/control. Experimental results show improved performance over the baselines. After the rebuttal, many questions/concerns from the reviewers were addressed, and all reviewers recommend weak acceptance.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1e-kxSKDH&noteId=9AvyA73Dd"], "decision": "Accept (Poster)"}
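Two technical points recur throughout the thread: the frame-to-frame object matching by smallest position/velocity difference (Review #2, point 2) and the kinetic-energy-conservation check added in the revision (item 1, Fig. 4). The following is a minimal illustrative sketch of both ideas, not the authors' code: the function names, array shapes, unit masses, and the greedy assignment strategy are assumptions made for this example.

    # Illustrative sketch only (not the STOVE implementation); names and shapes are assumptions.
    import numpy as np

    def match_objects(prev_pos, prev_vel, cur_pos, dt=1.0):
        """Assign each previously detected object to a detection in the current frame.

        prev_pos, cur_pos: (N, 2) arrays of object centres; prev_vel: (N, 2) velocities.
        Objects are paired by the smallest difference between their predicted position
        (previous position plus velocity) and the new detections.
        Returns perm such that cur_pos[perm] is aligned with prev_pos.
        """
        predicted = prev_pos + dt * prev_vel
        cost = np.linalg.norm(predicted[:, None, :] - cur_pos[None, :, :], axis=-1)  # (N, N)
        perm = np.full(len(prev_pos), -1, dtype=int)
        taken = set()
        for i in np.argsort(cost.min(axis=1)):  # handle the most confident objects first
            for j in np.argsort(cost[i]):
                if j not in taken:
                    perm[i] = j
                    taken.add(j)
                    break
        return perm

    def kinetic_energy(vel, mass=1.0):
        """Total kinetic energy of one frame, vel of shape (N, 2). With fully elastic
        collisions and no friction, this should stay constant along a physical rollout,
        which is the property the revision's Fig. 4 tracks for STOVE and the baselines."""
        return 0.5 * mass * float(np.sum(vel ** 2))

    # Example: measure energy drift of a rollout, given object positions per frame.
    positions = np.cumsum(np.random.randn(100, 3, 2) * 0.01, axis=0)  # stand-in rollout
    velocities = np.diff(positions, axis=0)                            # finite differences
    energies = np.array([kinetic_energy(v) for v in velocities])
    drift = np.abs(energies - energies[0]).mean()

For the collision case that Review #2 flags as problematic for greedy matching, a global assignment (e.g. scipy.optimize.linear_sum_assignment on the same cost matrix) would be the more robust choice, at the price of an extra dependency.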