{"forum": "B1g79grKPr", "submission_url": "https://openreview.net/forum?id=B1g79grKPr", "submission_content": {"authorids": ["oleh@seas.upenn.edu", "pertsch@usc.edu", "febert@berkeley.edu", "dineshjayaraman@berkeley.edu", "cbfinn@cs.stanford.edu", "svlevine@eecs.berkeley.edu"], "title": "Goal-Conditioned Video Prediction", "authors": ["Oleh Rybkin", "Karl Pertsch", "Frederik Ebert", "Dinesh Jayaraman", "Chelsea Finn", "Sergey Levine"], "pdf": "/pdf/bdca1a749a11f6cf7f6931bb8fbe74dcaac5e30c.pdf", "TL;DR": "We propose a new class of visual generative models: goal-conditioned predictors. We show experimentally that conditioning on the goal allows to reduce uncertainty and produce predictions over much longer horizons.", "abstract": "Many processes can be concisely represented as a sequence of events leading from a starting state to an end state. Given raw ingredients, and a finished cake, an experienced chef can surmise the recipe. Building upon this intuition, we propose a new class of visual generative models: goal-conditioned predictors (GCP). Prior work on video generation largely focuses on prediction models that only observe frames from the beginning of the video. GCP instead treats videos as start-goal transformations, making video generation easier by conditioning on the more informative context provided by the first and final frames. Not only do existing forward prediction approaches synthesize better and longer videos when modified to become goal-conditioned, but GCP models can also utilize structures that are not linear in time, to accomplish hierarchical prediction. To this end, we study both auto-regressive GCP models and novel tree-structured GCP models that generate frames recursively, splitting the video iteratively into finer and finer segments delineated by subgoals. In experiments across simulated and real datasets, our GCP methods generate high-quality sequences over long horizons. Tree-structured GCPs are also substantially easier to parallelize than auto-regressive GCPs, making training and inference very efficient, and allowing the model to train on sequences that are thousands of frames in length.Finally, we demonstrate the utility of GCP approaches for imitation learning in the setting without access to expert actions. 
Videos are on the supplementary website: https://sites.google.com/view/video-gcp", "keywords": ["predictive models", "video prediction", "latent variable models"], "paperhash": "rybkin|goalconditioned_video_prediction", "original_pdf": "/attachment/75e6b37d0afb9d5dfd0a1f5945ea11263dd75249.pdf", "_bibtex": "@misc{\nrybkin2020goalconditioned,\ntitle={Goal-Conditioned Video Prediction},\nauthor={Oleh Rybkin and Karl Pertsch and Frederik Ebert and Dinesh Jayaraman and Chelsea Finn and Sergey Levine},\nyear={2020},\nurl={https://openreview.net/forum?id=B1g79grKPr}\n}"}, "submission_cdate": 1569439882856, "submission_tcdate": 1569439882856, "submission_tmdate": 1577168293398, "submission_ddate": null, "review_id": ["HyedaO2pKH", "rJxnKiMaKB", "HklBPG5AFS"], "review_url": ["https://openreview.net/forum?id=B1g79grKPr&noteId=HyedaO2pKH", "https://openreview.net/forum?id=B1g79grKPr&noteId=rJxnKiMaKB", "https://openreview.net/forum?id=B1g79grKPr&noteId=HklBPG5AFS"], "review_cdate": [1571829952445, 1571789699863, 1571885660712], "review_tcdate": [1571829952445, 1571789699863, 1571885660712], "review_tmdate": [1574378646728, 1572972334847, 1572972334747], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2020/Conference/Paper2465/AnonReviewer2"], ["ICLR.cc/2020/Conference/Paper2465/AnonReviewer3"], ["ICLR.cc/2020/Conference/Paper2465/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1g79grKPr", "B1g79grKPr", "B1g79grKPr"], "review_content": [{"experience_assessment": "I have published one or two papers in this area.", "rating": "6: Weak Accept", "review_assessment:_checking_correctness_of_experiments": "I assessed the sensibility of the experiments.", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "title": "Official Blind Review #2", "review": "Summary: The following work proposes a model for long-range video interpolation -- specifically targeting cases where the intermediate content trajectories may be highly non-linear. This is referred to as goal-conditioned in the paper. They present an autoregressive sequential model, as well as a hierarchical model -- each based on a probabilistic framework. Finally, they demonstrate an application in imitation learning by introducing an additional model that maps pairs of observations (frames) to a distribution over actions, predicting how likely each action is to map the first observation to the second. Their imitation learning method is able to successfully solve mazes, given just the start and goal observations.\n\nStrengths:\n-The extension to visual planning/imitation learning was very interesting\n-Explores differences between sequential and hierarchical prediction models\n\nWeaknesses/questions/suggestions:\n-In addition to SSIM and PSNR, one might also want to consider FVD and LPIPS, both of which should correlate better with human perception.\n-How does the inverse model $p(a | o, o')$ account for the case in which multiple actions may eventually result in o -> o', given that o' is sufficiently far from o? Does the random controller need to be implemented in a specific way to handle this?\n-I think a fairly important unstated limitation is that latent-variable based methods tend not to generalize well outside of their trained domain.
In Table 1, I assume DVF was taken off-the-shelf, but all other methods were trained specifically on H3.6M?\n\n\nLPIPS: https://github.com/richzhang/PerceptualSimilarity\nFVD: https://github.com/google-research/google-research/tree/master/frechet_video_distance\n\n\nOverall, I think the results seem pretty promising -- most notably the imitation learning results. I hope that the authors can address some of my concerns stated above.\n\n\n** Post Rebuttal:\nThe authors have adequately addressed my concerns regarding clarity and metrics. The current draft also better motivates the task of long-range interpolation vs. short-range interpolation. I maintain my original rating.", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory."}, {"experience_assessment": "I have published in this field for several years.", "rating": "3: Weak Reject", "review_assessment:_thoroughness_in_paper_reading": "N/A", "review_assessment:_checking_correctness_of_experiments": "I carefully checked the experiments.", "title": "Official Blind Review #3", "review_assessment:_checking_correctness_of_derivations_and_theory": "N/A", "review": "This paper reformulates the video prediction problem by conditioning the prediction on the start and end (goal) frames. This essentially changes the problem from extrapolation to interpolation, which results in higher-quality predictions.\n\nThe motivation behind the paper is not clear. First of all, previous work in video prediction is typically formulated as \"conditioned frame prediction\", where the prediction of the next frame is conditioned on \"a set of context frames\", and there is no reason why this set cannot contain the goal frame. Their implementation, however, is motivated by their application, and therefore these models are usually only conditioned on the start frames. Unfortunately, besides the reverse planning in imitation learning, the authors did not provide a suite of applications where such a model can be useful. Hence, I think the authors should answer these two questions to clear up the motivation:\n1. Why is conditioning on the goal frame interesting? It would specifically help to provide more concrete details than getting from Oakland to San Francisco.\n2. Where do the current conditional models suffer by conditioning on the goal image?\n\nMore experiments are required to support the claims of the paper as well.\nGiven my point regarding context frames, a fairer experiment would be to compare the proposed method against these models when they are conditioned on the goal frame as well. This has been explicitly avoided in 5.1.\nThe metrics used are not good evaluation metrics for frame prediction, as they do not give us an objective evaluation of the semantic quality of the predicted frames. The authors should present additional quantitative evaluations to show that the predicted frames contain useful semantic information. FVD and the Inception score come to mind as good candidates.\n\nOn the quality of writing, the paper is well written, but it could use a figure that demonstrates the proposed architecture. The authors provided the code, which is always a plus.\n\nIn conclusion, I believe the impact of the paper, in its current form, is marginal at best and certainly does not meet the requirements for a prestigious conference such as ICLR. However, a clearer motivation, a concrete set of goals and claims, as well as more comprehensive experiments, could push the quality above the bar.
"}, {"experience_assessment": "I have published one or two papers in this area.", "rating": "6: Weak Accept", "review_assessment:_thoroughness_in_paper_reading": "I read the paper thoroughly.", "review_assessment:_checking_correctness_of_experiments": "I carefully checked the experiments.", "title": "Official Blind Review #1", "review_assessment:_checking_correctness_of_derivations_and_theory": "N/A", "review": "\nREFERENCES ARE LISTED AT THE END OF THE REVIEW\n\n\nSummary:\nThis paper proposes a method for video prediction that, given a starting and ending image, is able to generate the frame trajectory in between. They propose two variations of their method: A sequential and a tree based methods. The tree-based method enables efficient frame sampling in a hierarchical way. In experiments, they outperform the used baselines in the task of video prediction. Additionally, they used the learned pixel dynamics model and an inverse dynamics model to plan actions for an agent to navigate from a starting frame to an ending frame.\n\n\nPros:\n+ Novel latent method for goal conditioned prediction (sequential and hierarchical)\n+ Really cool experiments on navigation using the predicted frames\n+ Outperforms used baselines\n\nWeaknesses / comments:\n- Missing baseline:\nThe Human 3.6M experiments are missing the baseline from Wichers et al., 2018. I would be good to compare against them for better assessment of the predicted videos.\n\n- Bottleneck discovery experiments (Figure 8):\nThe visualizations shown in Figure 8 are very interesting, however, I would like to see if the model is able to generate multiple trajectories from the same frame. It looks like the starting frames (left) are not the same.\n\n\nConclusion:\nThis paper proposes a novel latent variable method for goal oriented video prediction which is then used to enable an agent to go from point A to point B. I feel this paper brings nice insights useful for the model based reinforcement learning literature where the end goal can be guided by an image rather than predefined rewards. It would be good if the authors can include the suggested video prediction baseline from Wichers et al., 2018 in their quantitative comparisons.\n\n\nReferences:\nNevan Wichers, Ruben Villegas, Dumitru Erhan, Honglak Lee. Hierarchical Long-term Video Prediction without Supervision. In ICML, 2018\n"}], "comment_id": ["SygAcbAjiS", "HkgEsgRooS", "Skxd4xAojH", "HygiHzCjiB"], "comment_cdate": [1573802389952, 1573802140513, 1573802032152, 1573802563192], "comment_tcdate": [1573802389952, 1573802140513, 1573802032152, 1573802563192], "comment_tmdate": [1573854714677, 1573854593395, 1573854400385, 1573802563192], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2020/Conference/Paper2465/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper2465/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper2465/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper2465/Authors", "ICLR.cc/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Added FVD/LPIPS, clarified motivation", "comment": "We thank the reviewer for the comments on the motivation and suggesting additional experiments. 
As suggested, we made the following changes:\n- In Tab. 4, we evaluated the compared models on FVD and LPIPS, two perceptual visual quality metrics, showing that both goal-conditioned prediction models outperform all baselines across all datasets.\n- We improved the presentation of our motivation and the introductory figure.\n\nWe answer the questions in detail below. Please let us know if this addresses your concerns, or if you would like us to discuss this further or add additional evaluations!\n\n== 1. Why is conditioning on the goal frame interesting? ==\nA: We thank the reviewer for bringing up the important point of motivation. We revised the introductory figure to more clearly reflect our motivation, and we next provide detailed application examples for goal-conditioned prediction (GCP) that expand on the motivation in our introduction. We will integrate these arguments in the final version of the paper.\n- When controlling an agent, the goal state is often known in practice, and utilizing it for prediction should allow constructing better plans. Building such goal-conditioned agents with model-free techniques is an active area of research [1, 2, 3, 4, 5, 6, 7]. We are hopeful that building better goal-conditioned predictors will enable the use of data-efficient model-based techniques for such problems.\n- More generally, in many natural settings the goal of a certain process is known and we want to leverage it for video generation. An example application of GCP is a tool that allows one to edit or create a video. To modify a video, a human graphics designer might simply want to change a few seconds of video, and GCP can generate the interpolations to smoothly embed the frames in the video. This problem is distinct from open-ended forward prediction, as the video is constrained by the desired final frame.\n- Finally, we argue in the introduction that unconstrained prediction without a goal is often very challenging, as uncertainty increases dramatically for long time horizons. Conditioning on the goal reduces the uncertainty and makes long-horizon video prediction beyond the lengths considered by prior work tractable, as our paper shows.\n\n== 2. Where do the current conditional models suffer by conditioning on the goal image? ==\nWe find that long-horizon goal-conditioned prediction requires an expressive model that can handle the stochasticity of long sequences. The two goal-conditioned prediction methods we compare to, DVF and CIGAN, are unable to handle the complexity of such prediction, as they are designed for rather short sequences. This motivated our sequential latent variable approach. We note that certain prior work, e.g., Denton&Fergus\u201918 and Lee\u201918, used sequential latent variable models for forward prediction, and therefore one version of our proposed method, GCP-sequential, can be considered the goal-conditioned extension of this prior work. We clarified this in the manuscript.\n\n[1] Kaelbling, Leslie Pack. \"Learning to achieve goals.\" IJCAI. 1993.\n[2] Schaul, Tom, et al. \"Universal value function approximators.\" International Conference on Machine Learning. 2015.\n[3] Andrychowicz, Marcin, et al. \"Hindsight experience replay.\" Advances in Neural Information Processing Systems. 2017.\n[4] Pong, Vitchyr, et al. \"Temporal difference models: Model-free deep RL for model-based control.\" ICLR. 2018.\n[5] Nair, Ashvin V., et al. \"Visual reinforcement learning with imagined goals.\" Advances in Neural Information Processing Systems. 2018.\n[6] Fu, Justin, et al.
\"Variational inverse control with events: A general framework for data-driven reward definition.\" Advances in Neural Information Processing Systems. 2018.\n[7] Warde-Farley, David, et al. \"Unsupervised control through non-parametric discriminative rewards.\" ICLR. 2019.\n"}, {"title": "Added FVD/LPIPS evaluation, additional clarifications", "comment": "We thank the reviewer for the helpful comments and suggestions. We made the following changes to the submission to address the reviewers remarks and answer the posed questions:\n\n== FVD+LPIPS metrics ==\nWe added evaluation results with both metrics for all four datasets to Tab.4 in the appendix. We find that both proposed models for goal-conditioned prediction outperform video interpolation baselines as well as non-goal-conditioned prediction. \n\n== Stochastic inverse model ==\nIndeed it is possible that multiple different action sequences lead from a start state o to a goal state o\u2019 and prior work addressed this problem by conditioning the inverse model on a stochastic latent variable to explicitly model the uncertainty over the action trajectory [1]. However, in our experiments we did not find to be an issue, because there are typically only 1-3 time steps between the current state and the next predicted target of the inverse model. This is because GCP is able to predict a dense plan for the inverse model to follow. We note that the proposed method is general and can be used with stochastic inverse models.\n\n== DVF Off-the-shelf ==\nWe want to point out that *all methods* were trained from scratch on the respective domain they were tested on, i.e. we re-trained the DVF model that we used to report numbers on H3.6M using the H3.6M training set. This is to allow fair comparison to the GCP models that were trained on the same data. We did not use the off-the-shelf DVF network. We thank the reviewer for pointing out this possible confusion and we added a footnote to the revised manuscript clarifying that all models were trained from scratch.\n\nWe again thank the reviewer for the helpful suggestions that improved the quality of the submission. Please let us know if there are any further questions! \n\n[1] Learning Latent Plans from Play, Lynch et al., 2019\n"}, {"title": "Added Wichers'18 Comparison, added additional qualitative visualizations, updated bottleneck results", "comment": "We thank the reviewer for the helpful comments and suggestions. To address the reviewers remarks we made the following improvements to the paper. \n\n== Wichers\u201918 ==\nAs suggested by the reviewer, we trained Wichers\u201918 and report video prediction metrics in Tab. 1. We observe that this method struggles in our experimental setup, likely because deterministic prediction given only one conditioning frame is challenging, especially in stochastic environments. We have made an attempt at extending this baseline to the goal-conditioned setting. However, in our preliminary experiments we were not able to improve the performance over the original version. We also note that we were not able to run Wichers\u201918 on datasets longer than 100 frames due to computational requirements.\n\n== Multiple sampled sequences == \nWe added a visualization of multiple sequences sampled given the same start-goal frames to the appendix, Figure 10 for the Human 3.6 dataset and Figure 11 for the 2D maze dataset. 
We note that the original supplementary website contained examples of multiple sampled sequences for every dataset.\n\n== Bottleneck discovery ==\nTo further investigate the bottleneck discovery phenomenon, we performed an experiment on the Pick&Place data and observed that the model reliably discovers bottlenecks in these data as well. The generations are now shown in Fig. 8.\n\nWe again thank the reviewer for the helpful suggestions that improved the quality of the submission. Please let us know if there are any further questions!\n"}, {"title": "Author Response: added Wichers'18 comparison, added FVD/LPIPS evaluation, updated bottleneck results, added clarifications", "comment": "We thank all reviewers for the helpful comments and suggestions. To address them, we made the following changes to the manuscript:\n(1) We added a comparison to the video prediction model of Wichers\u201918 to Tab. 1, showing that both our models, GCP-sequential and GCP-tree, outperform the added baseline on multiple datasets.\n(2) We added an evaluation with the perceptual metrics FVD and LPIPS to Tab. 4, in addition to the reported standard video prediction metrics PSNR/SSIM. We show that both proposed goal-conditioned prediction models outperform all baselines on the added metrics across the four tested datasets.\n(3) We extended the analysis of bottleneck discovery for hierarchical GCP to the Pick&Place dataset and find that the model is able to discover bottleneck states in the top nodes of the predicted hierarchy.\n(4) We added clarifications to multiple sections of the manuscript addressing questions the reviewers raised. We updated Fig. 1 to better visualize the motivation of the approach. We updated Fig. 8 with the added bottleneck results and added Tab. 4 to the appendix to include the added comparisons and metrics. Further, we added an architecture figure to the appendix.\n"}], "comment_replyto": ["rJxnKiMaKB", "HyedaO2pKH", "HklBPG5AFS", "B1g79grKPr"], "comment_url": ["https://openreview.net/forum?id=B1g79grKPr&noteId=SygAcbAjiS", "https://openreview.net/forum?id=B1g79grKPr&noteId=HkgEsgRooS", "https://openreview.net/forum?id=B1g79grKPr&noteId=Skxd4xAojH", "https://openreview.net/forum?id=B1g79grKPr&noteId=HygiHzCjiB"], "meta_review_cdate": 1576798749682, "meta_review_tcdate": 1576798749682, "meta_review_tmdate": 1576800886209, "meta_review_ddate ": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "The paper addresses a video generation setting where both the initial and goal states are provided as a basis for long-term prediction. The authors propose two types of models, sequential and hierarchical, and obtain interesting insights into the performance of these two models. Reviewers raised concerns about evaluation metrics, empirical comparisons, and the relationship of the proposed model to prior work.\n\nWhile many of the initial concerns have been addressed by the authors, reviewers remain concerned about two issues in particular. First, the proposed model is similar to previous approaches with sequential latent variable models, and it is unclear how such existing models would compare if applied in this setting. Second, there are remaining concerns about whether the model may learn degenerate solutions.
I quote from the discussion here, as I am not sure this will be visible to authors [about Figure 12]: \"now the two examples with two samples they show have the same door in the middle frame which makes me doubt the method learn[s] anything meaningful in terms of the agent walking through the door but just go to the middle of the screen every time.\"", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1g79grKPr&noteId=QBNt8I0zAs"], "decision": "Reject"}