{"forum": "B1MbDj0ctQ", "submission_url": "https://openreview.net/forum?id=B1MbDj0ctQ", "submission_content": {"title": "Switching Linear Dynamics for Variational Bayes Filtering", "abstract": "System identification of complex and nonlinear systems is a central problem for model predictive control and model-based reinforcement learning. Despite their complexity, such systems can often be approximated well by a set of linear dynamical systems if broken into appropriate subsequences. This mechanism not only helps us find good approximations of dynamics, but also gives us deeper insight into the underlying system. Leveraging Bayesian inference and Variational Autoencoders, we show how to learn a richer and more meaningful state space, e.g. encoding joint constraints and collisions with walls in a maze, from partial and high-dimensional observations. This representation translates into a gain of accuracy of the learned dynamics which we showcase on various simulated tasks.", "keywords": ["sequence model", "switching linear dynamical systems", "variational bayes", "filter", "variational inference", "stochastic recurrent neural network"], "authorids": ["philip.becker-ehmck@volkswagen.de", "peters@ias.tu-darmstadt.de", "smagt@volkswagen.de"], "authors": ["Philip Becker-Ehmck", "Jan Peters", "Patrick van der Smagt"], "TL;DR": "A recurrent variational autoencoder with a latent transition function modeled by switching linear dynamical systems.", "pdf": "/pdf/6f302e0241598e27d16dd94b6bcbf1dfaa7b750e.pdf", "paperhash": "beckerehmck|switching_linear_dynamics_for_variational_bayes_filtering", "_bibtex": "@misc{\nbecker-ehmck2019switching,\ntitle={Switching Linear Dynamics for Variational Bayes Filtering},\nauthor={Philip Becker-Ehmck and Jan Peters and Patrick van der Smagt},\nyear={2019},\nurl={https://openreview.net/forum?id=B1MbDj0ctQ},\n}"}, "submission_cdate": 1538087769188, "submission_tcdate": 1538087769188, "submission_tmdate": 1545355384395, "submission_ddate": null, "review_id": ["H1ljKxccnX", "ryla3ZvP3X", "S1eYvPS0om"], "review_url": ["https://openreview.net/forum?id=B1MbDj0ctQ¬eId=H1ljKxccnX", "https://openreview.net/forum?id=B1MbDj0ctQ¬eId=ryla3ZvP3X", "https://openreview.net/forum?id=B1MbDj0ctQ¬eId=S1eYvPS0om"], "review_cdate": [1541214338868, 1541005749285, 1540409185144], "review_tcdate": [1541214338868, 1541005749285, 1540409185144], "review_tmdate": [1543272135950, 1541534166854, 1541534166498], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1MbDj0ctQ", "B1MbDj0ctQ", "B1MbDj0ctQ"], "review_content": [{"title": "Interesting ideas, but more justifications and comparisons necessary", "review": "Thank you for the detailed reply and for updating the draft \n\nThe authors have added in a sentence about the SLDS-VAE from Johnson et al and I agree that reproducing their results from the open source code is difficult. I think my concerns about similarities have been sufficiently addressed.\n\nMy main concerns about the paper still stem from the complexity of the inference procedure. Although the inference section is still a bit dense, I think the restructuring helped quite a bit. I am changing my score to a 6 to reflect the authors' efforts to improve the clarity of the paper. 
The discussion in the comments has been helpful in better understanding the paper, but there is still room for improvement in the paper itself.\n=============\n\nSummary: The authors present an SLDS + neural network observation model for the purpose of fitting complex dynamical systems. They introduce an RNN-based inference procedure and evaluate how well this model fits various systems. (I\u2019ll refer to the paper as SLDVBF for the rest of the review.)\n\nWriting: The paper is well-written and explains its ideas clearly.\n\nMajor Comments:\nThere are many similarities between SLDVBF and the SLDS-VAE model in Johnson et al. [1] and I think the authors need to address them, or at least properly compare the models and justify their choices:\n\n- The first is that the proposed latent SLDS generative models are very similar: both papers connect an SLDS with a neural network observation model. Johnson et al. [1] present a slightly simpler SLDS (with no edges from z_t -> s_{t+1} or s_t -> x_t) whereas SLDVBF uses the \u201caugmented SLDS\u201d from Barber et al. It is unclear what exactly z_t -> s_{t+1} is in the SLDVBF model, as there is no stated form for p(s_t | s_{t-1}, z_{t-1}).\n\n- When performing inference, Johnson et al. use a recognition network that outputs potentials used for Kalman filtering for z_t and then do conjugate message passing for s_t. I see this as a simpler alternative to the inference algorithm proposed in SLDVBF. SLDVBF proposes relaxing the discrete random variables using Concrete distributions and using LSTMs to output potentials used in computing variational posteriors. There are a few additional tricks used, such as having these networks output parameters that gate potentials from other sources. The authors state that this strategy allows the reconstruction signal to backpropagate through transitions, but Johnson et al. accomplish this (in theory) by backpropagating through the message-passing fixed-point iteration itself. I think the authors need to better motivate the use of RNNs over the message-passing ideas presented in Johnson et al.\n\n- Although SLDVBF provides more experiments evaluating the SLDS than Johnson, there is an overlap. Johnson et al. successfully simulate dynamics in a toy image-based ball-bouncing task (in 1d, not 2d). I find that the results from SLDVBF, on their own, are not quite convincing enough to distinguish their methods from those of Johnson et al., and a direct comparison is necessary.\n\nDespite these similarities, I think this paper is a step in the right direction, though it needs to do far more to differentiate it from Johnson et al. The paper draws on many ideas from recent literature for inference, and incorporating these ideas is a good start.\n\nMinor Comments:\n\n- Structurally, I found it odd that the authors present the inference algorithm before fully defining the generative model. I think it would be clearer if the authors provided a clear description of the model before describing variational approximations and inference strategies.\n- The authors do not justify setting $\\beta = 0.1$ when training the model. Is there a particular reason you need to downweight the KL term as opposed to annealing it?\n\n[1] Johnson, Matthew, et al. \"Composing graphical models with neural networks for structured representations and fast inference.\" Advances in neural information processing systems. 
2016.", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "The proposed approach is not clearly presented.", "review": "This paper proposes a new model for switching linear dynamical systems. The standard model and the proposed model are presented. Together with the inference procedure associated to the new model. This inference procedure is based on variational auto-encoders, which model the transition and measurement posterior distributions, which is exactly the methodological contribution of the manuscript. Experiments on three different tasks are reported, and qualitative and quantitative results (comparing with different state-of-the-art methods) are reported.\n\nThe standard model is very well described, formally and graphically, except for the dynamic model of the switching variable, and its dependence on z_t-1. The proposed model has a clear graphical representation, but its formal counterpart is a bit more difficult to grasp, we need to reach 4.2 (after the inference procedure is discussed) to understand the main difference (the switching variable does not influence the observation model). Still, the dependency of the dynamics of s_t on z_t is not discussed.\n\nIn my opinion, another issue is the discussion of the variational inference procedure, mainly because it is unclear what additional assumptions are made. This is because the procedure does not seem to derive from the a posteriori distribution (at least it is not presented like this). Sometimes we do not know if the authors are assuming further hypothesis or if there are typos in the equations. \n\nFor instance (7) is quite problematic. Indeed, the starting point of (7) is the approximation of the a posteriori distribution q_phi(z_t|z_t-1,x_1:T,u_1:T), that is split into two parts, a transition model and an inverse measurement model. First, this split is neither well motivated nor justified: does it come from smartly using the Bayes and other probability rules? In particular, I do not understand how come, given that q_phi is not conditioned on s_t, the past measurements and control inputs can be discarded. Second, do the authors impose that this a posteriori probability is a Gaussian? Third, the variable s_t seems to be in and out at the authors discretion, which is not correct from a mathematical point of view, and critical since the interesting part of the model is exactly the existence of a switching variable and its relationship with the other latent/observed variables. Finally, if the posterior q_phi is conditioned to s_t (and I am sure it must), then the measurement model also has to be conditioned on s_t, which poses perhaps another inference problem.\n\nEquation (10) has the same problem, in the sense that we do not understand where does it derive from, why is the chosen split justified and why the convex sum of the two distributions is the appropriate way to merge the information of the inverse measurements and the transition model.\n\nAnother difficulty is found in the generative model, when it is announced that the model uses M base matrices (but there are S possibilities for the switching variable). s_t(i) is not defined and the transition model for the switching variable is not defined. This part is difficult to understand and confusing. At the end, since we do not understand the basic assumptions of the model, it is very hard to grasp the contribution of the paper. 
In addition, the interpretation of the results is much harder, since we are missing an overall understanding of the proposed approach.\n\nThe numerical and quantitative results demonstrate the ability of the approach to outperform the state of the art (at least for the normal distribution and on the first two tasks).\n\nDue to the lack of discussion, motivation, justification and details of the proposed approach, I recommend this paper to be rejected and resubmitted once all these concerns have been addressed.", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Interesting paper showing how to use switching variables in deep probabilistic temporal models", "review": "This paper proposes a deep probabilistic model for temporal data that leverages latent variables to switch between different learned linear dynamics. The probability distributions are parameterized by deep neural networks and learning is performed end-to-end with amortized variational inference using inference networks.\n\nThere has been a lot of recent research trying to combine probabilistic models and deep learning to define powerful transition models that can be learned in an unsupervised way, to be used for model-based RL. This paper fits into this research area and presents a nice combination of several interesting ideas from related works (switching variables, structured inference networks, merging updates as in the Kalman filter). The novelty of this paper in terms of original ideas is limited; the novel part lies in the clever combination of known approaches.\n\nThe paper reads well, but I found the explanation and notation in section 4 quite confusing (although easy to improve). The authors propose a structured variational approximation, but the factorization assumptions are not clear from the notation (I had to rely on Figure 2a to fully understand them).\n- In the first line of equation 7 it seems that the variational approximation q_phi for z_t only depends on x_t, but it is actually also dependent on the future x through s_t and q_meas\n- The first line of section 4.1.1 shows that q_phi depends on x_{1:T}. However, from figure 2a it seems that it only directly depends on x_{t:T}, and that the dependence on x_{1:t-1} is modelled through the dependence on z_{t-1}.\n- Is there a missing s_t in q_trans in the first line of (7)?\n- Why do you keep the dependence on future outputs in q_meas if it is not used in the experiments and not shown in figure 2a? It only makes the notation more confusing.\n- You use f_phi to denote all the functions in 4.1.1 (with different inputs). It would be clearer to use a different letter or, for example, add numbers (e.g. f^1_\\phi)\n- Despite this being often done in VAE papers, it feels strange to me to introduce the inference model (4.1) before the generative model (4.2), as the inference model defines an approximation to the true posterior, which is derived from the generative model. One could in principle use other types of approximate inference techniques while keeping the generative model unchanged.\n\nIt is difficult for me to understand how useful the switching variables are in practice. Reading the first part of the paper, it seems that the authors will use discrete random variables, but for s_t they actually use continuous relaxations of discrete variables (the Concrete distribution) or Gaussian variables. 
As described in appendix B2 by the authors, training models with such continuous relaxations is often challenging in terms of hyper-parameter tuning. One may even wonder if it is worth the effort: could you have used instead a deterministic s_t parameterized for example as a bidirectional LSTM with softmax output? This may give equivalent results and remove a lot of complexity. Also, the fact that the Gaussian switching variables perform better in the experiments is an indication that this may be the case.\n\nTo be able to detect walls, the z variables basically need to learn to represent the position of the agent and encode the information on the position of the walls in the connection to s_t. Would you then need to train the model from scratch for any new environment?\n\nMinor comment:\n- in the softmax equation (6) there are missing brackets: lambda is in the denominator both for g and the log\n", "rating": "7: Good paper, accept", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": ["SJgmbwx9A7", "r1xjQg-l07", "H1e8dgblCm", "S1eLPJbgAX", "Bkx8EQ3BCQ"], "comment_cdate": [1543272187055, 1542619171380, 1542619246336, 1542618974466, 1542992685571], "comment_tcdate": [1543272187055, 1542619171380, 1542619246336, 1542618974466, 1542992685571], "comment_tmdate": [1543272187055, 1543257594989, 1543257588989, 1543257580166, 1542992685571], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2019/Conference/Paper239/AnonReviewer2", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper239/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper239/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper239/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper239/AnonReviewer1", "ICLR.cc/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Updated score and review", "comment": "I have read your reply and have updated my score and comments up above."}, {"title": "Thanks for the feedback", "comment": "- The first line of section 4.1.1 shows that q_phi depends on x_{1:T}. However, from figure 2a it seems that it only directly depends on x_{t:T}, and that the dependence on x_{1:t-1} is modelled through the dependence on z_{t-1}.\n\nThanks, this is indeed a bit confusing. The latter description is our chosen parameterization. We will correct this.\n\n- In the first line of equation 7 it seems that the variational approximation q_phi for z_t only depends on x_t, but it is actually also dependent on the future x through s_t and q_meas\n- Is there a missing s_t in q_trans in the first line of (7)?\n\nIndeed, there are multiple conflicting specifications in (7) which we will address. q_trans is certainly conditioned on s_t, as that is of course the entire point of s_t: influencing the transition of z.\n\n- One may even wonder if it is worth the effort: could you have used instead a deterministic s_t parameterized for example as a bidirectional LSTM with softmax output?\n\nWith regard to deterministic switching variables s_t, this is exactly the approach the Deep Variational Bayes Filter took (although not parameterized by an RNN). 
We argue and try to demonstrate that the performance gains stem from the probabilistic treatment of the switching variable, be it Concrete or Gaussian.\n\n- To be able to detect walls, the z variables basically need to learn to represent the position of the agent and encode the information on the position of the walls in the connection to s_t. Would you then need to train the model from scratch for any new environment?\n\nGiven that the model was trained on the agents' global position, retraining would certainly be required in any case. If trained on local measurements, e.g. every agent equipped with some distance sensors, one could imagine that s_t is learned based on locally encoded information only, supposing a clean split between local and global information in z_t. This, however, is mere speculation; what is shown in our experiment is mainly that a single switching variable generalizes over the entire maze, e.g. encoding all vertical walls. This is in contrast to the deterministic treatment, where we found a single switching variable covering only parts of the maze, as shown in figure 3.\n\nAs multiple reviewers have raised concerns about the chosen structure (inference treated before the generative model), we will address this part in the hope of making the paper more coherent and readily understandable."}, {"title": "Addressing some of the confusion and justifications", "comment": "We'd like to clear up at least some of the confusion caused by our paper in order to improve it.\n\n- For instance, (7) is quite problematic [...] Third, the variable s_t seems to be in and out at the authors' discretion\n\nIndeed, there are multiple conflicting specifications in (7) which we will address. q_trans is certainly conditioned on s_t.\n\n- First, this split is neither well motivated nor justified: does it come from smartly using Bayes' rule and other probability rules?\n\nThis split will become clearer when we restructure the paper to describe the generative model first. The motivation is the following. It has been noted in prior work that in this kind of recurrent VAE, getting the reconstruction gradient through the generative transition is paramount for performance. In many works, the transition is only enforced by the KL term, which is not enough. We'd like to achieve this by sharing the parameters of the transition between the generative and the inference model. Therefore, we require one part of the inference model to be identical to the generative transition model so that it can be reused. This is q_trans. The other part needs to integrate the observations and then adjust whatever the prior/transition model predicted. Together, they constitute the basic essence of a filter. Another motivation is: if our goal is to learn the transition dynamics, why not reuse them in the inference model, where they may also be useful?\n\nNow, there is a loose parallel we can draw to the update step of a Kalman filter, which is the reason why this type of integration (Gaussian fusion) between q_trans and q_meas was chosen. This could be viewed as the update step of an (extended) Kalman filter if q_meas gave us an observation in the latent space. We say this with a big asterisk, as this is not to be understood as a principled argument, but just as a way to think about it.\n\n- Finally, if the posterior q_phi is conditioned on s_t (and I am sure it must be), then the measurement model also has to be conditioned on s_t, which perhaps poses another inference problem.\n\nYou are right, it is. 
However, only q_trans is conditioned on s_t, not q_meas, which is conditioned only on observations (and possibly control inputs). Only together do they constitute our variational approximation. Inference remains unproblematic as we factorize over time and can infer z_t and s_t in an alternating fashion.\n\n- In particular, I do not understand how, given that q_phi is not conditioned on s_t, the past measurements and control inputs can be discarded.\n\nNote that q_meas alone constitutes only one part of the inference model; only in combination with q_trans does it infer the approximate posterior. We make the assumption of a Markovian state space system (z_t \indep x_1, ..., x_{t-1} | z_{t-1}), meaning all information from observations/controls up to time t will be encoded in z_t and can come through q_trans. Therefore, we do not need to condition on past observations x_{