{"forum": "B1epooR5FX", "submission_url": "https://openreview.net/forum?id=B1epooR5FX", "submission_content": {"title": "Predicted Variables in Programming", "abstract": "We present Predicted Variables, an approach to making machine learning (ML) a first class citizen in programming languages.\nThere is a growing divide in approaches to building systems: using human experts (e.g. programming) on the one hand, and using behavior learned from data (e.g. ML) on the other hand. PVars aim to make using ML in programming easier by hybridizing the two. We leverage the existing concept of variables and create a new type, a predicted variable. PVars are akin to native variables with one important distinction: PVars determine their value using ML when evaluated. We describe PVars and their interface, how they can be used in programming, and demonstrate the feasibility of our approach on three algorithmic problems: binary search, QuickSort, and caches.\nWe show experimentally that PVars are able to improve over the commonly used heuristics and lead to a better performance than the original algorithms.\nAs opposed to previous work applying ML to algorithmic problems, PVars have the advantage that they can be used within the existing frameworks and do not require the existing domain knowledge to be replaced. PVars allow for a seamless integration of ML into existing systems and algorithms.\nOur PVars implementation currently relies on standard Reinforcement Learning (RL) methods. To learn faster, PVars use the heuristic function, which they are replacing, as an initial function. We show that PVars quickly pick up the behavior of the initial function and then improve performance beyond that without ever performing substantially worse -- allowing for a safe deployment in critical applications.", "keywords": ["predicted variables", "machine learning", "programming", "computing systems", "reinforcement learning"], "authorids": ["victor.carbune@gmail.com", "thierryc@google.com", "shurick@google.com", "deselaers@google.com", "nikhilsarda@google.com", "jyagnik@google.com"], "authors": ["Victor Carbune", "Thierry Coppey", "Alexander Daryin", "Thomas Deselaers", "Nikhil Sarda", "Jay Yagnik"], "TL;DR": "We present Predicted Variables, an approach to making machine learning a first class citizen in programming languages.", "pdf": "/pdf/5b01bfae84014a02afcf507715e6de9bbf62a58a.pdf", "paperhash": "carbune|predicted_variables_in_programming", "_bibtex": "@misc{\ncarbune2019predicted,\ntitle={Predicted Variables in Programming},\nauthor={Victor Carbune and Thierry Coppey and Alexander Daryin and Thomas Deselaers and Nikhil Sarda and Jay Yagnik},\nyear={2019},\nurl={https://openreview.net/forum?id=B1epooR5FX},\n}"}, "submission_cdate": 1538087845224, "submission_tcdate": 1538087845224, "submission_tmdate": 1545355437500, "submission_ddate": null, "review_id": ["SylpG0Y52Q", "Ske788m937", "H1xn7AL8h7"], "review_url": ["https://openreview.net/forum?id=B1epooR5FX&noteId=SylpG0Y52Q", "https://openreview.net/forum?id=B1epooR5FX&noteId=Ske788m937", "https://openreview.net/forum?id=B1epooR5FX&noteId=H1xn7AL8h7"], "review_cdate": [1541213717143, 1541187146557, 1540939299983], "review_tcdate": [1541213717143, 1541187146557, 1540939299983], "review_tmdate": [1544538767446, 1541533794180, 1541533793975], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 
0}, {"replyCount": 0}], "review_replyto": ["B1epooR5FX", "B1epooR5FX", "B1epooR5FX"], "review_content": [{"title": "Potentially interesting idea, not well explained and justified", "review": "This paper proposes using predicted variables(PVars) - variables that learn\ntheir values through reinforcement learning (using observed values and\nrewards provided explicitly by the programmer). PVars are meant to replace\nvariables that are computed using heuristics.\n\nPros:\n* Interesting/intriguing idea\n* Applicability discussed through 3 different examples\n\nCons:\n* Gaps in explanation\n* Exaggerated claims\n* Problems inherent to the proposed technique are not properly addressed, brushed off as if unimportant\n\nThe idea of PVars is potentially interesting and worth exploring; that\nbeing said, the paper in its current form is not ready for\npublication.\n\nSome criticism/suggestions for improvement:\n\nWhile the idea may be appealing and worth studying, the paper does not address several problems inherent to the technique, such as:\n\n- overheads (computational cost for inference, not only in\n prediction/inference time but also all resources necessary to run\n the RL algorithm; what is the memory footprint of running the RL?)\n\n- reproducibility\n\n- programming overhead: I personally do not buy that this technique -\n at least as presented in this paper - is as easy as \"if statements\"\n (as stated in the paper) or will help ML become mainstream in\n programming. I think the programmer needs to understand the\n underpinnings of the PVars to be able to meaningfully provide\n observations and rewards, in addition to the domain specific\n knowledge. In fact, as the paper describes, there is a strong\n interplay between the problem setting/domain and how the rewards should be\n designed.\n\n- applicability: when and where such a technique makes sense\n\nThe interface for PVars is not entirely clear, in particular the\nmeaning of \"observations\" and \"rewards\" do not come natural to\nprogrammers unless they are exposed to a RL setting. Section 2 could\nprovide more details such that it would read as a tutorial on\nPVars. If regular programmers read that section, not sure they\nunderstand right away how to use PVars. The intent behind PVars\nbecomes clearer throughout the examples that follow.\n\nIt was not always clear when PVars use the \"initialization function\"\nas a backup solution. In fact, not sure \"initialization\" is the right\nterm, it behaves almost like an \"alternative\" prediction/safety net.\n\nThe examples would benefit from showing the initialization of the PVars.\n\nThe paper would improve if the claims would be toned down, the\nlimitations properly addressed and discussed and the implications of\nthe technique honestly described. I also think discussing the\napplicability of the technique beyond the 3 examples presented needs\nto be conveyed, specially given the \"performance\" of the technique\n(several episodes are needed to achieve good performance).\n\nWhile not equivalent, I think papers from approximate computing (and\nperhaps even probabilistic programming) could be cited in the related\nwork. 
In fact, for an example of how \"non-mainstream\" ideas can be\nproposed for programming languages (and explained in a scientific\npublication), see the work of Adrian Sampson on approximate computing\nhttps://www.cs.cornell.edu/~asampson/research.html\nIn particular, the EnerJ paper (PLDI 2011) and Probabilistic Assertions (PLDI 2014).\n\nUpdate: I maintain my scores after the rebuttal discussion.", "rating": "5: Marginally below acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Interesting proposal without clear contributions", "review": "This paper proposes the use of RL as a set of commands to be included as programming instructions in common programming languages. In this aspect, the authors propose to add simple instructions to employ the power of machine learning in general, and reinforcement learning in particular, in common programming tasks.\n\nThe authors then show with three different examples how the use of RL can speed up the performance of common tasks: binary search, sorting and caches.\n\nThe paper is easy to read and follow. \n\nIn my opinion, the main problem of the paper is that the contributions are not clear. The authors claim that they introduce a new hybrid approach between common programming and ML; however, I do not see many differences between calling APIs and the current proposal. The paper seems to be a wrapper of API calls. Here, the authors should comment on existing approaches based on ML and APIs.\n\nThe authors introduce the examples to show the advantages of using predicted variables. Many of the advantages are based on increasing the performance of the algorithms using these predicted variables; however, the results do not include the computational costs of learning the models. \n\nTherefore, in my opinion the paper should be more focused on detailing the commands of use of predicted variables and emphasising the advantages with respect to existing methods. Currently, the paper gives too much relevance to the performance of the experiments, which is not where the novel contributions lie.", "rating": "5: Marginally below acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Interesting idea but replaces constants with other constants ", "review": "The paper proposes to include, within regular programs, learned parameters that are then tuned in an online manner whenever the program is invoked. Thus learning is continuous and integration with the ML backend is seamless. The idea is very interesting; however, it seems to me that while we can replace native variables with learned parameters, the hyperparameters involved in the learning become new native variables (e.g. the value of feedback). Perhaps with some effort we can replace the hyperparameters with predicted variables too. Other concerns of mine stem from the programmer in me. I think of a program as something deterministic and predictable. With continuous online self-tuning, these properties are gone. How do the authors propose to assuage folks with my kind of mindset? Is debugging programs with predicted variables an issue? Consider a situation where the program showed some behavior with a certain setting of q which has since been tuned to another value, and thus the same behavior doesn't show up. I find these to be very interesting questions but don't see much of a discussion in the current draft. 
Also, how does this work relate to probabilistic programming?", "rating": "7: Good paper, accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["BkguooveAX", "Byl2ksvgAm", "BJe9P9PeAQ"], "comment_cdate": [1542646688464, 1542646500043, 1542646369533], "comment_tcdate": [1542646688464, 1542646500043, 1542646369533], "comment_tmdate": [1542646688464, 1542646500043, 1542646369533], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2019/Conference/Paper664/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper664/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper664/Authors", "ICLR.cc/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Addressed some of the questions through comments and updated submission.", "comment": "We thank the reviewer for the comments and questions brought up related to our proposed interface.\n\n[hyperparameters become new variables]\nWe agree that hyperparameters introduce an additional search space, but we consider that navigating this space is sometimes simpler than navigating the space of complex heuristic functions built to improve a specific problem, which is the alternative when machine learning cannot be used through an interface such as PVars. \n\n[debugging programs with predicted variables]\nAs with debugging any complex ML model, predicted variables will likely add additional challenges to debugging. However, because of their natural integration into the programming language, debugging the logic around the predicted variable should not be affected, and inspecting the values coming from a predicted variable in a debugger will be as simple as inspecting a regular variable. \n\n[relation to probabilistic programming]\nProbabilistic programming is a line of work similar to ours but focused on a specific class of models. The interface introduced by the probabilistic programming line of work directly exposes methods required for operating with that class of models, e.g. graphical models, whereas PVars leave that as an implementation detail.\nWe added related work from the probabilistic programming literature to our paper. \n\n[interesting questions that aren't discussed much in the current draft]\nWe have updated our draft to highlight our position on some of your questions. \nWe consider this work on predicted variables a first step into an interesting field of research and we hope to address more of these questions in future work. \n"}, {"title": "Clarified contributions in the paper ", "comment": "We thank the reviewer for their insightful comments and the very relevant question about clarifying our contributions. We have tried to clarify and itemize our contributions (see page 2).\n\n[no operational cost given] \nThe main focus of the paper is not to improve specific algorithms but to demonstrate that such improvements are easily achievable, and to illustrate this claim with simple, well-known example algorithms.\n\nWe did not provide an analysis of the computational overhead of our method because we see the three algorithmic problems as tasks to demonstrate that the interface that we provide is expressive and powerful enough to bring ML into normal software development. 
In many other applications where predicted variables can be applied, speed is not a relevant metric, e.g. user modelling, optimizing UI components, predicting user preferences, systems optimization, or content recommendations. We acknowledge that our current implementation is probably slower than the original variant - but as we describe above, we don't consider actual runtime to be the relevant metric here.\nFurther - we strongly believe that specialized hardware such as GPUs or TPUs is continuously improving the runtime of ML models, which will eventually make our proposed implementation practical even for speed-sensitive applications (compare also Kraska et al., 2017).\n\n[\"commands of use\"] \nWe do agree with R2 that the main contribution of this paper is in the novel API that we propose. As we describe in the paper, the experiments are performed to demonstrate that such an API is actually feasible and to indicate how well the state of the art in machine learning supports such an API at this point. \nThe experiments performed serve as examples of how to apply predicted variables and to demonstrate that they are a viable solution to enable software developers to add ML models into their regular development workflow at a low engineering cost. \nArguably, the current state of machine learning does not yet make \"ML as easy as if statements\", which is why we removed that claim from our paper."}, {"title": "Added reproducibility data and incorporated feedback in paper", "comment": "We thank the reviewer for relevant and insightful comments. We provide responses and, when applicable, pointers to the changes we\u2019ve made in the paper aiming to address some of the problems related to the technique we introduced.\n\n- computation overhead\nWe did not provide an analysis of the computational overhead of our method because we see the three algorithmic problems as tasks to demonstrate that the interface that we provide is expressive and powerful enough to bring ML into normal software development. In many other applications where predicted variables can be applied, speed is not a relevant metric, e.g. user modelling, optimizing UI components, predicting user preferences, systems optimization, or content recommendations. We acknowledge that our current implementation is probably slower than the original variant - but as we describe above, we don't consider actual runtime to be the relevant metric here.\nFurther - we strongly believe that specialized hardware such as GPUs or TPUs is continuously improving the runtime of ML models, which will eventually make our proposed implementation practical even for speed-sensitive applications (compare also Kraska et al., 2017).\n\n- reproducibility\nWe acknowledge that the paper did not provide sufficient data related to reproducibility, and we now present additional reproducibility experiments in the appendix. As with other RL work, there are some problems with reproducibility. However, for binary search we obtain positive results (negative cumulative regret) with a reproducibility of 85% (Quicksort: 94%).\n\n- applicability\nWe assume throughout our work that the developer -- algorithm and problem expert -- has domain-specific knowledge that is relevant for the problem being solved. Therefore our interface enables the developer to make use of their expert knowledge without requiring deep machine learning expertise. 
The developer decides which contextual signals are most important and which metric to optimize for - the API naturally translates these into observations and rewards for the RL methods applied.\n\n- initial function\nWe thank the reviewer for pointing out the lack of more detailed explanations. The initial function does not only serve for initialization; it plays two other important roles: \n(1) it generates safe experience trajectories from which the off-policy RL algorithm learns, and \n(2) it can be reused as a safety net, should the model performance degrade. \nWe have updated our draft to express this more clearly.\n\n- performance/episodes\nWe are not 100% sure what the reviewer means by the comment about \"performance\" - we try to respond to this comment as well as we can.\nAs we describe in the paper, we measure cumulative regret as our main performance metric. A negative cumulative regret indicates that the user benefits from using a predicted variable compared to the baseline. While the predicted variable might initially perform a bit worse than the baseline, the goal is to outperform the baseline as quickly as possible. Note also that the use of the initial function in our setup provides a safety net in the beginning, which ensures the method never performs terribly badly.\n\n- citations, related work\nThank you for the reference; we have updated our draft to point out related work on approximate computing as well as on probabilistic programming."}], "comment_replyto": ["H1xn7AL8h7", "Ske788m937", "SylpG0Y52Q"], "comment_url": ["https://openreview.net/forum?id=B1epooR5FX&noteId=BkguooveAX", "https://openreview.net/forum?id=B1epooR5FX&noteId=Byl2ksvgAm", "https://openreview.net/forum?id=B1epooR5FX&noteId=BJe9P9PeAQ"], "meta_review_cdate": 1545094946544, "meta_review_tcdate": 1545094946544, "meta_review_tmdate": 1545354479630, "meta_review_ddate ": null, "meta_review_title": "innovative idea, contributions insufficient", "meta_review_metareview": "The paper proposes a framework at the intersection of programming and machine learning, where some variables in a program are replaced by PVars - variables whose values are learned using machine learning from data. The paper presents an API that is designed to support this scenario, as well as three case studies: binary search, quick sort, and caching - all implemented with PVars.\n\nThe reviewers and the AC agree that the paper presents a potentially valuable new idea, and shows concrete applications in the presented case studies. The authors provide example code in the paper, and present a detailed analysis of the obtained results.\n\nThe reviewers and AC also note several potential weaknesses - the AC will focus on a subset for the present discussion. The paper is unusual in that it presents a programming API rather than e.g., a thorough empirical comparison, a novel approach, or new theoretical insights. Papers at the intersection of systems and machine learning can make valuable contributions to the ICLR community, but need to provide clear contributions which are supported in the paper by empirical or theoretical results. The research contributions of the present paper are vague, even after the revision phase. The main contribution claimed is the introduction of the API, and the claim that such an API / system is feasible. This is an extremely weak claim. 
A stronger claim would be that the present approach advances the state of the art beyond an existing such framework, e.g., probabilistic programming, either conceptually or empirically. I want to particularly highlight probabilistic programming here, as it is mentioned by the authors - this is a well-developed research area, with existing approaches and widely used tools. The authors dismiss this approach in their related work section, saying that probabilistic programming is \"specialized on working with distributions\". Many would see the latter as a benefit, so the authors should clearly motivate how their approach improves over these existing methods, and how it would enable novel uses or otherwise provide benefits. At the current stage, the paper is not ready for publication.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper664/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1epooR5FX&noteId=ryxo7v6rg4"], "decision": "Reject"}
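
The reviews and responses above repeatedly reference the PVars interface: a predicted variable receives observations, returns a predicted value when evaluated, accepts numerical feedback (reward), and falls back on the heuristic it replaces (the initial function) as a safety net. The paper contains the authoritative code; the sketch below is only a minimal illustration of how such an interface could look for the binary search example. All names (PredictedVariable, observe, predict, feedback, initial_fn) are hypothetical, and the backend here is a stub that always applies the initial function where a real implementation would consult an RL model trained on the observation/reward stream.

```python
# Hypothetical sketch of the PVars interface discussed above; not the
# authors' actual API.

class PredictedVariable:
    """Stub backend: always applies the initial (heuristic) function.
    A real backend would blend in an RL policy learned from the
    observe/feedback stream."""

    def __init__(self, initial_fn):
        self.initial_fn = initial_fn  # the heuristic being replaced
        self.last_obs = None

    def observe(self, obs):
        self.last_obs = obs  # developer-supplied, domain-specific context

    def predict(self):
        return self.initial_fn(self.last_obs)  # stub: heuristic only

    def feedback(self, reward):
        pass  # a real backend would record (obs, prediction, reward)


def binary_search(arr, target):
    # The predicted variable replaces the midpoint heuristic; the initial
    # function is plain bisection (always pick the middle, i.e. 0.5).
    q = PredictedVariable(initial_fn=lambda obs: 0.5)
    left, right = 0, len(arr) - 1
    while left <= right:
        q.observe({"target": target, "low": arr[left], "high": arr[right]})
        mid = left + int(q.predict() * (right - left))  # prediction in [0, 1]
        if arr[mid] == target:
            q.feedback(1.0)   # success: positive reward
            return mid
        q.feedback(-1.0)      # penalize every extra iteration
        if arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1


print(binary_search([1, 3, 5, 7, 9], 7))  # -> 3
```

With the stub, this behaves exactly like classic binary search; the point of the interface is that a learned backend could shift the predicted fraction toward the target (e.g. by interpolating between the observed low/high values) without any change to the surrounding program logic, which is the low-engineering-cost integration the authors claim.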