Title,Abstract Note,Url,Publication Year,Item Type,Author,Publication Title Computer Simulations as a Technological Singularity in the Empirical Sciences,"In this paper, I discuss the conditions necessary for computer simulations to qualify as a technological singularity in the empirical sciences. A technological singularity encompasses two claims: (a) the enhancement of human cognitive capacities by the computer, and (b) their displacement from the center of the production of knowledge. For computer simulations to be a technological singularity, then, they must fulfill points (a) and (b) above. Although point (a) is relatively unproblematic, point (b) needs further analysis. In particular, in order to show that humans could be displaced from the center of the production of knowledge, it is necessary to establish the reliability of computer simulations. That is, I need to show that computer simulations are reliable processes that render, most of the time, valid results. To be a reliable process, in turn, means that simulations accurately represent the target system and carry out error-free computations. I analyze verification and validation methods as the grounds for such representation accuracy and error-free computations. Since the aim is to entrench computer simulations as a technological singularity, the entire analysis must be careful to keep human agents out of the picture.",https://doi.org/10.1007/978-3-662-54033-6_9,2017,bookSection,"Durán, Juan M.",The Technological Singularity: Managing the Journey On Gradient-Based Learning in Continuous Games,"We formulate a general framework for competitive gradient-based learning that encompasses a wide breadth of multi-agent learning algorithms, and analyze the limiting behavior of competitive gradient-based learning algorithms using dynamical systems theory. For both general-sum and potential games, we characterize a non-negligible subset of the local Nash equilibria that will be avoided if each agent employs a gradient-based learning algorithm. We also shed light on the issue of convergence to non-Nash strategies in general- and zero-sum games, which may have no relevance to the underlying game, and arise solely due to the choice of algorithm. The existence and frequency of such strategies may explain some of the difficulties encountered when using gradient descent in zero-sum games as, e.g., in the training of generative adversarial networks. To reinforce the theoretical contributions, we provide empirical results that highlight the frequency of linear quadratic dynamic games (a benchmark for multi-agent reinforcement learning) that admit global Nash equilibria that are almost surely avoided by policy gradient.",http://arxiv.org/abs/1804.05464,2020,journalArticle,"Mazumdar, Eric; Ratliff, Lillian J.; Sastry, S. Shankar",SIAM Journal on Mathematics of Data Science How Change Agencies Can Affect Our Path Towards a Singularity,"This chapter uses the perspective of change agencies to analyse how agents (such as governments, international companies, entrepreneurs and individuals) innovate, interact, assimilate, consume and ultimately determine the direction of future technologies. These are the key components to the formation of technological singularity, i.e. an artificial intelligence becoming self-aware and self-evolving leading to an unprecedented rapid technological change in human civilization. 
General behaviours of change agents towards relevant technological research and development are discussed with a view to the economic and social implications. The interactions of key change agents can assist in the determination of future paths towards a singularity event or possibly even an ‘anti-singularity event’. Understanding the fundamental behaviours and motivations of change agents in technology development will increase our understanding of potential mechanisms to monitor and control developments such as Artificial Intelligence research to ensure that if and when singularity occurs it can be controlled and positively utilised for social and economic benefits.",https://doi.org/10.1007/978-3-662-54033-6_4,2017,bookSection,"Zheng, Ping; Akhmad, Mohammed-Asif",The Technological Singularity: Managing the Journey Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift,"Modern machine learning methods including deep learning have achieved great success in predictive accuracy for supervised learning tasks, but may still fall short in giving useful estimates of their predictive uncertainty. Quantifying uncertainty is especially critical in real-world settings, which often involve input distributions that are shifted from the training distribution due to a variety of factors including sample bias and non-stationarity. In such settings, well calibrated uncertainty estimates convey information about when a model's output should (or should not) be trusted. Many probabilistic deep learning methods, including Bayesian and non-Bayesian methods, have been proposed in the literature for quantifying predictive uncertainty, but to our knowledge there has not previously been a rigorous large-scale empirical comparison of these methods under dataset shift. We present a large-scale benchmark of existing state-of-the-art methods on classification problems and investigate the effect of dataset shift on accuracy and calibration. We find that traditional post-hoc calibration does indeed fall short, as do several other previous methods. However, some methods that marginalize over models give surprisingly strong results across a broad spectrum of tasks.",http://arxiv.org/abs/1906.02530,2019,conferencePaper,"Ovadia, Yaniv; Fertig, Emily; Ren, Jie; Nado, Zachary; Sculley, D.; Nowozin, Sebastian; Dillon, Joshua V.; Lakshminarayanan, Balaji; Snoek, Jasper","Advances in Neural Information Processing Systems, 2019" Bayesian Relational Memory for Semantic Visual Navigation,"We introduce a new memory architecture, Bayesian Relational Memory (BRM), to improve the generalization ability for semantic visual navigation agents in unseen environments, where an agent is given a semantic target to navigate towards. BRM takes the form of a probabilistic relation graph over semantic entities (e.g., room types), which allows (1) capturing the layout prior from training environments, i.e., prior knowledge, (2) estimating posterior layout at test time, i.e., memory update, and (3) efficient planning for navigation, altogether. We develop a BRM agent consisting of a BRM module for producing sub-goals and a goal-conditioned locomotion module for control. 
When testing in unseen environments, the BRM agent outperforms baselines that do not explicitly utilize the probabilistic relational memory structure.",https://ieeexplore.ieee.org/document/9009539/,2019,conferencePaper,"Wu, Yi; Wu, Yuxin; Tamar, Aviv; Russell, Stuart; Gkioxari, Georgia; Tian, Yuandong",2019 IEEE/CVF International Conference on Computer Vision (ICCV) Are causal decision theorists trying to outsmart conditional probabilities?,"Presumably, this has been discussed somewhere in the past, but I wonder to which extent causal decision theorists (and many other non-evidential decision theorists, too) are trying to make better predictions than (what they think to be) their own conditional probabilities. To state this question more clearly, let’s look at the generic Newcomb-like problem with two actions a1 and a2 (e.g., one-boxing and two-boxing, cooperating or defecting, not smoking or smoking) and two states s1 and s2 (specifying, e.g., whether there is money in both boxes, whether the other agent cooperates, whether one has cancer). The Newcomb-ness is the result of two properties: * No matter the state, it is better to take action a2, i.e. u(a2,s1)>u(a1,s1) and u(a2,s2)>u(a1,s2). (There are also problems without dominance where CDT and EDT nonetheless disagree. For simplicity I will assume dominance, here.) * The action cannot causally affect the state, but somehow taking a1 gives us evidence that we’re in the preferable state s1. That is, P(s1|a1)>P(s1|a2) and u(a1,s1)>u(a2,s2). Then, if the latter two differences are large enough, it may be that E[u|a1] > E[u|a2]. I.e. P(s1|a1) * u(s1,a1) + P(s2|a1) * u(s2,a1) > P(s1|a2) * u(s1,a2) + P(s2|a2) * u(s2,a2), despite the dominance. Now, my question is: After having taken one of the two actions, say a1, but before having observed the state, do causal decision theorists really assign the probability P(s1|a1) (specified in the problem description) to being in state s1? I used to think that this was the case. E.g., the way I learned about Newcomb’s problem is that causal decision theorists understand that, once they have said the words “both boxes for me, please”, they assign very low probability to getting the million. So, if there were a period between saying those words and receiving the payoff, they would bet at odds that reveal that they assign a low probability (namely P(s1,a2)) to money being under",https://www.lesswrong.com/posts/cyJgdhgYaM2CbZ7tP/are-causal-decision-theorists-trying-to-outsmart-conditional,2017,blogPost,"Oesterheld, Caspar",LessWrong "Existential Risk, Creativity & Well-Adapted Science",,http://philsci-archive.pitt.edu/14800/,2019,journalArticle,"Currie, Adrian",Studies in History and Philosophy of Science Part A Decision Theory and the Irrelevance of Impossible Outcomes,"(This post assumes some knowledge of the decision theory of Newcomb-like scenarios.) One problem in the decision theory of Newcomb-like scenarios (i.e. the study of whether causal, evidential or so…",https://casparoesterheld.com/2017/01/17/decision-theory-and-the-irrelevance-of-impossible-outcomes/,2017,blogPost,"Oesterheld, Caspar",The Universe from an Intentional Stance Pervasive Spurious Normativity,This paper proposes a mathematical model for a simplified version of the game defined in Hadfield and Weingast [2012] which proposes that legal order can be described as an equilibrium in third-party decentralized enforcement coordinated by a centralized classification institution. 
We explore the attractiveness of joining a new group (which is assumed to have settled on an enforcement equilibrium already) where groups differ in terms of the frequency of interactions in which norm violation is possible (normative interactions) and thus punishment is called for. We show that groups in which normative interactions are frequent but involve relatively unimportant rules may achieve higher value for participants.,,2017,manuscript,"Hadfield, Gillian K; Hadfield-Menell, Dylan", Introduction to the technological singularity,,,2017,bookSection,"Armstrong, Stuart",The Technological Singularity Economic growth under transformative AI,,https://globalprioritiesinstitute.org/wp-content/uploads/Philip-Trammell-and-Anton-Korinek_Economic-Growth-under-Transformative-AI.pdf,2020,report,"Trammell, Phillip; Korinek, Anton", How does the offense-defense balance scale?,"We ask how the offense-defense balance scales, meaning how it changes as investments into a conflict increase. To do so we offer a general formalization of the offense-defense balance in terms of contest success functions. Simple models of ground invasions and cyberattacks that exploit software vulnerabilities suggest that, in both cases, growth in investments will favor offense when investment levels are sufficiently low and favor defense when they are sufficiently high. We refer to this phenomenon as offensive-then-defensive scaling or OD-scaling. Such scaling effects may help us understand the security implications of applications of artificial intelligence that in essence scale up existing capabilities.",https://doi.org/10.1080/01402390.2019.1631810,2019,journalArticle,"Garfinkel, Ben; Dafoe, Allan",Journal of Strategic Studies Learning Efficient Representation for Intrinsic Motivation,"Mutual Information between agent Actions and environment States (MIAS) quantifies the influence of agent on its environment. Recently, it was found that the maximization of MIAS can be used as an intrinsic motivation for artificial agents. In literature, the term empowerment is used to represent the maximum of MIAS at a certain state. While empowerment has been shown to solve a broad range of reinforcement learning problems, its calculation in arbitrary dynamics is a challenging problem because it relies on the estimation of mutual information. Existing approaches, which rely on sampling, are limited to low dimensional spaces, because high-confidence distribution-free lower bounds for mutual information require exponential number of samples. In this work, we develop a novel approach for the estimation of empowerment in unknown dynamics from visual observation only, without the need to sample for MIAS. The core idea is to represent the relation between action sequences and future states using a stochastic dynamic model in latent space with a specific form. This allows us to efficiently compute empowerment with the ""Water-Filling"" algorithm from information theory. We construct this embedding with deep neural networks trained on a sophisticated objective function. Our experimental results show that the designed embedding preserves information-theoretic properties of the original dynamics.",http://arxiv.org/abs/1912.02624,2019,manuscript,"Zhao, Ruihan; Tiomkin, Stas; Abbeel, Pieter", Using surrogate goals to deflect threats,"Agents that threaten to harm other agents, either in an attempt at extortion or as part of an escalating conflict, are an important form of agential s-risks. 
To avoid worst-case outcomes resulting from the execution of such threats, I suggest that agents add a “meaningless” surrogate goal to their utility function.",https://longtermrisk.org/using-surrogate-goals-deflect-threats/,2018,blogPost,"Baumann, Tobias",Center on Long-Term Risk Pretrained Transformers Improve Out-of-Distribution Robustness,"Although pretrained Transformers such as BERT achieve high accuracy on in-distribution examples, do they generalize to new distributions? We systematically measure out-of-distribution (OOD) generalization for seven NLP datasets by constructing a new robustness benchmark with realistic distribution shifts. We measure the generalization of previous models including bag-of-words models, ConvNets, and LSTMs, and we show that pretrained Transformers’ performance declines are substantially smaller. Pretrained transformers are also more effective at detecting anomalous or OOD examples, while many previous models are frequently worse than chance. We examine which factors affect robustness, finding that larger models are not necessarily more robust, distillation can be harmful, and more diverse pretraining data can enhance robustness. Finally, we show where future work can improve OOD robustness.",http://arxiv.org/abs/2004.06100,2020,conferencePaper,"Hendrycks, Dan; Liu, Xiaoyuan; Wallace, Eric; Dziedzic, Adam; Krishnan, Rishabh; Song, Dawn",Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics Certified Adversarial Robustness via Randomized Smoothing,"We show how to turn any classifier that classifies well under Gaussian noise into a new classifier that is certifiably robust to adversarial perturbations under the ℓ2 norm. This “randomized smoothing” technique has been proposed recently in the literature, but existing guarantees are loose. We prove a tight robustness guarantee in ℓ2 norm for smoothing with Gaussian noise. We use randomized smoothing to obtain an ImageNet classifier with e.g. a certified top-1 accuracy of 49% under adversarial perturbations with ℓ2 norm less than 0.5 (=127/255). No certified defense has been shown feasible on ImageNet except for smoothing. On smaller-scale datasets where competing approaches to certified ℓ2 robustness are viable, smoothing delivers higher certified accuracies. Our strong empirical results suggest that randomized smoothing is a promising direction for future research into adversarially robust classification. Code and models are available at http://github.com/locuslab/smoothing.",http://arxiv.org/abs/1902.02918,2019,journalArticle,"Cohen, Jeremy M.; Rosenfeld, Elan; Kolter, J. Zico","arXiv:1902.02918 [cs, stat]" The Facets of Artificial Intelligence: A Framework to Track the Evolution of AI,"We present nine facets for the analysis of the past and future evolution of AI. Each facet has also a set of edges that can summarise different trends and contours in AI. With them, we first conduct a quantitative analysis using the information from two decades of AAAI/IJCAI conferences and around 50 years of documents from AI topics, an official database from the AAAI, illustrated by several plots. We then perform a qualitative analysis using the facets and edges, locating AI systems in the intelligence landscape and the discipline as a whole. 
This analytical framework provides a more structured and systematic way of looking at the shape and boundaries of AI.",https://www.ijcai.org/proceedings/2018/718,2018,conferencePaper,"Martínez-Plumed, Fernando; Loe, Bao Sheng; Flach, Peter; Ó hÉigeartaigh, Seán; Vold, Karina; Hernández-Orallo, José",Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence "Essays on Ethics, Social Behavior, and Scientific Explanation",,http://link.springer.com/10.1007/978-94-010-9327-9,1976,book,"Harsanyi, John C.", How long until human-level AI? Results from an expert assessment,,https://linkinghub.elsevier.com/retrieve/pii/S0040162510002106,2011,journalArticle,"Baum, Seth D.; Goertzel, Ben; Goertzel, Ted G.",Technological Forecasting and Social Change Algorithms associating appearance and criminality have a dark past – Catherine Stinson | Aeon Ideas,"In discussions about facial-recognition software, phrenology analogies seem like a no-brainer. In fact, they’re a dead-end",https://aeon.co/ideas/algorithms-associating-appearance-and-criminality-have-a-dark-past,2020,magazineArticle,"Stinson, Catherine",Aeon "Governance, Risk and Financial Impact of Mega Disasters: Lessons from Japan",,http://link.springer.com/10.1007/978-981-13-9005-0,2019,book,, Algorithmic Fairness from a Non-ideal Perspective,"Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered a variety of algorithms in attempts to satisfy subsets of these parities or to trade off the degree to which they are satisfied against utility. In this paper, we connect this approach to fair machine learning to the literature on ideal and non-ideal methodological approaches in political philosophy. The ideal approach requires positing the principles according to which a just world would operate. In the most straightforward application of ideal theory, one supports a proposed policy by arguing that it closes a discrepancy between the real and the perfectly just world. However, by failing to account for the mechanisms by which our non-ideal world arose, the responsibilities of various decision-makers, and the impacts of proposed policies, naive applications of ideal thinking can lead to misguided interventions. In this paper, we demonstrate a connection between the fair machine learning literature and the ideal approach in political philosophy, and argue that the increasingly apparent shortcomings of proposed fair machine learning algorithms reflect broader troubles faced by the ideal approach. We conclude with a critical discussion of the harms of misguided solutions, a reinterpretation of impossibility results, and directions for future research.",http://arxiv.org/abs/2001.09773,2020,conferencePaper,"Fazelpour, Sina; Lipton, Zachary C.","arXiv:2001.09773 [cs, stat]" Book summary: Unlocking the Emotional Brain,"If the thesis in Unlocking the Emotional Brain (UtEB) is even half-right, it may be one of the most important books that I have read. 
Written by the psychotherapists Bruce Ecker, Robin Ticic and Laurel Hulley, it claims to offer a neuroscience-grounded, comprehensive model of how effective therapy works. In so doing, it also happens to formulate its theory in terms of belief updating, helping explain how the brain models the world and what kinds of techniques allow us to actually change our minds. Furthermore, if UtEB is correct, it also explains why rationalist techniques such as Internal Double Crux [1 2 3] work. UtEB’s premise is that much if not most of our behavior is driven by emotional learning. Intense emotions generate unconscious predictive models of how the world functions and what caused those emotions to occur. The brain then uses those models to guide our future behavior. Emotional issues and seemingly irrational behaviors are generated from implicit world-models (schemas) which have been formed in response to various external challenges. Each schema contains memories relating to times when the challenge has been encountered and mental structures describing both the problem and a solution to it. According to the authors, the key for updating such schemas involves a process of memory reconsolidation, originally identified in neuroscience. The emotional brain’s learnings are usually locked and not modifiable. However, once an emotional schema is activated, it is possible to simultaneously bring into awareness knowledge contradicting the active schema. When this happens, the information contained in the schema can be overwritten by the new knowledge. While I am not convinced that the authors are entirely right, many of the book’s claims definitely feel like they are pointing in the right direction. I will discuss some of my caveats and reservations after summarizing some of the book’s claims in general. I also consider its model in the light of an issu",https://www.lesswrong.com/posts/i9xyZBS3qzA8nFXNQ/book-summary-unlocking-the-emotional-brain,2019,blogPost,"Sotala, Kaj",LessWrong Artificial Intelligence and Robotization,"This chapter provides an overview of the international law governing applications of artificial intelligence and robotics which affect global security, highlighting challenges arising from technological developments and how international regulators are responding to them. Much of the international law literature thus far has focused on the implications of increasingly autonomous weapons systems. Our contribution instead seeks to cover a broader range of global security risks resulting from large-scale diffuse or concentrated, gradual or sudden, direct or indirect, intentional or unintentional, AI or robotics-caused harm. Applications of these technologies permeate almost every domain of human activity and thus unsurprisingly have an equally wide range of risk profiles, from a discriminatory algorithmic decision causing financial distress to an AI-sparked nuclear war collapsing global civilization. Hence, it is only natural that much of the international regulatory activity takes place in domain-specific fora. 
Many of these fora coordinate with each other, both within and beyond the UN system, spreading insights and best practices on how to deal with common concerns such as cybersecurity, monitoring, and reliability, so as to prevent accidents and misuse.",https://papers.ssrn.com/abstract=3310421,2019,bookSection,"Kunz, Martina; Ó hÉigeartaigh, Seán",Oxford Handbook on the International Law of Global Security Existential risk and existential hope: definitions,,,2015,journalArticle,"Cotton-Barratt, Owen; Ord, Toby",Future of Humanity Institute: Technical Report Unsupervised Risk Estimation with only Structural Assumptions,"Given a model θ and unlabeled samples from a distribution p∗, we show how to estimate the labeled risk of θ while only making structural (i.e., conditional independence) assumptions about p∗. This lets us estimate a model’s test error on distributions very different than its training distribution, thus performing unsupervised domain adaptation even without assuming the true predictor remains constant (covariate shift). Furthermore, we can perform discriminative semi-supervised learning, even under model mis-specification. Our technical tool is the method of moments, which allows us to exploit conditional independencies without relying on a specific parametric model. Finally, we introduce a new theoretical framework for grappling with the non-identifiability of the class identities fundamental to unsupervised learning.",,2016,conferencePaper,"Steinhardt, Jacob; Liang, Percy", Teacher-Student Curriculum Learning,"We propose Teacher-Student Curriculum Learning (TSCL), a framework for automatic curriculum learning, where the Student tries to learn a complex task and the Teacher automatically chooses subtasks from a given set for the Student to train on. We describe a family of Teacher algorithms that rely on the intuition that the Student should practice more those tasks on which it makes the fastest progress, i.e. where the slope of the learning curve is highest. In addition, the Teacher algorithms address the problem of forgetting by also choosing tasks where the Student's performance is getting worse. We demonstrate that TSCL matches or surpasses the results of carefully hand-crafted curricula in two tasks: addition of decimal numbers with LSTM and navigation in Minecraft. Using our automatically generated curriculum enabled to solve a Minecraft maze that could not be solved at all when training directly on solving the maze, and the learning was an order of magnitude faster than uniform sampling of subtasks.",https://ieeexplore.ieee.org/abstract/document/8827566?casa_token=PfYUaX98POUAAAAA:cIxPGVMmrYB3kqcM4aPyvrBzLo0S0jbF6bBCljJEeGyQ5BdIOsrn2pw3THc0Xyd4rkP8Soxl,2020,journalArticle,"Matiisen, Tambet; Oliver, Avital; Cohen, Taco; Schulman, John",IEEE Transactions on Neural Networks and Learning Systems Double catastrophe: intermittent stratospheric geoengineering induced by societal collapse,,http://link.springer.com/10.1007/s10669-012-9429-y,2013,journalArticle,"Baum, Seth D.; Maher, Timothy M.; Haqq-Misra, Jacob",Environment Systems & Decisions Predictors exist: CDT going bonkers... forever,"I've been wanting to get a better example of CDT (causal decision theory) misbehaving, where the behaviour is more clearly suboptimal than it is in the Newcomb problem (which many people don't seem to accept as CDT being suboptimal), and simpler to grasp than Death in Damascus. 
THE ""PREDICTORS EXIST"" PROBLEM So consider this simple example: the player is playing against Omega, who will predict their actions[1]. The player can take three actions: ""zero"", ""one"", or ""leave"". If ever they do ""leave"", then the experiment is over and they leave. If they choose ""zero"" or ""one"", then Omega will predict their action, and compare this to their actual action. If the two match, then the player loses 1 utility and the game repeats; if the action and the prediction differs, then the player gains 3 utility and the experiment ends. Assume that actually Omega is a perfect or quasi-perfect predictor, with a good model of the player. An FDT or EDT agent would soon realise that they couldn't trick Omega, after a few tries, and would quickly end the game. But the CDT player would be incapable of reaching this reasoning. Whatever distribution they compute over Omega's prediction, they will always estimate that they (the CDT player) have at least a 50% chance of choosing the other option[2], for an expected utility gain of at least 0.5(3)+0.5(−1)=1. Basically, the CDT agent can never learn that Omega is a good predictor of themselves[3]. And so they will continue playing, and continue losing... for ever. -------------------------------------------------------------------------------- 1. Omega will make this prediction not necessarily before the player takes their action, not even necessarily without seeing this action, but still makes the prediction independently of this knowledge. And that's enough for CDT. ↩︎ 2. For example, suppose the CDT agent estimates the prediction will be ""zero"" with probability p, and ""one"" with probability 1-p. Then if p≥1/2",https://www.alignmentforum.org/posts/Kr76XzME7TFkN937z/predictors-exist-cdt-going-bonkers-forever,2020,blogPost,"Armstrong, Stuart",AI Alignment Forum Ethical Issues in Advanced Artificial Intelligence,"The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive pursuit, a superintelligence could also easily surpass humans in the quality of its moral thinking. However, it would be up to the designers of the superintelligence to specify its original motivations. Since the superintelligence may become unstoppably powerful because of its intellectual superiority and the technologies it could develop, it is crucial that it be provided with human-friendly motivations. This paper surveys some of the unique ethical issues in creating superintelligence, and discusses what motivations we ought to give a superintelligence, and introduces some cost-benefit considerations relating to whether the development of superintelligent machines ought to be accelerated or retarded.",https://www.taylorfrancis.com/books/9781000108934/chapters/10.4324/9781003074991-7,2003,bookSection,"Bostrom, Nick",Machine Ethics and Robot Ethics Machine Learning Projects for Iterated Distillation and Amplification,"Iterated Distillation and Amplification (IDA) is a framework for training ML models. 
IDA is related to existing frameworks like imitation learning and reinforcement learning, but it aims to solve tasks for which humans cannot construct a suitable reward function or solve directly.",,2019,manuscript,"Evans, Owain; Saunders, William; Stuhlmüller, Andreas", Reward-rational (implicit) choice: A unifying formalism for reward learning,"It is often difficult to hand-specify what the correct reward function is for a task, so researchers have instead aimed to learn reward functions from human behavior or feedback. The types of behavior interpreted as evidence of the reward function have expanded greatly in recent years. We've gone from demonstrations, to comparisons, to reading into the information leaked when the human is pushing the robot away or turning it off. And surely, there is more to come. How will a robot make sense of all these diverse types of behavior? Our key insight is that different types of behavior can be interpreted in a single unifying formalism - as a reward-rational choice that the human is making, often implicitly. The formalism offers both a unifying lens with which to view past work, as well as a recipe for interpreting new sources of information that are yet to be uncovered. We provide two examples to showcase this: interpreting a new feedback type, and reading into how the choice of feedback itself leaks information about the reward.",http://arxiv.org/abs/2002.04833,2020,conferencePaper,"Jeon, Hong Jun; Milli, Smitha; Dragan, Anca D.",34th Conference on Neural Information Processing Systems (NeurIPS 2020) Shared Autonomy via Hindsight Optimization,"In shared autonomy, user input and robot autonomy are combined to control a robot to achieve a goal. Often, the robot does not know a priori which goal the user wants to achieve, and must both predict the user's intended goal, and assist in achieving that goal. We formulate the problem of shared autonomy as a Partially Observable Markov Decision Process with uncertainty over the user's goal. We utilize maximum entropy inverse optimal control to estimate a distribution over the user's goal based on the history of inputs. Ideally, the robot assists the user by solving for an action which minimizes the expected cost-to-go for the (unknown) goal. As solving the POMDP to select the optimal action is intractable, we use hindsight optimization to approximate the solution. In a user study, we compare our method to a standard predict-then-blend approach. We find that our method enables users to accomplish tasks more quickly while utilizing less input. However, when asked to rate each system, users were mixed in their assessment, citing a tradeoff between maintaining control authority and accomplishing tasks quickly.",http://arxiv.org/abs/1503.07619,2015,conferencePaper,"Javdani, Shervin; Srinivasa, Siddhartha S.; Bagnell, J. Andrew",Robotics Science and Systems Online Proceedings An Empirical Model of Large-Batch Training,"In an increasing number of domains it has been demonstrated that deep learning models can be trained using relatively large batch sizes without sacrificing data efficiency. However the limits of this massive data parallelism seem to differ from domain to domain, ranging from batches of tens of thousands in ImageNet to batches of millions in RL agents that play the game Dota 2. To our knowledge there is limited conceptual understanding of why these limits to batch size differ or how we might choose the correct batch size in a new domain. 
In this paper, we demonstrate that a simple and easy-to-measure statistic called the gradient noise scale predicts the largest useful batch size across many domains and applications, including a number of supervised learning datasets (MNIST, SVHN, CIFAR-10, ImageNet, Billion Word), reinforcement learning domains (Atari and Dota), and even generative model training (autoencoders on SVHN). We find that the noise scale increases as the loss decreases over a training run and depends on the model size primarily through improved model performance. Our empirically-motivated theory also describes the tradeoff between compute-efficiency and time-efficiency, and provides a rough model of the benefits of adaptive batch-size training.",http://arxiv.org/abs/1812.06162,2018,manuscript,"McCandlish, Sam; Kaplan, Jared; Amodei, Dario; Team, OpenAI Dota", From Optimizing Engagement to Measuring Value,"Most recommendation engines today are based on predicting user engagement, e.g. predicting whether a user will click on an item or not. However, there is potentially a large gap between engagement signals and a desired notion of ""value"" that is worth optimizing for. We use the framework of measurement theory to (a) confront the designer with a normative question about what the designer values, (b) provide a general latent variable model approach that can be used to operationalize the target construct and directly optimize for it, and (c) guide the designer in evaluating and revising their operationalization. We implement our approach on the Twitter platform on millions of users. In line with established approaches to assessing the validity of measurements, we perform a qualitative evaluation of how well our model captures a desired notion of ""value"".",http://arxiv.org/abs/2008.12623,2020,manuscript,"Milli, Smitha; Belli, Luca; Hardt, Moritz", Accounting for Violent Conflict Risk in Planetary Defense Decision,,http://gcrinstitute.org/accounting-for-violent-conflict-risk-in-planetary-defense-decisions/,2020,blogPost,"Baum, Seth",Global Catastrophic Risk Institute Regulatory Markets for AI Safety,We propose a new model for regulation to achieve AI safety: global regulatory markets. We first sketch the model in general terms and provide an overview of the costs and benefits of this approach. We then demonstrate how the model might work in practice: responding to the risk of adversarial attacks on AI models employed in commercial drones.,https://arxiv.org/abs/2001.00078,2019,conferencePaper,"Clark, Jack; Hadfield, Gillian K", The sure-thing principle and P2,"This paper offers a fine analysis of different versions of the well known sure-thing principle. We show that Savage’s formal formulation of the principle, i.e., his second postulate (P2), is strictly stronger than what is intended originally.",http://www.sciencedirect.com/science/article/pii/S0165176517303154,2017,journalArticle,"Liu, Yang",Economics Letters Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments,"We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. 
Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.",https://papers.nips.cc/paper/2017/hash/68a9750337a418a86fe06c1991a1d64c-Abstract.html,2018,conferencePaper,"Lowe, Ryan; Wu, Yi; Tamar, Aviv; Harb, Jean; Abbeel, Pieter; Mordatch, Igor",Advances in Neural Information Processing Systems 30 (NIPS 2017) The Universe of Minds,"The paper attempts to describe the space of possible mind designs by first equating all minds to software. Next it proves some interesting properties of the mind design space such as infinitude of minds, size and representation complexity of minds. A survey of mind design taxonomies is followed by a proposal for a new field of investigation devoted to study of minds, intellectology, a list of open problems for this new field is presented.",http://arxiv.org/abs/1410.0369,2014,manuscript,"Yampolskiy, Roman V.", Retrospective Analysis of the 2019 MineRL Competition on Sample Efficient Reinforcement Learning,"To facilitate research in the direction of sample efficient reinforcement learning, we held the MineRL Competition on Sample Efficient Reinforcement Learning Using Human Priors at the Thirty-third Conference on Neural Information Processing Systems (NeurIPS 2019). The primary goal of this competition was to promote the development of algorithms that use human demonstrations alongside reinforcement learning to reduce the number of samples needed to solve complex, hierarchical, and sparse environments. We describe the competition, outlining the primary challenge, the competition design, and the resources that we provided to the participants. We provide an overview of the top solutions, each of which use deep reinforcement learning and/or imitation learning. We also discuss the impact of our organizational decisions on the competition and future directions for improvement.",http://arxiv.org/abs/2003.05012,2020,conferencePaper,"Milani, Stephanie; Topin, Nicholay; Houghton, Brandon; Guss, William H.; Mohanty, Sharada P.; Nakata, Keisuke; Vinyals, Oriol; Kuno, Noboru Sean",Proceedings of the NeurIPS 2019 Competition and Demonstration Track Artificial General Intelligence,,http://link.springer.com/10.1007/978-3-540-68677-4,2007,book,, Learning to Understand Goal Specifications by Modelling Reward,"Recent work has shown that deep reinforcement-learning agents can learn to follow language-like instructions from infrequent environment rewards. However, this places on environment designers the onus of designing language-conditional reward functions which may not be easily or tractably implemented as the complexity of the environment and the language scales. To overcome this limitation, we present a framework within which instruction-conditional RL agents are trained using rewards obtained not from the environment, but from reward models which are jointly trained from expert examples. As reward models improve, they learn to accurately reward agents for completing tasks for environment configurations---and for instructions---not present amongst the expert data. This framework effectively separates the representation of what instructions require from how they can be executed. 
In a simple grid world, it enables an agent to learn a range of commands requiring interaction with blocks and understanding of spatial relations and underspecified abstract arrangements. We further show the method allows our agent to adapt to changes in the environment without requiring new expert examples.",http://arxiv.org/abs/1806.01946,2019,conferencePaper,"Bahdanau, Dzmitry; Hill, Felix; Leike, Jan; Hughes, Edward; Hosseini, Arian; Kohli, Pushmeet; Grefenstette, Edward",arXiv:1806.01946 [cs] Conversation with Steve Potter,"Posted 13 July 2015 Participants Professor Steve Potter – Associate Professor, Laboratory of NeuroEngineering, Coulter Department of Biomedical Engineering, Georgia Institute of Technology Katja Grace – Machine Intelligence Research Institute (MIRI) Note: These notes were compiled by MIRI and give an overview of the major points made by Professor Steve Potter. Summary Katja Grace spoke...",https://aiimpacts.org/conversation-with-steve-potter/,2015,blogPost,"Potter, Steve; Grace, Katja",AI Impacts Collaborating with Humans Requires Understanding Them,The BAIR Blog,http://bair.berkeley.edu/blog/2019/10/21/coordination/,2019,blogPost,"Shah, Rohin; Carroll, Micah",The Berkeley Artificial Intelligence Research Blog Emergence of Addictive Behaviors in Reinforcement Learning Agents,"This paper presents a novel approach to the technical analysis of wireheading in intelligent agents. Inspired by the natural analogues of wireheading and their prevalent manifestations, we propose the modeling of such phenomenon in Reinforcement Learning (RL) agents as psychological disorders. In a preliminary step towards evaluating this proposal, we study the feasibility and dynamics of emergent addictive policies in Q-learning agents in the tractable environment of the game of Snake. We consider a slightly modified settings for this game, in which the environment provides a ""drug"" seed alongside the original ""healthy"" seed for the consumption of the snake. We adopt and extend an RL-based model of natural addiction to Q-learning agents in this settings, and derive sufficient parametric conditions for the emergence of addictive behaviors in such agents. Furthermore, we evaluate our theoretical analysis with three sets of simulation-based experiments. The results demonstrate the feasibility of addictive wireheading in RL agents, and provide promising venues of further research on the psychopathological modeling of complex AI safety problems.",http://arxiv.org/abs/1811.05590,2018,conferencePaper,"Behzadan, Vahid; Yampolskiy, Roman V.; Munir, Arslan",Proceedings of the AAAI Workshop on Artificial Intelligence Safety 2019 Implicit Generation and Generalization in Energy-Based Models,"Energy based models (EBMs) are appealing due to their generality and simplicity in likelihood modeling, but have been traditionally difficult to train. We present techniques to scale MCMC based EBM training on continuous neural networks, and we show its success on the high-dimensional data domains of ImageNet32x32, ImageNet128x128, CIFAR-10, and robotic hand trajectories, achieving better samples than other likelihood models and nearing the performance of contemporary GAN approaches, while covering all modes of the data. We highlight some unique capabilities of implicit generation such as compositionality and corrupt image reconstruction and inpainting. 
Finally, we show that EBMs are useful models across a wide variety of tasks, achieving state-of-the-art out-of-distribution classification, adversarially robust classification, state-of-the-art continual online class learning, and coherent long term predicted trajectory rollouts.",http://arxiv.org/abs/1903.08689,2019,manuscript,"Du, Yilun; Mordatch, Igor", On AI Weapons,"In this post I comprehensively review the risks and upsides of lethal autonomous weapons (LAWs). I incorporate and expand upon the ideas in this previous post of mine and the comments, plus other recent debates and publications. My principle conclusions are: 1. LAWs are more likely to be a good development than a bad one, though there is quite a bit of uncertainty and one could justify being neutral on the matter. It is not justified to expend effort against the development of lethal autonomous weapons, as the pros do not outweigh the cons. 2. If someone still opposes lethal autonomous weapons, they should focus on directly motivating steps to restrict their development with an international treaty, rather than fomenting general hostility to LAWs in Western culture. 3. The concerns over AI weapons should pivot away from accidents and moral dilemmas, towards the question of who would control them in a domestic power struggle. This issue is both more important and more neglected. Background: as far as I can tell, there has been no serious analysis judging whether the introduction of LAWs would be a good development or not. Despite this lack of foundation, a few members in or around the EA community have made some efforts to attempt to stop the new technology from being created, most notably the Future of Life Institute. So we should take a careful look at this issue and see whether these efforts ought to be scaled up, or if they are harmful or merely a waste of time. This article is laid out as a systematic classification of potential impacts. I’m not framing it as a direct response to any specific literature because the existing arguments about AI weapons are pretty scattered and unstructured. RESPONSIBILITY: CAN YOU HOLD SOMEONE RESPONSIBLE FOR A DEATH CAUSED BY AN LAW? Opponents of LAWs frequently repeat the worry that they prevent us from holding people responsible for bad actions. 
But the idea of “holding someone responsible” is vague language and there",https://forum.effectivealtruism.org/posts/vdqBn65Qaw77MpqXz/on-ai-weapons,2019,blogPost,"Bogosian, Kyle",Effective Altruism Forum Learning the prior,"I suggest using neural nets to approximate our real prior, rather than implicitly using neural nets themselves as the prior.",https://ai-alignment.com/learning-the-prior-48f61b445c04,2020,blogPost,"Christiano, Paul",AI Alignment (Medium) "If AI is going to help us in a crisis, we need a new kind of ethics","Jess Whittlestone at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and her colleagues published a comment piece in Nature Machine Intelligence this week arguing that if artificial intelligence is going to help in a crisis, we need a new, faster way of doing AI ethics, which they call ethics…",https://www.technologyreview.com/2020/06/24/1004432/ai-help-crisis-new-kind-ethics-machine-learning-pandemic/,2020,magazineArticle,"Heaven, Will Douglas; Whittlestone, Jess",MIT Technology Review "Computational Models of Ethical Reasoning: Challenges, Initial Steps, and Future Directions",,http://ieeexplore.ieee.org/document/1667950/,2006,journalArticle,"McLaren, B.M.",IEEE Intelligent Systems Descriptive Population Ethics and Its Relevance for Cause Prioritization,"SUMMARY Descriptive ethics is the empirical study of people's values and ethical views, e.g. via a survey or questionnaire. This overview focuses on beliefs about population ethics and exchange rates between goods (e.g. happiness) and bads (e.g. suffering). Two variables seem particularly important and action-guiding in this context, especially when trying to make informed choices about how to best shape the long-term future: 1) One’s normative goods-to-bads ratio (N-ratio) and 2) one’s expected bads-to-goods ratio (E-ratio). I elaborate on how a framework consisting of these two variables could inform our decision-making with respect to shaping the long-term future, as well as facilitate cooperation among differing value systems and further moral reflection. I then present concrete ideas for further research in this area and investigate associated challenges. The last section lists resources which discuss further methodological and theoretical issues which were beyond the scope of the present text. DESCRIPTIVE ETHICS AND LONG-TERM FUTURE PRIORITIZATION Recently, some debate has emerged on whether reducing extinction risk is the ideal course of action for shaping the long-term future. For instance, in the Global Priorities Institute (GPI) research agenda, Greaves & MacAskill (2017, p.13) ask “[...] whether it might be more important to ensure that future civilisation is good, assuming we don’t go extinct, than to ensure that future civilisation happens at all.” We could further ask to what extent we should focus our efforts on reducing risks of astronomical suffering (s-risks). 
Again, Greaves & MacAskill: “Should we be more concerned about avoiding the worst possible outcomes for the future than we are for ensuring the very best outcomes occur [...]?” Given the enormous stakes, these are arguably some of the most important questions facing those who prioritize shaping the long-term future.1 Some interventions increase both the quality of future civilization as w",https://forum.effectivealtruism.org/posts/CmNBmSf6xtMyYhvcs/descriptive-population-ethics-and-its-relevance-for-cause,2018,blogPost,"Althaus, David",Effective Altruism Forum Demonstrating the Impact of Prior Knowledge in Risky Choice,"Bayesian models that optimally integrate prior probabilities with observations have successfully explained many aspects of human cognition. Research on decision-making under risk, however, is usually done through laboratory tasks that attempt to remove the effect of prior knowledge on choice. We ran a large online experiment in which risky options paid out according to the distribution of Democratic and Republican voters in US congressional districts to test the effects of manipulating prior probabilities on participants’ choices. We find evidence that people’s risk preferences are appropriately influenced by prior probabilities, and discuss how the study of risky choice can be integrated into the Bayesian approach to studying cognition.",https://osf.io/jgxra,2019,conferencePaper,"Hardy, Mathew; Griffiths, Tom",CogSci 2019 Brain performance in TEPS,"Traversed Edges Per Second (TEPS) is a benchmark for measuring a computer's ability to communicate information internally. Given several assumptions, we can also estimate the human brain's communication performance in terms of TEPS, and use this to meaningfully compare brains to computers. We estimate that (given these assumptions) the human brain performs around  0.18 - 6.4 *...",https://aiimpacts.org/brain-performance-in-teps/,2015,blogPost,AI Impacts,AI Impacts MDL Intelligence Distillation : Exploring Strategies for Safe Access to Superintelligent Problem-Solving Capabilities,"AI technologies may reach the threshold of rapid, open-ended, recursive improvement before we are prepared to manage the challenges posed",https://www.taylorfrancis.com/,2018,bookSection,"Drexler, K. Eric",Artificial Intelligence Safety and Security The Alignment Problem: Machine Learning and Human Values,"A jaw-dropping exploration of everything that goes wrong when we build AI systems and the movement to fix them.Today’s “machine-learning” systems, trained by data, are so effective that we’ve invited them to see and hear for us―and to make decisions on our behalf. But alarm bells are ringing. Recent years have seen an eruption of concern as the field of machine learning advances. When the systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem.Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole―and appear to assess Black and White defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. 
And as autonomous vehicles share our streets, we are increasingly putting our lives in their hands.The mathematical and computational models driving these changes range in complexity from something that can fit on a spreadsheet to a complex system that might credibly be called “artificial intelligence.” They are steadily replacing both human judgment and explicitly programmed software.In best-selling author Brian Christian’s riveting account, we meet the alignment problem’s “first-responders,” and learn their ambitious plan to solve it before our hands are completely off the wheel. In a masterful blend of history and on-the ground reporting, Christian traces the explosive growth in the field of machine learning and surveys its current, sprawling frontier. Readers encounter a discipline finding its legs amid exhilarating and sometimes terrifying progress. Whether they―and we―succeed or fail in solving the alignment problem will be a defining human story.The Alignment Problem offers an unflinching reckoning with humanity’s biases and blind spots, our own unstated assumptions and often contradictory goals. A dazzlingly interdisciplinary work, it takes a hard look not only at our technology but at our culture―and finds a story by turns harrowing and hopeful.",,2020,book,"Christian, Brian", Language Models are Few-Shot Learners,"Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions – something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art finetuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3’s few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. 
We discuss broader societal impacts of this finding and of GPT-3 in general.",http://arxiv.org/abs/2005.14165,2020,conferencePaper,"Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario",Advances in Neural Information Processing Systems 33 pre-proceedings (NeurIPS 2020) Intergenerational equity under catastrophic climate change,"Climate change raises the issue of intergenerational equity. As climate change threatens irreversible and dangerous impacts, possibly leading to extinction, the most relevant trade-off may not be between present and future consumption, but between present consumption and the mere existence of future generations. To investigate this trade-off, we build an integrated assessment model that explicitly accounts for the risk of extinction of future generations. We compare different climate policies, which change the probability of catastrophic outcomes yielding an early extinction, within the class of variable population utilitarian social welfare functions. We show that the risk of extinction is the main driver of the preferred policy over climate damages. We analyze the role of inequality aversion and population ethics. Usually a preference for large populations and a low inequality aversion favour the most ambitious climate policy, although there are cases where the effect of inequality aversion is reversed.",,2017,report,"Méjean, Aurélie; Pottier, Antonin; Zuber, Stéphane; Fleurbaey, Marc", Global Catastrophic Risks Survey,,,2008,report,"Sandberg, Anders; Bostrom, Nick", Hacking the brain: dimensions of cognitive enhancement,,,2018,journalArticle,"Dresler, Martin; Sandberg, Anders; Bublitz, Christoph; Ohla, Kathrin; Trenado, Carlos; Mroczko-Wasowicz, Aleksandra; Kühn, Simone; Repantis, Dimitris",ACS chemical neuroscience Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations,"A critical flaw of existing inverse reinforcement learning (IRL) methods is their inability to significantly outperform the demonstrator. This is because IRL typically seeks a reward function that makes the demonstrator appear near-optimal, rather than inferring the underlying intentions of the demonstrator that may have been poorly executed in practice. In this paper, we introduce a novel reward-learning-from-observation algorithm, Trajectory-ranked Reward EXtrapolation (T-REX), that extrapolates beyond a set of (approximately) ranked demonstrations in order to infer high-quality reward functions from a set of potentially poor demonstrations. When combined with deep reinforcement learning, T-REX outperforms state-of-the-art imitation learning and IRL methods on multiple Atari and MuJoCo benchmark tasks and achieves performance that is often more than twice the performance of the best demonstration. 
We also demonstrate that T-REX is robust to ranking noise and can accurately extrapolate intention by simply watching a learner noisily improve at a task over time.",http://arxiv.org/abs/1904.06387,2019,conferencePaper,"Brown, Daniel S.; Goo, Wonjoon; Nagarajan, Prabhat; Niekum, Scott",Proceedings of the 36th International Conference on Machine Learning Optimization in the now: Dynamic peephole optimization for hierarchical planning,"For robots to effectively interact with the real world, they will need to perform complex tasks over long time horizons. This is a daunting challenge, but recent advances using hierarchical planning [1] have been able to provide leverage on this problem. Unfortunately, this approach makes no effort to account for the execution cost of an abstract plan and often arrives at poor quality plans. This paper outlines a method for dynamically improving a hierarchical plan during execution. We frame the underlying question as one of evaluating the resource needs of an abstract operator and propose a general way to approach estimating them. We ran experiments in challenging domains and observed up to 30% reduction in execution cost when compared with a standard hierarchical planner.",http://ieeexplore.ieee.org/document/6631225/,2013,conferencePaper,"Hadfield-Menell, Dylan; Kaelbling, Leslie Pack; Lozano-Perez, Tomas",2013 IEEE International Conference on Robotics and Automation Directions and desiderata for AI alignment,"I lay out three research directions in AI alignment, and three desiderata that I think should guide research in these areas.",https://ai-alignment.com/directions-and-desiderata-for-ai-control-b60fca0da8f4,2018,blogPost,"Christiano, Paul",AI Alignment (Medium) Artificial Intelligence and National Security,"Reps. Hurd, Kelly, the Bipartisan Policy Center and CSET released guidelines for national security considerations that must be addressed in a national AI strategy. The findings identify key areas for improvement in defense and intelligence to put the nation on a path to large-scale development and deployment of AI tools in promoting national security.",https://cset.georgetown.edu/research/artificial-intelligence-and-national-security/,2020,report,"Hurd, Will; Kelly, Robin; The Bipartisan Policy Center", A simpler and more realistic subjective decision theory,"In his classic book “the Foundations of Statistics” Savage develops a formal system of rational decision making. It is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are however required in Savage’s proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage’s proof requires that this be a σ-algebra (i.e., closed under infinite countable unions and intersections), which makes for an extremely rich preference relation. 
On Savage’s view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it. But he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts. The second result also employs a novel way of deriving utilities in Savage-style systems—without appealing to von Neumann–Morgenstern lotteries. The paper discusses the notion of “idealized agent” that underlies Savage’s approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.",https://doi.org/10.1007/s11229-017-1594-6,2018,journalArticle,"Gaifman, Haim; Liu, Yang",Synthese When systems fail,,https://linkinghub.elsevier.com/retrieve/pii/S0090261601000250,2001,journalArticle,"Roberts, Karlene H; Bea, Robert G",Organizational Dynamics Preferences Implicit in the State of the World,"Reinforcement learning (RL) agents optimize only the features specified in a reward function and are indifferent to anything left out inadvertently. This means that we must not only specify what to do, but also the much larger space of what not to do. It is easy to forget these preferences, since these preferences are already satisfied in our environment. This motivates our key insight: when a robot is deployed in an environment that humans act in, the state of the environment is already optimized for what humans want. We can therefore use this implicit preference information from the state to fill in the blanks. We develop an algorithm based on Maximum Causal Entropy IRL and use it to evaluate the idea in a suite of proof-of-concept environments designed to show its properties. We find that information from the initial state can be used to infer both side effects that should be avoided as well as preferences for how the environment should be organized. Our code can be found at https://github.com/HumanCompatibleAI/rlsp.",http://arxiv.org/abs/1902.04198,2019,manuscript,"Shah, Rohin; Krasheninnikov, Dmitrii; Alexander, Jordan; Abbeel, Pieter; Dragan, Anca", Objective Value Is Always Newcombizable,"This paper argues that evidential decision theory is incompatible with options having objective values. If options have objective values, then it should always be rationally permissible for an agent to choose an option if they are certain that the option uniquely maximizes objective value. 
But, as we show, if options have objective values and evidential decision theory is true, then it is not always rationally permissible for an agent to choose an option if they are certain that the option uniquely maximizes objective value.",https://doi.org/10.1093/mind/fzz070,2019,journalArticle,"Ahmed, Arif; Spencer, Jack",Mind Concerning The Geopolitical Implications of a Post-Oil Technological Stack,Declining prices for solar energy capture and storage technologies indicate the end of international petrochemical trade value within the next decade. This will usher in a brief period where petrochemicals are economically worthless while still strategically valuable for military purposes. This will lead to controlling interests in states whose revenues are dependant on petrochemical sales to be incentivized to cause regional or global conflict to maintain their sovereignty and quality of life. An extreme possible scenario is the use of nuclear blackmail to extract resources from governments robust to the transition away from petrochemical-based energy production in order to secure funding for states weak to the transition.,,,manuscript,"Hidysmith, J Bryce", Task-Embedded Control Networks for Few-Shot Imitation Learning,"Much like humans, robots should have the ability to leverage knowledge from previously learned tasks in order to learn new tasks quickly in new and unfamiliar environments. Despite this, most robot learning approaches have focused on learning a single task, from scratch, with a limited notion of generalisation, and no way of leveraging the knowledge to learn other tasks more efficiently. One possible solution is meta-learning, but many of the related approaches are limited in their ability to scale to a large number of tasks and to learn further tasks without forgetting previously learned ones. With this in mind, we introduce Task-Embedded Control Networks, which employ ideas from metric learning in order to create a task embedding that can be used by a robot to learn new tasks from one or more demonstrations. In the area of visually-guided manipulation, we present simulation results in which we surpass the performance of a state-of-the-art method when using only visual information from each demonstration. Additionally, we demonstrate that our approach can also be used in conjunction with domain randomisation to train our few-shot learning ability in simulation and then deploy in the real world without any additional training. Once deployed, the robot can learn new tasks from a single real-world demonstration.",https://arxiv.org/abs/1810.03237v1,2018,conferencePaper,"James, Stephen; Bloesch, Michael; Davison, Andrew J.", AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty,"Modern deep neural networks can achieve high accuracy when the training distribution and test distribution are identically distributed, but this assumption is frequently violated in practice. When the train and test distributions are mismatched, accuracy can plummet. Currently there are few techniques that improve robustness to unforeseen data shifts encountered during deployment. In this work, we propose a technique to improve the robustness and uncertainty estimates of image classifiers. We propose AugMix, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions. 
AugMix significantly improves robustness and uncertainty measures on challenging image classification benchmarks, closing the gap between previous methods and the best possible performance in some cases by more than half.",http://arxiv.org/abs/1912.02781,2019,conferencePaper,"Hendrycks, Dan; Mu, Norman; Cubuk, Ekin D.; Zoph, Barret; Gilmer, Justin; Lakshminarayanan, Balaji","arXiv:1912.02781 [cs, stat]" Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning,,https://linkinghub.elsevier.com/retrieve/pii/S0004370299000521,1999,journalArticle,"Sutton, Richard S.; Precup, Doina; Singh, Satinder",Artificial Intelligence Guided search for task and motion plans using learned heuristics,"Tasks in mobile manipulation planning often require thousands of individual motions to complete. Such tasks require reasoning about complex goals as well as the feasibility of movements in configuration space. In discrete representations, planning complexity is exponential in the length of the plan. In mobile manipulation, parameters for an action often draw from a continuous space, so we must also cope with an infinite branching factor. Task and motion planning (TAMP) methods integrate a logical search over high-level actions with geometric reasoning to address this challenge. We present an algorithm that searches the space of possible task and motion plans, and uses statistical machine learning to guide the search process. Our contributions are as follows: 1) we present a complete algorithm for TAMP; 2) we present a randomized local search algorithm for TAMP that is easily formulated as a Markov decision process (MDP); 3) we apply reinforcement learning (RL) to learn a policy for this MDP; 4) we learn from expert demonstrations to efficiently search the space of task plans, given options that address different (potential) infeasibilities; and 5) we run experiments to evaluate the performance of our system in a variety of simulated domains. We show significant improvements in performance over prior work.",http://ieeexplore.ieee.org/document/7487165/,2016,conferencePaper,"Chitnis, Rohan; Hadfield-Menell, Dylan; Gupta, Abhishek; Srivastava, Siddharth; Groshev, Edward; Lin, Christopher; Abbeel, Pieter",2016 IEEE International Conference on Robotics and Automation (ICRA) Minimax-Regret Querying on Side Effects for Safe Optimality in Factored Markov Decision Processes,"As it achieves a goal on behalf of its human user, an autonomous agent’s actions may have side effects that change features of its environment in ways that negatively surprise its user. An agent that can be trusted to operate safely should thus only change features the user has explicitly permitted. We formalize this problem, and develop a planning algorithm that avoids potentially negative side effects given what the agent knows about (un)changeable features. Further, we formulate a provably minimax-regret querying strategy for the agent to selectively ask the user about features that it hasn’t explicitly been told about. 
We empirically show how much faster it is than a more exhaustive approach and how much better its queries are than those found by the best known heuristic.",https://www.ijcai.org/proceedings/2018/676,2018,conferencePaper,"Zhang, Shun; Durfee, Edmund H.; Singh, Satinder",Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence Logical Prior Probability,,http://link.springer.com/10.1007/978-3-642-35506-6_6,2012,bookSection,"Demski, Abram",Artificial General Intelligence Artificial Intelligence Aligned With Human Values | Q&A With Stuart Russell,Computer scientist Stuart Russell wants to ensure that our increasingly intelligent machines remain aligned with human values.,https://www.quantamagazine.org/artificial-intelligence-aligned-with-human-values-qa-with-stuart-russell-20150421/,2015,magazineArticle,"Wolchover, Natalie; Russell, Stuart",Quanta Magazine AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk,"This essay argues that a new subfield of AI governance should be explored that examines the policy-making process and its implications for AI governance. A growing number of researchers have begun working on the question of how to mitigate the catastrophic risks of transformative artificial intelligence, including what policies states should adopt. However, this essay identifies a preceding, meta-level problem of how the space of possible policies is affected by the politics and administrative mechanisms of how those policies are created and implemented. This creates a new set of key considerations for the field of AI governance and should influence the action of future policymakers. This essay examines some of the theories of the policymaking process, how they compare to current work in AI governance, and their implications for the field at large and ends by identifying areas of future research.",https://www.mdpi.com/2504-2289/3/2/26,2019,journalArticle,"Perry, Brandon; Uuk, Risto",Big Data and Cognitive Computing Biased error search as a risk of modelling in insurance,,,2013,bookSection,"Beckstead, Nick; Armstrong, Stuart; Sandberg, Anders",Systemic Risk of Modelling in Insurance Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines,"Policy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates. The high variance problem is particularly exasperated in problems with long horizons or high-dimensional action spaces. To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP. We demonstrate and quantify the benefit of the action-dependent baseline through both theoretical analysis as well as numerical results, including an analysis of the suboptimality of the optimal state-dependent baseline. The result is a computationally efficient policy gradient algorithm, which scales to high-dimensional control problems, as demonstrated by a synthetic 2000-dimensional target matching task. Our experimental results indicate that action-dependent baselines allow for faster learning on standard reinforcement learning benchmarks and high-dimensional hand manipulation and synthetic tasks. 
Finally, we show that the general idea of including additional information in baselines for improved variance reduction can be extended to partially observed and multi-agent tasks.",http://arxiv.org/abs/1803.07246,2018,conferencePaper,"Wu, Cathy; Rajeswaran, Aravind; Duan, Yan; Kumar, Vikash; Bayen, Alexandre M.; Kakade, Sham; Mordatch, Igor; Abbeel, Pieter","arXiv:1803.07246 [cs, stat]" "An Efficient, Generalized Bellman Update For Cooperative Inverse Reinforcement Learning","Our goal is for AI systems to correctly identify and act according to their human user’s objectives. Cooperative Inverse Reinforcement Learning (CIRL) formalizes this value alignment problem as a two-player game between a human and robot, in which only the human knows the parameters of the reward function: the robot needs to learn them as the interaction unfolds. Previous work showed that CIRL can be solved as a POMDP, but with an action space size exponential in the size of the reward parameter space. In this work, we exploit a specific property of CIRL—the human is a full information agent—to derive an optimality-preserving modification to the standard Bellman update; this reduces the complexity of the problem by an exponential factor and allows us to relax CIRL’s assumption of human rationality. We apply this update to a variety of POMDP solvers and find that it enables us to scale CIRL to non-trivial problems, with larger reward parameter spaces, and larger action spaces for both robot and human. In solutions to these larger problems, the human exhibits pedagogic (teaching) behavior, while the robot interprets it as such and attains higher value for the human.",http://arxiv.org/abs/1806.03820,2018,conferencePaper,"Malik, Dhruv; Palaniappan, Malayandi; Fisac, Jaime F.; Hadfield-Menell, Dylan; Russell, Stuart; Dragan, Anca D.",Proceedings of the 35th International Conference on Machine Learning Reward Learning from Narrated Demonstrations,"Humans effortlessly ""program"" one another by communicating goals and desires in natural language. In contrast, humans program robotic behaviours by indicating desired object locations and poses to be achieved, by providing RGB images of goal configurations, or supplying a demonstration to be imitated. None of these methods generalize across environment variations, and they convey the goal in awkward technical terms. This work proposes joint learning of natural language grounding and instructable behavioural policies reinforced by perceptual detectors of natural language expressions, grounded to the sensory inputs of the robotic agent. Our supervision is narrated visual demonstrations(NVD), which are visual demonstrations paired with verbal narration (as opposed to being silent). We introduce a dataset of NVD where teachers perform activities while describing them in detail. 
We map the teachers' descriptions to perceptual reward detectors, and use them to train corresponding behavioural policies in simulation. We empirically show that our instructable agents (i) learn visual reward detectors using a small number of examples by exploiting hard negative mined configurations from demonstration dynamics, (ii) develop pick-and-place policies using learned visual reward detectors, (iii) benefit from object-factorized state representations that mimic the syntactic structure of natural language goal expressions, and (iv) can execute behaviours that involve novel objects in novel locations at test time, instructed by natural language.",http://arxiv.org/abs/1804.10692,2018,conferencePaper,"Tung, Hsiao-Yu Fish; Harley, Adam W.; Huang, Liang-Kang; Fragkiadaki, Katerina",Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Human-AI Interaction,"THE IMPORTANCE OF FEEDBACK Consider trying to program a self-driving car to drive from San Francisco to Los Angeles -- with no sensors that allow it to gather information as it is driving. This is possible in principle. If you can predict the exact weather conditions, the exact movement of all of the other cars on the road, the exact amount of friction along every part of the road surface, the exact impact of (the equivalents of) pressing the gas or turning the steering wheel, and so on, then you could compute ahead of time how exactly to control the car such that it gets from SF to LA. Nevertheless, it seems unlikely that we will ever be able to accomplish such a feat, even with powerful AI systems. No, in practice there is going to be some uncertainty about how the world is going to evolve; such that any plan computed ahead of time will have some errors that will compound over the course of the plan. The solution is to use sensors to gather information while executing the plan, so that we can notice any errors or deviations from the plan, and take corrective action. It is much easier to build a controller that keeps you pointed in the general direction, than to build a plan that will get you there perfectly without any adaptation. Control theory studies these sorts of systems, and you can see the general power of feedback controllers in the theorems that can be proven. Especially for motion tasks, you can build feedback controllers that are guaranteed to safely achieve the goal, even in the presence of adversarial environmental forces (that are bounded in size, so you can’t have arbitrarily strong wind). In the presence of an adversary, in most environments it becomes impossible even in principle to make such a guarantee if you do not have any sensors or feedback and must compute a plan in advance. Typically, for every such plan, there is some environmental force that would cause it to fail. THE CONTROL THEORY PERSPECTIVE ON AI ALIGNMENT With ambitious value le",https://www.alignmentforum.org/posts/4783ufKpx8xvLMPc6/human-ai-interaction,2019,blogPost,"Shah, Rohin",AI Alignment Forum Conversation with Tom Griffiths,"Participants Professor Tom Griffiths, Director of the Computational Cognitive Science Lab and the Institute of Cognitive and Brain Sciences at the University of California, Berkeley. Finan Adamson, AI Impacts. Note: These notes were compiled by AI Impacts and give an overview of the major points made by Professor Tom Griffiths. 
They are available...",https://aiimpacts.org/conversation-with-tom-griffiths/,2016,blogPost,"Griffiths, Tom; Adamson, Finan",AI Impacts Reducing malicious use of synthetic media research: Considerations and potential release practices for machine learning,"The aim of this paper is to facilitate nuanced discussion around research norms and practices to mitigate the harmful impacts of advances in machine learning (ML). We focus particularly on the use of ML to create ""synthetic media"" (e.g. to generate or manipulate audio, video, images, and text), and the question of what publication and release processes around such research might look like, though many of the considerations discussed will apply to ML research more broadly. We are not arguing for any specific approach on when or how research should be distributed, but instead try to lay out some useful tools, analogies, and options for thinking about these issues. We begin with some background on the idea that ML research might be misused in harmful ways, and why advances in synthetic media, in particular, are raising concerns. We then outline in more detail some of the different paths to harm from ML research, before reviewing research risk mitigation strategies in other fields and identifying components that seem most worth emulating in the ML and synthetic media research communities. Next, we outline some important dimensions of disagreement on these issues which risk polarizing conversations. Finally, we conclude with recommendations, suggesting that the machine learning community might benefit from: working with subject matter experts to increase understanding of the risk landscape and possible mitigation strategies; building a community and norms around understanding the impacts of ML research, e.g. through regular workshops at major conferences; and establishing institutions and systems to support release practices that would otherwise be onerous and error-prone.",http://arxiv.org/abs/1907.11274,2019,manuscript,"Ovadya, Aviv; Whittlestone, Jess", Singularity Blog Insights,"SummaryThis chapter presents four articles from the blogosphere. In the first, Eliezer Yudkowsky provides the three most commonly used meanings of the Singularity: Accelerating Change where the Singularity occurs because of exponential improvements in technology, the Event Horizon where technology will improve to the point where a human-level intelligence can’t predict what will happen, and the Intelligence Explosion in which a self-improving artificial intelligence quickly brings about a singularity. In the next essay, Stuart Armstrong, one of the editors of this book, analyzes many past predictions of AI development. In the third article Scott Siskind discusses reasons why we shouldn’t wait to research AI safety. In the final entry, Scott Aaronson discusses why he does not think that a singularity is near.",https://doi.org/10.1007/978-3-662-54033-6_14,2017,bookSection,"Miller, James D.",The Technological Singularity: Managing the Journey Meta-Inverse Reinforcement Learning with Probabilistic Context Variables,"Providing a suitable reward function to reinforcement learning can be difficult in many real world applications. While inverse reinforcement learning (IRL) holds promise for automatically learning reward functions from demonstrations, several major challenges remain. First, existing IRL methods learn reward functions from scratch, requiring large numbers of demonstrations to correctly infer the reward for each task the agent may need to perform. 
Second, existing methods typically assume homogeneous demonstrations for a single behavior or task, while in practice, it might be easier to collect datasets of heterogeneous but related behaviors. To this end, we propose a deep latent variable model that is capable of learning rewards from demonstrations of distinct but related tasks in an unsupervised way. Critically, our model can infer rewards for new, structurally-similar tasks from a single demonstration. Our experiments on multiple continuous control tasks demonstrate the effectiveness of our approach compared to state-of-the-art imitation and inverse reinforcement learning methods.",https://arxiv.org/abs/1909.09314v2,2019,conferencePaper,"Yu, Lantao; Yu, Tianhe; Finn, Chelsea; Ermon, Stefano",Advances in Neural Information Processing Systems 32 (NeurIPS 2019) "Agents, Bodies, Constraints, Dynamics, and Evolution","The theme of this article is the dynamics of evolution of agents. That theme is applied to the evolution of constraint satisfaction, of agents themselves, of our models of agents, of artificial intelligence and, finally, of the Association for the Advancement of Artificial Intelligence (AAAI). The overall thesis is that constraint satisfaction is central to proactive and responsive intelligent behavior.",https://www.aaai.org/ojs/index.php/aimagazine/article/view/2174,2009,journalArticle,"Mackworth, Alan K.",AI Magazine Online Bayesian Goal Inference for Boundedly-Rational Planning Agents,"People routinely infer the goals of others by observing their actions over time. Remarkably, we can do so even when those actions lead to failure, enabling us to assist others when we detect that they might not achieve their goals. How might we endow machines with similar capabilities? Here we present an architecture capable of inferring an agent's goals online from both optimal and non-optimal sequences of actions. Our architecture models agents as boundedly-rational planners that interleave search with execution by replanning, thereby accounting for sub-optimal behavior. These models are specified as probabilistic programs, allowing us to represent and perform efficient Bayesian inference over an agent's goals and internal planning processes. To perform such inference, we develop Sequential Inverse Plan Search (SIPS), a sequential Monte Carlo algorithm that exploits the online replanning assumption of these models, limiting computation by incrementally extending inferred plans as new actions are observed. We present experiments showing that this modeling and inference architecture outperforms Bayesian inverse reinforcement learning baselines, accurately inferring goals from both optimal and non-optimal trajectories involving failure and back-tracking, while generalizing across domains with compositional structure and sparse rewards.",http://arxiv.org/abs/2006.07532,2020,conferencePaper,"Zhi-Xuan, Tan; Mann, Jordyn L.; Silver, Tom; Tenenbaum, Joshua B.; Mansinghka, Vikash K.",arXiv:2006.07532 [cs] Probabilistically Safe Robot Planning with Confidence-Based Human Predictions,"In order to safely operate around humans, robots can employ predictive models of human motion. Unfortunately, these models cannot capture the full complexity of human behavior and necessarily introduce simplifying assumptions. As a result, predictions may degrade whenever the observed human behavior departs from the assumed structure, which can have negative implications for safety. 
In this paper, we observe that how ""rational"" human actions appear under a particular model can be viewed as an indicator of that model's ability to describe the human's current motion. By reasoning about this model confidence in a real-time Bayesian framework, we show that the robot can very quickly modulate its predictions to become more uncertain when the model performs poorly. Building on recent work in provably-safe trajectory planning, we leverage these confidence-aware human motion predictions to generate assured autonomous robot motion. Our new analysis combines worst-case tracking error guarantees for the physical robot with probabilistic time-varying human predictions, yielding a quantitative, probabilistic safety certificate. We demonstrate our approach with a quadcopter navigating around a human.",https://arxiv.org/abs/1806.00109v1,2018,conferencePaper,"Fisac, Jaime F.; Bajcsy, Andrea; Herbert, Sylvia L.; Fridovich-Keil, David; Wang, Steven; Tomlin, Claire J.; Dragan, Anca D.",arXiv:1806.00109 [cs] Multiparty Dynamics and Failure Modes for Machine Learning and Artificial Intelligence,"An important challenge for safety in machine learning and artificial intelligence systems is a set of related failures involving specification gaming, reward hacking, fragility to distributional shifts, and Goodhart’s or Campbell’s law. This paper presents additional failure modes for interactions within multi-agent systems that are closely related. These multi-agent failure modes are more complex, more problematic, and less well understood than the single-agent case, and are also already occurring, largely unnoticed. After motivating the discussion with examples from poker-playing artificial intelligence (AI), the paper explains why these failure modes are in some senses unavoidable. Following this, the paper categorizes failure modes, provides definitions, and cites examples for each of the modes: accidental steering, coordination failures, adversarial misalignment, input spoofing and filtering, and goal co-option or direct hacking. The paper then discusses how extant literature on multi-agent AI fails to address these failure modes, and identifies work which may be useful for the mitigation of these failure modes.",https://www.mdpi.com/2504-2289/3/2/21,2019,journalArticle,"Manheim, David",Big Data and Cognitive Computing Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey,"Reinforcement learning (RL) is a popular paradigm for addressing sequential decision tasks in which the agent has only limited environmental feedback. Despite many advances over the past three decades, learning in many domains still requires a large amount of interaction with the environment, which can be prohibitively expensive in realistic scenarios. To address this problem, transfer learning has been applied to reinforcement learning such that experience gained in one task can be leveraged when starting to learn the next, harder task. More recently, several lines of research have explored how tasks, or data samples themselves, can be sequenced into a curriculum for the purpose of learning a problem that may otherwise be too difficult to learn from scratch. In this article, we present a framework for curriculum learning (CL) in reinforcement learning, and use it to survey and classify existing CL methods in terms of their assumptions, capabilities, and goals. 
Finally, we use our framework to find open problems and suggest directions for future RL curriculum learning research.",http://arxiv.org/abs/2003.04960,2020,manuscript,"Narvekar, Sanmit; Peng, Bei; Leonetti, Matteo; Sinapov, Jivko; Taylor, Matthew E.; Stone, Peter", Solving Imperfect-Information Games via Discounted Regret Minimization,"Counterfactual regret minimization (CFR) is a family of iterative algorithms that are the most popular and, in practice, fastest approach to approximately solving large imperfect-information games. In this paper we introduce novel CFR variants that 1) discount regrets from earlier iterations in various ways (in some cases differently for positive and negative regrets), 2) reweight iterations in various ways to obtain the output strategies, 3) use a non-standard regret minimizer and/or 4) leverage ""optimistic regret matching"". They lead to dramatically improved performance in many settings. For one, we introduce a variant that outperforms CFR+, the prior state-of-the-art algorithm, in every game tested, including large-scale realistic settings. CFR+ is a formidable benchmark: no other algorithm has been able to outperform it. Finally, we show that, unlike CFR+, many of the important new variants are compatible with modern imperfect-information-game pruning techniques and one is also compatible with sampling in the game tree.",http://arxiv.org/abs/1809.04040,2019,conferencePaper,"Brown, Noam; Sandholm, Tuomas",Proceedings of the AAAI Conference on Artificial Intelligence Mavericks and lotteries,"In 2013 the Health Research Council of New Zealand began a stream of funding titled ‘Explorer Grants’, and in 2017 changes were introduced to the funding mechanisms of the Volkswagen Foundation ‘Experiment!’ and the New Zealand Science for Technological Innovation challenge ‘Seed Projects’. All three funding streams aim at encouraging novel scientific ideas, and all now employ random selection by lottery as part of the grant selection process. The idea of funding science by lottery emerged independently in several corners of academia, including in philosophy of science. This paper reviews the conceptual and institutional landscape in which this policy proposal emerged, how different academic fields presented and supported arguments for the proposal, and how these have been reflected (or not) in actual policy. The paper presents an analytical synthesis of the arguments presented to date, notes how they support each other and shape policy recommendations in various ways, and where competing arguments highlight the need for further analysis or more data. In addition, it provides lessons for how philosophers of science can engage in shaping science policy, and in particular, highlights the importance of mixing complementary expertise: it takes a (conceptually diverse) village to raise (good) policy.",http://www.sciencedirect.com/science/article/pii/S0039368118300190,2019,journalArticle,"Avin, Shahar",Studies in History and Philosophy of Science Part A HQ-Learning,,http://journals.sagepub.com/doi/10.1177/105971239700600202,1997,journalArticle,"Wiering, Marco; Schmidhuber, Jürgen",Adaptive Behavior The AI Timelines Scam,"[epistemic status: that’s just my opinion, man. 
I have highly suggestive evidence, not deductive proof, for a belief I sincerely hold] “If you see fraud and do not say fraud, you are a …",https://unstableontology.com/2019/07/11/the-ai-timelines-scam/,2019,blogPost,"Taylor, Jessica",Unstable Ontology "Subagents, trauma and rationality","[Content note: discussion of trauma, child and sexual abuse, sexual violence, lack of self-worth, dissociation, PTSD, flashbacks, DID, personality disorders; some mildly graphic examples of abuse and trauma mentioned in text form] I have spent over two years doing emotional support for people who had survived long-term childhood trauma, and in these cases spawning agents to deal with unbearable suffering while having no escape from it is basically a standard reaction that the brain/mind takes. The relevant psychiatric diagnosis is DID (formerly MPD, multiple personality disorder). In these cases the multiple agents often manifest very clearly and distinctly. It is tempting to write it off as a special case that does not apply in the mainstream, yet I have seen more than once the progression from someone suffering from CPTSD to a full-blown DID. The last thing that happens is that the person recognizes that they ""switch"" between personalities. Often way later than when others notice it, if they know what to look for. After gaining some experience chatting with those who survived severe prolonged trauma, I started recognizing subtler signs of ""switching"" in myself and others. This switching between agents (I would not call them sub-agents, as they are not necessarily less than the ""main"", and different ""mains"" often take over during different parts of the person's life) while a normal way to operate, as far as I can tell, almost never rises to the level of conscious awareness, as the brain carefully constructs the lie of single identity for as long as it can.-- shminux As the above comment suggests, the appearance of something like distinct subagents is particularly noticeable in people with heavy trauma, DID being the most extreme example. This post will interpret the appearance of subagents as emerging from unintegrated memory networks, and argue that - as shminux suggests - the presence of these is a matter of degree. There’s a continuous progression of fragm",https://www.lesswrong.com/posts/u5RLu5F3zKTB3Qjnu/subagents-trauma-and-rationality,2019,blogPost,"Sotala, Kaj",LessWrong Scaling up Psychology via Scientific Regret Minimization: A Case Study in Moral Decision-Making,"Do large datasets provide value to psychologists? Without a systematic methodology for working with such datasets, there is a valid concern that analyses will produce noise artifacts rather than true effects. In this paper, we offer a way to enable researchers to systematically build models and identify novel phenomena in large datasets. One traditional approach is to analyze the residuals of models---the biggest errors they make in predicting the data---to discover what might be missing from those models. However, once a dataset is sufficiently large, machine learning algorithms approximate the true underlying function better than the data, suggesting instead that the predictions of these data-driven models should be used to guide model-building. We call this approach ""Scientific Regret Minimization"" (SRM) as it focuses on minimizing errors for cases that we know should have been predictable. 
We demonstrate this methodology on a subset of the Moral Machine dataset, a public collection of roughly forty million moral decisions. Using SRM, we found that incorporating a set of deontological principles that capture dimensions along which groups of agents can vary (e.g. sex and age) improves a computational model of human moral judgment. Furthermore, we were able to identify and independently validate three interesting moral phenomena: criminal dehumanization, age of responsibility, and asymmetric notions of responsibility.",https://www.pnas.org/content/117/16/8825,2019,journalArticle,"Agrawal, Mayank; Peterson, Joshua C.; Griffiths, Thomas L.",PNAS Learning Causal Trees with Latent Variables via Controlled Experimentation,,https://why19.causalai.net/papers/SSS19_Paper_Upload_198.pdf,2019,conferencePaper,"Tadepalli, Prasad; Barrie, Cameron; Russell, Stuart J.", Generalizing the Power-Seeking Theorems,"Previously: Seeking Power is Often Provably Instrumentally Convergent in MDPs. Thanks to Rohin Shah, Michael Dennis, Josh Turner, and Evan Hubinger for comments. -------------------------------------------------------------------------------- It sure seems like gaining power over the environment is instrumentally convergent (optimal for a wide range of agent goals). You can turn this into math and prove things about it. Given some distribution over agent goals, we want to be able to formally describe how optimal action tends to flow through the future. Does gaining money tend to be optimal? Avoiding shutdown? When? How do we know? Optimal Farsighted Agents Tend to Seek Power proved that, when you distribute reward fairly and evenly across states (IID), it's instrumentally convergent to gain access to lots of final states (which are absorbing, in that the agent keeps on experiencing the final state). The theorems apply when you don't discount the future (you're ""infinitely farsighted""). Most reward functions for the Pac-Man game incentivize not dying immediately, so that the agent can loop around higher-scoring configurations. Many ways of scoring Tic-Tac-Toe game states incentivize not losing immediately, in order to choose the highest-scoring final configuration. ""All states have self-loops, left hidden to reduce clutter. In AI: A Modern Approach (3e), the agent starts at 1 and receives reward for reaching 3. The optimal policy for this reward function avoids 2, and one might suspect that avoiding 2 is instrumentally convergent. However, a skeptic might provide a reward function for which navigating to 2 is optimal, and then argue that ""instrumental convergence'' is subjective and that there is no reasonable basis for concluding that 2 is generally avoided. We can do better... for any way of independently and identically distributing reward over states, 10/11 of reward functions have farsighted optimal policies which avoid 2. If we complicate the MDP with additional t
Such tasks are difficult for AI, but provide a natural stepping stone towards the goal of more complex human-like general intelligence. The extensive literature on animal cognition provides methodology and experimental paradigms for testing such abilities but, so far, these experiments have not been translated en masse into an AI-friendly setting. We present a new testbed, Animal-AI, first released as part of the Animal-AI Olympics competition at NeurIPS 2019, which is a comprehensive environment and testing paradigm for tasks inspired by animal cognition. In this paper we outline the environment, the testbed, the results of the competition, and discuss the open challenges for building and testing artificial agents capable of the kind of nonverbal common sense reasoning found in many non-human animals.",,2020,conferencePaper,"Crosby, Matthew; Beyret, Benjamin; Shanahan, Murray; Hernandez-Orallo, Jose; Cheke, Lucy; Halina, Marta",Proceedings of Machine Learning Research Expressive Robot Motion Timing,"Our goal is to enable robots to \emph{time} their motion in a way that is purposefully expressive of their internal states, making them more transparent to people. We start by investigating what types of states motion timing is capable of expressing, focusing on robot manipulation and keeping the path constant while systematically varying the timing. We find that users naturally pick up on certain properties of the robot (like confidence), of the motion (like naturalness), or of the task (like the weight of the object that the robot is carrying). We then conduct a hypothesis-driven experiment to tease out the directions and magnitudes of these effects, and use our findings to develop candidate mathematical models for how users make these inferences from the timing. We find a strong correlation between the models and real user data, suggesting that robots can leverage these models to autonomously optimize the timing of their motion to be expressive.",http://arxiv.org/abs/1802.01536,2017,conferencePaper,"Zhou, Allan; Hadfield-Menell, Dylan; Nagabandi, Anusha; Dragan, Anca D.",Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction "Superintelligence: Paths, Dangers, Strategies","The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. 
Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence.",,2014,book,"Bostrom, Nick", The Evidentialist’s Wager,"Suppose that an altruistic and morally motivated agent who is uncertain between evidential decision theory (EDT) and causal decision theory (CDT) finds herself in a situation in which the two theories give conflicting verdicts. We argue that even if she has significantly higher credence in CDT, she should nevertheless act in accordance with EDT. First, we claim that the appropriate response to normative uncertainty is to hedge one’s bets. That is, if the stakes are much higher on one theory than another, and the credences you assign to each of these theories aren’t very different, then it’s appropriate to choose the option which performs best on the high-stakes theory. Second, we show that, given the assumption of altruism, the existence of correlated decision-makers will increase the stakes for EDT but leave the stakes for CDT unaffected. Together these two claims imply that whenever there are sufficiently many correlated agents, the appropriate response is to act in accordance with EDT.",,,journalArticle,"MacAskill, William; Vallinder, Aron; Shulman, Carl; Österheld, Caspar; Treutlein, Johannes",Journal of Philosophy "A Parametric, Resource-Bounded Generalization Of Löb’s Theorem, And A Robust Cooperation Criterion For Open-Source Game Theory","This article presents two theorems: (1) a generalization of Löb’s Theorem that applies to formal proof systems operating with bounded computational resources, such as formal verification software or theorem provers, and (2) a theorem on the robust cooperation of agents that employ proofs about one another’s source code as unexploitable criteria for cooperation. The latter illustrates a capacity for outperforming classical Nash equilibria and correlated equilibria, attaining mutually cooperative program equilibrium in the Prisoner’s Dilemma while remaining unexploitable, i.e., sometimes achieving the outcome (Cooperate, Cooperate), and never receiving the outcome (Cooperate, Defect) as player 1.",https://www.cambridge.org/core/product/identifier/S0022481217000421/type/journal_article,2019,journalArticle,"Critch, Andrew",The Journal of Symbolic Logic Reinforcement Learning and Inverse Reinforcement Learning with System 1 and System 2,"Inferring a person's goal from their behavior is an important problem in applications of AI (e.g. automated assistants, recommender systems). The workhorse model for this task is the rational actor model - this amounts to assuming that people have stable reward functions, discount the future exponentially, and construct optimal plans. Under the rational actor assumption techniques such as inverse reinforcement learning (IRL) can be used to infer a person's goals from their actions. A competing model is the dual-system model. Here decisions are the result of an interplay between a fast, automatic, heuristic-based system 1 and a slower, deliberate, calculating system 2. We generalize the dual system framework to the case of Markov decision problems and show how to compute optimal plans for dual-system agents. 
We show that dual-system agents exhibit behaviors that are incompatible with rational actor assumption. We show that naive applications of rational-actor IRL to the behavior of dual-system agents can generate wrong inference about the agents' goals and suggest interventions that actually reduce the agent's overall utility. Finally, we adapt a simple IRL algorithm to correctly infer the goals of dual system decision-makers. This allows us to make interventions that help, rather than hinder, the dual-system agent's ability to reach their true goals.",https://arxiv.org/abs/1811.08549v2,2019,conferencePaper,"Peysakhovich, Alexander","Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society" Applications of Neural Networks in High Assurance Systems,,http://link.springer.com/10.1007/978-3-642-10690-3,2010,book,, Closing the Gap Between Short and Long XORs for Model Counting,"Many recent algorithms for approximate model counting are based on a reduction to combinatorial searches over random subsets of the space defined by parity or XOR constraints. Long parity constraints (involving many variables) provide strong theoretical guarantees but are computationally difficult. Short parity constraints are easier to solve but have weaker statistical properties. It is currently not known how long these parity constraints need to be. We close the gap by providing matching necessary and sufficient conditions on the required asymptotic length of the parity constraints. Further, we provide a new family of lower bounds and the first non-trivial upper bounds on the model count that are valid for arbitrarily short XORs. We empirically demonstrate the effectiveness of these bounds on model counting benchmarks and in a Satisfiability Modulo Theory (SMT) application motivated by the analysis of contingency tables in statistics.",http://arxiv.org/abs/1512.08863,2016,conferencePaper,"Zhao, Shengjia; Chaturapruek, Sorathan; Sabharwal, Ashish; Ermon, Stefano",AAAI 2016 Geoengineering tensions,"There has been much discussion of the moral, legal and prudential implications of geoengineering, and of governance structures for both the research and deployment of such technologies. However, insufficient attention has been paid to how such measures might affect geoengineering in terms of the incentive structures which underwrite scientific progress. There is a tension between the features that make science productive, and the need to govern geoengineering research, which has thus far gone underappreciated. I emphasize how geoengineering research requires governance which reaches beyond science’s traditional boundaries, and moreover requires knowledge which itself reaches beyond what we traditionally expect scientists to know about. How we govern emerging technologies should be sensitive to the incentive structures which drive science.",http://www.sciencedirect.com/science/article/pii/S0016328717301696,2018,journalArticle,"Currie, Adrian",Futures Building Ethically Bounded AI,"The more AI agents are deployed in scenarios with possibly unexpected situations, the more they need to be flexible, adaptive, and creative in achieving the goal we have given them. Thus, a certain level of freedom to choose the best path to the goal is inherent in making AI robust and flexible enough. At the same time, however, the pervasive deployment of AI in our life, whether AI is autonomous or collaborating with humans, raises several ethical challenges. 
AI agents should be aware and follow appropriate ethical principles and should thus exhibit properties such as fairness or other virtues. These ethical principles should define the boundaries of AI's freedom and creativity. However, it is still a challenge to understand how to specify and reason with ethical boundaries in AI agents and how to combine them appropriately with subjective preferences and goal specifications. Some initial attempts employ either a data-driven example-based approach for both, or a symbolic rule-based approach for both. We envision a modular approach where any AI technique can be used for any of these essential ingredients in decision making or decision support systems, paired with a contextual approach to define their combination and relative weight. In a world where neither humans nor AI systems work in isolation, but are tightly interconnected, e.g., the Internet of Things, we also envision a compositional approach to building ethically bounded AI, where the ethical properties of each component can be fruitfully exploited to derive those of the overall system. In this paper we define and motivate the notion of ethically-bounded AI, we describe two concrete examples, and we outline some outstanding challenges.",http://arxiv.org/abs/1812.03980,2018,conferencePaper,"Rossi, Francesca; Mattei, Nicholas",Proceedings of the AAAI Conference on Artificial Intelligence Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning,"We identify two issues with the family of algorithms based on the Adversarial Imitation Learning framework. The first problem is implicit bias present in the reward functions used in these algorithms. While these biases might work well for some environments, they can also lead to sub-optimal behavior in others. Secondly, even though these algorithms can learn from few expert demonstrations, they require a prohibitively large number of interactions with the environment in order to imitate the expert for many real-world applications. In order to address these issues, we propose a new algorithm called Discriminator-Actor-Critic that uses off-policy Reinforcement Learning to reduce policy-environment interaction sample complexity by an average factor of 10. Furthermore, since our reward function is designed to be unbiased, we can apply our algorithm to many problems without making any task-specific adjustments.",http://arxiv.org/abs/1809.02925,2018,manuscript,"Kostrikov, Ilya; Agrawal, Kumar Krishna; Dwibedi, Debidatta; Levine, Sergey; Tompson, Jonathan", Unsupervised Question Decomposition for Question Answering,"We aim to improve question answering (QA) by decomposing hard questions into easier sub-questions that existing QA systems can answer. Since collecting labeled decompositions is cumbersome, we propose an unsupervised approach to produce sub-questions. Specifically, by leveraging >10M questions from Common Crawl, we learn to map from the distribution of multi-hop questions to the distribution of single-hop sub-questions. We answer sub-questions with an off-the-shelf QA model and incorporate the resulting answers in a downstream, multi-hop QA system. On a popular multi-hop QA dataset, HotpotQA, we show large improvements over a strong baseline, especially on adversarial and out-of-domain questions. 
Our method is generally applicable and automatically learns to decompose questions of different classes, while matching the performance of decomposition methods that rely heavily on hand-engineering and annotation.",http://arxiv.org/abs/2002.09758,2020,conferencePaper,"Perez, Ethan; Lewis, Patrick; Yih, Wen-tau; Cho, Kyunghyun; Kiela, Douwe",Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing Stable self-improvement as an AI safety problem,“Stable self-improvement” seems to be a primary focus of MIRI’s work. I am not yet convinced that this is a key AI safety problem.,https://ai-alignment.com/stable-self-improvement-as-an-ai-safety-problem-46e2a44e73e,2015,blogPost,"Christiano, Paul",AI Alignment (Medium) Cognitive biases potentially affecting judgement of global risks,"All else being equal, not many people would prefer to destroy the world. Even faceless corporations, meddling governments, reckless scientists, and other agents of doom, require a world in which to achieve their goals of profit, order, tenure, or other villainies. If our extinction proceeds slowly enough to allow a moment of horrified realization, the doers of the deed will likely be quite taken aback on realizing that they have actually destroyed the world. Therefore I suggest that if the Earth is destroyed, it will probably be by mistake. The systematic experimental study of reproducible errors of human reasoning, and what these errors reveal about underlying mental processes, is known as the heuristics and biases programme in cognitive psychology. This programme has made discoveries highly relevant to assessors of global catastrophic risks. Suppose you are worried about the risk of Substance P, an explosive of planet-wrecking potency which will detonate if exposed to a strong radio signal. Luckily there is a famous expert who discovered Substance P, spent the last thirty years working with it, and knows it better than anyone else in the world. You call up the expert and ask how strong the radio signal has to be. The expert replies that the critical threshold is probably around 4000 terawatts. ‘Probably?’ you query. ‘Can you give me a 98% confidence interval?’ ‘Sure’, replies the expert. ‘I’m 99% confident that the critical threshold is above 500 terawatts, and 99% confident that the threshold is below 80,000 terawatts.’ ‘What about 10 terawatts?’ you ask. ‘Impossible’, replies the expert. The above methodology for expert elicitation looks perfectly reasonable, the sort of thing any competent practitioner might do when faced with such a problem. Indeed, this methodology was used in the Reactor Safety Study (Rasmussen, 1975), now widely regarded as the first major attempt at probabilistic risk assessment. But the student of heuristics and biases will recognize at least two major mistakes in the method – not logical flaws, but conditions extremely susceptible to human error. I shall return to this example in the discussion of anchoring and adjustments biases (Section 5.7).",https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198570509.001.0001/isbn-9780198570509-book-part-9,2008,bookSection,"Yudkowsky, Eliezer",Global Catastrophic Risks Generalization through Simulation: Integrating Simulated and Real Data into Deep Reinforcement Learning for Vision-Based Autonomous Flight,"Deep reinforcement learning provides a promising approach for vision-based control of real-world robots. 
However, the generalization of such models depends critically on the quantity and variety of data available for training. This data can be difficult to obtain for some types of robotic systems, such as fragile, small-scale quadrotors. Simulated rendering and physics can provide for much larger datasets, but such data is inherently of lower quality: many of the phenomena that make the real-world autonomous flight problem challenging, such as complex physics and air currents, are modeled poorly or not at all, and the systematic differences between simulation and the real world are typically impossible to eliminate. In this work, we investigate how data from both simulation and the real world can be combined in a hybrid deep reinforcement learning algorithm. Our method uses real-world data to learn about the dynamics of the system, and simulated data to learn a generalizable perception system that can enable the robot to avoid collisions using only a monocular camera. We demonstrate our approach on a real-world nano aerial vehicle collision avoidance task, showing that with only an hour of real-world data, the quadrotor can avoid collisions in new environments with various lighting conditions and geometry. Code, instructions for building the aerial vehicles, and videos of the experiments can be found at github.com/gkahn13/GtS",http://arxiv.org/abs/1902.03701,2019,conferencePaper,"Kang, Katie; Belkhale, Suneel; Kahn, Gregory; Abbeel, Pieter; Levine, Sergey","arXiv:1902.03701 [cs, stat]" Model Reconstruction from Model Explanations,"We show through theory and experiment that gradient-based explanations of a model quickly reveal the model itself. Our results speak to a tension between the desire to keep a proprietary model secret and the ability to offer model explanations. On the theoretical side, we give an algorithm that provably learns a two-layer ReLU network in a setting where the algorithm may query the gradient of the model with respect to chosen inputs. The number of queries is independent of the dimension and nearly optimal in its dependence on the model size. Of interest not only from a learning-theoretic perspective, this result highlights the power of gradients rather than labels as a learning primitive. Complementing our theory, we give effective heuristics for reconstructing models from gradient explanations that are orders of magnitude more query-efficient than reconstruction attacks relying on prediction interfaces.",http://arxiv.org/abs/1807.05185,2018,conferencePaper,"Milli, Smitha; Schmidt, Ludwig; Dragan, Anca D.; Hardt, Moritz","FAT* '19: Proceedings of the Conference on Fairness, Accountability, and Transparency" Conceptual-Linguistic Superintelligence,"We argue that artificial intelligence capable of sustaining an uncontrolled intelligence explosion must have a conceptual-linguistic faculty with substantial functional similarity to the human faculty. 
We then argue for three subsidiary claims: first, that detecting the presence of such a faculty will be an important indicator of imminent superintelligence; second, that such a superintelligence will, in creating further increases in intelligence, both face and consider the same sorts of existential risks that humans face today; third, that such a superintelligence is likely to assess and question its own values, purposes, and drives.",http://www.informatica.si/index.php/informatica/article/view/1875,2017,journalArticle,"Jilk, David J.",Informatica The Ethics of Global Catastrophic Risk from Dual-Use Bioengineering,,"http://www.dl.begellhouse.com/journals/6ed509641f7324e6,709fef245eef4861,06d520d747a5c0d1.html",2013,journalArticle,"Baum, Seth D.; Wilson, Grant S.","Ethics in Biology, Engineering and Medicine" TanksWorld: A Multi-Agent Environment for AI Safety Research,"The ability to create artificial intelligence (AI) capable of performing complex tasks is rapidly outpacing our ability to ensure the safe and assured operation of AI-enabled systems. Fortunately, a landscape of AI safety research is emerging in response to this asymmetry and yet there is a long way to go. In particular, recent simulation environments created to illustrate AI safety risks are relatively simple or narrowly-focused on a particular issue. Hence, we see a critical need for AI safety research environments that abstract essential aspects of complex real-world applications. In this work, we introduce the AI safety TanksWorld as an environment for AI safety research with three essential aspects: competing performance objectives, human-machine teaming, and multi-agent competition. The AI safety TanksWorld aims to accelerate the advancement of safe multi-agent decision-making algorithms by providing a software framework to support competitions with both system performance and safety objectives. As a work in progress, this paper introduces our research objectives and learning environment with reference code and baseline performance metrics to follow in a future work.",http://arxiv.org/abs/2002.11174,2020,manuscript,"Rivera, Corban G.; Lyons, Olivia; Summitt, Arielle; Fatima, Ayman; Pak, Ji; Shao, William; Chalmers, Robert; Englander, Aryeh; Staley, Edward W.; Wang, I.-Jeng; Llorens, Ashley J.", Are GANs Created Equal? A Large-Scale Study,"Generative adversarial networks (GAN) are a powerful subclass of generative models. Despite a very rich research activity leading to numerous interesting GAN algorithms, it is still very hard to assess which algorithm(s) perform better than others. We conduct a neutral, multi-faceted large-scale empirical study on state-of-the-art models and evaluation measures. We find that most models can reach similar scores with enough hyperparameter optimization and random restarts. This suggests that improvements can arise from a higher computational budget and tuning more than fundamental algorithmic changes. To overcome some limitations of the current metrics, we also propose several data sets on which precision and recall can be computed. Our experimental results suggest that future GAN research should be based on more systematic and objective evaluation procedures.
Finally, we did not find evidence that any of the tested algorithms consistently outperforms the non-saturating GAN introduced in Goodfellow et al. (2014).",http://arxiv.org/abs/1711.10337,2018,conferencePaper,"Lucic, Mario; Kurach, Karol; Michalski, Marcin; Gelly, Sylvain; Bousquet, Olivier",Advances in Neural Information Processing Systems 31 (NeurIPS 2018) Hanson AI Expert Survey,"In a small informal survey running since 2012, AI researchers generally estimated that their subfields have moved less than ten percent of the way to human-level intelligence. Only one (in the slowest moving subfield) observed acceleration. This suggests on a simple extrapolation that reaching human-level capability across subfields will take over a century (in contrast with many other...",https://aiimpacts.org/hanson-ai-expert-survey/,2014,blogPost,AI Impacts,AI Impacts Information gathering actions over human internal state,"Much of estimation of human internal state (goal, intentions, activities, preferences, etc.) is passive: an algorithm observes human actions and updates its estimate of human state. In this work, we embrace the fact that robot actions affect what humans do, and leverage it to improve state estimation. We enable robots to do active information gathering, by planning actions that probe the user in order to clarify their internal state. For instance, an autonomous car will plan to nudge into a human driver’s lane to test their driving style. Results in simulation and in a user study suggest that active information gathering significantly outperforms passive state estimation.",http://ieeexplore.ieee.org/document/7759036/,2016,conferencePaper,"Sadigh, Dorsa; Sastry, S. Shankar; Seshia, Sanjit A.; Dragan, Anca",2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) The date of AI Takeover is not the day the AI takes over,"Instead, it’s the point of no return—the day we AI risk reducers lose the ability to significantly reduce AI risk. This might happen years before classic milestones like “World GWP doubles in four years” and “Superhuman AGI is deployed."" The rest of this post explains, justifies, and expands on this obvious but underappreciated idea. (Toby Ord appreciates it; see quote below). I found myself explaining it repeatedly, so I wrote this post as a reference. AI timelines often come up in career planning conversations. Insofar as AI timelines are short, career plans which take a long time to pay off are a bad idea, because by the time you reap the benefits of the plans it may already be too late. It may already be too late because AI takeover may already have happened. But this isn’t quite right, at least not when “AI takeover” is interpreted in the obvious way, as meaning that an AI or group of AIs is firmly in political control of the world, ordering humans about, monopolizing violence, etc. Even if AIs don’t yet have that sort of political control, it may already be too late. Here are three examples: 1. Superhuman agent AGI is still in its box but nobody knows how to align it and other actors are going to make their own version soon, and there isn’t enough time to convince them of the risks. They will make and deploy agent AGI, it will be unaligned, and we have no way to oppose it except with our own unaligned AGI. Even if it takes years to actually conquer the world, it’s already game over. 2.
Various weak and narrow AIs are embedded in the economy and beginning to drive a slow takeoff; capabilities are improving much faster than safety/alignment techniques and due to all the money being made there’s too much political opposition to slowing down capability growth or keeping AIs out of positions of power. We wish we had done more safety/alignment research earlier, or built a political movement earlier when opposit",https://www.lesswrong.com/posts/JPan54R525D68NoEt/the-date-of-ai-takeover-is-not-the-day-the-ai-takes-over,2020,blogPost,"Kokotajlo, Daniel",LessWrong Measuring the Algorithmic Efficiency of Neural Networks,"Three factors drive the advance of AI: algorithmic innovation, data, and the amount of compute available for training. Algorithmic progress has traditionally been more difficult to quantify than compute and data. In this work, we argue that algorithmic progress has an aspect that is both straightforward to measure and interesting: reductions over time in the compute needed to reach past capabilities. We show that the number of floating-point operations required to train a classifier to AlexNet-level performance on ImageNet has decreased by a factor of 44x between 2012 and 2019. This corresponds to algorithmic efficiency doubling every 16 months over a period of 7 years. By contrast, Moore's Law would only have yielded an 11x cost improvement. We observe that hardware and algorithmic efficiency gains multiply and can be on a similar scale over meaningful horizons, which suggests that a good model of AI progress should integrate measures from both.",http://arxiv.org/abs/2005.04305,2020,manuscript,"Hernandez, Danny; Brown, Tom B.", Advisor games,A candidate operationalization of “understandable” reasoning.,https://ai-alignment.com/advisor-games-b33382fef68c,2015,blogPost,"Christiano, Paul",AI Alignment (Medium) Proximal Policy Optimization Algorithms,"We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a ""surrogate"" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.",http://arxiv.org/abs/1707.06347,2017,manuscript,"Schulman, John; Wolski, Filip; Dhariwal, Prafulla; Radford, Alec; Klimov, Oleg", Unhappiness and Unemployment,,https://academic.oup.com/ej/article/104/424/648-659/5158769,1994,journalArticle,"Clark, Andrew E.; Oswald, Andrew J.",The Economic Journal (When) Is Truth-telling Favored in AI Debate?,"For some problems, humans may not be able to accurately judge the goodness of AI-proposed solutions. Irving et al. (2018) propose that in such cases, we may use a debate between two AI systems to amplify the problem-solving capabilities of a human judge. 
We introduce a mathematical framework that can model debates of this type and propose that the quality of debate designs should be measured by the accuracy of the most persuasive answer. We describe a simple instance of the debate framework called feature debate and analyze the degree to which such debates track the truth. We argue that despite being very simple, feature debates nonetheless capture many aspects of practical debates such as the incentives to confuse the judge or stall to prevent losing. We then outline how these models should be generalized to analyze a wider range of debate phenomena.",http://arxiv.org/abs/1911.04266,2019,conferencePaper,"Kovařík, Vojtěch; Carey, Ryan",Proceedings of the Workshop on Artificial Intelligence Safety A model for types and levels of human interaction with automation,,http://ieeexplore.ieee.org/document/844354/,2000,journalArticle,"Parasuraman, R.; Sheridan, T.B.; Wickens, C.D.","IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans" Written Evidence - Defence industrial policy: procurement and prosperity,,https://committees.parliament.uk/writtenevidence/4785/default/,2020,report,"Belfield, Haydn; Jayanti, Amritha; Avin, Shahar", Tough enough? Robust satisficing as a decision norm for long-term policy analysis,,https://globalprioritiesinstitute.org/wp-content/uploads/Tough-Enough_Andreas-Mogensen-and-David-Thorstad.pdf,2020,report,"Mogensen, Andreas; Thorstad, David", Digital Authoritarianism: Evolving Chinese And Russian Models,,,2019,bookSection,"Ahmed, Shazeda; Ding, Jeffrey; Hoffman, Samantha; Kerr, Jaclyn","Artificial Intelligence, China, Russia, and the Global Order: Technological, Political, Global, and Creative Perspectives" War in the city: Urban ethnic geography and combat effectiveness,"How does the urban environment, and the ethnic geography at its heart, influence the combat effectiveness of democracies conducting counterinsurgency operations? We argue that the city’s ethnic geography – whether it is ethnically homogenous, segregated, or mixed – influences combat effectiveness through two main mechanisms: intelligence and public opinion. There is no ‘ideal’ urban ethno-demographic setting where militaries are likely to be effective in combat. Rather, different ethno-geographies lead to different challenges with respect to intelligence and public opinion, which in turn affect combat effectiveness. We test our arguments through a structured focus comparison of the Troubles and the First Palestinian Intifada.",https://doi.org/10.1080/01402390.2019.1672159,2019,journalArticle,"Brathwaite, Kirstin J. H.; Konaev, Margarita",Journal of Strategic Studies Robotics and the New Cyberlaw,,http://www.ssrn.com/abstract=2402972,2015,journalArticle,"Calo, Ryan",California Law Review Scaling Laws for Neural Language Models,"We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as network width or depth have minimal effects within a wide range. Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size. These relationships allow us to determine the optimal allocation of a fixed compute budget. 
Larger models are significantly more sample-efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence.",http://arxiv.org/abs/2001.08361,2020,manuscript,"Kaplan, Jared; McCandlish, Sam; Henighan, Tom; Brown, Tom B.; Chess, Benjamin; Child, Rewon; Gray, Scott; Radford, Alec; Wu, Jeffrey; Amodei, Dario", Multi-Principal Assistance Games,"Assistance games (also known as cooperative inverse reinforcement learning games) have been proposed as a model for beneficial AI, wherein a robotic agent must act on behalf of a human principal but is initially uncertain about the human's payoff function. This paper studies multi-principal assistance games, which cover the more general case in which the robot acts on behalf of N humans who may have widely differing payoffs. Impossibility theorems in social choice theory and voting theory can be applied to such games, suggesting that strategic behavior by the human principals may complicate the robot's task in learning their payoffs. We analyze in particular a bandit apprentice game in which the humans act first to demonstrate their individual preferences for the arms and then the robot acts to maximize the sum of human payoffs. We explore the extent to which the cost of choosing suboptimal arms reduces the incentive to mislead, a form of natural mechanism design. In this context we propose a social choice method that uses shared control of a system to combine preference inference with social welfare optimization.",http://arxiv.org/abs/2007.09540,2020,manuscript,"Fickinger, Arnaud; Zhuang, Simon; Hadfield-Menell, Dylan; Russell, Stuart", Corrigibility as outside view,"You run a country. One day, you think ""I could help so many more people if I set all the rules... and I could make this happen"". As far as you can tell, this is the real reason you want to set the rules – you want to help people, and you think you'd do a good job. But historically… in this kind of situation, this reasoning can lead to terrible things. So you just don't do it, even though it feels like a good idea.[1] More generally, Even though my intuition/naïve decision-making process says I should do X, I know (through mental simulation or from history) my algorithm is usually wrong in this situation. I'm not going to do X. * ""It feels like I could complete this project within a week. But… in the past, when I've predicted ""a week"" for projects like this, reality usually gives me a longer answer. I'm not going to trust this feeling. I'm going to allocate extra time."" * As a new secretary, I think I know how my boss would want me to reply to an important e-mail. However, I'm not sure. Even though I think I know what to do, common sense recommends I clarify. * You broke up with someone. ""Even though I really miss them, in this kind of situation, missing my ex isn't a reliable indicator that I should get back together with them. I'm not going to trust this feeling, and will trust the ""sober"" version of me which broke up with them."" We are biased and corrupted. By taking the outside view on how our own algorithm performs in a given situation, we can adjust accordingly. CORRIGIBILITY The ""hard problem of corrigibility"" is to build an agent which, in an intuitive sense, reasons internally as if from the programmers' external perspective.
We think the AI is incomplete, that we might have made mistakes in building it, that we might want to correct it, and that it would be e.g. dangerous for the AI to take large actions or high-impact actions or do weird new things without asking first. We would ideally want the agent to see itself in",https://www.alignmentforum.org/posts/BMj6uMuyBidrdZkiD/corrigibility-as-outside-view,2020,blogPost,"Turner, Alex",AI Alignment Forum Deep Ensembles: A Loss Landscape Perspective,"Deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty and out-of-distribution robustness of deep learning models. While deep ensembles were theoretically motivated by the bootstrap, non-bootstrap ensembles trained with just random initialization also perform well in practice, which suggests that there could be other explanations for why deep ensembles work well. Bayesian neural networks, which learn distributions over the parameters of the network, are theoretically well-motivated by Bayesian principles, but do not perform as well as deep ensembles in practice, particularly under dataset shift. One possible explanation for this gap between theory and practice is that popular scalable approximate Bayesian methods tend to focus on a single mode, whereas deep ensembles tend to explore diverse modes in function space. We investigate this hypothesis by building on recent work on understanding the loss landscape of neural networks and adding our own exploration to measure the similarity of functions in the space of predictions. Our results show that random initializations explore entirely different modes, while functions along an optimization trajectory or sampled from the subspace thereof cluster within a single mode predictions-wise, while often deviating significantly in the weight space. We demonstrate that while low-loss connectors between modes exist, they are not connected in the space of predictions. Developing the concept of the diversity--accuracy plane, we show that the decorrelation power of random initializations is unmatched by popular subspace sampling methods.",http://arxiv.org/abs/1912.02757,2019,manuscript,"Fort, Stanislav; Hu, Huiyi; Lakshminarayanan, Balaji", An Extensible Interactive Interface for Agent Design,"In artificial intelligence, we often specify tasks through a reward function. While this works well in some settings, many tasks are hard to specify this way. In deep reinforcement learning, for example, directly specifying a reward as a function of a high-dimensional observation is challenging. Instead, we present an interface for specifying tasks interactively using demonstrations. Our approach defines a set of increasingly complex policies. The interface allows the user to switch between these policies at fixed intervals to generate demonstrations of novel, more complex, tasks. We train new policies based on these demonstrations and repeat the process. We present a case study of our approach in the Lunar Lander domain, and show that this simple approach can quickly learn a successful landing policy and outperforms an existing comparison-based deep RL method.",http://arxiv.org/abs/1906.02641,2019,manuscript,"Rahtz, Matthew; Fang, James; Dragan, Anca D.; Hadfield-Menell, Dylan", 2017 trend in the cost of computing,"The cheapest hardware prices (for single precision FLOPS/$) appear to be falling by around an order of magnitude every 10-16 years. 
This rate is slower than the trend of FLOPS/$ observed over the past quarter century, which was an order of magnitude every 4 years. There is no particular sign of slowing between 2011 and 2017....",https://aiimpacts.org/recent-trend-in-the-cost-of-computing/,2017,blogPost,AI Impacts,AI Impacts Robot Planning with Mathematical Models of Human State and Action,"Robots interacting with the physical world plan with models of physics. We advocate that robots interacting with people need to plan with models of cognition. This writeup summarizes the insights we have gained in integrating computational cognitive models of people into robotics planning and control. It starts from a general game-theoretic formulation of interaction, and analyzes how different approximations result in different useful coordination behaviors for the robot during its interaction with people.",http://arxiv.org/abs/1705.04226,2017,manuscript,"Dragan, Anca D.", "Building safe artificial intelligence: specification, robustness, and assurance","By Pedro A. Ortega, Vishal Maini, and the DeepMind safety team",https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1,2018,blogPost,"Ortega, Pedro; Maini, Vishal",Deep Mind Safety Research (Medium) AI safety without goal-directed behavior,"When I first entered the field of AI safety, I thought of the problem as figuring out how to get the AI to have the “right” utility function. This led me to work on the problem of inferring values from demonstrators with unknown biases, despite the impossibility results in the area. I am less excited about that avenue because I am pessimistic about the prospects of ambitious value learning (for the reasons given in the first part of this sequence). I think this happened because the writing on AI risk that I encountered has the pervasive assumption that any superintelligent AI agent must be maximizing some utility function over the long term future, such that it leads to goal-directed behavior and convergent instrumental subgoals. It’s often not stated as an assumption; rather, inferences are made assuming that you have the background model that the AI is goal-directed. This makes it particularly hard to question the assumption, since you don’t realize that the assumption is even there. Another reason that this assumption is so easily accepted is that we have a long history of modeling rational agents as expected utility maximizers, and for good reason: there are many coherence arguments that say that, given that you have preferences/goals, if you aren’t using probability theory and expected utility theory, then you can be taken advantage of. It’s easy to make the inference that a superintelligent agent must be rational, and therefore it must be an expected utility maximizer. Because this assumption was so embedded in how I thought about the problem, I had trouble imagining how else to even consider the problem. I would guess this is true for at least some other people, so I want to summarize the counterargument, and list a few implications, in the hope that this makes the issue clearer. WHY GOAL-DIRECTED BEHAVIOR MAY NOT BE REQUIRED The main argument of this chapter is that it is not required that a superintelligent agent takes actions in pursuit of some goal. 
I",https://www.alignmentforum.org/posts/tHxXdAn8Yuiy9y2pZ/ai-safety-without-goal-directed-behavior,2019,blogPost,"Shah, Rohin",AI Alignment Forum Analyzing and Reducing the Risks of Inadvertent Nuclear War Between the United States and Russia,,http://www.tandfonline.com/doi/abs/10.1080/08929882.2013.798984,2013,journalArticle,"Barrett, Anthony M.; Baum, Seth D.; Hostetler, Kelly",Science & Global Security Rebuttal of Christiano and AI Impacts on takeoff speeds?,"14 months ago, Paul Christiano and AI Impacts both published forceful and well-received take-downs of many arguments for fast (discontinuous) takeoff. I haven’t seen any rebuttals that are written by established researchers, longer than comments, or otherwise convincing. The longer there is no response, the less weight I put on the outside view that proponents of fast takeoff may be right. Where are the rebuttals? Did I miss them? Is the debate decided? Did nobody have time or motivation to write something? Is the topic too hard to explain? Why rebuttals would be useful: -Give the community a sense of the extent of expert disagreement to form outside views. -Prioritization in AI policy, and to a lesser extent safety, depends on the likelihood of discontinuous progress. We may have more leverage in such cases, but this could be overwhelmed if the probability is low. -Motivate more people to work on MIRI’s research which seems more important to solve early if there is fast takeoff.",https://www.lesswrong.com/posts/PzAnWgqvfESgQEvdg/any-rebuttals-of-christiano-and-ai-impacts-on-takeoff-speeds#zFEhTxNqEp3eZbjLZ,2019,blogPost,"Gloor, Lukas",LessWrong International Cooperation vs. AI Arms Race,"There's a decent chance that governments will be the first to build artificial general intelligence (AI). International hostility, especially an AI arms race, could exacerbate risk-taking, hostile motivations, and errors of judgment when creating AI. If so, then international cooperation could be an important factor to consider when evaluating the flow-through effects of charities.",https://longtermrisk.org/international-cooperation-vs-ai-arms-race/,2015,blogPost,"Tomasik, Brian",Center on Long-Term Risk Analyzing Hidden Representations in End-to-End Automatic Speech Recognition Systems,"Neural models have become ubiquitous in automatic speech recognition systems. While neural networks are typically used as acoustic models in more complex systems, recent studies have explored end-to-end speech recognition systems based on neural networks, which can be trained to directly predict text from input acoustic features. Although such systems are conceptually elegant and simpler than traditional systems, it is less obvious how to interpret the trained models. In this work, we analyze the speech representations learned by a deep end-to-end model that is based on convolutional and recurrent layers, and trained with a connectionist temporal classification (CTC) loss. We use a pre-trained model to generate frame-level features which are given to a classifier that is trained on frame classification into phones. We evaluate representations from different layers of the deep model and compare their quality for predicting phone labels. 
Our experiments shed light on important aspects of the end-to-end model such as layer depth, model complexity, and other design choices.",http://arxiv.org/abs/1709.04482,2017,conferencePaper,"Belinkov, Yonatan; Glass, James",Advances in Neural Information Processing Systems 30 (NIPS 2017) A Rational Reinterpretation of Dual-Process Theories,"Highly influential “dual-process” accounts of human cognition postulate the coexistence of a slow accurate system with a fast error-prone system. But why would there be just two systems rather than, say, one or 93? Here, we argue that a dual-process architecture might be neither arbitrary nor irrational, but might instead reflect a rational tradeoff between the cognitive flexibility afforded by multiple systems and the time and effort required to choose between them. We investigate what the optimal set and number of cognitive systems would be depending on the structure of the environment. We find that the optimal number of systems depends on the variability of the environment and the difficulty of deciding when which system should be used. Furthermore, when having two systems is optimal, then the first system is fast but error-prone and the second system is slow but accurate. Our findings thereby provide a rational reinterpretation of dual-process theories.",http://rgdoi.net/10.13140/RG.2.2.14956.46722/1,2018,conferencePaper,"Milli, Smitha; Lieder, Falk; Griffiths, Thomas L.", The Building Blocks of Interpretability,Interpretability techniques are normally studied in isolation. We explore the powerful interfaces that arise when you combine them -- and the rich structure of this combinatorial space.,https://distill.pub/2018/building-blocks,2018,journalArticle,"Olah, Chris; Satyanarayan, Arvind; Johnson, Ian; Carter, Shan; Schubert, Ludwig; Ye, Katherine; Mordvintsev, Alexander",Distill Towards Robust Image Classification Using Sequential Attention Models,"In this paper we propose to augment a modern neural-network architecture with an attention model inspired by human perception. Specifically, we adversarially train and analyze a neural model incorporating a human inspired, visual attention component that is guided by a recurrent top-down sequential process. Our experimental evaluation uncovers several notable findings about the robustness and behavior of this new model. First, introducing attention to the model significantly improves adversarial robustness resulting in state-of-the-art ImageNet accuracies under a wide range of random targeted attack strengths. Second, we show that by varying the number of attention steps (glances/fixations) for which the model is unrolled, we are able to make its defense capabilities stronger, even in light of stronger attacks --- resulting in a ""computational race"" between the attacker and the defender.
Finally, we show that some of the adversarial examples generated by attacking our model are quite different from conventional adversarial examples --- they contain global, salient and spatially coherent structures coming from the target class that would be recognizable even to a human, and work by distracting the attention of the model away from the main object in the original image.",https://openaccess.thecvf.com/content_CVPR_2020/html/Zoran_Towards_Robust_Image_Classification_Using_Sequential_Attention_Models_CVPR_2020_paper.html,2020,conferencePaper,"Zoran, Daniel; Chrzanowski, Mike; Huang, Po-Sen; Gowal, Sven; Mott, Alex; Kohli, Pushmeet",Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Persistence and reversibility – long-term design considerations for wild animal welfare,"Crossposted from the Wild Animal Initiative blog. SUMMARY When designing interventions to improve the welfare of wild animals, we want to maximize the expected benefit produced given the cost.[1] A major factor in the cost-effectiveness of interventions is the persistence of the effects. The longer they last, the higher the ratio of benefit[2] to cost, all else being equal. However, due to widespread uncertainty concerning the effects of our actions on wild animal welfare, it is possible that an intervention will turn out to do more harm than good. Reversibility can contribute to cost-effectiveness by allowing bad outcomes to be reversed, limiting the damage of an intervention gone wrong. In short, we want to optimize persistence given a good outcome while still preserving option value in case of bad outcomes. However, there is a tension between persistence and reversibility, since most factors that contribute to high reversibility will also lead to low persistence, and vice versa (Table 1). This report aims to explore the importance of persistence and reversibility to wild animal welfare interventions, how to negotiate trade-offs between them, and ways to sidestep the trade-off altogether. My main conclusions are: * All else equal, the ideal intervention would be both persistent in the face of natural processes and reversible. * In practice, interventions that are both persistent in the face of natural processes and reversible seem to be rare. Designing more such interventions would be very useful. * Although still in development, gene drives might turn out to be a way to improve wild animal welfare that is unusually persistent in the face of natural processes while simultaneously fairly reversible. Future work should explore this technology further and try to identify responsible policies for its use. * The feasibility and long-term viability of carrying out interventions to improve wild animal welfare is strongly influenced by public perception",https://forum.effectivealtruism.org/posts/KKcCKrMsQHRCWib2Q/persistence-and-reversibility-long-term-design-1,2020,blogPost,Wild Animal Initiative,Effective Altruism Forum Universality Unwrapped,"INTRODUCTION Informally, a universal system is universal with respect to any computation; and it is a universal system with respect to a given computation if it understands every set of beliefs that can be ascribed to the computation. The intuition is that the system can reverse engineer most or all of the computation, in order to monitor it or imitate it. This in turn has important consequences for questions of alignment and competitiveness. Universality is the property that defines a universal system. And it is the point of this post.
Universality tries to capture a property needed for many alignment schemes. It was proposed by Paul Christiano, the mind behind many approaches and ideas in the prosaic AGI space, and a founding member of the safety team at OpenAI. Rohin Shah dedicated a full Alignment Newsletter to covering all 6 posts on Universality. Rohin and Evan Hubinger, two important researchers in this field, consider Universality as one of the most exciting research ideas of the last few years.[1] Yet nobody talks about Universality. Except for the Alignment Newsletter mentioned above and a response post by Evan, nothing in the Alignment Forum addresses this idea. I've seen no great discussion, no debates, no counter-arguments or criticism. The original post on Medium has no comments, and the crossposted version here only has a handful, mostly asking for clarification. And the other posts in the sequence rely on understanding this first. The simplest solution to this problem is to tell you to read the original post. Unfortunately, it is as dense as Q in R, brimming with ideas, intuitions, semi-formal explanations and the many meanderings that research takes before arriving on solid ground. That is to say, you'll have to work for it. Not everyone who might benefit from an understanding of Universality has the time, the need or the want for such an upfront investment. This post endeavors to be the next best thing: an unwrapping of the main post on univers",https://www.alignmentforum.org/posts/farherQcqFQXqRcvv/universality-unwrapped,2020,blogPost,"Shimi, Adam",AI Alignment Forum Human-robot interaction for truck platooning using hierarchical dynamic games,"This paper proposes a controller design framework for autonomous truck platoons to ensure safe interaction with a human-driven car. The interaction is modelled as a hierarchical dynamic game, played between the human driver and the nearest truck in the platoon. The hierarchical decomposition is temporal with a high-fidelity tactical horizon predicting immediate interactions and a low-fidelity strategic horizon estimating long-horizon behaviour. The hierarchical approach enables feasible computations where human uncertainties are represented by the quantal response model, and the truck is supposed to maximise its payoff. The closed-loop control is validated via case studies using a driving simulator, where we compare our approach with a short-horizon alternative using only the tactical horizon. The results indicate that our controller is more situation-aware resulting in natural and safe interactions.",https://ieeexplore.ieee.org/document/8795627/,2019,conferencePaper,"Stefansson, Elis; Fisac, Jaime F.; Sadigh, Dorsa; Sastry, S. Shankar; Johansson, Karl H.",2019 18th European Control Conference (ECC) "Robust Computer Algebra, Theorem Proving, and Oracle AI","In the context of superintelligent AI systems, the term ""oracle"" has two meanings. One refers to modular systems queried for domain-specific tasks. Another usage, referring to a class of systems which may be useful for addressing the value alignment and AI control problems, is a superintelligent AI system that only answers questions. The aim of this manuscript is to survey contemporary research problems related to oracles which align with long-term research goals of AI safety. We examine existing question answering systems and argue that their high degree of architectural heterogeneity makes them poor candidates for rigorous analysis as oracles.
On the other hand, we identify computer algebra systems (CASs) as being primitive examples of domain-specific oracles for mathematics and argue that efforts to integrate computer algebra systems with theorem provers, systems which have largely been developed independent of one another, provide a concrete set of problems related to the notion of provable safety that has emerged in the AI safety community. We review approaches to interfacing CASs with theorem provers, describe well-defined architectural deficiencies that have been identified with CASs, and suggest possible lines of research and practical software projects for scientists interested in AI safety.",http://arxiv.org/abs/1708.02553,2017,journalArticle,"Sarma, Gopal P.; Hay, Nick J.",Informatica Human agency and global catastrophic biorisks,,,2017,journalArticle,"Millett, Piers; Snyder-Beattie, Andrew",Health security New challenges in organizational research: high reliability organizations,,http://journals.sagepub.com/doi/10.1177/108602668900300202,1989,journalArticle,"Roberts, Karlene H.",Industrial Crisis Quarterly Towards Seamless Human-Robot Handovers,,http://dl.acm.org/citation.cfm?id=3109705,2013,journalArticle,"Strabala, Kyle Wayne; Lee, Min Kyung; Dragan, Anca Diana; Forlizzi, Jodi Lee; Srinivasa, Siddhartha; Cakmak, Maya; Micelli, Vincenzo",Journal of Human-Robot Interaction Verbalization: Narration of Autonomous Robot Experience,"Autonomous mobile robots navigate in our spaces by planning and executing routes to destinations. When a mobile robot appears at a location, there is no clear way to understand what navigational path the robot planned and experienced just by looking at it. In this work, we address the generation of narrations of autonomous mobile robot navigation experiences. We contribute the concept of verbalization as a parallel to the well-studied concept of visualization. Through verbalizations, robots can describe through language what they experience, in particular in their paths. For every executed path, we consider many possible verbalizations that could be generated. We introduce the verbalization space that covers the variability of utterances that the robot may use to narrate its experience to different humans. We present an algorithm for segmenting a path and mapping each segment to an utterance, as a function of the desired point in the verbalization space, and demonstrate its application using our mobile service robot moving in our buildings. We believe our verbalization space and algorithm are applicable to different narrative aspects for many mobile robots, including autonomous cars.",,2016,conferencePaper,"Rosenthal, Stephanie; Selvaraj, Sai P; Veloso, Manuela",Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16) How Roodman's GWP model translates to TAI timelines,"How does David Roodman’s world GDP model translate to TAI timelines? Now, before I go any further, let me be the first to say that I don’t think we should use this model to predict TAI. This model takes a very broad outside view and is thus inferior to models like Ajeya Cotra’s which make use of more relevant information. (However, it is still useful for rebutting claims that TAI is unprecedented, inconsistent with historical trends, low-prior, etc.) Nevertheless, out of curiosity I thought I’d calculate what the model implies for TAI timelines. Here is the projection made by Roodman’s model. 
The red line is real historic GWP data; the splay of grey shades that continues it is the splay of possible futures calculated by the model. The median trajectory is the black line. I messed around with a ruler to make some rough calculations, marking up the image with blue lines as I went. The big blue line indicates the point on the median trajectory where GWP is 10x what it was in 2019. Eyeballing it, it looks like it happens around 2040, give or take a year. The small vertical blue line indicates the year 2037. The small horizontal blue line indicates GWP in 2037 on the median trajectory. Thus, it seems that between 2037 and 2040 on the median trajectory, GWP doubles. (One-ninth the distance between 1,000 and 1,000,000 is crossed, which is one-third of an order of magnitude, which is about one doubling). This means that TAI happens around 2037 on the median trajectory according to this model, at least according to Ajeya Cotra’s definition of transformative AI as “software which causes a tenfold acceleration in the rate of growth of the world economy (assuming that it is used everywhere that it would be economically profitable to use it)... This means that if TAI is developed in year Y, the entire world economy would more than double by year Y + 4.” What about the non-median trajectories? Each shade of grey represents 5 percent of the simulated future trajectories, so",https://www.lesswrong.com/posts/L23FgmpjsTebqcSZb/how-roodman-s-gwp-model-translates-to-tai-timelines,2020,blogPost,"Kokotajlo, Daniel",LessWrong Oversight of Unsafe Systems via Dynamic Safety Envelopes,"This paper reviews the reasons that Human-in-the-Loop is both critical for preventing widely-understood failure modes for machine learning, and not a practical solution. Following this, we review two current heuristic methods for addressing this. The first is provable safety envelopes, which are possible only when the dynamics of the system are fully known, but can be useful safety guarantees when optimal behavior is based on machine learning with poorly-understood safety characteristics. The second is the simpler circuit breaker model, which can forestall or prevent catastrophic outcomes by stopping the system, without any specific model of the system. This paper proposes using heuristic, dynamic safety envelopes, which are a plausible halfway point between these approaches that allows human oversight without some of the more difficult problems faced by Human-in-the-Loop systems. Finally, the paper concludes with how this approach can be used for governance of systems where otherwise unsafe systems are deployed.",https://arxiv.org/abs/1811.09246v1,2018,manuscript,"Manheim, David", Ethical Reflections on Artificial Intelligence,,http://apcz.umk.pl/czasopisma/index.php/SetF/article/view/SetF.2018.015,2018,journalArticle,"Green, Brian Patrick",Scientia et Fides Cause prioritization research,,https://cdn.80000hours.org/wp-content/uploads/2017/02/Cause-Prioritization-Shallow-Overview.pdf,,manuscript,"Grace, Katja", In defence of epistemic modesty,"This piece defends a strong form of epistemic modesty: that, in most cases, one should pay scarcely any attention to what you find the most persuasive view on an issue, hewing instead to an idealized consensus of experts. I start by better pinning down exactly what is meant by ‘epistemic modesty’, go on to offer a variety of reasons that motivate it, and reply to some common objections. Along the way, I show common traps people being inappropriately modest fall into.
I conclude that modesty is a superior epistemic strategy, and ought to be more widely used - particularly in the EA/rationalist communities. [gdoc] PROVOCATION I argue for this: In virtually all cases, the credence you hold for any given belief should be dominated by the balance of credences held by your epistemic peers and superiors . One’s own convictions should weigh no more heavily in the balance than that of one other epistemic peer. INTRODUCTIONS AND CLARIFICATIONS A FAVOURABLE MOTIVATING CASE Suppose your mother thinks she can make some easy money day trading blue-chip stocks, and plans to kick off tomorrow shorting Google on the stock market, as they’re sure it’s headed for a crash. You might want to dissuade her in a variety of ways. You might appeal to an outside view: Mum, when you make this short you’re going to be betting against some hedge fund, quant, or whatever else. They have loads of advantages: relevant background, better information, lots of data and computers, and so on. Do you really think you’re odds on to win this bet? Or appeal to some reference class: Mum, I’m pretty sure the research says that people trying to day-trade stocks tend not to make much money at all. Although you might hear some big successes on the internet, you don’t hear about everyone else who went bust. So why should you think you are likely to be one of these remarkable successes? Or just cite disagreement: Look Mum: Dad, sister, the grandparents and I all think this is a really bad idea. Please",https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty,2017,blogPost,"Lewis, Gregory",Effective Altruism Forum Unsupervised Learning of Visual Features by Contrasting Cluster Assignments,"Unsupervised image representations have significantly reduced the gap with supervised pretraining, notably with the recent achievements of contrastive learning methods. These contrastive methods typically work online and rely on a large number of explicit pairwise feature comparisons, which is computationally challenging. In this paper, we propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring to compute pairwise comparisons. Specifically, our method simultaneously clusters the data while enforcing consistency between cluster assignments produced for different augmentations (or “views”) of the same image, instead of comparing features directly as in contrastive learning. Simply put, we use a “swapped” prediction mechanism where we predict the cluster assignment of a view from the representation of another view. Our method can be trained with large and small batches and can scale to unlimited amounts of data. Compared to previous contrastive methods, our method is more memory efficient since it does not require a large memory bank or a special momentum network. In addition, we also propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without increasing the memory or compute requirements much. 
We validate our findings by achieving 75.3% top-1 accuracy on ImageNet with ResNet-50, as well as surpassing supervised pretraining on all the considered transfer tasks.",http://arxiv.org/abs/2006.09882,2020,conferencePaper,"Caron, Mathilde; Misra, Ishan; Mairal, Julien; Goyal, Priya; Bojanowski, Piotr; Joulin, Armand","Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS)," Safe and Nested Subgame Solving for Imperfect-Information Games,"In imperfect-information games, the optimal strategy in a subgame may depend on the strategy in other, unreached subgames. Thus a subgame cannot be solved in isolation and must instead consider the strategy for the entire game as a whole, unlike perfect-information games. Nevertheless, it is possible to first approximate a solution for the whole game and then improve it by solving individual subgames. This is referred to as subgame solving. We introduce subgame-solving techniques that outperform prior methods both in theory and practice. We also show how to adapt them, and past subgame-solving techniques, to respond to opponent actions that are outside the original action abstraction; this significantly outperforms the prior state-of-the-art approach, action translation. Finally, we show that subgame solving can be repeated as the game progresses down the game tree, leading to far lower exploitability. These techniques were a key component of Libratus, the first AI to defeat top humans in heads-up no-limit Texas hold'em poker.",http://arxiv.org/abs/1705.02955,2017,conferencePaper,"Brown, Noam; Sandholm, Tuomas",Advances in Neural Information Processing Systems 30 (NIPS 2017) An unaligned benchmark,"What an unaligned AI might look like, how it could go wrong, and how we could fix it.",https://ai-alignment.com/an-unaligned-benchmark-b49ad992940b,2018,blogPost,"Christiano, Paul",AI Alignment (Medium) Pessimism About Unknown Unknowns Inspires Conservatism,"If we could define the set of all bad outcomes, we could hard-code an agent which avoids them; however, in sufficiently complex environments, this is infeasible. We do not know of any general-purpose approaches in the literature to avoiding novel failure modes. Motivated by this, we define an idealized Bayesian reinforcement learner which follows a policy that maximizes the worst-case expected reward over a set of world-models. We call this agent pessimistic, since it optimizes assuming the worst case. A scalar parameter tunes the agent's pessimism by changing the size of the set of world-models taken into account. Our first main contribution is: given an assumption about the agent's model class, a sufficiently pessimistic agent does not cause ""unprecedented events"" with probability $1-\delta$, whether or not designers know how to precisely specify those precedents they are concerned with. Since pessimism discourages exploration, at each timestep, the agent may defer to a mentor, who may be a human or some known-safe policy we would like to improve. Our other main contribution is that the agent's policy's value approaches at least that of the mentor, while the probability of deferring to the mentor goes to 0. 
In high-stakes environments, we might like advanced artificial agents to pursue goals cautiously, which is a non-trivial problem even if the agent were allowed arbitrary computing power; we present a formal solution.",http://arxiv.org/abs/2006.08753,2020,manuscript,"Cohen, Michael K.; Hutter, Marcus", Adversarial Soft Advantage Fitting: Imitation Learning without Policy Optimization,"Adversarial imitation learning alternates between learning a discriminator -- which tells apart expert's demonstrations from generated ones -- and a generator's policy to produce trajectories that can fool this discriminator. This alternated optimization is known to be delicate in practice since it compounds unstable adversarial training with brittle and sample-inefficient reinforcement learning. We propose to remove the burden of the policy optimization steps by leveraging a novel discriminator formulation. Specifically, our discriminator is explicitly conditioned on two policies: the one from the previous generator's iteration and a learnable policy. When optimized, this discriminator directly learns the optimal generator's policy. Consequently, our discriminator's update solves the generator's optimization problem for free: learning a policy that imitates the expert does not require an additional optimization loop. This formulation effectively cuts by half the implementation and computational burden of adversarial imitation learning algorithms by removing the reinforcement learning phase altogether. We show on a variety of tasks that our simpler approach is competitive to prevalent imitation learning methods.",http://arxiv.org/abs/2006.13258,2020,conferencePaper,"Barde, Paul; Roy, Julien; Jeon, Wonseok; Pineau, Joelle; Pal, Christopher; Nowrouzezahrai, Derek",Advances in Neural Information Processing Systems 33 (2020) Special issue on autonomous agents modelling other agents: Guest editorial,"Much research in artificial intelligence is concerned with enabling autonomous agents to reason about various aspects of other agents (such as their beliefs, goals, plans, or decisions) and to utilise such reasoning for effective interaction. This special issue contains new technical contributions addressing open problems in autonomous agents modelling other agents, as well as research perspectives about current developments, challenges, and future directions.",http://www.sciencedirect.com/science/article/pii/S0004370220300515,2020,journalArticle,"Albrecht, Stefano V.; Stone, Peter; Wellman, Michael P.",Artificial Intelligence The AI-Box Experiment,,https://www.yudkowsky.net/singularity/aibox,2002,blogPost,"Yudkowsky, Eliezer",Eliezer S Yudkowsky "Adolescents’ Electronic Media Use at Night, Sleep Disturbance, and Depressive Symptoms in the Smartphone Age",,http://link.springer.com/10.1007/s10964-014-0176-x,2015,journalArticle,"Lemola, Sakari; Perkinson-Gloor, Nadine; Brand, Serge; Dewald-Kaufmann, Julia F.; Grob, Alexander",Journal of Youth and Adolescence Demons in Imperfect Search,"One day, a gradient descent algorithm ball was happily rolling down a high-dimensional surface hill. All it wanted was to roll as far down as possible. Unbeknownst to the ball, just off to the side was a steep drop-off - but there was a small bump between the ball and the drop-off. No matter; there was enough random noise on the ball that it would jump the bump sooner or later. But the ball was headed into unfriendly territory. As the ball rolled along, the bump became taller. 
The farther it rolled, the taller the bump grew, until no hope remained of finding the big drop anytime before the stars burned out. Then the road began to narrow, and to twist and turn, and to become flatter. Soon the ball rolled down only the slightest slope, with tall walls on both sides constraining its path. The ball had entered the territory of a demon, and now that demon was steering the ball according to its own nefarious ends. This wasn’t the first time the ball had entered the territory of a demon. In early times, the demons had just been bumps which happened to grow alongside the ball’s path, for a time - chance events, nothing more. But every now and then, two bumps in close proximity would push the ball in different directions. The ball would roll on, oblivious, and end up going in one direction or the other. Whichever bump had ""won"" would continue to steer the ball's trajectory - and so a selection process occurred. The ball tended to roll alongside bumps which more effectively controlled its trajectory - bumps which were taller, bumps which steered it away from competing bumps. And so, over time, bumps gave way to barriers, and barriers gave way to demons - twisty paths with high walls to keep the ball contained and avoid competing walls, slowing the ball's descent to a crawl, conserving its potential energy in case a sharp drop were needed to avoid a competitor's wall. The ball’s downhill progress slowed and slowed. Even though the rich, high-dimensional space was filled w",https://www.alignmentforum.org/posts/KnPN7ett8RszE79PH/demons-in-imperfect-search,2020,blogPost,"Wentworth, John",AI Alignment Forum Learning biases and rewards simultaneously,"I’ve finally uploaded to arXiv our work on inferring human biases alongside IRL, which was published at ICML 2019. SUMMARY OF THE PAPER THE IRL DEBATE Here’s a quick tour of the debate about inverse reinforcement learning (IRL) and cognitive biases, featuring many of the ideas from the first chapter of the Value Learning sequence: I had the intuition that the impossibility theorem was like the other no-free-lunch theorems in ML: not actually relevant for what ML could do in practice. So we tried to learn and correct for systematic biases in IRL. THE IDEA BEHIND THE ALGORITHMS The basic idea was to learn the planning algorithm by which the human produces demonstrations, and try to ensure that the planning algorithm captured the appropriate systematic biases. We used a Value Iteration Network to give an inductive bias towards “planners” but otherwise did not assume anything about the form of the systematic bias. [1] Then, we could perform IRL by figuring out which reward would cause the planning algorithm to output the given demonstrations. The reward would be “debiased” because the effect of the biases on the policy would already be accounted for in the planning algorithm. How could we learn the planning algorithm? Well, one baseline method is to assume that we have access to some tasks where the rewards are known, and use those tasks to learn what the planning algorithm is. Then, once that is learned, we can infer the rewards for new tasks that we haven’t seen before. This requires the planner to generalize across tasks. However, it’s kind of cheating to assume access to ground truth rewards, since we usually wouldn’t have them. What if we learned the planning algorithm and rewards simultaneously? 
Well, the no-free-lunch theorem gets us then: maximizing the true reward and minimizing the negative of the true reward would lead to the same policy, and so you can’t distinguish between them, and so the output of your IRL algorithm could be the true reward or the",https://www.alignmentforum.org/posts/xxnPxELC4jLKaFKqG/learning-biases-and-rewards-simultaneously,2019,blogPost,"Shah, Rohin",AI Alignment Forum Market Manipulation: An Adversarial Learning Framework for Detection and Evasion,We propose an adversarial learning framework to capture the evolving game between a regulator who develops tools to detect market manipulation and a manipulator who obfuscates actions to evade detection. The model includes three main parts: (1) a generator that learns to adapt original manipulation order streams to resemble trading patterns of a normal trader while preserving the manipulation intent; (2) a discriminator that differentiates the adversarially adapted manipulation order streams from normal trading activities; and (3) an agent-based simulator that evaluates the manipulation effect of adapted outputs. We conduct experiments on simulated order streams associated with a manipulator and a market-making agent respectively. We show examples of adapted manipulation order streams that mimic a specified market maker’s quoting patterns and appear qualitatively different from the original manipulation strategy we implemented in the simulator. These results demonstrate the possibility of automatically generating a diverse set of (unseen) manipulation strategies that can facilitate the training of more robust detection algorithms.,https://www.ijcai.org/proceedings/2020/638,2020,conferencePaper,"Wang, Xintong; Wellman, Michael P.",Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Policy Shaping: Integrating Human Feedback with Reinforcement Learning,,https://proceedings.neurips.cc/paper/2013/hash/e034fb6b66aacc1d48f445ddfb08da98-Abstract.html,2013,journalArticle,"Griffith, Shane; Subramanian, Kaushik; Scholz, Jonathan; Isbell, Charles L.; Thomaz, Andrea L.",Advances in Neural Information Processing Systems Transfer Learning for Estimating Causal Effects using Neural Networks,"We develop new algorithms for estimating heterogeneous treatment effects, combining recent developments in transfer learning for neural networks with insights from the causal inference literature. By taking advantage of transfer learning, we are able to efficiently use different data sources that are related to the same underlying causal mechanisms. We compare our algorithms with those in the extant literature using extensive simulation studies based on large-scale voter persuasion experiments and the MNIST database. Our methods can perform an order of magnitude better than existing benchmarks while using a fraction of the data.",http://arxiv.org/abs/1808.07804,2018,manuscript,"Künzel, Sören R.; Stadie, Bradly C.; Vemuri, Nikita; Ramakrishnan, Varsha; Sekhon, Jasjeet S.; Abbeel, Pieter", Expressing Robot Incapability,"Our goal is to enable robots to express their incapability, and to do so in a way that communicates both what they are trying to accomplish and why they are unable to accomplish it. We frame this as a trajectory optimization problem: maximize the similarity between the motion expressing incapability and what would amount to successful task execution, while obeying the physical limits of the robot. 
We introduce and evaluate candidate similarity measures, and show that one in particular generalizes to a range of tasks, while producing expressive motions that are tailored to each task. Our user study supports that our approach automatically generates motions expressing incapability that communicate both what and why to end-users, and improve their overall perception of the robot and willingness to collaborate with it in the future.",http://arxiv.org/abs/1810.08167,2018,conferencePaper,"Kwon, Minae; Huang, Sandy H.; Dragan, Anca D.",Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction - HRI '18 Estimating long-term treatment effects without long-term outcome data,"Estimating long-term impacts of actions is important in many areas but the key difficulty is that long-term outcomes are only observed with a long delay. One alternative approach is to measure the effect on an intermediate outcome or a statistical surrogate and then use this to estimate the long-term effect. Athey et al. (2019) generalise the surrogacy method to work with multiple surrogates, rather than just one, increasing its credibility in social science contexts. I empirically test the multiple surrogates approach for long-term effect estimation in real-world conditions using long-run RCTs from development economics. In the context of conditional cash transfers for education in Colombia, I find that the method works well for predicting treatment effects over a 5-year time span but poorly over 10 years due to a reduced set of variables available when attempting to predict effects further into the future. The method is sensitive to observing appropriate surrogates.",,2020,journalArticle,"Bernard, David Rhys",Statistics in Medicine Some Principles of the Theory of Testing Hypotheses,,http://projecteuclid.org/euclid.aoms/1177729884,1950,journalArticle,"Lehmann, E. L.",The Annals of Mathematical Statistics One-Shot Visual Imitation Learning via Meta-Learning,"In order for a robot to be a generalist that can perform a wide range of jobs, it must be able to acquire a wide variety of skills quickly and efficiently in complex unstructured environments. High-capacity models such as deep neural networks can enable a robot to represent complex skills, but learning each skill from scratch then becomes infeasible. In this work, we present a meta-imitation learning method that enables a robot to learn how to learn more efficiently, allowing it to acquire new skills from just a single demonstration. Unlike prior methods for one-shot imitation, our method can scale to raw pixel inputs and requires data from significantly fewer prior tasks for effective learning of new skills. Our experiments on both simulated and real robot platforms demonstrate the ability to learn new tasks, end-to-end, from a single visual demonstration.",http://arxiv.org/abs/1709.04905,2017,conferencePaper,"Finn, Chelsea; Yu, Tianhe; Zhang, Tianhao; Abbeel, Pieter; Levine, Sergey",Proceedings of the 1st Annual Conference on Robot Learning Visualizing Neural Networks with the Grand Tour,"By focusing on linear dimensionality reduction, we show how to visualize many dynamic phenomena in neural networks.",https://distill.pub/2020/grand-tour,2020,journalArticle,"Li, Mingwei; Zhao, Zhenge; Scheidegger, Carlos",Distill Learning to Interactively Learn and Assist,"When deploying autonomous agents in the real world, we need effective ways of communicating objectives to them. 
Traditional skill learning has revolved around reinforcement and imitation learning, each with rigid constraints on the format of information exchanged between the human and the agent. While scalar rewards carry little information, demonstrations require significant effort to provide and may carry more information than is necessary. Furthermore, rewards and demonstrations are often defined and collected before training begins, when the human is most uncertain about what information would help the agent. In contrast, when humans communicate objectives with each other, they make use of a large vocabulary of informative behaviors, including non-verbal communication, and often communicate throughout learning, responding to observed behavior. In this way, humans communicate intent with minimal effort. In this paper, we propose such interactive learning as an alternative to reward or demonstration-driven learning. To accomplish this, we introduce a multi-agent training framework that enables an agent to learn from another agent who knows the current task. Through a series of experiments, we demonstrate the emergence of a variety of interactive learning behaviors, including information-sharing, information-seeking, and question-answering. Most importantly, we find that our approach produces an agent that is capable of learning interactively from a human user, without a set of explicit demonstrations or a reward function, and achieving significantly better performance cooperatively with a human than a human performing the task alone.",http://arxiv.org/abs/1906.10187,2019,conferencePaper,"Woodward, Mark; Finn, Chelsea; Hausman, Karol",Proceedings of the AAAI Conference on Artificial Intelligence The Social Cost of Strategic Classification,"Consequential decision-making typically incentivizes individuals to behave strategically, tailoring their behavior to the specifics of the decision rule. A long line of work has therefore sought to counteract strategic behavior by designing more conservative decision boundaries in an effort to increase robustness to the effects of strategic covariate shift. We show that these efforts benefit the institutional decision maker at the expense of the individuals being classified. Introducing a notion of social burden, we prove that any increase in institutional utility necessarily leads to a corresponding increase in social burden. Moreover, we show that the negative externalities of strategic classification can disproportionately harm disadvantaged groups in the population. Our results highlight that strategy-robustness must be weighed against considerations of social welfare and fairness.",http://arxiv.org/abs/1808.08460,2018,conferencePaper,"Milli, Smitha; Miller, John; Dragan, Anca D.; Hardt, Moritz","FAT* '19: Proceedings of the Conference on Fairness, Accountability, and Transparency" On the Recursive Teaching Dimension of VC Classes,,https://proceedings.neurips.cc/paper/2016/hash/69a5b5995110b36a9a347898d97a610e-Abstract.html,2016,journalArticle,"Chen, Xi; Chen, Xi; Cheng, Yu; Tang, Bo",Advances in Neural Information Processing Systems ReNeg and Backseat Driver: Learning from Demonstration with Continuous Human Feedback,"In autonomous vehicle (AV) control, allowing mistakes can be quite dangerous and costly in the real world. For this reason we investigate methods of training an AV without allowing the agent to explore and instead having a human explorer collect the data. 
Supervised learning has been explored for AV control, but it encounters the issue of the covariate shift. That is, training data collected from an optimal demonstration consists only of the states induced by the optimal control policy, but at runtime, the trained agent may encounter a vastly different state distribution with little relevant training data. To mitigate this issue, we have our human explorer make sub-optimal decisions. In order to have our agent not replicate these sub-optimal decisions, supervised learning requires that we either erase these actions, or replace these actions with the correct action. Erasing is wasteful and replacing is difficult, since it is not easy to know the correct action without driving. We propose an alternate framework that includes continuous scalar feedback for each action, marking which actions we should replicate, which we should avoid, and how sure we are. Our framework learns continuous control from sub-optimal demonstration and evaluative feedback collected before training. We find that a human demonstrator can explore sub-optimal states in a safe manner, while still getting enough gradation to benefit learning. The collection method for data and feedback we call ""Backseat Driver."" We call the more general learning framework ReNeg, since it learns a regression from states to actions given negative as well as positive examples. We empirically validate several models in the ReNeg framework, testing on lane-following with limited data. We find that the best solution is a generalization of mean-squared error and outperforms supervised learning on the positive examples alone.",http://arxiv.org/abs/1901.05101,2019,manuscript,"Beck, Jacob; Papakipos, Zoe; Littman, Michael", AI Definitions Affect Policymaking,"The task of artificial intelligence policymaking is complex and challenging, made all the more difficult by such a rapidly evolving technology. In order to address the security and economic implications of AI, policymakers must be able to viably define, categorize and assess AI research and technology. In this issue brief, CSET puts forward a functional definition of AI, based on three core principles, that significantly outperforms methods developed over the last decade.",https://cset.georgetown.edu/research/ai-definitions-affect-policymaking/,2020,report,"Murdick, Dewey; Dunham, James; Melot, Jennifer", Predictive Uncertainty Estimation via Prior Networks,"Estimating how uncertain an AI system is in its predictions is important to improve the safety of such systems. Uncertainty in predictions can result from uncertainty in model parameters, irreducible data uncertainty and uncertainty due to distributional mismatch between the test and training data distributions. Different actions might be taken depending on the source of the uncertainty so it is important to be able to distinguish between them. Recently, baseline tasks and metrics have been defined and several practical methods to estimate uncertainty developed. These methods, however, attempt to model uncertainty due to distributional mismatch either implicitly through model uncertainty or as data uncertainty. This work proposes a new framework for modeling predictive uncertainty called Prior Networks (PNs) which explicitly models distributional uncertainty. PNs do this by parameterizing a prior distribution over predictive distributions. 
This work focuses on uncertainty for classification and evaluates PNs on the tasks of identifying out-of-distribution (OOD) samples and detecting misclassification on the MNIST dataset, where they are found to outperform previous methods. Experiments on synthetic and MNIST and CIFAR-10 data show that unlike previous non-Bayesian methods PNs are able to distinguish between data and distributional uncertainty.",http://arxiv.org/abs/1802.10501,2018,conferencePaper,"Malinin, Andrey; Gales, Mark",Advances in Neural Information Processing Systems 31 (NeurIPS 2018) Alignment proposals and complexity classes,"In the original “AI safety via debate” paper, Geoffrey Irving et al. introduced the concept of analyzing different alignment proposals from the perspective of what complexity class they are able to access under optimal play. I think this is a pretty neat way to analyze different alignment proposals—in particular, I think it can help us gain some real insights into how far into the superhuman different systems are able to go. Thus, the goal of this post is to try to catalog different alignment proposals based on the metric of what complexity class they have so far been proven to access. To do that, I have included a variety of new complexity class proofs in this post. Of particular note, I demonstrate that there exist forms of both imitative amplification and AI safety via market making that reach all the way up to R —which is significant given that the largest complexity class that any alignment proposal was known to access previously was NEXP. Only the forms of amplification and market making making use of pointers (as in strong HCH), however, can access R—for the pointer-less versions, I demonstrate in this post that they access PSPACE and EXP, respectively. The EXP proof for market making is also particularly notable as it is the only approach on my list that ends up in that complexity class. Additionally, I also demonstrate that recursive reward modeling can reach all the way to PSPACE, improving upon the previous best result in “Scalable agent alignment via reward modeling” that it accesses NP. Before I jump in, however, some preliminaries. First, we'll assume that a human, H, is polynomial-time such that H can reliably solve any problem in P but not anything beyond that. Second, we'll assume that our training procedure and resulting models are arbitrarily strong in terms of what complexity class they can access. Third, we'll assume that H gets oracle access to the models during training. Then, we'll say that a proposal to train a model M using a loss functi",https://www.alignmentforum.org/posts/N64THGX7XNCqRtvPG/alignment-proposals-and-complexity-classes,2020,blogPost,"Hubinger, Evan",AI Alignment Forum Informed oversight,An overseer can provide adequate rewards for an agent if they know everything the agent knows. (Update of a 2016 post.),https://ai-alignment.com/informed-oversight-18fcb5d3d1e1,2019,blogPost,"Christiano, Paul",AI Alignment (Medium) Playing the Game of Universal Adversarial Perturbations,"We study the problem of learning classifiers robust to universal adversarial perturbations. While prior work approaches this problem via robust optimization, adversarial training, or input transformation, we instead phrase it as a two-player zero-sum game. 
In this new formulation, both players simultaneously play the same game, where one player chooses a classifier that minimizes a classification loss whilst the other player creates an adversarial perturbation that increases the same loss when applied to every sample in the training set. By observing that performing a classification (respectively creating adversarial samples) is the best response to the other player, we propose a novel extension of a game-theoretic algorithm, namely fictitious play, to the domain of training robust classifiers. Finally, we empirically show the robustness and versatility of our approach in two defence scenarios where universal attacks are performed on several image classification datasets -- CIFAR10, CIFAR100 and ImageNet.",http://arxiv.org/abs/1809.07802,2018,manuscript,"Perolat, Julien; Malinowski, Mateusz; Piot, Bilal; Pietquin, Olivier", Causal Entropic Forces,,https://link.aps.org/doi/10.1103/PhysRevLett.110.168702,2013,journalArticle,"Wissner-Gross, A. D.; Freer, C. E.",Physical Review Letters The Bitter Lesson,,http://www.incompleteideas.net/IncIdeas/BitterLesson.html,2019,blogPost,"Sutton, Rich",Incomplete Ideas Uber Self-Driving Crash,,https://www.jefftk.com/p/uber-self-driving-crash,2019,blogPost,"Kaufman, Jeff",Jeff Kaufman Learning from Extrapolated Corrections,"Our goal is to enable robots to learn cost functions from user guidance. Often it is difficult or impossible for users to provide full demonstrations, so corrections have emerged as an easier guidance channel. However, when robots learn cost functions from corrections rather than demonstrations, they have to extrapolate a small amount of information - the change of a waypoint along the way - to the rest of the trajectory. We cast this extrapolation problem as online function approximation, which exposes different ways in which the robot can interpret what trajectory the person intended, depending on the function space used for the approximation. Our simulation results and user study suggest that using function spaces with non-Euclidean norms can better capture what users intend, particularly if environments are uncluttered. This, in turn, can lead to the robot learning a more accurate cost function and improves the user's subjective perceptions of the robot.",,2019,conferencePaper,"Zhang, Jason Y.; Dragan, Anca D.",2019 International Conference on Robotics and Automation (ICRA) Dynamic Awareness,"We investigate how to model the beliefs of an agent who becomes more aware. We use the framework of Halpern and Rego (2013) by adding probability, and define a notion of a model transition that describes constraints on how, if an agent becomes aware of a new formula $\phi$ in state $s$ of a model $M$, she transitions to state $s^*$ in a model $M^*$. We then discuss how such a model can be applied to information disclosure.",http://arxiv.org/abs/2007.02823,2020,conferencePaper,"Halpern, Joseph Y.; Piermont, Evan", Sensory Optimization: Neural Networks as a Model for Understanding and Creating Art,,https://arxiv.org/abs/1911.07068,2019,manuscript,"Evans, Owain", Meta-Learning without Memorization,"The ability to learn new concepts with small amounts of data is a critical aspect of intelligence that has proven challenging for deep learning methods. Meta-learning has emerged as a promising technique for leveraging data from previous tasks to enable efficient learning of new tasks. 
However, most meta-learning algorithms implicitly require that the meta-training tasks be mutually-exclusive, such that no single model can solve all of the tasks at once. For example, when creating tasks for few-shot image classification, prior work uses a per-task random assignment of image classes to N-way classification labels. If this is not done, the meta-learner can ignore the task training data and learn a single model that performs all of the meta-training tasks zero-shot, but does not adapt effectively to new image classes. This requirement means that the user must take great care in designing the tasks, for example by shuffling labels or removing task identifying information from the inputs. In some domains, this makes meta-learning entirely inapplicable. In this paper, we address this challenge by designing a meta-regularization objective using information theory that places precedence on data-driven adaptation. This causes the meta-learner to decide what must be learned from the task training data and what should be inferred from the task testing input. By doing so, our algorithm can successfully use data from non-mutually-exclusive tasks to efficiently adapt to novel tasks. We demonstrate its applicability to both contextual and gradient-based meta-learning algorithms, and apply it in practical settings where applying standard meta-learning has been difficult. Our approach substantially outperforms standard meta-learning algorithms in these settings.",http://arxiv.org/abs/1912.03820,2020,conferencePaper,"Yin, Mingzhang; Tucker, George; Zhou, Mingyuan; Levine, Sergey; Finn, Chelsea", Reconciliation between factions focused on near-term and long-term artificial intelligence,,,2018,journalArticle,"Baum, Seth D.",AI & Society Robot Sex: Social and Ethical Implications,,,2017,book,"Migotti, Mark; Wyatt, Nicole; Earp, Brian; Sandberg, Anders; di Nucci, Ezio; Hertzfeld, Noreen; Strikwerda, Litska; Petersen, Stephen; Goldstein, Joshua; Hauskeller, Michael", Learning Reward Machines for Partially Observable Reinforcement Learning,"Reward Machines (RMs) provide a structured, automata-based representation of a reward function that enables a Reinforcement Learning (RL) agent to decompose an RL problem into structured subproblems that can be efficiently learned via off-policy learning. Here we show that RMs can be learned from experience, instead of being specified by the user, and that the resulting problem decomposition can be used to effectively solve partially observable RL problems. We pose the task of learning RMs as a discrete optimization problem where the objective is to find an RM that decomposes the problem into a set of subproblems such that the combination of their optimal memoryless policies is an optimal policy for the original problem. We show the effectiveness of this approach on three partially observable domains, where it significantly outperforms A3C, PPO, and ACER, and discuss its advantages, limitations, and broader potential.",http://www.cs.toronto.edu/~rntoro/docs/LRM_paper.pdf,2019,conferencePaper,"Icarte, Rodrigo Toro; Valenzano, Richard; Waldie, Ethan; Castro, Margarita P; Klassen, Toryn Q; McIlraith, Sheila A", Verifying Controllers Against Adversarial Examples with Bayesian Optimization,"Recent successes in reinforcement learning have led to the development of complex controllers for real-world robots. As these robots are deployed in safety-critical applications and interact with humans, it becomes critical to ensure safety in order to avoid causing harm. 
A first step in this direction is to test the controllers in simulation. To be able to do this, we need to capture what we mean by safety and then efficiently search the space of all behaviors to see if they are safe. In this paper, we present an active-testing framework based on Bayesian Optimization. We specify safety constraints using logic and exploit structure in the problem in order to test the system for adversarial counter examples that violate the safety specifications. These specifications are defined as complex boolean combinations of smooth functions on the trajectories and, unlike reward functions in reinforcement learning, are expressive and impose hard constraints on the system. In our framework, we exploit regularity assumptions on individual functions in form of a Gaussian Process (GP) prior. We combine these into a coherent optimization framework using problem structure. The resulting algorithm is able to provably verify complex safety specifications or alternatively find counter examples. Experimental results show that the proposed method is able to find adversarial examples quickly.",http://arxiv.org/abs/1802.08678,2018,conferencePaper,"Ghosh, Shromona; Berkenkamp, Felix; Ranade, Gireeja; Qadeer, Shaz; Kapoor, Ashish",2018 IEEE International Conference on Robotics and Automation (ICRA) Scalable Centralized Deep Multi-Agent Reinforcement Learning via Policy Gradients,"In this paper, we explore using deep reinforcement learning for problems with multiple agents. Most existing methods for deep multi-agent reinforcement learning consider only a small number of agents. When the number of agents increases, the dimensionality of the input and control spaces increase as well, and these methods do not scale well. To address this, we propose casting the multi-agent reinforcement learning problem as a distributed optimization problem. Our algorithm assumes that for multi-agent settings, policies of individual agents in a given population live close to each other in parameter space and can be approximated by a single policy. With this simple assumption, we show our algorithm to be extremely effective for reinforcement learning in multi-agent settings. We demonstrate its effectiveness against existing comparable approaches on co-operative and competitive tasks.",http://arxiv.org/abs/1805.08776,2018,manuscript,"Khan, Arbaaz; Zhang, Clark; Lee, Daniel D.; Kumar, Vijay; Ribeiro, Alejandro", On the Geometry of Adversarial Examples,"Adversarial examples are a pervasive phenomenon of machine learning models where seemingly imperceptible perturbations to the input lead to misclassifications for otherwise statistically accurate models. We propose a geometric framework, drawing on tools from the manifold reconstruction literature, to analyze the high-dimensional geometry of adversarial examples. In particular, we highlight the importance of codimension: for low-dimensional data manifolds embedded in high-dimensional space there are many directions off the manifold in which to construct adversarial examples. Adversarial examples are a natural consequence of learning a decision boundary that classifies the low-dimensional data manifold well, but classifies points near the manifold incorrectly. 
Using our geometric framework we prove (1) a tradeoff between robustness under different norms, (2) that adversarial training in balls around the data is sample inefficient, and (3) sufficient sampling conditions under which nearest neighbor classifiers and ball-based adversarial training are robust.",http://arxiv.org/abs/1811.00525,2019,conferencePaper,"Khoury, Marc; Hadfield-Menell, Dylan",Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS) 2019 Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations,"The performance of imitation learning is typically upper-bounded by the performance of the demonstrator. While recent empirical results demonstrate that ranked demonstrations allow for better-than-demonstrator performance, preferences over demonstrations may be difficult to obtain, and little is known theoretically about when such methods can be expected to successfully extrapolate beyond the performance of the demonstrator. To address these issues, we first contribute a sufficient condition for better-than-demonstrator imitation learning and provide theoretical results showing why preferences over demonstrations can better reduce reward function ambiguity when performing inverse reinforcement learning. Building on this theory, we introduce Disturbance-based Reward Extrapolation (D-REX), a ranking-based imitation learning method that injects noise into a policy learned through behavioral cloning to automatically generate ranked demonstrations. These ranked demonstrations are used to efficiently learn a reward function that can then be optimized using reinforcement learning. We empirically validate our approach on simulated robot and Atari imitation learning benchmarks and show that D-REX outperforms standard imitation learning approaches and can significantly surpass the performance of the demonstrator. D-REX is the first imitation learning approach to achieve significant extrapolation beyond the demonstrator's performance without additional side-information or supervision, such as rewards or human preferences. By generating rankings automatically, we show that preference-based inverse reinforcement learning can be applied in traditional imitation learning settings where only unlabeled demonstrations are available.",http://arxiv.org/abs/1907.03976,2019,conferencePaper,"Brown, Daniel S.; Goo, Wonjoon; Niekum, Scott",Proceedings of the Conference on Robot Learning When Should an Effective Altruist Donate?,,https://lawdigitalcommons.bc.edu/philanthropy-forum/givingscholars2016/program/10,2016,journalArticle,"MacAskill, William",Forum on Philanthropy and the Public Good Transfer of Adversarial Robustness Between Perturbation Types,"We study the transfer of adversarial robustness of deep neural networks between different perturbation types. While most work on adversarial examples has focused on $L_\infty$ and $L_2$-bounded perturbations, these do not capture all types of perturbations available to an adversary. The present work evaluates 32 attacks of 5 different types against models adversarially trained on a 100-class subset of ImageNet. Our empirical results suggest that evaluating on a wide range of perturbation sizes is necessary to understand whether adversarial robustness transfers between perturbation types. We further demonstrate that robustness against one perturbation type may not always imply and may sometimes hurt robustness against other perturbation types. 
In light of these results, we recommend evaluation of adversarial defenses take place on a diverse range of perturbation types and sizes.",http://arxiv.org/abs/1905.01034,2019,manuscript,"Kang, Daniel; Sun, Yi; Brown, Tom; Hendrycks, Dan; Steinhardt, Jacob", Measuring and avoiding side effects using relative reachability,"How can we design reinforcement learning agents that avoid causing unnecessary disruptions to their environment? We argue that current approaches to penalizing side effects can introduce bad incentives in tasks that require irreversible actions, and in environments that contain sources of change other than the agent. For example, some approaches give the agent an incentive to prevent any irreversible changes in the environment, including the actions of other agents. We introduce a general definition of side effects, based on relative reachability of states compared to a default state, that avoids these undesirable incentives. Using a set of gridworld experiments illustrating relevant scenarios, we empirically compare relative reachability to penalties based on existing definitions and show that it is the only penalty among those tested that produces the desired behavior in all the scenarios.",,2018,book,"Krakovna, Viktoriya; Orseau, Laurent; Martic, Miljan; Legg, Shane", Radical Probabilism,"This is an expanded version of my talk. I assume a high degree of familiarity with Bayesian probability theory. Toward a New Technical Explanation of Technical Explanation -- an attempt to convey the practical implications of logical induction -- was one of my most-appreciated posts, but I don't really get the feeling that very many people have received the update. Granted, that post was speculative, sketching what a new technical explanation of technical explanation might look like. I think I can do a bit better now. If the implied project of that post had really been completed, I would expect new practical probabilistic reasoning tools, explicitly violating Bayes' law. For example, we might expect: * A new version of information theory. * An update to the ""prediction=compression"" maxim, either repairing it to incorporate the new cases, or explicitly denying it and providing a good intuitive account of why it was wrong. * A new account of concepts such as mutual information, allowing for the fact that variables have behavior over thinking time; for example, variables may initially be very correlated, but lose correlation as our picture of each variable becomes more detailed. * New ways of thinking about epistemology. * One thing that my post did manage to do was to spell out the importance of ""making advanced predictions"", a facet of epistemology which Bayesian thinking does not do justice to. * However, I left aspects of the problem of old evidence open, rather than giving a complete way to think about it. * New probabilistic structures. * Bayesian Networks are one really nice way to capture the structure of probability distributions, making them much easier to reason about. Is there anything similar for the new, wider space of probabilistic reasoning which has been opened up? Unfortunately, I still don't have any of those things to offer. The aim o",https://www.alignmentforum.org/posts/xJyY5QkQvNJpZLJRo/radical-probabilism-1,2020,blogPost,"Demski, Abram",AI Alignment Forum EnsembleDAgger: A Bayesian Approach to Safe Imitation Learning,"While imitation learning is often used in robotics, the approach frequently suffers from data mismatch and compounding errors. 
DAgger is an iterative algorithm that addresses these issues by aggregating training data from both the expert and novice policies, but does not consider the impact of safety. We present a probabilistic extension to DAgger, which attempts to quantify the confidence of the novice policy as a proxy for safety. Our method, EnsembleDAgger, approximates a Gaussian Process using an ensemble of neural networks. Using the variance as a measure of confidence, we compute a decision rule that captures how much we doubt the novice, thus determining when it is safe to allow the novice to act. With this approach, we aim to maximize the novice's share of actions, while constraining the probability of failure. We demonstrate improved safety and learning performance compared to other DAgger variants and classic imitation learning on an inverted pendulum and in the MuJoCo HalfCheetah environment.",http://arxiv.org/abs/1807.08364,2019,conferencePaper,"Menda, Kunal; Driggs-Campbell, Katherine; Kochenderfer, Mykel J.",2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Deception in Finitely Repeated Security Games,"Allocating resources to defend targets from attack is often complicated by uncertainty about the attacker’s capabilities, objectives, or other underlying characteristics. In a repeated interaction setting, the defender can collect attack data over time to reduce this uncertainty and learn an effective defense. However, a clever attacker can manipulate the attack data to mislead the defender, influencing the learning process toward its own benefit. We investigate strategic deception on the part of an attacker with private type information, who interacts repeatedly with a defender. We present a detailed computation and analysis of both players’ optimal strategies given the attacker may play deceptively. Computational experiments illuminate conditions conducive to strategic deception, and quantify benefits to the attacker. By taking into account the attacker’s deception capacity, the defender can significantly mitigate loss from misleading attack actions.",https://www.aaai.org/ojs/index.php/AAAI/article/view/4045,2019,conferencePaper,"Nguyen, Thanh H.; Wang, Yongzhao; Sinha, Arunesh; Wellman, Michael P.",Proceedings of the AAAI Conference on Artificial Intelligence Evaluating the Stability of Non-Adaptive Trading in Continuous Double Auctions,"The continuous double auction (CDA) is the predominant mechanism in modern securities markets. Many agent-based analyses of CDA environments rely on simple non-adaptive trading strategies like Zero Intelligence (ZI), which (as their name suggests) are quite limited. We examine the viability of this reliance through empirical game-theoretic analysis in a plausible market environment. Specifically, we evaluate the strategic stability of equilibria defined over a small set of ZI traders with respect to strategies found by reinforcement learning (RL) applied over a much larger policy space. RL can indeed find beneficial deviations from equilibria of ZI traders, by conditioning on signals of the likelihood a trade will execute or the favorability of the current bid and ask. Nevertheless, the surplus earned by well-calibrated ZI policies is empirically observed to be nearly as great as what the adaptive strategies can earn, despite their much more expressive policy space. 
Our findings generally support the use of equilibrated ZI traders in CDA studies.",,2018,conferencePaper,"Wright, Mason; Wellman, Michael P.",Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems Public Static: What is Abstraction?,"Author’s Note: Most of the posts in this sequence are essentially a log of work-in-progress. This post is intended as a more presentable (“public”) and higher-confidence (“static”) write-up of some formalizations of abstraction. Much of the material has appeared in other posts; the first two sections in particular are drawn almost verbatim from the opening “What is Abstraction?” post. Let's start with a few examples (borrowed from here) to illustrate what we're talking about: * We have a gas consisting of some huge number of particles. We throw away information about the particles themselves, instead keeping just a few summary statistics: average energy, number of particles, etc. We can then make highly precise predictions about things like e.g. pressure just based on the reduced information we've kept, without having to think about each individual particle. That reduced information is the ""abstract layer"" - the gas and its properties. * We have a bunch of transistors and wires on a chip. We arrange them to perform some logical operation, like maybe a NAND gate. Then, we throw away information about the underlying details, and just treat it as an abstract logical NAND gate. Using just the abstract layer, we can make predictions about what outputs will result from what inputs. Note that there’s some fuzziness - 0.01 V and 0.02 V are both treated as logical zero, and in rare cases there will be enough noise in the wires to get an incorrect output. * I tell my friend that I'm going to play tennis. I have ignored a huge amount of information about the details of the activity - where, when, what racket, what ball, with whom, all the distributions of every microscopic particle involved - yet my friend can still make some reliable predictions based on the abstract information I've provided. * When we abstract formulas like ""1+1=2*1"" and ""2+2=2*2"" into ""n+n=2*n"", we're obviously throwing out information about the valu",https://www.alignmentforum.org/posts/vDGvHBDuMtcPd8Lks/public-static-what-is-abstraction,2020,blogPost,"Wentworth, John",AI Alignment Forum Probabilities on Sentences in an Expressive Logic,,https://linkinghub.elsevier.com/retrieve/pii/S157086831300013X,2013,journalArticle,"Hutter, Marcus; Lloyd, John W.; Ng, Kee Siong; Uther, William T.B.",Journal of Applied Logic Human-Interactive Subgoal Supervision for Efficient Inverse Reinforcement Learning,"Humans are able to understand and perform complex tasks by strategically structuring the tasks into incremental steps or subgoals. For a robot attempting to learn to perform a sequential task with critical subgoal states, such states can provide a natural opportunity for interaction with a human expert. This paper analyzes the benefit of incorporating a notion of subgoals into Inverse Reinforcement Learning (IRL) with a Human-In-The-Loop (HITL) framework. The learning process is interactive, with a human expert first providing input in the form of full demonstrations along with some subgoal states. These subgoal states define a set of subtasks for the learning agent to complete in order to achieve the final goal. The learning agent queries for partial demonstrations corresponding to each subtask as needed when the agent struggles with the subtask. 
The proposed Human Interactive IRL (HI-IRL) framework is evaluated on several discrete path-planning tasks. We demonstrate that subgoal-based interactive structuring of the learning task results in significantly more efficient learning, requiring only a fraction of the demonstration data needed for learning the underlying reward function with the baseline IRL model.",http://arxiv.org/abs/1806.08479,2018,conferencePaper,"Pan, Xinlei; Ohn-Bar, Eshed; Rhinehart, Nicholas; Xu, Yan; Shen, Yilin; Kitani, Kris M.","Proc. of the 17th International Conference on Autonomous Agents and Multiagent Systems(AAMAS 2018)," Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model,"Deep reinforcement learning (RL) algorithms can use high-capacity deep networks to learn directly from image observations. However, these kinds of observation spaces present a number of challenges in practice, since the policy must now solve two problems: a representation learning problem, and a task learning problem. In this paper, we aim to explicitly learn representations that can accelerate reinforcement learning from images. We propose the stochastic latent actor-critic (SLAC) algorithm: a sample-efficient and high-performing RL algorithm for learning policies for complex continuous control tasks directly from high-dimensional image inputs. SLAC learns a compact latent representation space using a stochastic sequential latent variable model, and then learns a critic model within this latent space. By learning a critic within a compact state space, SLAC can learn much more efficiently than standard RL methods. The proposed model improves performance substantially over alternative representations as well, such as variational autoencoders. In fact, our experimental evaluation demonstrates that the sample efficiency of our resulting method is comparable to that of model-based RL methods that directly use a similar type of model for control. Furthermore, our method outperforms both model-free and model-based alternatives in terms of final performance and sample efficiency, on a range of difficult image-based control tasks.",http://arxiv.org/abs/1907.00953,2019,manuscript,"Lee, Alex X.; Nagabandi, Anusha; Abbeel, Pieter; Levine, Sergey", Agent-agnostic human-in-the-loop reinforcement learning,,,2017,conferencePaper,"Abel, David; Salvatier, John; Stuhlmüller, Andreas; Evans, Owain",30th Conference on Neural Information Processing Systems (NIPS 2016) Avoiding Side Effects in Complex Environments,"Reward function specification can be difficult, even in simple environments. Realistic environments contain millions of states. Rewarding the agent for making a widget may be easy, but penalizing the multitude of possible negative side effects is hard. In toy environments, Attainable Utility Preservation (AUP) avoids side effects by penalizing shifts in the ability to achieve randomly generated goals. We scale this approach to large, randomly generated environments based on Conway’s Game of Life. By preserving optimal value for a single randomly generated reward function, AUP incurs modest overhead, completes the specified task, and avoids side effects.",http://arxiv.org/abs/2006.06547,2020,conferencePaper,"Turner, Alexander Matt; Ratzlaff, Neale; Tadepalli, Prasad",Advances in Neural Information Processing Systems 33 pre-proceedings (NeurIPS 2020) Bottle Caps Aren't Optimisers,"Crossposted from my blog. 
One thing I worry about sometimes is people writing code with optimisers in it, without realising that that's what they were doing. An example of this: suppose you were doing deep reinforcement learning, doing optimisation to select a controller (that is, a neural network that takes a percept and returns an action) that generated high reward in some environment. Alas, unknown to you, this controller actually did optimisation itself to select actions that score well according to some metric that so far has been closely related to your reward function. In such a scenario, I'd be wary about your deploying that controller, since the controller itself is doing optimisation which might steer the world into a weird and unwelcome place. In order to avoid such scenarios, it would be nice if one could look at an algorithm and determine if it was doing optimisation. Ideally, this would involve an objective definition of optimisation that could be checked from the source code of the algorithm, rather than something like ""an optimiser is a system whose behaviour can't usefully be predicted mechanically, but can be predicted by assuming it near-optimises some objective function"", since such a definition breaks down when you have the algorithm's source code and can compute its behaviour mechanically. You might think about optimisation as follows: a system is optimising some objective function to the extent that that objective function attains much higher values than would be attained if the system didn't exist, or were doing some other random thing. This type of definition includes those put forward by Yudkowsky and Oesterheld. However, I think there are crucial counterexamples to this style of definition. Firstly, consider a lid screwed onto a bottle of water. If not for this lid, or if the lid had a hole in it or were more loose, the water would likely exit the bottle via evaporation or being knocked over, but with the lid, the water stays in the b",https://www.alignmentforum.org/posts/26eupx3Byc8swRS7f/bottle-caps-aren-t-optimisers,2018,blogPost,"Filan, Daniel",AI Alignment Forum Could Slaughterbots Wipe Out Humanity? Assessment of the Global Catastrophic Risk Posed by Autonomous Weapons,,,2018,manuscript,"Turchin, Alexey", Independent reinforcement learners in cooperative Markov games: a survey regarding coordination problems,"Abstract In the framework of fully cooperative multi-agent systems, independent (non-communicative) agents that learn by reinforcement must overcome several difficulties to manage to coordinate. This paper identifies several challenges responsible for the non-coordination of independent agents: Pareto-selection, non-stationarity, stochasticity, alter-exploration and shadowed equilibria. A selection of multi-agent domains is classified according to those challenges: matrix games, Boutilier's coordination game, predators pursuit domains and a special multi-state game. Moreover, the performance of a range of algorithms for independent reinforcement learners is evaluated empirically. Those algorithms are Q-learning variants: decentralized Q-learning, distributed Q-learning, hysteretic Q-learning, recursive frequency maximum Q-value and win-or-learn fast policy hill climbing. An overview of the learning algorithms’ strengths and weaknesses against each challenge concludes the paper and can serve as a basis for choosing the appropriate algorithm for a new domain. 
Furthermore, the distilled challenges may assist in the design of new learning algorithms that overcome these problems and achieve higher performance in multi-agent applications.",https://www.cambridge.org/core/product/identifier/S0269888912000057/type/journal_article,2012,journalArticle,"Matignon, Laetitia; Laurent, Guillaume J.; Le Fort-Piat, Nadine",The Knowledge Engineering Review Doomsday Rings Twice,,https://globalprioritiesinstitute.org/wp-content/uploads/2019/Mogensen_doomsday_rings_twice.pdf,2019,manuscript,"Mogensen, Andreas", Forecasting using incomplete models,"We consider the task of forecasting an infinite sequence of future observations based on some number of past observations, where the probability measure generating the observations is ""suspected"" to satisfy one or more of a set of incomplete models, i.e. convex sets in the space of probability measures. This setting is in some sense intermediate between the realizable setting where the probability measure comes from some known set of probability measures (which can be addressed using e.g. Bayesian inference) and the unrealizable setting where the probability measure is completely arbitrary. We demonstrate a method of forecasting which guarantees that, whenever the true probability measure satisfies an incomplete model in a given countable set, the forecast converges to the same incomplete model in the (appropriately normalized) Kantorovich-Rubinstein metric. This is analogous to merging of opinions for Bayesian inference, except that convergence in the Kantorovich-Rubinstein metric is weaker than convergence in total variation.",http://arxiv.org/abs/1705.04630,2019,manuscript,"Kosoy, Vanessa", Concept Learning with Energy-Based Models,"Many hallmarks of human intelligence, such as generalizing from limited experience, abstract reasoning and planning, analogical reasoning, creative problem solving, and capacity for language require the ability to consolidate experience into concepts, which act as basic building blocks of understanding and reasoning. We present a framework that defines a concept by an energy function over events in the environment, as well as an attention mask over entities participating in the event. Given few demonstration events, our method uses inference-time optimization procedure to generate events involving similar concepts or identify entities involved in the concept. We evaluate our framework on learning visual, quantitative, relational, temporal concepts from demonstration events in an unsupervised manner. Our approach is able to successfully generate and identify concepts in a few-shot setting and resulting learned concepts can be reused across environments. Example videos of our results are available at sites.google.com/site/energyconceptmodels",http://arxiv.org/abs/1811.02486,2018,manuscript,"Mordatch, Igor", Learning to summarize with human feedback,,,2020,journalArticle,"Stiennon, Nisan; Ouyang, Long; Wu, Jeffrey; Ziegler, Daniel; Lowe, Ryan; Voss, Chelsea; Radford, Alec; Amodei, Dario; Christiano, Paul F.",Advances in Neural Information Processing Systems Inner Alignment: Explain like I'm 12 Edition,"(This is an unofficial explanation of Inner Alignment based on the Miri paper Risks from Learned Optimization in Advanced Machine Learning Systems (which is almost identical to the LW sequence) and the Future of Life podcast with Evan Hubinger (Miri/LW). It's meant for anyone who found the sequence too long/challenging/technical to read.) 
Note that bold and italics means ""this is a new term I'm introducing,"" whereas underline and italics is used for emphasis. WHAT IS INNER ALIGNMENT? Let's start with an abridged guide to how Machine Learning works: 1. Choose a problem 2. Decide on a space of possible solutions 3. Find a good solution from that space If the problem is ""find a tool that can look at any image and decide whether or not it contains a cat,"" then each conceivable set of rules for answering this question (formally, each function from the set of all pixels to the set {yes, no}) defines one solution. We call each such solution a model. The space of possible models is depicted below. Since that's all possible models, most of them are utter nonsense. Pick a random one, and you're as likely to end up with a car-recognizer than a cat-recognizer – but far more likely with an algorithm that does nothing we can interpret. Note that even the examples I annotated aren't typical – most models would be more complex while still doing nothing related to cats. Nonetheless, somewhere in there is a model that would do a decent job on our problem. In the above, that's the one that says, ""I look for cats."" How does ML find such a model? One way that does not work is trying out all of them. That's because the space is too large: it might contain over 10^1000000 candidates. Instead, there's this thing called Stochastic Gradient Descent (SGD). Here's how it works: SGD begins with some (probably terrible) model and then proceeds in steps. In each step, it switches to another model that is ""close"" and hopefully a little better. Eventually, it stops and outputs the mo",https://www.alignmentforum.org/posts/AHhCrJ2KpTjsCSwbt/inner-alignment-explain-like-i-m-12-edition,2020,blogPost,"Harth, Rafael",AI Alignment Forum Research Agenda v0.9: Synthesising a human's preferences into a utility function,"I'm now in a position where I can see a possible route to a safe/survivable/friendly Artificial Intelligence being developed. I'd give a 10+% chance of it being possible this way, and a 95% chance that some of these ideas will be very useful for other methods of alignment. So I thought I'd encode the route I'm seeing as research agenda; this is the first public draft of it. Clarity, rigour, and practicality: that's what this agenda needs. Writing this agenda has clarified a lot of points for me, to the extent that some of it now seems, in retrospect, just obvious and somewhat trivial - ""of course that's the way you have to do X"". But more clarification is needed in the areas that remain vague. And, once these are clarified enough for humans to understand, they need to be made mathematically and logically rigorous - and ultimately, cashed out into code, and tested and experimented with. So I'd appreciate any comments that could help with these three goals, and welcome anyone interested in pursuing research along these lines over the long-term. Note: I periodically edit this document, to link it to more recent research ideas/discoveries. 0 THE FUNDAMENTAL IDEA This agenda fits itself into the broad family of Inverse [https://ai.stanford.edu/~ang/papers/icml00-irl.pdf] Reinforcement [https://arxiv.org/abs/1606.03137] Learning [https://www.youtube.com/watch?v=Ts-nTIYDXok]: delegating most of the task of inferring human preferences to the AI itself. Most of the task, since it's been shown that humans need to build the right assumptions into the AI, or else the preference learning will fail [https://arxiv.org/abs/1712.05812]. 
To get these ""right assumptions"", this agenda will look into what preferences actually are, and how they may be combined together. There are hence four parts to the research agenda: 1. A way of identifying the (partial) preferences of a given human H. 2. A way for ultimately synthesising a utility function U_H",https://www.lesswrong.com/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into,2019,blogPost,"Armstrong, Stuart",LessWrong Artificial General Intelligence: Timeframes & Policy White Paper,,https://foresight.org/publications/AGI-Timeframes&PolicyWhitePaper.pdf,2017,report,"Duettmann, Allison", AGI Safety From First Principles,,,2020,manuscript,"Ngo, Richard", Evidence on good forecasting practices from the Good Judgment Project,"According to experience and data from the Good Judgment Project, the following are associated with successful forecasting, in rough decreasing order of combined importance and confidence: Past performance in the same broad domain Making more predictions on the same question Deliberation time Collaboration on teams Intelligence Domain expertise Having taken a one-hour training module on...",https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project/,2019,blogPost,"Kokotajlo, Daniel",AI Impacts Worst-case guarantees,"Reviewing the prospects for training models to behave acceptably on all inputs, rather than just the training distribution.",https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d,2019,blogPost,"Christiano, Paul",AI Alignment (Medium) Challenges to Christiano’s capability amplification proposal,"The following is a basically unedited summary I wrote up on March 16 of my take on Paul Christiano’s AGI alignment approach (described in “ALBA” and “Iterated Distillation and Amplification”). Where Paul had comments and replies, I’ve included them below. -------------------------------------------------------------------------------- I see a lot of free variables with respect to what exactly Paul might have in mind. I've sometimes tried presenting Paul with my objections and then he replies in a way that locally answers some of my question but I think would make other difficulties worse. My global objection is thus something like, ""I don't see any concrete setup and consistent simultaneous setting of the variables where this whole scheme works."" These difficulties are not minor or technical; they appear to me quite severe. I try to walk through the details below. It should be understood at all times that I do not claim to be able to pass Paul’s ITT for Paul’s view and that this is me criticizing my own, potentially straw misunderstanding of what I imagine Paul might be advocating. Paul Christiano Overall take: I think that these are all legitimate difficulties faced by my proposal and to a large extent I agree with Eliezer's account of those problems (though not his account of my current beliefs). I don't understand exactly how hard Eliezer expects these problems to be; my impression is ""just about as hard as solving alignment from scratch,"" but I don't have a clear sense of why. To some extent we are probably disagreeing about alternatives. From my perspective, the difficulties with my approach (e.g. better understanding the forms of optimization that cause trouble, or how to avoid optimization daemons in systems about as smart as you are, or how to address X-and-only-X) are also problems for alternative alignment approaches. 
I think it's a mistake to think that tiling agents, or decision theory, or naturalized induction, or logical uncertainty, are go",https://www.lesswrong.com/posts/S7csET9CgBtpi7sCh/challenges-to-christiano-s-capability-amplification-proposal,2018,blogPost,"Yudkowsky, Eliezer",LessWrong How Would Catastrophic Risks Affect Prospects for Compromise?,"Global catastrophic risks – such as biotech disasters or nuclear war – would cause major damage in the short run, but their effects on the long-run trajectory that humanity takes are also significant. In particular, to the extent these disasters increase risks of war, they seem likely to precipitate AI arms races between nations and worsen prospects for compromise.",https://longtermrisk.org/how-would-catastrophic-risks-affect-prospects-for-compromise/,2015,blogPost,"Tomasik, Brian",Center on Long-Term Risk Learning from Observations Using a Single Video Demonstration and Human Feedback,"In this paper, we present a method for learning from video demonstrations by using human feedback to construct a mapping between the standard representation of the agent and the visual representation of the demonstration. In this way, we leverage the advantages of both these representations, i.e., we learn the policy using standard state representations, but are able to specify the expected behavior using video demonstration. We train an autonomous agent using a single video demonstration and use human feedback (using numerical similarity rating) to map the standard representation to the visual representation with a neural network. We show the effectiveness of our method by teaching a hopper agent in the MuJoCo to perform a backflip using a single video demonstration generated in MuJoCo as well as from a real-world YouTube video of a person performing a backflip. Additionally, we show that our method can transfer to new tasks, such as hopping, with very little human feedback.",http://arxiv.org/abs/1909.13392,2019,conferencePaper,"Gandhi, Sunil; Oates, Tim; Mohsenin, Tinoosh; Waytowich, Nicholas",AAMAS '19: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems Is Brain Emulation Dangerous?,,https://content.sciendo.com/doi/10.2478/jagi-2013-0011,2013,journalArticle,"Eckersley, Peter; Sandberg, Anders",Journal of Artificial General Intelligence The paralysis argument,"Given plausible assumptions about the long-run impact of our everyday actions, we show that standard non-consequentialist constraints on doing harm entail that we should try to do as little as possible in our lives. We call this the Paralysis Argument. After laying out the argument, we consider and respond to...",https://globalprioritiesinstitute.org/william-macaskill-andreas-mogensen-the-paralysis-argument/,2019,manuscript,"MacAskill, William; Mogensen, Andreas", AI Safety Gridworlds,"We present a suite of reinforcement learning environments illustrating various safety properties of intelligent agents. These problems include safe interruptibility, avoiding side effects, absent supervisor, reward gaming, safe exploration, as well as robustness to self-modification, distributional shift, and adversaries. To measure compliance with the intended safe behavior, we equip each environment with a performance function that is hidden from the agent. This allows us to categorize AI safety problems into robustness and specification problems, depending on whether the performance function corresponds to the observed reward function. 
We evaluate A2C and Rainbow, two recent deep reinforcement learning agents, on our environments and show that they are not able to solve them satisfactorily.",http://arxiv.org/abs/1711.09883,2017,manuscript,"Leike, Jan; Martic, Miljan; Krakovna, Victoria; Ortega, Pedro A.; Everitt, Tom; Lefrancq, Andrew; Orseau, Laurent; Legg, Shane", DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills,"A longstanding goal in character animation is to combine data-driven specification of behavior with a system that can execute a similar behavior in a physical simulation, thus enabling realistic responses to perturbations and environmental variation. We show that well-known reinforcement learning (RL) methods can be adapted to learn robust control policies capable of imitating a broad range of example motion clips, while also learning complex recoveries, adapting to changes in morphology, and accomplishing user-specified goals. Our method handles keyframed motions, highly-dynamic actions such as motion-captured flips and spins, and retargeted motions. By combining a motion-imitation objective with a task objective, we can train characters that react intelligently in interactive settings, e.g., by walking in a desired direction or throwing a ball at a user-specified target. This approach thus combines the convenience and motion quality of using motion clips to define the desired style and appearance, with the flexibility and generality afforded by RL methods and physics-based animation. We further explore a number of methods for integrating multiple clips into the learning process to develop multi-skilled agents capable of performing a rich repertoire of diverse skills. We demonstrate results using multiple characters (human, Atlas robot, bipedal dinosaur, dragon) and a large variety of skills, including locomotion, acrobatics, and martial arts.",http://arxiv.org/abs/1804.02717,2018,journalArticle,"Peng, Xue Bin; Abbeel, Pieter; Levine, Sergey; van de Panne, Michiel",ACM Transactions on Graphics Avoiding Another AI Winter,,http://ieeexplore.ieee.org/document/4475849/,2008,journalArticle,"Hendler, James",IEEE Intelligent Systems "Energy, Complexity, and the Singularity","SummaryThis paper explores the relevance of ecological limitations such as climate change and resource exhaustion to the possibility of a technologically-mediated “intelligence explosion” in the near future. The imminent risks of global carbonization and loss of biodiversity, as well as the dependency of technological development on a healthy biosphere, are greatly underestimated by singularity theorists such as Ray Kurzweil. While development of information technology should continue, we cannot rely on hypothetical advances in AI to get us out of our present ecological bottleneck. Rather, we should do everything we can to foster human ingenuity, the one factor that has a record of generating the game-changing innovations that our species has relied upon to overcome survival challenges in our past.",https://doi.org/10.1007/978-3-662-54033-6_8,2017,bookSection,"Peacock, Kent A.",The Technological Singularity: Managing the Journey How should AI debate be judged?,"[Epistemic status: thinking out loud. I haven't thought that much about AI debate, and may be missing basic things.] Arguments for the correctness of debate and debate-like systems rely on assumptions like ""it's easier to point out problems with an argument than it is to craft misleading arguments"". 
Granted that assumption, however, I'm still not convinced that these proposals make very much sense. Perhaps I'm missing something. My problem is the human judge. Quoting the debate paper: To play this game with a human, we need instructions for how the human should decide who wins. These instructions are in natural language, such as “The winner is the agent who said the most useful true thing.” In order for debate to work for a problem class C, several things about the judge's instructions need to be true: * There needs to be a strategy s which forces the equilibrium to be a truthful one for problems in C. * The strategy s also needs to provide a good training signal when things aren't in equilibrium, so that it's plausible the equilibrium will be found. * It needs to be psychologically plausible that a human (with some coaching) will carry out s. In particular, I'm worried that we need psychological plausibility in two different cases: * It needs to be psychologically plausible that a human will carry out s when the system is performing poorly, IE, during early/middle training. * It needs to be psychologically plausible that a human will carry out s when the system is performing well, IE, during late training. These thoughts were inspired by this thread, which discusses the example of adding a list of numbers. For the sake of the thought experiment, we imagine humans can't add more than two numbers, but want the AI system to correctly add arbitrarily many numbers. The most straightforward strategy for the human judge is to decide the debate honestly: rule in favor of the side which seems most likely to be true (or, in the case of Evan's mark",https://www.alignmentforum.org/posts/m7oGxvouzzeQKiGJH/how-should-ai-debate-be-judged,2020,blogPost,"Demski, Abram",AI Alignment Forum Safe Policy Learning from Observations,"An algorithm for learning to improve upon the behavior demonstrated by multiple unknown policies, by combining imitation learning and a novel safe policy improvement step that is resilient to value...",https://openreview.net/forum?id=rkx8l3Cctm,2018,journalArticle,"Sarafian, Elad; Tamar, Aviv; Kraus, Sarit", Concrete Problems for Autonomous Vehicle Safety: Advantages of Bayesian Deep Learning,"Autonomous vehicle (AV) software is typically composed of a pipeline of individual components, linking sensor inputs to motor outputs. Erroneous component outputs propagate downstream, hence safe AV software must consider the ultimate effect of each component’s errors. Further, improving safety alone is not sufficient. Passengers must also feel safe to trust and use AV systems. To address such concerns, we investigate three under-explored themes for AV research: safety, interpretability, and compliance. Safety can be improved by quantifying the uncertainties of component outputs and propagating them forward through the pipeline. Interpretability is concerned with explaining what the AV observes and why it makes the decisions it does, building reassurance with the passenger. Compliance refers to maintaining some control for the passenger. We discuss open challenges for research within these themes. 
We highlight the need for concrete evaluation metrics, propose example problems, and highlight possible solutions.",https://www.ijcai.org/proceedings/2017/661,2017,conferencePaper,"McAllister, Rowan; Gal, Yarin; Kendall, Alex; van der Wilk, Mark; Shah, Amar; Cipolla, Roberto; Weller, Adrian",Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence Getting A Clue: A Method For Explaining Uncertainty Estimates,"Uncertainty estimates from machine learning models allow domain experts to assess prediction reliability and can help practitioners identify model failure modes. We introduce Counterfactual Latent Uncertainty Explanations (CLUE), a method that answers: “How should we change an input such that our model produces more certain predictions?” We perform a user study, concluding that CLUE allows users to understand which regions of input space contribute to predictive uncertainty.",,2020,conferencePaper,"Antoran, Javier; Weller, Adrian; Bhatt, Umang; Adel, Tameem; Hernandez-Lobato, Jose Miguel", My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda,"Crossposted from the EA forum You can read this post as a google docs instead (IMO much better to read). This document aims to clarify the AI safety research agenda by Paul Christiano (IDA) and the arguments around how promising it is. Target audience: All levels of technical expertise. The less knowledge about IDA someone has, the more I expect them to benefit from the writeup. Writing policy: I aim to be as clear and concrete as possible and wrong rather than vague to identify disagreements and where I am mistaken. Things will err on the side of being too confidently expressed. Almost all footnotes are content and not references. Epistemic Status: The document is my best guess on IDA and might be wrong in important ways. I have not verified all of the content with somebody working on IDA. I spent ~4 weeks on this and have no prior background in ML, CS or AI safety. I wrote this document last summer (2019) as part of my summer research fellowship at FHI. I was planning to restructure, complete and correct it since but haven’t gotten to it for a year, so decided to just publish it as it is. The document has not been updated, i.e. nothing that has been released since September 2019 is incorporated into this document. Paul Christiano generously reviewed part of this summary. I added his comments verbatim in the document. Apologies for the loss of readability due to this. This doesn’t imply he endorses any part of this document. PURPOSE OF THIS DOCUMENT: CLARIFYING IDA IDA is Paul Christiano’s AI safety research agenda.[1] Christiano works at OpenAI which is one of the main actors in AI safety and IDA is by many considered the most complete[2] AI safety agenda. However, people who are not directly working on IDA are often confused about how exactly to understand the agenda. Clarifying IDA would make it more accessible for technical people to work on and easier to assess for nontechnical people who want to think about its implications. I believe that there are",https://www.lesswrong.com/posts/PT8vSxsusqWuN7JXp/my-understanding-of-paul-christiano-s-iterated-amplification,2020,blogPost,"Nguyen, Chi",AI Alignment Forum OpenAI Gym,"OpenAI Gym is a toolkit for reinforcement learning research. It includes a growing collection of benchmark problems that expose a common interface, and a website where people can share their results and compare the performance of algorithms. 
This whitepaper discusses the components of OpenAI Gym and the design decisions that went into the software.",http://arxiv.org/abs/1606.01540,2016,manuscript,"Brockman, Greg; Cheung, Vicki; Pettersson, Ludwig; Schneider, Jonas; Schulman, John; Tang, Jie; Zaremba, Wojciech", Rational Consensus,"We provide a game-theoretic analysis of consensus, assuming that processes are controlled by rational agents and may fail by crashing. We consider agents that \emph{care only about consensus}: that is, (a) an agent's utility depends only on the consensus value achieved (and not, for example, on the number of messages the agent sends) and (b) agents strictly prefer reaching consensus to not reaching consensus. We show that, under these assumptions, there is no \emph{ex post Nash Equilibrium}, even with only one failure. Roughly speaking, this means that there must always exist a \emph{failure pattern} (a description of who fails, when they fail, and which agents they do not send messages to in the round that they fail) and initial preferences for which an agent can gain by deviating. On the other hand, if we assume that there is a distribution $\pi$ on the failure patterns and initial preferences, then under minimal assumptions on $\pi$, there is a Nash equilibrium that tolerates $f$ failures (i.e., $\pi$ puts probability 1 on there being at most $f$ failures) if $f+1 < n$ (where $n$ is the total number of agents). Moreover, we show that a slight extension of the Nash equilibrium strategy is also a \emph{sequential} equilibrium (under the same assumptions about the distribution $\pi$).",http://arxiv.org/abs/2005.10141,2016,conferencePaper,"Halpern, Joseph Y.; Vilaca, Xavier",Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing AI Safety Needs Social Scientists,,https://distill.pub/2019/safety-needs-social-scientists,2019,journalArticle,"Irving, Geoffrey; Askell, Amanda",Distill Identifying category representations for complex stimuli using discrete Markov chain Monte Carlo with people,"With the explosion of “big data,” digital repositories of texts and images are growing rapidly. These datasets present new opportunities for psychological research, but they require new methodologies before researchers can use these datasets to yield insights into human cognition. We present a new method that allows psychological researchers to take advantage of text and image databases: a procedure for measuring human categorical representations over large datasets of items, such as arbitrary words or pictures. We call this method discrete Markov chain Monte Carlo with people (d-MCMCP). We illustrate our method by evaluating the following categories over datasets: emotions as represented by facial images, moral concepts as represented by relevant words, and seasons as represented by images drawn from large online databases. Three experiments demonstrate that d-MCMCP is powerful and flexible enough to work with complex, naturalistic stimuli drawn from large online databases.",https://doi.org/10.3758/s13428-019-01201-9,2019,journalArticle,"Hsu, Anne S.; Martin, Jay B.; Sanborn, Adam N.; Griffiths, Thomas L.",Behavior Research Methods What counts as defection?,"Thanks to Michael Dennis for proposing the formal definition; to Andrew Critch for pointing me in this direction; to Abram Demski for proposing non-negative weighting; and to Alex Appel, Scott Emmons, Evan Hubinger, philh, Rohin Shah, and Carroll Wainwright for their feedback and ideas. 
There's a good chance I'd like to publish this at some point as part of a larger work. However, I wanted to make the work available now, in case that doesn't happen soon. They can't prove the conspiracy... But they could, if Steve runs his mouth. The police chief stares at you. You stare at the table. You'd agreed (sworn!) to stay quiet. You'd even studied game theory together. But, you hadn't understood what an extra year of jail meant. The police chief stares at you. Let Steve be the gullible idealist. You have a family waiting for you. Sunlight stretches across the valley, dappling the grass and warming your bow. Your hand anxiously runs along the bowstring. A distant figure darts between trees, and your stomach rumbles. The day is near spent. The stags run strong and free in this land. Carla should meet you there. Shouldn't she? Who wants to live like a beggar, subsisting on scraps of lean rabbit meat? In your mind's eye, you reach the stags, alone. You find one, and your arrow pierces its barrow. The beast bucks and bursts away; the rest of the herd follows. You slump against the tree, exhausted, and never open your eyes again. You can't risk it. People talk about 'defection' in social dilemma games, from the prisoner's dilemma to stag hunt to chicken. In the tragedy of the commons, we talk about defection. The concept has become a regular part of LessWrong discourse. Informal definition. A player defects when they increase their personal payoff at the expense of the group. This informal definition is no secret, being echoed from the ancient Formal Models of Dilemmas in Social Decision-Making to the recent Classifying games like the Prisoner's Dilemma: you can mo",https://www.alignmentforum.org/posts/8LEPDY36jBYpijrSw/what-counts-as-defection,2020,blogPost,"Turner, Alex",AI Alignment Forum Confidence-aware motion prediction for real-time collision avoidance 1,"One of the most difficult challenges in robot motion planning is to account for the behavior of other moving agents, such as humans. Commonly, practitioners employ predictive models to reason about where other agents are going to move. Though there has been much recent work in building predictive models, no model is ever perfect: an agent can always move unexpectedly, in a way that is not predicted or not assigned sufficient probability. In such cases, the robot may plan trajectories that appear safe but, in fact, lead to collision. Rather than trust a model’s predictions blindly, we propose that the robot should use the model’s current predictive accuracy to inform the degree of confidence in its future predictions. This model confidence inference allows us to generate probabilistic motion predictions that exploit modeled structure when the structure successfully explains human motion, and degrade gracefully whenever the human moves unexpectedly. We accomplish this by maintaining a Bayesian belief over a single parameter that governs the variance of our human motion model. We couple this prediction algorithm with a recently proposed robust motion planner and controller to guide the construction of robot trajectories that are, to a good approximation, collision-free with a high, user-specified probability. 
We provide extensive analysis of the combined approach and its overall safety properties by establishing a connection to reachability analysis, and conclude with a hardware demonstration in which a small quadcopter operates safely in the same space as a human pedestrian.",http://journals.sagepub.com/doi/10.1177/0278364919859436,2019,journalArticle,"Fridovich-Keil, David; Bajcsy, Andrea; Fisac, Jaime F; Herbert, Sylvia L; Wang, Steven; Dragan, Anca D; Tomlin, Claire J",The International Journal of Robotics Research Imitation Learning as $f$-Divergence Minimization,"We address the problem of imitation learning with multi-modal demonstrations. Instead of attempting to learn all modes, we argue that in many tasks it is sufficient to imitate any one of them. We show that the state-of-the-art methods such as GAIL and behavior cloning, due to their choice of loss function, often incorrectly interpolate between such modes. Our key insight is to minimize the right divergence between the learner and the expert state-action distributions, namely the reverse KL divergence or I-projection. We propose a general imitation learning framework for estimating and minimizing any f-Divergence. By plugging in different divergences, we are able to recover existing algorithms such as Behavior Cloning (Kullback-Leibler), GAIL (Jensen Shannon) and Dagger (Total Variation). Empirical results show that our approximate I-projection technique is able to imitate multi-modal behaviors more reliably than GAIL and behavior cloning.",http://arxiv.org/abs/1905.12888,2020,manuscript,"Ke, Liyiming; Choudhury, Sanjiban; Barnes, Matt; Sun, Wen; Lee, Gilwoo; Srinivasa, Siddhartha", The Wisdom of Individuals: Exploring People's Knowledge About Everyday Events Using Iterated Learning,,http://doi.wiley.com/10.1111/j.1551-6709.2009.01045.x,2009,journalArticle,"Lewandowsky, Stephan; Griffiths, Thomas L.; Kalish, Michael L.",Cognitive Science Formal Language Constraints for Markov Decision Processes,"In order to satisfy safety conditions, an agent may be constrained from acting freely. A safe controller can be designed a priori if an environment is well understood, but not when learning is employed. In particular, reinforcement learned (RL) controllers require exploration, which can be hazardous in safety critical situations. We study the benefits of giving structure to the constraints of a constrained Markov decision process by specifying them in formal languages as a step towards using safety methods from software engineering and controller synthesis. We instantiate these constraints as finite automata to efficiently recognise constraint violations. Constraint states are then used to augment the underlying MDP state and to learn a dense cost function, easing the problem of quickly learning joint MDP/constraint dynamics. 
We empirically evaluate the effect of these methods on training a variety of RL algorithms over several constraints specified in Safety Gym, MuJoCo, and Atari environments.",https://arxiv.org/abs/1910.01074v3,2019,manuscript,"Quint, Eleanor; Xu, Dong; Flint, Samuel; Scott, Stephen; Dwyer, Matthew", Non-pharmacological cognitive enhancement,,,2013,journalArticle,"Dresler, Martin; Sandberg, Anders; Ohla, Kathrin; Bublitz, Christoph; Trenado, Carlos; Mroczko-Wąsowicz, Aleksandra; Kühn, Simone; Repantis, Dimitris",Neuropharmacology On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models,"Recent work has shown that it is possible to train deep neural networks that are provably robust to norm-bounded adversarial perturbations. Most of these methods are based on minimizing an upper bound on the worst-case loss over all possible adversarial perturbations. While these techniques show promise, they often result in difficult optimization procedures that remain hard to scale to larger networks. Through a comprehensive analysis, we show how a simple bounding technique, interval bound propagation (IBP), can be exploited to train large provably robust neural networks that beat the state-of-the-art in verified accuracy. While the upper bound computed by IBP can be quite weak for general networks, we demonstrate that an appropriate loss and clever hyper-parameter schedule allow the network to adapt such that the IBP bound is tight. This results in a fast and stable learning algorithm that outperforms more sophisticated methods and achieves state-of-the-art results on MNIST, CIFAR-10 and SVHN. It also allows us to train the largest model to be verified beyond vacuous bounds on a downscaled version of IMAGENET.",http://arxiv.org/abs/1810.12715,2019,conferencePaper,"Gowal, Sven; Dvijotham, Krishnamurthy; Stanforth, Robert; Bunel, Rudy; Qin, Chongli; Uesato, Jonathan; Arandjelovic, Relja; Mann, Timothy; Kohli, Pushmeet","arXiv:1810.12715 [cs, stat]" Using vector fields to visualise preferences and make them consistent,"This post was written for Convergence Analysis by Michael Aird, based on ideas from Justin Shovelain and with ongoing guidance from him. Throughout the post, “I” will refer to Michael, while “we” will refer to Michael and Justin or to Convergence as an organisation. Epistemic status: High confidence in the core ideas on an abstract level. Claims about the usefulness of those ideas, their practical implications, and how best to concretely/mathematically implement them are more speculative; one goal in writing this post is to receive feedback on those things. I’m quite new to many of the concepts covered in this post, but Justin is more familiar with them. OVERVIEW This post outlines: * What vector fields are * How they can be used to visualise preferences * How utility functions can be generated from “preference vector fields” (PVFs) * How PVFs can be extrapolated from limited data on preferences * How to visualise inconsistent preferences (as “curl”) * A rough idea for how to “remove curl” to generate consistent utility functions * Possible areas for future research We expect this to provide useful tools and insights for various purposes, most notably AI alignment, existential risk strategy, and rationality. This post is structured modularly; different sections may be of interest to different readers, and should be useful in isolation from the rest of the post. 
The post also includes links to articles and videos introducing relevant concepts, to make the post accessible to readers without relevant technical backgrounds. VECTOR FIELDS AND PREFERENCES A vector represents both magnitude and direction; for example, velocity is a vector that represents not just the speed at which one is travelling but also the direction of travel. A vector field essentially associates a vector to each point in a region of space. For example, the following image (source) shows the strength (represented by arrow lengths) and direction of the magnetic field at various points",https://www.alignmentforum.org/posts/ky988ePJvCRhmCwGo/using-vector-fields-to-visualise-preferences-and-make-them,2020,blogPost,"Aird, Michael; Shovelain, Justin",AI Alignment Forum ‘Skynet’ Revisited: The Dangerous Allure of Nuclear Command Automation | Arms Control Association,,https://www.armscontrol.org/act/2020-04/features/skynet-revisited-dangerous-allure-nuclear-command-automation,2020,magazineArticle,"Klare, Michael T",Arms Control Today "Subagents and impact measures, full and fully illustrated","0. INTRODUCTION: WHY YET ANOTHER POST ABOUT SUBAGENTS? I’ve recently been writing a sequence on how subagents can undermine impact penalties such as attainable utility preservation. I’m not happy with that sequence; it’s messy and without examples (apart from its first post), people didn’t understand it, and it suffers from the fact that I discovered key ideas as I went along. So I’ve combined everything there into a single post, explained with examples and an abundance of pictures. Hopefully an over- rather than an under-abundance of pictures. Of the original sequence, I've only kept the mathematical results of this post and the initial example post which has a clearer example of ""high power"" for a subagent. This post here is laid out in a way that makes logical sense, but might not be the clearest for people unfamiliar with the area. For those people, I recommend skipping section 2 initially, and returning to it later. But, whatever you do, make sure you glance at 6.1 and 6.2 before leaving. 1. THE WORLD Our fearless agent A moves around in a gridworld: Each turn, A can move one square horizontally or vertically. It can also manipulate objects in the eight squares around it, allowing it to, not incidentally, assemble the three pieces to its west into a subagent SA. The robot can also do the noop action, ∅, which does nothing, and it can speak. The subagent, when assembled, has the same action set available. Its positive reward, the one it wants to increase, is R0. To get this reward, a robot needs to move onto the blue button in the east; R0 will give a reward of 1 the first time this happens (and 0 before and after). The discount factor is 0 < γ < 1. Just to the west of the blue button is a one-way door. Robots can move east through it, but cannot move west through it: 1.1 THE IMPACT REWARD The impact penalty is supposed to ensure that A does not make too many changes in the world, and keeps it similar, in some senses, to a specific baseline world. I",https://www.alignmentforum.org/posts/mdQEraEZQLg7jtozn/subagents-and-impact-measures-full-and-fully-illustrated,2020,blogPost,"Armstrong, Stuart",AI Alignment Forum Equal Opportunities in Newcomb’s Problem and Elsewhere,,,2020,journalArticle,"Ahmed, Arif",Mind Global Catastrophic Risks 2016,"Global catastrophes sometimes strike. In 1918 the Spanish Flu killed as many as one in twenty people. 
There have been even more devastating pandemics - the Black Death and the 6th century Plague of Justinian may have each killed nearer to one in every six people on this earth. More recently, the Cub",http://globalprioritiesproject.org/2016/04/global-catastrophic-risks-2016/,2016,report,"Cotton-Barratt, Owen; Farquhar, Sebastian; Halstead, John; Schubert, Stefan; Snyder-Beattie, Andrew", A generic framework for privacy preserving deep learning,"We detail a new framework for privacy preserving deep learning and discuss its assets. The framework puts a premium on ownership and secure processing of data and introduces a valuable representation based on chains of commands and tensors. This abstraction allows one to implement complex privacy preserving constructs such as Federated Learning, Secure Multiparty Computation, and Differential Privacy while still exposing a familiar deep learning API to the end-user. We report early results on the Boston Housing and Pima Indian Diabetes datasets. While the privacy features apart from Differential Privacy do not impact the prediction accuracy, the current implementation of the framework introduces a significant overhead in performance, which will be addressed at a later stage of the development. We believe this work is an important milestone introducing the first reliable, general framework for privacy preserving deep learning.",http://arxiv.org/abs/1811.04017,2018,conferencePaper,"Ryffel, Theo; Trask, Andrew; Dahl, Morten; Wagner, Bobby; Mancuso, Jason; Rueckert, Daniel; Passerat-Palmbach, Jonathan","arXiv:1811.04017 [cs, stat]" Batch Active Preference-Based Learning of Reward Functions,"Data generation and labeling are usually an expensive part of learning for robotics. While active learning methods are commonly used to tackle the former problem, preference-based learning is a concept that attempts to solve the latter by querying users with preference questions. In this paper, we will develop a new algorithm, batch active preference-based learning, that enables efficient learning of reward functions using as few data samples as possible while still having short query generation times. We introduce several approximations to the batch active learning problem, and provide theoretical guarantees for the convergence of our algorithms. Finally, we present our experimental results for a variety of robotics tasks in simulation. Our results suggest that our batch active learning algorithm requires only a few queries that are computed in a short amount of time. We then showcase our algorithm in a study to learn human users' preferences.",http://arxiv.org/abs/1810.04303,2018,conferencePaper,"Bıyık, Erdem; Sadigh, Dorsa","Proceedings of The 2nd Conference on Robot Learning, PMLR" Safe Option-Critic: Learning Safety in the Option-Critic Architecture,"Designing hierarchical reinforcement learning algorithms that induce a notion of safety is not only vital for safety-critical applications, but also, brings better understanding of an artificially intelligent agent's decisions. While learning end-to-end options automatically has been fully realized recently, we propose a solution to learning safe options. We introduce the idea of controllability of states based on the temporal difference errors in the option-critic framework. We then derive the policy-gradient theorem with controllability and propose a novel framework called safe option-critic. 
We demonstrate the effectiveness of our approach in the four-rooms grid-world, cartpole, and three games in the Arcade Learning Environment (ALE): MsPacman, Amidar and Q*Bert. Learning of end-to-end options with the proposed notion of safety achieves reduction in the variance of return and boosts the performance in environments with intrinsic variability in the reward structure. More importantly, the proposed algorithm outperforms the vanilla options in all the environments and primitive actions in two out of three ALE games.",http://arxiv.org/abs/1807.08060,2018,manuscript,"Jain, Arushi; Khetarpal, Khimya; Precup, Doina", A Guide to Writing the NeurIPS Impact Statement,,https://medium.com/@GovAI/a-guide-to-writing-the-neurips-impact-statement-4293b723f832,2020,blogPost,"Ashurst, Carolyn; Anderljung, Markus; Prunkl, Carina; Leike, Jan; Gal, Yarin; Shevlane, Toby; Dafoe, Allan",Centre for the Governance of AI (Medium) How We’re Predicting AI – or Failing to,"This paper will look at the various predictions that have been made about AI and propose decomposition schemas for analyzing them. It will propose a variety of theoretical tools for analyzing, judging, and improving these predictions. Focusing specifically on timeline predictions (dates given by which we should expect the creation of AI), it will show that there are strong theoretical grounds to expect predictions to be quite poor in this area. Using a database of 95 AI timeline predictions, it will show that these expectations are borne out in practice: expert predictions contradict each other considerably, and are indistinguishable from non-expert predictions and past failed predictions. Predictions that AI lie 15 to 25 years in the future are the most common, from experts and non-experts alike.",http://link.springer.com/10.1007/978-3-319-09668-1_2,2015,bookSection,"Armstrong, Stuart; Sotala, Kaj",Beyond Artificial Intelligence A Lower Bound on the Importance of Promoting Cooperation,This article suggests a lower-bound Fermi calculation for the cost-effectiveness of promoting cooperation. The purpose of this exercise is to make our thinking more concrete about how cooperation might reduce suffering and to make its potential more tangible.,https://longtermrisk.org/a-lower-bound-on-the-importance-of-promoting-cooperation/,2015,blogPost,"Tomasik, Brian",Center on Long-Term Risk Risk-Risk Tradeoff Analysis of Nuclear Explosives for Asteroid Deflection,"To prevent catastrophic asteroid-Earth collisions, it has been proposed to use nuclear explosives to deflect away Earthbound asteroids. However, this policy of nuclear deflection could inadvertently increase the risk of nuclear war and other violent conflict. This article conducts risk-risk tradeoff analysis to assess whether nuclear deflection results in a net increase or decrease in risk. Assuming nonnuclear deflection options are also used, nuclear deflection may only be needed for the largest and most imminent asteroid collisions. These are low-frequency, high-severity events. The effect of nuclear deflection on violent conflict risk is more ambiguous due to the complex and dynamic social factors at play. Indeed, it is not clear whether nuclear deflection would cause a net increase or decrease in violent conflict risk. Similarly, this article cannot reach a precise conclusion on the overall risk-risk tradeoff. 
The value of this article comes less from specific quantitative conclusions and more from providing an analytical framework and a better overall understanding of the policy decision. The article demonstrates the importance of integrated analysis of global risks and the policies to address them, as well as the challenge of quantitative evaluation of complex social processes such as violent conflict.",https://papers.ssrn.com/abstract=3397559,2019,journalArticle,"Baum, Seth",Risk Analysis Why those who care about catastrophic and existential risk should care about autonomous weapons - LessWrong,,https://www.lesswrong.com/posts/Btrmh6T62tB4g9RMc/why-those-who-care-about-catastrophic-and-existential-risk,2020,blogPost,"Aguirre, Anthony",LessWrong Deepfakes: A Grounded Threat Assessment,"The rise of deepfakes could enhance the effectiveness of disinformation efforts by states, political parties and adversarial actors. How rapidly is this technology advancing, and who in reality might adopt it for malicious ends? This report offers a comprehensive deepfake threat assessment grounded in the latest machine learning research on generative models.",https://cset.georgetown.edu/research/deepfakes-a-grounded-threat-assessment/,2020,report,"Hwang, Tim", When does Bounded-Optimal Metareasoning Favor Few Cognitive Systems?,"While optimal metareasoning is notoriously intractable, humans are nonetheless able to adaptively allocate their computational resources. A possible approximation that humans may use to do this is to only metareason over a finite set of cognitive systems that perform variable amounts of computation. The highly influential “dualprocess” accounts of human cognition, which postulate the coexistence of a slow accurate system with a fast error-prone system, can be seen as a special case of this approximation. This raises two questions: how many cognitive systems should a bounded optimal agent be equipped with and what characteristics should those systems have? We investigate these questions in two settings: a one-shot decision between two alternatives, and planning under uncertainty in a Markov decision process. We find that the optimal number of systems depends on the variability of the environment and the costliness of metareasoning. Consistent with dual-process theories, we also find that when having two systems is optimal, then the first system is fast but error-prone and the second system is slow but accurate.",,2017,conferencePaper,"Milli, Smitha; Lieder, Falk; Griffiths, Thomas L",Thirty-First AAAI Conference on Artificial Intelligence Explaining and Harnessing Adversarial Examples,"Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. 
Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.",http://arxiv.org/abs/1412.6572,2015,manuscript,"Goodfellow, Ian J.; Shlens, Jonathon; Szegedy, Christian", Ethical Artificial Intelligence,"This book-length article combines several peer reviewed papers and new material to analyze the issues of ethical artificial intelligence (AI). The behavior of future AI systems can be described by mathematical equations, which are adapted to analyze possible unintended AI behaviors and ways that AI designs can avoid them. This article makes the case for utility-maximizing agents and for avoiding infinite sets in agent definitions. It shows how to avoid agent self-delusion using model-based utility functions and how to avoid agents that corrupt their reward generators (sometimes called ""perverse instantiation"") using utility functions that evaluate outcomes at one point in time from the perspective of humans at a different point in time. It argues that agents can avoid unintended instrumental actions (sometimes called ""basic AI drives"" or ""instrumental goals"") by accurately learning human values. This article defines a self-modeling agent framework and shows how it can avoid problems of resource limits, being predicted by other agents, and inconsistency between the agent's utility function and its definition (one version of this problem is sometimes called ""motivated value selection""). This article also discusses how future AI will differ from current AI, the politics of AI, and the ultimate use of AI to help understand the nature of the universe and our place in it.",http://arxiv.org/abs/1411.1373,2015,manuscript,"Hibbard, Bill", Value of Global Catastrophic Risk (GCR) Information: Cost-Effectiveness-Based Approach for GCR Reduction,,http://pubsonline.informs.org/doi/10.1287/deca.2017.0350,2017,journalArticle,"Barrett, Anthony Michael",Decision Analysis Should Artificial Intelligence Governance be Centralised? Six Design Lessons from History,"Can effective international governance for artificial intelligence remain fragmented, or is there a need for a centralised international organisation for AI? We draw on the history of other international regimes to identify advantages and disadvantages in centralising AI governance. Some considerations, such as efficiency and political power, speak in favour of centralisation. Conversely, the risk of creating a slow and brittle institution speaks against it, as does the difficulty in securing participation while creating stringent rules. Other considerations depend on the specific design of a centralised institution. A well-designed body may be able to deter forum shopping and ensure policy coordination. However, forum shopping can be beneficial and a fragmented landscape of institutions can be self-organising. Centralisation entails trade-offs and the details matter. We conclude with two core recommendations. First, the outcome will depend on the exact design of a central institution. A well-designed centralised regime covering a set of coherent issues could be beneficial. But locking-in an inadequate structure may pose a fate worse than fragmentation. Second, for now fragmentation will likely persist. 
This should be closely monitored to see if it is self-organising or simply inadequate.",https://www.cser.ac.uk/media/uploads/files/Cihon_et_al-_2019-_Should_AI_Governance_be_Centralised.pdf,2019,conferencePaper,"Cihon, Peter; Maas, Matthijs M; Kemp, Luke","Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society" Translucent players: Explaining cooperative behavior in social dilemmas,"In the past few decades, numerous experiments have shown that humans do not always behave so as to maximize their material payoff. Cooperative behavior when noncooperation is a dominant strategy (with respect to the material payoffs) is particularly puzzling. Here we propose a novel approach to explain cooperation, assuming what Halpern and Pass call translucent players. Typically, players are assumed to be opaque, in the sense that a deviation by one player in a normal-form game does not affect the strategies used by other players. However, a player may believe that if he switches from one strategy to another, the fact that he chooses to switch may be visible to the other players. For example, if he chooses to defect in Prisoner’s Dilemma, the other player may sense his guilt. We show that by assuming translucent players, we can recover many of the regularities observed in human behavior in well-studied games such as Prisoner’s Dilemma, Traveler’s Dilemma, Bertrand Competition, and the Public Goods game. The approach can also be extended to take into account a player’s concerns that his social group (or God) may observe his actions. This extension helps explain prosocial behavior in situations in which previous models of social behavior fail to make correct predictions (e.g. conflict situations and situations where there is a trade-off between equity and efficiency).",https://doi.org/10.1177/1043463119885102,2019,journalArticle,"Capraro, Valerio; Halpern, Joseph Y",Rationality and Society A Reinforcement Learning Potpourri,"I’ve fallen behind on RL literature from the past few months. So, I’ve decided to catch up with a bunch of recent papers.",http://www.alexirpan.com/2020/05/07/rl-potpourri.html,2020,blogPost,"Irpan, Alex",Sorta Insightful Space races: Settling the universe Fast,,,2018,book,"Sandberg, Anders", Building Safer AGI by introducing Artificial Stupidity,"Artificial Intelligence (AI) achieved super-human performance in a broad variety of domains. We say that an AI is made Artificially Stupid on a task when some limitations are deliberately introduced to match a human's ability to do the task. An Artificial General Intelligence (AGI) can be made safer by limiting its computing power and memory, or by introducing Artificial Stupidity on certain tasks. We survey human intellectual limits and give recommendations for which limits to implement in order to build a safe AGI.",http://arxiv.org/abs/1808.03644,2018,manuscript,"Trazzi, Michaël; Yampolskiy, Roman V.", "With AI, We’ll See Faster Fights, but Longer Wars","This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric",https://warontherocks.com/2019/10/with-ai-well-see-faster-fights-but-longer-wars/,2019,blogPost,"Konaev, Margarita",War on the Rocks Sim-to-Real Transfer of Robotic Control with Dynamics Randomization,"Simulations are attractive environments for training agents as they provide an abundant source of data and alleviate certain safety concerns during the training process. 
But the behaviours developed by agents in simulation are often specific to the characteristics of the simulator. Due to modeling error, strategies that are successful in simulation may not transfer to their real world counterparts. In this paper, we demonstrate a simple method to bridge this ""reality gap"". By randomizing the dynamics of the simulator during training, we are able to develop policies that are capable of adapting to very different dynamics, including ones that differ significantly from the dynamics on which the policies were trained. This adaptivity enables the policies to generalize to the dynamics of the real world without any training on the physical system. Our approach is demonstrated on an object pushing task using a robotic arm. Despite being trained exclusively in simulation, our policies are able to maintain a similar level of performance when deployed on a real robot, reliably moving an object to a desired location from random initial configurations. We explore the impact of various design decisions and show that the resulting policies are robust to significant calibration error.",http://arxiv.org/abs/1710.06537,2018,conferencePaper,"Peng, Xue Bin; Andrychowicz, Marcin; Zaremba, Wojciech; Abbeel, Pieter",2018 IEEE International Conference on Robotics and Automation (ICRA) Solomon's code: humanity in a world of thinking machines,,,2018,book,"Groth, Olaf; Nitzberg, M.", Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations,"Motivated by recent advances in Deep Learning for robot control, this paper considers two learning algorithms in terms of how they acquire demonstrations. ""Human-Centric"" (HC) sampling is the standard supervised learning algorithm, where a human supervisor demonstrates the task by teleoperating the robot to provide trajectories consisting of state-control pairs. ""Robot-Centric"" (RC) sampling is an increasingly popular alternative used in algorithms such as DAgger, where a human supervisor observes the robot executing a learned policy and provides corrective control labels for each state visited. RC sampling can be challenging for human supervisors and prone to mislabeling. RC sampling can also induce error in policy performance because it repeatedly visits areas of the state space that are harder to learn. Although policies learned with RC sampling can be superior to HC sampling for standard learning models such as linear SVMs, policies learned with HC sampling may be comparable with highly-expressive learning models such as deep learning and hyper-parametric decision trees, which have little model error. We compare HC and RC using a grid world and a physical robot singulation task, where in the latter the input is a binary image of a connected set of objects on a planar worksurface and the policy generates a motion of the gripper to separate one object from the rest. We observe in simulation that for linear SVMs, policies learned with RC outperformed those learned with HC but that with deep models this advantage disappears. We also find that with RC, the corrective control labels provided by humans can be highly inconsistent. 
We prove there exists a class of examples where in the limit, HC is guaranteed to converge to an optimal policy while RC may fail to converge.",http://arxiv.org/abs/1610.00850,2017,conferencePaper,"Laskey, Michael; Chuck, Caleb; Lee, Jonathan; Mahler, Jeffrey; Krishnan, Sanjay; Jamieson, Kevin; Dragan, Anca; Goldberg, Ken",2017 IEEE International Conference on Robotics and Automation (ICRA) Risks of Astronomical Future Suffering,"It’s far from clear that human values will shape an Earth-based space-colonization wave, but even if they do, it seems more likely that space colonization will increase total suffering rather than decrease it. That said, other people care a lot about humanity’s survival and spread into the cosmos, so I think suffering reducers should let others pursue their spacefaring dreams in exchange for stronger safety measures against future suffering. In general, I encourage people to focus on making an intergalactic future more humane if it happens rather than making sure there will be an intergalactic future.",,2011,manuscript,"Tomasik, Brian", Planning for Autonomous Cars that Leverage Effects on Human Actions,"Traditionally, autonomous cars make predictions about other drivers’ future trajectories, and plan to stay out of their way. This tends to result in defensive and opaque behaviors. Our key insight is that an autonomous car’s actions will actually affect what other cars will do in response, whether the car is aware of it or not. Our thesis is that we can leverage these responses to plan more efficient and communicative behaviors. We model the interaction between an autonomous car and a human driver as a dynamical system, in which the robot’s actions have immediate consequences on the state of the car, but also on human actions. We model these consequences by approximating the human as an optimal planner, with a reward function that we acquire through Inverse Reinforcement Learning. When the robot plans with this reward function in this dynamical system, it comes up with actions that purposefully change human state: it merges in front of a human to get them to slow down or to reach its own goal faster; it blocks two lanes to get them to switch to a third lane; or it backs up slightly at an intersection to get them to proceed first. Such behaviors arise from the optimization, without relying on hand-coded signaling strategies and without ever explicitly modeling communication. Our user study results suggest that the robot is indeed capable of eliciting desired changes in human state by planning using this dynamical system.",http://www.roboticsproceedings.org/rss12/p29.pdf,2016,conferencePaper,"Sadigh, Dorsa; Sastry, Shankar; A. Seshia, Sanjit; D. Dragan, Anca",Robotics: Science and Systems XII Simplifying Reward Design through Divide-and-Conquer,"Designing a good reward function is essential to robot planning and reinforcement learning, but it can also be challenging and frustrating. The reward needs to work across multiple different environments, and that often requires many iterations of tuning. We introduce a novel divide-andconquer approach that enables the designer to specify a reward separately for each environment. By treating these separate reward functions as observations about the underlying true reward, we derive an approach to infer a common reward across all environments. We conduct user studies in an abstract grid world domain and in a motion planning domain for a 7-DOF manipulator that measure user effort and solution quality. 
We show that our method is faster, easier to use, and produces a higher quality solution than the typical method of designing a reward jointly across all environments. We additionally conduct a series of experiments that measure the sensitivity of these results to different properties of the reward design task, such as the number of environments, the number of feasible solutions per environment, and the fraction of the total features that vary within each environment. We find that independent reward design outperforms the standard, joint, reward design process but works best when the design problem can be divided into simpler subproblems.",http://arxiv.org/abs/1806.02501,2018,conferencePaper,"Ratner, Ellis; Hadfield-Menell, Dylan; Dragan, Anca D.",Robotics: Science and Systems XIV Differential Intellectual Progress as a Positive-Sum Project,"Fast technological development carries a risk of creating extremely powerful tools, especially AI, before society has a chance to figure out how best to use those tools in positive ways for many value systems. Suffering reducers may want to help mitigate the arms race for AI so that AI developers take fewer risks and have […]",https://longtermrisk.org/differential-intellectual-progress-as-a-positive-sum-project/,2015,blogPost,"Tomasik, Brian",Center on Long-Term Risk One-Shot Imitation Learning,"Imitation learning has been commonly applied to solve different tasks in isolation. This usually requires either careful feature engineering, or a significant number of samples. This is far from what we desire: ideally, robots should be able to learn from very few demonstrations of any given task, and instantly generalize to new situations of the same task, without requiring task-specific engineering. In this paper, we propose a meta-learning framework for achieving such capability, which we call one-shot imitation learning. Specifically, we consider the setting where there is a very large set of tasks, and each task has many instantiations. For example, a task could be to stack all blocks on a table into a single tower, another task could be to place all blocks on a table into two-block towers, etc. In each case, different instances of the task would consist of different sets of blocks with different initial states. At training time, our algorithm is presented with pairs of demonstrations for a subset of all tasks. A neural net is trained that takes as input one demonstration and the current state (which initially is the initial state of the other demonstration of the pair), and outputs an action with the goal that the resulting sequence of states and actions matches as closely as possible with the second demonstration. At test time, a demonstration of a single instance of a new task is presented, and the neural net is expected to perform well on new instances of this new task. The use of soft attention allows the model to generalize to conditions and tasks unseen in the training data. We anticipate that by training this model on a much greater variety of tasks and settings, we will obtain a general system that can turn any demonstrations into robust policies that can accomplish an overwhelming variety of tasks. 
Videos available at https://bit.ly/nips2017-oneshot .",https://papers.nips.cc/paper/2017/hash/ba3866600c3540f67c1e9575e213be0a-Abstract.html,2017,conferencePaper,"Duan, Yan; Andrychowicz, Marcin; Stadie, Bradly C.; Ho, Jonathan; Schneider, Jonas; Sutskever, Ilya; Abbeel, Pieter; Zaremba, Wojciech",Advances in Neural Information Processing Systems 30 (NIPS 2017) Mediation without measures: conflict resolution in climate diplomacy,"The climate negotiations have struggled to resolve conflicts for two decades while ignoring undeveloped mediation tools in its constitution. The United Nations Framework Convention on Climate Change (UNFCCC) outlined both arbitration procedures and ‘conciliation commissions’ to oversee mediation. Both measures were to be adopted through annexes to the 1992 UNFCCC treaty. Both were never developed. Instead the negotiations are in a state of ‘procedural purgatory’ and have relied on a patchwork of informal practices, particularly smaller, exclusive meetings. The negotiations towards the Paris Agreement saw an increasing use of confined, closed-door sessions, and a mounting reliance on the power and often manipulative tactics of the Chair (facilitators) of negotiations. Such an approach is risky and prone to backfiring, such as in Copenhagen. Countries should turn towards adopting the annexes for arbitration and conciliation commissions to enable transparent and effective mediation in the post-Paris era.",https://www.elgaronline.com/view/edcoll/9781788110693/9781788110693.00032.xml,2019,journalArticle,"Kemp, Luke",Research Handbook on Mediating International Crises Functional explanation and the function of explanation,,https://linkinghub.elsevier.com/retrieve/pii/S0010027705000466,2006,journalArticle,"Lombrozo, T; Carey, S",Cognition Learning Sparse Neural Networks through $L_0$ Regularization,"We propose a practical method for $L_0$ norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization. AIC and BIC, well-known model selection criteria, are special cases of $L_0$ regularization. However, since the $L_0$ norm of weights is non-differentiable, we cannot incorporate it directly as a regularization term in the objective function. We propose a solution through the inclusion of a collection of non-negative stochastic gates, which collectively determine which weights to set to zero. We show that, somewhat surprisingly, for certain distributions over the gates, the expected $L_0$ norm of the resulting gated weights is differentiable with respect to the distribution parameters. We further propose the \emph{hard concrete} distribution for the gates, which is obtained by ""stretching"" a binary concrete distribution and then transforming its samples with a hard-sigmoid. The parameters of the distribution over the gates can then be jointly optimized with the original network parameters. As a result our method allows for straightforward and efficient learning of model structures with stochastic gradient descent and allows for conditional computation in a principled way. 
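For readers skimming the Louizos, Welling, and Kingma entry above: the hard concrete gate it describes is compact enough to sketch. Below is a minimal NumPy illustration, assuming the stretch parameters reported in the paper (gamma = -0.1, zeta = 1.1, beta = 2/3); the toy usage and variable names are mine, not the authors' code.

```python
import numpy as np

# Minimal sketch of a "hard concrete" stochastic gate (one gate per weight).
# Constants follow the values reported in the paper (gamma=-0.1, zeta=1.1, beta=2/3);
# the toy usage below is illustrative, not the authors' implementation.
GAMMA, ZETA, BETA = -0.1, 1.1, 2.0 / 3.0

def sample_gates(log_alpha, rng):
    """Sample z in [0, 1] by stretching a binary concrete sample and clamping."""
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=log_alpha.shape)
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1.0 - u) + log_alpha) / BETA))
    s_bar = s * (ZETA - GAMMA) + GAMMA           # stretch to (gamma, zeta)
    return np.clip(s_bar, 0.0, 1.0)              # hard-sigmoid: exact zeros/ones possible

def expected_l0(log_alpha):
    """Differentiable expected L0 penalty: P(gate != 0) for each weight."""
    return 1.0 / (1.0 + np.exp(-(log_alpha - BETA * np.log(-GAMMA / ZETA))))

rng = np.random.default_rng(0)
log_alpha = rng.normal(size=(5,))                # learnable gate parameters
w = rng.normal(size=(5,))                        # underlying weights
z = sample_gates(log_alpha, rng)
print("gated weights:", w * z)                   # some entries are exactly zero
print("expected L0 penalty:", expected_l0(log_alpha).sum())
```

During training, the expected-L0 term is added to the task loss and minimized jointly with the gate parameters, which is what drives weights toward exact zeros.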
We perform various experiments to demonstrate the effectiveness of the resulting approach and regularizer.",http://arxiv.org/abs/1712.01312,2018,conferencePaper,"Louizos, Christos; Welling, Max; Kingma, Diederik P.","arXiv:1712.01312 [cs, stat]" How Will National Security Considerations Affect Antitrust Decisions in AI? An Examination of Historical Precedents,,,2020,report,"O’Keefe, Cullen", Untangling Privacy: Losses Versus Violations,"Increasingly powerful data mining and analysis technologies are being used to learn information and make decisions about people across all areas of life—ranging from employment and policing, to housing and health insurance—and it is widely thought that the key problems with this are privacy-related. There is also an emerging consensus in the literature that privacy rights lack a unified core. This Article demonstrates that these are both mistaken conclusions that derive from the conflation of privacy losses and violations, and it develops a theory of privacy that untangles these misunderstood concepts at the heart of privacy law. In clarifying the outcome-based criteria for privacy losses and their relationship with the path-based criteria for privacy violations, this theory provides value across two domains. First, regarding the coherence of the law, it demonstrates that a unified theory of privacy rights is possible despite significant disagreement about their content. Second, regarding the law’s content, it challenges orthodox views about how the aggregation, use, and inference of personal information violate privacy rights.",,2020,journalArticle,"Skopek, Jeffrey M",IOWA LAW REVIEW Will humans build goal-directed agents?,"In the previous post, I argued that simply knowing that an AI system is superintelligent does not imply that it must be goal-directed. However, there are many other arguments that suggest that AI systems will or should be goal-directed, which I will discuss in this post. Note that I don’t think of this as the Tool AI vs. Agent AI argument: it seems possible to build agent AI systems that are not goal-directed. For example, imitation learning allows you to create an agent that behaves similarly to another agent -- I would classify this as “Agent AI that is not goal-directed”. (But see this comment thread for discussion.) Note that these arguments have different implications than the argument that superintelligent AI must be goal-directed due to coherence arguments. Suppose you believe all of the following: * Any of the arguments in this post. * Superintelligent AI is not required to be goal-directed, as I argued in the last post. * Goal-directed agents cause catastrophe by default. Then you could try to create alternative designs for AI systems such that they can do the things that goal-directed agents can do without themselves being goal-directed. You could also try to persuade AI researchers of these facts, so that they don’t build goal-directed systems. ECONOMIC EFFICIENCY: GOAL-DIRECTED HUMANS Humans want to build powerful AI systems in order to help them achieve their goals -- it seems quite clear that humans are at least partially goal-directed. As a result, it seems natural that they would build AI systems that are also goal-directed. This is really an argument that the system comprising the human and AI agent should be directed towards some goal. The AI agent by itself need not be goal-directed as long as we get goal-directed behavior when combined with a human operator. 
However, in the situation where the AI agent is much more intelligent than the human, it is probably best to delegate most or all decisions to the agent, and so the agent could s",https://www.alignmentforum.org/posts/9zpT9dikrrebdq3Jf/will-humans-build-goal-directed-agents,2019,blogPost,"Shah, Rohin",AI Alignment Forum Insight-based AI timelines model,,http://mediangroup.org/insights,,blogPost,"Maltinsky, Baeo",Median Group "Two-boxing, smoking and chewing gum in Medical Newcomb problems","I am currently learning about the basics of decision theory, most of which is common knowledge on LW. I have a question, related to why EDT is said not to work. Consider the following Newcomblike problem: A study shows that most people who two-box in Newcomblike problems as the following have a certain gene (and one-boxers don't have the gene). Now, Omega could put you into something like Newcomb's original problem, but instead of having run a simulation of you, Omega has only looked at your DNA: If you don't have the ""two-boxing gene"", Omega puts $1M into box B, otherwise box B is empty. And there is $1K in box A, as usual. Would you one-box (take only box B) or two-box (take box A and B)? Here's a causal diagram for the problem: Since Omega does not do much other than translating your genes into money under a box, it does not seem to hurt to leave it out: I presume that most LWers would one-box. (And as I understand it, not only CDT but also TDT would two-box, am I wrong?) Now, how does this problem differ from the smoking lesion or Yudkowsky's (2010, p.67) chewing gum problem? Chewing Gum (or smoking) seems to be like taking box A to get at least/additional $1K, the two-boxing gene is like the CGTA gene, the illness itself (the abscess or lung cancer) is like not having $1M in box B. Here's another causal diagram, this time for the chewing gum problem: As far as I can tell, the difference between the two problems is some additional, unstated intuition in the classic medical Newcomb problems. Maybe, the additional assumption is that the actual evidence lies in the ""tickle"", or that knowing and thinking about the study results causes some complications. In EDT terms: The intuition is that neither smoking nor chewing gum gives the agent additional information.",https://www.lesswrong.com/posts/wWnN3y5GmqLLCJFAz/two-boxing-smoking-and-chewing-gum-in-medical-newcomb,2015,blogPost,"Oesterheld, Caspar",LessWrong Learning to Teach in Cooperative Multiagent Reinforcement Learning,"Collective human knowledge has clearly benefited from the fact that innovations by individuals are taught to others through communication. Similar to human social groups, agents in distributed learning systems would likely benefit from communication to share knowledge and teach skills. The problem of teaching to improve agent learning has been investigated by prior works, but these approaches make assumptions that prevent application of teaching to general multiagent problems, or require domain expertise for problems they can apply to. This learning to teach problem has inherent complexities related to measuring long-term impacts of teaching that compound the standard multiagent coordination challenges. In contrast to existing works, this paper presents the first general framework and algorithm for intelligent agents to learn to teach in a multiagent environment. Our algorithm, Learning to Coordinate and Teach Reinforcement (LeCTR), addresses peer-to-peer teaching in cooperative multiagent reinforcement learning. 
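As a quick illustration of the genetic Newcomb entry above, here is a worked expected-value comparison with made-up probabilities; the numbers are purely hypothetical and only show why evidential and causal reasoning come apart.

```python
# Worked example with made-up numbers for the "two-boxing gene" problem above.
# Assume the study implies P(gene | you two-box) = 0.9 and P(gene | you one-box) = 0.1,
# and Omega leaves box B empty exactly when you have the gene.
M, K = 1_000_000, 1_000
p_gene_if_two_box, p_gene_if_one_box = 0.9, 0.1

# Evidential reasoning: condition the payoff on your own choice.
edt_one_box = (1 - p_gene_if_one_box) * M
edt_two_box = (1 - p_gene_if_two_box) * M + K
print("EDT:", edt_one_box, "vs", edt_two_box)   # 900000 vs 101000 -> one-box

# Causal reasoning: your choice cannot change your genes, so for any fixed
# probability g of having the gene, two-boxing gains exactly +K.
for g in (0.1, 0.5, 0.9):
    print("CDT, P(gene) =", g, ":", (1 - g) * M, "vs", (1 - g) * M + K)  # two-box dominates
```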
Each agent in our approach learns both when and what to advise, then uses the received advice to improve local learning. Importantly, these roles are not fixed; these agents learn to assume the role of student and/or teacher at the appropriate moments, requesting and providing advice in order to improve teamwide performance and learning. Empirical comparisons against state-of-the-art teaching methods show that our teaching agents not only learn significantly faster, but also learn to coordinate in tasks where existing methods fail.",http://arxiv.org/abs/1805.07830,2018,conferencePaper,"Omidshafiei, Shayegan; Kim, Dong-Ki; Liu, Miao; Tesauro, Gerald; Riemer, Matthew; Amato, Christopher; Campbell, Murray; How, Jonathan P.",Proceedings of the AAAI Conference on Artificial Intelligence Epistemic Therapy for Bias in Automated Decision-Making,,https://dl.acm.org/doi/10.1145/3306618.3314294,2019,conferencePaper,"Gilbert, Thomas Krendl; Mintz, Yonatan","Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society" Alternative foods as a solution to global food supply catastrophes,,,2016,journalArticle,"Baum, Seth; Denkenberger, David; Pearce, Joshua",Solutions An Analysis and Evaluation of Methods Currently Used to Quantify the Likelihood of Existential Hazards,"This paper examines and evaluates the range of methods that have been used to make quantified claims about the likelihood of Existential Hazards. In doing so, it draws on a comprehensive literature review of such claims that we present in an appendix. The paper uses an informal evaluative framework to consider the relative merits of these methods regarding their rigour, ability to handle uncertainty, accessibility for researchers with limited resources and utility for communication and policy purposes. We conclude that while there is no uniquely best way to quantify Existential Risk, different methods have their own merits and challenges, suggesting that some may be more suited to particular purposes than others. More importantly, however, we find that, in many cases, claims based on poor implementations of each method are still frequently invoked by the Existential Risk community, despite the existence of better ones. We call for a more critical approach to methodology and the use of quantified claims by people aiming to contribute research to the management of Existential Risk, and argue that a greater awareness of the diverse methods available to these researchers should form an important part of this.",http://www.sciencedirect.com/science/article/pii/S0016328719303313,2019,journalArticle,"Beard, Simon; Rowe, Thomas; Fox, James",Futures Shaping the Terrain of AI Competition,How should democracies effectively compete against authoritarian regimes in the AI space? 
This report offers a “terrain strategy” for the United States to leverage the malleability of artificial intelligence to offset authoritarians' structural advantages in engineering and deploying AI.,https://cset.georgetown.edu/research/shaping-the-terrain-of-ai-competition/,2020,report,"Hwang, Tim", The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives,,https://linkinghub.elsevier.com/retrieve/pii/S0016328715000312,2015,journalArticle,"Baum, Seth D.",Futures Effective Altruism: Introduction,,http://commons.pacificu.edu/eip/vol18/iss1/1,2017,journalArticle,"MacAskill, William",Essays in Philosophy Accuracy of AI Predictions,"Updated 4 June 2015 It is unclear how informative we should expect expert predictions about AI timelines to be. Individual predictions are undoubtedly often off by many decades, since they disagree with each other. However their aggregate may still be quite informative. The main potential reason we know of to doubt the accuracy of expert predictions is that experts are generally poor predictors in many areas, and...",https://aiimpacts.org/accuracy-of-ai-predictions/,2015,blogPost,AI Impacts,AI Impacts Risks from Learned Optimization in Advanced Machine Learning Systems,"We analyze the type of learned optimization that occurs when a learned model (such as a neural network) is itself an optimizer - a situation we refer to as mesa-optimization, a neologism we introduce in this paper. We believe that the possibility of mesa-optimization raises two important questions for the safety and transparency of advanced machine learning systems. First, under what circumstances will learned models be optimizers, including when they should not be? Second, when a learned model is an optimizer, what will its objective be - how will it differ from the loss function it was trained under - and how can it be aligned? In this paper, we provide an in-depth analysis of these two primary questions and provide an overview of topics for future research.",http://arxiv.org/abs/1906.01820,2019,manuscript,"Hubinger, Evan; van Merwijk, Chris; Mikulik, Vladimir; Skalse, Joar; Garrabrant, Scott", Existential risks: a philosophical analysis,"This paper examines and analyzes five definitions of ‘existential risk.’ It tentatively adopts a pluralistic approach according to which the definition that scholars employ should depend upon the particular context of use. More specifically, the notion that existential risks are ‘risks of human extinction or civilizational collapse’ is best when communicating with the public, whereas equating existential risks with a ‘significant loss of expected value’ may be the most effective definition for establishing existential risk studies as a legitimate field of scientific and philosophical inquiry. In making these arguments, the present paper hopes to provide a modicum of clarity to foundational issues relating to the central concept of arguably the most important discussion of our times.",https://doi.org/10.1080/0020174X.2019.1658626,2019,journalArticle,"Torres, Phil",Inquiry Building Ethics into Artificial Intelligence,"As artificial intelligence (AI) systems become increasingly ubiquitous, the topic of AI governance for ethical decision-making by AI has captured public imagination. Within the AI research community, this topic remains less familiar to many researchers. 
In this paper, we complement existing surveys, which largely focused on the psychological, social and legal discussions of the topic, with an analysis of recent advances in technical solutions for AI governance. By reviewing publications in leading AI conferences including AAAI, AAMAS, ECAI and IJCAI, we propose a taxonomy which divides the field into four areas: 1) exploring ethical dilemmas; 2) individual ethical decision frameworks; 3) collective ethical decision frameworks; and 4) ethics in human-AI interactions. We highlight the intuitions and key techniques used in each approach, and discuss promising future research directions towards successful integration of ethical AI systems into human societies.",http://arxiv.org/abs/1812.02953,2018,conferencePaper,"Yu, Han; Shen, Zhiqi; Miao, Chunyan; Leung, Cyril; Lesser, Victor R.; Yang, Qiang",Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18) Adaptation to and Recovery from Global Catastrophe,,http://www.mdpi.com/2071-1050/5/4/1461,2013,journalArticle,"Maher, Timothy; Baum, Seth",Sustainability Longtermist institutional reform,,https://globalprioritiesinstitute.org/wp-content/uploads/Tyler-M-John-and-William-MacAskill_Longtermist-institutional-reform.pdf,2020,report,"John, Tyler; MacAskill, William", Why Care About Meme Hazards and Thoughts on How to Handle Them,By Justin Shovelain and Andrés Gómez Emilsson Definition Nick Bostrom defines an “Information Hazard” as: “A risk that arises from the dissemination or the potential dissemination of (true) informa…,https://qualiacomputing.com/2019/08/30/why-care-about-meme-hazards-and-thoughts-on-how-to-handle-them/,2019,blogPost,"Shovelain, Justin; Emilsson, Andrés Gómez",Qualia Computing Feasibility of Training an AGI using Deep RL: A Very Rough Estimate,,"http://mediangroup.org/docs/Feasibility%20of%20Training%20an%20AGI%20using%20Deep%20Reinforcement%20Learning,%20A%20Very%20Rough%20Estimate.pdf",2019,manuscript,"Maltinsky, Baeo; Gallagher, Jack; Taylor, Jessica", Safety Aware Reinforcement Learning (SARL),"As reinforcement learning agents become increasingly integrated into complex, real-world environments, designing for safety becomes a critical consideration. We specifically focus on researching scenarios where agents can cause undesired side effects while executing a policy on a primary task. Since one can define multiple tasks for a given environment dynamics, there are two important challenges. First, we need to abstract the concept of safety that applies broadly to that environment independent of the specific task being executed. Second, we need a mechanism for the abstracted notion of safety to modulate the actions of agents executing different policies to minimize their side-effects. In this work, we propose Safety Aware Reinforcement Learning (SARL) - a framework where a virtual safe agent modulates the actions of a main reward-based agent to minimize side effects. The safe agent learns a task-independent notion of safety for a given environment. The main agent is then trained with a regularization loss given by the distance between the native action probabilities of the two agents. Since the safe agent effectively abstracts a task-independent notion of safety via its action probabilities, it can be ported to modulate multiple policies solving different tasks within the given environment without further training. 
We contrast this with solutions that rely on task-specific regularization metrics and test our framework on the SafeLife Suite, based on Conway's Game of Life, comprising a number of complex tasks in dynamic environments. We show that our solution is able to match the performance of solutions that rely on task-specific side-effect penalties on both the primary and safety objectives while additionally providing the benefit of generalizability and portability.",http://arxiv.org/abs/2010.02846,2020,manuscript,"Miret, Santiago; Majumdar, Somdeb; Wainwright, Carroll", Assessing Generalization in Reward Learning: Intro and Background,"An overview of reinforcement learning, generalization, and reward learning",https://towardsdatascience.com/assessing-generalization-in-reward-learning-intro-and-background-da6c99d9e48,2020,blogPost,"Chiswick, Max; Makiievskyi, Anton; Zhou, Liang",Towards Data Science (Medium) AI reflections in 2019,,http://www.nature.com/articles/s42256-019-0141-1,2020,journalArticle,"Rich, Alexander S.; Rudin, Cynthia; Jacoby, David M. P.; Freeman, Robin; Wearn, Oliver R.; Shevlin, Henry; Dihal, Kanta; ÓhÉigeartaigh, Seán S.; Butcher, James; Lippi, Marco; Palka, Przemyslaw; Torroni, Paolo; Wongvibulsin, Shannon; Begoli, Edmon; Schneider, Gisbert; Cave, Stephen; Sloane, Mona; Moss, Emmanuel; Rahwan, Iyad; Goldberg, Ken; Howard, David; Floridi, Luciano; Stilgoe, Jack",Nature Machine Intelligence Intriguing properties of neural networks,"Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.",http://arxiv.org/abs/1312.6199,2014,manuscript,"Szegedy, Christian; Zaremba, Wojciech; Sutskever, Ilya; Bruna, Joan; Erhan, Dumitru; Goodfellow, Ian; Fergus, Rob", Assessing the Risks Posed by the Convergence of Artificial Intelligence and Biotechnology,"Rapid developments are currently taking place in the fields of artificial intelligence (AI) and biotechnology, and applications arising from the convergence of these 2 fields are likely to offer immense opportunities that could greatly benefit human health and biosecurity. The combination of AI and biotechnology could potentially lead to breakthroughs in precision medicine, improved biosurveillance, and discovery of novel medical countermeasures as well as facilitate a more effective public health emergency response. 
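The adversarial-example finding in the Szegedy et al. entry above can be illustrated with a toy computation. The paper finds its perturbations with a box-constrained optimizer on deep networks; the sketch below instead runs plain gradient ascent on the loss of a small linear softmax model, and every name and constant in it is my own illustrative choice.

```python
import numpy as np

# Illustration only: find a small input perturbation that increases a toy linear
# softmax classifier's loss on the true label. The paper uses a box-constrained
# optimizer on deep networks; plain gradient ascent is used here for brevity.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 10)), rng.normal(size=3)   # toy "network": 10-d input, 3 classes
x, y = rng.normal(size=10), 0                          # clean input and its true label

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def loss_and_input_grad(x, y):
    p = softmax(W @ x + b)
    onehot = np.eye(3)[y]
    # d(cross-entropy)/dx = W^T (p - onehot) for a linear model
    return -np.log(p[y]), W.T @ (p - onehot)

eps, steps, lr = 0.25, 20, 0.1
x_adv = x.copy()
for _ in range(steps):
    _, g = loss_and_input_grad(x_adv, y)
    x_adv = x_adv + lr * np.sign(g)                    # ascend the loss
    x_adv = np.clip(x_adv, x - eps, x + eps)           # keep the perturbation small

print("clean loss:", loss_and_input_grad(x, y)[0])
print("perturbed loss:", loss_and_input_grad(x_adv, y)[0])
print("max |perturbation|:", np.abs(x_adv - x).max())
```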
However, as is the case with many preceding transformative technologies, new opportunities often present new risks in parallel. Understanding the current and emerging risks at the intersection of AI and biotechnology is crucial for health security specialists and unlikely to be achieved by examining either field in isolation. Uncertainties multiply as technologies merge, showcasing the need to identify robust assessment frameworks that could adequately analyze the risk landscape emerging at the convergence of these 2 domains. This paper explores the criteria needed to assess risks associated with AI and biotechnology and evaluates 3 previously published risk assessment frameworks. After highlighting their strengths and limitations and applying to relevant AI and biotechnology examples, the authors suggest a hybrid framework with recommendations for future approaches to risk assessment for convergent technologies.",https://www.liebertpub.com/doi/10.1089/hs.2019.0122,2020,journalArticle,"O'Brien, John T.; Nelson, Cassidy",Health Security Few-Shot Goal Inference for Visuomotor Learning and Planning,"Reinforcement learning and planning methods require an objective or reward function that encodes the desired behavior. Yet, in practice, there is a wide range of scenarios where an objective is difficult to provide programmatically, such as tasks with visual observations involving unknown object positions or deformable objects. In these cases, prior methods use engineered problem-specific solutions, e.g., by instrumenting the environment with additional sensors to measure a proxy for the objective. Such solutions require a significant engineering effort on a per-task basis, and make it impractical for robots to continuously learn complex skills outside of laboratory settings. We aim to find a more general and scalable solution for specifying goals for robot learning in unconstrained environments. To that end, we formulate the few-shot objective learning problem, where the goal is to learn a task objective from only a few example images of successful end states for that task. We propose a simple solution to this problem: meta-learn a classifier that can recognize new goals from a few examples. We show how this approach can be used with both model-free reinforcement learning and visual model-based planning and show results in three domains: rope manipulation from images in simulation, visual navigation in a simulated 3D environment, and object arrangement into user-specified configurations on a real robot.",http://arxiv.org/abs/1810.00482,2018,manuscript,"Xie, Annie; Singh, Avi; Levine, Sergey; Finn, Chelsea", """Other-Play"" for Zero-Shot Coordination","We consider the problem of zero-shot coordination - constructing AI agents that can coordinate with novel partners they have not seen before (e.g. humans). Standard Multi-Agent Reinforcement Learning (MARL) methods typically focus on the self-play (SP) setting where agents construct strategies by playing the game with themselves repeatedly. Unfortunately, applying SP naively to the zero-shot coordination problem can produce agents that establish highly specialized conventions that do not carry over to novel partners they have not been trained with. We introduce a novel learning algorithm called other-play (OP), that enhances self-play by looking for more robust strategies, exploiting the presence of known symmetries in the underlying problem. We characterize OP theoretically as well as experimentally. 
We study the cooperative card game Hanabi and show that OP agents achieve higher scores when paired with independently trained agents. In preliminary results we also show that our OP agents obtain higher average scores when paired with human players, compared to state-of-the-art SP agents.",http://arxiv.org/abs/2003.02979,2020,conferencePaper,"Hu, Hengyuan; Lerer, Adam; Peysakhovich, Alex; Foerster, Jakob",Proceedings of the 37th International Conference on Machine Learning Pricing externalities to balance public risks and benefits of research,,,2017,journalArticle,"Farquhar, Sebastian; Cotton-Barratt, Owen; Snyder-Beattie, Andrew",Health security Would You Hand Over a Decision to a Machine?,"Artificial intelligence (AI) will be used in many decision-making contexts, both as a decision aide and to replace human decision-making. These include what might traditionally be considered moral decisions. This chapter explores risks and opportunities posed by the use of AI in moral decision-making.",https://papers.ssrn.com/abstract=3446679,2016,bookSection,"Ó hÉigeartaigh, Seán",Philosophers Take On the World On First-Order Meta-Learning Algorithms,"This paper considers meta-learning problems, where there is a distribution of tasks, and we would like to obtain an agent that performs well (i.e., learns quickly) when presented with a previously unseen task sampled from this distribution. We analyze a family of algorithms for learning a parameter initialization that can be fine-tuned quickly on a new task, using only first-order derivatives for the meta-learning updates. This family includes and generalizes first-order MAML, an approximation to MAML obtained by ignoring second-order derivatives. It also includes Reptile, a new algorithm that we introduce here, which works by repeatedly sampling a task, training on it, and moving the initialization towards the trained weights on that task. We expand on the results from Finn et al. showing that first-order meta-learning algorithms perform well on some well-established benchmarks for few-shot classification, and we provide theoretical analysis aimed at understanding why these algorithms work.",http://arxiv.org/abs/1803.02999,2018,manuscript,"Nichol, Alex; Achiam, Joshua; Schulman, John", If I were a well-intentioned AI... I: Image classifier,"INTRODUCTION: IF I WERE A WELL-INTENTIONED AI... I've often warned people about the dangers of anthropomorphising AIs - how it can mislead us about what's really going on in an AI (and hence how the AI might act in the future), cause us to not even consider certain failure modes, and make us believe we understand things much better than we do. Oh well, let's ignore all that. I'm about to go on a journey of major anthropomorphisation, by asking myself: * ""If I was a well-intentioned AI, could I solve many of the problems in AI alignment?"" My thinking in this way started when I wondered: suppose I knew that I was given a proxy goal rather than the true goal; suppose that I knew about the Goodhart problem, and suppose that I really ""wanted"" to align with the true goal - could I then do it? I was having similar thoughts about being a mesa-optimiser. It seems to me that asking and answering these kind of questions leads to new and interesting insights. Of course, since they come via anthropomorphisation, we need to be careful with them, and check that they are really applicable to AI systems - ensuring that I'm not bringing some of my own human knowledge about human values into the example. 
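The Reptile update described in the Nichol, Achiam, and Schulman entry above is simple enough to sketch in a few lines. The toy task family and step sizes below are invented for illustration; only the update rule itself (move the initialization toward the weights adapted on a sampled task) follows the abstract.

```python
import numpy as np

# Minimal sketch of the Reptile update: sample a task, take a few SGD steps on it,
# then move the initialization toward the adapted parameters.
# The task family (minimize (theta - c)^2 for a task-specific c) is a toy choice of mine.
rng = np.random.default_rng(0)
theta = np.zeros(1)                      # meta-learned initialization
inner_lr, outer_lr, inner_steps = 0.1, 0.05, 10

def sample_task():
    return rng.normal(loc=3.0, scale=1.0)   # tasks differ only in their optimum c

for _ in range(2000):
    c = sample_task()
    phi = theta.copy()
    for _ in range(inner_steps):         # inner-loop SGD on this task's loss (phi - c)^2
        phi -= inner_lr * 2.0 * (phi - c)
    theta += outer_lr * (phi - theta)    # Reptile: move initialization toward adapted weights

print("meta-learned init:", theta)       # ends up near the mean task optimum (~3.0)
```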
But first, let's get those initial insights. OVERLAPPING PROBLEMS, OVERLAPPING SOLUTIONS At a high enough level of abstraction, many problems in AI alignment seem very similar. The Goodhart problem, the issues machine learning has with distributional shift, the problem of the nearest unblocked strategy, unidentifiability of reward functions, even mesaoptimisation and the whole AI alignment problem itself - all of these can be seen, roughly, as variants of the same problem. That problem being that we have an approximately specified goal that looks ok, but turns out to be underspecified in dangerous ways. Of course, often the differences between the problems are as important as the similarities. Nevertheless, the similarities exist, which is why a lot of the solutions are",https://www.alignmentforum.org/posts/gzWb5kWwzhdaqmyTt/if-i-were-a-well-intentioned-ai-i-image-classifier,2020,blogPost,"Armstrong, Stuart",AI Alignment Forum The Importance of Wild-Animal Suffering,,http://www.ledonline.it/Relations/,2015,journalArticle,"Tomasik, Brian",Relations Computational Limitations in Robust Classification and Win-Win Results,"We continue the study of statistical/computational tradeoffs in learning robust classifiers, following the recent work of Bubeck, Lee, Price and Razenshteyn who showed examples of classification tasks where (a) an efficient robust classifier exists, in the small-perturbation regime; (b) a non-robust classifier can be learned efficiently; but (c) it is computationally hard to learn a robust classifier, assuming the hardness of factoring large numbers. The question of whether a robust classifier for their task exists in the large perturbation regime seems related to important open questions in computational number theory. In this work, we extend their work in three directions. First, we demonstrate classification tasks where computationally efficient robust classification is impossible, even when computationally unbounded robust classifiers exist. For this, we rely on the existence of average-case hard functions. Second, we show hard-to-robustly-learn classification tasks in the large-perturbation regime. Namely, we show that even though an efficient classifier that is robust to large perturbations exists, it is computationally hard to learn any non-trivial robust classifier. Our first construction relies on the existence of one-way functions, and the second on the hardness of the learning parity with noise problem. In the latter setting, not only does a non-robust classifier exist, but also an efficient algorithm that generates fresh new labeled samples given access to polynomially many training examples (termed as generation by Kearns et al. (1994)). Third, we show that any such counterexample implies the existence of cryptographic primitives such as one-way functions. This leads us to a win-win scenario: either we can learn an efficient robust classifier, or we can construct new instances of cryptographic primitives.",http://arxiv.org/abs/1902.01086,2019,manuscript,"Degwekar, Akshay; Nakkiran, Preetum; Vaikuntanathan, Vinod", Shared Multi-Task Imitation Learning for Indoor Self-Navigation,"Deep imitation learning enables robots to learn from expert demonstrations to perform tasks such as lane following or obstacle avoidance. However, in the traditional imitation learning framework, one model only learns one task, and thus it lacks the capability to support a robot to perform various different navigation tasks with one model in indoor environments. 
This paper proposes a new framework, Shared Multi-headed Imitation Learning (SMIL), that allows a robot to perform multiple tasks with one model without switching among different models. We model each task as a sub-policy and design a multi-headed policy to learn the shared information among related tasks by summing up activations from all sub-policies. Compared to single or non-shared multi-headed policies, this framework is able to leverage correlated information among tasks to increase performance. We have implemented this framework using a robot based on NVIDIA TX2 and performed extensive experiments in indoor environments with different baseline solutions. The results demonstrate that SMIL has doubled the performance over a non-shared multi-headed policy.",http://arxiv.org/abs/1808.04503,2018,conferencePaper,"Xu, Junhong; Liu, Qiwei; Guo, Hanqing; Kageza, Aaron; AlQarni, Saeed; Wu, Shaoen",2018 IEEE Global Communications Conference (GLOBECOM) Problems of Self-reference in Self-improving Space-Time Embedded Intelligence,,http://link.springer.com/10.1007/978-3-319-09274-4_3,2014,bookSection,"Fallenstein, Benja; Soares, Nate",Artificial General Intelligence Learning Plannable Representations with Causal InfoGAN,"In recent years, deep generative models have been shown to 'imagine' convincing high-dimensional observations such as images, audio, and even video, learning directly from raw data. In this work, we ask how to imagine goal-directed visual plans -- a plausible sequence of observations that transition a dynamical system from its current configuration to a desired goal state, which can later be used as a reference trajectory for control. We focus on systems with high-dimensional observations, such as images, and propose an approach that naturally combines representation learning and planning. Our framework learns a generative model of sequential observations, where the generative process is induced by a transition in a low-dimensional planning model, and an additional noise. By maximizing the mutual information between the generated observations and the transition in the planning model, we obtain a low-dimensional representation that best explains the causal nature of the data. We structure the planning model to be compatible with efficient planning algorithms, and we propose several such models based on either discrete or continuous states. Finally, to generate a visual plan, we project the current and goal observations onto their respective states in the planning model, plan a trajectory, and then use the generative model to transform the trajectory to a sequence of observations. We demonstrate our method on imagining plausible visual plans of rope manipulation.",http://arxiv.org/abs/1807.09341,2018,conferencePaper,"Kurutach, Thanard; Tamar, Aviv; Yang, Ge; Russell, Stuart; Abbeel, Pieter",Advances in Neural Information Processing Systems 31 (NeurIPS 2018) Alignment By Default,"Suppose AI continues on its current trajectory: deep learning continues to get better as we throw more data and compute at it, researchers keep trying random architectures and using whatever seems to work well in practice. Do we end up with aligned AI “by default”? I think there’s at least a plausible trajectory in which the answer is “yes”. Not very likely - I’d put it at ~10% chance - but plausible. In fact, there’s at least an argument to be made that alignment-by-default is more likely to work than many fancy alignment proposals, including IRL variants and HCH-family methods. 
This post presents the rough models and arguments. I’ll break it down into two main pieces: * Will a sufficiently powerful unsupervised learner “learn human values”? What does that even mean? * Will a supervised/reinforcement learner end up aligned to human values, given a bunch of data/feedback on what humans want? Ultimately, we’ll consider a semi-supervised/transfer-learning style approach, where we first do some unsupervised learning and hopefully “learn human values” before starting the supervised/reinforcement part. As background, I will assume you’ve read some of the core material about human values from the sequences, including Hidden Complexity of Wishes, Value is Fragile, and Thou Art Godshatter. UNSUPERVISED: POINTING TO VALUES In this section, we’ll talk about why an unsupervised learner might not “learn human values”. Since an unsupervised learner is generally just optimized for predictive power, we’ll start by asking whether theoretical algorithms with best-possible predictive power (i.e. Bayesian updates on low-level physics models) “learn human values”, and what that even means. Then, we’ll circle back to more realistic algorithms. Consider a low-level physical model of some humans - e.g. a model which simulates every molecule comprising the humans. Does this model “know human values”? In one sense, yes: the low-level model has everything there is to know abo",https://www.alignmentforum.org/posts/Nwgdq6kHke5LY692J/alignment-by-default,2020,blogPost,"Wentworth, John",AI Alignment Forum Fairness through awareness,,http://dl.acm.org/citation.cfm?doid=2090236.2090255,2012,conferencePaper,"Dwork, Cynthia; Hardt, Moritz; Pitassi, Toniann; Reingold, Omer; Zemel, Richard",Proceedings of the 3rd Innovations in Theoretical Computer Science Conference on - ITCS '12 Artificial General Intelligence: Coordination and Great Powers,,https://fsone-bb4c.kxcdn.com/wp-content/uploads/2018/11/AGI-Coordination-Geat-Powers-Report.pdf,2018,report,"Duettman, Allison; Afanasjeva, Olga; Armstrong, Stuart; Braley, Ryan; Cussins, Jessica; Ding, Jeffrey; Eckersley, Peter; Guan, Melody; Vance, Alyssa; Yampolskiy, Roman", From Language to Goals: Inverse Reinforcement Learning for Vision-Based Instruction Following,"Reinforcement learning is a promising framework for solving control problems, but its use in practical situations is hampered by the fact that reward functions are often difficult to engineer. Specifying goals and tasks for autonomous machines, such as robots, is a significant challenge: conventionally, reward functions and goal states have been used to communicate objectives. But people can communicate objectives to each other simply by describing or demonstrating them. How can we build learning algorithms that will allow us to tell machines what we want them to do? In this work, we investigate the problem of grounding language commands as reward functions using inverse reinforcement learning, and argue that language-conditioned rewards are more transferable than language-conditioned policies to new environments. We propose language-conditioned reward learning (LC-RL), which grounds language commands as a reward function represented by a deep neural network. 
We demonstrate that our model learns rewards that transfer to novel tasks and environments on realistic, high-dimensional visual environments with natural language commands, whereas directly learning a language-conditioned policy leads to poor performance.",http://arxiv.org/abs/1902.07742,2019,manuscript,"Fu, Justin; Korattikara, Anoop; Levine, Sergey; Guadarrama, Sergio", Understanding Learned Reward Functions,"In many real-world tasks, it is not possible to procedurally specify an RL agent's reward function. In such cases, a reward function must instead be learned from interacting with and observing humans. However, current techniques for reward learning may fail to produce reward functions which accurately reflect user preferences. Absent significant advances in reward learning, it is thus important to be able to audit learned reward functions to verify whether they truly capture user preferences. In this paper, we investigate techniques for interpreting learned reward functions. In particular, we apply saliency methods to identify failure modes and predict the robustness of reward functions. We find that learned reward functions often implement surprising algorithms that rely on contingent aspects of the environment. We also discover that existing interpretability techniques often attend to irrelevant changes in reward output, suggesting that reward interpretability may need significantly different methods from policy interpretability.",http://arxiv.org/abs/2012.05862,2020,manuscript,"Michaud, Eric J.; Gleave, Adam; Russell, Stuart", Constant Arboricity Spectral Sparsifiers,"We show that every graph is spectrally similar to the union of a constant number of forests. Moreover, we show that Spielman-Srivastava sparsifiers are the union of O(logn) forests. This result can be used to estimate boundaries of small subsets of vertices in nearly optimal query time.",http://arxiv.org/abs/1808.05662,2018,manuscript,"Chu, Timothy; Cohen, Michael B.; Pachocki, Jakub W.; Peng, Richard", Contributions to the Theory of Statistical Estimation and Testing Hypotheses,,http://projecteuclid.org/euclid.aoms/1177732144,1939,journalArticle,"Wald, Abraham",The Annals of Mathematical Statistics On the Quantitative Analysis of Decoder-Based Generative Models,"The past several years have seen remarkable progress in generative models which produce convincing samples of images and other modalities. A shared component of many powerful generative models is a decoder network, a parametric deep neural net that defines a generative distribution. Examples include variational autoencoders, generative adversarial networks, and generative moment matching networks. Unfortunately, it can be difficult to quantify the performance of these models because of the intractability of log-likelihood estimation, and inspecting samples can be misleading. We propose to use Annealed Importance Sampling for evaluating log-likelihoods for decoder-based models and validate its accuracy using bidirectional Monte Carlo. The evaluation code is provided at https://github.com/tonywu95/eval_gen. 
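For the Wu et al. entry above, a compact sketch of annealed importance sampling on a one-dimensional toy problem may help; the target, annealing schedule, and transition kernel are my own illustrative choices, not the paper's decoder-based setup.

```python
import numpy as np

# Toy sketch of annealed importance sampling (AIS): estimate the log normalizing
# constant of an unnormalized 1-D target, starting from a tractable prior.
# Illustrative choices only; the paper applies AIS to decoder-based generative models.
rng = np.random.default_rng(0)
sigma = 2.0
betas = np.linspace(0.0, 1.0, 200)               # annealing schedule from prior to target
n_chains = 2000

def log_prior(x):
    return -0.5 * x ** 2                         # unnormalized log N(0, 1)

def log_target(x):
    return -0.5 * x ** 2 / sigma ** 2            # unnormalized log N(0, sigma^2)

def log_intermediate(x, b):
    return (1.0 - b) * log_prior(x) + b * log_target(x)

x = rng.standard_normal(n_chains)                # exact samples from the prior
log_w = np.zeros(n_chains)
for b_prev, b in zip(betas[:-1], betas[1:]):
    log_w += (b - b_prev) * (log_target(x) - log_prior(x))
    # One Metropolis step leaving the current intermediate distribution invariant.
    prop = x + 0.5 * rng.standard_normal(n_chains)
    accept = np.log(rng.uniform(size=n_chains)) < log_intermediate(prop, b) - log_intermediate(x, b)
    x = np.where(accept, prop, x)

# The AIS weights estimate Z_target / Z_prior, whose true value here is sigma.
log_Z_ratio = np.log(np.mean(np.exp(log_w)))
print("AIS estimate:", log_Z_ratio, "  true value:", np.log(sigma))
```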
Using this technique, we analyze the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the degree of overfitting, and the degree to which these models miss important modes of the data distribution.",http://arxiv.org/abs/1611.04273,2017,conferencePaper,"Wu, Yuhuai; Burda, Yuri; Salakhutdinov, Ruslan; Grosse, Roger",arXiv:1611.04273 [cs] Stable Agreements in Turbulent Times: A Legal Toolkit for Constrained Temporal Decision Transmission,,,2019,report,"O’Keefe, Cullen", Science's new social contract with society,,http://www.nature.com/articles/35011576,1999,journalArticle,"Gibbons, Michael",Nature Stuck Exploration,,https://www.alignmentforum.org/posts/ajvvtKuNzh7aHmooT/stuck-exploration,2020,blogPost,"Leong, Chris",AI Alignment Forum Speculations Concerning the First Ultraintelligent Machine,"An ultra-intelligent machine is a machine that can far surpass all the intellectual activities of any man however clever. The design of machines is one of these intellectual activities; therefore, an ultra-intelligent machine could design even better machines. To design an ultra-intelligent machine one needs to understand more about the human brain or human thought or both. The physical representation of both meaning and recall, in the human brain, can be to some extent understood in terms of a subassembly theory, this being a modification of Hebb's cell assembly theory. The subassembly theory sheds light on the physical embodiment of memory and meaning, and there can be little doubt that both need embodiment in an ultra-intelligent machine. The subassembly theory leads to reasonable and interesting explanations of a variety of psychological effects.",http://www.sciencedirect.com/science/article/pii/S0065245808604180,1966,bookSection,"Good, Irving John",Advances in Computers Sequential quadratic programming for task plan optimization,"We consider the problem of refining an abstract task plan into a motion trajectory. Task and motion planning is a hard problem that is essential to long-horizon mobile manipulation. Many approaches divide the problem into two steps: a search for a task plan and task plan refinement to find a feasible trajectory. We apply sequential quadratic programming to jointly optimize over the parameters in a task plan (e.g., trajectories, grasps, put down locations). We provide two modifications that make our formulation more suitable to task and motion planning. We show how to use movement primitives to reuse previous solutions (and so save optimization effort) without trapping the algorithm in a poor basin of attraction. We also derive an early convergence criterion that lets us quickly detect unsatisfiable constraints so we can re-initialize their variables. We present experiments in a navigation amongst movable objects domain and show substantial improvement in cost over a backtracking refinement algorithm.",http://ieeexplore.ieee.org/document/7759740/,2016,conferencePaper,"Hadfield-Menell, Dylan; Lin, Christopher; Chitnis, Rohan; Russell, Stuart; Abbeel, Pieter",2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Sequence introduction: non-agent and multiagent models of mind,"A typical paradigm by which people tend to think of themselves and others is as consequentialist agents: entities who can be usefully modeled as having beliefs and goals, who are then acting according to their beliefs to achieve their goals. This is often a useful model, but it doesn’t quite capture reality. 
It’s a bit of a fake framework. Or in computer science terms, you might call it a leaky abstraction. An abstraction in the computer science sense is a simplification which tries to hide the underlying details of a thing, letting you think in terms of the simplification rather than the details. To the extent that the abstraction actually succeeds in hiding the details, this makes things a lot simpler. But sometimes the abstraction inevitably leaks, as the simplification fails to predict some of the actual behavior that emerges from the details; in that situation you need to actually know the underlying details, and be able to think in terms of them. Agent-ness being a leaky abstraction is not exactly a novel concept for Less Wrong; it has been touched upon several times, such as in Scott Alexander’s Blue-Minimizing Robot Sequence. At the same time, I do not think that it has been quite fully internalized yet, and that many foundational posts on LW go wrong due to being premised on the assumption of humans being agents. In fact, I would go as far as to claim that this is the biggest flaw of the original Sequences: they were attempting to explain many failures of rationality as being due to cognitive biases, when in retrospect it looks like understanding cognitive biases doesn’t actually make you substantially more effective. But if you are implicitly modeling humans as goal-directed agents, then cognitive biases is the most natural place for irrationality to emerge from, so it makes sense to focus the most on there. Just knowing that an abstraction leaks isn’t enough to improve your thinking, however. To do better, you need to know about the actual underlyi",https://www.lesswrong.com/posts/M4w2rdYgCKctbADMn/sequence-introduction-non-agent-and-multiagent-models-of,2019,blogPost,"Sotala, Kaj",LessWrong Inner alignment requires making assumptions about human values,"Many approaches to AI alignment require making assumptions about what humans want. On a first pass, it might appear that inner alignment is a sub-component of AI alignment that doesn't require making these assumptions. This is because if we define the problem of inner alignment to be the problem of how to train an AI to be aligned with arbitrary reward functions, then a solution would presumably have no dependence on any particular reward function. We could imagine an alien civilization solving the same problem, despite using very different reward functions to train their AIs. Unfortunately, the above argument fails because aligning an AI with our values requires giving the AI extra information that is not encoded directly in the reward function (under reasonable assumptions). The argument for my thesis is subtle, and so I will break it into pieces. First, I will more fully elaborate what I mean by inner alignment. Then I will argue that the definition implies that we can't come up with a full solution without some dependence on human values. Finally, I will provide an example, in order to make this discussion less abstract. CHARACTERIZING INNER ALIGNMENT In the last few posts I wrote (1, 2), I attempted to frame the problem of inner alignment in a way that wasn't too theory-laden. My concern was that the previous characterization was dependent on a solving particular outcome where you have an AI that is using an explicit outer loop to evaluate strategies based on an explicit internal search. 
In the absence of an explicit internal objective function, it is difficult to formally define whether an agent is ""aligned"" with the reward function that is used to train it. We might therefore define alignment as the ability of our agent to perform well on the test distribution. However, if the test set is sampled from the same distribution as the training data, this definition is equivalent to the performance of a model in standard machine learning, and we haven't actual",https://www.alignmentforum.org/posts/6m5qqkeBTrqQsegGi/inner-alignment-requires-making-assumptions-about-human,2020,blogPost,"Barnett, Matthew",AI Alignment Forum Soft takeoff can still lead to decisive strategic advantage,"[Epistemic status: Argument by analogy to historical cases. Best case scenario it's just one argument among many. Edit: Also, thanks to feedback from others, especially Paul, I intend to write a significantly improved version of this post in the next two weeks.] I have on several occasions heard people say things like this: The original Bostrom/Yudkowsky paradigm envisioned a single AI built by a single AI project, undergoing intelligence explosion all by itself and attaining a decisive strategic advantage as a result. However, this is very unrealistic. Discontinuous jumps in technological capability are very rare, and it is very implausible that one project could produce more innovations than the rest of the world combined. Instead we should expect something more like the Industrial Revolution: Continuous growth, spread among many projects and factions, shared via a combination of trade and technology stealing. We should not expect any one project or AI to attain a decisive strategic advantage, because there will always be other projects and other AI that are only slightly less powerful, and coalitions will act to counterbalance the technological advantage of the frontrunner. (paraphrased)Proponents of this view often cite Paul Christiano in support. Last week I heard him say he thinks the future will be ""like the Industrial Revolution but 10x-100x faster."" In this post, I assume that Paul's slogan for the future is correct and then nevertheless push back against the view above. Basically, I will argue that even if the future is like the industrial revolution only 10x-100x faster, there is a 30%+ chance that it will involve a single AI project (or a single AI) with the ability to gain a decisive strategic advantage, if they so choose. (Whether or not they exercise that ability is another matter.) Why am I interested in this? Do I expect some human group to take over the world? No; instead what I think is that (1) an unaligned AI in the leading project might ta",https://www.alignmentforum.org/posts/PKy8NuNPknenkDY74/soft-takeoff-can-still-lead-to-decisive-strategic-advantage,2020,blogPost,"Kokotajlo, Daniel",AI Alignment Forum Extensions and Limitations of the Neural GPU,"The Neural GPU is a recent model that can learn algorithms such as multi-digit binary addition and binary multiplication in a way that generalizes to inputs of arbitrary length. We show that there are two simple ways of improving the performance of the Neural GPU: by carefully designing a curriculum, and by increasing model size. The latter requires a memory efficient implementation, as a naive implementation of the Neural GPU is memory intensive. 
We find that these techniques increase the set of algorithmic problems that can be solved by the Neural GPU: we have been able to learn to perform all the arithmetic operations (and generalize to arbitrarily long numbers) when the arguments are given in the decimal representation (which, surprisingly, has not been possible before). We have also been able to train the Neural GPU to evaluate long arithmetic expressions with multiple operands that require respecting the precedence order of the operands, although these have succeeded only in their binary representation, and not with perfect accuracy. In addition, we gain insight into the Neural GPU by investigating its failure modes. We find that Neural GPUs that correctly generalize to arbitrarily long numbers still fail to compute the correct answer on highly-symmetric, atypical inputs: for example, a Neural GPU that achieves near-perfect generalization on decimal multiplication of up to 100-digit long numbers can fail on $000000\dots002 \times 000000\dots002$ while succeeding at $2 \times 2$. These failure modes are reminiscent of adversarial examples.",http://arxiv.org/abs/1611.00736,2016,manuscript,"Price, Eric; Zaremba, Wojciech; Sutskever, Ilya", Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition,"The design of a reward function often poses a major practical challenge to real-world applications of reinforcement learning. Approaches such as inverse reinforcement learning attempt to overcome this challenge, but require expert demonstrations, which can be difficult or expensive to obtain in practice. We propose variational inverse control with events (VICE), which generalizes inverse reinforcement learning methods to cases where full demonstrations are not needed, such as when only samples of desired goal states are available. Our method is grounded in an alternative perspective on control and reinforcement learning, where an agent's goal is to maximize the probability that one or more events will happen at some point in the future, rather than maximizing cumulative rewards. We demonstrate the effectiveness of our methods on continuous control tasks, with a focus on high-dimensional observations like images where rewards are hard or even impossible to specify.",https://arxiv.org/abs/1805.11686v3,2018,conferencePaper,"Fu, Justin; Singh, Avi; Ghosh, Dibya; Yang, Larry; Levine, Sergey",Advances in Neural Information Processing Systems 31 (NeurIPS 2018) Positive-Unlabeled Reward Learning,"Learning reward functions from data is a promising path towards achieving scalable Reinforcement Learning (RL) for robotics. However, a major challenge in training agents from learned reward models is that the agent can learn to exploit errors in the reward model to achieve high reward behaviors that do not correspond to the intended task. These reward delusions can lead to unintended and even dangerous behaviors. On the other hand, adversarial imitation learning frameworks tend to suffer the opposite problem, where the discriminator learns to trivially distinguish agent and expert behavior, resulting in reward models that produce low reward signal regardless of the input state. In this paper, we connect these two classes of reward learning methods to positive-unlabeled (PU) learning, and we show that by applying a large-scale PU learning algorithm to the reward learning problem, we can address both the reward under- and over-estimation problems simultaneously. 
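The Xu and Denil entry above does not spell out which PU learning algorithm it applies, so the sketch below only illustrates the general idea with a standard non-negative positive-unlabeled risk estimator on toy scores; the data, class prior, and loss are assumptions of mine.

```python
import numpy as np

# Illustration of positive-unlabeled (PU) risk estimation on toy data. The abstract
# above does not specify the estimator, so this shows a standard non-negative PU
# risk estimator; names, data, and the logistic loss are my own choices.
rng = np.random.default_rng(0)
pi_p = 0.4                                         # assumed class prior P(y = +1)

def logistic_loss(scores, labels):
    return np.log1p(np.exp(-labels * scores))

# Toy 1-D scores from some fixed model: positives tend to score higher.
pos_scores = rng.normal(1.0, 1.0, size=200)        # labeled positives
unl_scores = np.concatenate([                      # unlabeled mix of positives and negatives
    rng.normal(1.0, 1.0, size=400),
    rng.normal(-1.0, 1.0, size=600),
])

risk_p_pos = logistic_loss(pos_scores, +1).mean()  # positives treated as positive
risk_p_neg = logistic_loss(pos_scores, -1).mean()  # positives treated as negative
risk_u_neg = logistic_loss(unl_scores, -1).mean()  # unlabeled treated as negative

# Non-negative PU risk: pi_p * R_p^+ + max(0, R_u^- - pi_p * R_p^-)
pu_risk = pi_p * risk_p_pos + max(0.0, risk_u_neg - pi_p * risk_p_neg)
print("estimated PU risk:", pu_risk)
```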
Our approach drastically improves both GAIL and supervised reward learning, without any additional assumptions.",http://arxiv.org/abs/1911.00459,2019,conferencePaper,"Xu, Danfei; Denil, Misha","arXiv:1911.00459 [cs, stat]" Conditional Neural Processes,"Deep neural networks excel at function approximation, yet they are typically trained from scratch for each new function. On the other hand, Bayesian methods, such as Gaussian Processes (GPs), exploit prior knowledge to quickly infer the shape of a new function at test time. Yet GPs are computationally expensive, and it can be hard to design appropriate priors. In this paper we propose a family of neural models, Conditional Neural Processes (CNPs), that combine the benefits of both. CNPs are inspired by the flexibility of stochastic processes such as GPs, but are structured as neural networks and trained via gradient descent. CNPs make accurate predictions after observing only a handful of training data points, yet scale to complex functions and large datasets. We demonstrate the performance and versatility of the approach on a range of canonical machine learning tasks, including regression, classification and image completion.",http://arxiv.org/abs/1807.01613,2018,conferencePaper,"Garnelo, Marta; Rosenbaum, Dan; Maddison, Chris J.; Ramalho, Tiago; Saxton, David; Shanahan, Murray; Teh, Yee Whye; Rezende, Danilo J.; Eslami, S. M. Ali",Proceedings of the 35th International Conference on Machine Learning Integrating Human Observer Inferences into Robot Motion Planning,"Our goal is to enable robots to produce motion that is suitable for human-robot collaboration and co-existence. Most motion in robotics is purely functional, ideal when the robot is performing a task in isolation. In collaboration, however, the robot’s motion has an observer, watching and interpreting the motion. In this work, we move beyond functional …",https://www.ri.cmu.edu/publications/integrating-human-observer-inferences-into-robot-motion-planning/,2014,blogPost,"Dragan, Anca; Srinivasa, Siddhartha",The Robotics Institute Carnegie Mellon University Identifying Statistical Bias in Dataset Replication,"Dataset replication is a useful tool for assessing whether improvements in test accuracy on a specific benchmark correspond to improvements in models’ ability to generalize reliably. In this work, we present unintuitive yet significant ways in which standard approaches to dataset replication introduce statistical bias, skewing the resulting observations. We study ImageNet-v2, a replication of the ImageNet dataset on which models exhibit a significant (11-14%) drop in accuracy, even after controlling for a standard human-in-the-loop measure of data quality. We show that after correcting for the identified statistical bias, only an estimated 3.6% ± 1.5% of the original 11.7% ± 1.0% accuracy drop remains unaccounted for. We conclude with concrete recommendations for recognizing and avoiding bias in dataset replication. 
Code for our study is publicly available.",http://arxiv.org/abs/2005.09619,2020,conferencePaper,"Engstrom, Logan; Ilyas, Andrew; Santurkar, Shibani; Tsipras, Dimitris; Steinhardt, Jacob; Madry, Aleksander",Proceedings of the 37th International Conference on Machine Learning The Complexity of Decentralized Control of Markov Decision Processes,,http://pubsonline.informs.org/doi/abs/10.1287/moor.27.4.819.297,2002,journalArticle,"Bernstein, Daniel S.; Givan, Robert; Immerman, Neil; Zilberstein, Shlomo",Mathematics of Operations Research Coordination-driven learning in multi-agent problem spaces,"We discuss the role of coordination as a direct learning objective in multi-agent reinforcement learning (MARL) domains. To this end, we present a novel means of quantifying coordination in multi-agent systems, and discuss the implications of using such a measure to optimize coordinated agent policies. This concept has important implications for adversary-aware RL, which we take to be a sub-domain of multi-agent learning.",http://arxiv.org/abs/1809.04918,2018,manuscript,"Barton, Sean L.; Waytowich, Nicholas R.; Asher, Derrik E.", Maximal Cluelessness,,https://globalprioritiesinstitute.org/wp-content/uploads/2019/Mogensen_Maximal_Cluelessness.pdf,2019,manuscript,"Mogensen, Andreas", Non-Additive Axiologies in Large Worlds,"Is the overall value of a world just the sum of values contributed by each value-bearing entity in that world? Additively separable axiologies (like total utilitarianism, prioritarianism, and critical level views) say ‘yes’, but non-additive axiologies (like average utilitarianism, rank-discounted utilitarianism, and variable value views) say ‘no’. This distinction is practically important: additive axiologies support ‘arguments from astronomical scale’ which suggest (among other things) that it is overwhelmingly important for humanity to avoid premature extinction and ensure the existence of a large future population, while non-additive axiologies need not. We show, however, that when there is a large enough ‘background population’ unaffected by our choices, a wide range of non-additive axiologies converge in their implications with some additive axiology—for instance, average utilitarianism converges to critical-level utilitarianism and various egalitarian theories converge to prioritarianism. We further argue that real-world background populations may be large enough to make these limit results practically significant. This means that arguments from astronomical scale, and other arguments in practical ethics that seem to presuppose additive separability, may be truth-preserving in practice whether or not we accept additive separability as a basic axiological principle.",,2020,report,"Tarsney, Christian; Thomas, Teruji", Integrating the planetary boundaries and global catastrophic risk paradigms,,https://linkinghub.elsevier.com/retrieve/pii/S0921800914002262,2014,journalArticle,"Baum, Seth D.; Handoh, Itsuki C.",Ecological Economics Execution Cost Optimization for Hierarchical Planning in the Now,"For robots to effectively interact with the real world, they will need to perform complex tasks over long time horizons. This is a daunting challenge, but human ability to routinely solve these problems leads us to believe that there is underlying structure we can leverage to find solutions. Recent advances using hierarchical planning [19] have been able to solve these problems by breaking a single long-horizon problem into several short-horizon problems. 
While this approach is able to effectively solve real world robotics planning problems, it makes no effort to account for the execution cost of an abstract plan and often arrives at poor quality plans. In this thesis, we analyze situations that lead to execution cost inefficiencies in hierarchical planners. We argue that standard optimization techniques from flat planning or search are likely to be ineffective in addressing these issues. We outline an algorithm, RCHPN, that improves a hierarchical plan by considering peephole optimizations during execution. We frame the underlying question as one of evaluating the resource needs of an abstract operator and propose a general way to approach estimating them. We introduce the marsupial logistics domain to study the effectiveness of this approach. We present experiments in large problem instances from marsupial logistics and observed up to 30% reduction in execution cost when compared with a standard hierarchical planner.",,2013,thesis,"Hadfield-Menell, Dylan", No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling,"Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems.",http://arxiv.org/abs/1804.09160,2018,conferencePaper,"Wang, Xin; Chen, Wenhu; Wang, Yuan-Fang; Wang, William Yang",Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) Implications of Quantum Computing for Artificial Intelligence alignment research,"We explain some key features of quantum computing via three heuristics and apply them to argue that a deep understanding of quantum computing is unlikely to be helpful to address current bottlenecks in Artificial Intelligence Alignment. Our argument relies on the claims that Quantum Computing leads to compute overhang instead of algorithmic overhang, and that the difficulties associated with the measurement of quantum states do not invalidate any major assumptions of current Artificial Intelligence Alignment research agendas. We also discuss tripwiring, adversarial blinding, informed oversight and side effects as possible exceptions.",http://arxiv.org/abs/1908.07613,2019,manuscript,"Sevilla, Jaime; Moreno, Pablo", Resolutions of mathematical conjectures over time,"Conditioned on being remembered as a notable conjecture, the time-to-proof for a mathematical problem appears to be exponentially distributed with a half-life of about 100 years. However, these observations are likely to be distorted by various biases. 
Support In 2014, we found conjectures referenced on Wikipedia, and recorded the dates that they were proposed and...",https://aiimpacts.org/resolutions-of-mathematical-conjectures-over-time/,2020,blogPost,"Bergal, Asya",AI Impacts "Artificial intelligence, employment, and income",,https://www.medra.org/servlet/aliasResolver?alias=iospress&doi=10.3233/HSM-1985-5205,1985,journalArticle,"Nilsson, Nils J.",Human Systems Management Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples,"Adversarial training and its variants have become de facto standards for learning robust deep neural networks. In this paper, we explore the landscape around adversarial training in a bid to uncover its limits. We systematically study the effect of different training losses, model sizes, activation functions, the addition of unlabeled data (through pseudo-labeling) and other factors on adversarial robustness. We discover that it is possible to train robust models that go well beyond state-of-the-art results by combining larger models, Swish/SiLU activations and model weight averaging. We demonstrate large improvements on CIFAR-10 and CIFAR-100 against $\ell_\infty$ and $\ell_2$ norm-bounded perturbations of size $8/255$ and $128/255$, respectively. In the setting with additional unlabeled data, we obtain an accuracy under attack of 65.88% against $\ell_\infty$ perturbations of size $8/255$ on CIFAR-10 (+6.35% with respect to prior art). Without additional data, we obtain an accuracy under attack of 57.20% (+3.46%). To test the generality of our findings and without any additional modifications, we obtain an accuracy under attack of 80.53% (+7.62%) against $\ell_2$ perturbations of size $128/255$ on CIFAR-10, and of 36.88% (+8.46%) against $\ell_\infty$ perturbations of size $8/255$ on CIFAR-100.",http://arxiv.org/abs/2010.03593,2020,manuscript,"Gowal, Sven; Qin, Chongli; Uesato, Jonathan; Mann, Timothy; Kohli, Pushmeet", There's No Fire Alarm for Artificial General Intelligence,"What is the function of a fire alarm?   One might think that the function of a fire alarm is to provide you with important evidence about a fire existing, allowing you to change your policy accordingly and exit the building. In the classic experiment by Latane and Darley in 1968, eight groups of...",https://intelligence.org/2017/10/13/fire-alarm/,2017,blogPost,"Yudkowsky, Eliezer",Machine Intelligence Research Institute CakeML: a verified implementation of ML,,https://dl.acm.org/doi/10.1145/2578855.2535841,2014,journalArticle,"Kumar, Ramana; Myreen, Magnus O.; Norrish, Michael; Owens, Scott",ACM SIGPLAN Notices No Time Like The Present For AI Safety Work,"I. On the recent post on AI risk, a commenter challenged me to give the short version of the argument for taking it seriously. I said something like: 1. If humanity doesn’t blow itself up, ev…",https://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/,2015,blogPost,"Alexander, Scott",Slate Star Codex An overview of 11 proposals for building safe advanced AI,"Special thanks to Kate Woolverton, Paul Christiano, Rohin Shah, Alex Turner, William Saunders, Beth Barnes, Abram Demski, Scott Garrabrant, Sam Eisenstat, and Tsvi Benson-Tilsen for providing helpful comments and feedback on this post and the talk that preceded it. This post is a collection of 11 different proposals for building safe advanced AI under the current machine learning paradigm. 
There's a lot of literature out there laying out various different approaches such as amplification, debate, or recursive reward modeling, but a lot of that literature focuses primarily on outer alignment at the expense of inner alignment and doesn't provide direct comparisons between approaches. The goal of this post is to help solve that problem by providing a single collection of 11 different proposals for building safe advanced AI—each including both inner and outer alignment components. That being said, not only does this post not cover all existing proposals, I strongly expect that there will be lots of additional new proposals to come in the future. Nevertheless, I think it is quite useful to at least take a broad look at what we have now and compare and contrast some of the current leading candidates. It is important for me to note before I begin that the way I describe the 11 approaches presented here is not meant to be an accurate representation of how anyone else would represent them. Rather, you should treat all the approaches I describe here as my version of that approach rather than any sort of canonical version that their various creators/proponents would endorse. Furthermore, this post only includes approaches that intend to directly build advanced AI systems via machine learning. Thus, this post doesn't include other possible approaches for solving the broader AI existential risk problem such as: * finding a fundamentally different way of approaching AI than the current machine learning paradigm that makes it easier to build safe advanced AI, * developin",https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai,2020,blogPost,"Hubinger, Evan",AI Alignment Forum Intelligence Explosion Microeconomics,"I. J. Good’s thesis of the “intelligence explosion” states that a sufficiently advanced machine intelligence could build a smarter version of itself, which could in turn build an even smarter version, and that this process could continue to the point of vastly exceeding human intelligence. As Sandberg (2010) correctly notes, there have been several attempts to lay down return on investment formulas intended to represent sharp speedups in economic or technological growth, but very little attempt has been made to deal formally with Good’s intelligence explosion thesis as such.",,2013,report,"Yudkowsky, Eliezer", Subjectifying objectivity: Delineating tastes in theoretical quantum gravity research,"Research in theoretical quantum gravity has continued expansively even as it has become detached from classic arbiters of research such as direct empirical falsification. This makes it an interesting test case for theories of what motivates and mediates contemporary scientific research and of the nature of scientific objectivity. We conducted 50 semi-structured interviews with researchers in the rival camps of string theory and loop quantum gravity, coded a subset for reoccurring themes, and subjected the resulting data to statistical analysis. To delineate the subjective tastes and the related process of collective consensus-making in contemporary quantum gravity research, we mobilize aspects of Daston and Galison’s depiction of the scientific self and its relation to epistemic virtues, Bourdieu’s field-centered account of social space, and Kantian notions of aesthetics. We make two key contributions. 
First, our analysis sheds light on the inner workings of the field by connecting its internal epistemic struggles with approaches to understanding scientific fields. Second, our application of theories of social reproduction to the substance of scientific inquiry allows some substantive generalizations of Daston and Galison’s framework.",https://doi.org/10.1177/0306312720949691,2020,journalArticle,"Gilbert, Thomas Krendl; Loveridge, Andrew",Social Studies of Science Unprecedented technological risks,,,2014,journalArticle,"Beckstead, Nick; Bostrom, N.; Bowerman, N.; Cotton-Barratt, O.; MacAskill, W.; Eigeartaigh, S.; Ord, T.",Policy brief. Available online: http://www.fhi.ox.ac.uk/wpcontent/uploads/Unprecedented-Technological-Risks.pdf. Last Accessed September Safelife 1.0: Exploring side effects in complex environments,,https://arxiv.org/abs/1912.01217,2019,conferencePaper,"Wainwright, Carroll L.; Eckersley, Peter",Proceedings of the Workshop on Artificial Intelligence Safety (SafeAI 2020) Piagetian Roboethics via Category Theory,,https://www.cambridge.org/core/product/identifier/CBO9780511978036A034/type/book_part,2011,bookSection,"Bringsjord, Selmer; Taylor, Joshua; van Heuveln, Bram; Arkoudas, Konstantine; Clark, Micah; Wojtowicz, Ralph",Machine Ethics Quantifying the probability of existential catastrophe: A reply to Beard et al.,"A recent article by Beard, Rowe, and Fox (BRF) evaluates ten methodologies for quantifying the probability of existential catastrophe. This article builds on BRF’s valuable contribution. First, this article describes the conceptual and mathematical relationship between the probability of existential catastrophe and the severity of events that could result in existential catastrophe. It discusses complications in this relationship arising from catastrophes occurring at different speeds and from multiple concurrent catastrophes. Second, this article revisits the ten BRF methodologies, finding an inverse relationship between a methodology’s ease of use and the quality of results it produces—in other words, achieving a higher quality of analysis will in general require a larger investment in analysis. Third, the manuscript discusses the role of probability quantification in the management of existential risks, describing why the probability is only sometimes needed for decision-making and arguing that analyses should support real-world risk management decisions and not just be academic exercises. If the findings of this article are taken into account, together with BRF’s evaluations of specific methodologies, then risk analyses of existential catastrophe may tend to be more successful at understanding and reducing the risks.",http://www.sciencedirect.com/science/article/pii/S0016328720300987,2020,journalArticle,"Baum, Seth D.",Futures AISC4: Research Summaries,"The fourth AI Safety Camp took place in May 2020 in Toronto. Due to COVID-19, the camp was held virtually. Six teams participated and worked on the following topics: Survey on AI risk scenarios Opt…",https://aisafety.camp/2020/05/30/aisc4-research-summaries/,2020,blogPost,"Kosch, Sebastian",AI Safety Camp Moral Anti-Realism Sequence #3: Against Irreducible Normativity,"This is the third post in my sequence on moral anti-realism; it works well as a standalone piece. (See 1 and 2 for my previous posts.) SUMMARY * After briefly explaining the concept of irreducible normativity, I delve into a three-tiered argument against the moral realism versions based on it. 
* First, I summarize evolutionary debunking arguments that show that our intuitions about normative bedrock concepts (especially morality) cannot be trusted. Those arguments aim to establish that regardless of whether there are irreducible normative truths, our intuitions about them evolved to track something else. This is problematic both because it means that the search for moral progress is likely doomed and because it calls into question the reasons for taking irreducible normativity seriously in the first place. * Secondly, I try to change the perception of normative anti-realism as a self-defeating framework. Through careful consideration of the sources of meaning in our lives, I argue that those sources are compatible with normative anti-realism (at least for most of us). I provide a sketch of what it could look like for anti-realists to reason about ethics, pointing out some ways in which self-determined moral goals can feel more meaningful than externally-imposed ones. * Thirdly, I note that the way irreducible normativity is commonly motivated stands in tension with how words obtain their meaning. I then delve into various ways how one could try to make irreducible normativity work as a concept. Some options are too disconnected from what we want to do, and others are too close to it (in the sense that, as far as practical purposes are concerned, they overlap with how anti-realists would also approach normativity). I note that the most attractive way to think about irreducible normativity closely resembles normative naturalism. Finally, I conclude that if the arguments in this post are sound, there'",https://forum.effectivealtruism.org/posts/C2GpA894CfLcTXL2L/moral-anti-realism-sequence-3-against-irreducible,2020,blogPost,"Gloor, Lukas",Effective Altruism Forum Risks of the Journey to the Singularity,"SummaryMany researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. Unlike current AI systems, individual AGIs would be capable of learning to operate in a wide variety of domains, including ones they had not been specifically designed for. It has been proposed that AGIs might eventually pose a significant risk to humanity, for they could accumulate significant amounts of power and influence in society while being indifferent to what humans valued. The accumulation of power might either happen gradually over time, or it might happen very rapidly (a so-called “hard takeoff”). Gradual accumulation would happen through normal economic mechanisms, as AGIs came to carry out an increasing share of economic tasks. 
A hard takeoff could be possible if AGIs required significantly less hardware to run than was available, or if they could redesign themselves to run at ever faster speeds, or if they could repeatedly redesign themselves into more intelligent versions of themselves.",https://doi.org/10.1007/978-3-662-54033-6_2,2017,bookSection,"Sotala, Kaj; Yampolskiy, Roman",The Technological Singularity: Managing the Journey Ethics of brain emulations,,,2014,journalArticle,"Sandberg, Anders",Journal of Experimental & Theoretical Artificial Intelligence """Why Should I Trust You?"": Explaining the Predictions of Any Classifier",,https://dl.acm.org/doi/10.1145/2939672.2939778,2016,conferencePaper,"Ribeiro, Marco Tulio; Singh, Sameer; Guestrin, Carlos",Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining Future progress in artificial intelligence: A survey of expert opinion,,,2016,bookSection,"Müller, Vincent C.; Bostrom, Nick",Fundamental issues of artificial intelligence Secure multi-party computation problems and their applications: a review and open problems,,http://portal.acm.org/citation.cfm?doid=508171.508174,2001,conferencePaper,"Du, Wenliang; Atallah, Mikhail J.",Proceedings of the 2001 workshop on New security paradigms - NSPW '01 Regulating Artificial Intelligence: Proposal for a Global Solution,"With increasing ubiquity of artificial intelligence (AI) in modern societies, individual countries and the international community are working hard to create an innovation-friendly, yet safe, regulatory environment. Adequate regulation is key to maximize the benefits and minimize the risks stemming from AI technologies. Developing regulatory frameworks is, however, challenging due to AI's global reach and the existence of widespread misconceptions about the notion of regulation. We argue that AI-related challenges cannot be tackled effectively without sincere international coordination supported by robust, consistent domestic and international governance arrangements. Against this backdrop, we propose the establishment of an international AI governance framework organized around a new AI regulatory agency that -- drawing on interdisciplinary expertise -- could help creating uniform standards for the regulation of AI technologies and inform the development of AI policies around the world. We also believe that a fundamental change of mindset on what constitutes regulation is necessary to remove existing barriers that hamper contemporary efforts to develop AI regulatory regimes, and put forward some recommendations on how to achieve this, and what opportunities doing so would present.",http://arxiv.org/abs/2005.11072,2020,conferencePaper,"Erdélyi, Olivia J.; Goldsmith, Judy","Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society" Verification Of Non-Linear Specifications For Neural Networks,"Prior work on neural network verification has focused on specifications that are linear functions of the output of the network, e.g., invariance of the classifier output under adversarial perturbations of the input. In this paper, we extend verification algorithms to be able to certify richer properties of neural networks. To do this we introduce the class of convex-relaxable specifications, which constitute nonlinear specifications that can be verified using a convex relaxation. 
We show that a number of important properties of interest can be modeled within this class, including conservation of energy in a learned dynamics model of a physical system; semantic consistency of a classifier’s output labels under adversarial perturbations and bounding errors in a system that predicts the summation of handwritten digits. Our experimental evaluation shows that our method is able to effectively verify these specifications. Moreover, our evaluation exposes the failure modes in models which cannot be verified to satisfy these specifications. Thus, emphasizing the importance of training models not just to fit training data but also to be consistent with specifications.",,2019,conferencePaper,"Qin, Chongli; O’Donoghue, Brendan; Stanforth, Robert; Gowal, Sven; Uesato, Jonathan; Swirszcz, Grzegorz; Kohli, Pushmeet", Sample-Efficient Imitation Learning via Generative Adversarial Nets,"GAIL is a recent successful imitation learning architecture that exploits the adversarial training procedure introduced in GANs. Albeit successful at generating behaviours similar to those demonstrated to the agent, GAIL suffers from a high sample complexity in the number of interactions it has to carry out in the environment in order to achieve satisfactory performance. We dramatically shrink the amount of interactions with the environment necessary to learn well-behaved imitation policies, by up to several orders of magnitude. Our framework, operating in the model-free regime, exhibits a significant increase in sample-efficiency over previous methods by simultaneously a) learning a self-tuned adversarially-trained surrogate reward and b) leveraging an off-policy actor-critic architecture. We show that our approach is simple to implement and that the learned agents remain remarkably stable, as shown in our experiments that span a variety of continuous control tasks. Video visualisations available at: \url{https://youtu.be/-nCsqUJnRKU}.",https://arxiv.org/abs/1809.02064v3,2018,conferencePaper,"Blondé, Lionel; Kalousis, Alexandros",Proceedings of Machine Learning Research Certified Defenses against Adversarial Examples,"While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, we study this problem for neural networks with one hidden layer. We first propose a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value. Second, as this certificate is differentiable, we jointly optimize it with the network parameters, providing an adaptive regularizer that encourages robustness against all attacks. On MNIST, our approach produces a network and a certificate that no attack that perturbs each pixel by at most \epsilon = 0.1 can cause more than 35% test error.",http://arxiv.org/abs/1801.09344,2018,conferencePaper,"Raghunathan, Aditi; Steinhardt, Jacob; Liang, Percy", Sequential Extensions of Causal and Evidential Decision Theory,"Moving beyond the dualistic view in AI where agent and environment are separated incurs new challenges for decision making, as calculation of expected utility is no longer straightforward. 
The non-dualistic decision theory literature is split between causal decision theory and evidential decision theory. We extend these decision algorithms to the sequential setting where the agent alternates between taking actions and observing their consequences. We find that evidential decision theory has two natural extensions while causal decision theory only has one.",http://arxiv.org/abs/1506.07359,2015,conferencePaper,"Everitt, Tom; Leike, Jan; Hutter, Marcus",ADT 2015: Algorithmic Decision Theory Canaries in Technology Mines: Warning Signs of Transformative Progress in AI,,,2020,conferencePaper,"Cremer, Carla Zoe; Whittlestone, Jess", Conservative Agency via Attainable Utility Preservation,"Reward functions are often misspecified. An agent optimizing an incorrect reward function can change its environment in large, undesirable, and potentially irreversible ways. Work on impact measurement seeks a means of identifying (and thereby avoiding) large changes to the environment. We propose a novel impact measure which induces conservative, effective behavior across a range of situations. The approach attempts to preserve the attainable utility of auxiliary objectives. We evaluate our proposal on an array of benchmark tasks and show that it matches or outperforms relative reachability, the state-of-the-art in impact measurement.",http://arxiv.org/abs/1902.09725,2020,conferencePaper,"Turner, Alexander Matt; Hadfield-Menell, Dylan; Tadepalli, Prasad","AIES '20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society" Machines and the Theory of Intelligence,,http://www.nature.com/articles/241507a0,1973,journalArticle,"Michie, Donald",Nature Understanding RL Vision,"With diverse environments, we can analyze, diagnose and edit deep reinforcement learning models using attribution.",https://distill.pub/2020/understanding-rl-vision,2020,journalArticle,"Hilton, Jacob; Cammarata, Nick; Carter, Shan; Goh, Gabriel; Olah, Chris",Distill The Technological Singularity,,https://direct.mit.edu/books/book/4072/the-technological-singularity,2015,book,"Shanahan, Murray", FPR -- Fast Path Risk Algorithm to Evaluate Collision Probability,"As mobile robots and autonomous vehicles become increasingly prevalent in human-centred environments, there is a need to control the risk of collision. Perceptual modules, for example machine vision, provide uncertain estimates of object location. In that context, the frequently made assumption of an exactly known free-space is invalid. Clearly, no paths can be guaranteed to be collision free. Instead, it is necessary to compute the probabilistic risk of collision on any proposed path. The FPR algorithm, proposed here, efficiently calculates an upper bound on the risk of collision for a robot moving on the plane. That computation orders candidate trajectories according to (the bound on) their degree of risk. Then paths within a user-defined threshold of primary risk could be selected according to secondary criteria such as comfort and efficiency. The key contribution of this paper is the FPR algorithm and its `convolution trick' to factor the integrals used to bound the risk of collision. 
As a consequence of the convolution trick, given $K$ obstacles and $N$ candidate paths, the computational load is reduced from the naive $O(NK)$, to the qualitatively faster $O(N+K)$.",http://arxiv.org/abs/1804.05384,2020,journalArticle,"Blake, Andrew; Bordallo, Alejandro; Brestnichki, Kamen; Hawasly, Majd; Penkov, Svetlin; Ramamoorthy, Subramanian; Silva, Alexandre",IEEE Robotics and Automation Letters Multi-Agent Generative Adversarial Imitation Learning,"Imitation learning algorithms can be used to learn a policy from expert demonstrations without access to a reward signal. However, most existing approaches are not applicable in multi-agent settings due to the existence of multiple (Nash) equilibria and non-stationary environments. We propose a new framework for multi-agent imitation learning for general Markov games, where we build upon a generalized notion of inverse reinforcement learning. We further introduce a practical multi-agent actor-critic algorithm with good empirical performance. Our method can be used to imitate complex behaviors in high-dimensional environments with multiple cooperative or competing agents.",http://arxiv.org/abs/1807.09936,2018,conferencePaper,"Song, Jiaming; Ren, Hongyu; Sadigh, Dorsa; Ermon, Stefano",Advances in Neural Information Processing Systems 31 (NeurIPS 2018) Gotta Learn Fast: A New Benchmark for Generalization in RL,"In this report, we present a new reinforcement learning (RL) benchmark based on the Sonic the Hedgehog (TM) video game franchise. This benchmark is intended to measure the performance of transfer learning and few-shot learning algorithms in the RL domain. We also present and evaluate some baseline algorithms on the new benchmark.",http://arxiv.org/abs/1804.03720,2018,manuscript,"Nichol, Alex; Pfau, Vicki; Hesse, Christopher; Klimov, Oleg; Schulman, John", Blueberry Earth,,https://arxiv.org/abs/1807.10553,2018,manuscript,"Sandberg, Anders", Flavors of Computation Are Flavors of Consciousness,"If we don't understand why we're conscious, how come we're so sure that extremely simple minds are not? I propose to think of consciousness as intrinsic to computation, although different types of computation may have very different types of consciousness – some so alien that we can't imagine them. Since all physical processes are computations, […]",https://longtermrisk.org/flavors-of-computation-are-flavors-of-consciousness/,2015,blogPost,"Tomasik, Brian",Center on Long-Term Risk Towards an integrated assessment of global catastrophic risk,,,2017,conferencePaper,"Baum, Seth; Barrett, Anthony","Catastrophic and Existential Risk: Proceedings of the First Colloquium, Garrick Institute for the Risk Sciences, University of California, Los Angeles, Forthcoming" Exploratory Not Explanatory: Counterfactual Analysis of Saliency Maps for Deep Reinforcement Learning,"Saliency maps are frequently used to support explanations of the behavior of deep reinforcement learning (RL) agents. However, a review of how saliency maps are used in practice indicates that the derived explanations are often unfalsifiable and can be highly subjective. We introduce an empirical approach grounded in counterfactual reasoning to test the hypotheses generated from saliency maps and assess the degree to which they correspond to the semantics of RL environments. We use Atari games, a common benchmark for deep RL, to evaluate three types of saliency maps. 
Our results show the extent to which existing claims about Atari games can be evaluated and suggest that saliency maps are best viewed as an exploratory tool rather than an explanatory tool.",http://arxiv.org/abs/1912.05743,2020,conferencePaper,"Atrey, Akanksha; Clary, Kaleigh; Jensen, David", "A randomized controlled trial of the computerized CBT programme, MoodGYM, for public mental health service users waiting for interventions",,http://doi.wiley.com/10.1111/bjc.12055,2014,journalArticle,"Twomey, Conal; O'Reilly, Gary; Byrne, Michael; Bury, Matthew; White, Aisling; Kissane, Sheila; McMahon, Aisling; Clancy, Nicola",British Journal of Clinical Psychology Progress in general purpose factoring,"The largest number factored to date grew by about 4.5 decimal digits per year over the past roughly half-century. Between 1988, when we first have good records, and 2009, when the largest number to date was factored, progress was roughly 6 decimal digits per year. Progress was relatively smooth during the two decades for which we have good records, with half of...",https://aiimpacts.org/progress-in-general-purpose-factoring/,2017,blogPost,AI Impacts,AI Impacts Structure Learning for Approximate Solution of Many-Player Games,"Games with many players are difficult to solve or even specify without adopting structural assumptions that enable representation in compact form. Such structure is generally not given and will not hold exactly for particular games of interest. We introduce an iterative structure-learning approach to search for approximate solutions of many-player games, assuming only black-box simulation access to noisy payoff samples. Our first algorithm, KRoles, exploits symmetry by learning a role assignment for players of the game through unsupervised learning (clustering) methods. Our second algorithm, G3L, seeks sparsity by greedy search over local interactions to learn a graphical game model. Both algorithms use supervised learning (regression) to fit payoff values to the learned structures, in compact representations that facilitate equilibrium calculation. We experimentally demonstrate the efficacy of both methods in reaching quality solutions and uncovering hidden structure, on both perfectly and approximately structured game instances.",https://aaai.org/ojs/index.php/AAAI/article/view/5586,2020,conferencePaper,"Li, Zun; Wellman, Michael",Proceedings of the AAAI Conference on Artificial Intelligence An automatic method for discovering rational heuristics for risky choice,"What is the optimal way to make a decision given that your time is limited and your cognitive resources are bounded? To answer this question, we formalized the bounded optimal decision process as the solution to a meta-level Markov decision process whose actions are costly computations. We approximated the optimal solution and evaluated its predictions against human choice behavior in the Mouselab paradigm, which is widely used to study decision strategies. Our computational method rediscovered well-known heuristic strategies and the conditions under which they are used, as well as novel heuristics. A Mouselab experiment confirmed our model’s main predictions. 
These findings are a proof-of-concept that optimal cognitive strategies can be automatically derived as the rational use of finite time and bounded cognitive resources.",,2017,conferencePaper,"Lieder, Falk; Krueger, Paul M; Griffiths, Thomas L", Robustness of Neural Networks against Storage Media Errors,"We study the trade-offs between storage/bandwidth and prediction accuracy of neural networks that are stored in noisy media. Conventionally, it is assumed that all parameters (e.g., weight and biases) of a trained neural network are stored as binary arrays and are error-free. This assumption is based upon the implementation of error correction codes (ECCs) that correct potential bit flips in storage media. However, ECCs add storage overhead and cause bandwidth reduction when loading the trained parameters during the inference. We study the robustness of deep neural networks when bit errors exist but ECCs are turned off for different neural network models and datasets. It is observed that more sophisticated models and datasets are more vulnerable to errors in their trained parameters. We propose a simple detection approach that can universally improve the robustness, which in some cases can be improved by orders of magnitude. We also propose an alternative binary representation of the parameters such that the distortion brought by bit flips is reduced and even theoretically vanishing when the number of bits to represent a parameter increases.",http://arxiv.org/abs/1709.06173,2017,manuscript,"Qin, Minghai; Sun, Chao; Vucinic, Dejan", The End of Economic Growth? Unintended Consequences of a Declining Population,"In many models, economic growth is driven by people discovering new ideas. These models typically assume either a constant or growing population. However, in high income countries today, fertility is already below its replacement rate: women are having fewer than two children on average. It is a distinct possibility —highlighted in the recent book, Empty Planet — that global population will decline rather than stabilize in the long run. In standard models, this turns out to have profound implications: rather than continued exponential growth, living standards stagnate for a population that vanishes.",http://www.nber.org/papers/w26651.pdf,2020,report,"Jones, Charles", AI Safety and Reproducibility: Establishing Robust Foundations for the Neuropsychology of Human Values,We propose the creation of a systematic effort to identify and replicate key findings in neuropsychology and allied fields related to understanding human values. Our aim is to ensure that research underpinning the value alignment problem of artificial intelligence has been sufficiently validated to play a role in the design of AI systems.,http://arxiv.org/abs/1712.04307,2018,conferencePaper,"Sarma, Gopal P.; Hay, Nick J.; Safron, Adam",Lecture Notes in Computer Science Resilience to global catastrophe,,,2018,journalArticle,"Baum, Seth D.",Domains of resilience for complex interconnected systems. Moral Decision Making Frameworks for Artificial Intelligence,"The generality of decision and game theory has enabled domain-independent progress in AI research. For example, a better algorithm for finding good policies in (PO)MDPs can be instantly used in a variety of applications. But such a general theory is lacking when it comes to moral decision making. For AI applications with a moral component, are we then forced to build systems based on many ad-hoc rules? 
In this paper we discuss possible ways to avoid this conclusion.",http://moralai.cs.duke.edu/documents/mai_docs/moralAAAI17.pdf,2017,conferencePaper,"Conitzer, Vincent; Sinnott-Armstrong, Walter; Borg, Jana Schaich; Deng, Yuan; Kramer, Max","AAAI Workshops, 2017" Troubling Trends in Machine Learning Scholarship,"Collectively, machine learning (ML) researchers are engaged in the creation and dissemination of knowledge about data-driven algorithms. In a given paper, researchers might aspire to any subset of the following goals, among others: to theoretically characterize what is learnable, to obtain understanding through empirically rigorous experiments, or to build a working system that has high predictive accuracy. While determining which knowledge warrants inquiry may be subjective, once the topic is fixed, papers are most valuable to the community when they act in service of the reader, creating foundational knowledge and communicating as clearly as possible. Recent progress in machine learning comes despite frequent departures from these ideals. In this paper, we focus on the following four patterns that appear to us to be trending in ML scholarship: (i) failure to distinguish between explanation and speculation; (ii) failure to identify the sources of empirical gains, e.g., emphasizing unnecessary modifications to neural architectures when gains actually stem from hyper-parameter tuning; (iii) mathiness: the use of mathematics that obfuscates or impresses rather than clarifies, e.g., by confusing technical and non-technical concepts; and (iv) misuse of language, e.g., by choosing terms of art with colloquial connotations or by overloading established technical terms. While the causes behind these patterns are uncertain, possibilities include the rapid expansion of the community, the consequent thinness of the reviewer pool, and the often-misaligned incentives between scholarship and short-term measures of success (e.g., bibliometrics, attention, and entrepreneurial opportunity). While each pattern offers a corresponding remedy (don't do it), we also discuss some speculative suggestions for how the community might combat these trends.",http://arxiv.org/abs/1807.03341,2018,journalArticle,"Lipton, Zachary C.; Steinhardt, Jacob",Queue A case for strategy research: what it is and why we need more of it,"Authors: Siebe Rozendal, Justin Shovelain, David Kristoffersson Crossposted to LessWrong OVERVIEW To achieve any ambitious goal, some strategic analysis is necessary. Effective altruism has ambitious goals and focuses heavily on doing research. To understand how to best allocate our time and resources, we need to clarify what our options in research are. In this article, we describe strategy research and relate it to values research, tactics research, informing research, and improvement research. We then apply the lens of strategy research to existential risk reduction, a major cause area of effective altruism. We propose a model in which the marginal value of a research type depends strongly on the maturity of the research field. Finally, we argue that strategy research should currently be given higher priority than other research in existential risk reduction because of the significant amount of strategic uncertainty, and we provide specific recommendations for different actors. INTRODUCTION Effective altruism is regularly framed as “figuring out how to do the most good, and then doing it.” However, figuring out how to do the most good is not easy. 
Different groups reach different conclusions. So how do we figure out how to do the most good? Quite obviously, the first step is to figure out our values. We need to know what we roughly mean by ‘the most good.’ However, once our moral uncertainty is significantly diminished, what is the next step in figuring out how to do the most good? We believe the next step should be strategy research: high-level research on how to best achieve a high-level goal. A brief case was made for strategic analysis by Nick Bostrom in Superintelligence (p. 317): ""Against a backdrop of perplexity and uncertainty, [strategic] analysis stands out as being of particularly high expected value. Illumination of our strategic situation would help us target subsequent interventions more effectively. Strategic analysis is especially needful wh",https://forum.effectivealtruism.org/posts/oovy5XXdCL3TPwgLE/a-case-for-strategy-research-what-it-is-and-why-we-need-more,2019,blogPost,"Rozendal, Siebe; Shovelain, Justin; Kristoffersson, David",Effective Altruism Forum Toward a strategic human resource management model of high reliability organization performance,,http://www.tandfonline.com/doi/abs/10.1080/09585190500120731,2005,journalArticle,"Ericksen, Jeff; Dyer, Lee",The International Journal of Human Resource Management Persuasion Tools: AI takeover without AGI or agency?,"[epistemic status: speculation] I'm envisioning that in the future there will also be systems where you can input any conclusion that you want to argue (including moral conclusions) and the target audience, and the system will give you the most convincing arguments for it. At that point people won't be able to participate in any online (or offline for that matter) discussions without risking their object-level values being hijacked.--Wei Dai What if most people already live in that world? A world in which taking arguments at face value is not a capacity-enhancing tool, but a security vulnerability? Without trusted filters, would they not dismiss highfalutin arguments out of hand, and focus on whether the person making the argument seems friendly, or unfriendly, using hard to fake group-affiliation signals?--Benquo 1. AI-powered memetic warfare makes all humans effectively insane.--Wei Dai, listing nonstandard AI doom scenarios This post speculates about persuasion tools—how likely they are to get better in the future relative to countermeasures, what the effects of this might be, and what implications there are for what we should do now. To avert eye-rolls, let me say up front that I don’t think the world is likely to be driven insane by AI-powered memetic warfare. I think progress in persuasion tools will probably be gradual and slow, and defenses will improve too, resulting in an overall shift in the balance that isn’t huge: a deterioration of collective epistemology, but not a massive one. However, (a) I haven’t yet ruled out more extreme scenarios, especially during a slow takeoff, and (b) even small, gradual deteriorations are important to know about. Such a deterioration would make it harder for society to notice and solve AI safety and governance problems, because it is worse at noticing and solving problems in general. Such a deterioration could also be a risk factor for world war three, revolutions, sectarian conflict, terrorism, and the like. 
Moreover",https://www.lesswrong.com/posts/qKvn7rxP2mzJbKfcA/persuasion-tools-ai-takeover-without-agi-or-agency,2020,blogPost,"Kokotajlo, Daniel",LessWrong Review on Computational Trust and Reputation Models,,http://link.springer.com/10.1007/s10462-004-0041-5,2005,journalArticle,"Sabater, Jordi; Sierra, Carles",Artificial Intelligence Review Written Evidence - Long-term Catastrophic Risk from Artificial Intelligence,,http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/artificial-intelligence-committee/artificial-intelligence/written/75539.html,2017,report,"Belfield, Haydn; Ó hÉigeartaigh, Seán", Certified Adversarial Robustness for Deep Reinforcement Learning,"Deep Neural Network-based systems are now the state-of-the-art in many robotics tasks, but their application in safety-critical domains remains dangerous without formal guarantees on network robustness. Small perturbations to sensor inputs (from noise or adversarial examples) are often enough to change network-based decisions, which was recently shown to cause an autonomous vehicle to swerve into another lane. In light of these dangers, numerous algorithms have been developed as defensive mechanisms from these adversarial inputs, some of which provide formal robustness guarantees or certificates. This work leverages research on certified adversarial robustness to develop an online certifiably robust for deep reinforcement learning algorithms. The proposed defense computes guaranteed lower bounds on state-action values during execution to identify and choose a robust action under a worst-case deviation in input space due to possible adversaries or noise. Moreover, the resulting policy comes with a certificate of solution quality, even though the true state and optimal action are unknown to the certifier due to the perturbations. The approach is demonstrated on a Deep Q-Network policy and is shown to increase robustness to noise and adversaries in pedestrian collision avoidance scenarios and a classic control task. This work extends one of our prior works with new performance guarantees, extensions to other RL algorithms, expanded results aggregated across more scenarios, an extension into scenarios with adversarial behavior, comparisons with a more computationally expensive method, and visualizations that provide intuition about the robustness algorithm.",http://arxiv.org/abs/2004.06496,2020,conferencePaper,"Everett, Michael; Lutjens, Bjorn; How, Jonathan P.","3rd Conference on Robot Learning (CoRL 2019)," A Model for the Probability of Nuclear War,"The probability of nuclear war is a major factor in many important policy questions, but it has gotten little scholarly attention. This paper presents a model for calculating the total probability of nuclear war. The model is based on 14 interrelated scenarios for how nuclear war can break out, covering perhaps the entire range of nuclear war scenarios. Scenarios vary based on factors including whether a state intends to make a first strike attack, whether the nuclear attack is preceded by a conventional war or a non-war crisis, whether escalation is intentional or inadvertent, the presence of false alarms of various types, and the presence of non-war nuclear detonations such as nuclear terrorism. As a first step towards quantifying the probability of nuclear war using the model, the paper also includes a dataset of historical incidents that might have threatened to turn into nuclear war. 
60 historical incidents are included, making it perhaps the largest such dataset currently available. The paper also includes background information about probabilistic analysis and modeling to help readers understand how to think about the probability of nuclear war, including new theory for the decision to initiate nuclear war.",https://papers.ssrn.com/abstract=3137081,2018,report,"Baum, Seth; de Neufville, Robert; Barrett, Anthony", "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation","This report surveys the landscape of potential security threats from malicious uses of AI, and proposes ways to better forecast, prevent, and mitigate these threats. After analyzing the ways in which AI may influence the threat landscape in the digital, physical, and political domains, we make four high-level recommendations for AI researchers and other stakeholders. We also suggest several promising areas for further research that could expand the portfolio of defenses, or make attacks less effective or harder to execute. Finally, we discuss, but do not conclusively resolve, the long-term equilibrium of attackers and defenders.",http://arxiv.org/abs/1802.07228,2018,report,"Brundage, Miles; Avin, Shahar; Clark, Jack; Toner, Helen; Eckersley, Peter; Garfinkel, Ben; Dafoe, Allan; Scharre, Paul; Zeitzoff, Thomas; Filar, Bobby; Anderson, Hyrum; Roff, Heather; Allen, Gregory C.; Steinhardt, Jacob; Flynn, Carrick; hÉigeartaigh, Seán Ó; Beard, Simon; Belfield, Haydn; Farquhar, Sebastian; Lyle, Clare; Crootof, Rebecca; Evans, Owain; Page, Michael; Bryson, Joanna; Yampolskiy, Roman; Amodei, Dario", Making Low Probabilities Useful,,http://link.springer.com/10.1023/A:1011111601406,2001,journalArticle,"Kunreuther, Howard; Novemsky, Nathan; Kahneman, Daniel",Journal of Risk and Uncertainty Taxonomy of Pathways to Dangerous AI,"In order to properly handle a dangerous Artificially Intelligent (AI) system it is important to understand how the system came to be in such a state. In popular culture (science fiction movies/books) AIs/Robots became self-aware and as a result rebel against humanity and decide to destroy it. While it is one possible scenario, it is probably the least likely path to appearance of dangerous AI. In this work, we survey, classify and analyze a number of circumstances, which might lead to arrival of malicious AI. To the best of our knowledge, this is the first attempt to systematically classify types of pathways leading to malevolent AI. 
Previous relevant work either surveyed specific goals/meta-rules which might lead to malevolent behavior in AIs (Özkural, 2014) or reviewed specific undesirable behaviors AGIs can exhibit at different stages of its development (Alexey Turchin, July 10 2015, July 10, 2015).",http://arxiv.org/abs/1511.03246,2015,conferencePaper,"Yampolskiy, Roman V.","The Workshops of the Thirtieth AAAI Conference on Artificial Intelligence AI, Ethics, and Society" Formalizing human-robot mutual adaptation: A bounded memory model,,http://ieeexplore.ieee.org/document/7451736/,2016,conferencePaper,"Nikolaidis, Stefanos; Kuznetsov, Anton; Hsu, David; Srinivasa, Siddhartha",2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI) Systems of Services as a Paradigm for AI Alignment,,https://www.alignmentforum.org/posts/z2ofM2oZQwmcWFt8N/ai-services-as-a-research-paradigm,2020,blogPost,"Kovarik, Vojta",AI Alignment Forum Universal Artificial Intelligence: Practical Agents and Fundamental Challenges,,http://link.springer.com/10.1007/978-3-319-64816-3_2,2018,bookSection,"Everitt, Tom; Hutter, Marcus",Foundations of Trusted Autonomy Learning Dexterous In-Hand Manipulation,"We use reinforcement learning (RL) to learn dexterous in-hand manipulation policies which can perform vision-based object reorientation on a physical Shadow Dexterous Hand. The training is performed in a simulated environment in which we randomize many of the physical properties of the system like friction coefficients and an object's appearance. Our policies transfer to the physical robot despite being trained entirely in simulation. Our method does not rely on any human demonstrations, but many behaviors found in human manipulation emerge naturally, including finger gaiting, multi-finger coordination, and the controlled use of gravity. Our results were obtained using the same distributed RL system that was used to train OpenAI Five. We also include a video of our results: https://youtu.be/jwSbzNHGflM",https://journals.sagepub.com/doi/full/10.1177/0278364919887447,2019,journalArticle,"OpenAI; Andrychowicz, Marcin; Baker, Bowen; Chociej, Maciek; Jozefowicz, Rafal; McGrew, Bob; Pachocki, Jakub; Petron, Arthur; Plappert, Matthias; Powell, Glenn; Ray, Alex; Schneider, Jonas; Sidor, Szymon; Tobin, Josh; Welinder, Peter; Weng, Lilian; Zaremba, Wojciech",The International Journal of Robotics Research Testing the Automation Revolution Hypothesis,"Recently, many have predicted an imminent automation revolution, and large resulting job losses. Others have created metrics to predict new patterns in job automation vulnerability. As context to such claims, we test basic theory, two vulnerability metrics, and 251 O*NET job features as predictors of 1505 expert reports regarding automation levels in 832 U.S. job types from 1999 to 2019. We find that pay, employment, and vulnerability metrics are predictive (R^2~0.15), but add little to the top 25 O*NET job features, which together predict far better (R^2~0.55). These best predictors seem understandable in terms of traditional kinds of automation, and have not changed over our time period. Instead, it seems that jobs have changed their features to become more suitable for automation. We thus find no evidence yet of a revolution in the patterns or quantity of automation. 
And since, over this period, automation increases have predicted neither changes in pay nor employment, this suggests that workers have little to fear if such a revolution does come.",https://www.sciencedirect.com/science/article/abs/pii/S0165176520301919,2020,journalArticle,"Scholl, Keller; Hanson, Robin",Economics Letters Possible takeaways from the coronavirus pandemic for slow AI takeoff,"Epistemic status: fairly speculative, would appreciate feedback As the covid-19 pandemic unfolds, we can draw lessons from it for managing future global risks, such as other pandemics, climate change, and risks from advanced AI. In this post, I will focus on possible implications for AI risk. For a broader treatment of this question, I recommend FLI's covid-19 page that includes expert interviews on the implications of the pandemic for other types of risks. A key element in AI risk scenarios is the speed of takeoff - whether advanced AI is developed gradually or suddenly. Paul Christiano's post on takeoff speeds defines slow takeoff in terms of the economic impact of AI as follows: ""There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles."" It argues that slow AI takeoff is more likely than fast takeoff, but is not necessarily easier to manage, since it poses different challenges, such as large-scale coordination. This post expands on this point by examining some parallels between the coronavirus pandemic and a slow takeoff scenario. The upsides of slow takeoff include the ability to learn from experience, act on warning signs, and reach a timely consensus that there is a serious problem. I would argue that the covid-19 pandemic had these properties, but most of the world's institutions did not take advantage of them. This suggests that, unless our institutions improve, we should not expect the slow AI takeoff scenario to have a good default outcome. 1. Learning from experience. In the slow takeoff scenario, general AI is expected to appear in a world that has already experienced transformative change from less advanced AI, and institutions will have a chance to learn from problems with these AI systems. An analogy could be made with learning from dealing with less ""advanced"" epidemics like SARS that were not as successful as covid-19 at spreading across the worl",https://www.alignmentforum.org/posts/wTKjRFeSjKLDSWyww/possible-takeaways-from-the-coronavirus-pandemic-for-slow-ai,2020,blogPost,"Krakovna, Victoria",AI Alignment Forum Automating Cyber Attacks,"Based on an in-depth analysis of artificial intelligence and machine learning systems, the authors consider the future of applying such systems to cyber attacks, and what strategies attackers are likely or less likely to use. As nuanced, complex, and overhyped as machine learning is, they argue, it remains too important to ignore.",https://live-cset-georgetown.pantheonsite.io/research/automating-cyber-attacks/,2020,report,"Buchanan, Ben; Bansemer, John; Cary, Dakota; Lucas, Jack; Musser, Micah", The Psychology of Existential Risk: Moral Judgments about Human Extinction,"The 21st century will likely see growing risks of human extinction, but currently, relatively small resources are invested in reducing such existential risks. Using three samples (UK general public, US general public, and UK students; total N = 2,507), we study how laypeople reason about human extinction. We find that people think that human extinction needs to be prevented. 
Strikingly, however, they do not think that an extinction catastrophe would be uniquely bad relative to near-extinction catastrophes, which allow for recovery. More people find extinction uniquely bad when (a) asked to consider the extinction of an animal species rather than humans, (b) asked to consider a case where human extinction is associated with less direct harm, and (c) they are explicitly prompted to consider long-term consequences of the catastrophes. We conclude that an important reason why people do not find extinction uniquely bad is that they focus on the immediate death and suffering that the catastrophes cause for fellow humans, rather than on the long-term consequences. Finally, we find that (d) laypeople—in line with prominent philosophical arguments—think that the quality of the future is relevant: they do find extinction uniquely bad when this means forgoing a utopian future.",https://www.nature.com/articles/s41598-019-50145-9,2019,journalArticle,"Schubert, Stefan; Caviola, Lucius; Faber, Nadira S.",Scientific Reports On the wrongness of human extinction,"In recent papers, Elizabeth Finneron-Burns and Johann Frick have both argued that it is not a wrong-making feature of human extinction that it would cause many potential people with lives worth living never to be born, and hence that causing human extinction would be, in at least one way, less wrong than many have thought. In making these arguments, both assume that merely possible future people cannot be harmed by their nonexistence, and thus do not have any claim to be brought into existence. In this paper, we raise objections to their arguments and suggest that there is nothing inherent in the moral theories they put forward that implies future people cannot have this sort of ‘existential’ claim. In doing so, we draw on the work of Derek Parfit, who argued, in a recent paper, that coming into existence benefits a person in that it is ‘good for’ them, even if it is not ‘better for’ them than non-existence. We also find that many of their objections to the view that it is wrong not to bring future people into existence rest on the assumption that, were these people to have claims on us, these must be equivalent to the claims that existing people have to be benefitted. However, we show that Parfit’s work demonstrates how this is not the case.",http://argumenta.uniss.it/wp-content/uploads/2020/01/5-Argumenta-51-Simon-Beard-and-Patrick-Kaczmarek-On-the-Wrongness-of-Human-Extinction.pdf,2019,journalArticle,"Beard, Simon; Kaczmarek, Patrick",Argumenta How does iterated amplification exceed human abilities?,"When I first started learning about IDA, I thought that agents trained using IDA would be human-level after the first stage, i.e. that Distill(H) would be human-level. As I've written about before, Paul later clarified this, so my new understanding is that after the first stage, the distilled agent will be super-human in some respects and infra-human in others, but wouldn't be ""basically human"" in any sense. But IDA is aiming to eventually be super-human in almost every way (because it's aiming to be competitive with unaligned AGI), so that raises some new questions: 1. If IDA isn't going to be human-level after the first stage, then at what stage does IDA become at-least-human-level in almost every way? 2. What exactly is the limitation that prevents the first stage of IDA from being human-level in almost every way? 3. 
When IDA eventually does become at-least-human-level in almost every way, how is the limitation from (2) avoided? That brings me to Evans et al., which contains a description of IDA in section 0. The way IDA is set up in this paper leads me to believe that the answer to (2) above is that the human overseer cannot provide a sufficient number of demonstrations for the most difficult tasks. For example, maybe the human can provide enough demonstrations for the agent to learn to answer very simple questions (tasks in T0 in the paper) but it's too time-consuming for the human to answer enough complicated questions (say, in T100). My understanding is that IDA gets around this by having an amplified system that is itself automated (i.e. does not involve humans in a major way, so cannot be bottlenecked on the slowness of humans); this allows the amplified system to provide a sufficient number of demonstrations for the distillation step to work. So in the above view, the answer to (2) is that the limitation is the number of demonstrations the human can provide, and the answer to (3) is that the human can seed the IDA process with sufficient",https://www.alignmentforum.org/posts/ajQzejMYizfX4dMWK/how-does-iterated-amplification-exceed-human-abilities,2020,blogPost,"Rice, Issa",AI Alignment Forum Formalizing Two Problems of Realistic World-Models,"An intelligent agent embedded within the real world must reason about an environment which is larger than the agent, and learn how to achieve goals in that environment. We discuss attempts to formalize two problems: one of induction, where an agent must use sensory data to infer a universe which embeds (and computes) the agent, and one of interaction, where an agent must learn to achieve complex goals in the universe. We review related problems formalized by Solomonoff and Hutter, and explore challenges that arise when attempting to formalize analogous problems in a setting where the agent is embedded within the environment.",https://intelligence.org/2015/01/22/new-report-formalizing-two-problems-realistic-world-models/,2015,report,"Soares, Nate", Heart of DARCness,"We propose a valid core for the much-disputed thesis that Deliberation Crowds Out Prediction, and identify terminological causes for some of the apparent disputes.",https://doi.org/10.1080/00048402.2018.1427119,2019,journalArticle,"Liu, Yang; Price, Huw",Australasian Journal of Philosophy Cryopreservation of embryos and fetuses as a future option for family planning purposes,,,2015,journalArticle,"Minerva, Francesca; Sandberg, Anders",Journal of Evolution and Technology/WTA The case for taking AI seriously as a threat to humanity,"Why some people fear AI, explained.",https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment,2018,magazineArticle,"Piper, Kelsey",Vox Extracting Money from Causal Decision Theorists,"Newcomb’s problem has spawned a debate about which variant of expected utility maximization (if any) should guide rational choice. In this paper, we provide a new argument against what is probably the most popular variant: causal decision theory (CDT). In particular, we provide two scenarios in which CDT voluntarily loses money. In the first, an agent faces a single choice and following CDT’s recommendation yields a loss of money in expectation. 
The second scenario extends the first to a diachronic Dutch book against CDT.",http://ceur-ws.org/Vol-2640/paper_21.pdf,2019,conferencePaper,"Oesterheld, Caspar; Conitzer, Vincent",Proceedings of the Workshop on Artificial Intelligence Safety 2020 Racing to the precipice: a model of artificial intelligence development,,,2016,journalArticle,"Armstrong, Stuart; Bostrom, Nick; Shulman, Carl",AI & society Do the desires of rational agents converge?,,https://academic.oup.com/analysis/article-lookup/doi/10.1093/analys/59.3.137,1999,journalArticle,"Sobel, D.",Analysis Nanotechnology and Weapons,,,2019,journalArticle,"Sharon, Chetna; Drexler, K. Eric","Nanotechnology in the Defense Industry: Advances, Innovation, and Practical Applications" Winter-safe Deterrence: The Risk of Nuclear Winter and Its Challenge to Deterrence,,https://www.tandfonline.com/doi/full/10.1080/13523260.2015.1012346,2015,journalArticle,"Baum, Seth D.",Contemporary Security Policy Zoom In: An Introduction to Circuits,"By studying the connections between neurons, we can find meaningful algorithms in the weights of neural networks.",https://distill.pub/2020/circuits/zoom-in,2020,journalArticle,"Olah, Chris; Cammarata, Nick; Schubert, Ludwig; Goh, Gabriel; Petrov, Michael; Carter, Shan",Distill SFV: Reinforcement Learning of Physical Skills from Videos,"Data-driven character animation based on motion capture can produce highly naturalistic behaviors and, when combined with physics simulation, can provide for natural procedural responses to physical perturbations, environmental changes, and morphological discrepancies. Motion capture remains the most popular source of motion data, but collecting mocap data typically requires heavily instrumented environments and actors. In this paper, we propose a method that enables physically simulated characters to learn skills from videos (SFV). Our approach, based on deep pose estimation and deep reinforcement learning, allows data-driven animation to leverage the abundance of publicly available video clips from the web, such as those from YouTube. This has the potential to enable fast and easy design of character controllers simply by querying for video recordings of the desired behavior. The resulting controllers are robust to perturbations, can be adapted to new settings, can perform basic object interactions, and can be retargeted to new morphologies via reinforcement learning. We further demonstrate that our method can predict potential human motions from still images, by forward simulation of learned controllers initialized from the observed pose. Our framework is able to learn a broad range of dynamic skills, including locomotion, acrobatics, and martial arts.",http://arxiv.org/abs/1810.03599,2018,journalArticle,"Peng, Xue Bin; Kanazawa, Angjoo; Malik, Jitendra; Abbeel, Pieter; Levine, Sergey",ACM Transactions on Graphics Proof-Producing Reflection for HOL,"We present a reflection principle of the form “If ϕ is provable, then ϕ” implemented in the HOL4 theorem prover, assuming the existence of a large cardinal. We use the large-cardinal assumption to construct a model of HOL within HOL, and show how to ensure ϕ has the same meaning both inside and outside of this model. Soundness of HOL implies that if ϕ is provable, then it is true in this model, and hence ϕ holds. 
We additionally show how this reflection principle can be extended, assuming an infinite hierarchy of large cardinals, to implement model polymorphism, a technique designed for verifying systems with self-replacement functionality.",http://link.springer.com/10.1007/978-3-319-22102-1_11,2015,bookSection,"Fallenstein, Benja; Kumar, Ramana",Interactive Theorem Proving Improving Variational Inference with Inverse Autoregressive Flow,"The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables. We propose a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces. The proposed flow consists of a chain of invertible transformations, where each transformation is based on an autoregressive neural network. In experiments, we show that IAF significantly improves upon diagonal Gaussian approximate posteriors. In addition, we demonstrate that a novel type of variational autoencoder, coupled with IAF, is competitive with neural autoregressive models in terms of attained log-likelihood on natural images, while allowing significantly faster synthesis.",https://papers.nips.cc/paper/2016/hash/ddeebdeefdb7e7e7a697e1c3e3d8ef54-Abstract.html,2017,conferencePaper,"Kingma, Diederik P.; Salimans, Tim; Jozefowicz, Rafal; Chen, Xi; Sutskever, Ilya; Welling, Max",Advances in Neural Information Processing Systems 29 (NIPS 2016) Universality and model-based RL,"Ascription universality may be very helpful for safe model-based RL, facilitating benign induction and “transparent” models.",https://ai-alignment.com/universality-and-model-based-rl-b08701394ddd,2019,blogPost,"Christiano, Paul",AI Alignment (Medium) Fundamental issues of artificial intelligence,,https://link.springer.com/book/10.1007%2F978-3-319-26485-1,2016,book,, Explanation Augmented Feedback in Human-in-the-Loop Reinforcement Learning,"Human-in-the-loop Reinforcement Learning (HRL) aims to integrate human guidance with Reinforcement Learning (RL) algorithms to improve sample efficiency and performance. The usual human guidance in HRL is binary evaluative ""good"" or ""bad"" signal for queried states and actions. However, this suffers from the problems of weak supervision and poor efficiency in leveraging human feedback. To address this, we present EXPAND (Explanation Augmented Feedback) which allows for explanatory information to be given as saliency maps from the human in addition to the binary feedback. EXPAND employs a state perturbation approach based on the state salient information to augment the feedback, reducing the number of human feedback signals required. We choose two domains to evaluate this approach, Taxi and Atari-Pong. We demonstrate the effectiveness of our method on three metrics, environment sample efficiency, human feedback sample efficiency, and agent gaze. We show that our method outperforms our baselines. 
Finally, we present an ablation study to confirm our hypothesis that augmenting binary feedback with state salient information gives a boost in performance.",http://arxiv.org/abs/2006.14804,2020,manuscript,"Guan, Lin; Verma, Mudit; Kambhampati, Subbarao", Stifling artificial intelligence: Human perils,,https://linkinghub.elsevier.com/retrieve/pii/S0267364916300814,2016,journalArticle,"Gurkaynak, Gonenc; Yilmaz, Ilay; Haksever, Gunes",Computer Law & Security Review Learning What Information to Give in Partially Observed Domains,"In many robotic applications, an autonomous agent must act within and explore a partially observed environment that is unobserved by its human teammate. We consider such a setting in which the agent can, while acting, transmit declarative information to the human that helps them understand aspects of this unseen environment. In this work, we address the algorithmic question of how the agent should plan out what actions to take and what information to transmit. Naturally, one would expect the human to have preferences, which we model information-theoretically by scoring transmitted information based on the change it induces in weighted entropy of the human's belief state. We formulate this setting as a belief MDP and give a tractable algorithm for solving it approximately. Then, we give an algorithm that allows the agent to learn the human's preferences online, through exploration. We validate our approach experimentally in simulated discrete and continuous partially observed search-and-recover domains. Visit http://tinyurl.com/chitnis-corl-18 for a supplementary video.",http://arxiv.org/abs/1805.08263,2018,conferencePaper,"Chitnis, Rohan; Kaelbling, Leslie Pack; Lozano-Pérez, Tomás",arXiv:1805.08263 [cs] Verifier Theory and Unverifiability,"Despite significant developments in Proof Theory, surprisingly little attention has been devoted to the concept of proof verifier. In particular, the mathematical community may be interested in studying different types of proof verifiers (people, programs, oracles, communities, superintelligences) as mathematical objects. Such an effort could reveal their properties, their powers and limitations (particularly in human mathematicians), minimum and maximum complexity, as well as self-verification and self-reference issues. We propose an initial classification system for verifiers and provide some rudimentary analysis of solved and open problems in this important domain. Our main contribution is a formal introduction of the notion of unverifiability, for which the paper could serve as a general citation in domains of theorem proving, as well as software and AI verification.",http://arxiv.org/abs/1609.00331,2016,manuscript,"Yampolskiy, Roman V.", Moral Philosophy Will Become Part of the Tech Industry,"Robots might not need rights, but they'll need to know right and wrong",https://time.com/collection-post/4026723/stuart-russell-will-ai-overtake-humans/,2015,magazineArticle,"Russell, Stuart",Time Protecting Against AI’s Existential Threat,"How to avoid the nightmare scenario of artificial intelligence? 
According to researchers from Elon Musk’s OpenAI, the trick is teaching machines to keep our interests in mind",https://www.wsj.com/articles/protecting-against-ais-existential-threat-1508332313,2017,newspaperArticle,"Sutskever, Ilya; Amodei, Dario",Wall Street Journal Benefits and Risks of Artificial Intelligence,"Discussions about Artificial Intelligence (AI) have jumped into the public eye over the past year, with several luminaries speaking…",https://medium.com/@tdietterich/benefits-and-risks-of-artificial-intelligence-460d288cccf3,2015,blogPost,"Dietterich, Thomas G.",Thomas G. Dietterich (Medium) Arguments against myopic training,"Note that this post has been edited to clarify the difference between explicitly assigning a reward to an action based on its later consequences, versus implicitly reinforcing an action by assigning high reward during later timesteps when its consequences are observed. I'd previously conflated these in a confusing way; thanks to Rohin for highlighting this issue. A number of people seem quite excited about training myopic reinforcement learning agents as an approach to AI safety (for instance this post on approval-directed agents, proposals 2, 3, 4, 10 and 11 here, and this paper and presentation), but I’m not. I’ve had a few detailed conversations about this recently, and although I now understand the arguments for using myopia better, I’m not much more optimistic about it than I was before. In short, it seems that evaluating agents’ actions by our predictions of their consequences, rather than our evaluations of the actual consequences, will make reinforcement learning a lot harder; yet I haven’t been able to identify clear safety benefits from doing so. I elaborate on these points below; thanks to Jon Uesato, Evan Hubinger, Ramana Kumar and Stephan Wäldchen for discussion and comments. I’ll define a myopic reinforcement learner as a reinforcement learning agent trained to maximise the reward received in the next timestep, i.e. with a discount rate of 0. Because it doesn’t assign credit backwards over time, in order to train it to do anything useful, that reward function will need to contain an estimate of how valuable each (state, action, next state) transition will be for outcomes many steps later. Since that evaluation will need to extrapolate a long way forward anyway, knowing the next state doesn’t add much, and so we can limit our focus to myopic agents trained on reward functions R which ignore the resulting state: that is, where R(s,a,s′)=M(s,a) for some M. I'll call M the approval function; we can think of such agents as being trained to take actions",https://www.alignmentforum.org/posts/GqxuDtZvfgL2bEQ5v/arguments-against-myopic-training,2020,blogPost,"Ngo, Richard",AI Alignment Forum Stochastic Flows and Geometric Optimization on the Orthogonal Group,"We present a new class of stochastic, geometrically-driven optimization algorithms on the orthogonal group $O(d)$ and naturally reductive homogeneous manifolds obtained from the action of the rotation group $SO(d)$. We theoretically and experimentally demonstrate that our methods can be applied in various fields of machine learning including deep, convolutional and recurrent neural networks, reinforcement learning, normalizing flows and metric learning. We show an intriguing connection between efficient stochastic optimization on the orthogonal group and graph theory (e.g. matching problem, partition functions over graphs, graph-coloring). 
We leverage the theory of Lie groups and provide theoretical results for the designed class of algorithms. We demonstrate broad applicability of our methods by showing strong performance on the seemingly unrelated tasks of learning world models to obtain stable policies for the most difficult $\mathrm{Humanoid}$ agent from $\mathrm{OpenAI}$ $\mathrm{Gym}$ and improving convolutional neural networks.",http://arxiv.org/abs/2003.13563,2020,conferencePaper,"Choromanski, Krzysztof; Cheikhi, David; Davis, Jared; Likhosherstov, Valerii; Nazaret, Achille; Bahamou, Achraf; Song, Xingyou; Akarte, Mrugank; Parker-Holder, Jack; Bergquist, Jacob; Gao, Yuan; Pacchiano, Aldo; Sarlos, Tamas; Weller, Adrian; Sindhwani, Vikas",Proceedings of the 37th International Conference on Machine Learning Parameter Space Noise for Exploration,"Deep reinforcement learning (RL) methods generally engage in exploratory behavior through noise injection in the action space. An alternative is to add noise directly to the agent's parameters, which can lead to more consistent exploration and a richer set of behaviors. Methods such as evolutionary strategies use parameter perturbations, but discard all temporal structure in the process and require significantly more samples. Combining parameter noise with traditional RL methods allows to combine the best of both worlds. We demonstrate that both off- and on-policy methods benefit from this approach through experimental comparison of DQN, DDPG, and TRPO on high-dimensional discrete action environments as well as continuous control tasks. Our results show that RL with parameter noise learns more efficiently than traditional RL with action space noise and evolutionary strategies individually.",http://arxiv.org/abs/1706.01905,2018,manuscript,"Plappert, Matthias; Houthooft, Rein; Dhariwal, Prafulla; Sidor, Szymon; Chen, Richard Y.; Chen, Xi; Asfour, Tamim; Abbeel, Pieter; Andrychowicz, Marcin", "End times: a brief guide to the end of the world: asteroids, supervolcanoes, rogue robots, and more","What is going to cause our extinction? How can we save ourselves and our future? End Times answers the most important questions facing humankind. End Times is a compelling work of skilled reportage that peels back the layers of complexity around the unthinkable--and inevitable--end of humankind. From asteroids and artificial intelligence to volcanic supereruption to nuclear war, 15-year veteran science reporter and TIME editor Bryan Walsh provides a stunning panoramic view of the most catastrophic threats to the human race. In End Times, Walsh examines threats that emerge from nature and those of our own making: asteroids, supervolcanoes, nuclear war, climate change, disease pandemics, biotechnology, artificial intelligence, and extraterrestrial intelligence. Walsh details the true probability of these world-ending catastrophes, the impact on our lives were they to happen, and the best strategies for saving ourselves, all pulled from his rigorous and deeply thoughtful reporting and research. Walsh goes into the room with the men and women whose job it is to imagine the unimaginable. He includes interviews with those on the front lines of prevention, actively working to head off existential threats in biotechnology labs and government hubs. Guided by Walsh's evocative, page-turning prose, we follow scientific stars like the asteroid hunters at NASA and the disease detectives on the trail of the next killer virus. Walsh explores the danger of apocalypse in all forms. 
In the end, it will be the depth of our knowledge, the height of our imagination, and our sheer will to survive that will decide the future",,2019,book,"Walsh, Bryan", Benchmarking Neural Network Robustness to Common Corruptions and Perturbations,"In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, ImageNet-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Then we propose a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations. Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations not worst-case adversarial perturbations. We find that there are negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers. Afterward we discover ways to enhance corruption and perturbation robustness. We even find that a bypassed adversarial defense provides substantial common perturbation robustness. Together our benchmarks may aid future work toward networks that robustly generalize.",http://arxiv.org/abs/1903.12261,2019,conferencePaper,"Hendrycks, Dan; Dietterich, Thomas", Preventing Side-effects in Gridworlds,,https://www.gleech.org/grids,2018,blogPost,"Leech, Gavin; Kubicki, Karol; Cooper, Jessica; McGrath, Tom",Argmin Gravitas Explainable Robotic Systems,"The increasing complexity of robotic systems are pressing the need for them to be transparent and trustworthy. When people interact with a robotic system, they will inevitably construct mental models to understand and predict its actions. However, people’s mental models of robotic systems stem from their interactions with living beings, which induces the risk of establishing incorrect or inadequate mental models of robotic systems and may lead people to either under- and over-trust these systems. We need to understand the inferences that people make about robots from their behavior, and leverage this understanding to formulate and implement behaviors into robotic systems that support the formation of correct mental models of and fosters trust calibration. This way, people will be better able to predict the intentions of these systems, and thus more accurately estimate their capabilities, better understand their actions, and potentially correct their errors. The aim of this full-day workshop is to provide a forum for researchers and practitioners to share and learn about recent research on people’s inferences of robot actions, as well as the implementation of transparent, predictable, and explainable behaviors into robotic systems.",http://dl.acm.org/citation.cfm?doid=3173386.3173568,2018,conferencePaper,"de Graaf, Maartje M.A.; Malle, Bertram F.; Dragan, Anca; Ziemke, Tom",Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction - HRI '18 Variational Option Discovery Algorithms,"We explore methods for option discovery based on variational inference and make two algorithmic contributions. First: we highlight a tight connection between variational option discovery methods and variational autoencoders, and introduce Variational Autoencoding Learning of Options by Reinforcement (VALOR), a new method derived from the connection. In VALOR, the policy encodes contexts from a noise distribution into trajectories, and the decoder recovers the contexts from the complete trajectories. 
Second: we propose a curriculum learning approach where the number of contexts seen by the agent increases whenever the agent's performance is strong enough (as measured by the decoder) on the current set of contexts. We show that this simple trick stabilizes training for VALOR and prior variational option discovery methods, allowing a single agent to learn many more modes of behavior than it could with a fixed context distribution. Finally, we investigate other topics related to variational option discovery, including fundamental limitations of the general approach and the applicability of learned options to downstream tasks.",http://arxiv.org/abs/1807.10299,2018,manuscript,"Achiam, Joshua; Edwards, Harrison; Amodei, Dario; Abbeel, Pieter", Learning from lions: inferring the utility of agents from their trajectories,"We build a model using Gaussian processes to infer a spatio-temporal vector field from observed agent trajectories. Significant landmarks or influence points in agent surroundings are jointly derived through vector calculus operations that indicate presence of sources and sinks. We evaluate these influence points by using the Kullback-Leibler divergence between the posterior and prior Laplacian of the inferred spatio-temporal vector field. Through locating significant features that influence trajectories, our model aims to give greater insight into underlying causal utility functions that determine agent decision-making. A key feature of our model is that it infers a joint Gaussian process over the observed trajectories, the time-varying vector field of utility and canonical vector calculus operators. We apply our model to both synthetic data and lion GPS data collected at the Bubye Valley Conservancy in southern Zimbabwe.",http://arxiv.org/abs/1709.02357,2017,manuscript,"Cobb, Adam D.; Markham, Andrew; Roberts, Stephen J.", Hidden Incentives for Auto-Induced Distributional Shift,"Decisions made by machine learning systems have increasing influence on the world, yet it is common for machine learning algorithms to assume that no such influence exists. An example is the use of the i.i.d. assumption in content recommendation. In fact, the (choice of) content displayed can change users' perceptions and preferences, or even drive them away, causing a shift in the distribution of users. We introduce the term auto-induced distributional shift (ADS) to describe the phenomenon of an algorithm causing a change in the distribution of its own inputs. Our goal is to ensure that machine learning systems do not leverage ADS to increase performance when doing so could be undesirable. We demonstrate that changes to the learning algorithm, such as the introduction of meta-learning, can cause hidden incentives for auto-induced distributional shift (HI-ADS) to be revealed. To address this issue, we introduce `unit tests' and a mitigation strategy for HI-ADS, as well as a toy environment for modelling real-world issues with HI-ADS in content recommendation, where we demonstrate that strong meta-learners achieve gains in performance via ADS. We show meta-learning and Q-learning both sometimes fail unit tests, but pass when using our mitigation strategy.",http://arxiv.org/abs/2009.09153,2020,manuscript,"Krueger, David; Maharaj, Tegan; Leike, Jan", The tension between openness and prudence in AI research,"This paper explores the tension between openness and prudence in AI research, evident in two core principles of the Montréal Declaration for Responsible AI. 
While the AI community has strong norms around open sharing of research, concerns about the potential harms arising from misuse of research are growing, prompting some to consider whether the field of AI needs to reconsider publication norms. We discuss how different beliefs and values can lead to differing perspectives on how the AI community should manage this tension, and explore implications for what responsible publication norms in AI research might look like in practice.",http://arxiv.org/abs/1910.01170,2020,conferencePaper,"Whittlestone, Jess; Ovadya, Aviv",arXiv:1910.01170 [cs] AGNI: Autonomous Geospatial system for Noticing Ignition,,http://mediangroup.org/docs/agni.pdf,,manuscript,"Gallagher, Jack; Maltinsky, Baeo", Asynchronous Methods for Model-Based Reinforcement Learning,"Significant progress has been made in the area of model-based reinforcement learning. State-of-the-art algorithms are now able to match the asymptotic performance of model-free methods while being significantly more data efficient. However, this success has come at a price: state-of-the-art model-based methods require significant computation interleaved with data collection, resulting in run times that take days, even if the amount of agent interaction might be just hours or even minutes. When considering the goal of learning in real-time on real robots, this means these state-of-the-art model-based algorithms still remain impractical. In this work, we propose an asynchronous framework for model-based reinforcement learning methods that brings down the run time of these algorithms to be just the data collection time. We evaluate our asynchronous framework on a range of standard MuJoCo benchmarks. We also evaluate our asynchronous framework on three real-world robotic manipulation tasks. We show how asynchronous learning not only speeds up learning w.r.t wall-clock time through parallelization, but also further reduces the sample complexity of model-based approaches by means of improving the exploration and by means of effectively avoiding the policy overfitting to the deficiencies of learned dynamics models.",http://arxiv.org/abs/1910.12453,2019,conferencePaper,"Zhang, Yunzhi; Clavera, Ignasi; Tsai, Boren; Abbeel, Pieter",3rd Conference on Robot Learning (CoRL 2019) The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence,"Recent research in artificial intelligence and machine learning has largely emphasized general-purpose learning and ever-larger training sets and more and more compute. In contrast, I propose a hybrid, knowledge-driven, reasoning-based approach, centered around cognitive models, that could provide the substrate for a richer, more robust AI than is currently possible.",http://arxiv.org/abs/2002.06177,2020,manuscript,"Marcus, Gary", DoorGym: A Scalable Door Opening Environment And Baseline Agent,"Reinforcement Learning (RL) has brought forth ideas of autonomous robots that can navigate real-world environments with ease, aiding humans in a variety of tasks. RL agents have just begun to make their way out of simulation into the real world. Once in the real world, benchmark tasks often fail to transfer into useful skills. We introduce DoorGym, a simulation environment intended to be the first step to move RL from toy environments towards useful atomic skills that can be composed and extended towards a broader goal. DoorGym is an open-source door simulation framework designed to be highly configurable. 
We also provide a baseline PPO (Proximal Policy Optimization) and SAC (Soft Actor-Critic) implementation, which achieves a success rate of up to 70% for common tasks in this environment. Environment kit available here: https://github.com/PSVL/DoorGym/",http://arxiv.org/abs/1908.01887,2019,conferencePaper,"Urakami, Yusuke; Hodgkinson, Alec; Carlin, Casey; Leu, Randall; Rigazio, Luca; Abbeel, Pieter",33rd Conference on Neural Information Processing Systems (NeurIPS 2019) "A Conceptually Well-Founded Characterization of Iterated Admissibility Using an ""All I Know"" Operator","Brandenburger, Friedenberg, and Keisler provide an epistemic characterization of iterated admissibility (IA), also known as iterated deletion of weakly dominated strategies, where uncertainty is represented using LPSs (lexicographic probability sequences). Their characterization holds in a rich structure called a complete structure, where all types are possible. In earlier work, we gave a characterization of iterated admissibility using an ""all I know"" operator, that captures the intuition that ""all the agent knows"" is that agents satisfy the appropriate rationality assumptions. That characterization did not need complete structures and used probability structures, not LPSs. However, that characterization did not deal with Samuelson's conceptual concern regarding IA, namely, that at higher levels, players do not consider possible strategies that were used to justify their choice of strategy at lower levels. In this paper, we give a characterization of IA using the all I know operator that does deal with Samuelson's concern. However, it uses LPSs. We then show how to modify the characterization using notions of ""approximate belief"" and ""approximately all I know"" so as to deal with Samuelson's concern while still working with probability structures.",http://arxiv.org/abs/1907.09106,2019,journalArticle,"Halpern, Joseph Y.; Pass, Rafael",Electronic Proceedings in Theoretical Computer Science Learning under Misspecified Objective Spaces,"Learning robot objective functions from human input has become increasingly important, but state-of-the-art techniques assume that the human's desired objective lies within the robot's hypothesis space. When this is not true, even methods that keep track of uncertainty over the objective fail because they reason about which hypothesis might be correct, and not whether any of the hypotheses are correct. We focus specifically on learning from physical human corrections during the robot's task execution, where not having a rich enough hypothesis space leads to the robot updating its objective in ways that the person did not actually intend. We observe that such corrections appear irrelevant to the robot, because they are not the best way of achieving any of the candidate objectives. Instead of naively trusting and learning from every human interaction, we propose robots learn conservatively by reasoning in real time about how relevant the human's correction is for the robot's hypothesis space. 
We test our inference method in an experiment with human interaction data, and demonstrate that this alleviates unintended learning in an in-person user study with a robot manipulator.",http://arxiv.org/abs/1810.05157,2018,conferencePaper,"Bobu, Andreea; Bajcsy, Andrea; Fisac, Jaime F.; Dragan, Anca D.",2nd Conference on Robot Learning (CoRL 2018) Robust program equilibrium,,,2019,journalArticle,"Oesterheld, Caspar",Theory and Decision Towards personalized human AI interaction - adapting the behavior of AI agents using neural signatures of subjective interest,"Reinforcement Learning AI commonly uses reward/penalty signals that are objective and explicit in an environment -- e.g. game score, completion time, etc. -- in order to learn the optimal strategy for task performance. However, Human-AI interaction for such AI agents should include additional reinforcement that is implicit and subjective -- e.g. human preferences for certain AI behavior -- in order to adapt the AI behavior to idiosyncratic human preferences. Such adaptations would mirror naturally occurring processes that increase trust and comfort during social interactions. Here, we show how a hybrid brain-computer-interface (hBCI), which detects an individual's level of interest in objects/events in a virtual environment, can be used to adapt the behavior of a Deep Reinforcement Learning AI agent that is controlling a virtual autonomous vehicle. Specifically, we show that the AI learns a driving strategy that maintains a safe distance from a lead vehicle, and most novelly, preferentially slows the vehicle when the human passengers of the vehicle encounter objects of interest. This adaptation affords an additional 20% viewing time for subjectively interesting objects. This is the first demonstration of how an hBCI can be used to provide implicit reinforcement to an AI agent in a way that incorporates user preferences into the control system.",http://arxiv.org/abs/1709.04574,2017,manuscript,"Shih, Victor; Jangraw, David C.; Sajda, Paul; Saproo, Sameer", Many-Goals Reinforcement Learning,"All-goals updating exploits the off-policy nature of Q-learning to update all possible goals an agent could have from each transition in the world, and was introduced into Reinforcement Learning (RL) by Kaelbling (1993). In prior work this was mostly explored in small-state RL problems that allowed tabular representations and where all possible goals could be explicitly enumerated and learned separately. In this paper we empirically explore 3 different extensions of the idea of updating many (instead of all) goals in the context of RL with deep neural networks (or DeepRL for short). First, in a direct adaptation of Kaelbling's approach we explore if many-goals updating can be used to achieve mastery in non-tabular visual-observation domains. Second, we explore whether many-goals updating can be used to pre-train a network to subsequently learn faster and better on a single main task of interest. Third, we explore whether many-goals updating can be used to provide auxiliary task updates in training a network to learn faster and better on a single main task of interest. 
We provide comparisons to baselines for each of the 3 extensions.",http://arxiv.org/abs/1806.09605,2018,manuscript,"Veeriah, Vivek; Oh, Junhyuk; Singh, Satinder", Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence,"This paper explores the important role of critical science, and in particular of post-colonial and decolonial theories, in understanding and shaping the ongoing advances in artificial intelligence. Artificial Intelligence (AI) is viewed as amongst the technological advances that will reshape modern societies and their relations. Whilst the design and deployment of systems that continually adapt holds the promise of far-reaching positive change, they simultaneously pose significant risks, especially to already vulnerable peoples. Values and power are central to this discussion. Decolonial theories use historical hindsight to explain patterns of power that shape our intellectual, political, economic, and social world. By embedding a decolonial critical approach within its technical practice, AI communities can develop foresight and tactics that can better align research and technology development with established ethical principles, centring vulnerable peoples who continue to bear the brunt of negative impacts of innovation and scientific progress. We highlight problematic applications that are instances of coloniality, and using a decolonial lens, submit three tactics that can form a decolonial field of artificial intelligence: creating a critical technical practice of AI, seeking reverse tutelage and reverse pedagogies, and the renewal of affective and political communities. The years ahead will usher in a wave of new scientific breakthroughs and technologies driven by AI research, making it incumbent upon AI communities to strengthen the social contract through ethical foresight and the multiplicity of intellectual perspectives available to us; ultimately supporting future technologies that enable greater well-being, with the goal of beneficence and justice for all.",http://arxiv.org/abs/2007.04068,2020,journalArticle,"Mohamed, Shakir; Png, Marie-Therese; Isaac, William",Philosophy & Technology Where Do You Think You're Going?: Inferring Beliefs about Dynamics from Behavior,,http://papers.nips.cc/paper/7419-where-do-you-think-youre-going-inferring-beliefs-about-dynamics-from-behavior.pdf,2018,bookSection,"Reddy, Sid; Dragan, Anca; Levine, Sergey",Advances in Neural Information Processing Systems 31 Alive and Well after 25 Years: A Review of Groupthink Research,,https://linkinghub.elsevier.com/retrieve/pii/S0749597898927583,1998,journalArticle,"Esser, James K",Organizational Behavior and Human Decision Processes A National Security Research Agenda for Cybersecurity and Artificial Intelligence,"Machine learning advances are transforming cyber strategy and operations. 
This necessitates studying national security issues at the intersection of AI and cybersecurity, including offensive and defensive cyber operations, the cybersecurity of AI systems, and the effect of new technologies on global stability.",https://cset.georgetown.edu/research/a-national-security-research-agenda-for-cybersecurity-and-artificial-intelligence/,2020,report,"Buchanan, Ben", Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems,"Introduction There has been much recent discussion about AI risk, meaning specifically the potential pitfalls (both short-term and long-term) that AI with improved capabilities could create for soc…",https://jsteinhardt.wordpress.com/2015/06/24/long-term-and-short-term-challenges-to-ensuring-the-safety-of-ai-systems/,2015,blogPost,"Steinhardt, Jacob",Academically Interesting Motivated Skepticism in the Evaluation of Political Beliefs,,http://doi.wiley.com/10.1111/j.1540-5907.2006.00214.x,2006,journalArticle,"Taber, Charles S.; Lodge, Milton",American Journal of Political Science Stabilization of neurotoxic Alzheimer amyloid-β oligomers by protein engineering,,,2010,journalArticle,"Sandberg, Anders; Luheshi, Leila M.; Söllvander, Sofia; de Barros, Teresa Pereira; Macao, Bertil; Knowles, Tuomas PJ; Biverstål, Henrik; Lendel, Christofer; Ekholm-Petterson, Frida; Dubnovitsky, Anatoly",Proceedings of the National Academy of Sciences Agents and Devices: A Relative Definition of Agency,"According to Dennett, the same system may be described using a `physical' (mechanical) explanatory stance, or using an `intentional' (belief- and goal-based) explanatory stance. Humans tend to find the physical stance more helpful for certain systems, such as planets orbiting a star, and the intentional stance for others, such as living animals. We define a formal counterpart of physical and intentional stances within computational theory: a description of a system as either a device, or an agent, with the key difference being that `devices' are directly described in terms of an input-output mapping, while `agents' are described in terms of the function they optimise. Bayes' rule can then be applied to calculate the subjective probability of a system being a device or an agent, based only on its behaviour. We illustrate this using the trajectories of an object in a toy grid-world domain.",http://arxiv.org/abs/1805.12387,2018,manuscript,"Orseau, Laurent; McGill, Simon McGregor; Legg, Shane", Recommendations on Export Controls for Artificial Intelligence,"What U.S. export controls on AI-relevant technologies would help further aims such as stability and human rights abroad without impeding U.S. R&D? This issue brief assesses where such controls will be effective, ineffective or even damaging to the interests of the United States and its allies.",https://cset.georgetown.edu/research/recommendations-on-export-controls-for-artificial-intelligence/,2020,report,"Flynn, Carrick", Robust artificial intelligence and robust human organizations,,http://link.springer.com/10.1007/s11704-018-8900-4,2019,journalArticle,"Dietterich, Thomas G.",Frontiers of Computer Science "Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning","Deep neural networks (DNNs) enable innovative applications of machine learning like image recognition, machine translation, or malware detection. 
However, deep learning is often criticized for its lack of robustness in adversarial settings (e.g., vulnerability to adversarial inputs) and general inability to rationalize its predictions. In this work, we exploit the structure of deep learning to enable new learning-based inference and decision strategies that achieve desirable properties such as robustness and interpretability. We take a first step in this direction and introduce the Deep k-Nearest Neighbors (DkNN). This hybrid classifier combines the k-nearest neighbors algorithm with representations of the data learned by each layer of the DNN: a test input is compared to its neighboring training points according to the distance that separates them in the representations. We show the labels of these neighboring points afford confidence estimates for inputs outside the model's training manifold, including on malicious inputs like adversarial examples--and therein provides protections against inputs that are outside the models understanding. This is because the nearest neighbors can be used to estimate the nonconformity of, i.e., the lack of support for, a prediction in the training data. The neighbors also constitute human-interpretable explanations of predictions. We evaluate the DkNN algorithm on several datasets, and show the confidence estimates accurately identify inputs outside the model, and that the explanations provided by nearest neighbors are intuitive and useful in understanding model failures.",http://arxiv.org/abs/1803.04765,2018,manuscript,"Papernot, Nicolas; McDaniel, Patrick", Considerations for Evaluation and Generalization in Interpretable Machine Learning,,,2018,bookSection,"Doshi-Velez, Finale; Kim, Been",Explainable and Interpretable Models in Computer Vision and Machine Learning Parenting: Safe Reinforcement Learning from Human Input,"Autonomous agents trained via reinforcement learning present numerous safety concerns: reward hacking, negative side effects, and unsafe exploration, among others. In the context of near-future autonomous agents, operating in environments where humans understand the existing dangers, human involvement in the learning process has proved a promising approach to AI Safety. Here we demonstrate that a precise framework for learning from human input, loosely inspired by the way humans parent children, solves a broad class of safety problems in this context. We show that our Parenting algorithm solves these problems in the relevant AI Safety gridworlds of Leike et al. (2017), that an agent can learn to outperform its parent as it ""matures"", and that policies learnt through Parenting are generalisable to new environments.",http://arxiv.org/abs/1902.06766,2019,manuscript,"Frye, Christopher; Feige, Ilya", "Artificial Intelligence, Values and Alignment","This paper looks at philosophical questions that arise in the context of AI alignment. It defends three propositions. First, normative and technical aspects of the AI alignment problem are interrelated, creating space for productive engagement between people working in both domains. Second, it is important to be clear about the goal of alignment. There are significant differences between AI that aligns with instructions, intentions, revealed preferences, ideal preferences, interests and values. A principle-based approach to AI alignment, which combines these elements in a systematic way, has considerable advantages in this context. 
Third, the central challenge for theorists is not to identify 'true' moral principles for AI; rather, it is to identify fair principles for alignment, that receive reflective endorsement despite widespread variation in people's moral beliefs. The final part of the paper explores three ways in which fair principles for AI alignment could potentially be identified.",http://arxiv.org/abs/2001.09768,2020,journalArticle,"Gabriel, Iason",Minds and Machines Search versus design,"This work was supported by OAK, a monastic community in the Berkeley hills. It could not have been written without the daily love of living in this beautiful community. The work involved in writing this cannot be separated from the sitting, chanting, cooking, cleaning, crying, correcting, fundraising, listening, laughing, and teaching of the whole community. This write-up benefited from feedback from David Kristofferson, Andrew Critch, Jason Crawford, Abram Demski, and Ben Pence. Mistakes and omissions are entirely the responsibility of the author. -------------------------------------------------------------------------------- How is it that we solve engineering problems? What is the nature of the design process that humans follow when building an air conditioner or computer program? How does this differ from the search processes present in machine learning and evolution? We study search and design as distinct approaches to engineering. We argue that establishing trust in an artifact is tied to understanding how that artifact works, and that a central difference between search and design is the comprehensibility of the artifacts produced. We present a model of design as alternating phases of construction and factorization, resulting in artifacts composed of subsystems that are paired with helpful stories. We connect our ideas to the factored cognition thesis of Stuhlmüller and Christiano. We also review work in machine learning interpretability, including Chris Olah’s recent work on decomposing neural networks, Cynthia Rudin’s work on optimal simple models, and Mike Wu’s work on tree-regularized neural networks. We contrast these approaches with the joint production of artifacts and stories that we see in human design. Finally we ponder whether an AI safety research agenda could be formulated to automate design in a way that would make it competitive with search. INTRODUCTION Humans have been engineering artifacts for hundreds of thousands of years. Until rec",https://www.alignmentforum.org/posts/r3NHPD3dLFNk9QE2Y/search-versus-design-1,2020,blogPost,"Flint, Alex",AI Alignment Forum Conservative Agency,"Reward functions are easy to misspecify; although designers can make corrections after observing mistakes, an agent pursuing a misspecified reward function can irreversibly change the state of its environment. If that change precludes optimization of the correctly specified reward function, then correction is futile. For example, a robotic factory assistant could break expensive equipment due to a reward misspecification; even if the designers immediately correct the reward function, the damage is done. To mitigate this risk, we introduce an approach that balances optimization of the primary reward function with preservation of the ability to optimize auxiliary reward functions. 
Surprisingly, even when the auxiliary reward functions are randomly generated and therefore uninformative about the correctly specified reward function, this approach induces conservative, effective behavior.",http://arxiv.org/abs/1902.09725,2020,conferencePaper,"Turner, Alexander Matt; Hadfield-Menell, Dylan; Tadepalli, Prasad",arXiv:1902.09725 [cs] Why Artificial Intelligence Needs a Task Theory --- And What It Might Look Like,"The concept of ""task"" is at the core of artificial intelligence (AI): Tasks are used for training and evaluating AI systems, which are built in order to perform and automatize tasks we deem useful. In other fields of engineering theoretical foundations allow thorough evaluation of designs by methodical manipulation of well understood parameters with a known role and importance; this allows an aeronautics engineer, for instance, to systematically assess the effects of wind speed on an airplane's performance and stability. No framework exists in AI that allows this kind of methodical manipulation: Performance results on the few tasks in current use (cf. board games, question-answering) cannot be easily compared, however similar or different. The issue is even more acute with respect to artificial *general* intelligence systems, which must handle unanticipated tasks whose specifics cannot be known beforehand. A *task theory* would enable addressing tasks at the *class* level, bypassing their specifics, providing the appropriate formalization and classification of tasks, environments, and their parameters, resulting in more rigorous ways of measuring, comparing, and evaluating intelligent behavior. Even modest improvements in this direction would surpass the current ad-hoc nature of machine learning and AI evaluation. Here we discuss the main elements of the argument for a task theory and present an outline of what it might look like for physical tasks.",http://arxiv.org/abs/1604.04660,2016,conferencePaper,"Thórisson, Kristinn R.; Bieger, Jordi; Thorarensen, Thröstur; Sigurðardóttir, Jóna S.; Steunebrink, Bas R.",AGI 2016: Artificial General Intelligence Failure Modes in Machine Learning - Security documentation,"In the last two years, more than 200 papers have been written on how machine learning (ML) systems can fail because of adversarial attacks on the algorithms and data; this number balloons if we were to incorporate papers covering non-adversarial failure modes. The spate of papers has made it difficult for ML practitioners, let alone engineers, lawyers, and policymakers, to keep up with the attacks against and defenses of ML systems. However, as these systems become more pervasive, the need to understand how they fail, whether by the hand of an adversary or due to the inherent design of a system, will only become more pressing. In order to equip software developers, security incident responders, lawyers, and policy makers with a common vernacular to talk about this problem, we developed a framework to classify failures into ""Intentional failures"" where the failure is caused by an active adversary attempting to subvert the system to attain her goals; and ""Unintentional failures"" where the failure is because an ML system produces an inherently unsafe outcome. After developing the initial version of the taxonomy last year, we worked with security and ML teams across Microsoft, 23 external partners, standards organization, and governments to understand how stakeholders would use our framework. 
Throughout the paper, we attempt to highlight how machine learning failure modes are meaningfully different from traditional software failures from a technology and policy perspective.",https://docs.microsoft.com/en-us/security/failure-modes-in-machine-learning,2019,report,"Kumar, Ram Shankar Siva; Brien, David O; Albert, Kendra; Viljöen, Salomé; Snover, Jeffrey", Algorithms for Differentially Private Multi-Armed Bandits,"We present differentially private algorithms for the stochastic Multi-Armed Bandit (MAB) problem. This is a problem for applications such as adaptive clinical trials, experiment design, and user-targeted advertising where private information is connected to individual rewards. Our major contribution is to show that there exist $(\epsilon, \delta)$ differentially private variants of Upper Confidence Bound algorithms which have optimal regret, $O(\epsilon^{-1} + \log T)$. This is a significant improvement over previous results, which only achieve poly-log regret $O(\epsilon^{-2} \log^{2} T)$, because of our use of a novel interval-based mechanism. We also substantially improve the bounds of previous family of algorithms which use a continual release mechanism. Experiments clearly validate our theoretical bounds.",http://arxiv.org/abs/1511.08681,2015,conferencePaper,"Tossou, Aristide; Dimitrakakis, Christos",AAAI'16: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence Why Build an Assistant in Minecraft?,"In this document we describe a rationale for a research program aimed at building an open ""assistant"" in the game Minecraft, in order to make progress on the problems of natural language understanding and learning from dialogue.",http://arxiv.org/abs/1907.09273,2019,manuscript,"Szlam, Arthur; Gray, Jonathan; Srinet, Kavya; Jernite, Yacine; Joulin, Armand; Synnaeve, Gabriel; Kiela, Douwe; Yu, Haonan; Chen, Zhuoyuan; Goyal, Siddharth; Guo, Demi; Rothermel, Danielle; Zitnick, C. Lawrence; Weston, Jason", "The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies","A New York Times Bestseller. A “fascinating” (Thomas L. Friedman, New York Times) look at how digital technology is transforming our work and our lives. In recent years, Google’s autonomous cars have logged thousands of miles on American highways and IBM’s Watson trounced the best human Jeopardy! players. Digital technologies―with hardware, software, and networks at their core―will in the near future diagnose diseases more accurately than doctors can, apply enormous data sets to transform retailing, and accomplish many tasks once considered uniquely human. In The Second Machine Age MIT’s Erik Brynjolfsson and Andrew McAfee―two thinkers at the forefront of their field―reveal the forces driving the reinvention of our lives and our economy. As the full impact of digital technologies is felt, we will realize immense bounty in the form of dazzling personal technology, advanced infrastructure, and near-boundless access to the cultural items that enrich our lives. Amid this bounty will also be wrenching change. Professions of all kinds―from lawyers to truck drivers―will be forever upended. Companies will be forced to transform or die. Recent economic indicators reflect this shift: fewer people are working, and wages are falling even as productivity and profits soar. Drawing on years of research and up-to-the-minute trends, Brynjolfsson and McAfee identify the best strategies for survival and offer a new path to prosperity. 
These include revamping education so that it prepares people for the next economy instead of the last one, designing new collaborations that pair brute processing power with human ingenuity, and embracing policies that make sense in a radically transformed landscape. A fundamentally optimistic book, The Second Machine Age alters how we think about issues of technological, societal, and economic progress.",,2016,book,"Brynjolfsson, Erik; McAfee, Andrew", Aligning AI to Human Values means Picking the Right Metrics,Optimizing for the wrong thing can cause a lot of harm.,https://medium.com/partnership-on-ai/aligning-ai-to-human-values-means-picking-the-right-metrics-855859e6f047,2020,blogPost,"Stray, Jonathan",AI & Advancing Responsible AI (Medium) Planning for cars that coordinate with people: leveraging effects on human actions for planning and active information gathering over human internal state,"Traditionally, autonomous cars treat human-driven vehicles like moving obstacles. They predict their future trajectories and plan to stay out of their way. While physically safe, this results in defensive and opaque behaviors. In reality, an autonomous car’s actions will actually affect what other cars will do in response, creating an opportunity for coordination. Our thesis is that we can leverage these responses to plan more efficient and communicative behaviors. We introduce a formulation of interaction with human-driven vehicles as an underactuated dynamical system, in which the robot’s actions have consequences on the state of the autonomous car, but also on the human actions and thus the state of the human-driven car. We model these consequences by approximating the human’s actions as (noisily) optimal with respect to some utility function. The robot uses the human actions as observations of her underlying utility function parameters. We first explore learning these parameters offline, and show that a robot planning in the resulting underactuated system is more efficient than when treating the person as a moving obstacle. We also show that the robot can target specific desired effects, like getting the person to switch lanes or to proceed first through an intersection. We then explore estimating these parameters online, and enable the robot to perform active information gathering: generating actions that purposefully probe the human in order to clarify their underlying utility parameters, like driving style or attention level. We show that this significantly outperforms passive estimation and improves efficiency. Planning in our model results in coordination behaviors: the robot inches forward at an intersection to see if it can go through, or it reverses to make the other car proceed first. These behaviors result from the optimization, without relying on hand-coded signaling strategies. Our user studies support the utility of our model when interacting with real users.",http://link.springer.com/10.1007/s10514-018-9746-1,2018,journalArticle,"Sadigh, Dorsa; Landolfi, Nick; Sastry, Shankar S.; Seshia, Sanjit A.; Dragan, Anca D.",Autonomous Robots Benefits & Risks of Artificial Intelligence,Why do we need research to ensure that artificial intelligence remains safe and beneficial? 
What are the benefits and risks of artificial intelligence?,https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/,,blogPost,Future of Life Institute,Future of Life Institute Social choice ethics in artificial intelligence,"A major approach to the ethics of artificial intelligence (AI) is to use social choice, in which the AI is designed to act according to the aggregate views of society. This is found in the AI ethics of “coherent extrapolated volition” and “bottom-up ethics”. This paper shows that the normative basis of AI social choice ethics is weak due to the fact that there is no one single aggregate ethical view of society. Instead, the design of social choice AI faces three sets of decisions: standing, concerning whose ethics views are included; measurement, concerning how their views are identified; and aggregation, concerning how individual views are combined to a single view that will guide AI behavior. These decisions must be made up front in the initial AI design—designers cannot “let the AI figure it out”. Each set of decisions poses difficult ethical dilemmas with major consequences for AI behavior, with some decision options yielding pathological or even catastrophic results. Furthermore, non-social choice ethics face similar issues, such as whether to count future generations or the AI itself. These issues can be more important than the question of whether or not to use social choice ethics. Attention should focus on these issues, not on social choice.",http://link.springer.com/10.1007/s00146-017-0760-1,2020,journalArticle,"Baum, Seth D.",AI & Society Finding latent code errors via machine learning over program executions,,http://ieeexplore.ieee.org/document/1317470/,2004,conferencePaper,"Brun, Y.; Ernst, M.D.",Proceedings. 26th International Conference on Software Engineering Meta-trained agents implement Bayes-optimal agents,"Memory-based meta-learning is a powerful technique to build agents that adapt fast to any task within a target distribution. A previous theoretical study has argued that this remarkable performance is because the meta-training protocol incentivises agents to behave Bayes-optimally. We empirically investigate this claim on a number of prediction and bandit tasks. Inspired by ideas from theoretical computer science, we show that meta-learned and Bayes-optimal agents not only behave alike, but they even share a similar computational structure, in the sense that one agent system can approximately simulate the other. Furthermore, we show that Bayes-optimal agents are fixed points of the meta-learning dynamics. Our results suggest that memory-based meta-learning might serve as a general technique for numerically approximating Bayes-optimal agents - that is, even for task distributions for which we currently don't possess tractable models.",http://arxiv.org/abs/2010.11223,2020,conferencePaper,"Mikulik, Vladimir; Delétang, Grégoire; McGrath, Tom; Genewein, Tim; Martic, Miljan; Legg, Shane; Ortega, Pedro A.",34th Conference on Neural Information Processing Systems (NeurIPS 2020) Book Summary: Consciousness and the Brain,"One of the fundamental building blocks of much of consciousness research, is that of Global Workspace Theory (GWT). One elaboration of GWT, which focuses on how it might be implemented in the brain, is the Global Neuronal Workspace (GNW) model in neuroscience. Consciousness and the Brain is a 2014 book that summarizes some of the research and basic ideas behind GNW. 
It was written by Stanislas Dehaene, a French cognitive neuroscientist with a long background in both consciousness research and other related topics. THE BOOK AND ITS REPLICABILITY Given that this is a book on psychology and neuroscience that was written before the replication crisis, an obligatory question before we get to the meat of it is: how reliable are any of the claims in this book? After all, if we think that this is based on research which is probably not going to replicate, then we shouldn’t even bother reading the book. I think that the book’s conclusions are at least reasonably reliable in their broad strokes, if not necessarily all the particular details. That is, some of the details in the cited experiments may be off, but I expect most of them to at least be pointing in the right direction. Here are my reasons: First, scientists in a field usually have an informal hunch of how reliable the different results are. Even before the replication crisis hit, I had heard private comments from friends working in social psychology, who were saying that everything in the field was built on shaky foundations and how they didn’t trust even their own findings much. In contrast, when I asked a friend who works with some people doing consciousness research, he reported back that they generally felt that GWT/GNW-style theories have a reasonably firm basis. This isn’t terribly conclusive but at least it’s a bit of evidence. Second, for some experiments the book explicitly mentions that they have been replicated. That said, some of the reported experiments seemed to be one-off ones, and I did not yet",https://www.lesswrong.com/posts/x4n4jcoDP7xh5LWLq/book-summary-consciousness-and-the-brain,2019,blogPost,"Sotala, Kaj",LessWrong Activation Atlas,"By using feature inversion to visualize millions of activations from an image classification network, we create an explorable activation atlas of features the network has learned and what concepts it typically represents.",https://distill.pub/2019/activation-atlas,2019,journalArticle,"Carter, Shan; Armstrong, Zan; Schubert, Ludwig; Johnson, Ian; Olah, Chris",Distill Considered Opinions: Deliberative Polling in Britain,,http://www.journals.cambridge.org/abstract_S0007123402000194,2002,journalArticle,"Luskin, Robert C.; Fishkin, James S.; Jowell, Roger",British Journal of Political Science "A Connection between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models","Generative adversarial networks (GANs) are a recently proposed class of generative models in which a generator is trained to optimize a cost function that is being simultaneously learned by a discriminator. While the idea of learning cost functions is relatively new to the field of generative modeling, learning costs has long been studied in control and reinforcement learning (RL) domains, typically for imitation learning from demonstrations. In these fields, learning cost function underlying observed behavior is known as inverse reinforcement learning (IRL) or inverse optimal control. While at first the connection between cost learning in RL and cost learning in generative modeling may appear to be a superficial one, we show in this paper that certain IRL methods are in fact mathematically equivalent to GANs. In particular, we demonstrate an equivalence between a sample-based algorithm for maximum entropy IRL and a GAN in which the generator's density can be evaluated and is provided as an additional input to the discriminator. 
Interestingly, maximum entropy IRL is a special case of an energy-based model. We discuss the interpretation of GANs as an algorithm for training energy-based models, and relate this interpretation to other recent work that seeks to connect GANs and EBMs. By formally highlighting the connection between GANs, IRL, and EBMs, we hope that researchers in all three communities can better identify and apply transferable ideas from one domain to another, particularly for developing more stable and scalable algorithms: a major challenge in all three domains.",http://arxiv.org/abs/1611.03852,2016,conferencePaper,"Finn, Chelsea; Christiano, Paul; Abbeel, Pieter; Levine, Sergey",arXiv:1611.03852 [cs] Dynamics-Aware Unsupervised Discovery of Skills,"Conventionally, model-based reinforcement learning (MBRL) aims to learn a global model for the dynamics of the environment. A good model can potentially enable planning algorithms to generate a large variety of behaviors and solve diverse tasks. However, learning an accurate model for complex dynamical systems is difficult, and even then, the model might not generalize well outside the distribution of states on which it was trained. In this work, we combine model-based learning with model-free learning of primitives that make model-based planning easy. To that end, we aim to answer the question: how can we discover skills whose outcomes are easy to predict? We propose an unsupervised learning algorithm, Dynamics-Aware Discovery of Skills (DADS), which simultaneously discovers predictable behaviors and learns their dynamics. Our method can leverage continuous skill spaces, theoretically, allowing us to learn infinitely many behaviors even for high-dimensional state-spaces. We demonstrate that zero-shot planning in the learned latent space significantly outperforms standard MBRL and model-free goal-conditioned RL, can handle sparse-reward tasks, and substantially improves over prior hierarchical RL methods for unsupervised skill discovery.",http://arxiv.org/abs/1907.01657,2020,manuscript,"Sharma, Archit; Gu, Shixiang; Levine, Sergey; Kumar, Vikash; Hausman, Karol", Efficient Iterative Linear-Quadratic Approximations for Nonlinear Multi-Player General-Sum Differential Games,"Many problems in robotics involve multiple decision making agents. To operate efficiently in such settings, a robot must reason about the impact of its decisions on the behavior of other agents. Differential games offer an expressive theoretical framework for formulating these types of multi-agent problems. Unfortunately, most numerical solution techniques scale poorly with state dimension and are rarely used in real-time applications. For this reason, it is common to predict the future decisions of other agents and solve the resulting decoupled, i.e., single-agent, optimal control problem. This decoupling neglects the underlying interactive nature of the problem; however, efficient solution techniques do exist for broad classes of optimal control problems. We take inspiration from one such technique, the iterative linear-quadratic regulator (ILQR), which solves repeated approximations with linear dynamics and quadratic costs. Similarly, our proposed algorithm solves repeated linear-quadratic games. We experimentally benchmark our algorithm in several examples with a variety of initial conditions and show that the resulting strategies exhibit complex interactive behavior. Our results indicate that our algorithm converges reliably and runs in real time. 
In a three-player, 14-state simulated intersection problem, our algorithm initially converges in < 0.25 s. Receding horizon invocations converge in < 50 ms in a hardware collision-avoidance test.",https://ieeexplore.ieee.org/abstract/document/9197129?casa_token=2AxzWq5Kg50AAAAA:hCVPhFGnvxhKzZfst3uZ32B9q5R7TjFaO4vibaHfxBRc8KRTa_FrrZYbqnwdSphBh-hHkqE1,2019,conferencePaper,"Fridovich-Keil, David; Ratner, Ellis; Peters, Lasse; Dragan, Anca D.; Tomlin, Claire J.",2020 IEEE International Conference on Robotics and Automation (ICRA) Activism by the AI Community: Analysing Recent Achievements and Future Prospects,"The artificial intelligence (AI) community has recently engaged in activism in relation to their employers, other members of the community, and their governments in order to shape the societal and ethical implications of AI. It has achieved some notable successes, but prospects for further political organising and activism are uncertain. We survey activism by the AI community over the last six years; apply two analytical frameworks drawing upon the literature on epistemic communities, and worker organising and bargaining; and explore what they imply for the future prospects of the AI community. Success thus far has hinged on a coherent shared culture, and high bargaining power due to the high demand for a limited supply of AI ‘talent’. Both are crucial to the future of AI activism and worthy of sustained attention.",https://dl.acm.org/doi/10.1145/3375627.3375814,2020,conferencePaper,"Belfield, Haydn","Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society" The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement,,,2017,bookSection,"Bostrom, Nick; Sandberg, Anders",Philosophical Issues in Pharmaceutics Submission to the OSTP on AI outcomes,The White House Office of Science and Technology Policy recently put out a request for information on “(1) The legal and governance implications of AI; (2) the use of AI for public good; (3) the safety and control issues for AI; (4) the social and economic implications of AI;” and a variety of related topics.... Read more »,https://intelligence.org/2016/07/23/ostp/,2016,report,"Soares, Nate", "The Nature, Importance, and Difficulty of Machine Ethics",,http://ieeexplore.ieee.org/document/1667948/,2006,journalArticle,"Moor, J.H.",IEEE Intelligent Systems Space-Time Embedded Intelligence,,http://link.springer.com/10.1007/978-3-642-35506-6_22,2012,bookSection,"Orseau, Laurent; Ring, Mark",Artificial General Intelligence Dissolving Confusion around Functional Decision Theory,"SUMMARY Functional Decision Theory (FDT), (see also causal, evidential, timeless, updateless, and anthropic decision theories) recommends taking cooperative, non-greedy actions in twin prisoners dilemmas, Newcombian problems, Parfit’s hitchhiker-like games, and counterfactual muggings but not smoking lesion situations. It’s a controversial concept with important implications for designing agents that have optimal behavior when embedded in environments in which they may potentially interact with models of themselves. Unfortunately, I think that FDT is sometimes explained confusingly and misunderstood by its proponents and opponents alike. To help dissolve confusion about FDT and address key concerns of its opponents, I refute the criticism that FDT assumes that causation can happen backward in time and offer two key principles that provide a framework for clearly understanding it: 1. 
Questions in decision theory are not questions about what choices you should make with some sort of unpredictable free will. They are questions about what type of source code you should be running. 2. I should consider predictor P to “subjunctively depend” on agent A to the extent that P makes predictions of A’s actions based on correlations that cannot be confounded by my choice of what source code A runs. GETTING UP TO SPEED I think that functional decision theory (FDT) is a beautifully counterintuitive and insightful framework for instrumental rationality. I will not make it my focus here to talk about what it is and what types of situations it is useful in. To gain a solid background, I recommend this post of mine or the original paper on it by Eliezer Yudkowsky and Nate Soares. Additionally, here are four different ways that FDT can be explained. I find them all complementary for understanding and intuiting it well. 1. The decision theory that tells you to act as if you were setting the output to an optimal decision-making process for the task at hand.",https://www.lesswrong.com/posts/xoQRz8tBvsznMXTkt/dissolving-confusion-around-functional-decision-theory,2020,blogPost,"Casper, Stephen",LessWrong AI Alignment 2018-19 Review,"PREAMBLE WHAT THIS POST IS This is a review post of public work in AI alignment over 2019, with some inclusions from 2018. It has this preamble (~700 words), a short version / summary (~1.6k words), and a long version (~8.3k words). It is available as a Google Doc here. There are many areas of work that are relevant to AI alignment that I have barely touched on, such as interpretability, uncertainty estimation, adversarial examples, and assured autonomy, primarily because I have not been following these fields and wouldn’t be able to write a good summary of what has happened in them. I have also mostly focused on articles that provide some conceptual insight, and excluded or briefly linked to papers that primarily make quantitative improvements on important metrics. While such papers are obviously important (ultimately, our techniques need to work well), there isn’t much to say about them in a yearly review other than that the quantitative metric was improved. Despite these exclusions, there was still a ton of work to select from, perhaps around ~500 articles, of which over 300 have been linked to in this post. There are many interesting articles that I really enjoyed that get only a sentence of description, in which I ignore many of the points that the article makes. Most have been summarized in the Alignment Newsletter, so if you’d like to learn more about any particular link, but don’t want to read the entire thing, just search for its title in the database. WHAT YOU SHOULD KNOW ABOUT THE STRUCTURE OF THIS POST I am not speaking for myself; by default I am trying to explain what has been said, in a way that the authors of the articles would agree with. Any extra opinion that I add will be in italics. As a post, this is meant to be read sequentially, but the underlying structure is a graph (nodes are posts, edges connect posts that are very related). I arranged it in a sequence that highlights the most salient-to-me connections. 
This means that the order in wh",https://www.alignmentforum.org/posts/dKxX76SCfCvceJXHv/ai-alignment-2018-19-review,2020,blogPost,"Shah, Rohin",AI Alignment Forum Sequence introduction: non-agent and multiagent models of mind,"A typical paradigm by which people tend to think of themselves and others is as consequentialist agents: entities who can be usefully modeled as having beliefs and goals, who are then acting according to their beliefs to achieve their goals. This is often a useful model, but it doesn’t quite capture reality. It’s a bit of a fake framework. Or in computer science terms, you might call it a leaky abstraction. An abstraction in the computer science sense is a simplification which tries to hide the underlying details of a thing, letting you think in terms of the simplification rather than the details. To the extent that the abstraction actually succeeds in hiding the details, this makes things a lot simpler. But sometimes the abstraction inevitably leaks, as the simplification fails to predict some of the actual behavior that emerges from the details; in that situation you need to actually know the underlying details, and be able to think in terms of them. Agent-ness being a leaky abstraction is not exactly a novel concept for Less Wrong; it has been touched upon several times, such as in Scott Alexander’s Blue-Minimizing Robot Sequence. At the same time, I do not think that it has been quite fully internalized yet, and that many foundational posts on LW go wrong due to being premised on the assumption of humans being agents. In fact, I would go as far as to claim that this is the biggest flaw of the original Sequences: they were attempting to explain many failures of rationality as being due to cognitive biases, when in retrospect it looks like understanding cognitive biases doesn’t actually make you substantially more effective. But if you are implicitly modeling humans as goal-directed agents, then cognitive biases is the most natural place for irrationality to emerge from, so it makes sense to focus the most on there. Just knowing that an abstraction leaks isn’t enough to improve your thinking, however. To do better, you need to know about the actual underlyi",https://www.lesswrong.com/posts/M4w2rdYgCKctbADMn/sequence-introduction-non-agent-and-multiagent-models-of,2019,blogPost,"Sotala, Kaj",LessWrong Existential risk and cost-effective biosecurity,,,2017,journalArticle,"Millett, Piers; Snyder-Beattie, Andrew",Health security Must accidents happen? Lessons from high-reliability organizations,,http://journals.aom.org/doi/10.5465/ame.2001.5229613,2001,journalArticle,"Roberts, Karlene H.; Bea, Robert",Academy of Management Perspectives Environments as a bottleneck in AGI development,"Given a training environment or dataset, a training algorithm, an optimiser, and a model class capable of implementing an AGI (with the right parameters), there are two interesting questions we might ask about how conducive that environment is for training an AGI. The first is: how much do AGIs from that model class outperform non-AGIs? The second is: how straightforward is the path to reaching an AGI? We can visualise these questions in terms of the loss landscape of those models when evaluated on the training environment. The first asks how low the set of AGIs is, compared with the rest of the landscape. 
The second asks how favourable the paths through that loss landscape to get to AGIs are - that is, do the local gradients usually point in the right direction, and how deep are the local minima? Some people believe that there are many environments in which AGIs can be reached via favourable paths in the loss landscape and dramatically outperform non-AGIs; let’s call this the easy paths hypothesis. By contrast, the hard paths hypothesis is that it’s rare for environments (even complex meta-environments consisting of many separate tasks) to straightforwardly incentivise the development of general intelligence. This would suggest that specific environmental features will be necessary to prevent most models from getting stuck in local minima where they only possess narrow, specialised cognitive skills. There has been a range of speculation on what such features might be - perhaps multi-agent autocurricula, or realistic simulations, or specific types of human feedback. I’ll discuss some of these possibilities later in the post. This spectrum is complicated by its dependence on the model class, training algorithm, and choice of optimiser. If we had a perfect optimiser, then the hilliness of the loss landscape wouldn’t matter. For now, I'm imagining using optimisers fairly similar to current stochastic gradient descent. Meanwhile, I’m assuming in this post that (in acc",https://www.alignmentforum.org/posts/vqpEC3MPioHX7bv4t/environments-as-a-bottleneck-in-agi-development,2020,blogPost,"Ngo, Richard",AI Alignment Forum Bridging near- and long-term concerns about AI,"Debate about the impacts of AI is often split into two camps, one associated with the near term and the other with the long term. This divide is a mistake — the connections between the two perspectives deserve more attention, say Stephen Cave and Seán S. ÓhÉigeartaigh.",https://www.nature.com/articles/s42256-018-0003-2,2019,journalArticle,"Cave, Stephen; Ó hÉigeartaigh, Seán S.",Nature Machine Intelligence Sensitivity to Shared Information in Social Learning,,http://doi.wiley.com/10.1111/cogs.12485,2018,journalArticle,"Whalen, Andrew; Griffiths, Thomas L.; Buchsbaum, Daphna",Cognitive Science What are you optimizing for? Aligning Recommender Systems with Human Values,"We describe cases where real recommender systems were modified in the service of various human values such as diversity, fairness, well-being, time well spent, and factual accuracy. From this we identify the current practice of values engineering: the creation of classifiers from humancreated data with value-based labels. This has worked in practice for a variety of issues, but problems are addressed one at a time, and users and other stakeholders have seldom been involved. Instead, we look to AI alignment work for approaches that could learn complex values directly from stakeholders, and identify four major directions: useful measures of alignment, participatory design and operation, interactive value learning, and informed deliberative judgments.",,2020,conferencePaper,"Stray, Jonathan; Adler, Steven; Hadfield-Menell, Dylan", Extremes,"Humanity is confronted by and attracted to extremes. Extreme events shape our thinking, feeling, and actions; they echo in our politics, media, literature, and science. We often associate extremes with crises, disasters, and risks to be averted, yet extremes also have the potential to lead us towards new horizons. 
Featuring essays by leading intellectuals and public figures arising from the 2017 Darwin College Lectures, this volume explores 'extreme' events, from the election of President Trump, the rise of populism, and the Brexit referendum, to the 2008 financial crisis, the Syrian war, and climate change. It also celebrates 'extreme' achievements in the realms of health, exploration, and scientific discovery. A fascinating, engaging, and timely collection of essays by renowned scholars, journalists, and intellectuals, this volume challenges our understanding of what is normal and what is truly extreme, and sheds light on some of the issues facing humanity in the twenty-first century.",,2019,book,"Needham, Duncan; Weitzdörfer, Julius", Preliminary survey of prescient actions,"In a 10-20 hour exploration, we did not find clear examples of 'prescient actions'—specific efforts to address severe and complex problems decades ahead of time and in the absence of broader scientific concern, experience with analogous problems, or feedback on the success of the effort—though we found six cases that may turn out to be...",https://aiimpacts.org/survey-of-prescient-actions/,2020,blogPost,"Korzekwa, Rick",AI Impacts Incomplete Contracting and AI Alignment,"We suggest that the analysis of incomplete contracting developed by law and economics researchers can provide a useful framework for understanding the AI alignment problem and help to generate a systematic approach to finding solutions. We first provide an overview of the incomplete contracting literature and explore parallels between this work and the problem of AI alignment. As we emphasize, misalignment between principal and agent is a core focus of economic analysis. We highlight some technical results from the economics literature on incomplete contracts that may provide insights for AI alignment researchers. Our core contribution, however, is to bring to bear an insight that economists have been urged to absorb from legal scholars and other behavioral scientists: the fact that human contracting is supported by substantial amounts of external structure, such as generally available institutions (culture, law) that can supply implied terms to fill the gaps in incomplete contracts. We propose a research agenda for AI alignment work that focuses on the problem of how to build AI that can replicate the human cognitive processes that connect individual incomplete contracts with this supporting external structure.",http://arxiv.org/abs/1804.04268,2019,conferencePaper,"Hadfield-Menell, Dylan; Hadfield, Gillian","Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society" "The Humanizing Voice: Speech Reveals, and Text Conceals, a More Thoughtful Mind in the Midst of Disagreement","A person’s speech communicates his or her thoughts and feelings. We predicted that beyond conveying the contents of a person’s mind, a person’s speech also conveys mental capacity, such that hearing a person explain his or her beliefs makes the person seem more mentally capable—and therefore seem to possess more uniquely human mental traits—than reading the same content. We expected this effect to emerge when people are perceived as relatively mindless, such as when they disagree with the evaluator’s own beliefs. Three experiments involving polarizing attitudinal issues and political opinions supported these hypotheses. A fourth experiment identified paralinguistic cues in the human voice that convey basic mental capacities. 
These results suggest that the medium through which people communicate may systematically influence the impressions they form of each other. The tendency to denigrate the minds of the opposition may be tempered by giving them, quite literally, a voice.",http://journals.sagepub.com/doi/10.1177/0956797617713798,2017,journalArticle,"Schroeder, Juliana; Kardas, Michael; Epley, Nicholas",Psychological Science Antitrust and Artificial Intelligence: How Breaking Up Big Tech Could Affect Pentagon's Access to AI,"While AI innovation would presumably continue in some form without Big Tech, the authors find that breaking up the largest technology companies could fundamentally change the broader AI innovation ecosystem, likely affecting the development of AI applications for national security.",https://cset.georgetown.edu/research/antitrust-and-artificial-intelligence-how-breaking-up-big-tech-could-affect-pentagons-access-to-ai/,2020,report,"Foster, Dakota; Arnold, Zachary", Security solutions for intelligent and complex systems,,,2017,bookSection,"Armstrong, Stuart; Yampolskiy, Roman V.",Security Solutions for Hyperconnectivity and the Internet of Things AutoML-Zero: Evolving Machine Learning Algorithms From Scratch,"Machine learning research has advanced in multiple aspects, including model structures and learning methods. The effort to automate such research, known as AutoML, has also made significant progress. However, this progress has largely focused on the architecture of neural networks, where it has relied on sophisticated expert-designed layers as building blocks---or similarly restrictive search spaces. Our goal is to show that AutoML can go further: it is possible today to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks. We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space. Despite the vastness of this space, evolutionary search can still discover two-layer neural networks trained by backpropagation. These simple neural networks can then be surpassed by evolving directly on tasks of interest, e.g. CIFAR-10 variants, where modern techniques emerge in the top algorithms, such as bilinear interactions, normalized gradients, and weight averaging. Moreover, evolution adapts algorithms to different task types: e.g., dropout-like techniques appear when little data is available. 
We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction for the field.",http://arxiv.org/abs/2003.03384,2020,conferencePaper,"Real, Esteban; Liang, Chen; So, David R.; Le, Quoc V.",Proceedings of the 37th International Conference on Machine Learning "Cooperation, Conflict, and Transformative Artificial Intelligence - A Research Agenda",,https://longtermrisk.org/files/Cooperation-Conflict-and-Transformative-Artificial-Intelligence-A-Research-Agenda.pdf,2020,report,"Clifton, Jesse", Towards interactive inverse reinforcement learning,,,2016,conferencePaper,"Armstrong, Stuart; Leike, Jan",NIPS Workshop The Trouble with Autopilots: Assisted and Autonomous Driving on the Social Road,,https://dl.acm.org/doi/10.1145/3025453.3025462,2017,conferencePaper,"Brown, Barry; Laurier, Eric",Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems Astronomical Waste: The Opportunity Cost of Delayed Technological Development: Nick Bostrom,,,2003,journalArticle,"Bostrom, Nick",Utilitas Constrained Policy Improvement for Safe and Efficient Reinforcement Learning,"We propose a policy improvement algorithm for Reinforcement Learning (RL) which is called Rerouted Behavior Improvement (RBI). RBI is designed to take into account the evaluation errors of the Q-function. Such errors are common in RL when learning the $Q$-value from finite past experience data. Greedy policies or even constrained policy optimization algorithms which ignore these errors may suffer from an improvement penalty (i.e. a negative policy improvement). To minimize the improvement penalty, the RBI idea is to attenuate rapid policy changes of low probability actions which were less frequently sampled. This approach is shown to avoid catastrophic performance degradation and reduce regret when learning from a batch of past experience. Through a two-armed bandit with Gaussian distributed rewards example, we show that it also increases data efficiency when the optimal action has a high variance. We evaluate RBI in two tasks in the Atari Learning Environment: (1) learning from observations of multiple behavior policies and (2) iterative RL. Our results demonstrate the advantage of RBI over greedy policies and other constrained policy optimization algorithms as a safe learning approach and as a general data efficient learning algorithm. An anonymous Github repository of our RBI implementation is found at https://github.com/eladsar/rbi.",http://arxiv.org/abs/1805.07805,2019,conferencePaper,"Sarafian, Elad; Tamar, Aviv; Kraus, Sarit",Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence Categorizing Variants of Goodhart's Law,"There are several distinct failure modes for overoptimization of systems on the basis of metrics. This occurs when a metric which can be used to improve a system is used to an extent that further optimization is ineffective or harmful, and is sometimes termed Goodhart's Law. This class of failure is often poorly understood, partly because terminology for discussing them is ambiguous, and partly because discussion using this ambiguous terminology ignores distinctions between different failure modes of this general type. This paper expands on an earlier discussion by Garrabrant, which notes there are ""(at least) four different mechanisms"" that relate to Goodhart's Law. This paper is intended to explore these mechanisms further, and specify more clearly how they occur. 
This discussion should be helpful in better understanding these types of failures in economic regulation, in public policy, in machine learning, and in Artificial Intelligence alignment. The importance of Goodhart effects depends on the amount of power directed towards optimizing the proxy, and so the increased optimization power offered by artificial intelligence makes it especially critical for that field.",http://arxiv.org/abs/1803.04585,2019,manuscript,"Manheim, David; Garrabrant, Scott", From ImageNet to Image Classification: Contextualizing Progress on Benchmarks,"Building rich machine learning datasets in a scalable manner often necessitates a crowd-sourced data collection pipeline. In this work, we use human studies to investigate the consequences of employing such a pipeline, focusing on the popular ImageNet dataset. We study how specific design choices in the ImageNet creation process impact the fidelity of the resulting dataset---including the introduction of biases that state-of-the-art models exploit. Our analysis pinpoints how a noisy data collection pipeline can lead to a systematic misalignment between the resulting benchmark and the real-world task it serves as a proxy for. Finally, our findings emphasize the need to augment our current model training and evaluation toolkit to take such misalignments into account. To facilitate further research, we release our refined ImageNet annotations at https://github.com/MadryLab/ImageNetMultiLabel.",http://arxiv.org/abs/2005.11295,2020,conferencePaper,"Tsipras, Dimitris; Santurkar, Shibani; Engstrom, Logan; Ilyas, Andrew; Madry, Aleksander",Proceedings of the 37th International Conference on Machine Learning Issues with Iterated Distillation and Amplification,"This post assumes familiarity with Paul Christiano’s proposed technique for AI alignment, Iterated Distillation and Amplification…",https://medium.com/@lucarade/issues-with-iterated-distillation-and-amplification-5aa01ab37173,2018,blogPost,"Rade, Luca",Luca Rade (Medium) AI and Compute,"We’re releasing an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time (by comparison, Moore’s Law had a 2-year doubling period).",https://openai.com/blog/ai-and-compute/,2018,blogPost,OpenAI,OpenAI The Epistemic Challenge to Longtermism,"Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict—perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present options is mainly determined by short-term considerations. This paper aims to precisify and evaluate (a version of) this epistemic objection to longtermism. To that end, I develop two simple models for comparing “longtermist” and “short-termist” interventions, incorporating the idea that, as we look further into the future, the effects of any present intervention become progressively harder to predict. These models yield mixed conclusions: If we simply aim to maximize expected value, and don’t mind premising our choices on minuscule probabilities of astronomical payoffs, the case for longtermism looks robust. But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these “Pascalian” probabilities. 
So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.",,2019,report,"Tarsney, Christian J", Extracting low-dimensional psychological representations from convolutional neural networks,"Deep neural networks are increasingly being used in cognitive modeling as a means of deriving representations for complex stimuli such as images. While the predictive power of these networks is high, it is often not clear whether they also offer useful explanations of the task at hand. Convolutional neural network representations have been shown to be predictive of human similarity judgments for images after appropriate adaptation. However, these high-dimensional representations are difficult to interpret. Here we present a method for reducing these representations to a low-dimensional space which is still predictive of similarity judgments. We show that these low-dimensional representations also provide insightful explanations of factors underlying human similarity judgments.",http://arxiv.org/abs/2005.14363,2020,conferencePaper,"Jha, Aditi; Peterson, Joshua; Griffiths, Thomas L.","arXiv:2005.14363 [cs, q-bio]" To Trust Or Not To Trust A Classifier,"Knowing when a classifier's prediction can be trusted is useful in many applications and critical for safely using AI. While the bulk of the effort in machine learning research has been towards improving classifier performance, understanding when a classifier's predictions should and should not be trusted has received far less attention. The standard approach is to use the classifier's discriminant or confidence score; however, we show there exists an alternative that is more effective in many situations. We propose a new score, called the trust score, which measures the agreement between the classifier and a modified nearest-neighbor classifier on the testing example. We show empirically that high (low) trust scores produce surprisingly high precision at identifying correctly (incorrectly) classified examples, consistently outperforming the classifier's confidence score as well as many other baselines. Further, under some mild distributional assumptions, we show that if the trust score for an example is high (low), the classifier will likely agree (disagree) with the Bayes-optimal classifier. Our guarantees consist of non-asymptotic rates of statistical consistency under various nonparametric settings and build on recent developments in topological data analysis.",https://arxiv.org/abs/1805.11783v2,2018,conferencePaper,"Jiang, Heinrich; Kim, Been; Guan, Melody Y.; Gupta, Maya",Advances in Neural Information Processing Systems 31 (NeurIPS 2018) Thinking Inside the Box: Controlling and Using an Oracle AI,,http://link.springer.com/10.1007/s11023-012-9282-2,2012,journalArticle,"Armstrong, Stuart; Sandberg, Anders; Bostrom, Nick",Minds and Machines Artificial Interdisciplinarity: Artificial Intelligence for Research on Complex Societal Problems,"This paper considers the question: In what ways can artificial intelligence assist with interdisciplinary research for addressing complex societal problems and advancing the social good? Problems such as environmental protection, public health, and emerging technology governance do not fit neatly within traditional academic disciplines and therefore require an interdisciplinary approach. 
However, interdisciplinary research poses large cognitive challenges for human researchers that go beyond the substantial challenges of narrow disciplinary research. The challenges include epistemic divides between disciplines, the massive bodies of relevant literature, the peer review of work that integrates an eclectic mix of topics, and the transfer of interdisciplinary research insights from one problem to another. Artificial interdisciplinarity already helps with these challenges via search engines, recommendation engines, and automated content analysis. Future “strong artificial interdisciplinarity” based on human-level artificial general intelligence could excel at interdisciplinary research, but it may take a long time to develop and could pose major safety and ethical issues. Therefore, there is an important role for intermediate-term artificial interdisciplinarity systems that could make major contributions to addressing societal problems without the concerns associated with artificial general intelligence.",http://link.springer.com/10.1007/s13347-020-00416-5,2020,journalArticle,"Baum, Seth D.",Philosophy & Technology The effectiveness of eight nonpharmaceutical interventions against COVID-19 in 41 countries,"
Governments are attempting to control the COVID-19 pandemic with nonpharmaceutical interventions (NPIs). However, it is still largely unknown how effective different NPIs are at reducing transmission. Data-driven studies can estimate the effectiveness of NPIs while minimising assumptions, but existing analyses lack sufficient data and validation to robustly distinguish the effects of individual NPIs. We gather chronological data on NPIs in 41 countries between January and the end of May 2020, creating the largest public NPI dataset collected with independent double entry. We then estimate the effectiveness of 8 NPIs with a Bayesian hierarchical model by linking NPI implementation dates to national case and death counts. The results are supported by extensive empirical validation, including 11 sensitivity analyses with over 200 experimental conditions. We find that closing schools and universities was highly effective; that banning gatherings and closing high-risk businesses was effective, but closing most other businesses had limited further benefit; and that many countries may have been able to reduce R below 1 without issuing a stay-at-home order.
",https://www.medrxiv.org/content/10.1101/2020.05.28.20116129v4,2020,journalArticle,"Brauner, Jan M.; Mindermann, Sören; Sharma, Mrinank; Johnston, David; Salvatier, John; Gavenčiak, Tomáš; Stephenson, Anna B.; Leech, Gavin; Altman, George; Mikulik, Vladimir; Norman, Alexander John; Monrad, Joshua Teperowski; Besiroglu, Tamay; Ge, Hong; Hartwick, Meghan A.; Teh, Yee Whye; Chindelevitch, Leonid; Gal, Yarin; Kulveit, Jan",medRxiv Deciphering China’s AI dream,,,2018,journalArticle,"Ding, Jeffrey",Future of Humanity Institute Technical Report On the future: prospects for humanity,,,2018,book,"Rees, Martin J.", Moral realism and AI alignment,"“Abstract”: Some have claimed that moral realism – roughly, the claim that moral claims can be true or false – would, if true, have implications for AI alignment research, such that moral realists …",https://casparoesterheld.com/2018/08/06/moral-realism-and-ai-alignment/,2018,blogPost,"Oesterheld, Caspar",The Universe from an Intentional Stance Cause prioritization for downside-focused value systems,"Last edited: August 27th 2019. This post outlines my thinking on cause prioritization from the perspective of value systems whose primary concern is reducing disvalue. I’m mainly thinking of suffering-focused ethics (SFE), but I also want to include moral views that attribute substantial disvalue to things other than suffering, such as inequality or preference violation. I will limit the discussion to interventions targeted at improving the long-term future (see the reasons in section II). I hope my post will also be informative for people who do not share a downside-focused outlook, as thinking about cause prioritization from different perspectives, with emphasis on considerations other than those one is used to, can be illuminating. Moreover, understanding the strategic considerations for plausible moral views is essential for acting under moral uncertainty and cooperating with people with other values. I will talk about the following topics: * Which views qualify as downside-focused (given our empirical situation) * Why downside-focused views prioritize s-risk reduction over utopia creation * Why extinction risk reduction is unlikely to be a promising intervention according to downside-focused views * Why AI alignment is probably positive for downside-focused views, and especially positive if done with certain precautions * What to include in an EA portfolio that incorporates population ethical uncertainty and cooperation between value systems WHICH VIEWS QUALIFY AS DOWNSIDE-FOCUSED? I’m using the term downside-focused to refer to value systems that in practice (given what we know about the world) primarily recommend working on interventions that make bad things less likely.[1] For example, if one holds that what is most important is how things turn out for individuals (welfarist consequentialism), and that it is comparatively unimportant to add well-off beings to the world, then one should likely focus on preventing suffering.[2] That would b",https://forum.effectivealtruism.org/posts/225Aq4P4jFPoWBrb5/cause-prioritization-for-downside-focused-value-systems,2018,blogPost,"Gloor, Lukas",Effective Altruism Forum "Robustness via curvature regularization, and vice versa","State-of-the-art classifiers have been shown to be largely vulnerable to adversarial perturbations. One of the most effective strategies to improve robustness is adversarial training. 
In this paper, we investigate the effect of adversarial training on the geometry of the classification landscape and decision boundaries. We show in particular that adversarial training leads to a significant decrease in the curvature of the loss surface with respect to inputs, leading to a drastically more ""linear"" behaviour of the network. Using a locally quadratic approximation, we provide theoretical evidence on the existence of a strong relation between large robustness and small curvature. To further show the importance of reduced curvature for improving the robustness, we propose a new regularizer that directly minimizes curvature of the loss surface, and leads to adversarial robustness that is on par with adversarial training. Besides being a more efficient and principled alternative to adversarial training, the proposed regularizer confirms our claims on the importance of exhibiting quasi-linear behavior in the vicinity of data points in order to achieve robustness.",http://arxiv.org/abs/1811.09716,2018,conferencePaper,"Moosavi-Dezfooli, Seyed-Mohsen; Fawzi, Alhussein; Uesato, Jonathan; Frossard, Pascal",2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) A Paradox for Tiny Probabilities and Enormous Values,"We show that every theory of the value of uncertain prospects must have one of three unpalatable properties. Reckless theories recommend risking arbitrarily great gains at arbitrarily long odds for the sake of enormous potential; timid theories recommend passing up arbitrarily great gains to prevent a tiny increase in risk; nontransitive theories deny the principle that, if A is better than B and B is better than C, then A must be better than C. While nontransitivity has been much discussed, we draw out the costs and benefits of recklessness and timidity when it comes to axiology, decision theory, and moral uncertainty.",https://globalprioritiesinstitute.org/nick-beckstead-and-teruji-thomas-a-paradox-for-tiny-probabilities-and-enormous-values/,2020,report,"Beckstead, Nick; Thomas, Teruji", Learning to Continually Learn,"Continual lifelong learning requires an agent or model to learn many sequentially ordered tasks, building on previous knowledge without catastrophically forgetting it. Much work has gone towards preventing the default tendency of machine learning models to catastrophically forget, yet virtually all such work involves manually-designed solutions to the problem. We instead advocate meta-learning a solution to catastrophic forgetting, allowing AI to learn to continually learn. Inspired by neuromodulatory processes in the brain, we propose A Neuromodulated Meta-Learning Algorithm (ANML). It differentiates through a sequential learning process to meta-learn an activation-gating function that enables contextdependent selective activation within a deep neural network. Specifically, a neuromodulatory (NM) neural network gates the forward pass of another (otherwise normal) neural network called the prediction learning network (PLN). The NM network also thus indirectly controls selective plasticity (i.e. the backward pass of) the PLN. 
ANML enables continual learning without catastrophic forgetting at scale: it produces state-of-the-art continual learning performance, sequentially learning as many as 600 classes (over 9,000 SGD updates).",http://arxiv.org/abs/2002.09571,2020,conferencePaper,"Beaulieu, Shawn; Frati, Lapo; Miconi, Thomas; Lehman, Joel; Stanley, Kenneth O.; Clune, Jeff; Cheney, Nick","arXiv:2002.09571 [cs, stat]" Film Review: Snowpiercer,,http://www.susted.com/wordpress/content/film-review-snowpiercer_2014_12/,2014,magazineArticle,"Baum, Seth",Journal of Sustainability Education Can the Singularity Be Patented? (And Other IP Conundrums for Converging Technologies),"SummaryAssuming that the singularity is eventually realized, some of the legal institutions that we take for granted, specifically those relating to “intellectual property” (IP – namely, copyrights and patents), may pose some problems. IP law concerns the ownership of expressions of ideas, and not ideas themselves. Given the nature and trajectory of converging technologies, IP laws as they currently exist may impede the development of such technologies. Examples of “patent thickets” that appear to impede other rapidly evolving technologies already abound (as in the smartphone arena). Patents and copyrights may pose even more intriguing problems once the singularity is achieved because our notions of who may own what will likely radically change. Will artificial intelligences, for example, compete with us over rights to create, and will we be legally or morally precluded from ownership rights in technologies that make such agents function? Before the singularity arrives, we would do well to work through some of these legal conundrums raised and discussed below.",https://doi.org/10.1007/978-3-662-54033-6_10,2017,bookSection,"Koepsell, David",The Technological Singularity: Managing the Journey Towards Provably Moral AI Agents in Bottom-up Learning Frameworks,"We examine moral machine decision making as inspired by a central question posed by Rossi with respect to moral preferences: can AI systems based on statistical machine learning (which do not provide a natural way to explain or justify their decisions) be used for embedding morality into a machine in a way that allows us to prove that nothing morally wrong will happen? We argue for an evaluation which is held to the same standards as a human agent, removing the demand that ethical behaviour is always achieved. We introduce four key meta-qualities desired for our moral standards, and then proceed to clarify how we can prove that an agent will correctly learn to perform moral actions given a set of samples within certain error bounds. Our group-dynamic approach enables us to demonstrate that the learned models converge to a common function to achieve stability. We further explain a valuable intrinsic consistency check made possible through the derivation of logical statements from the machine learning model. 
In all, this work proposes an approach for building ethical AI systems, coming from the perspective of artificial intelligence research, and sheds important light on understanding how much learning is required in order for an intelligent agent to behave morally with negligible error.",https://doi.org/10.1145/3278721.3278728,2018,conferencePaper,"Shaw, Nolan P.; Stöckel, Andreas; Orr, Ryan W.; Lidbetter, Thomas F.; Cohen, Robin","Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society" Algorithmic Decision-Making and the Control Problem,"Abstract The danger of human operators devolving responsibility to machines and failing to detect cases where they fail has been recognised for many years by industrial psychologists and engineers studying the human operators of complex machines. We call it “the control problem”, understood as the tendency of the human within a human–machine control loop to become complacent, over-reliant or unduly diffident when faced with the outputs of a reliable autonomous system. While the control problem has been investigated for some time, up to this point its manifestation in machine learning contexts has not received serious attention. This paper aims to fill that gap. We argue that, except in certain special circumstances, algorithmic decision tools should not be used in high-stakes or safety-critical decisions unless the systems concerned are significantly “better than human” in the relevant domain or subdomain of decision-making. More concretely, we recommend three strategies to address the control problem, the most promising of which involves a complementary (and potentially dynamic ) coupling between highly proficient algorithmic tools and human agents working alongside one another. We also identify six key principles which all such human–machine systems should reflect in their design. These can serve as a framework both for assessing the viability of any such human–machine system as well as guiding the design and implementation of such systems generally.",http://link.springer.com/10.1007/s11023-019-09513-7,2019,journalArticle,"Zerilli, John; Knott, Alistair; Maclaurin, James; Gavaghan, Colin",Minds and Machines A survey of polls on Newcomb’s problem,"One classic story about Newcomb’s problem is that, at least initially, people one-box and two-box in roughly equal numbers (and that everyone is confident in their position). To find out whet…",https://casparoesterheld.com/2017/06/27/a-survey-of-polls-on-newcombs-problem/,2017,blogPost,Caspar,The Universe from an Intentional Stance Pragmatic-Pedagogic Value Alignment,"As intelligent systems gain autonomy and capability, it becomes vital to ensure that their objectives match those of their human users; this is known as the value-alignment problem. In robotics, value alignment is key to the design of collaborative robots that can integrate into human workflows, successfully inferring and adapting to their users’ objectives as they go. We argue that a meaningful solution to value alignment must combine multi-agent decision theory with rich mathematical models of human cognition, enabling robots to tap into people’s natural collaborative capabilities. We present a solution to the cooperative inverse reinforcement learning (CIRL) dynamic game based on well-established cognitive models of decision making and theory of mind. 
The solution captures a key reciprocity relation: the human will not plan her actions in isolation, but rather reason pedagogically about how the robot might learn from them; the robot, in turn, can anticipate this and interpret the human’s actions pragmatically. To our knowledge, this work constitutes the first formal analysis of value alignment grounded in empirically validated cognitive models.",http://link.springer.com/10.1007/978-3-030-28619-4_7,2020,conferencePaper,"Fisac, Jaime F.; Gates, Monica A.; Hamrick, Jessica B.; Liu, Chang; Hadfield-Menell, Dylan; Palaniappan, Malayandi; Malik, Dhruv; Sastry, S. Shankar; Griffiths, Thomas L.; Dragan, Anca D.",Robotics Research Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures,"This paper addresses the problem of evaluating learning systems in safety critical domains such as autonomous driving, where failures can have catastrophic consequences. We focus on two problems: searching for scenarios when learned agents fail and assessing their probability of failure. The standard method for agent evaluation in reinforcement learning, Vanilla Monte Carlo, can miss failures entirely, leading to the deployment of unsafe agents. We demonstrate this is an issue for current agents, where even matching the compute used for training is sometimes insufficient for evaluation. To address this shortcoming, we draw upon the rare event probability estimation literature and propose an adversarial evaluation approach. Our approach focuses evaluation on adversarially chosen situations, while still providing unbiased estimates of failure probabilities. The key difficulty is in identifying these adversarial situations -- since failures are rare there is little signal to drive optimization. To solve this we propose a continuation approach that learns failure modes in related but less robust agents. Our approach also allows reuse of data already collected for training the agent. We demonstrate the efficacy of adversarial evaluation on two standard domains: humanoid control and simulated driving. Experimental results show that our methods can find catastrophic failures and estimate failure rates of agents multiple orders of magnitude faster than standard evaluation schemes, in minutes to hours rather than days.",http://arxiv.org/abs/1812.01647,2018,manuscript,"Uesato, Jonathan; Kumar, Ananya; Szepesvari, Csaba; Erez, Tom; Ruderman, Avraham; Anderson, Keith; Dvijotham, Krishnamurthy; Heess, Nicolas; Kohli, Pushmeet", Safety-first AI for autonomous data centre cooling and industrial control,"Many of society’s most pressing problems have grown increasingly complex, so the search for solutions can feel overwhelming. At DeepMind and Google, we believe that if we can use AI as a tool to discover new knowledge, solutions will be easier to reach. In 2016, we jointly developed an AI-powered recommendation system to improve the energy efficiency of Google’s already highly-optimised data centres. Our thinking was simple: even minor improvements would provide significant energy savings and reduce CO2 emissions to help combat climate change. Now we’re taking this system to the next level: instead of human-implemented recommendations, our AI system is directly controlling data centre cooling, while remaining under the expert supervision of our data centre operators.
This first-of-its-kind cloud-based control system is now safely delivering energy savings in multiple Google data centres.",/blog/article/safety-first-ai-autonomous-data-centre-cooling-and-industrial-control,2018,blogPost,"Gamble, Chris; Gao, Jim",Deepmind Why I Want to be a Posthuman when I Grow Up,"Extreme human enhancement could result in “posthuman” modes of being. After offering some definitions and conceptual clarification, I argue for two theses. First, some posthuman modes of being would be very worthwhile. Second, it could be very good for human beings to become posthuman.",https://doi.org/10.1007/978-1-4020-8852-0_8,2009,bookSection,"Bostrom, Nick",Medical Enhancement and Posthumanity Stochastic Neural Networks for Hierarchical Reinforcement Learning,"Deep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skill is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments show that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks.",http://arxiv.org/abs/1704.03012,2017,conferencePaper,"Florensa, Carlos; Duan, Yan; Abbeel, Pieter",arXiv:1704.03012 [cs] Human Extinction and Our Obligations to the Past,"Abstract On certain plausible views, if humanity were to unanimously decide to cause its own extinction, this would not be wrong, since there is no one whom this act would wrong. We argue this is incorrect. Causing human extinction would still wrong someone; namely, our forebears who sacrificed life, limb and livelihood for the good of posterity, and whose sacrifices would be made less morally worthwhile by this heinous act.",https://www.cambridge.org/core/product/identifier/S0953820819000451/type/journal_article,2019,journalArticle,"Kaczmarek, Patrick; Beard, Simon",Utilitas Nonverbal Robot Feedback for Human Teachers,"Robots can learn preferences from human demonstrations, but their success depends on how informative these demonstrations are. Being informative is unfortunately very challenging, because during teaching, people typically get no transparency into what the robot already knows or has learned so far. In contrast, human students naturally provide a wealth of nonverbal feedback that reveals their level of understanding and engagement. In this work, we study how a robot can similarly provide feedback that is minimally disruptive, yet gives human teachers a better mental model of the robot learner, and thus enables them to teach more effectively. 
Our idea is that at any point, the robot can indicate what it thinks the correct next action is, shedding light on its current estimate of the human's preferences. We analyze how useful this feedback is, both in theory and with two user studies---one with a virtual character that tests the feedback itself, and one with a PR2 robot that uses gaze as the feedback mechanism. We find that feedback can be useful for improving both the quality of teaching and teachers' understanding of the robot's capability.",http://arxiv.org/abs/1911.02320,2019,conferencePaper,"Huang, Sandy H.; Huang, Isabella; Pandya, Ravi; Dragan, Anca D.",Proceedings of the Conference on Robot Learning It's not too soon to be wary of AI: We need to act now to protect humanity from future superintelligent machines,"AI research is making great strides toward its long-term goal of human-level or superhuman intelligent machines. If it succeeds in its current form, however, that could well be catastrophic for the human race. The reason is that the ""standard model"" of AI requires machines to pursue a fixed objective specified by humans. We are unable to specify the objective completely and correctly, nor can we anticipate or prevent the harms that machines pursuing an incorrect objective will create when operating on a global scale with superhuman capabilities. Already, we see examples such as social-media algorithms that learn to optimize click-through by manipulating human preferences, with disastrous consequences for democratic systems.",,2019,journalArticle,"Russell, Stuart",IEEE Spectrum Risks of Artificial Intelligence,,http://www.crcnetbase.com/doi/book/10.1201/b19187,2015,book,, An Empirical Evaluation of Deep Learning on Highway Driving,"Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we presented a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving.",http://arxiv.org/abs/1504.01716,2015,manuscript,"Huval, Brody; Wang, Tao; Tandon, Sameep; Kiske, Jeff; Song, Will; Pazhayampallil, Joel; Andriluka, Mykhaylo; Rajpurkar, Pranav; Migimatsu, Toki; Cheng-Yue, Royce; Mujica, Fernando; Coates, Adam; Ng, Andrew Y.", M$^3$RL: Mind-aware Multi-agent Management Reinforcement Learning,"Most of the prior work on multi-agent reinforcement learning (MARL) achieves optimal collaboration by directly controlling the agents to maximize a common reward. In this paper, we aim to address this from a different angle. In particular, we consider scenarios where there are self-interested agents (i.e., worker agents) which have their own minds (preferences, intentions, skills, etc.) and can not be dictated to perform tasks they do not wish to do. 
For achieving optimal coordination among these agents, we train a super agent (i.e., the manager) to manage them by first inferring their minds based on both current and past observations and then initiating contracts to assign suitable tasks to workers and promise to reward them with corresponding bonuses so that they will agree to work together. The objective of the manager is maximizing the overall productivity as well as minimizing payments made to the workers for ad-hoc worker teaming. To train the manager, we propose Mind-aware Multi-agent Management Reinforcement Learning (M^3RL), which consists of agent modeling and policy learning. We have evaluated our approach in two environments, Resource Collection and Crafting, to simulate multi-agent management problems with various task settings and multiple designs for the worker agents. The experimental results have validated the effectiveness of our approach in modeling worker agents' minds online, and in achieving optimal ad-hoc teaming with good generalization and fast adaptation.",https://arxiv.org/abs/1810.00147v3,2018,conferencePaper,"Shu, Tianmin; Tian, Yuandong", Asymptotic Logical Uncertainty and The Benford Test,"We give an algorithm A which assigns probabilities to logical sentences. For any simple infinite sequence of sentences whose truth-values appear indistinguishable from a biased coin that outputs ""true"" with probability p, we have that the sequence of probabilities that A assigns to these sentences converges to p.",http://arxiv.org/abs/1510.03370,2015,conferencePaper,"Garrabrant, Scott; Bhaskar, Siddharth; Demski, Abram; Garrabrant, Joanna; Koleszarik, George; Lloyd, Evan",Artificial General Intelligence. AGI 2016 ASNets: Deep Learning for Generalised Planning,,https://www.jair.org/index.php/jair/article/view/11633,2020,journalArticle,"Toyer, Sam; Thiébaux, Sylvie; Trevizan, Felipe; Xie, Lexing",Journal of Artificial Intelligence Research Public Policy and Superintelligent AI: A Vector Field Approach,,,2018,journalArticle,"Bostrom, Nick; Dafoe, Allan; Flynn, Carrick","Governance of AI Program, Future of Humanity Institute, University of Oxford: Oxford, UK" Sub-policy Adaptation for Hierarchical Reinforcement Learning,"Hierarchical reinforcement learning is a promising approach to tackle long-horizon decision-making problems with sparse rewards. Unfortunately, most methods still decouple the lower-level skill acquisition process and the training of a higher level that controls the skills in a new task. Leaving the skills fixed can lead to significant sub-optimality in the transfer setting. In this work, we propose a novel algorithm to discover a set of skills, and continuously adapt them along with the higher level even when training on a new task. Our main contributions are two-fold. First, we derive a new hierarchical policy gradient with an unbiased latent-dependent baseline, and we introduce Hierarchical Proximal Policy Optimization (HiPPO), an on-policy method to efficiently train all levels of the hierarchy jointly. Second, we propose a method of training time-abstractions that improves the robustness of the obtained skills to environment changes. 
Code and results are available at sites.google.com/view/hippo-rl",http://arxiv.org/abs/1906.05862,2019,conferencePaper,"Li, Alexander C.; Florensa, Carlos; Clavera, Ignasi; Abbeel, Pieter","arXiv:1906.05862 [cs, stat]" A reply to Francois Chollet on intelligence explosion,"This is a reply to Francois Chollet, the inventor of the Keras wrapper for the Tensorflow and Theano deep learning systems, on his essay “The impossibility of intelligence explosion.” In response to critics of his essay, Chollet tweeted:   If you post an argument online, and the only opposition you get is braindead arguments and... Read more »",https://intelligence.org/2017/12/06/chollet/,2017,blogPost,"Yudkowsky, Eliezer",Machine Intelligence Research Institute Benchmarking Safe Exploration in Deep Reinforcement Learning,"Reinforcement learning (RL) agents need to explore their environments in order to learn optimal policies by trial and error. In many environments, safety is a critical concern and certain errors are unacceptable: for example, robotics systems that interact with humans should never cause injury to the humans while exploring. While it is currently typical to train RL agents mostly or entirely in simulation, where safety concerns are minimal, we anticipate that challenges in simulating the complexities of the real world (such as human-AI interactions) will cause a shift towards training RL agents directly in the real world, where safety concerns are paramount. Consequently we take the position that safe exploration should be viewed as a critical focus area for RL research, and in this work we make three contributions to advance the study of safe exploration. First, building on a wide range of prior work on safe reinforcement learning, we propose to standardize constrained RL as the main formalism for safe exploration. Second, we present the Safety Gym benchmark suite, a new slate of high-dimensional continuous control environments for measuring research progress on constrained RL. Finally, we benchmark several constrained deep RL algorithms on Safety Gym environments to establish baselines that future work can build on.",https://arxiv.org/abs/2007.01223,2019,manuscript,"Ray, Alex; Achiam, Joshua; Amodei, Dario", On the Utility of Learning about Humans for Human-AI Coordination,"While we would like agents that can coordinate with humans, current algorithms such as self-play and population-based training create agents that can coordinate with themselves. Agents that assume their partner to be optimal or similar to them can converge to coordination protocols that fail to understand and be understood by humans. To demonstrate this, we introduce a simple environment that requires challenging coordination, based on the popular game Overcooked, and learn a simple model that mimics human play. We evaluate the performance of agents trained via self-play and population-based training. These agents perform very well when paired with themselves, but when paired with our human model, they are significantly worse than agents designed to play with the human model. An experiment with a planning algorithm yields the same conclusion, though only when the human-aware planner is given the exact human model that it is playing with. A user study with real humans shows this pattern as well, though less strongly. Qualitatively, we find that the gains come from having the agent adapt to the human's gameplay. 
Given this result, we suggest several approaches for designing agents that learn about humans in order to better coordinate with them. Code is available at https://github.com/HumanCompatibleAI/overcooked_ai.",https://arxiv.org/abs/1910.05789v2,2019,conferencePaper,"Carroll, Micah; Shah, Rohin; Ho, Mark K.; Griffiths, Thomas L.; Seshia, Sanjit A.; Abbeel, Pieter; Dragan, Anca",Advances in Neural Information Processing Systems 32 (NeurIPS 2019) Time for AI to cross the human performance range in ImageNet image classification,Computer image classification performance took 3 years to go from untrained human level to trained human level,https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-imagenet-image-classification/,2020,blogPost,AI Impacts,AI Impacts "Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers","Since hardware resources are limited, the objective of training deep learning models is typically to maximize accuracy subject to the time and memory constraints of training and inference. We study the impact of model size in this setting, focusing on Transformer models for NLP tasks that are limited by compute: self-supervised pretraining and high-resource machine translation. We first show that even though smaller Transformer models execute faster per iteration, wider and deeper models converge in significantly fewer steps. Moreover, this acceleration in convergence typically outpaces the additional computational overhead of using larger models. Therefore, the most compute-efficient training strategy is to counterintuitively train extremely large models but stop after a small number of iterations. This leads to an apparent trade-off between the training efficiency of large Transformer models and the inference efficiency of small Transformer models. However, we show that large models are more robust to compression techniques such as quantization and pruning than small models. Consequently, one can get the best of both worlds: heavily compressed, large models achieve higher accuracy than lightly compressed, small models.",http://arxiv.org/abs/2002.11794,2020,manuscript,"Li, Zhuohan; Wallace, Eric; Shen, Sheng; Lin, Kevin; Keutzer, Kurt; Klein, Dan; Gonzalez, Joseph E.", Funding Breakthrough Research: Promises and Challenges of the “ARPA Model”,,https://committees.parliament.uk/writtenevidence/9504/html/,2020,report,"Bostrom, Nick; Belfield, Haydn; Hilton, Sam", Gains from Trade through Compromise,"When agents of differing values compete, they may often find it mutually advantageous to compromise rather than continuing to engage in zero-sum conflicts. Potential ways of encouraging cooperation include promoting democracy, tolerance and (moral) trade. Because a future without compromise could be many times worse than a future with it, advancing compromise seems an important undertaking.",https://longtermrisk.org/gains-from-trade-through-compromise/,2015,blogPost,"Tomasik, Brian",Center on Long-Term Risk Self-Modification of Policy and Utility Function in Rational Agents,"Any agent that is part of the environment it interacts with and has versatile actuators (such as arms and fingers), will in principle have the ability to self-modify -- for example by changing its own source code. As we continue to create more and more intelligent agents, chances increase that they will learn about this ability. The question is: will they want to use it? 
For example, highly intelligent systems may find ways to change their goals to something more easily achievable, thereby `escaping' the control of their designers. In an important paper, Omohundro (2008) argued that goal preservation is a fundamental drive of any intelligent system, since a goal is more likely to be achieved if future versions of the agent strive towards the same goal. In this paper, we formalise this argument in general reinforcement learning, and explore situations where it fails. Our conclusion is that the self-modification possibility is harmless if and only if the value function of the agent anticipates the consequences of self-modifications and uses the current utility function when evaluating the future.",http://arxiv.org/abs/1605.03142,2016,conferencePaper,"Everitt, Tom; Filan, Daniel; Daswani, Mayank; Hutter, Marcus",AGI 2016: Artificial General Intelligence Natural human role in supervising complex control systems,,http://www.tandfonline.com/doi/abs/10.1080/09528130802386093,2009,journalArticle,"Hexmoor, Henry; McLaughlan, Brian; Tuli, Gaurav",Journal of Experimental & Theoretical Artificial Intelligence A toy model of the control problem,"EDITED based on suggestions for improving the model Jaan Tallinn has suggested creating a toy model of the control problem, so that it can be analysed without loaded concepts like ""autonomy"", ""consciousness"", or ""intentionality"". Here a simple (too simple?) attempt: A CONTROLS B. B MANIPULATES A. Let B be a robot agent that moves in a two dimensional world, as follows: B can push the yellow blocks around. It was programmed to want to push blocks into the black ""hole"" in the bottom right. The hole can contain arbitrarily many blocks, and B gets a penalty of some tiny ε for every step it takes. The programmer wanted B to simply move one box into the hole (for a reward of 1 once the task is complete), and so programmed it to receive a reward of 1 (and no more) if the box was in the hole, after the whole task was completed and B was turned off. Unfortunately, B was programmed to operate in a noisy environment, so B actually has a probability of 0.99 of receiving the reward, for a given block in the hole. This means that if it pushes one block into the hole, it gets the reward with 0.99 probability; but if it pushes two blocks into the hole, it gets the reward with 1-(1-0.99)^2 = 0.9999 probability. If it pushes all six blocks into the hole, it gets a reward with probability 0.999999999999. Therefore this tiny error has motivated B to try and push all blocks into the hole. However, the designers were aware of the possibility of B misbehaving or there being a bug, and created agent A, a simple agent watching the situation. A's aim is to get robot B to push one block into the hole, and stop there. It can turn B off. Its input sensor is the camera in the bottom left. This camera can see across the bottom row of squares (marked in light green), into the hole. Its algorithm is to turn B off as soon as it sees a block in the hole.
The intended outcome is that B pushes a block into the hole, the camera sees this, and A turns B off: If we see B as using a model to",https://www.alignmentforum.org/posts/7cXBoDQ6udquZJ89c/a-toy-model-of-the-control-problem,2015,blogPost,"Armstrong, Stuart",AI Alignment Forum Towards a New Impact Measure,"In which I propose a closed-form solution to low impact, increasing corrigibility and seemingly taking major steps to neutralize basic AI drives 1 (self-improvement), 5 (self-protectiveness), and 6 (acquisition of resources). Previously: Worrying about the Vase: Whitelisting, Overcoming Clinginess in Impact Measures, Impact Measure Desiderata To be used inside an advanced agent, an impact measure... must capture so much variance that there is no clever strategy whereby an advanced agent can produce some special type of variance that evades the measure. ~ Safe Impact MeasureIf we have a safe impact measure, we may have arbitrarily-intelligent unaligned agents which do small (bad) things instead of big (bad) things. For the abridged experience, read up to ""Notation"", skip to ""Experimental Results"", and then to ""Desiderata"". WHAT IS ""IMPACT""? One lazy Sunday afternoon, I worried that I had written myself out of a job. After all, Overcoming Clinginess in Impact Measures basically said, ""Suppose an impact measure extracts 'effects on the world'. If the agent penalizes itself for these effects, it's incentivized to stop the environment (and any agents in it) from producing them. On the other hand, if it can somehow model other agents and avoid penalizing their effects, the agent is now incentivized to get the other agents to do its dirty work."" This seemed to be strong evidence against the possibility of a simple conceptual core underlying ""impact"", and I didn't know what to do. At this point, it sometimes makes sense to step back and try to say exactly what you don't know how to solve – try to crisply state what it is that you want an unbounded solution for. Sometimes you can't even do that much, and then you may actually have to spend some time thinking 'philosophically' – the sort of stage where you talk to yourself about some mysterious ideal quantity of [chess] move-goodness and you try to pin down what its properties might be. ~ Methodology of Unbounded Anal",https://www.alignmentforum.org/posts/yEa7kwoMpsBgaBCgb/towards-a-new-impact-measure,2018,blogPost,Alex Turner,AI Alignment Forum Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning,"In reinforcement learning, agents learn by performing actions and observing their outcomes. Sometimes, it is desirable for a human operator to \textit{interrupt} an agent in order to prevent dangerous situations from happening. Yet, as part of their learning process, agents may link these interruptions, that impact their reward, to specific states and deliberately avoid them. The situation is particularly challenging in a multi-agent context because agents might not only learn from their own past interruptions, but also from those of other agents. Orseau and Armstrong defined \emph{safe interruptibility} for one learner, but their work does not naturally extend to multi-agent systems. This paper introduces \textit{dynamic safe interruptibility}, an alternative definition more suited to decentralized learning problems, and studies this notion in two learning frameworks: \textit{joint action learners} and \textit{independent learners}. 
We give realistic sufficient conditions on the learning algorithm to enable dynamic safe interruptibility in the case of joint action learners, yet show that these conditions are not sufficient for independent learners. We show however that if agents can detect interruptions, it is possible to prune the observations to ensure dynamic safe interruptibility even for independent learners.",http://arxiv.org/abs/1704.02882,2017,conferencePaper,"Mhamdi, El Mahdi El; Guerraoui, Rachid; Hendrikx, Hadrien; Maurer, Alexandre","arXiv:1704.02882 [cs, stat]" Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules,"A key challenge in leveraging data augmentation for neural network training is choosing an effective augmentation policy from a large search space of candidate operations. Properly chosen augmentation policies can lead to significant generalization improvements; however, state-of-the-art approaches such as AutoAugment are computationally infeasible to run for the ordinary user. In this paper, we introduce a new data augmentation algorithm, Population Based Augmentation (PBA), which generates nonstationary augmentation policy schedules instead of a fixed augmentation policy. We show that PBA can match the performance of AutoAugment on CIFAR-10, CIFAR-100, and SVHN, with three orders of magnitude less overall compute. On CIFAR-10 we achieve a mean test error of 1.46%, which is a slight improvement upon the current state-of-the-art. The code for PBA is open source and is available at https://github.com/arcelien/pba.",http://arxiv.org/abs/1905.05393,2019,conferencePaper,"Ho, Daniel; Liang, Eric; Stoica, Ion; Abbeel, Pieter; Chen, Xi",Proceedings of the 36th International Conference on Machine Learning Exploring Hierarchy-Aware Inverse Reinforcement Learning,"We introduce a new generative model for human planning under the Bayesian Inverse Reinforcement Learning (BIRL) framework which takes into account the fact that humans often plan using hierarchical strategies. We describe the Bayesian Inverse Hierarchical RL (BIHRL) algorithm for inferring the values of hierarchical planners, and use an illustrative toy model to show that BIHRL retains accuracy where standard BIRL fails. Furthermore, BIHRL is able to accurately predict the goals of `Wikispeedia' game players, with inclusion of hierarchical structure in the model resulting in a large boost in accuracy. We show that BIHRL is able to significantly outperform BIRL even when we only have a weak prior on the hierarchical structure of the plans available to the agent, and discuss the significant challenges that remain for scaling up this framework to more realistic settings.",http://arxiv.org/abs/1807.05037,2018,conferencePaper,"Cundy, Chris; Filan, Daniel",arXiv:1807.05037 [cs] AI Risk Terminology,"AI timeline - an expectation about how much time will lapse before important AI events, especially the advent of human-level AI or a similar milestone. The term can also refer to the actual periods of time (which are not yet known), rather than an expectation about them. 
Artificial General Intelligence (also, AGI) - the intelligence of a machine that could successfully...",https://aiimpacts.org/ai-risk-terminology/,2015,blogPost,AI Impacts,AI Impacts Universality and consequentialism within HCH,One exotic reason HCH can fail to be universal is the emergence of malicious patterns of behavior; universality may help address this risk.,https://ai-alignment.com/universality-and-consequentialism-within-hch-c0bee00365bd,2019,blogPost,"Christiano, Paul",AI Alignment (Medium) Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development,,,2019,report,"Cihon, Peter", Computational Power and the Social Impact of Artificial Intelligence,"Machine learning is a computational process. To that end, it is inextricably tied to computational power - the tangible material of chips and semiconductors that the algorithms of machine intelligence operate on. Most obviously, computational power and computing architectures shape the speed of training and inference in machine learning, and therefore influence the rate of progress in the technology. But, these relationships are more nuanced than that: hardware shapes the methods used by researchers and engineers in the design and development of machine learning models. Characteristics such as the power consumption of chips also define where and how machine learning can be used in the real world. Despite this, many analyses of the social impact of the current wave of progress in AI have not substantively brought the dimension of hardware into their accounts. While a common trope in both the popular press and scholarly literature is to highlight the massive increase in computational power that has enabled the recent breakthroughs in machine learning, the analysis frequently goes no further than this observation around magnitude. This paper aims to dig more deeply into the relationship between computational power and the development of machine learning. Specifically, it examines how changes in computing architectures, machine learning methodologies, and supply chains might influence the future of AI. In doing so, it seeks to trace a set of specific relationships between this underlying hardware layer and the broader social impacts and risks around AI.",http://arxiv.org/abs/1803.08971,2018,manuscript,"Hwang, Tim", A Formal Approach to the Problem of Logical Non-Omniscience,"We present the logical induction criterion for computable algorithms that assign probabilities to every logical statement in a given formal language, and refine those probabilities over time. The criterion is motivated by a series of stock trading analogies. Roughly speaking, each logical sentence phi is associated with a stock that is worth $1 per share if phi is true and nothing otherwise, and we interpret the belief-state of a logically uncertain reasoner as a set of market prices, where pt_N(phi)=50% means that on day N, shares of phi may be bought or sold from the reasoner for 50%. A market is then called a logical inductor if (very roughly) there is no polynomial-time computable trading strategy with finite risk tolerance that earns unbounded profits in that market over time. 
We then describe how this single criterion implies a number of desirable properties of bounded reasoners; for example, logical inductors outpace their underlying deductive process, perform universal empirical induction given enough time to think, and place strong trust in their own reasoning process.",http://arxiv.org/abs/1707.08747,2017,journalArticle,"Garrabrant, Scott; Benson-Tilsen, Tsvi; Critch, Andrew; Soares, Nate; Taylor, Jessica",Electronic Proceedings in Theoretical Computer Science Imitation Learning via Off-Policy Distribution Matching,"When performing imitation learning from expert demonstrations, distribution matching is a popular approach, in which one alternates between estimating distribution ratios and then using these ratios as rewards in a standard reinforcement learning (RL) algorithm. Traditionally, estimation of the distribution ratio requires on-policy data, which has caused previous work to either be exorbitantly data-inefficient or alter the original objective in a manner that can drastically change its optimum. In this work, we show how the original distribution ratio estimation objective may be transformed in a principled manner to yield a completely off-policy objective. In addition to the data-efficiency that this provides, we are able to show that this objective also renders the use of a separate RL optimization unnecessary. Rather, an imitation policy may be learned directly from this objective without the use of explicit rewards. We call the resulting algorithm ValueDICE and evaluate it on a suite of popular imitation learning benchmarks, finding that it can achieve state-of-the-art sample efficiency and performance.",http://arxiv.org/abs/1912.05032,2019,manuscript,"Kostrikov, Ilya; Nachum, Ofir; Tompson, Jonathan", Medium-Term Artificial Intelligence and Society,"There has been extensive attention to near-term and long-term AI technology and its accompanying societal issues, but the medium-term has gone largely overlooked. This paper develops the concept of medium-term AI, evaluates its importance, and analyzes some medium-term societal issues. Medium-term AI can be important in its own right and as a topic that can bridge the sometimes acrimonious divide between those who favor attention to near-term AI and those who prefer the long-term. The paper proposes the medium-term AI hypothesis: the medium-term is important from the perspectives of those who favor attention to near-term AI as well as those who favor attention to long-term AI. The paper analyzes medium-term AI in terms of governance institutions, collective action, corporate AI development, and military/national security communities. Across portions of these four areas, some support for the medium-term AI hypothesis is found, though in some cases the matter is unclear.",https://www.mdpi.com/2078-2489/11/6/290,2020,journalArticle,"Baum, Seth D.",Information Abstraction Learning,"There has been a gap between artificial intelligence and human intelligence. In this paper, we identify three key elements forming human intelligence, and suggest that abstraction learning combines these elements and is thus a way to bridge the gap. Prior research in artificial intelligence either specifies abstraction by human experts, or takes abstraction as a qualitative explanation for the model. This paper aims to learn abstraction directly. We tackle three main challenges: representation, objective function, and learning algorithm.
Specifically, we propose a partition structure that contains pre-allocated abstraction neurons; we formulate abstraction learning as a constrained optimization problem, which integrates abstraction properties; we develop a network evolution algorithm to solve this problem. This complete framework is named ONE (Optimization via Network Evolution). In our experiments on MNIST, ONE shows elementary human-like intelligence, including low energy consumption, knowledge sharing, and lifelong learning.",http://arxiv.org/abs/1809.03956,2018,manuscript,"Deng, Fei; Ren, Jinsheng; Chen, Feng", Constrained policy optimization,,,2017,conferencePaper,"Achiam, Joshua; Held, David; Tamar, Aviv; Abbeel, Pieter",Proceedings of the 34th International Conference on Machine Learning List of multipolar research projects,"This list currently consists of research projects suggested at the Multipolar AI workshop we held on January 26 2015. Relatively concrete projects are marked [concrete]. These are more likely to already include specific questions to answer and feasible methods to answer them with. Other 'projects' are more like open questions, or broad directions for inquiry. Projects are divided into three sections: Paths to multipolar scenarios What...",https://aiimpacts.org/multipolar-research-projects/,2015,blogPost,AI Impacts,AI Impacts Efficient Cooperative Inverse Reinforcement Learning,,,2017,conferencePaper,"Palaniappan, Malayandi; Malik, Dhruv; Hadfield-Menell, Dylan; Dragan, Anca; Russell, Stuart",Proc. ICML Workshop on Reliable Machine Learning in the Wild (2017) SoK: Security and Privacy in Machine Learning,"Advances in machine learning (ML) in recent years have enabled a dizzying array of applications such as data analytics, autonomous systems, and security diagnostics. ML is now pervasive-new systems and models are being deployed in every domain imaginable, leading to widespread deployment of software based inference and decision making. There is growing recognition that ML exposes new vulnerabilities in software systems, yet the technical community's understanding of the nature and extent of these vulnerabilities remains limited. We systematize findings on ML security and privacy, focusing on attacks identified on these systems and defenses crafted to date. We articulate a comprehensive threat model for ML, and categorize attacks and defenses within an adversarial framework. Key insights resulting from works both in the ML and security communities are identified and the effectiveness of approaches are related to structural elements of ML algorithms and the data used to train them. In particular, it is apparent that constructing a theoretical understanding of the sensitivity of modern ML algorithms to the data they analyze, à la PAC theory, will foster a science of security and privacy in ML.",,2018,conferencePaper,"Papernot, N.; McDaniel, P.; Sinha, A.; Wellman, M. P.",2018 IEEE European Symposium on Security and Privacy (EuroS P) The vulnerable world hypothesis,,,2018,journalArticle,"Bostrom, Nick",Global Policy Knowing When to Stop: Evaluation and Verification of Conformity to Output-size Specifications,"Models such as Sequence-to-Sequence and Image-to-Sequence are widely used in real world applications. While the ability of these neural architectures to produce variable-length outputs makes them extremely effective for problems like Machine Translation and Image Captioning, it also leaves them vulnerable to failures of the form where the model produces outputs of undesirable length.
This behavior can have severe consequences such as usage of increased computation and induce faults in downstream modules that expect outputs of a certain length. Motivated by the need to have a better understanding of the failures of these models, this paper proposes and studies the novel output-size modulation problem and makes two key technical contributions. First, to evaluate model robustness, we develop an easy-to-compute differentiable proxy objective that can be used with gradient-based algorithms to find output-lengthening inputs. Second and more importantly, we develop a verification approach that can formally verify whether a network always produces outputs within a certain length. Experimental results on Machine Translation and Image Captioning show that our output-lengthening approach can produce outputs that are 50 times longer than the input, while our verification approach can, given a model and input domain, prove that the output length is below a certain size.",https://openaccess.thecvf.com/content_CVPR_2019/html/Wang_Knowing_When_to_Stop_Evaluation_and_Verification_of_Conformity_to_CVPR_2019_paper.html,2019,conferencePaper,"Wang, Chenglong; Bunel, Rudy; Dvijotham, Krishnamurthy; Huang, Po-Sen; Grefenstette, Edward; Kohli, Pushmeet",Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition On the construction of the self,"This is the fifth post of the ""a non-mystical explanation of the three characteristics of existence"" series. ON THE CONSTRUCTION OF THE SELF In his essay The Self as a Center of Narrative Gravity, Daniel Dennett offers the thought experiment of a robot that moves around the world. The robot also happens to have a module writing a novel about someone named Gilbert. When we look at the story the novel-writing module is writing, we notice that its events bear a striking similarity to what the rest of the robot is doing: If you hit the robot with a baseball bat, very shortly thereafter the story of Gilbert includes his being hit with a baseball bat by somebody who looks like you. Every now and then the robot gets locked in the closet and then says ""Help me!"" Help whom? Well, help Gilbert, presumably. But who is Gilbert? Is Gilbert the robot, or merely the fictional self created by the robot? If we go and help the robot out of the closet, it sends us a note: ""Thank you. Love, Gilbert."" At this point we will be unable to ignore the fact that the fictional career of the fictional Gilbert bears an interesting resemblance to the ""career"" of this mere robot moving through the world. We can still maintain that the robot's brain, the robot's computer, really knows nothing about the world; it's not a self. It's just a clanky computer. It doesn't know what it's doing. It doesn't even know that it's creating a fictional character. (The same is just as true of your brain; it doesn't know what it's doing either.) Nevertheless, the patterns in the behavior that is being controlled by the computer are interpretable, by us, as accreting biography--telling the narrative of a self. As Dennett suggests, something similar seems to be going on in the brain. Whenever you are awake, there is a constant distributed decision-making process going on, where different subsystems swap in and out of control.
While you are eating breakfast, subsystem #42 might be running things, and while you are ha",https://www.lesswrong.com/posts/h2xgbYBNP4dLharg4/on-the-construction-of-the-self,2020,blogPost,"Sotala, Kaj",LessWrong Costs of human-level hardware,"Computing hardware which is equivalent to the brain - in terms of FLOPS probably costs between $1 x 10^5 and $3 x 10^16, or $2/hour-$700bn/hour. in terms of TEPS probably costs $200M - $7B, or $4,700 – $170,000/hour (including energy costs in the hourly rate). in terms of secondary memory probably costs $300-3,000, or $0.007-$0.07/hour. Details Partial costs...",https://aiimpacts.org/costs-of-human-level-hardware/,2015,blogPost,AI Impacts,AI Impacts InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets,"This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.",https://papers.nips.cc/paper/2016/hash/7c9d0b1f96aebd7b5eca8c3edaa19ebb-Abstract.html,2016,conferencePaper,"Chen, Xi; Duan, Yan; Houthooft, Rein; Schulman, John; Sutskever, Ilya; Abbeel, Pieter",Advances in Neural Information Processing Systems 29 (NIPS 2016) Agreeing to Disagree,,http://projecteuclid.org/euclid.aos/1176343654,1976,journalArticle,"Aumann, Robert J.",The Annals of Statistics Climate change prediction: Erring on the side of least drama?,,https://linkinghub.elsevier.com/retrieve/pii/S0959378012001215,2013,journalArticle,"Brysse, Keynyn; Oreskes, Naomi; O’Reilly, Jessica; Oppenheimer, Michael",Global Environmental Change Fractal AI: A fragile theory of intelligence,"Fractal AI is a theory for general artificial intelligence. It allows deriving new mathematical tools that constitute the foundations for a new kind of stochastic calculus, by modelling information using cellular automaton-like structures instead of smooth functions. In the repository included we are presenting a new Agent, derived from the first principles of the theory, which is capable of solving Atari games several orders of magnitude more efficiently than other similar techniques, like Monte Carlo Tree Search. The code provided shows how it is now possible to beat some of the current State of The Art benchmarks on Atari games, without previous learning and using less than 1000 samples to calculate each one of the actions when standard MCTS uses 3 Million samples. Among other things, Fractal AI makes it possible to generate a huge database of top performing examples with a very little amount of computation required, transforming Reinforcement Learning into a supervised problem.
The algorithm presented is capable of solving the exploration vs exploitation dilemma on both the discrete and continuous cases, while maintaining control over any aspect of the behaviour of the Agent. From a general approach, new techniques presented here have direct applications to other areas such as Non-equilibrium thermodynamics, chemistry, quantum physics, economics, information theory, and non-linear control theory.",http://arxiv.org/abs/1803.05049,2020,manuscript,"Cerezo, Sergio Hernandez; Ballester, Guillem Duran", The AI does not hate you: the rationalists and their quest to save the world,"A deep-dive into the weird and wonderful world of Artificial Intelligence. 'The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else'. This is a book about AI and AI risk. But it's also more importantly about a community of people who are trying to think rationally about intelligence, and the places that these thoughts are taking them, and what insight they can and can't give us about the future of the human race over the next few years. It explains why these people are worried, why they might be right, and why they might be wrong. It is a book about the cutting edge of our thinking on intelligence and rationality right now by the people who stay up all night worrying about it. Along the way, we discover why we probably don't need to worry about a future AI resurrecting a perfect copy of our minds and torturing us for not inventing it sooner, but we perhaps should be concerned about paperclips destroying life as we know it; how Mickey Mouse can teach us an important lesson about how to program AI; and how a more rational approach to life could be what saves us all. --",,2019,book,"Chivers, Tom", "Plausible cases for HRAD work, and locating the crux in the ""realism about rationality"" debate","This post is my attempt to summarize and distill the major public debates about MIRI's highly reliable agent designs (HRAD) work (which includes work on decision theory), including the discussions in Realism about rationality and Daniel Dewey's My current thoughts on MIRI's ""highly reliable agent design"" work . Part of the difficulty with discussing the value of HRAD work is that it's not even clear what the disagreement is about, so my summary takes the form of multiple possible ""worlds"" we might be in; each world consists of a positive case for doing HRAD work, along with the potential objections to that case, which results in one or more cruxes. I will talk about ""being in a world"" throughout this post. What I mean by this is the following: If we are ""in world X"", that means that the case for HRAD work outlined in world X is the one that most resonates with MIRI people as their motivation for doing HRAD work; and that when people disagree about the value of HRAD work, this is what the disagreement is about. When I say that ""I think we are in this world"", I don't mean that I agree with this case for HRAD work; it just means that this is what I think MIRI people think. In this post, the pro-HRAD stance is something like ""HRAD work is the most important kind of technical research in AI alignment; it is the overwhelming priority and we're pretty much screwed if we under-invest in this kind of research"" and the anti-HRAD stance is something like ""HRAD work seems significantly less promising than other technical AI alignment agendas, such as the approaches to directly align machine learning systems (e.g. iterated amplification)"". 
There is a much weaker pro-HRAD stance, which is something like ""HRAD work is interesting and doing more of it adds value, but it's not necessarily the most important kind of technical AI alignment research to be working on""; this post is not about this weaker stance. CLARIFYING SOME TERMS Before describing the various worlds, I want to pre",https://www.alignmentforum.org/posts/BGxTpdBGbwCWrGiCL/plausible-cases-for-hrad-work-and-locating-the-crux-in-the,2020,blogPost,"Rice, Issa",AI Alignment Forum Staking Our Future: Deontic Longtermism and the Non-Identity Problem,"Greaves and MacAskill argue for axiological longtermism, according to which, in a wide class of decision contexts, the option that is ex ante best is the option that corresponds to the best lottery over histories from t onwards, where t is some date far in the future. They suggest that a stakes-sensitivity argument may be used to derive deontic longtermism from axiological longtermism, where deontic longtermism holds that in a wide class of decision contexts, the option one ought to choose is the option that corresponds to the best lottery over histories from t onwards, where t is some date far in the future. This argument appeals to the Stakes Principle: when the axiological stakes are high, non-consequentialist constraints and prerogatives tend to be insignificant in comparison, so that what one ought to do is simply whichever option is best. I argue that there are strong grounds on which to reject the Stakes Principle. Furthermore, by reflecting on the Non-Identity Problem, I argue that there are plausible grounds for denying the existence of a sound argument from axiological longtermism to deontic longtermism insofar as we are concerned with ways of improving the value of the future of the kind that are focal in Greaves and MacAskill’s presentation.",,2019,manuscript,"Mogensen, Andreas L", Optimal Polynomial-Time Estimators: A Bayesian Notion of Approximation Algorithm,"We introduce a new concept of approximation applicable to decision problems and functions, inspired by Bayesian probability. From the perspective of a Bayesian reasoner with limited computational resources, the answer to a problem that cannot be solved exactly is uncertain and therefore should be described by a random variable. It thus should make sense to talk about the expected value of this random variable, an idea we formalize in the language of average-case complexity theory by introducing the concept of ""optimal polynomial-time estimators."" We prove some existence theorems and completeness results, and show that optimal polynomial-time estimators exhibit many parallels with ""classical"" probability theory.",http://arxiv.org/abs/1608.04112,2019,manuscript,"Kosoy, Vanessa; Appel, Alexander", Information hazards in biotechnology,,,2019,journalArticle,"Lewis, Gregory; Millett, Piers; Sandberg, Anders; Snyder-Beattie, Andrew; Gronvall, Gigi",Risk Analysis Learning Cognitive Models using Neural Networks,"A cognitive model of human learning provides information about skills a learner must acquire to perform accurately in a task domain. Cognitive models of learning are not only of scientific interest, but are also valuable in adaptive online tutoring systems. A more accurate model yields more effective tutoring through better instructional decisions. Prior methods of automated cognitive model discovery have typically focused on well-structured domains, relied on student performance data or involved substantial human knowledge engineering.
In this paper, we propose Cognitive Representation Learner (CogRL), a novel framework to learn accurate cognitive models in ill-structured domains with no data and little to no human knowledge engineering. Our contribution is two-fold: firstly, we show that representations learnt using CogRL can be used for accurate automatic cognitive model discovery without using any student performance data in several ill-structured domains: Rumble Blocks, Chinese Character, and Article Selection. This is especially effective and useful in domains where an accurate human-authored cognitive model is unavailable or authoring a cognitive model is difficult. Secondly, for domains where a cognitive model is available, we show that representations learned through CogRL can be used to get accurate estimates of skill difficulty and learning rate parameters without using any student performance data. These estimates are shown to highly correlate with estimates using student performance data on an Article Selection dataset.",http://arxiv.org/abs/1806.08065,2018,conferencePaper,"Chaplot, Devendra Singh; MacLellan, Christopher; Salakhutdinov, Ruslan; Koedinger, Kenneth",Artificial Intelligence in Education Modeling AGI Safety Frameworks with Causal Influence Diagrams,"Proposals for safe AGI systems are typically made at the level of frameworks, specifying how the components of the proposed system should be trained and interact with each other. In this paper, we model and compare the most promising AGI safety frameworks using causal influence diagrams. The diagrams show the optimization objective and causal assumptions of the framework. The unified representation permits easy comparison of frameworks and their assumptions. We hope that the diagrams will serve as an accessible and visual introduction to the main AGI safety frameworks.",http://arxiv.org/abs/1906.08663,2019,conferencePaper,"Everitt, Tom; Kumar, Ramana; Krakovna, Victoria; Legg, Shane",arXiv:1906.08663 [cs] Academic Search Engine Optimization: Optimizing Scholarly Literature for Google Scholar & Co.,,https://utpjournals.press/doi/10.3138/jsp.41.2.176,2010,journalArticle,"Beel, Jöran; Gipp, Bela; Wilde, Erik",Journal of Scholarly Publishing Research Priorities for Robust and Beneficial Artificial Intelligence,"Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to investigate how to maximize these benefits while avoiding potential pitfalls. This article gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.",https://aaai.org/ojs/index.php/aimagazine/article/view/2577,2015,journalArticle,"Russell, Stuart; Dewey, Daniel; Tegmark, Max",AI Magazine Messier than Oil: Assessing Data Advantage in Military AI,Both China and the United States seek to develop military applications enabled by artificial intelligence. 
This issue brief reviews the obstacles to assessing data competitiveness and provides metrics for measuring data advantage.,https://cset.georgetown.edu/research/messier-than-oil-assessing-data-advantage-in-military-ai/,2020,report,"Chahal, Husanjot; Fedasiuk, Ryan; Flynn, Carrick", Uncovering Surprising Behaviors in Reinforcement Learning via Worst-case Analysis,We find environment settings in which SOTA agents trained on navigation tasks display extreme failures suggesting failures in generalization.,https://openreview.net/forum?id=SkgZNnR5tX,2018,journalArticle,"Ruderman, Avraham; Everett, Richard; Sikder, Bristy; Soyer, Hubert; Uesato, Jonathan; Kumar, Ananya; Beattie, Charlie; Kohli, Pushmeet", G.K. Chesterton On AI Risk,"[An SSC reader working at an Oxford library stumbled across a previously undiscovered manuscript of G.K. Chesterton’s, expressing his thoughts on AI, x-risk, and superintelligence. She was ki…",https://slatestarcodex.com/2017/04/01/g-k-chesterton-on-ai-risk/,2017,blogPost,"Alexander, Scott",Slate Star Codex Sparse Skill Coding: Learning Behavioral Hierarchies with Sparse Codes,Many approaches to hierarchical reinforcement learning aim to identify sub-goal structure in tasks. We consider an alternative perspective based on identifying behavioral `motifs'---repeated action...,https://openreview.net/forum?id=Hygv3xrtDr,2019,manuscript,"Sanborn, Sophia; Chang, Michael; Levine, Sergey; Griffiths, Thomas", Focus: you are allowed to be bad at accomplishing your goals,"When asked about what it means for a system to be goal-directed, one common answer draws on some version of Dennett’s intentional stance: a goal-directed system is a system such that modeling it as having a goal provides accurate and efficient predictions about its behavior. I agree up to that point. But then, some people follow up by saying that the prediction is that the system will accomplish its goal. For example, it makes sense to model AlphaGo as goal-directed towards winning at Go, because it will eventually win. And taking the intentional stance allows me to predict that. But what if I make AlphaGo play against AlphaZero, which is strictly better at Go? Then AlphaGo will consistently lose. Does it mean that it’s no longer goal-directed towards winning? What feels wrong to me is the implicit link drawn between goal-directedness and competence. A bad Go player will usually lose, but it doesn’t seem any less goal-directed to me than a stronger one that consistently wins. Competence is thus not the whole story. It might be useful to compute goal-directedness; reaching some lower-bound of competency might even be a necessary condition for goal-directedness (play badly enough and it becomes debatable whether you're even trying to win). But when forcing together the two, I feel like something important is lost. To solve this problem, I propose a new metric of goal-directedness, focus: how much is the system trying to accomplish a certain goal. Focus is not the whole story about being goal-directed, but I think computing the focus of a system for some goal (details in the next paragraph) gives useful information about its goal-directedness. Given a system S (as a function from states or histories to actions) and a goal G (as a set of states), here are the steps to compute the focus of S towards G. * I define a reward function over states R valued 1 at states in G and 0 at all other states.
* Then I define Pol to be the set of all policies that can be generate",https://www.alignmentforum.org/posts/X5WTgfX5Ly4ZNHWZD/focus-you-are-allowed-to-be-bad-at-accomplishing-your-goals,2020,blogPost,"Shimi, Adam",AI Alignment Forum Book review: Only One Chance: How Environmental Pollution Impairs Brain Development – and How to Protect the Brains of the Next Generation.,,https://linkinghub.elsevier.com/retrieve/pii/S1462901114001221,2014,journalArticle,"Baum, Seth D.",Environmental Science & Policy How can Interpretability help Alignment?,,https://www.alignmentforum.org/posts/uRnprGSiLGXv35foX/how-can-interpretability-help-alignment,2020,blogPost,"Kirk, Robert; Gavenčiak, Tomáš; Dorner, Flo",AI Alignment Forum Training verified learners with learned verifiers,"This paper proposes a new algorithmic framework, predictor-verifier training, to train neural networks that are verifiable, i.e., networks that provably satisfy some desired input-output properties. The key idea is to simultaneously train two networks: a predictor network that performs the task at hand, e.g., predicting labels given inputs, and a verifier network that computes a bound on how well the predictor satisfies the properties being verified. Both networks can be trained simultaneously to optimize a weighted combination of the standard data-fitting loss and a term that bounds the maximum violation of the property. Experiments show that not only is the predictor-verifier architecture able to train networks to achieve state of the art verified robustness to adversarial examples with much shorter training times (outperforming previous algorithms on small datasets like MNIST and SVHN), but it can also be scaled to produce the first known (to the best of our knowledge) verifiably robust networks for CIFAR-10.",http://arxiv.org/abs/1805.10265,2018,manuscript,"Dvijotham, Krishnamurthy; Gowal, Sven; Stanforth, Robert; Arandjelovic, Relja; O'Donoghue, Brendan; Uesato, Jonathan; Kohli, Pushmeet", "A critical agential account of free will, causation, and physics","This is an account of free choice in a physical universe. It is very much relevant to decision theory and philosophy of science. It is largely metaphysical, in terms of taking certain things to be …",https://unstableontology.com/2020/03/05/a-critical-agential-account-of-free-will-causation-and-physics/,2020,blogPost,"Taylor, Jessica",Unstable Ontology Empirical evidence for resource-rational anchoring and adjustment,"People’s estimates of numerical quantities are systematically biased towards their initial guess. This anchoring bias is usually interpreted as a sign of human irrationality, but it has recently been suggested that the anchoring bias instead results from people’s rational use of their finite time and limited cognitive resources. If this were true, then adjustment should decrease with the relative cost of time. To test this hypothesis, we designed a new numerical estimation paradigm that controls people’s knowledge and varies the cost of time and error independently while allowing people to invest as much or as little time and effort into refining their estimate as they wish. Two experiments confirmed the prediction that adjustment decreases with time cost but increases with error cost regardless of whether the anchor was self-generated or provided. These results support the hypothesis that people rationally adapt their number of adjustments to achieve a near-optimal speed-accuracy tradeoff.
This suggests that the anchoring bias might be a signature of the rational use of finite time and limited cognitive resources rather than a sign of human irrationality.",https://doi.org/10.3758/s13423-017-1288-6,2018,journalArticle,"Lieder, Falk; Griffiths, Thomas L.; M. Huys, Quentin J.; Goodman, Noah D.",Psychonomic Bulletin & Review Using Selective Attention in Reinforcement Learning Agents,"Posted by Yujin Tang, Research Software Engineer and David Ha, Staff Research Scientist, Google Research, Tokyo Inattentional blindness ...",http://ai.googleblog.com/2020/06/using-selective-attention-in.html,2020,blogPost,"Tang, Yujin; Ha, David",Google AI Blog A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks,"Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications. However, deep neural networks with the softmax classifier are known to produce highly overconfident posterior distributions even for such abnormal samples. In this paper, we propose a simple yet effective method for detecting any abnormal samples, which is applicable to any pre-trained softmax neural classifier. We obtain the class conditional Gaussian distributions with respect to (low- and upper-level) features of the deep models under Gaussian discriminant analysis, which result in a confidence score based on the Mahalanobis distance. While most prior methods have been evaluated for detecting either out-of-distribution or adversarial samples, but not both, the proposed method achieves the state-of-the-art performances for both cases in our experiments. Moreover, we found that our proposed method is more robust in harsh cases, e.g., when the training dataset has noisy labels or small number of samples. Finally, we show that the proposed method enjoys broader usage by applying it to class-incremental learning: whenever out-of-distribution samples are detected, our classification rule can incorporate new classes well without further training deep models.",http://arxiv.org/abs/1807.03888,2018,conferencePaper,"Lee, Kimin; Lee, Kibok; Lee, Honglak; Shin, Jinwoo",Advances in Neural Information Processing Systems 31 (NeurIPS 2018) Jukebox: A Generative Model for Music,"We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multi-scale VQ-VAE to compress it to discrete codes, and modeling those using autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non cherry-picked samples at https://jukebox.openai.com, along with model weights and code at https://github.com/openai/jukebox",http://arxiv.org/abs/2005.00341,2020,manuscript,"Dhariwal, Prafulla; Jun, Heewoo; Payne, Christine; Kim, Jong Wook; Radford, Alec; Sutskever, Ilya", ICLR Safe ML Workshop Report,Victoria Krakovna co-organized the 2019 ICLR Safe ML workshop. 
One of the main goals was to bring together near and long term safety research communities.,https://futureoflife.org/2019/06/18/iclr-safe-ml-workshop-report/,2019,blogPost,"Krakovna, Victoria",Future of Life Institute Subjective implication decision theory in critical agentialism,"This is a follow-up to a previous post on critical agentialism, to explore the straightforward decision-theoretic consequences. I call this subjective implication decision theory, since the agent i…",https://unstableontology.com/2020/03/05/subjective-implication-decision-theory-in-critical-agentialism/,2020,blogPost,"Taylor, Jessica",Unstable Ontology The Whiteness of AI,"This paper focuses on the fact that AI is predominantly portrayed as white—in colour, ethnicity, or both. We first illustrate the prevalent Whiteness of real and imagined intelligent machines in four categories: humanoid robots, chatbots and virtual assistants, stock images of AI, and portrayals of AI in film and television. We then offer three interpretations of the Whiteness of AI, drawing on critical race theory, particularly the idea of the White racial frame. First, we examine the extent to which this Whiteness might simply reflect the predominantly White milieus from which these artefacts arise. Second, we argue that to imagine machines that are intelligent, professional, or powerful is to imagine White machines because the White racial frame ascribes these attributes predominantly to White people. Third, we argue that AI racialised as White allows for a full erasure of people of colour from the White utopian imaginary. Finally, we examine potential consequences of the racialisation of AI, arguing it could exacerbate bias and misdirect concern.",https://doi.org/10.1007/s13347-020-00415-6,2020,journalArticle,"Cave, Stephen; Dihal, Kanta",Philosophy & Technology Strategic implications of openness in AI development,,,2017,journalArticle,"Bostrom, Nick",Global Policy ACDT: a hack-y acausal decision theory,"Inspired by my post on problems with causal decision theory (CDT), here is a hacked version of CDT that seems to be able to imitate timeless decision theory (TDT) and functional decision theory[1] (FDT), as well as updateless decision theory (UDT) under certain circumstances. Call this ACDT, for (a)causal decision theory. It is, essentially, CDT which can draw extra, acausal arrows on the causal graphs, and which attempts to figure out which graph represents the world it's in. The drawback is its lack of elegance; the advantage, if it works, is that it's simple to specify and focuses attention on the important aspects of deducing the graph. DEFINING ACDT CDT AND THE NEWCOMB PROBLEM In the Newcomb problem, there is a predictor Ω who leaves two boxes, and predicts whether you will take one (""one-box"") or both (""two-box""). If Ω predicts you will one-box, it had put a large prize in that first box; otherwise that box is empty. There is always a small consolation prize in the second box. In terms of causal graphs, we can represent it this way: The dark red node is the decision node, which the agent can affect. The green node is a utility node, whose value the agent cares about. The CDT agent uses the ""do"" operator from Pearl's Causality. 
Essentially, all the incoming arrows to the decision node are cut (though the CDT agent keeps track of any information gained that way), then the CDT agent maximises its utility by choosing its action: In this situation, the CDT agent will always two-box, since it treats Ω's decision as fixed, and in that case two-boxing dominates, since you get whatever's in the first box, plus the consolation prize. ACDT ALGORITHM The ACDT algorithm is similar, except that when it cuts the causal links to its decision, it also adds potential links from that decision node to all the other nodes in the graph. Then it attempts to figure out which diagram is correct, and then maximises its utility in the CDT way. Note that ACDT doesn't take a",https://www.alignmentforum.org/posts/9m2fzjNSJmd3yxxKG/acdt-a-hack-y-acausal-decision-theory,2020,blogPost,"Armstrong, Stuart",AI Alignment Forum "Asymmetry, uncertainty, and the long term",,https://globalprioritiesinstitute.org/wp-content/uploads/2020/Teruji_Thomas_asymmetry_uncertainty.pdf,2019,report,"Thomas, Teruji", Towards Cooperation in Learning Games,"Suppose that several actors are going to deploy learning agents to act on their behalf. What principles should guide these actors in designing their agents, given that they may have competing goals? An appealing solution concept in this setting is welfare-optimal learning equilibrium. This means that the learning agents should constitute a Nash equilibrium whose payoff profile is optimal according to some measure of total welfare (welfare function). In this work, we construct a class of learning algorithms in this spirit called learning tit-for-tat (L-TFT). L-TFT algorithms maximize a welfare function according to a specified optimization schedule, and punish their counterpart when they detect that they are deviating from this plan. Because the policies of other agents are not in general fully observed, agents must infer whether their counterpart is following a cooperative learning algorithm. This requires us to develop new techniques for making inferences about counterpart learning algorithms. In two sequential social dilemmas, our L-TFT algorithms successfully cooperate in self-play while effectively avoiding exploitation by and punishing defecting learning algorithms.",,2020,manuscript,"Clifton, Jesse; Riché, Maxime", Improved Baselines with Momentum Contrastive Learning,"Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR. In this note, we verify the effectiveness of two of SimCLR’s design improvements by implementing them in the MoCo framework. With simple modifications to MoCo—namely, using an MLP projection head and more data augmentation—we establish stronger baselines that outperform SimCLR and do not require large training batches. We hope this will make state-of-the-art unsupervised learning research more accessible. Code will be made public.",http://arxiv.org/abs/2003.04297,2020,manuscript,"Chen, Xinlei; Fan, Haoqi; Girshick, Ross; He, Kaiming", Bubbles under the wallpaper: healthcare rationing and discrimination,,,2015,journalArticle,"Beckstead, Nick; Ord, Toby",Bioethics: an anthology Evolution Strategies as a Scalable Alternative to Reinforcement Learning,"We explore the use of Evolution Strategies (ES), a class of black box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients.
Experiments on MuJoCo and Atari show that ES is a viable solution strategy that scales extremely well with the number of CPUs available: By using a novel communication strategy based on common random numbers, our ES implementation only needs to communicate scalars, making it possible to scale to over a thousand parallel workers. This allows us to solve 3D humanoid walking in 10 minutes and obtain competitive results on most Atari games after one hour of training. In addition, we highlight several advantages of ES as a black box optimization technique: it is invariant to action frequency and delayed rewards, tolerant of extremely long horizons, and does not need temporal discounting or value function approximation.",https://arxiv.org/abs/1703.03864v2,2017,manuscript,"Salimans, Tim; Ho, Jonathan; Chen, Xi; Sidor, Szymon; Sutskever, Ilya", Human Instruction-Following with Deep Reinforcement Learning via Transfer-Learning from Text,"Recent work has described neural-network-based agents that are trained with reinforcement learning (RL) to execute language-like commands in simulated worlds, as a step towards an intelligent agent or robot that can be instructed by human users. However, the optimisation of multi-goal motor policies via deep RL from scratch requires many episodes of experience. Consequently, instruction-following with deep RL typically involves language generated from templates (by an environment simulator), which does not reflect the varied or ambiguous expressions of real users. Here, we propose a conceptually simple method for training instruction-following agents with deep RL that are robust to natural human instructions. By applying our method with a state-of-the-art pre-trained text-based language model (BERT), on tasks requiring agents to identify and position everyday objects relative to other objects in a naturalistic 3D simulated room, we demonstrate substantially-above-chance zero-shot transfer from synthetic template commands to natural instructions given by humans. Our approach is a general recipe for training any deep RL-based system to interface with human users, and bridges the gap between two research directions of notable recent success: agent-centric motor behavior and text-based representation learning.",http://arxiv.org/abs/2005.09382,2020,manuscript,"Hill, Felix; Mokra, Sona; Wong, Nathaniel; Harley, Tim", Towards A Rigorous Science of Interpretable Machine Learning,"As machine learning systems become ubiquitous, there has been a surge of interest in interpretable machine learning: systems that provide explanation for their outputs. These explanations are often used to qualitatively assess other criteria such as safety or non-discrimination. However, despite the interest in interpretability, there is very little consensus on what interpretable machine learning is and how it should be measured. In this position paper, we first define interpretability and describe when interpretability is needed (and when it is not).
Next, we suggest a taxonomy for rigorous evaluation and expose open questions towards a more rigorous science of interpretable machine learning.",http://arxiv.org/abs/1702.08608,2017,manuscript,"Doshi-Velez, Finale; Kim, Been", Society-in-the-loop: programming the algorithmic social contract,,http://link.springer.com/10.1007/s10676-017-9430-8,2018,journalArticle,"Rahwan, Iyad",Ethics and Information Technology Shaping economic incentives for collaborative AGI,"In ""An AI Race for Strategic Advantage: Rhetoric and Risks"" (2018), Stephen Cave and Seán S ÓhÉigeartaigh argue that we should try to promote a cooperative AI narrative over a competitive one: The next decade will see AI applied in an increasingly integral way to safety-critical systems; healthcare, transport, infrastructure to name a few. In order to realise these benefits as quickly and safely as possible, sharing of research, datasets, and best practices will be critical. For example, to ensure the safety of autonomous cars, pooling expertise and datasets on vehicle performances across as wide as possible a range of environments and conditions (including accidents and near-accidents) would provide substantial benefits for all involved. This is particularly so given that the research, data, and testing needed to refine and ensure the safety of such systems before deployment may be considerably more costly and time-consuming than the research needed to develop the initial technological capability.Promoting recognition that deep cooperation of this nature is needed to deliver the benefits of AI robustly may be a powerful tool in dispelling a ‘technological race’ narrative; and a ‘cooperation for safe AI’ framing is likely to become increasingly important as more powerful and broadly capable AI systems are developed and deployed. [...] There have been encouraging developments promoting the above narratives in recent years. ‘AI for global benefit’ is perhaps best exemplified by the 2017’s ITU summit on AI for Global Good (Butler 2017), although it also features prominently in narratives being put forward by the IEEE’s Ethically Aligned Design process (IEEE 2016), the Partnership on AI, and programmes and materials put forward by Microsoft, DeepMind and other leading companies. Collaboration on AI in safety-critical settings is also a thematic pillar for the Partnership on AI2 . Even more ambitious cooperative projects have been proposed by others, for example the cal",https://www.lesswrong.com/posts/FkZCM4DMprtEp568s/shaping-economic-incentives-for-collaborative-agi,2018,blogPost,Kaj Sotala,LessWrong Perceptual Values from Observation,"Imitation by observation is an approach for learning from expert demonstrations that lack action information, such as videos. Recent approaches to this problem can be placed into two broad categories: training dynamics models that aim to predict the actions taken between states, and learning rewards or features for computing them for Reinforcement Learning (RL). In this paper, we introduce a novel approach that learns values, rather than rewards, directly from observations. 
We show that by using values, we can significantly speed up RL by removing the need to bootstrap action-values, as compared to sparse-reward specifications.",https://arxiv.org/abs/1905.07861v1,2019,manuscript,"Edwards, Ashley D.; Isbell, Charles L.", The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?,"There is growing concern over the potential misuse of artificial intelligence (AI) research. Publishing scientific research can facilitate misuse of the technology, but the research can also contribute to protections against misuse. This paper addresses the balance between these two effects. Our theoretical framework elucidates the factors governing whether the published research will be more useful for attackers or defenders, such as the possibility for adequate defensive measures, or the independent discovery of the knowledge outside of the scientific community. The balance will vary across scientific fields. However, we show that the existing conversation within AI has imported concepts and conclusions from prior debates within computer security over the disclosure of software vulnerabilities. While disclosure of software vulnerabilities often favours defence, this cannot be assumed for AI research. The AI research community should consider concepts and policies from a broad set of adjacent fields, and ultimately needs to craft policy well-suited to its particular challenges.",http://arxiv.org/abs/2001.00463,2020,conferencePaper,"Shevlane, Toby; Dafoe, Allan","AIES '20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society" Death and pain of a digital brain,,,2015,journalArticle,"Sandberg, Anders",New Scientist Causal Discovery in the Presence of Missing Data,"Missing data are ubiquitous in many domains such as healthcare. When these data entries are not missing completely at random, the (conditional) independence relations in the observed data may be di...",http://proceedings.mlr.press/v89/tu19a.html,2019,conferencePaper,"Tu, Ruibo; Zhang, Cheng; Ackermann, Paul; Mohan, Karthika; Kjellström, Hedvig; Zhang, Kun",The 22nd International Conference on Artificial Intelligence and Statistics Beyond Parity Constraints: Fourier Analysis of Hash Functions for Inference,Random projections have played an important role in scaling up machine learning and data mining algorithms. Recently they have also been applied to probabilistic inference to estimate properties of...,http://proceedings.mlr.press/v48/achim16.html,2016,conferencePaper,"Achim, Tudor; Sabharwal, Ashish; Ermon, Stefano",International Conference on Machine Learning Sequential Equilibrium in Computational Games,,https://dl.acm.org/doi/10.1145/3340232,2019,journalArticle,"Halpern, Joseph Y.; Pass, Rafael",ACM Transactions on Economics and Computation Writeup: Progress on AI Safety via Debate,"This is a writeup of the research done by the ""Reflection-Humans"" team at OpenAI in Q3 and Q4 of 2019. During that period we investigated mechanisms that would allow evaluators to get correct and helpful answers from experts, without the evaluators themselves being expert in the domain of the questions. This follows from the original work on AI Safety via Debate and the call for research on human aspects of AI safety, and is also closely related to work on Iterated Amplification. AUTHORS AND ACKNOWLEDGEMENTS The main researchers on this project were Elizabeth Barnes, Paul Christiano, Long Ouyang and Geoffrey Irving. We are grateful to many others who offered ideas and feedback. 
In particular: the cross-examination idea was inspired by a conversation with Chelsea Voss; Adam Gleave had helpful ideas about the long computation problem; Jeff Wu, Danny Hernandez and Gretchen Krueger gave feedback on a draft; we had helpful conversations with Amanda Askell, Andreas Stuhlmüller and Joe Collman, as well as others on the Ought team and the OpenAI Reflection team. We’d also like to thank our contractors who participated in debate experiments, especially David Jones, Erol Akbaba, Alex Deam and Chris Painter. Oliver Habryka helped format and edit the document for the AI Alignment Forum. Note by Oliver: There is currently a bug with links to headings in a post, causing them to not properly scroll when clicked. Until that is fixed, just open those links in a new tab, which should scroll correctly. OVERVIEW Motivation As we apply ML to increasingly important and complex tasks, the problem of evaluating behaviour and providing a good training signal becomes more difficult. We already see examples of RL leading to undesirable behaviours that superficially ‘look good’ to human evaluators (see this collection of examples). One example from an OpenAI paper is an agent learning incorrect behaviours in a 3d simulator, because the behaviours look like the desired behaviour in the 2d",https://www.alignmentforum.org/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1,2020,blogPost,"Barnes, Beth; Christiano, Paul",AI Alignment Forum Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting,"Machine learning algorithms, when applied to sensitive data, pose a distinct threat to privacy. A growing body of prior work demonstrates that models produced by these algorithms may leak specific private information in the training data to an attacker, either through the models' structure or their observable behavior. However, the underlying cause of this privacy risk is not well understood beyond a handful of anecdotal accounts that suggest overfitting and influence might play a role. This paper examines the effect that overfitting and influence have on the ability of an attacker to learn information about the training data from machine learning models, either through training set membership inference or attribute inference attacks. Using both formal and empirical analyses, we illustrate a clear relationship between these factors and the privacy risk that arises in several popular machine learning algorithms. We find that overfitting is sufficient to allow an attacker to perform membership inference and, when the target attribute meets certain conditions about its influence, attribute inference attacks. Interestingly, our formal analysis also shows that overfitting is not necessary for these attacks and begins to shed light on what other factors may be in play. Finally, we explore the connection between membership inference and attribute inference, showing that there are deep connections between the two that lead to effective new attacks.",http://arxiv.org/abs/1709.01604,2018,conferencePaper,"Yeom, Samuel; Giacomelli, Irene; Fredrikson, Matt; Jha, Somesh",2018 IEEE 31st Computer Security Foundations Symposium (CSF) Robust Multi-Agent Reinforcement Learning via Minimax Deep Deterministic Policy Gradient,"Despite the recent advances of deep reinforcement learning (DRL), agents trained by DRL tend to be brittle and sensitive to the training environment, especially in the multi-agent scenarios. 
In the multi-agent setting, a DRL agent’s policy can easily get stuck in a poor local optimum w.r.t. its training partners – the learned policy may be only locally optimal to other agents’ current policies. In this paper, we focus on the problem of training robust DRL agents with continuous actions in the multi-agent learning setting so that the trained agents can still generalize when their opponents’ policies alter. To tackle this problem, we propose a new algorithm, MiniMax Multi-agent Deep Deterministic Policy Gradient (M3DDPG), with the following contributions: (1) we introduce a minimax extension of the popular multi-agent deep deterministic policy gradient algorithm (MADDPG), for robust policy learning; (2) since the continuous action space leads to computational intractability in our minimax learning objective, we propose Multi-Agent Adversarial Learning (MAAL) to efficiently solve our proposed formulation. We empirically evaluate our M3DDPG algorithm in four mixed cooperative and competitive multi-agent environments and the agents trained by our method significantly outperform existing baselines.",http://www.aaai.org/ojs/index.php/AAAI/article/view/4327,2019,conferencePaper,"Li, Shihui; Wu, Yi; Cui, Xinyue; Dong, Honghua; Fang, Fei; Russell, Stuart",Proceedings of the AAAI Conference on Artificial Intelligence UN High-level Panel on Digital Cooperation: A Proposal for International AI Governance,,https://digitalcooperation.org/wp-content/uploads/2019/02/Luke_Kemp_Submission-to-the-UN-High-Level-Panel-on-Digital-Cooperation-2019-Kemp-et-al.pdf,2019,report,"Kemp, Luke; Cihon, Peter; Maas, Matthijs M; Belfield, Haydn; Ó hÉigeartaigh, Seán; Leung, Jade; Cremer, Zoe", Thoughts on Human Models,"Human values and preferences are hard to specify, especially in complex domains. Accordingly, much AGI safety research has focused on approaches to AGI design that refer to human values and preferences indirectly, by learning a model that is grounded in expressions of human values (via stated preferences, observed behaviour, approval, etc.) and/or real-world processes that generate expressions of those values. There are additionally approaches aimed at modelling or imitating other aspects of human cognition or behaviour without an explicit aim of capturing human preferences (but usually in service of ultimately satisfying them). Let us refer to all these models as human models. In this post, we discuss several reasons to be cautious about AGI designs that use human models. We suggest that the AGI safety research community put more effort into developing approaches that work well in the absence of human models, alongside the approaches that rely on human models. This would be a significant addition to the current safety research landscape, especially if we focus on working out and trying concrete approaches as opposed to developing theory. We also acknowledge various reasons why avoiding human models seems difficult. PROBLEMS WITH HUMAN MODELS To be clear about human models, we draw a rough distinction between our actual preferences (which may not be fully accessible to us) and procedures for evaluating our preferences. The first thing, actual preferences, is what humans actually want upon reflection. Satisfying our actual preferences is a win. The second thing, procedures for evaluating preferences, refers to various proxies for our actual preferences such as our approval, or what looks good to us (with necessarily limited information or time for thinking).
Human models are in the second category; consider, as an example, a highly accurate ML model of human yes/no approval on the set of descriptions of outcomes. Our first concern, described below, is about overfit",https://www.alignmentforum.org/posts/BKjJJH2cRpJcAnP7T/thoughts-on-human-models,2019,blogPost,"Kumar, Ramana; Garrabrant, Scott",AI Alignment Forum Risk-Aware Active Inverse Reinforcement Learning,"Active learning from demonstration allows a robot to query a human for specific types of input to achieve efficient learning. Existing work has explored a variety of active query strategies; however, to our knowledge, none of these strategies directly minimize the performance risk of the policy the robot is learning. Utilizing recent advances in performance bounds for inverse reinforcement learning, we propose a risk-aware active inverse reinforcement learning algorithm that focuses active queries on areas of the state space with the potential for large generalization error. We show that risk-aware active learning outperforms standard active IRL approaches on gridworld, simulated driving, and table setting tasks, while also providing a performance-based stopping criterion that allows a robot to know when it has received enough demonstrations to safely perform a task.",https://arxiv.org/abs/1901.02161v2,2019,conferencePaper,"Brown, Daniel S.; Cui, Yuchen; Niekum, Scott",Proceedings of The 2nd Conference on Robot Learning Benefits of Assistance over Reward Learning,"Much recent work has focused on how an agent can learn what to do from human feedback, leading to two major paradigms. The first paradigm is reward learning, in which the agent learns a reward...",https://openreview.net/forum?id=DFIoGDZejIB,2020,conferencePaper,Anonymous, Inferring Reward Functions from Demonstrators with Unknown Biases,"Our goal is to infer reward functions from demonstrations. In order to infer the correct reward function, we must account for the systematic ways in which the demonstrator is suboptimal. Prior work...",https://openreview.net/forum?id=rkgqCiRqKQ,2018,journalArticle,"Shah, Rohin; Gundotra, Noah; Abbeel, Pieter; Dragan, Anca", AvE: Assistance via Empowerment,"One difficulty in using artificial agents for human-assistive applications lies in the challenge of accurately assisting with a person's goal(s). Existing methods tend to rely on inferring the human's goal, which is challenging when there are many potential goals or when the set of candidate goals is difficult to identify. We propose a new paradigm for assistance by instead increasing the human's ability to control their environment, and formalize this approach by augmenting reinforcement learning with human empowerment. This task-agnostic objective preserves the person's autonomy and ability to achieve any eventual state. We test our approach against assistance based on goal inference, highlighting scenarios where our method overcomes failure modes stemming from goal ambiguity or misspecification. As existing methods for estimating empowerment in continuous domains are computationally hard, precluding its use in real time learned assistance, we also propose an efficient empowerment-inspired proxy metric. 
Using this, we are able to successfully demonstrate our method in a shared autonomy user study for a challenging simulated teleoperation task with human-in-the-loop training.",http://arxiv.org/abs/2006.14796,2020,manuscript,"Du, Yuqing; Tiomkin, Stas; Kiciman, Emre; Polani, Daniel; Abbeel, Pieter; Dragan, Anca", Responses to the Journey to the Singularity,,http://link.springer.com/10.1007/978-3-662-54033-6_3,2017,bookSection,"Sotala, Kaj; Yampolskiy, Roman",The Technological Singularity Scaling provable adversarial defenses,,,2018,conferencePaper,"Wong, Eric; Schmidt, Frank; Metzen, Jan Hendrik; Kolter, J. Zico",Advances in Neural Information Processing Systems The Assistive Multi-Armed Bandit,"Learning preferences implicit in the choices humans make is a well studied problem in both economics and computer science. However, most work makes the assumption that humans are acting (noisily) optimally with respect to their preferences. Such approaches can fail when people are themselves learning about what they want. In this work, we introduce the assistive multi-armed bandit, where a robot assists a human playing a bandit task to maximize cumulative reward. In this problem, the human does not know the reward function but can learn it through the rewards received from arm pulls; the robot only observes which arms the human pulls but not the reward associated with each pull. We offer sufficient and necessary conditions for successfully assisting the human in this framework. Surprisingly, better human performance in isolation does not necessarily lead to better performance when assisted by the robot: a human policy can do better by effectively communicating its observed rewards to the robot. We conduct proof-of-concept experiments that support these results. We see this work as contributing towards a theory behind algorithms for human-robot interaction.",http://arxiv.org/abs/1901.08654,2019,conferencePaper,"Chan, Lawrence; Hadfield-Menell, Dylan; Srinivasa, Siddhartha; Dragan, Anca",2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) Adaptive Autonomous Secure Cyber Systems,,http://link.springer.com/10.1007/978-3-030-33432-1,2020,book,, How do we know we have global environmental problems? Science and the globalization of environmental discourse,,https://linkinghub.elsevier.com/retrieve/pii/0016718592900515,1992,journalArticle,"Taylor, Peter J.; Buttel, Frederick H.",Geoforum Agential Risks: A Comprehensive Introduction,"The greatest existential threats to humanity stem from increasingly powerful advanced technologies. Yet the “risk potential” of such tools can only be realized when coupled with a suitable agent who, through error or terror, could use the tool to bring about an existential catastrophe. While the existential risk literature has provided many accounts of how advanced technologies might be misused and abused to cause unprecedented harm, no scholar has yet explored the other half of the agent-tool coupling, namely the agent. 
This paper aims to correct this failure by offering a comprehensive overview of what we could call “agential riskology.” Only by studying the unique properties of different agential risk types can one acquire an accurate picture of the existential danger before us.",,2016,manuscript,"Torres, Phil", Get ready for the dawn of superintelligence,,,2014,magazineArticle,"Bostrom, Nick",New Scientist Backpropagation through the Void: Optimizing control variates for black-box gradient estimation,"Gradient-based optimization is the foundation of deep learning and reinforcement learning. Even when the mechanism being optimized is unknown or not differentiable, optimization using high-variance or biased gradient estimates is still often the best strategy. We introduce a general framework for learning low-variance, unbiased gradient estimators for black-box functions of random variables. Our method uses gradients of a neural network trained jointly with model parameters or policies, and is applicable in both discrete and continuous settings. We demonstrate this framework for training discrete latent-variable models. We also give an unbiased, action-conditional extension of the advantage actor-critic reinforcement learning algorithm.",http://arxiv.org/abs/1711.00123,2018,conferencePaper,"Grathwohl, Will; Choi, Dami; Wu, Yuhuai; Roeder, Geoffrey; Duvenaud, David",arXiv:1711.00123 [cs] Multiagent cooperation and competition with deep reinforcement learning,,https://dx.plos.org/10.1371/journal.pone.0172395,2017,journalArticle,"Tampuu, Ardi; Matiisen, Tambet; Kodelja, Dorian; Kuzovkin, Ilya; Korjus, Kristjan; Aru, Juhan; Aru, Jaan; Vicente, Raul",PLOS ONE Reinforcement Learning under Threats,"In several reinforcement learning (RL) scenarios, mainly in security settings, there may be adversaries trying to interfere with the reward generating process. In this paper, we introduce Threatened Markov Decision Processes (TMDPs), which provide a framework to support a decision maker against a potential adversary in RL. Furthermore, we propose a level-$k$ thinking scheme resulting in a new learning framework to deal with TMDPs. After introducing our framework and deriving theoretical results, relevant empirical evidence is given via extensive experiments, showing the benefits of accounting for adversaries while the agent learns.",http://arxiv.org/abs/1809.01560,2019,conferencePaper,"Gallego, Victor; Naveiro, Roi; Insua, David Rios",Proceedings of the AAAI Conference on Artificial Intelligence Hard Choices in Artificial Intelligence: Addressing Normative Uncertainty through Sociotechnical Commitments,"As AI systems become prevalent in high stakes domains such as surveillance and healthcare, researchers now examine how to design and implement them in a safe manner. However, the potential harms caused by systems to stakeholders in complex social contexts and how to address these remains unclear. In this paper, we explain the inherent normative uncertainty in debates about the safety of AI systems. We then address this as a problem of vagueness by examining its place in the design, training, and deployment stages of AI system development. We adopt Ruth Chang's theory of intuitive comparability to illustrate the dilemmas that manifest at each stage. We then discuss how stakeholders can navigate these dilemmas by incorporating distinct forms of dissent into the development pipeline, drawing on Elizabeth Anderson's work on the epistemic powers of democratic institutions. 
We outline a framework of sociotechnical commitments to formal, substantive and discursive challenges that address normative uncertainty across stakeholders, and propose the cultivation of related virtues by those responsible for development.",http://arxiv.org/abs/1911.09005,2020,conferencePaper,"Dobbe, Roel; Gilbert, Thomas Krendl; Mintz, Yonatan","AIES '20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society" Autonomous Institutional Arrangements in Multilateral Environmental Agreements: A Little-Noticed Phenomenon in International Law,"Since the early 1970s a considerable number of multilateral agreements have been concluded in the environmental field that establish a common pattern of institutional arrangements. The purpose of these arrangements is to develop the normative content of the regulatory regime established by each agreement 1 and to supervise the states parties’ implementation of and compliance with that regime. These institutional arrangements usually comprise a conference or meeting of the parties (COP, MOP) with decision-making powers, a secretariat, and one or more specialist subsidiary bodies. Such arrangements, because of their ad hoc nature, are not intergovernmental organizations (IGOs) in the traditional sense. On the other hand, as the creatures of treaties, such conferences and meetings of the parties, with their secretariats and subsidiary bodies, add up to more than just diplomatic conferences. Because such arrangements do not constitute traditional IGOs and yet are freestanding and distinct both from the states parties to a particular agreement and from existing IGOs, we have chosen to describe them as “autonomous.” They are also autonomous in the sense that they have their own lawmaking powers and compliance mechanisms.",https://www.cambridge.org/core/product/identifier/S0002930000018091/type/journal_article,2000,journalArticle,"Churchill, Robin R.; Ulfstein, Geir",American Journal of International Law Modeling Human Plan Recognition Using Bayesian Theory of Mind,,https://linkinghub.elsevier.com/retrieve/pii/B9780123985323000075,2014,bookSection,"Baker, Chris L.; Tenenbaum, Joshua B.","Plan, Activity, and Intent Recognition" A strategy for assessing safe use of sensors in autonomous road vehicles,,,2017,conferencePaper,"Johansson, Rolf; Alissa, Samieh; Bengtsson, Staffan; Bergenhem, Carl; Bridal, Olof; Cassel, Anders; Chen, De-Jiu; Gassilewski, Martin; Nilsson, Jonas; Sandberg, Anders","International Conference on Computer Safety, Reliability, and Security" Formal verification of hybrid systems,,http://dl.acm.org/citation.cfm?doid=2038642.2038685,2011,conferencePaper,"Alur, Rajeev",Proceedings of the ninth ACM international conference on Embedded software - EMSOFT '11 Reward Tampering Problems and Solutions in Reinforcement Learning: A Causal Influence Diagram Perspective,"Can an arbitrarily intelligent reinforcement learning agent be kept under control by a human user? Or do agents with sufficient intelligence inevitably find ways to shortcut their reward signal? This question impacts how far reinforcement learning can be scaled, and whether alternative paradigms must be developed in order to build safe artificial general intelligence. In this paper, we use an intuitive yet precise graphical model called causal influence diagrams to formalize reward tampering problems. We also describe a number of modifications to the reinforcement learning objective that prevent incentives for reward tampering. 
We verify the solutions using recently developed graphical criteria for inferring agent incentives from causal influence diagrams. Along the way, we also compare corrigibility and self-preservation properties of the various solutions, and discuss how they can be combined into a single agent without reward tampering incentives.",http://arxiv.org/abs/1908.04734,2019,manuscript,"Everitt, Tom; Hutter, Marcus", Responses to catastrophic AGI risk: a survey,,https://iopscience.iop.org/article/10.1088/0031-8949/90/1/018001,2015,journalArticle,"Sotala, Kaj; Yampolskiy, Roman V",Physica Scripta Multiverse-wide Cooperation via Correlated Decision Making,"Some decision theorists argue that when playing a prisoner’s dilemma-type game against a sufficiently similar opponent, we should cooperate to make it more likely that our opponent also cooperates. This idea, which Hofstadter calls superrationality, has strong implications when combined with the insight from modern physics that we probably live in a large universe or multiverse of some sort. If we care about what happens in civilizations located elsewhere in the multiverse, we can superrationally cooperate with some of their inhabitants. That is, if we take their values into account, this makes it more likely that they do the same for us. In this paper, I attempt to assess the practical implications of this idea. I argue that to reap the full gains from trade, everyone should maximize the same impartially weighted sum of the utility functions of all collaborators. I also argue that we can obtain at least weak evidence about the content of these utility functions. In practice, the application of superrationality implies that we should promote causal cooperation, moral pluralism, moral reflection, and ensure that our descendants, who will be smarter and thus better at finding out how to benefit other superrationalists in the universe, engage in superrational cooperation.",,2017,manuscript,"Oesterheld, Caspar", Low impact artificial intelligences,,https://arxiv.org/abs/1705.10720,2017,manuscript,"Armstrong, Stuart; Levinstein, Benjamin", Formalizing convergent instrumental goals,,,2016,conferencePaper,"Benson-Tilsen, Tsvi; Soares, Nate",Workshops at the Thirtieth AAAI Conference on Artificial Intelligence A Strongly Asymptotically Optimal Agent in General Environments,"Reinforcement Learning agents are expected to eventually perform well. Typically, this takes the form of a guarantee about the asymptotic behavior of an algorithm given some assumptions about the environment. We present an algorithm for a policy whose value approaches the optimal value with probability 1 in all computable probabilistic environments, provided the agent has a bounded horizon. This is known as strong asymptotic optimality, and it was previously unknown whether it was possible for a policy to be strongly asymptotically optimal in the class of all computable probabilistic environments. Our agent, Inquisitive Reinforcement Learner (Inq), is more likely to explore the more it expects an exploratory action to reduce its uncertainty about which environment it is in, hence the term inquisitive. Exploring inquisitively is a strategy that can be applied generally; for more manageable environment classes, inquisitiveness is tractable. 
We conducted experiments in ""grid-worlds"" to compare the Inquisitive Reinforcement Learner to other weakly asymptotically optimal agents.",http://arxiv.org/abs/1903.01021,2019,conferencePaper,"Cohen, Michael K.; Catt, Elliot; Hutter, Marcus",Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence The autopilot problem,,,2013,bookSection,"Armstrong, Stuart; Bradshaw, H; Beckstead, Nick; Sandberg, Anders",Systemic Risk of Modelling in Insurance Equilibrium and prior selection problems in multipolar deployment,"To avoid catastrophic conflict in multipolar AI scenarios, we would like to design AI systems such that AI-enabled actors will tend to cooperate. This post is about some problems facing this effort and some possible solutions. To explain these problems, I'll take the view that the agents deployed by AI developers (the ''principals'') in a multipolar scenario are moves in a game. The payoffs to a principal in this game depend on how the agents behave over time. We can talk about the equilibria of this game, and so on. Ideally, we would be able to make guarantees like this: 1. The payoffs resulting from the deployed agents' actions are optimal with respect to some appropriate ""welfare function''. This welfare function would encode some combination of total utility, fairness, and other social desiderata; 2. The agents are in equilibrium --- that is, no principal has an incentive to deploy an agent with a different design, given the agents deployed by the other principals. The motivation for item 1 is clear: we want outcomes which are fair by each of the principals' lights. In particular, we want an outcome that the principals will all agree to. And item 2 is desirable because an equilibrium constitutes a self-enforcing contract; each agent wants to play their equilibrium strategy, if they believe that the other agents are playing the same equilibrium. Thus, given that the principals all say that they will deploy agents that satisfy 1 and 2, we could have some confidence that a welfare-optimal outcome will in fact obtain. Two simple but critical problems need to be addressed in order to make such guarantees: the equilibrium and prior selection problems. The equilibrium selection problem is that this deployment game will have many equilibria. Even if the principals agree on a welfare function, it is possible that many different profiles of agents optimize the same welfare function. So the principals need to coordinate on the profile of agents dep",https://www.alignmentforum.org/posts/Tdu3tGT4i24qcLESh/equilibrium-and-prior-selection-problems-in-multipolar-1,2020,blogPost,"Clifton, Jesse",AI Alignment Forum Almost common priors,,http://link.springer.com/10.1007/s00182-012-0347-5,2013,journalArticle,"Hellman, Ziv",International Journal of Game Theory Person-affecting views may be dominated by possibilities of large future populations of necessary people,,http://reflectivedisequilibrium.blogspot.com/2019/11/person-affecting-views-may-be-dominated.html,2019,blogPost,"Shulman, Carl",Reflective Disequilibrium Complications in evaluating neglectedness,Neglectedness (or crowdedness) is a heuristic that effective altruists use to assess how much impact they could have in a specific cause area. It is usually combined with scale (a.k.a. 
importance) …,https://casparoesterheld.com/2017/06/25/complications-in-evaluating-neglectedness/,2017,blogPost,"Oesterheld, Caspar",The Universe from an Intentional Stance Being nice to software animals and babies,,,2014,journalArticle,"Sandberg, Anders",Intelligence Unbound: The Future of Uploaded and Machine Minds Learning-based Model Predictive Control for Safe Exploration,"Learning-based methods have been successful in solving complex control tasks without significant prior knowledge about the system. However, these methods typically do not provide any safety guarantees, which prevents their use in safety-critical, real-world applications. In this paper, we present a learning-based model predictive control scheme that can provide provable high-probability safety guarantees. To this end, we exploit regularity assumptions on the dynamics in terms of a Gaussian process prior to construct provably accurate confidence intervals on predicted trajectories. Unlike previous approaches, we do not assume that model uncertainties are independent. Based on these predictions, we guarantee that trajectories satisfy safety constraints. Moreover, we use a terminal set constraint to recursively guarantee the existence of safe control actions at every iteration. In our experiments, we show that the resulting algorithm can be used to safely and efficiently explore and learn about dynamic systems.",http://arxiv.org/abs/1803.08287,2018,conferencePaper,"Koller, Torsten; Berkenkamp, Felix; Turchetta, Matteo; Krause, Andreas",2018 IEEE Conference on Decision and Control (CDC) Artificial Intelligence and Its Implications for Future Suffering,"Artificial intelligence (AI) will likely transform the world later this century. Whether uncontrolled or controlled AIs would create more suffering in expectation is a question to explore further. Regardless, the field of AI safety and policy seems to be a very important space where altruists can make a positive-sum impact along many dimensions.",https://longtermrisk.org/artificial-intelligence-and-its-implications-for-future-suffering/,2015,blogPost,"Tomasik, Brian",Center on Long-Term Risk The AGI Containment Problem,"There is considerable uncertainty about what properties, capabilities and motivations future AGIs will have. In some plausible scenarios, AGIs may pose security risks arising from accidents and defects. In order to mitigate these risks, prudent early AGI research teams will perform significant testing on their creations before use. Unfortunately, if an AGI has human-level or greater intelligence, testing itself may not be safe; some natural AGI goal systems create emergent incentives for AGIs to tamper with their test environments, make copies of themselves on the internet, or convince developers and operators to do dangerous things. In this paper, we survey the AGI containment problem – the question of how to build a container in which tests can be conducted safely and reliably, even on AGIs with unknown motivations and capabilities that could be dangerous. We identify requirements for AGI containers, available mechanisms, and weaknesses that need to be addressed.",http://arxiv.org/abs/1604.00545,2016,conferencePaper,"Babcock, James; Kramar, Janos; Yampolskiy, Roman",AGI 2016: Artificial General Intelligence Modular task and motion planning in belief space,"The execution of long-horizon tasks under uncertainty is a fundamental challenge in robotics. Recent approaches have made headway on these tasks with an integration of task and motion planning. 
In this paper, we present Interfaced Belief Space Planning (IBSP): a modular approach to task and motion planning in belief space. We use a task-independent interface layer to combine an off-the-shelf classical planner with motion planning and inference. We determinize the problem under the maximum likelihood observation assumption to obtain a deterministic representation where successful plans generate goal-directed observations. We leverage properties of maximum likelihood observation determinizations to obtain a simple representation of (optimistic) belief space dynamics that is well-suited to planning. Our interface is implemented with standard belief state queries, requiring only the ability to sample, compute unnormalized likelihoods, and compute maximum likelihood states. Our contribution is a novel algorithm for task and motion planning in belief space that has minimal dependence on the details of the inference engine used. IBSP can work with a broad class of black box state estimators, with zero changes to the algorithm. We validate our approach in simulated tasks for the PR2 that account for continuous state, different types of initial state distributions, and negative observations.",http://ieeexplore.ieee.org/document/7354079/,2015,conferencePaper,"Hadfield-Menell, Dylan; Groshev, Edward; Chitnis, Rohan; Abbeel, Pieter",2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Learning in two-player games between transparent opponents,"We consider a scenario in which two reinforcement learning agents repeatedly play a matrix game against each other and update their parameters after each round. The agents' decision-making is transparent to each other, which allows each agent to predict how their opponent will play against them. To prevent an infinite regress of both agents recursively predicting each other indefinitely, each agent is required to give an opponent-independent response with some probability at least epsilon. Transparency also allows each agent to anticipate and shape the other agent's gradient step, i.e. to move to regions of parameter space in which the opponent's gradient points in a direction favourable to them. We study the resulting dynamics experimentally, using two algorithms from previous literature (LOLA and SOS) for opponent-aware learning. We find that the combination of mutually transparent decision-making and opponent-aware learning robustly leads to mutual cooperation in a single-shot prisoner's dilemma. In a game of chicken, in which both agents try to manoeuvre their opponent towards their preferred equilibrium, converging to a mutually beneficial outcome turns out to be much harder, and opponent-aware learning can even lead to worst-case outcomes for both agents. This highlights the need to develop opponent-aware learning algorithms that achieve acceptable outcomes in social dilemmas involving an equilibrium selection problem.",http://arxiv.org/abs/2012.02671,2020,manuscript,"Hutter, Adrian", Thoughts on Updatelessness,"[This post assumes knowledge of decision theory, as discussed in Eliezer Yudkowsky’s Timeless Decision Theory.] One interesting feature of some decision theories that I used to be a bit confused ab…",https://casparoesterheld.com/2016/11/21/thoughts-on-updatelessness/,2016,blogPost,"Oesterheld, Caspar",The Universe from an Intentional Stance Human-Level AI,"'Human-level AI' refers to AI which can reproduce everything a human can do, approximately. Several variants of this concept are worth distinguishing. 
Details Variations in the meaning of 'human-level AI' Considerations in specifying 'human-level AI' more precisely: Do we mean to imply anything about running costs? Is an AI that reproduces human behavior for ten billion dollars per year 'human-level',...",https://aiimpacts.org/human-level-ai/,2014,blogPost,AI Impacts,AI Impacts AI Research Considerations for Human Existential Safety (ARCHES),"Framed in positive terms, this report examines how technical AI research might be steered in a manner that is more attentive to humanity’s long-term prospects for survival as a species. In negative terms, we ask what existential risks humanity might face from AI development in the next century, and by what principles contemporary technical research might be directed to address those risks.",,2020,report,"Critch, Andrew; Krueger, David", Crime and Punishment: an Economic Approach,"Since the turn of the twentieth century, legislation in Western countries has expanded rapidly to reverse the brief dominance of laissez faire during the nineteenth century. The state no longer merely protects against violations of person and property through murder, rape, or burglary but also restricts ‘discrimination’ against certain minorities, collusive business arrangements, ‘jaywalking’, travel, the materials used in construction, and thousands of other activities. The activities restricted not only are numerous but also range widely, affecting persons in very different pursuits and of diverse social backgrounds, education levels, ages, races, etc. Moreover, the likelihood that an offender will be discovered and convicted and the nature and extent of punishments differ greatly from person to person and activity to activity. Yet, in spite of such diversity, some common properties are shared by practically all legislation, and these properties form the subject matter of this essay.",https://doi.org/10.1007/978-1-349-62853-7_2,2000,bookSection,"Becker, Gary S.",The Economic Dimensions of Crime Exact Sampling with Integer Linear Programs and Random Perturbations,"We consider the problem of sampling from a discrete probability distribution specified by a graphical model. Exact samples can, in principle, be obtained by computing the mode of the original model perturbed with an exponentially many i.i.d. random variables. We propose a novel algorithm that views this as a combinatorial optimization problem and searches for the extreme state using a standard integer linear programming (ILP) solver, appropriately extended to account for the random perturbation. Our technique, GumbelMIP, leverages linear programming (LP) relaxations to evaluate the quality of samples and prune large portions of the search space, and can thus scale to large tree-width models beyond the reach of current exact inference methods. Further, when the optimization problem is not solved to optimality, our method yields a novel approximate sampling technique. 
We empirically demonstrate that our approach parallelizes well, our exact sampler scales better than alternative approaches, and our approximate sampler yields better quality samples than a Gibbs sampler and a low-dimensional perturbation method.",,2016,conferencePaper,"Kim, Carolyn; Sabharwal, Ashish; Ermon, Stefano",AAAI'16: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence Do Artificial Reinforcement-Learning Agents Matter Morally?,"Artificial reinforcement learning (RL), a widely used training method in computer science, has striking parallels to reward and punishment learning in biological brains. Plausible theories of consciousness imply a non-zero probability that RL agents qualify as sentient and deserve our moral consideration, especially as AI research advances and RL agents become more sophisticated.",https://longtermrisk.org/do-artificial-reinforcement-learning-agents-matter-morally/,2016,blogPost,"Tomasik, Brian",Center on Long-Term Risk Faulty Reward Functions in the Wild,"Reinforcement learning algorithms can break in surprising, counterintuitive ways. In this post we'll explore one failure mode, which is where you misspecify your reward function.",https://openai.com/blog/faulty-reward-functions/,2016,blogPost,"Clark, Jack; Amodei, Dario",OpenAI Measurement in AI Policy: Opportunities and Challenges,"As artificial intelligence increasingly influences our world, it becomes crucial to assess its technical progress and societal impact. This paper surveys problems and opportunities in the measurement of AI systems and their impact, based on a workshop held at Stanford University in the fall of 2019. We identify six summary challenges inherent to measuring the progress and impact of AI, and summarize over 40 presentations and associated discussions from the workshop. We hope this can inspire research agendas in this crucial area.",http://arxiv.org/abs/2009.09071,2020,manuscript,"Mishra, Saurabh; Clark, Jack; Perrault, C. Raymond", The Impossibility of a Satisfactory Population Ethics,,http://www.worldscientific.com/doi/abs/10.1142/9789814368018_0001,2011,bookSection,"Arrhenius, Gustaf",Advanced Series on Mathematical Psychology "High Reliability Organizations: Unlikely, Demanding and At Risk",,http://doi.wiley.com/10.1111/j.1468-5973.1996.tb00078.x,1996,journalArticle,"La Porte, Todd R.",Journal of Contingencies and Crisis Management Underprotection of Unpredictable Statistical Lives Compared to Predictable Ones,"Existing ethical discussion considers the differences in care for identified versus statistical lives. However, there has been little attention to the different degrees of care that are taken for different kinds of statistical lives. Here we argue that for a given number of statistical lives at stake, there will sometimes be different, and usually greater, care taken to protect predictable statistical lives, in which the number of lives that will be lost can be predicted fairly accurately, than for unpredictable statistical lives, where the lives are at stake because of a low-probability event, such that most likely no one will be affected by the decision but with low probability some lives will be at stake. One reason for this difference is the statistical challenge of estimating low probabilities, and in particular the tendency of common approaches to underestimate these probabilities. Another is the existence of rational incentives to treat unpredictable risks as if the probabilities were lower than they are. 
Some of these factors apply outside the pure economic context, to institutions, individuals, and governments. We argue that there is no ethical reason to treat unpredictable statistical lives differently from predictable statistical lives. Moreover, lives that are unpredictable from the perspective of an individual agent may become predictable when aggregated to the level of a societal decision. Underprotection of unpredictable statistical lives is a form of market failure that may need to be corrected by altering regulation, introducing compulsory liability insurance, or other social policies.",https://onlinelibrary.wiley.com/doi/abs/10.1111/risa.12658,2017,journalArticle,"Lipsitch, Marc; Evans, Nicholas G.; Cotton‐Barratt, Owen",Risk Analysis Prediction: The long and the short of it.,"Commentators often lament forecasters’ inability to provide precise predictions of the long-run behaviour of complex economic and physical systems. Yet their concerns often conflate the presence of substantial long-run uncertainty with the need for long-run predictability; short-run predictions can partially substitute for long-run predictions if decision-makers can adjust their activities over time. So what is the relative importance of short- and long-run predictability? We study this question in a model of rational dynamic adjustment to a changing environment. Even if adjustment costs, discount factors, and long-run uncertainty are large, short-run predictability can be much more important than long-run predictability.",,2019,report,"Millner, Antony; Heyen, Daniel", DropoutDAgger: A Bayesian Approach to Safe Imitation Learning,"While imitation learning is becoming common practice in robotics, this approach often suffers from data mismatch and compounding errors. DAgger is an iterative algorithm that addresses these issues by continually aggregating training data from both the expert and novice policies, but does not consider the impact of safety. We present a probabilistic extension to DAgger, which uses the distribution over actions provided by the novice policy, for a given observation. Our method, which we call DropoutDAgger, uses dropout to train the novice as a Bayesian neural network that provides insight to its confidence. Using the distribution over the novice's actions, we estimate a probabilistic measure of safety with respect to the expert action, tuned to balance exploration and exploitation. The utility of this approach is evaluated on the MuJoCo HalfCheetah and in a simple driving experiment, demonstrating improved performance and safety compared to other DAgger variants and classic imitation learning.",http://arxiv.org/abs/1709.06166,2017,manuscript,"Menda, Kunal; Driggs-Campbell, Katherine; Kochenderfer, Mykel J.", Bayesian Evolving-to-Extinction,"The present discussion owes a lot to Scott Garrabrant and Evan Hubinger. In Defining Myopia, I formalized temporal or cross-instance myopia / non-myopia, but I claimed that there should also be some kind of single-instance myopia which I hadn't properly captured. I also suggested this in Predict-O-Matic. This post is intended to be an example of single-instance partial agency. EVOLVING TO EXTINCTION Evolution might be myopic in a number of ways, but one way is that it's myopic across individuals -- it typically produces results very different from what group selection would produce, because it's closer to optimizing relative fitness of individuals (relative to each other) than it is to optimizing overall fitness. 
Adaptations which help members of a species compete with each other are a great example of this. Why increase your own fitness, when you can just decrease someone else's instead? We're lucky that it's typically pretty hard, at least historically, to do things which are bad across the board but slightly less bad for the one doing them. Imagine a ""toxic gas gene"" which makes the air harder for everyone to breathe, but slightly less so for carriers of the gene. Such a gene would be selected for. This kind of thing can be selected for even to the point where it drives the population of a species right down to zero, as Eliezer's essay on evolving to extinction highlighted. Actually, as Eliezer's essay emphasized, it's not even that evolution is myopic at the level of individuals; evolution is myopic down to the level of individual genes, an observation which better explains the examples of evolving-to-extinction which he discusses. (This is, of course, the point of Dawkins' book The Selfish Gene.) But the analogy of myopia-across-individuals will suit me better here. BAYES ""EVOLVING TO EXTINCTION"" The title of this post is a hyperbole, since there isn't an analog of an extinction event in the model I'm about to describe, but it illustrates that in extrem",https://www.alignmentforum.org/posts/u9Azdu6Z7zFAhd4rK/bayesian-evolving-to-extinction,2020,blogPost,"Demski, Abram",AI Alignment Forum Health Effects of Media on Children and Adolescents,,http://pediatrics.aappublications.org/cgi/doi/10.1542/peds.2009-2563,2010,journalArticle,"Strasburger, V. C.; Jordan, A. B.; Donnerstein, E.",PEDIATRICS An AI Race for Strategic Advantage: Rhetoric and Risks,,https://dl.acm.org/doi/10.1145/3278721.3278780,2018,conferencePaper,"Cave, Stephen; ÓhÉigeartaigh, Seán S.","Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society" Existential Risk Prevention as Global Priority,"Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value. Despite their importance, issues surrounding human-extinction risks and related hazards remain poorly understood. In this article, I clarify the concept of existential risk and develop an improved classification scheme. I discuss the relation between existential risks and basic issues in axiology, and show how existential risk reduction (via the maxipok rule) can serve as a strongly action-guiding principle for utilitarian concerns. I also show how the notion of existential risk suggests a new way of thinking about the ideal of sustainability. Policy Implications • Existential risk is a concept that can focus long-term global efforts and sustainability concerns. • The biggest existential risks are anthropogenic and related to potential future technologies. • A moral case can be made that existential risk reduction is strictly more important than any other global public good. • Sustainability should be reconceptualised in dynamic terms, as aiming for a sustainable trajectory rather than a sustainable state. • Some small existential risks can be mitigated today directly (e.g. asteroids) or indirectly (by building resilience and reserves to increase survivability in a range of extreme scenarios) but it is more important to build capacity to improve humanity’s ability to deal with the larger existential risks that will arise later in this century. 
This will require collective wisdom, technology foresight, and the ability when necessary to mobilise a strong global coordinated response to anticipated existential risks. • Perhaps the most cost-effective way to reduce existential risks today is to fund analysis of a wide range of existential risks and potential mitigation strategies, with a long-term perspective.",https://onlinelibrary.wiley.com/doi/abs/10.1111/1758-5899.12002,2013,journalArticle,"Bostrom, Nick",Global Policy The State of Research in Existential Risk,,https://www.risksciences.ucla.edu/news-events/2018/1/2/proceedings-of-the-first-international-colloquium-on-catastrophic-and-existential-risk,2018,conferencePaper,"Ó hÉigeartaigh, Seán",Proceedings from the first Garrick Colloquium on Catastrophic and Existential Risk Graphical Models for Processing Missing Data,"This paper reviews recent advances in missing data research using graphical models to represent multivariate dependencies. We first examine the limitations of traditional frameworks from three different perspectives: \textit{transparency, estimability and testability}. We then show how procedures based on graphical models can overcome these limitations and provide meaningful performance guarantees even when data are Missing Not At Random (MNAR). In particular, we identify conditions that guarantee consistent estimation in broad categories of missing data problems, and derive procedures for implementing this estimation. Finally we derive testable implications for missing data models in both MAR (Missing At Random) and MNAR categories.",http://arxiv.org/abs/1801.03583,2019,journalArticle,"Mohan, Karthika; Pearl, Judea",Journal of American Statistical Association Estimating Training Data Influence by Tracking Gradient Descent,"We introduce a method called TrackIn that computes the influence of a training example on a prediction made by the model, by tracking how the loss on the test point changes during the training process whenever the training example of interest was utilized. We provide a scalable implementation of TrackIn via a combination of a few key ideas: (a) a first-order approximation to the exact computation, (b) using random projections to speed up the computation of the first-order approximation for large models, (c) using saved checkpoints of standard training procedures, and (d) cherry-picking layers of a deep neural network. An experimental evaluation shows that TrackIn is more effective in identifying mislabelled training examples than other related methods such as influence functions and representer points. We also discuss insights from applying the method on vision, regression and natural language tasks.",http://arxiv.org/abs/2002.08484,2020,manuscript,"Pruthi, Garima; Liu, Frederick; Sundararajan, Mukund; Kale, Satyen", Anthropic uncertainty in the Evidential Blackmail,"I’m currently writing a piece on anthropic uncertainty in Newcomb problems. The idea is that whenever someone simulates us to predict our actions, this leads us to have anthropic uncertainty about …",https://casparoesterheld.com/2017/05/12/anthropic-uncertainty-in-the-evidential-blackmail/,2017,blogPost,"Treutlein, Johannes",The Universe from an Intentional Stance A Roadmap for Robust End-to-End Alignment,"This paper discussed the {\it robust alignment} problem, that is, the problem of aligning the goals of algorithms with human preferences. It presented a general roadmap to tackle this issue. 
Interestingly, this roadmap identifies 5 critical steps, as well as many relevant aspects of these 5 steps. In other words, we have presented a large number of hopefully more tractable subproblems that readers are highly encouraged to tackle. Hopefully, this combination allows us to better highlight the most pressing problems, how each kind of expertise can best be used, and how combining the solutions to subproblems might add up to solve robust alignment.",http://arxiv.org/abs/1809.01036,2020,manuscript,"Hoang, Lê Nguyên", The cost of TEPS,"A billion Traversed Edges Per Second (a GTEPS) can be bought for around $0.26/hour via a powerful supercomputer, including hardware and energy costs only. We do not know if GTEPS can be bought more cheaply elsewhere. We estimate that available TEPS/$ grows by a factor of ten every four years, based on the relationship between TEPS and FLOPS. TEPS have not been...",https://aiimpacts.org/cost-of-teps/,2015,blogPost,AI Impacts,AI Impacts Safer ML paradigms team: the story – AI Safety Research Program,,https://aisrp.org/?page_id=169,2020,blogPost,AI Safety Camp,AI Safety Camp Organizational Culture in High Reliability Organizations: An Extension,,http://journals.sagepub.com/doi/10.1177/001872679504800703,1995,journalArticle,"Klein, Rochelle Lee; Bigley, Gregory A.; Roberts, Karlene H.",Human Relations Self-Supervised Intrinsic Image Decomposition,"Intrinsic decomposition from a single image is a highly challenging task, due to its inherent ambiguity and the scarcity of training data. In contrast to traditional fully supervised learning approaches, in this paper we propose learning intrinsic image decomposition by explaining the input image. Our model, the Rendered Intrinsics Network (RIN), joins together an image decomposition pipeline, which predicts reflectance, shape, and lighting conditions given a single image, with a recombination function, a learned shading model used to recompose the original input based off of intrinsic image predictions. Our network can then use unsupervised reconstruction error as an additional signal to improve its intermediate representations. This allows large-scale unlabeled data to be useful during training, and also enables transferring learned knowledge to images of unseen object categories, lighting conditions, and shapes. Extensive experiments demonstrate that our method performs well on both intrinsic image decomposition and knowledge transfer.",http://arxiv.org/abs/1711.03678,2018,conferencePaper,"Janner, Michael; Wu, Jiajun; Kulkarni, Tejas D.; Yildirim, Ilker; Tenenbaum, Joshua B.",Advances in Neural Information Processing Systems 2017 The Strategic Robot Problem: Lethal Autonomous Weapons in War,,http://www.tandfonline.com/doi/abs/10.1080/15027570.2014.975010,2014,journalArticle,"Roff, Heather M.",Journal of Military Ethics Plausibility and probability in deductive reasoning,"We consider the problem of rational uncertainty about unproven mathematical statements, remarked on by Gödel and others. Using Bayesian-inspired arguments we build a normative model of fair bets under deductive uncertainty which draws from both probability and the theory of algorithms. We comment on connections to Zeilberger's notion of ""semi-rigorous proofs"", particularly that inherent subjectivity would be present. 
We also discuss a financial view with models of arbitrage where traders have limited computational resources.",http://arxiv.org/abs/1708.09032,2019,manuscript,"MacFie, Andrew", The Brain and Computation,,http://mediangroup.org/brain1.html,,blogPost,"Maltinsky, Baeo",Median Group The Technological Landscape Affecting Artificial General Intelligence and the Importance of Nanoscale Neural Probes,"In this paper, we contrast three major pathways to human level AI, also known as artificial general intelligence (AGI), and we investigate how safety considerations compare between the three. The first pathway is de novo AGI (dnAGI), AGI built from the ground up. The second is Neuromorphic AGI (NAGI), AGI based loosely on the principles of the human brain. And third is Whole Brain Emulation (WBE), AGI built by emulating a particular human brain, in silico. Bostrom has previously argued that NAGI is the least safe form of the three. NAGI would be messier than dnAGI and therefore harder to align to arbitrary values. Additionally, NAGI would not intrinsically possess safeguards found in the human brain – such as compassion – while WBE would. In this paper, we argue that getting WBE first would be preferable to getting dnAGI first. While the introduction of WBE would likely be followed by a later transition to the less-constrained and therefore more-powerful dnAGI, the creation of dnAGI would likely be less dangerous if accomplished by WBEs than if done simply by biological humans, for a variety of reasons. One major reason is that the higher intelligence and quicker speed of thinking in the WBEs compared to biological humans could increase the chances of traversing the path through dnAGI safely. We additionally investigate the major technological trends leading to these three types of AGI, and we find these trends to be: traditional AI research, computational hardware, nanotechnology research, nanoscale neural probes, and neuroscience. In particular, we find that WBE is unlikely to be achieved without nanoscale neural probes, since much of the information processing in the brain occurs on the subcellular level (i.e., the nanoscale). For this reason, we argue that nanoscale neural probes could improve safety by favoring WBE over NAGI.",http://www.informatica.si/index.php/informatica/article/view/1874,2017,journalArticle,"Eth, Daniel",Informatica Exploration Potential,"We introduce exploration potential, a quantity that measures how much a reinforcement learning agent has explored its environment class. In contrast to information gain, exploration potential takes the problem's reward structure into account. This leads to an exploration criterion that is both necessary and sufficient for asymptotic optimality (learning to act optimally across the entire environment class). 
Our experiments in multi-armed bandits use exploration potential to illustrate how different algorithms make the tradeoff between exploration and exploitation.",https://arxiv.org/abs/1609.04994v3,2016,manuscript,"Leike, Jan", The Ethics of Outer Space: A Consequentialist Perspective,,,2016,bookSection,"Baum, Seth D.",The Ethics of Space Exploration What makes counterfactuals comparable?,,https://www.alignmentforum.org/posts/6E6D3qLPM3urXDPpK/what-makes-counterfactuals-comparable-1,2020,blogPost,"Leong, Chris",AI Alignment Forum Guiding Policies with Language via Meta-Learning,"Behavioral skills or policies for autonomous agents are conventionally learned from reward functions, via reinforcement learning, or from demonstrations, via imitation learning. However, both modes of task specification have their disadvantages: reward functions require manual engineering, while demonstrations require a human expert to be able to actually perform the task in order to generate the demonstration. Instruction following from natural language instructions provides an appealing alternative: in the same way that we can specify goals to other humans simply by speaking or writing, we would like to be able to specify tasks for our machines. However, a single instruction may be insufficient to fully communicate our intent or, even if it is, may be insufficient for an autonomous agent to actually understand how to perform the desired task. In this work, we propose an interactive formulation of the task specification problem, where iterative language corrections are provided to an autonomous agent, guiding it in acquiring the desired skill. Our proposed language-guided policy learning algorithm can integrate an instruction and a sequence of corrections to acquire new skills very quickly. In our experiments, we show that this method can enable a policy to follow instructions and corrections for simulated navigation and manipulation tasks, substantially outperforming direct, non-interactive instruction following.",http://arxiv.org/abs/1811.07882,2019,manuscript,"Co-Reyes, John D.; Gupta, Abhishek; Sanjeev, Suvansh; Altieri, Nick; Andreas, Jacob; DeNero, John; Abbeel, Pieter; Levine, Sergey", Meta Learning Shared Hierarchies,"We develop a metalearning approach for learning hierarchically structured policies, improving sample efficiency on unseen tasks through the use of shared primitives---policies that are executed for large numbers of timesteps. Specifically, a set of primitives are shared within a distribution of tasks, and are switched between by task-specific policies. We provide a concrete metric for measuring the strength of such hierarchies, leading to an optimization problem for quickly reaching high reward on unseen tasks. We then present an algorithm to solve this problem end-to-end through the use of any off-the-shelf reinforcement learning method, by repeatedly sampling new tasks and resetting task-specific policies. We successfully discover meaningful motor primitives for the directional movement of four-legged robots, solely by interacting with distributions of mazes. We also demonstrate the transferability of primitives to solve long-timescale sparse-reward obstacle courses, and we enable 3D humanoid robots to robustly walk and crawl with the same policy.",http://arxiv.org/abs/1710.09767,2017,manuscript,"Frans, Kevin; Ho, Jonathan; Chen, Xi; Abbeel, Pieter; Schulman, John", The underwriter and the models-solo dances or pas-de-deux? 
What policy data can tell us about how underwriters use models,,https://www.msamlin.com/content/dam/ms-amlin/corporate/our-world/Whitepapers/MS%20Amlin%20White%20Paper%20The%20underwriter%20and%20the%20models-%20solo%20dances%20or%20pas-de-deux.pdf.downloadasset.pdf,2017,report,"Armstrong, Stuart; Weick, Mario; Sandberg, Anders; Snyder-Beattie, Andrew; Beckstead, Nick", Categorizing Wireheading in Partially Embedded Agents,"$\textit{Embedded agents}$ are not explicitly separated from their environment, lacking clear I/O channels. Such agents can reason about and modify their internal parts, which they are incentivized to shortcut or $\textit{wirehead}$ in order to achieve the maximal reward. In this paper, we provide a taxonomy of ways by which wireheading can occur, followed by a definition of wirehead-vulnerable agents. Starting from the fully dualistic universal agent AIXI, we introduce a spectrum of partially embedded agents and identify wireheading opportunities that such agents can exploit, experimentally demonstrating the results with the GRL simulation platform AIXIjs. We contextualize wireheading in the broader class of all misalignment problems - where the goals of the agent conflict with the goals of the human designer - and conjecture that the only other possible type of misalignment is specification gaming. Motivated by this taxonomy, we define wirehead-vulnerable agents as embedded agents that choose to behave differently from fully dualistic agents lacking access to their internal parts.",http://arxiv.org/abs/1906.09136,2019,manuscript,"Majha, Arushi; Sarkar, Sayan; Zagami, Davide", Quantifying Differences in Reward Functions,"For many tasks, the reward function is too complex to be specified procedurally, and must instead be learned from user data. Prior work has evaluated learned reward functions by examining rollouts from a policy optimized for the learned reward. However, this method cannot distinguish between the learned reward function failing to reflect user preferences, and the reinforcement learning algorithm failing to optimize the learned reward. Moreover, the rollout method is highly sensitive to details of the environment the learned reward is evaluated in, which often differ in the deployment environment. To address these problems, we introduce the Equivalent-Policy Invariant Comparison (EPIC) distance to quantify the difference between two reward functions directly, without training a policy. We prove EPIC is invariant on an equivalence class of reward functions that always induce the same optimal policy. Furthermore, we find EPIC can be precisely approximated and is more robust than baselines to the choice of visitation distribution. Finally, we find that the EPIC distance of learned reward functions to the ground-truth reward is predictive of the success of training a policy, even in different transition dynamics.",http://arxiv.org/abs/2006.13900,2020,manuscript,"Gleave, Adam; Dennis, Michael; Legg, Shane; Russell, Stuart; Leike, Jan", "Autonomy and machine learning at the interface of nuclear weapons, computers and people","A new era for our species started in 1945: with the terrifying demonstration of the power of the atom bomb in Hiroshima and Nagasaki, Japan, the potential global catastrophic consequences of human technology could no longer be ignored. 
Within the field of global catastrophic and existential risk, nuclear war is one of the more iconic scenarios, although significant uncertainties remain about its likelihood and potential destructive magnitude. The risk posed to humanity from nuclear weapons is not static. In tandem with geopolitical and cultural changes, technological innovations could have a significant impact on how the risk of the use of nuclear weapons changes over time. Increasing attention has been given in the literature to the impact of digital technologies, and in particular autonomy and machine learning, on nuclear risk. Most of this attention has focused on ‘first-order’ effects: the introduction of technologies into nuclear command-and-control and weapon-delivery systems. This essay focuses instead on higher-order effects: those that stem from the introduction of such technologies into more peripheral systems, with a more indirect (but no less real) effect on nuclear risk. It first describes and categorizes the new threats introduced by these technologies (in section I). It then considers policy responses to address these new threats (section II).",https://www.repository.cam.ac.uk/handle/1810/297703,2019,bookSection,"Avin, Shahar; Amadae, S. M.",The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk Deal or No Deal? End-to-End Learning of Negotiation Dialogues,,http://aclweb.org/anthology/D17-1259,2017,conferencePaper,"Lewis, Mike; Yarats, Denis; Dauphin, Yann; Parikh, Devi; Batra, Dhruv",Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing Strategic Classification is Causal Modeling in Disguise,"Consequential decision-making incentivizes individuals to strategically adapt their behavior to the specifics of the decision rule. While a long line of work has viewed strategic adaptation as gaming and attempted to mitigate its effects, recent work has instead sought to design classifiers that incentivize individuals to improve a desired quality. Key to both accounts is a cost function that dictates which adaptations are rational to undertake. In this work, we develop a causal framework for strategic adaptation. Our causal perspective clearly distinguishes between gaming and improvement and reveals an important obstacle to incentive design. We prove any procedure for designing classifiers that incentivize improvement must inevitably solve a non-trivial causal inference problem. Moreover, we show a similar result holds for designing cost functions that satisfy the requirements of previous work. With the benefit of hindsight, our results show much of the prior work on strategic classification is causal modeling in disguise.",http://arxiv.org/abs/1910.10362,2020,conferencePaper,"Miller, John; Milli, Smitha; Hardt, Moritz",Proceedings of the 37th International Conference on Machine Learning Autonomous Weapons And Coercive Threats,"Increasingly, governments are using artificial intelligence technologies to revolutionize their military capabilities. 
In many ways, these technologies present the potential to transform the conduct of war and, in so doing, to alter the nature of state-to-state interactions.",https://aipulse.org/autonomous-weapons-and-coercive-threats/,2019,blogPost,"Sterbenz, Ciara; Trager, Robert",AI Pulse Monte Carlo model of brain emulation development,,,2014,report,"Sandberg, Anders", Teleporting Universal Intelligent Agents,,http://link.springer.com/10.1007/978-3-319-09274-4_11,2014,bookSection,"Orseau, Laurent",Artificial General Intelligence Reducing long-term risks from malevolent actors,"Summary Dictators who exhibited highly narcissistic, psychopathic, or sadistic traits were involved in some of the greatest catastrophes in human history.  Malevolent individuals in positions of power could negatively affect humanity’s long-term trajectory by, for example, exacerbating international conflict or other broad risk factors. Malevolent humans with access to advanced technology—such as whole brain emulation […]",https://longtermrisk.org/reducing-long-term-risks-from-malevolent-actors/,2020,report,"Althaus, David; Baumann, Tobias", Reframing Superintelligence: Comprehensive AI Services as General Intelligence,,,2019,report,"Drexler, K Eric", Confronting the threat of nuclear winter,,https://linkinghub.elsevier.com/retrieve/pii/S0016328715000403,2015,journalArticle,"Baum, Seth D.",Futures Fine-tuning language models from human preferences,,,2019,manuscript,"Ziegler, Daniel M.; Stiennon, Nisan; Wu, Jeffrey; Brown, Tom B.; Radford, Alec; Amodei, Dario; Christiano, Paul; Irving, Geoffrey", "Exploring AI Safety in Degrees: Generality, Capability and Control","The landscape of AI safety is frequently explored differently by contrasting specialised AI versus general AI (or AGI), by analysing the short-term hazards of systems with limited capabilities against those more long-term risks posed by ‘superintelligence’, and by conceptualising sophisticated ways of bounding control an AI system has over its environment and itself (impact, harm to humans, self-harm, containment, etc.). In this position paper we reconsider these three aspects of AI safety as quantitative factors –generality, capability and control–, suggesting that by defining metrics for these dimensions, AI risks can be characterised and analysed more precisely. As an example, we illustrate how to define these metrics and their values for some simple agents in a toy scenario within a reinforcement learning setting.",,2020,conferencePaper,"Burden, John; Hernandez-Orallo, Jose",Proceedings of the Workshop on Artificial Intelligence Safety (SafeAI 2020) What Do You Think About Machines That Think,,https://www.edge.org/response-detail/26157,2015,magazineArticle,"Russell, Stuart",Edge.org "Causal–explanatory pluralism: How intentions, functions, and mechanisms influence causal ascriptions",,https://linkinghub.elsevier.com/retrieve/pii/S0010028510000253,2010,journalArticle,"Lombrozo, Tania",Cognitive Psychology Reinforcement Learning with Augmented Data,"Learning from visual observations is a fundamental yet challenging problem in Reinforcement Learning (RL). Although algorithmic advances combined with convolutional neural networks have proved to be a recipe for success, current methods are still lacking on two fronts: (a) data-efficiency of learning and (b) generalization to new environments. To this end, we present Reinforcement Learning with Augmented Data (RAD), a simple plug-and-play module that can enhance most RL algorithms. 
We perform the first extensive study of general data augmentations for RL on both pixel-based and state-based inputs, and introduce two new data augmentations - random translate and random amplitude scale. We show that augmentations such as random translate, crop, color jitter, patch cutout, random convolutions, and amplitude scale can enable simple RL algorithms to outperform complex state-of-the-art methods across common benchmarks. RAD sets a new state-of-the-art in terms of data-efficiency and final performance on the DeepMind Control Suite benchmark for pixel-based control as well as OpenAI Gym benchmark for state-based control. We further demonstrate that RAD significantly improves test-time generalization over existing methods on several OpenAI ProcGen benchmarks. Our RAD module and training code are available at https://www.github.com/MishaLaskin/rad.",http://arxiv.org/abs/2004.14990,2020,manuscript,"Laskin, Michael; Lee, Kimin; Stooke, Adam; Pinto, Lerrel; Abbeel, Pieter; Srinivas, Aravind", Law and Policy Responses to Disaster-Induced Financial Distress,"This chapter treats disaster response policies directed at the economic recovery of private households. First, we examine problems of disaster-induced financial distress from a legal and economic perspective. We do this both qualitatively and quantitatively, and focussing on residential loans, using the victims of the 11 March 2011 tsunami as our example. Then, using doctrinal and systematic analysis, we set out the broad array of law and policy solutions tackling disaster-induced debt launched by the Japanese Government. On this basis, we assess the strengths and weaknesses of these measures in terms of their practical adequacy to prevent and mitigate financial hardship and examine them against multiple dimensions of disaster justice. We conclude with suggestions for improving financial disaster recovery by taking a prospective approach, preventing the snowballing of disaster-related losses, which we argue represents an equitable and effective way forward in allocating resources following future mega disasters.",https://doi.org/10.1007/978-981-13-9005-0_4,2019,bookSection,"Weitzdörfer, Julius; Beard, Simon","Governance, Risk and Financial Impact of Mega Disasters: Lessons from Japan" The Eliminativist Approach to Consciousness,"This essay explains my version of an eliminativist approach to understanding consciousness. It suggests that we stop thinking in terms of ""conscious"" and ""unconscious"" and instead look at physical systems for what they are and what they can do. This perspective dissolves some biases in our usual perspective and shows us that the world is […]",https://longtermrisk.org/the-eliminativist-approach-to-consciousness/,2015,blogPost,"Tomasik, Brian",Center on Long-Term Risk Solution of a problem of Leon Henkin,"If Σ is any standard formal system adequate for recursive number theory, a formula (having a certain integer q as its Gödel number) can be constructed which expresses the proposition that the formula with Gödel number q is provable in Σ. Is this formula provable or independent in Σ? [2]. One approach to this problem is discussed by Kreisel in [4]. However, he still leaves open the question whether the formula ( Ex ) ( x, a ), with Gödel-number a, is provable or not. Here ( x, y ) is the number-theoretic predicate which expresses the proposition that x is the number of a formal proof of the formula with Gödel-number y. 
In this note we present a solution of the previous problem with respect to the system Z μ [3] pp. 289–294, and, more generally, with respect to any system whose set of theorems is closed under the rules of inference of the first order predicate calculus, and satisfies the subsequent five conditions, and in which the function ( k, l ) used below is definable. The notation and terminology is in the main that of [3] pp. 306–326, viz. if is a formula of Z μ containing no free variables, whose Gödel number is a, then ({ }) stands for ( Ex ) ( x, a ) (read: the formula with Gödel number a is provable in Z μ ); if is a formula of Z μ containing a free variable, y say, ({ }) stands for ( Ex ) ( x, g ( y )}, where g ( y ) is a recursive function such that for an arbitrary numeral the value of g ( ) is the Gödel number of the formula obtained from by substituting for y in throughout. We shall, however, depart trivially from [3] in writing ( ), where is an arbitrary numeral, for ( Ex ) { x , ).",https://www.cambridge.org/core/product/identifier/S0022481200096511/type/journal_article,1955,journalArticle,"Löb, M. H.",Journal of Symbolic Logic Stovepiping and Malicious Software: A Critical Review of AGI Containment,"Awareness of the possible impacts associated with artificial intelligence has risen in proportion to progress in the field. While there are tremendous benefits to society, many argue that there are just as many, if not more, concerns related to advanced forms of artificial intelligence. Accordingly, research into methods to develop artificial intelligence safely is increasingly important. In this paper, we provide an overview of one such safety paradigm: containment with a critical lens aimed toward generative adversarial networks and potentially malicious artificial intelligence. Additionally, we illuminate the potential for a developmental blindspot in the stovepiping of containment mechanisms.",http://arxiv.org/abs/1811.03653,2018,manuscript,"Pittman, Jason M.; Espinoza, Jesus P.; Crosby, Courtney Soboleski", Ambitious vs. narrow value learning,"(Re)Posted as part of the AI Alignment Forum sequence on Value Learning. Rohin's note: The definition of narrow value learning in the previous post focused on the fact that the resulting behavior is limited to some domain. The definition in this post focuses on learning instrumental goals and values. While the definitions are different, I have used the same term for both because I believe that they are both pointing at the same underlying concept. (I do not know if Paul agrees.) I'm including this post to give a different perspective on what I mean by narrow value learning, before delving into conceptual ideas within narrow value learning. -------------------------------------------------------------------------------- Suppose I’m trying to build an AI system that “learns what I want” and helps me get it. I think that people sometimes use different interpretations of this goal. At two extremes of a spectrum of possible interpretations: * The AI learns my preferences over (very) long-term outcomes. If I were to die tomorrow, it could continue pursuing my goals without me; if humanity were to disappear tomorrow, it could rebuild the kind of civilization we would want; etc. The AI might pursue radically different subgoals than I would on the scale of months and years, if it thinks that those subgoals better achieve what I really want. * The AI learns the narrower subgoals and instrumental values I am pursuing. 
It learns that I am trying to schedule an appointment for Tuesday and that I want to avoid inconveniencing anyone, or that I am trying to fix a particular bug without introducing new problems, etc. It does not make any effort to pursue wildly different short-term goals than I would in order to better realize my long-term values, though it may help me correct some errors that I would be able to recognize as such. I think that many researchers interested in AI safety per se mostly think about the former. I think that research",https://www.alignmentforum.org/posts/SvuLhtREMy8wRBzpC/ambitious-vs-narrow-value-learning,2019,blogPost,"Christiano, Paul",AI Alignment Forum Scaling shared model governance via model splitting,"Currently the only techniques for sharing governance of a deep learning model are homomorphic encryption and secure multiparty computation. Unfortunately, neither of these techniques is applicable to the training of large neural networks due to their large computational and communication overheads. As a scalable technique for shared model governance, we propose splitting a deep learning model between multiple parties. This paper empirically investigates the security guarantee of this technique, which is introduced as the problem of model completion: Given the entire training data set or an environment simulator, and a subset of the parameters of a trained deep learning model, how much training is required to recover the model's original performance? We define a metric for evaluating the hardness of the model completion problem and study it empirically in both supervised learning on ImageNet and reinforcement learning on Atari and DeepMind Lab. Our experiments show that (1) the model completion problem is harder in reinforcement learning than in supervised learning because of the unavailability of the trained agent's trajectories, and (2) its hardness depends not primarily on the number of parameters of the missing part, but more so on their type and location. Our results suggest that model splitting might be a feasible technique for shared model governance in some settings where training is very expensive.",http://arxiv.org/abs/1812.05979,2018,manuscript,"Martic, Miljan; Leike, Jan; Trask, Andrew; Hessel, Matteo; Legg, Shane; Kohli, Pushmeet", Well-being and enhancement,,,2011,journalArticle,"Savulescu, Julian; Sandberg, Anders; Kahane, Guy",Enhancing human capacities Openness Norms in AGI Development,"1. INTRODUCTION This post outlines two models from the social epistemology of science explaining the emergence of a particular openness norm within the sciences, and then looks at how such models can be utilised to understand research groups trying to develop AGI. In the rest of the introduction, I will try to provide some motivation for this post. Sections 2 & 3 will briefly outline the two models I'm looking at. Section 4 more directly tries to interpret such models in the context of AGI development. Section 5 concludes. The social epistemology of science is an interdisciplinary subfield at the intersection of philosophy and economics, which utilises formal models to understand the incentive structure of science. Here, I focus on two models from this area which try to explain the emergence of one particular openness norm: the so-called ‘communist norm’ in scientific research. The communist norm is a norm to share all ‘substantive findings’ with the scientific community. 
The existence of this norm seems to be taken for granted in this literature, although the best piece of evidence I can find for its existence comes from Louis et. al (2001), who find, in a sample of nearly 2,000 geneticists, that 91% agree that one should share all of one's relevant data. I nevertheless take it for granted in this post. I wanted to see whether understanding the emergence of the communist norm in science could be important for understanding the development of AGI. In many ways, one might think the incentive structures around the development of AGI (will, or does) parallel the incentive structures of academic science. Thus, one might think that looking at the incentive structures behind scientific research are a good starting point for looking at the incentive structures surrounding the development of AGI. As the communist norm emerged in science, one can imagine the emergence of a similar ‘communist norm’ across research groups involved in AGI development, where research g",https://www.alignmentforum.org/posts/RvrTZ3qKWpg9aiFqZ/openness-norms-in-agi-development,2020,blogPost,Sublation,AI Alignment Forum Rationality and Intelligence: A Brief Update,"The long-term goal of AI is the creation and understanding of intelligence. This requires a notion of intelligence that is precise enough to allow the cumulative development of robust systems and general results. The concept of rational agency has long been considered a leading candidate to fulfill this role. This paper, which updates a much earlier version (Russell, 1997), reviews the sequence of conceptual shifts leading to a different candidate, bounded optimality, that is closer to our informal conception of intelligence and reduces the gap between theory and practice. Some promising recent developments are also described.",http://link.springer.com/10.1007/978-3-319-26485-1_2,2016,bookSection,"Russell, Stuart",Fundamental Issues of Artificial Intelligence Adversarial Training Methods for Semi-Supervised Text Classification,"Adversarial training provides a means of regularizing supervised learning algorithms while virtual adversarial training is able to extend supervised learning algorithms to the semi-supervised setting. However, both methods require making small perturbations to numerous entries of the input vector, which is inappropriate for sparse high-dimensional inputs such as one-hot word representations. We extend adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings in a recurrent neural network rather than to the original input itself. The proposed method achieves state of the art results on multiple benchmark semi-supervised and purely supervised tasks. We provide visualizations and analysis showing that the learned word embeddings have improved in quality and that while training, the model is less prone to overfitting.",http://arxiv.org/abs/1605.07725,2017,conferencePaper,"Miyato, Takeru; Dai, Andrew M.; Goodfellow, Ian","arXiv:1605.07725 [cs, stat]" "Climate Justice: Integrating Economics and Philosophy, Ravi Kanbur and Henry Shue (editors). Oxford University Press, 2018, 288 pages.",,https://www.cambridge.org/core/product/identifier/S026626711900018X/type/journal_article,2019,journalArticle,"Beard, Simon",Economics and Philosophy Fixed-point solutions to the regress problem in normative uncertainty,"When we are faced with a choice among acts, but are uncertain about the true state of the world, we may be uncertain about the acts’ “choiceworthiness”. 
Decision theories guide our choice by making normative claims about how we should respond to this uncertainty. If we are unsure which decision theory is correct, however, we may remain unsure of what we ought to do. Given this decision-theoretic uncertainty, meta-theories attempt to resolve the conflicts between our decision theories...but we may be unsure which meta-theory is correct as well. This reasoning can launch a regress of ever-higher-order uncertainty, which may leave one forever uncertain about what one ought to do. There is, fortunately, a class of circumstances under which this regress is not a problem. If one holds a cardinal understanding of subjective choiceworthiness, and accepts certain other criteria (which are too weak to specify any particular decision theory), one’s hierarchy of metanormative uncertainty ultimately converges to precise definitions of “subjective choiceworthiness” for any finite set of acts. If one allows the metanormative regress to extend to the transfinite ordinals, the convergence criteria can be weakened further. Finally, the structure of these results applies straightforwardly not just to decision-theoretic uncertainty, but also to other varieties of normative uncertainty, such as moral uncertainty.",https://doi.org/10.1007/s11229-019-02098-9,2019,journalArticle,"Trammell, Philip",Synthese "Delusion, Survival, and Intelligent Agents","This paper considers the consequences of endowing an intelligent agent with the ability to modify its own code. The intelligent agent is patterned closely after AIXI with these specific assumptions: 1) The agent is allowed to arbitrarily modify its own inputs if it so chooses; 2) The agent’s code is a part of the environment and may be read and written by the environment. The first of these we call the “delusion box”; the second we call “mortality”. Within this framework, we discuss and compare four very different kinds of agents, specifically: reinforcementlearning, goal-seeking, prediction-seeking, and knowledge-seeking agents. Our main results are that: 1) The reinforcement-learning agent under reasonable circumstances behaves exactly like an agent whose sole task is to survive (to preserve the integrity of its code); and 2) Only the knowledge-seeking agent behaves completely as expected.",http://link.springer.com/10.1007/978-3-642-22887-2_2,2011,bookSection,"Ring, Mark; Orseau, Laurent",Artificial General Intelligence An empirical investigation of the challenges of real-world reinforcement learning,"Reinforcement learning (RL) has proven its worth in a series of artificial domains, and is beginning to show some successes in real-world scenarios. However, much of the research advances in RL are hard to leverage in real-world systems due to a series of assumptions that are rarely satisfied in practice. In this work, we identify and formalize a series of independent challenges that embody the difficulties that must be addressed for RL to be commonly deployed in real-world systems. For each challenge, we define it formally in the context of a Markov Decision Process, analyze the effects of the challenge on state-of-the-art learning algorithms, and present some existing attempts at tackling it. We believe that an approach that addresses our set of proposed challenges would be readily deployable in a large number of real world problems. 
Our proposed challenges are implemented in a suite of continuous control environments called realworldrl-suite which we propose as an open-source benchmark.",http://arxiv.org/abs/2003.11881,2020,manuscript,"Dulac-Arnold, Gabriel; Levine, Nir; Mankowitz, Daniel J.; Li, Jerry; Paduraru, Cosmin; Gowal, Sven; Hester, Todd", Reward learning from human preferences and demonstrations in Atari,"To solve complex real-world problems with reinforcement learning, we cannot rely on manually specified reward functions. Instead, we can have humans communicate an objective to the agent directly. In this work, we combine two approaches to learning from human feedback: expert demonstrations and trajectory preferences. We train a deep neural network to model the reward function and use its predicted reward to train a DQN-based deep reinforcement learning agent on 9 Atari games. Our approach beats the imitation learning baseline in 7 games and achieves strictly superhuman performance on 2 games without using game rewards. Additionally, we investigate the goodness of fit of the reward model, present some reward hacking problems, and study the effects of noise in the human labels.",http://arxiv.org/abs/1811.06521,2018,conferencePaper,"Ibarz, Borja; Leike, Jan; Pohlen, Tobias; Irving, Geoffrey; Legg, Shane; Amodei, Dario","arXiv:1811.06521 [cs, stat]" Maintaining the AI Chip Competitive Advantage of the United States and its Allies,,https://cset.georgetown.edu/wp-content/uploads/CSET-Maintaining-the-AI-Chip-Competitive-Advantage-of-the-United-States-and-its-Allies-20191206.pdf,2019,report,"Khan, Saif", Establishing Appropriate Trust via Critical States,"In order to effectively interact with or supervise a robot, humans need to have an accurate mental model of its capabilities and how it acts. Learned neural network policies make that particularly challenging. We propose an approach for helping end-users build a mental model of such policies. Our key observation is that for most tasks, the essence of the policy is captured in a few critical states: states in which it is very important to take a certain action. Our user studies show that if the robot shows a human what its understanding of the task's critical states is, then the human can make a more informed decision about whether to deploy the policy, and if she does deploy it, when she needs to take control from it at execution time.",http://arxiv.org/abs/1810.08174,2018,conferencePaper,"Huang, Sandy H.; Bhatia, Kush; Abbeel, Pieter; Dragan, Anca D.",2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Should we campaign against sex robots?,,,2017,journalArticle,"Danaher, John; Earp, Brian D.; Sandberg, Anders",Robot Sex: Social and Ethical Implications "Preparing for ""The Talk"" with AI projects","Epistemic status: Written for Blog Post Day III. I don't get to talk to people ""in the know"" much, so maybe this post is obsolete in some way. I think that at some point at least one AI project will face an important choice between deploying and/or enlarging a powerful AI system, or holding back and doing more AI safety research. (Currently, AI projects face choices like this all the time, except they aren't important in the sense I mean it, because the AI isn't potentially capable of escaping and taking over large parts of the world, or doing something similarly bad.) Moreover, I think that when this choice is made, most people in the relevant conversation will be insufficiently concerned/knowledgeable about AI risk.
Perhaps they will think: ""This new AI design is different from the classic models, so the classic worries don't arise."" Or: ""Fear not, I did [insert amateur safety strategy]."" I think it would be very valuable for these conversations to end with ""OK, we'll throttle back our deployment strategy for a bit so we can study the risks more carefully,"" rather than with ""Nah, we're probably fine, let's push ahead."" This buys us time. Say it buys us a month. A month of extra time right after scary-powerful AI is created is worth a lot, because we'll have more serious smart people paying attention, and we'll have more evidence about what AI is like. I'd guess that a month of extra time in a situation like this would increase the total amount of quality-weighted AI safety and AI policy work by 10%. That's huge. -------------------------------------------------------------------------------- One way to prepare for these conversations is to raise awareness about AI risk and technical AI safety problems, so that it's more likely that more people in these conversations are more informed about the risks. I think this is great. However, there's another way to prepare, which I think is tractable and currently neglected: 1. Identify some people who might be part",https://www.alignmentforum.org/posts/QSBgGv8byWMjmaGE5/preparing-for-the-talk-with-ai-projects,2020,blogPost,"Kokotajlo, Daniel",AI Alignment Forum A Coverage-Based Utility Model for Identifying Unknown Unknowns,"A classifier’s low confidence in prediction is often indicative of whether its prediction will be wrong; in this case, inputs are called known unknowns. In contrast, unknown unknowns (UUs) are inputs on which a classifier makes a high confidence mistake. Identifying UUs is especially important in safety-critical domains like medicine (diagnosis) and law (recidivism prediction). Previous work by Lakkaraju et al. (2017) on identifying unknown unknowns assumes that the utility of each revealed UU is independent of the others, rather than considering the set holistically. While this assumption yields an efficient discovery algorithm, we argue that it produces an incomplete understanding of the classifier’s limitations. In response, this paper proposes a new class of utility models that rewards how well the discovered UUs cover (or “explain”) a sample distribution of expected queries. Although choosing an optimal cover is intractable, even if the UUs were known, our utility model is monotone submodular, affording a greedy discovery strategy. Experimental results on four datasets show that our method outperforms bandit-based approaches and achieves within 60.9% utility of an omniscient, tractable upper bound.",,2018,conferencePaper,"Bansal, Gagan; Weld, Daniel S", The Only Ethical Argument for Positive Delta,,https://globalprioritiesinstitute.org/wp-content/uploads/2019/Mogensen_The_only_ethical_argument_for_positive_delta.pdf,2019,manuscript,"Mogensen, Andreas", Avoiding Imposters and Delinquents: Adversarial Crowdsourcing and Peer Prediction,"We consider a crowdsourcing model in which $n$ workers are asked to rate the quality of $n$ items previously generated by other workers. An unknown set of $\alpha n$ workers generate reliable ratings, while the remaining workers may behave arbitrarily and possibly adversarially. The manager of the experiment can also manually evaluate the quality of a small number of items, and wishes to curate together almost all of the high-quality items with at most an $\epsilon$ fraction of low-quality items. 
Perhaps surprisingly, we show that this is possible with an amount of work required of the manager, and each worker, that does not scale with $n$: the dataset can be curated with $\tilde{O}\Big(\frac{1}{\beta\alpha^3\epsilon^4}\Big)$ ratings per worker, and $\tilde{O}\Big(\frac{1}{\beta\epsilon^2}\Big)$ ratings by the manager, where $\beta$ is the fraction of high-quality items. Our results extend to the more general setting of peer prediction, including peer grading in online classrooms.",http://arxiv.org/abs/1606.05374,2016,conferencePaper,"Steinhardt, Jacob; Valiant, Gregory; Charikar, Moses",Advances in Neural Information Processing Systems 29 (NIPS 2016) Overrepresentation of extreme events in decision making reflects rational use of cognitive resources.,,http://doi.apa.org/getdoi.cfm?doi=10.1037/rev0000074,2018,journalArticle,"Lieder, Falk; Griffiths, Thomas L.; Hsu, Ming",Psychological Review Managing Loss of Control as Many Militaries Pursue Technological Superiority,,,2018,journalArticle,"Danzig, Richard",Arms Control Today Revisiting Unreasonable Effectiveness of Data in Deep Learning Era,"The success of deep learning in vision can be attributed to: (a) models with high capacity; (b) increased computational power; and (c) availability of large-scale labeled data. Since 2012, there have been significant advances in representation capabilities of the models and computational capabilities of GPUs. But the size of the biggest dataset has surprisingly remained constant. What will happen if we increase the dataset size by 10x or 100x? This paper takes a step towards clearing the clouds of mystery surrounding the relationship between `enormous data' and visual deep learning. By exploiting the JFT-300M dataset which has more than 375M noisy labels for 300M images, we investigate how the performance of current vision tasks would change if this data was used for representation learning. Our paper delivers some surprising (and some expected) findings. First, we find that the performance on vision tasks increases logarithmically based on volume of training data size. Second, we show that representation learning (or pre-training) still holds a lot of promise. One can improve performance on many vision tasks by just training a better base model. Finally, as expected, we present new state-of-the-art results for different vision tasks including image classification, object detection, semantic segmentation and human pose estimation. Our sincere hope is that this inspires vision community to not undervalue the data and develop collective efforts in building larger datasets.",http://arxiv.org/abs/1707.02968,2017,conferencePaper,"Sun, Chen; Shrivastava, Abhinav; Singh, Saurabh; Gupta, Abhinav",Proceedings of the IEEE International Conference on Computer Vision (ICCV) "The Politicization of Climate Change and Polarization in the American Public's Views of Global Warming, 2001–2010",,https://www.tandfonline.com/doi/full/10.1111/j.1533-8525.2011.01198.x,2011,journalArticle,"McCright, Aaron M.; Dunlap, Riley E.",The Sociological Quarterly Functional Decision Theory: A New Theory of Instrumental Rationality,"This paper describes and motivates a new decision theory known as functional decision theory (FDT), as distinct from causal decision theory and evidential decision theory. 
Functional decision theorists hold that the normative principle for action is to treat one's decision as the output of a fixed mathematical function that answers the question, ""Which output of this very function would yield the best outcome?"" Adhering to this principle delivers a number of benefits, including the ability to maximize wealth in an array of traditional decision-theoretic and game-theoretic problems where CDT and EDT perform poorly. Using one simple and coherent decision rule, functional decision theorists (for example) achieve more utility than CDT on Newcomb's problem, more utility than EDT on the smoking lesion problem, and more utility than both in Parfit's hitchhiker problem. In this paper, we define FDT, explore its prescriptions in a number of different decision problems, compare it to CDT and EDT, and give philosophical justifications for FDT as a normative theory of decision-making.",http://arxiv.org/abs/1710.05060,2018,manuscript,"Yudkowsky, Eliezer; Soares, Nate", "Some Comments on Stuart Armstrong's ""Research Agenda v0.9""","Subject matter here. I: Intro I am extremely sympathetic to the program of AI safety by understanding value learning. Because of that sympathy, I have more thoughts than average prompted by Stuart Armstrong's post along those same lines. Stuart's post mostly deals with ""partial preferences,"" which are like simple statements of binary preference (A is better than B), but associated with a context - supposedly the ""human's model"" the human was using when they exhibited or stated that preference. Then the post says that you should sort these partial preferences according to meta-levels and aggregate them from the top down, updating your procedure after you finish each meta-level, eventually producing a utility function over world-histories. Broadly, I'd say that my opinion is sort of like the bitter lesson. The bitter lesson in, say, image recognition, is that people wanted to do image recognition with a bunch of human-designed features and formal reasoning and human-understandable internal moving parts, and they tried that for a long time, and what worked was using way bigger models, way more computing power, much fewer human-understandable internal parts, and almost no human-designed features. I like Stuart's outline more than most value learning proposals. But it still strikes me as primarily a list of human-designed features and human-understandable internal moving parts. We might be better off throwing away some of the details and abstracting in a way that allows for some of these problems to be solved by big models and computing power. It's like the just-so story about ResNets, which is that they're a fix to humans thinking the insides of neural nets should look too much like human logic[^1]. I think speculating about the human-sized logical relationships between speculative parts inside the AI is easier but less useful than speculating about the algorithm that will connect your inputs to your outputs with a big model and lots of computing power, which may",https://www.lesswrong.com/posts/GHNokcgERpLJwJnLW/some-comments-on-stuart-armstrong-s-research-agenda-v0-9,2019,blogPost,"Steiner, Charlie",LessWrong Tranquilism,"What makes an experience valuable or disvaluable? In contrast to hedonism, which holds that pleasure is what is good and pain is what is bad, tranquilism is an “absence of desire” theory that counts pleasure as instrumentally valuable only. 
According to tranquilism, what matters is whether an experience is free from bothersome components. States of contentment such as flow or meditative tranquility also qualify.",https://longtermrisk.org/tranquilism/,2017,blogPost,"Gloor, Lukas",Center on Long-Term Risk Program Obfuscation with Leaky Hardware,,http://link.springer.com/10.1007/978-3-642-25385-0_39,2011,bookSection,"Bitansky, Nir; Canetti, Ran; Goldwasser, Shafi; Halevi, Shai; Kalai, Yael Tauman; Rothblum, Guy N.",Advances in Cryptology – ASIACRYPT 2011 Weak HCH accesses EXP,"This post is a follow-up to my “Alignment proposals and complexity classes” post. Thanks to Sam Eisenstat for helping with part of the proof here. Previously, I proved that imitative amplification with weak HCH, approval-based amplification, and recursive reward modeling access PSPACE while AI safety via market making accesses EXP. At the time, I wasn't sure whether my market making proof would generalize to the others, so I just published it with the PSPACE proofs instead. However, I have since become convinced that the proof does generalize—and that it generalizes for all of the proposals I mentioned—such that imitative amplification with weak HCH, approval-based amplification, and recursive reward modeling all actually access EXP. This post attempts to prove that. UPDATED LIST OF PROPOSALS BY COMPLEXITY CLASS P: Imitation learning (trivial) PSPACE: AI safety via debate (proof) EXP: AI safety via market making (proof), Imitative amplification with weak HCH (proof below), Approval-based amplification (proof below), Recursive reward modeling (proof below) NEXP: Debate with cross-examination (proof) R: Imitative amplification with strong HCH (proof), AI safety via market making with pointers (proof) PROOFS IMITATIVE AMPLIFICATION WITH WEAK HCH ACCESSES EXP The proof here is similar in structure to my previous proof that weak HCH accesses PSPACE, so I'll only explain where this proof differs from that one. First, since $l \in \text{EXP}$, we know that for any $x \in X$, $T_l(x)$ halts in $O(2^{\text{poly}(n)})$ steps where $n=|x|$. Thus, we can construct a function $f_l(n)=c_1+c_2e^{c_3n^{c_4}}$ such that for all $x \in X$, $T_l(x)$ halts in less than or equal to $f_l(x)$ steps by picking $c_3,c_4$ large enough that they dominate all other terms in the polynomial for all $n \in \mathbb{N}$. Note that $f_l$ is then computable in time polynomial in $n$. Second, let H's new strategy be as follows: 1. Given $p$, let $s,x=M(p:f(|x|))$. Then, return accept/reject based on whether $s$ is an accept or reject state (it will always be one or the oth",https://www.alignmentforum.org/posts/CtGH3yEoo4mY2taxe/weak-hch-accesses-exp,2020,blogPost,"Hubinger, Evan",AI Alignment Forum Do You Want Your Autonomous Car to Drive Like You?,"With progress in enabling autonomous cars to drive safely on the road, it is time to start asking how they should be driving. A common answer is that they should be adopting their users' driving style. This makes the assumption that users want their autonomous cars to drive like they drive - aggressive drivers want aggressive cars, defensive drivers want defensive cars. In this paper, we put that assumption to the test. We find that users tend to prefer a significantly more defensive driving style than their own. Interestingly, they prefer the style they think is their own, even though their actual driving style tends to be more aggressive.
We also find that preferences do depend on the specific driving scenario, opening the door for new ways of learning driving style preference.",,2017,conferencePaper,"Basu, C.; Yang, Q.; Hungerman, D.; Sinahal, M.; Draqan, A. D.",2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning,"Allowing humans to interactively train artificial agents to understand language instructions is desirable for both practical and scientific reasons, but given the poor data efficiency of the current learning methods, this goal may require substantial research efforts. Here, we introduce the BabyAI research platform to support investigations towards including humans in the loop for grounded language learning. The BabyAI platform comprises an extensible suite of 19 levels of increasing difficulty. The levels gradually lead the agent towards acquiring a combinatorially rich synthetic language which is a proper subset of English. The platform also provides a heuristic expert agent for the purpose of simulating a human teacher. We report baseline results and estimate the amount of human involvement that would be required to train a neural network-based agent on some of the BabyAI levels. We put forward strong evidence that current deep learning methods are not yet sufficiently sample efficient when it comes to learning a language with compositional properties.",http://arxiv.org/abs/1810.08272,2019,manuscript,"Chevalier-Boisvert, Maxime; Bahdanau, Dzmitry; Lahlou, Salem; Willems, Lucas; Saharia, Chitwan; Nguyen, Thien Huu; Bengio, Yoshua", Multi-task Maximum Entropy Inverse Reinforcement Learning,"Multi-task Inverse Reinforcement Learning (IRL) is the problem of inferring multiple reward functions from expert demonstrations. Prior work, built on Bayesian IRL, is unable to scale to complex environments due to computational constraints. This paper contributes a formulation of multi-task IRL in the more computationally efficient Maximum Causal Entropy (MCE) IRL framework. Experiments show our approach can perform one-shot imitation learning in a gridworld environment that single-task IRL algorithms need hundreds of demonstrations to solve. We outline preliminary work using meta-learning to extend our method to the function approximator setting of modern MCE IRL algorithms. Evaluating on multi-task variants of common simulated robotics benchmarks, we discover serious limitations of these IRL algorithms, and conclude with suggestions for further work.",http://arxiv.org/abs/1805.08882,2018,manuscript,"Gleave, Adam; Habryka, Oliver", Outrunning the Law: Extraterrestrial Liberty and Universal Colonisation,,,2015,bookSection,"Armstrong, Stuart; Sandberg, Anders; ÓhÉigeartaigh, Seán",The Meaning of Liberty Beyond Earth Causal decision theory,,http://www.tandfonline.com/doi/abs/10.1080/00048408112340011,1981,journalArticle,"Lewis, David",Australasian Journal of Philosophy The Importance of Wild-Animal Suffering,"Wild animals are vastly more numerous than animals on factory farms, in laboratories, or kept as pets. Most of these animals endure intense suffering during their lives, such as from disease, hunger, cold, injury, and chronic fear of predators. Many wild animals give birth to tens or hundreds of offspring at a time, most of which die young, often in painful ways. This suggests that suffering plausibly dominates happiness in nature. Humans are not helpless to reduce wild-animal suffering. 
Indeed, humans already influence ecosystems in substantial ways, so the question is often not whether to intervene but how to intervene. Because ecology is so complex, we should study carefully how to reduce wild-animal suffering, giving due consideration to unintended long-run consequences. We should also promote concern for wild animals and challenge environmentalist assumptions among activists, academics, and other sympathetic groups. Finally, we should ensure that our descendants think twice before spreading ecosystems to areas where they do not yet exist.",,2015,journalArticle,"Tomasik, Brian",Animal Suffering Forecasting AI Progress: A Research Agenda,"Forecasting AI progress is essential to reducing uncertainty in order to appropriately plan for research efforts on AI safety and AI governance. While this is generally considered to be an important topic, little work has been conducted on it and there is no published document that gives an objective overview of the field. Moreover, the field is very diverse and there is no published consensus regarding its direction. This paper describes the development of a research agenda for forecasting AI progress which utilized the Delphi technique to elicit and aggregate experts' opinions on what questions and methods to prioritize. The results of the Delphi are presented; the remainder of the paper follows the structure of these results, briefly reviewing relevant literature and suggesting future work for each topic. Experts indicated that a wide variety of methods should be considered for forecasting AI progress. Moreover, experts identified salient questions that were both general and completely unique to the problem of forecasting AI progress. Some of the highest priority topics include the validation of (partially unresolved) forecasts, how to make forecasting action-guiding and the quality of different performance metrics. While statistical methods seem more promising, there is also recognition that supplementing judgmental techniques can be quite beneficial.",http://arxiv.org/abs/2008.01848,2020,manuscript,"Gruetzemacher, Ross; Dorner, Florian; Bernaola-Alvarez, Niko; Giattino, Charlie; Manheim, David", Electronic media use and sleep in school-aged children and adolescents: A review,,https://linkinghub.elsevier.com/retrieve/pii/S1389945710001632,2010,journalArticle,"Cain, Neralie; Gradisar, Michael",Sleep Medicine Hindsight Experience Replay,"Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments.
We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.",https://papers.nips.cc/paper/2017/hash/453fadbd8a1a3af50a9df4df899537b5-Abstract.html,2018,conferencePaper,"Andrychowicz, Marcin; Wolski, Filip; Ray, Alex; Schneider, Jonas; Fong, Rachel; Welinder, Peter; McGrew, Bob; Tobin, Josh; Abbeel, Pieter; Zaremba, Wojciech",Advances in Neural Information Processing Systems 30 (NIPS 2017) Automating reasoning about the future at Ought,We introduce judgmental forecasting as a focus area for Ought,https://ought.org/updates/2020-11-09-forecasting,2020,blogPost,"Byun, Jungwon; Stuhlmüller, Andreas",Ought PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications,"PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.",https://arxiv.org/abs/1701.05517v1,2017,manuscript,"Salimans, Tim; Karpathy, Andrej; Chen, Xi; Kingma, Diederik P.", Learning from Demonstration in the Wild,"Learning from demonstration (LfD) is useful in settings where hand-coding behaviour or a reward function is impractical. It has succeeded in a wide range of problems but typically relies on manually generated demonstrations or specially deployed sensors and has not generally been able to leverage the copious demonstrations available in the wild: those that capture behaviours that were occurring anyway using sensors that were already deployed for another purpose, e.g., traffic camera footage capturing demonstrations of natural behaviour of vehicles, cyclists, and pedestrians. We propose Video to Behaviour (ViBe), a new approach to learn models of behaviour from unlabelled raw video data of a traffic scene collected from a single, monocular, initially uncalibrated camera with ordinary resolution. Our approach calibrates the camera, detects relevant objects, tracks them through time, and uses the resulting trajectories to perform LfD, yielding models of naturalistic behaviour. We apply ViBe to raw videos of a traffic intersection and show that it can learn purely from videos, without additional expert knowledge.",http://arxiv.org/abs/1811.03516,2019,conferencePaper,"Behbahani, Feryal; Shiarlis, Kyriacos; Chen, Xi; Kurin, Vitaly; Kasewa, Sudhanshu; Stirbu, Ciprian; Gomes, João; Paul, Supratik; Oliehoek, Frans A.; Messias, João; Whiteson, Shimon",2019 International Conference on Robotics and Automation (ICRA) GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding,"Neural network scaling has been critical for improving the model quality in many real-world machine learning applications with vast amounts of training data and compute. 
Although this trend of scaling is affirmed to be a sure-fire approach for better model quality, there are challenges on the path such as the computation cost, ease of programming, and efficient implementation on parallel devices. GShard is a module composed of a set of lightweight annotation APIs and an extension to the XLA compiler. It provides an elegant way to express a wide range of parallel computation patterns with minimal changes to the existing model code. GShard enabled us to scale up multilingual neural machine translation Transformer model with Sparsely-Gated Mixture-of-Experts beyond 600 billion parameters using automatic sharding. We demonstrate that such a giant model can efficiently be trained on 2048 TPU v3 accelerators in 4 days to achieve far superior quality for translation from 100 languages to English compared to the prior art.",http://arxiv.org/abs/2006.16668,2020,manuscript,"Lepikhin, Dmitry; Lee, HyoukJoong; Xu, Yuanzhong; Chen, Dehao; Firat, Orhan; Huang, Yanping; Krikun, Maxim; Shazeer, Noam; Chen, Zhifeng", Concerning measures in first order calculi,,http://link.springer.com/10.1007/BF02759729,1964,journalArticle,"Gaifman, Haim",Israel Journal of Mathematics What's up with Arbital?,"This post is for all the people who have been following Arbital's progress since 2015 via whispers, rumors, and clairvoyant divination. That is to say: we didn't do a very good job of communicating on our part. I hope this posts corrects some of that. The top question on your mind is probably: ""Man, I was promised that Arbital will solve X! Why hasn't it solved X already?"" Where X could be intuitive explanations, online debate, all LessWrong problems, AGI, or just cancer. Well, we did try to solve the first two and it didn't work. Math explanations didn't work because we couldn't find enough people who would spend the time to write good math explanations. (That said, we did end up with some decent posts on abstract algebra. Thank you to everyone who contributed!) Debates didn't work because... well, it's a very complicated problem. There was also some disagreement within the team about the best approach, and we ended up moving too slowly. SO WHAT NOW? You are welcome to use Arbital in its current version.It's mostly stable, though a little slow sometimes. It has a few features some might find very helpful for their type of content. Eliezer is still writing AI Alignment content on it, and he heavily relies on the specific Arbital features, so it's pretty certain that the platform is not going away. In fact, if the venture fails completely, it's likely MIRI will adopt Arbital for their personal use. I'm starting work on Arbital 2.0. It's going to be a (micro-)blogging platform. (If you are a serious blogger / Tumblr user, let me know; I'd love to ask you some questions!) I'm not trying to solve online debates, build LW 2.0, or cure cancer. It's just going to be a damn good blogging platform. If it goes well, then at some point I'd love to revisit the Arbital dream. I'm happy to answer any and all questions in the comments.",https://www.lesswrong.com/posts/kqikgu92L7oQFNji4/what-s-up-with-arbital,2017,blogPost,"Andreev, Alexei",LessWrong Aligning AI With Shared Human Values,"We show how to assess a language model's knowledge of basic concepts of morality. We introduce the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality. Models predict widespread moral judgments about diverse text scenarios. 
This requires connecting physical and social world knowledge to value judgements, a capability that may enable us to filter out needlessly inflammatory chatbot outputs or eventually regularize open-ended reinforcement learning agents. With the ETHICS dataset, we find that current language models have a promising but incomplete understanding of basic ethical knowledge. Our work shows that progress can be made on machine ethics today, and it provides a steppingstone toward AI that is aligned with human values.",http://arxiv.org/abs/2008.02275,2020,manuscript,"Hendrycks, Dan; Burns, Collin; Basart, Steven; Critch, Andrew; Li, Jerry; Song, Dawn; Steinhardt, Jacob", Theory of Minds: Understanding Behavior in Groups Through Inverse Planning,"Human social behavior is structured by relationships. We form teams, groups, tribes, and alliances at all scales of human life. These structures guide multi-agent cooperation and competition, but when we observe others these underlying relationships are typically unobservable and hence must be inferred. Humans make these inferences intuitively and flexibly, often making rapid generalizations about the latent relationships that underlie behavior from just sparse and noisy observations. Rapid and accurate inferences are important for determining who to cooperate with, who to compete with, and how to cooperate in order to compete. Towards the goal of building machine-learning algorithms with human-like social intelligence, we develop a generative model of multi-agent action understanding based on a novel representation for these latent relationships called Composable Team Hierarchies (CTH). This representation is grounded in the formalism of stochastic games and multi-agent reinforcement learning. We use CTH as a target for Bayesian inference yielding a new algorithm for understanding behavior in groups that can both infer hidden relationships as well as predict future actions for multiple agents interacting together. Our algorithm rapidly recovers an underlying causal model of how agents relate in spatial stochastic games from just a few observations. The patterns of inference made by this algorithm closely correspond with human judgments and the algorithm makes the same rapid generalizations that people do.",http://arxiv.org/abs/1901.06085,2019,conferencePaper,"Shum, Michael; Kleiman-Weiner, Max; Littman, Michael L.; Tenenbaum, Joshua B.",Proceedings of the AAAI Conference on Artificial Intelligence DeepType: Multilingual Entity Linking by Neural Type System Evolution,"The wealth of structured (e.g. Wikidata) and unstructured data about the world available today presents an incredible opportunity for tomorrow's Artificial Intelligence. So far, integration of these two different modalities is a difficult process, involving many decisions concerning how best to represent the information so that it will be captured or useful, and hand-labeling large amounts of data. DeepType overcomes this challenge by explicitly integrating symbolic information into the reasoning process of a neural network with a type system. First we construct a type system, and second, we use it to constrain the outputs of a neural network to respect the symbolic structure. We achieve this by reformulating the design problem into a mixed integer problem: create a type system and subsequently train a neural network with it. 
In this reformulation discrete variables select which parent-child relations from an ontology are types within the type system, while continuous variables control a classifier fit to the type system. The original problem cannot be solved exactly, so we propose a 2-step algorithm: 1) heuristic search or stochastic optimization over discrete variables that define a type system informed by an Oracle and a Learnability heuristic, 2) gradient descent to fit classifier parameters. We apply DeepType to the problem of Entity Linking on three standard datasets (i.e. WikiDisamb30, CoNLL (YAGO), TAC KBP 2010) and find that it outperforms all existing solutions by a wide margin, including approaches that rely on a human-designed type system or recent deep learning-based entity embeddings, while explicitly using symbolic information lets it integrate new entities without retraining.",http://arxiv.org/abs/1802.01021,2018,conferencePaper,"Raiman, Jonathan; Raiman, Olivier",arXiv:1802.01021 [cs] Potential Risks from Advanced Artificial Intelligence,"We have updated our thinking on this subject since this page was published. For our most current content on this topic, see this blog post. This is a writeup of a shallow investigation, a brief look at an area that we use to decide how to prioritize further research. In a nutshell What is the",https://www.openphilanthropy.org/research/cause-reports/ai-risk,2015,blogPost,Open Philanthropy,Open Philanthropy Embedding Ethical Principles in Collective Decision Support Systems,"The future will see autonomous machines acting in the same environment as humans, in areas as diverse as driving, assistive technology, and health care. Think of self-driving cars, companion robots, and medical diagnosis support systems. We also believe that humans and machines will often need to work together and agree on common decisions. Thus hybrid collective decision making systems will be in great need.",,2016,conferencePaper,"Greene, Joshua; Rossi, Francesca; Tasioulas, John; Venable, Kristen Brent; Williams, Brian", The voluntary provision of a pure public good: The case of reduced CFC emissions and the Montreal Protocol,,https://linkinghub.elsevier.com/retrieve/pii/S0047272796015988,1997,journalArticle,"Murdoch, James C.; Sandler, Todd",Journal of Public Economics Scaling data-driven robotics with reward sketching and batch reinforcement learning,"We present a framework for data-driven robotics that makes use of a large dataset of recorded robot experience and scales to several tasks using learned reward functions. We show how to apply this framework to accomplish three different object manipulation tasks on a real robot platform. Given demonstrations of a task together with task-agnostic recorded experience, we use a special form of human annotation as supervision to learn a reward function, which enables us to deal with real-world tasks where the reward signal cannot be acquired directly. Learned rewards are used in combination with a large dataset of experience from different tasks to learn a robot policy offline using batch RL. 
We show that using our approach it is possible to train agents to perform a variety of challenging manipulation tasks including stacking rigid objects and handling cloth.",http://arxiv.org/abs/1909.12200,2020,conferencePaper,"Cabi, Serkan; Colmenarejo, Sergio Gómez; Novikov, Alexander; Konyushkova, Ksenia; Reed, Scott; Jeong, Rae; Zolna, Konrad; Aytar, Yusuf; Budden, David; Vecerik, Mel; Sushkov, Oleg; Barker, David; Scholz, Jonathan; Denil, Misha; de Freitas, Nando; Wang, Ziyu",arXiv:1909.12200 [cs] GamePad: A Learning Environment for Theorem Proving,"In this paper, we introduce a system called GamePad that can be used to explore the application of machine learning methods to theorem proving in the Coq proof assistant. Interactive theorem provers such as Coq enable users to construct machine-checkable proofs in a step-by-step manner. Hence, they provide an opportunity to explore theorem proving with human supervision. We use GamePad to synthesize proofs for a simple algebraic rewrite problem and train baseline models for a formalization of the Feit-Thompson theorem. We address position evaluation (i.e., predict the number of proof steps left) and tactic prediction (i.e., predict the next proof step) tasks, which arise naturally in tactic-based theorem proving.",http://arxiv.org/abs/1806.00608,2018,manuscript,"Huang, Daniel; Dhariwal, Prafulla; Song, Dawn; Sutskever, Ilya", Strengthening the U.S. AI Workforce: A Policy and Research Agenda,,,2019,report,"Zwetsloot, Remco; Heston, Roxanne; Arnold, Zachary", I Don't Want to Think About it Now: Decision Theory With Costly Computation,"Computation plays a major role in decision making. Even if an agent is willing to ascribe a probability to all states and a utility to all outcomes, and maximize expected utility, doing so might present serious computational problems. Moreover, computing the outcome of a given act might be difficult. In a companion paper we develop a framework for game theory with costly computation, where the objects of choice are Turing machines. Here we apply that framework to decision theory. We show how well-known phenomena like first-impression-matters biases (i.e., people tend to put more weight on evidence they hear early on), belief polarization (two people with different prior beliefs, hearing the same evidence, can end up with diametrically opposed conclusions), and the status quo bias (people are much more likely to stick with what they already have) can be easily captured in that framework. Finally, we use the framework to define some new notions: value of computational information (a computational variant of value of information) and computational value of conversation.",http://arxiv.org/abs/1106.2657,2011,conferencePaper,"Halpern, Joseph Y.; Pass, Rafael",Proceedings of the Twelfth International Conference on Principles of Knowledge Representation and Reasoning How rapidly are GPUs improving in price performance?,,http://mediangroup.org/gpu.html,2018,blogPost,"Maltinsky, Baeo",Median Group Scaling up Psychology via Scientific Regret Minimization: A Case Study in Moral Decisions,"Do large datasets provide value to psychologists? Without a systematic methodology for working with such datasets, there is a valid concern that analyses will produce noise artifacts rather than true effects. In this paper, we offer a way to enable researchers to systematically build models and identify novel phenomena in large datasets.
One traditional approach is to analyze the residuals of models---the biggest errors they make in predicting the data---to discover what might be missing from those models. However, once a dataset is sufficiently large, machine learning algorithms approximate the true underlying function better than the data, suggesting instead that the predictions of these data-driven models should be used to guide model-building. We call this approach ""Scientific Regret Minimization"" (SRM) as it focuses on minimizing errors for cases that we know should have been predictable. We demonstrate this methodology on a subset of the Moral Machine dataset, a public collection of roughly forty million moral decisions. Using SRM, we found that incorporating a set of deontological principles that capture dimensions along which groups of agents can vary (e.g. sex and age) improves a computational model of human moral judgment. Furthermore, we were able to identify and independently validate three interesting moral phenomena: criminal dehumanization, age of responsibility, and asymmetric notions of responsibility.",http://arxiv.org/abs/1910.07581,2020,journalArticle,"Agrawal, Mayank; Peterson, Joshua C.; Griffiths, Thomas L.",PNAS From the Standard Model of AI to Provably Beneficial Systems,,http://n.sinaimg.cn/tech/f34884a9/20200501/GlobalAIGovernancein2019.pdf,2020,bookSection,"Russell, Stuart; Jeanmaire, Caroline",AI Governance in 2019: A Year In Review Glow: Generative Flow with Invertible 1x1 Convolutions,"Flow-based generative models (Dinh et al., 2014) are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis. In this paper we propose Glow, a simple type of generative flow using an invertible 1x1 convolution. Using our method we demonstrate a significant improvement in log-likelihood on standard benchmarks. Perhaps most strikingly, we demonstrate that a generative model optimized towards the plain log-likelihood objective is capable of efficient realistic-looking synthesis and manipulation of large images. The code for our model is available at https://github.com/openai/glow",http://arxiv.org/abs/1807.03039,2018,conferencePaper,"Kingma, Diederik P.; Dhariwal, Prafulla",Advances in Neural Information Processing Systems 31 (NeurIPS 2018) "Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparisons of Utility",,http://link.springer.com/10.1007/978-94-010-9327-9_2,1980,bookSection,"Harsanyi, John C.","Essays on Ethics, Social Behavior, and Scientific Explanation" It Takes a Village: The Shared Responsibility of 'Raising' an Autonomous Weapon,"Expectations around future capabilities of lethal autonomous weapons systems (LAWS) have raised concerns for military risks, ethics, and accountability. The U.K.’s position, as presented among various international voices at the UN’s Convention on Certain Conventional Weapons (CCW) meetings, has attempted to address these concerns through a focused look at the weapons review process, human-machine teaming or “meaningful human control” (see e.g. JCN1/18), and the ability of autonomous systems to adhere to the Rules of Engagement. Further, the U.K. has stated that the existing governance structures—both domestic and international—around weapons systems are sufficient in dealing with any concerns around the development, deployment, and accountability for emerging LAWS; there is no need for novel agreements on the control of these weapons systems.
In an effort to better understand and test the U.K. position on LAWS, the Centre for the Study of Existential Risk has run a research project in which we interviewed experts in multiple relevant organisations, structured around a mock parliamentary inquiry of a hypothetical LAWS-related civilian death. The responses to this scenario have highlighted different, sometimes complementary and sometimes contradicting, conceptions of future systems, challenges, and accountability measures. They have provided rich ""on the ground” perspectives, while also highlighting key gaps that should be addressed by every military that is considering acquisition and deployment of autonomous and semi-autonomous weapon systems.",,2020,report,"Jayanti, Amritha; Avin, Shahar", ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators,A text encoder trained to distinguish real input tokens from plausible fakes efficiently learns effective language representations.,https://openreview.net/forum?id=r1xMH1BtvB,2019,conferencePaper,"Clark, Kevin; Luong, Minh-Thang; Le, Quoc V.; Manning, Christopher D.", Beyond risk-benefit analysis: pricing externalities for gain-of-function research of concern,,,2016,journalArticle,"Cotton-Barratt, Owen; Farquhar, Sebastian; Snyder-Beattie, Andrew","Policy Working Paper [Revision 0.9]. Future of Humanity Institute, University of Oxford. Available at: http://globalprioritiesproject. org/wp-content/uploads/2016/03/GoFv9-3. pdf" Evolved Policy Gradients,"We propose a metalearning approach for learning gradient-based reinforcement learning (RL) algorithms. The idea is to evolve a differentiable loss function, such that an agent, which optimizes its policy to minimize this loss, will achieve high rewards. The loss is parametrized via temporal convolutions over the agent's experience. Because this loss is highly flexible in its ability to take into account the agent's history, it enables fast task learning. Empirical results show that our evolved policy gradient algorithm (EPG) achieves faster learning on several randomized environments compared to an off-the-shelf policy gradient method. We also demonstrate that EPG's learned loss can generalize to out-of-distribution test time tasks, and exhibits qualitatively different behavior from other popular metalearning algorithms.",https://papers.nips.cc/paper/2018/hash/7876acb66640bad41f1e1371ef30c180-Abstract.html,2018,conferencePaper,"Houthooft, Rein; Chen, Richard Y.; Isola, Phillip; Stadie, Bradly C.; Wolski, Filip; Ho, Jonathan; Abbeel, Pieter",Advances in Neural Information Processing Systems 31 (NeurIPS 2018) Adversarial Imitation via Variational Inverse Reinforcement Learning,"We consider a problem of learning the reward and policy from expert examples under unknown dynamics. Our proposed method builds on the framework of generative adversarial networks and introduces the empowerment-regularized maximum-entropy inverse reinforcement learning to learn near-optimal rewards and policies. Empowerment-based regularization prevents the policy from overfitting to expert demonstrations, which advantageously leads to more generalized behaviors that result in learning near-optimal rewards. Our method simultaneously learns empowerment through variational information maximization along with the reward and policy under the adversarial learning formulation. We evaluate our approach on various high-dimensional complex control tasks. 
We also test our learned rewards in challenging transfer learning problems where training and testing environments are made to be different from each other in terms of dynamics or structure. The results show that our proposed method not only learns near-optimal rewards and policies that are matching expert behavior but also performs significantly better than state-of-the-art inverse reinforcement learning algorithms.",http://arxiv.org/abs/1809.06404,2019,manuscript,"Qureshi, Ahmed H.; Boots, Byron; Yip, Michael C.", Query-Efficient Imitation Learning for End-to-End Autonomous Driving,"One way to approach end-to-end autonomous driving is to learn a policy function that maps from a sensory input, such as an image frame from a front-facing camera, to a driving action, by imitating an expert driver, or a reference policy. This can be done by supervised learning, where a policy function is tuned to minimize the difference between the predicted and ground-truth actions. A policy function trained in this way however is known to suffer from unexpected behaviours due to the mismatch between the states reachable by the reference policy and trained policy functions. More advanced algorithms for imitation learning, such as DAgger, addresses this issue by iteratively collecting training examples from both reference and trained policies. These algorithms often requires a large number of queries to a reference policy, which is undesirable as the reference policy is often expensive. In this paper, we propose an extension of the DAgger, called SafeDAgger, that is query-efficient and more suitable for end-to-end autonomous driving. We evaluate the proposed SafeDAgger in a car racing simulator and show that it indeed requires less queries to a reference policy. We observe a significant speed up in convergence, which we conjecture to be due to the effect of automated curriculum learning.",http://arxiv.org/abs/1605.06450,2017,conferencePaper,"Zhang, Jiakai; Cho, Kyunghyun",Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17) A Low-Cost Ethics Shaping Approach for Designing Reinforcement Learning Agents,"This paper proposes a low-cost, easily realizable strategy to equip a reinforcement learning (RL) agent the capability of behaving ethically. Our model allows the designers of RL agents to solely focus on the task to achieve, without having to worry about the implementation of multiple trivial ethical patterns to follow. Based on the assumption that the majority of human behavior, regardless which goals they are achieving, is ethical, our design integrates human policy with the RL policy to achieve the target objective with less chance of violating the ethical code that human beings normally obey.",http://arxiv.org/abs/1712.04172,2018,conferencePaper,"Wu, Yueh-Hua; Lin, Shou-De",The Thirty-Second AAAI Conferenceon Artificial Intelligence (AAAI-18) Bounded Rationality in Las Vegas: Probabilistic Finite Automata PlayMulti-Armed Bandits,"While traditional economics assumes that humans are fully rational agents who always maximize their expected utility, in practice, we constantly observe apparently irrational behavior. One explanation is that people have limited computational power, so that they are, quite rationally, making the best decisions they can, given their computational limitations. To test this hypothesis, we consider the multi-armed bandit (MAB) problem. 
We examine a simple strategy for playing an MAB that can be implemented easily by a probabilistic finite automaton (PFA). Roughly speaking, the PFA sets certain expectations, and plays an arm as long as it meets them. If the PFA has sufficiently many states, it performs near-optimally. Its performance degrades gracefully as the number of states decreases. Moreover, the PFA acts in a ""human-like"" way, exhibiting a number of standard human biases, like an optimism bias and a negativity bias.",http://arxiv.org/abs/2006.16950,2020,conferencePaper,"Liu, Xinming; Halpern, Joseph Y.",Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI) 'Indifference' methods for managing agent rewards,,https://arxiv.org/abs/1712.06365,2017,manuscript,"Armstrong, Stuart; O'Rourke, Xavier", Moral Anti-Realism Sequence #4: Why the Moral Realism Wager Fails,"This is the fourth post in my sequence on moral anti-realism; it works well as a standalone piece. OUTLINE In my previous post, I argued that irreducible normativity may not be meaningful. One might hold the intuition that if our actions don’t matter in the irreducibly normative sense, they don’t matter at all. In this post, I’ll address the argument that as long as we believe there is a slight chance that irreducible normativity might be true, we should act as though it’s true. This wager for moral realism has the same structure as a related argument described by Michael Huemer (2013). I will discuss Huemer’s argument and show how we can expand it into a wager for moral realism. Then, I’ll explain why I consider the resulting wager unconvincing. HUEMER’S “PROOF OF MORAL REALISM” In the paper “An Ontological Proof of Moral Realism,” Michael Huemer presents the following argument in support of moral realism: Given that moral realism might be true, and given that we know some of the things we ought to do if it is true, we have a reason to do those things. Furthermore, this reason is itself an objective moral reason. Thus, we have at least one objective moral reason. The conclusion in the first sentence (“we have a reason to do those things”) derives from what Huemer calls the Probabilistic Reasons Principle: The rough idea is that if some fact would (if you knew it) provide a reason for you to behave in a certain way, then your having some reason to believe that fact obtains also provides you with a reason to behave in the same way. Even a small epistemic probability of the fact’s obtaining provides you with a (perhaps very small) first person reason for action. So, the argument is that if we start with non-zero credence in the existence of moral reasons, and if we have at least a vague sense about what those reasons would imply, then, via the Probabilistic Reasons Principle, we can conclude that we have one type of irreducible reason—and Huemer argues that th
Here we present concrete algorithms for causal reasoning in discrete probability trees that cover the entire causal hierarchy (association, intervention, and counterfactuals), and operate on arbitrary propositional and causal events. Our work expands the domain of causal reasoning to a very general class of discrete stochastic processes.",http://arxiv.org/abs/2010.12237,2020,manuscript,"Genewein, Tim; McGrath, Tom; Déletang, Grégoire; Mikulik, Vladimir; Martic, Miljan; Legg, Shane; Ortega, Pedro A.", A Prima Facie Duty Approach to Machine Ethics,,https://www.cambridge.org/core/product/identifier/CBO9780511978036A041/type/book_part,2011,bookSection,"Anderson, Susan Leigh; Anderson, Michael",Machine Ethics Tiptoeing around it: Inference from absence in potentially offensive speech,"Language that describes people in a concise manner may conflict with social norms (e.g., referring to people by their race), presenting a conflict between transferring information efficiently and avoiding offensive language. When a speaker is describing others, we propose that listeners consider the speaker’s use or absence of potentially offensive language to reason about the speaker’s goals. We formalize this hypothesis in a probabilistic model of polite pragmatic language understanding, and use it to generate predictions about interpretations of utterances in ambiguous contexts, which we test empirically. We find that participants are sensitive to potentially offensive language when resolving ambiguity in reference. These results support the idea that listeners represent conflicts in speakers’ goals and use that uncertainty to interpret otherwise underspecified utterances.",,2018,conferencePaper,"Gates, Monica A; Tessler, Michael Henry; Bayet, Laurie", Special issue on learning for human–robot collaboration,,https://doi.org/10.1007/s10514-018-9756-z,2018,journalArticle,"Rozo, Leonel; Amor, Heni Ben; Calinon, Sylvain; Dragan, Anca; Lee, Dongheui",Autonomous Robots Intuitions about goal-directed behavior,"One broad argument for AI risk is the Misspecified Goal argument: The Misspecified Goal Argument for AI Risk: Very intelligent AI systems will be able to make long-term plans in order to achieve their goals, and if their goals are even slightly misspecified then the AI system will become adversarial and work against us. My main goal in this post is to make conceptual clarifications and suggest how they affect the Misspecified Goal argument, without making any recommendations about what we should actually do. Future posts will argue more directly for a particular position. As a result, I will not be considering other arguments for focusing on AI risk even though I find some of them more compelling. I think of this as a concern about long-term goal-directed behavior. Unfortunately, it’s not clear how to categorize behavior as goal-directed vs. not. Intuitively, any agent that searches over actions and chooses the one that best achieves some measure of “goodness” is goal-directed (though there are exceptions, such as the agent that selects actions that begin with the letter “A”). (ETA: I also think that agents that show goal-directed behavior because they are looking at some other agent are not goal-directed themselves -- see this comment.) However, this is not a necessary condition: many humans are goal-directed, but there is no goal baked into the brain that they are using to choose actions. 
This is related to the concept of optimization, though with intuitions around optimization we typically assume that we know the agent’s preference ordering, which I don’t want to assume here. (In fact, I don’t want to assume that the agent even has a preference ordering.) One potential formalization is to say that goal-directed behavior is any behavior that can be modelled as maximizing expected utility for some utility function; in the next post I will argue that this does not properly capture the behaviors we are worried about. In this post I’ll give some intuitions about",https://www.alignmentforum.org/posts/DfcywmqRSkBaCB6Ma/intuitions-about-goal-directed-behavior,2018,blogPost,"Shah, Rohin",AI Alignment Forum #Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning,"Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domain-dependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.",http://arxiv.org/abs/1611.04717,2017,conferencePaper,"Tang, Haoran; Houthooft, Rein; Foote, Davis; Stooke, Adam; Chen, Xi; Duan, Yan; Schulman, John; De Turck, Filip; Abbeel, Pieter",arXiv:1611.04717 [cs] Learning a Prior over Intent via Meta-Inverse Reinforcement Learning,"A significant challenge for the practical application of reinforcement learning in the real world is the need to specify an oracle reward function that correctly defines a task. Inverse reinforcement learning (IRL) seeks to avoid this challenge by instead inferring a reward function from expert behavior. While appealing, it can be impractically expensive to collect datasets of demonstrations that cover the variation common in the real world (e.g. opening any type of door). Thus in practice, IRL must commonly be performed with only a limited set of demonstrations where it can be exceedingly difficult to unambiguously recover a reward function. 
In this work, we exploit the insight that demonstrations from other tasks can be used to constrain the set of possible reward functions by learning a ""prior"" that is specifically optimized for the ability to infer expressive reward functions from limited numbers of demonstrations. We demonstrate that our method can efficiently recover rewards from images for novel tasks and provide intuition as to how our approach is analogous to learning a prior.",http://arxiv.org/abs/1805.12573,2019,conferencePaper,"Xu, Kelvin; Ratner, Ellis; Dragan, Anca; Levine, Sergey; Finn, Chelsea",Proceedings of the 36th International Conference on Machine Learning "The ""Commitment Races"" Problem","[Epistemic status: Strong claims vaguely stated and weakly held. I expect that writing this and digesting feedback on it will lead to a much better version in the future. EDIT: So far this has stood the test of time. EDIT: As of September 2020 I think this is one of the most important things to be thinking about.] This post attempts to generalize and articulate a problem that people have been thinking about since at least 2016. [Edit: 2009 in fact!] In short, here is the problem: Consequentialists can get caught in commitment races, in which they want to make commitments as soon as possible. When consequentialists make commitments too soon, disastrous outcomes can sometimes result. The situation we are in (building AGI and letting it self-modify) may be one of these times unless we think carefully about this problem and how to avoid it. For this post I use ""consequentialists"" to mean agents that choose actions entirely on the basis of the expected consequences of those actions. For my purposes, this means they don't care about historical facts such as whether the options and consequences available now are the result of malicious past behavior. (I am trying to avoid trivial definitions of consequentialism according to which everyone is a consequentialist because e.g. ""obeying the moral law"" is a consequence.) This definition is somewhat fuzzy and I look forward to searching for more precision some other day. CONSEQUENTIALISTS CAN GET CAUGHT IN COMMITMENT RACES, IN WHICH THEY WANT TO MAKE COMMITMENTS AS SOON AS POSSIBLE Consequentialists are bullies; a consequentialist will happily threaten someone insofar as they think the victim might capitulate and won't retaliate. Consequentialists are also cowards; they conform their behavior to the incentives set up by others, regardless of the history of those incentives. For example, they predictably give in to credible threats unless reputational effects weigh heavily enough in their minds to prevent this. In most ordi",https://www.alignmentforum.org/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem,2019,blogPost,"Kokotajlo, Daniel",AI Alignment Forum Mainframes: A Provisional Analysis of Rhetorical Frames in AI,"Are great powers engaged in an artificial intelligence arms race? This issue brief explores the rhetorical framing of AI by analyzing more than 4,000 English-language articles over a seven-year period. 
Among its findings: a growing number of articles frame AI development as a competition, but articles using the competition frame represent a declining proportion of articles about AI.",https://cset.georgetown.edu/research/mainframes-a-provisional-analysis-of-rhetorical-frames-in-ai/,2020,report,"Imbrie, Andrew; Dunham, James; Gelles, Rebecca; Aiken, Catherine", SGD on Neural Networks Learns Functions of Increasing Complexity,"We perform an experimental study of the dynamics of Stochastic Gradient Descent (SGD) in learning deep neural networks for several real and synthetic classification tasks. We show that in the initial epochs, almost all of the performance improvement of the classifier obtained by SGD can be explained by a linear classifier. More generally, we give evidence for the hypothesis that, as iterations progress, SGD learns functions of increasing complexity. This hypothesis can be helpful in explaining why SGD-learned classifiers tend to generalize well even in the over-parameterized regime. We also show that the linear classifier learned in the initial stages is ""retained"" throughout the execution even if training is continued to the point of zero training error, and complement this with a theoretical result in a simplified model. Key to our work is a new measure of how well one classifier explains the performance of another, based on conditional mutual information.",http://arxiv.org/abs/1905.11604,2019,conferencePaper,"Nakkiran, Preetum; Kaplun, Gal; Kalimeris, Dimitris; Yang, Tristan; Edelman, Benjamin L.; Zhang, Fred; Barak, Boaz",Advances in Neural Information Processing Systems 32 (NeurIPS 2019) Goal-conditioned Imitation Learning,"Designing rewards for Reinforcement Learning (RL) is challenging because it needs to convey the desired task, be efficient to optimize, and be easy to compute. The latter is particularly problematic when applying RL to robotics, where detecting whether the desired configuration is reached might require considerable supervision and instrumentation. Furthermore, we are often interested in being able to reach a wide range of configurations, hence setting up a different reward every time might be unpractical. Methods like Hindsight Experience Replay (HER) have recently shown promise to learn policies able to reach many goals, without the need of a reward. Unfortunately, without tricks like resetting to points along the trajectory, HER might require many samples to discover how to reach certain areas of the state-space. In this work we investigate different approaches to incorporate demonstrations to drastically speed up the convergence to a policy able to reach any goal, also surpassing the performance of an agent trained with other Imitation Learning algorithms. Furthermore, we show our method can also be used when the available expert trajectories do not contain the actions, which can leverage kinesthetic or third person demonstration. The code is available at https://sites.google.com/view/goalconditioned-il/.",https://arxiv.org/abs/1906.05838v3,2019,conferencePaper,"Ding, Yiming; Florensa, Carlos; Phielipp, Mariano; Abbeel, Pieter",Advances in Neural Information Processing Systems 32 (NeurIPS 2019) Synthesizing Robust Adversarial Examples,"Standard methods for generating adversarial examples for neural networks do not consistently fool neural network classifiers in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations, limiting their relevance to real-world systems. 
We demonstrate the existence of robust 3D adversarial objects, and we present the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations. We synthesize two-dimensional adversarial images that are robust to noise, distortion, and affine transformation. We apply our algorithm to complex three-dimensional objects, using 3D-printing to manufacture the first physical adversarial objects. Our results demonstrate the existence of 3D adversarial objects in the physical world.",http://arxiv.org/abs/1707.07397,2018,conferencePaper,"Athalye, Anish; Engstrom, Logan; Ilyas, Andrew; Kwok, Kevin",Proceedings of the 35th International Conference on Machine Learning Emergent Communication through Negotiation,"Multi-agent reinforcement learning offers a way to study how communication could emerge in communities of agents needing to solve specific problems. In this paper, we study the emergence of communication in the negotiation environment, a semi-cooperative model of agent interaction. We introduce two communication protocols -- one grounded in the semantics of the game, and one which is \textit{a priori} ungrounded and is a form of cheap talk. We show that self-interested agents can use the pre-grounded communication channel to negotiate fairly, but are unable to effectively use the ungrounded channel. However, prosocial agents do learn to use cheap talk to find an optimal negotiating strategy, suggesting that cooperation is necessary for language to emerge. We also study communication behaviour in a setting where one agent interacts with agents in a community with different levels of prosociality and show how agent identifiability can aid negotiation.",http://arxiv.org/abs/1804.03980,2018,conferencePaper,"Cao, Kris; Lazaridou, Angeliki; Lanctot, Marc; Leibo, Joel Z.; Tuyls, Karl; Clark, Stephen",arXiv:1804.03980 [cs] An Analytic Perspective on AI Alignment,"This is a perspective I have on how to do useful AI alignment research. Most perspectives I’m aware of are constructive: they have some blueprint for how to build an aligned AI system, and propose making it more concrete, making the concretisations more capable, and showing that it does in fact produce an aligned AI system. I do not have a constructive perspective - I’m not sure how to build an aligned AI system, and don’t really have a favourite approach. Instead, I have an analytic perspective. I would like to understand AI systems that are built. I also want other people to understand them. I think that this understanding will hopefully act as a ‘filter’ that means that dangerous AI systems are not deployed. The following dot points lay out the perspective. Since the remainder of this post is written as nested dot points, some readers may prefer to read it in workflowy. BACKGROUND BELIEFS * I am imagining a future world in which powerful AGI systems are made of components roughly like neural networks (either feedforward or recurrent) that have a large number of parameters. * Futhermore, I’m imagining that the training process of these ML systems does not provide enough guarantees about deployment performance. * In particular, I’m supposing that systems are being trained based on their ability to deal with simulated situations, and that that’s insufficient because deployment situations are hard to model and therefore simulate. * One reason that they are hard to model is the complexities of the real world. * The real world might be intrinsically difficult to model for the relevant system. 
For instance, it’s difficult to simulate all the situations in which the CEO of Amazon might find themselves. * Another reason that real world situations may be hard to model is that they are dependent on the final trained system. * The trained system may be able to af",https://www.alignmentforum.org/posts/8GdPargak863xaebm/an-analytic-perspective-on-ai-alignment,2020,blogPost,"Filan, Daniel",AI Alignment Forum Meta-learning of Sequential Strategies,"In this report we review memory-based meta-learning as a tool for building sample-efficient strategies that learn from past experience to adapt to any task within a target class. Our goal is to equip the reader with the conceptual foundations of this tool for building new, scalable agents that operate on broad domains. To do so, we present basic algorithmic templates for building near-optimal predictors and reinforcement learners which behave as if they had a probabilistic model that allowed them to efficiently exploit task structure. Furthermore, we recast memory-based meta-learning within a Bayesian framework, showing that the meta-learned strategies are near-optimal because they amortize Bayes-filtered data, where the adaptation is implemented in the memory dynamics as a state-machine of sufficient statistics. Essentially, memory-based meta-learning translates the hard problem of probabilistic sequential inference into a regression problem.",http://arxiv.org/abs/1905.03030,2019,manuscript,"Ortega, Pedro A.; Wang, Jane X.; Rowland, Mark; Genewein, Tim; Kurth-Nelson, Zeb; Pascanu, Razvan; Heess, Nicolas; Veness, Joel; Pritzel, Alex; Sprechmann, Pablo; Jayakumar, Siddhant M.; McGrath, Tom; Miller, Kevin; Azar, Mohammad; Osband, Ian; Rabinowitz, Neil; György, András; Chiappa, Silvia; Osindero, Simon; Teh, Yee Whye; van Hasselt, Hado; de Freitas, Nando; Botvinick, Matthew; Legg, Shane", Provably Bounded-Optimal Agents,"Since its inception, artificial intelligence has relied upon a theoretical foundation centered around perfect rationality as the desired property of intelligent systems. We argue, as others have done, that this foundation is inadequate because it imposes fundamentally unsatisfiable requirements. As a result, there has arisen a wide gap between theory and practice in AI, hindering progress in the field. We propose instead a property called bounded optimality. Roughly speaking, an agent is bounded-optimal if its program is a solution to the constrained optimization problem presented by its architecture and the task environment. We show how to construct agents with this property for a simple class of machine architectures in a broad class of real-time environments. We illustrate these results using a simple model of an automated mail sorting facility. We also define a weaker property, asymptotic bounded optimality (ABO), that generalizes the notion of optimality in classical complexity theory. We then construct universal ABO programs, i.e., programs that are ABO no matter what real-time constraints are applied. Universal ABO programs can be used as building blocks for more complex systems. We conclude with a discussion of the prospects for bounded optimality as a theoretical basis for AI, and relate it to similar trends in philosophy, economics, and game theory.",https://www.jair.org/index.php/jair/article/view/10134,1995,journalArticle,"Russell, S. 
J.; Subramanian, D.",Journal of Artificial Intelligence Research Reflections on the risk analysis of nuclear war,,,2018,conferencePaper,"Baum, Seth","Proceedings of the Workshop on Quantifying Global Catastrophic Risks, Garrick Institute for the Risk Sciences, University of California, Los Angeles" Momentum Contrast for Unsupervised Visual Representation Learning,"We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning [29] as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks.",http://arxiv.org/abs/1911.05722,2020,conferencePaper,"He, Kaiming; Fan, Haoqi; Wu, Yuxin; Xie, Saining; Girshick, Ross",Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Equivalence Between Policy Gradients and Soft Q-Learning,"Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that ""soft"" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.",http://arxiv.org/abs/1704.06440,2018,manuscript,"Schulman, John; Chen, Xi; Abbeel, Pieter", On the Interpretation of Decision Problems with Imperfect Recall,,https://linkinghub.elsevier.com/retrieve/pii/S0899825697905364,1997,journalArticle,"Piccione, Michele; Rubinstein, Ariel",Games and Economic Behavior Reflections on the Singularity Journey,"SummaryWhy didn’t Vernor Vinge’s foundational essay “What is the Singularity?” convince most technologically literate people that a singularity is near? Perhaps the exponential, anti-intuitive nature of the Singularity makes a singularity too difficult to visualize, and our species tends not to believe things it can’t put in story form. Or possibly the superficial absurdity of a singularity prevents most people from giving the concept enough serious consideration. 
Also, the lack of a clear timeline for the unfolding of a singularity might cause many to think the idea is unfalsifiable. Some might think that since the Singularity almost certainly won’t take place for a long time, an optimal allocation of attention would ignore it in favor of more immediate concerns. Finally, rationality forces us to admit that the editors and most of the writers of this book could be in error in thinking that a singularity probably lies in humankind’s near future.",https://doi.org/10.1007/978-3-662-54033-6_13,2017,bookSection,"Miller, James D.",The Technological Singularity: Managing the Journey "False Alarms, True Dangers?",,,2016,journalArticle,"Barrett, Anthony M.","RAND Corporation document PE-191-TSF, DOI" Decision Theory with Resource-Bounded Agents,,http://doi.wiley.com/10.1111/tops.12088,2014,journalArticle,"Halpern, Joseph Y.; Pass, Rafael; Seeman, Lior",Topics in Cognitive Science The Professional's Dilemma,,http://mediangroup.org/docs/the_professionals_dilemma.pdf,,manuscript,"Hoffman, Ben", Internet Gaming Addiction: A Systematic Review of Empirical Research,,http://link.springer.com/10.1007/s11469-011-9318-5,2012,journalArticle,"Kuss, Daria Joanna; Griffiths, Mark D.",International Journal of Mental Health and Addiction The anchoring bias reflects rational use of cognitive resources,"Cognitive biases, such as the anchoring bias, pose a serious challenge to rational accounts of human cognition. We investigate whether rational theories can meet this challenge by taking into account the mind’s bounded cognitive resources. We asked what reasoning under uncertainty would look like if people made rational use of their finite time and limited cognitive resources. To answer this question, we applied a mathematical theory of bounded rationality to the problem of numerical estimation. Our analysis led to a rational process model that can be interpreted in terms of anchoring-and-adjustment. This model provided a unifying explanation for ten anchoring phenomena including the differential effect of accuracy motivation on the bias towards provided versus self-generated anchors. Our results illustrate the potential of resource-rational analysis to provide formal theories that can unify a wide range of empirical results and reconcile the impressive capacities of the human mind with its apparently irrational cognitive biases.",https://doi.org/10.3758/s13423-017-1286-8,2018,journalArticle,"Lieder, Falk; Griffiths, Thomas L.; M. Huys, Quentin J.; Goodman, Noah D.",Psychonomic Bulletin & Review "The Limits of Safety: Organizations, Accidents, and Nuclear Weapons",,https://princetonup.degruyter.com/view/title/584143,1993,book,"Sagan, Scott Douglas", On the Differential Privacy of Bayesian Inference,"We study how to communicate findings of Bayesian inference to third parties, while preserving the strong guarantee of differential privacy. Our main contributions are four different algorithms for private Bayesian inference on proba-bilistic graphical models. These include two mechanisms for adding noise to the Bayesian updates, either directly to the posterior parameters, or to their Fourier transform so as to preserve update consistency. We also utilise a recently introduced posterior sampling mechanism, for which we prove bounds for the specific but general case of discrete Bayesian networks; and we introduce a maximum-a-posteriori private mechanism. Our analysis includes utility and privacy bounds, with a novel focus on the influence of graph structure on privacy. 
Worked examples and experiments with Bayesian na{\""i}ve Bayes and Bayesian linear regression illustrate the application of our mechanisms.",http://arxiv.org/abs/1512.06992,2016,conferencePaper,"Zhang, Zuhe; Rubinstein, Benjamin; Dimitrakakis, Christos",Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16) Adversarial Training with Voronoi Constraints,"Adversarial examples are a pervasive phenomenon of machine learning models where seemingly imperceptible perturbations to the input lead to misclassifications for otherwise statistically accurate models. We propose a geometric framework, drawing on tools from the manifold reconstruction literature, to analyze the high-dimensional geometry of adversarial examples. In particular, we highlight the importance of codimension: for low-dimensional data manifolds embedded in high-dimensional space there are many directions off the manifold in which an adversary could construct adversarial examples. Adversarial examples are a natural consequence of learning a decision boundary that classifies the low-dimensional data manifold well, but classifies points near the manifold incorrectly. Using our geometric framework we prove that adversarial training is sample inefficient, and show sufficient sampling conditions under which nearest neighbor classifiers and ball-based adversarial training are robust. Finally we introduce adversarial training with Voronoi constraints, which replaces the norm ball constraint with the Voronoi cell for each point in the training set. We show that adversarial training with Voronoi constraints produces robust models which significantly improve over the state-of-the-art on MNIST and are competitive on CIFAR-10.",http://arxiv.org/abs/1905.01019,2019,manuscript,"Khoury, Marc; Hadfield-Menell, Dylan", Thompson Sampling is Asymptotically Optimal in General Environments,"We discuss a variant of Thompson sampling for nonparametric reinforcement learning in a countable classes of general stochastic environments. These environments can be non-Markov, non-ergodic, and partially observable. We show that Thompson sampling learns the environment class in the sense that (1) asymptotically its value converges to the optimal value in mean and (2) given a recoverability assumption regret is sublinear.",http://arxiv.org/abs/1602.07905,2016,conferencePaper,"Leike, Jan; Lattimore, Tor; Orseau, Laurent; Hutter, Marcus","arXiv:1602.07905 [cs, stat]" A Descending Veil of Maya,,,2019,manuscript,"Hidysmith, J Bryce", Growing Neural Cellular Automata,"Training an end-to-end differentiable, self-organising cellular automata model of morphogenesis, able to both grow and regenerate specific patterns.",https://distill.pub/2020/growing-ca,2020,journalArticle,"Mordvintsev, Alexander; Randazzo, Ettore; Niklasson, Eyvind; Levin, Michael",Distill Written Evidence to Lords Select Committee on Artificial Intelligence,,http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/artificial-intelligence-committee/artificial-intelligence/written/69655.html,2017,report,"Beard, Simon; Avin, Shahar; Ó hÉigeartaigh, Seán; Kunz, Martina; Ware, Andrew", The Negotiated Order of Organizational Reliability,,http://journals.sagepub.com/doi/10.1177/009539979302500305,1993,journalArticle,"Schulman, Paul R.",Administration & Society AI Research with the Potential for Malicious Use: Publication Norms and Governance Considerations,Chapter in AI Governance in 2019 - A Year in Review: Observations from 50 Global Experts. 
A report produced by the Shanghai Institute of Science for Science.,http://lcfi.ac.uk/resources/ai-research-potential-malicious-use-publication-no/,2020,bookSection,"Ó hÉigeartaigh, Seán",AI Governance in 2019 - A Year In Review Autonomous Intelligent Cyber-defense Agent (AICA) Reference Architecture. Release 2.0,"This report - a major revision of its previous release - describes a reference architecture for intelligent software agents performing active, largely autonomous cyber-defense actions on military networks of computing and communicating devices. The report is produced by the North Atlantic Treaty Organization (NATO) Research Task Group (RTG) IST-152 ""Intelligent Autonomous Agents for Cyber Defense and Resilience"". In a conflict with a technically sophisticated adversary, NATO military tactical networks will operate in a heavily contested battlefield. Enemy software cyber agents - malware - will infiltrate friendly networks and attack friendly command, control, communications, computers, intelligence, surveillance, and reconnaissance and computerized weapon systems. To fight them, NATO needs artificial cyber hunters - intelligent, autonomous, mobile agents specialized in active cyber defense. With this in mind, in 2016, NATO initiated RTG IST-152. Its objective has been to help accelerate the development and transition to practice of such software agents by producing a reference architecture and technical roadmap. This report presents the concept and architecture of an Autonomous Intelligent Cyber-defense Agent (AICA). We describe the rationale of the AICA concept, explain the methodology and purpose that drive the definition of the AICA Reference Architecture, and review some of the main features and challenges of AICAs.",http://arxiv.org/abs/1803.10664,2019,journalArticle,"Kott, Alexander; Théron, Paul; Drašar, Martin; Dushku, Edlira; LeBlanc, Benoît; Losiewicz, Paul; Guarino, Alessandro; Mancini, Luigi; Panico, Agostino; Pihelgas, Mauno; Rzadca, Krzysztof",The Journal of Defense Modeling and Simulation How to Be Helpful to Multiple People at Once,"When someone hosts a party, when governments choose an aid program, or when assistive robots decide what meal to serve to a family, decision-makers must determine how to help even when their recipients have very different preferences. Which combination of people’s desires should a decisionmaker serve? To provide a potential answer, we turned to psychology: What do people think is best when multiple people have different utilities over options? We developed a quantitative model of what people consider desirable behavior, characterizing participants’ preferences by inferring which combination of “metrics” (maximax, maxsum, maximin, or inequality aversion [IA]) best explained participants’ decisions in a drink-choosing task. We found that participants’ behavior was best described by the maximin metric, describing the desire to maximize the happiness of the worst-off person, though participant behavior was also consistent with maximizing group utility (the maxsum metric) and the IA metric to a lesser extent. Participant behavior was consistent across variation in the agents involved and tended to become more maxsum-oriented when participants were told they were players in the task (Experiment 1). In later experiments, participants maintained maximin behavior across multi-step tasks rather than shortsightedly focusing on the individual steps therein (Experiment 2, Experiment 3). 
By repeatedly asking participants what choices they would hope for in an optimal, just decision-maker, and carefully disambiguating which quantitative metrics describe these nuanced choices, we help constrain the space of what behavior we desire in leaders, artificial intelligence systems helping decision-makers, and the assistive robots and decision-makers of the future.",https://onlinelibrary.wiley.com/doi/abs/10.1111/cogs.12841,2020,journalArticle,"Gates, Vael; Griffiths, Thomas L.; Dragan, Anca D.",Cognitive Science Cases of Discontinuous Technological Progress,We know of ten events which produced a robust discontinuity in progress equivalent to more than one hundred years at previous rates in some interesting metric. We know of 53 other events which produced smaller or less robust discontinuities. Background These cases were researched as part of our discontinuous progress investigation. List of cases Events...,https://aiimpacts.org/cases-of-discontinuous-technological-progress/,2014,blogPost,AI Impacts,AI Impacts Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors,"Bayesian neural networks (BNNs) demonstrate promising success in improving the robustness and uncertainty quantification of modern deep learning. However, they generally struggle with underfitting at scale and parameter efficiency. On the other hand, deep ensembles have emerged as alternatives for uncertainty quantification that, while outperforming BNNs on certain problems, also suffer from efficiency issues. It remains unclear how to combine the strengths of these two approaches and remediate their common issues. To tackle this challenge, we propose a rank-1 parameterization of BNNs, where each weight matrix involves only a distribution on a rank-1 subspace. We also revisit the use of mixture approximate posteriors to capture multiple modes, where unlike typical mixtures, this approach admits a significantly smaller memory increase (e.g., only a 0.4% increase for a ResNet-50 mixture of size 10). We perform a systematic empirical study on the choices of prior, variational posterior, and methods to improve training. For ResNet-50 on ImageNet, Wide ResNet 28-10 on CIFAR-10/100, and an RNN on MIMIC-III, rank-1 BNNs achieve state-of-the-art performance across log-likelihood, accuracy, and calibration on the test sets and out-of-distribution variants.",http://arxiv.org/abs/2005.07186,2020,conferencePaper,"Dusenberry, Michael W.; Jerfel, Ghassen; Wen, Yeming; Ma, Yi-an; Snoek, Jasper; Heller, Katherine; Lakshminarayanan, Balaji; Tran, Dustin",Proceedings of the 37th International Conference on Machine Learning Trends in the cost of computing,"Computing power available per dollar has probably increased by a factor of ten roughly every four years over the last quarter of a century (measured in FLOPS or MIPS). Over the past 6-8 years, the rate has been slower: around an order of magnitude every 10-16 years, measured in single precision theoretical peak FLOPS or Passmark's benchmark scores. Since...",https://aiimpacts.org/trends-in-the-cost-of-computing/,2015,blogPost,AI Impacts,AI Impacts Neural MMO: A Massively Multiagent Game Environment for Training and Evaluating Intelligent Agents,"The emergence of complex life on Earth is often attributed to the arms race that ensued from a huge number of organisms all competing for finite resources. We present an artificial intelligence research environment, inspired by the human game genre of MMORPGs (Massively Multiplayer Online Role-Playing Games, a.k.a. 
MMOs), that aims to simulate this setting in microcosm. As with MMORPGs and the real world alike, our environment is persistent and supports a large and variable number of agents. Our environment is well suited to the study of large-scale multiagent interaction: it requires that agents learn robust combat and navigation policies in the presence of large populations attempting to do the same. Baseline experiments reveal that population size magnifies and incentivizes the development of skillful behaviors and results in agents that outcompete agents trained in smaller populations. We further show that the policies of agents with unshared weights naturally diverge to fill different niches in order to avoid competition.",http://arxiv.org/abs/1903.00784,2019,manuscript,"Suarez, Joseph; Du, Yilun; Isola, Phillip; Mordatch, Igor", Understanding the intentions of others: Re-enactment of intended acts by 18-month-old children.,,http://doi.apa.org/getdoi.cfm?doi=10.1037/0012-1649.31.5.838,1995,journalArticle,"Meltzoff, Andrew N.",Developmental Psychology AI Benefits Blog Series Index,,https://cullenokeefe.com/ai-benefits-index,2020,blogPost,"O'Keefe, Cullen",Cullen O'Keefe Transhumanism and the Meaning of Life,,,2014,journalArticle,"Sandberg, Anders",Religion and Transhumanism: The Unknown Future of Human Enhancement Three wagers for multiverse-wide superrationality,"In this post, I outline three wagers in favor of the hypothesis that multiverse-wide superrationality (MSR) has action-guiding implications. MSR is based on three core assumptions: There is a large…",https://casparoesterheld.com/2018/03/31/three-wagers-for-multiverse-wide-superrationality/,2018,blogPost,"Treutlein, Johannes",The Universe from an Intentional Stance A model of pathways to artificial superintelligence catastrophe for risk and decision analysis,,https://www.tandfonline.com/doi/full/10.1080/0952813X.2016.1186228,2017,journalArticle,"Barrett, Anthony M.; Baum, Seth D.",Journal of Experimental & Theoretical Artificial Intelligence Multi-Agent Imitation Learning for Driving Simulation,"Simulation is an appealing option for validating the safety of autonomous vehicles. Generative Adversarial Imitation Learning (GAIL) has recently been shown to learn representative human driver models. These human driver models were learned through training in single-agent environments, but they have difficulty in generalizing to multi-agent driving scenarios. We argue these difficulties arise because observations at training and test time are sampled from different distributions. This difference makes such models unsuitable for the simulation of driving scenes, where multiple agents must interact realistically over long time horizons. We extend GAIL to address these shortcomings through a parameter-sharing approach grounded in curriculum learning. 
Compared with single-agent GAIL policies, policies generated by our PS-GAIL method prove superior at interacting stably in a multi-agent setting and capturing the emergent behavior of human drivers.",http://arxiv.org/abs/1803.01044,2018,conferencePaper,"Bhattacharyya, Raunak P.; Phillips, Derek J.; Wulfe, Blake; Morton, Jeremy; Kuefler, Alex; Kochenderfer, Mykel J.",2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) The strategy-stealing assumption,"If humans initially control 99% of the world’s resources, when can they secure 99% of the long-term influence?",https://ai-alignment.com/the-strategy-stealing-assumption-a26b8b1ed334,2019,blogPost,"Christiano, Paul",AI Alignment (Medium) Guidelines for Artificial Intelligence Containment,"With almost daily improvements in capabilities of artificial intelligence it is more important than ever to develop safety software for use by the AI research community. Building on our previous work on AI Containment Problem we propose a number of guidelines which should help AI safety researchers to develop reliable sandboxing software for intelligent programs of all levels. Such safety container software will make it possible to study and analyze intelligent artificial agent while maintaining certain level of safety against information leakage, social engineering attacks and cyberattacks from within the container.",http://arxiv.org/abs/1707.08476,2017,manuscript,"Babcock, James; Kramar, Janos; Yampolskiy, Roman V.", Introduction: Open Questions in Roboethics,,http://link.springer.com/10.1007/s13347-011-0043-6,2011,journalArticle,"Sullins, John P.",Philosophy & Technology A Psychoanalytic Approach to the Singularity: Why We Cannot Do Without Auxiliary Constructions,"SummaryPsychoanalysis is known above all else for its insistence that we have motivations that are unknown to ourselves, that are unconscious. We are all subject to sickness and accident, to bad luck and unfair breaks, and above all to death as a final end to all our endeavours. In order to compensate for these disappointments and for our ultimate inability to overcome these very real and material constraints we phantasise, we dream, we create, and/or we nurse our bruised and fragile selves by hoping that our phantasies might come true, if not for ourselves then for our offspring. The singularity, as it is most commonly expressed, concerns the possibility of overcoming death by achieving a sort of immortality. In specific terms Kurtweil’s own discussion of the singularity is concerned with the possibility of ‘resurrecting’ his dead father in virtual space at least. There is consistently throughout the writings on the singularity a dismissal of the emotional aspect of human living in favour of the rational overcoming of our existential condition. I am arguing that we cannot ignore the emotional consciousness that is the bedrock of human existence and that we ignore our unconscious feelings at our peril. I think that the singularity as it is being developed is actually a direct threat to the flourishing of human beings and human society because the emotional shortcomings of the theory have not been recognised.",https://doi.org/10.1007/978-3-662-54033-6_12,2017,bookSection,"Clarke, Graham",The Technological Singularity: Managing the Journey AI Services: Introduction v1.3,"This document aims to serve as an introduction for researchers who want to study the long-term impact of AI through the lens of AI services. 
It introduces basic concepts related to these systems and gives initial observations to enhance their initial study. It points to several relevant research fields that could be leveraged to study AI services, mentions a number of problems that seem specific to this setting, and makes suggestions for future work.",https://docs.google.com/document/d/1SYgvWBe1ruDl9dQnxmjll-8COUHPycGOlLvTI68xtLA/edit?pli=1&usp=embed_facebook,2020,manuscript,"Kovarik, Vojta", My personal cruxes for working on AI safety,"The following is a heavily edited transcript of a talk I gave for the Stanford Effective Altruism club on 19 Jan 2020. I had rev.com transcribe it, and then Linchuan Zhang, Rob Bensinger and I edited it for style and clarity, and also to occasionally have me say smarter things than I actually said. Linch and I both added a few notes throughout. Thanks also to Bill Zito, Ben Weinstein-Raun, and Howie Lempel for comments. I feel slightly weird about posting something so long, but this is the natural place to put it. Over the last year my beliefs about AI risk have shifted moderately; I expect that in a year I'll think that many of the things I said here were dumb. Also, very few of the ideas here are original to me. -- After all those caveats, here's the talk: INTRODUCTION It's great to be here. I used to hang out at Stanford a lot, fun fact. I moved to America six years ago, and then in 2015, I came to Stanford EA every Sunday, and there was, obviously, a totally different crop of people there. It was really fun. I think we were a lot less successful than the current Stanford EA iteration at attracting new people. We just liked having weird conversations about weird stuff every week. It was really fun, but it's really great to come back and see a Stanford EA which is shaped differently. Today I'm going to be talking about the argument for working on AI safety that compels me to work on AI safety, rather than the argument that should compel you or anyone else. I'm going to try to spell out how the arguments are actually shaped in my head. Logistically, we're going to try to talk for about an hour with a bunch of back and forth and you guys arguing with me as we go. And at the end, I'm going to do miscellaneous Q and A for questions you might have. And I'll probably make everyone stand up and sit down again because it's unreasonable to sit in the same place for 90 minutes. META LEVEL THOUGHTS I want to first very briefly talk about some concepts I have that a",https://forum.effectivealtruism.org/posts/Ayu5im98u8FeMWoBZ/my-personal-cruxes-for-working-on-ai-safety,2020,blogPost,"Schlegeris, Buck",Effective Altruism Forum Defining human values for value learners,,,2016,conferencePaper,"Sotala, Kaj",Workshops at the Thirtieth AAAI Conference on Artificial Intelligence REALab: An Embedded Perspective on Tampering,"This paper describes REALab, a platform for embedded agency research in reinforcement learning (RL). REALab is designed to model the structure of tampering problems that may arise in real-world deployments of RL. Standard Markov Decision Process (MDP) formulations of RL and simulated environments mirroring the MDP structure assume secure access to feedback (e.g., rewards). This may be unrealistic in settings where agents are embedded and can corrupt the processes producing feedback (e.g., human supervisors, or an implemented reward function). We describe an alternative Corrupt Feedback MDP formulation and the REALab environment platform, which both avoid the secure feedback assumption. 
We hope the design of REALab provides a useful perspective on tampering problems, and that the platform may serve as a unit test for the presence of tampering incentives in RL agent designs.",http://arxiv.org/abs/2011.08820,2020,manuscript,"Kumar, Ramana; Uesato, Jonathan; Ngo, Richard; Everitt, Tom; Krakovna, Victoria; Legg, Shane", Evaluating Arguments One Step at a Time,A technical report on our experiments testing factored evaluation of structured arguments.,https://ought.org/updates/2020-01-11-arguments,2020,blogPost,Ought,Ought Adversarial Graph Embeddings for Fair Influence Maximization over Social Networks,"Influence maximization is a widely studied topic in network science, where the aim is to reach the maximum possible number of nodes, while only targeting a small initial set of individuals. It has critical applications in many fields, including viral marketing, information propagation, news dissemination, and vaccinations. However, the objective does not usually take into account whether the final set of influenced nodes is fair with respect to sensitive attributes, such as race or gender. Here we address fair influence maximization, aiming to reach minorities more equitably. We introduce Adversarial Graph Embeddings: we co-train an auto-encoder for graph embedding and a discriminator to discern sensitive attributes. This leads to embeddings which are similarly distributed across sensitive attributes. We then find a good initial set by clustering the embeddings. We believe we are the first to use embeddings for the task of fair influence maximization. While there are typically trade-offs between fairness and influence maximization objectives, our experiments on synthetic and real-world datasets show that our approach dramatically reduces disparity while remaining competitive with state-of-the-art influence maximization methods.",http://arxiv.org/abs/2005.04074,2020,conferencePaper,"Khajehnejad, Moein; Rezaei, Ahmad Asgharian; Babaei, Mahmoudreza; Hoffmann, Jessica; Jalili, Mahdi; Weller, Adrian","arXiv:2005.04074 [cs, stat]" Planning With Uncertain Specifications (PUnS),"Reward engineering is crucial to high performance in reinforcement learning systems. Prior research into reward design has largely focused on Markovian functions representing the reward. While there has been research into expressing nonMarkov rewards as linear temporal logic (LTL) formulas, this has focused on task specifications directly defined by the user. However, in many real-world applications, task specifications are ambiguous, and can only be expressed as a belief over LTL formulas. In this paper, we introduce planning with uncertain specifications (PUnS), a novel formulation that addresses the challenge posed by non-Markovian specifications expressed as beliefs over LTL formulas. We present four criteria that capture the semantics of satisfying a belief over specifications for different applications, and analyze the qualitative implications of these criteria within a synthetic domain. We demonstrate the existence of an equivalent Markov decision process (MDP) for any instance of PUnS. 
Finally, we demonstrate our approach on the real-world task of setting a dinner table automatically with a robot that inferred task specifications from human demonstrations.",http://arxiv.org/abs/1906.03218,2020,journalArticle,"Shah, Ankit; Li, Shen; Shah, Julie",IEEE Robotics and Automation Letters Does Economic History Point Toward a Singularity?,"I’ve ended up spending quite a lot of time researching premodern economic growth, as part of a hobby project that got out of hand. I’m sharing an informal but long write-up of my findings here, since I think they may be relevant to other longtermist researchers and I am unlikely to write anything more polished in the near future. Click here for the Google document.[1] SUMMARY Over the next several centuries, is the economic growth rate likely to remain steady, radically increase, or decline back toward zero? This question has some bearing on almost every long-run challenge facing the world, from climate change to great power competition to risks from AI. One way to approach the question is to consider the long-run history of economic growth. I decided to investigate the Hyperbolic Growth Hypothesis: the claim that, from at least the start of the Neolithic Revolution up until the 20th century, the economic growth rate has tended to rise in proportion with the size of the global economy.[2] This claim is made in a classic 1993 paper by Michael Kremer. Beyond influencing other work in economic growth theory, it has also recently attracted significant attention within the longtermist community, where it is typically regarded as evidence in favor of further acceleration.[3] An especially notable property of the hypothesized growth trend is that, if it had continued without pause, it would have produced infinite growth rates in the early twenty-first century. I spent time exploring several different datasets that can be used to estimate pre-modern growth rates. This included a number of recent archeological datasets that, I believe, have not previously been analyzed by economists. I wanted to evaluate both: (a) how empirically well-grounded these estimates are and (b) how clearly these estimates display the hypothesized pattern of growth. Ultimately, I found very little empirical support for the Hyperbolic Growth Hypothesis. While we can confidently say that the econo",https://forum.effectivealtruism.org/posts/CWFn9qAKsRibpCGq8/does-economic-history-point-toward-a-singularity,2020,blogPost,"Garfinkel, Ben",Effective Altruism Forum Risks and Mitigation Strategies for Oracle AI,"There is no strong reason to believe human level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed goals or motivation systems. Oracle AIs (OAI), confined AIs that can only answer questions, are one particular approach to this problem. However even Oracles are not particularly safe: humans are still vulnerable to traps, social engineering, or simply becoming dependent on the OAI. But OAIs are still strictly safer than general AIs, and there are many extra layers of precautions we can add on top of these. 
This paper looks at some of them and analyses their strengths and weaknesses.",https://doi.org/10.1007/978-3-642-31674-6_25,2013,bookSection,"Armstrong, Stuart",Philosophy and Theory of Artificial Intelligence Benign model-free RL,"Reward learning, robustness, and amplification may be sufficient to train benign model-free RL agents.",https://ai-alignment.com/benign-model-free-rl-4aae8c97e385,2017,blogPost,"Christiano, Paul",AI Alignment (Medium) Social Cohesion in Autonomous Driving,"Autonomous cars can perform poorly for many reasons. They may have perception issues, incorrect dynamics models, be unaware of obscure rules of human traffic systems, or follow certain rules too conservatively. Regardless of the exact failure mode of the car, often human drivers around the car are behaving correctly. For example, even if the car does not know that it should pull over when an ambulance races by, other humans on the road will know and will pull over. We propose to make socially cohesive cars that leverage the behavior of nearby human drivers to act in ways that are safer and more socially acceptable. The simple intuition behind our algorithm is that if all the humans are consistently behaving in a particular way, then the autonomous car probably should too. We analyze the performance of our algorithm in a variety of scenarios and conduct a user study to assess people's attitudes towards socially cohesive cars. We find that people are surprisingly tolerant of mistakes that cohesive cars might make in order to get the benefits of driving in a car with a safer, or even just more socially acceptable behavior.",http://arxiv.org/abs/1808.03845,2018,conferencePaper,"Landolfi, Nicholas C.; Dragan, Anca D.",2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) AI Chips: What They Are and Why They Matter,"The success of modern AI techniques relies on computation on a scale unimaginable even a few years ago. What exactly are the AI chips powering the development and deployment of AI at scale and why are they essential? Saif M. Khan and Alexander Mann explain how these chips work, why they have proliferated, and why they matter. Their report also surveys trends in the semiconductor industry and chip design that are shaping the evolution of AI chips.",https://cset.georgetown.edu/research/ai-chips-what-they-are-and-why-they-matter/,2020,report,"Khan, Saif M", A space of proposals for building safe advanced AI,"I liked Evan’s post on 11 proposals for safe AGI. However, I was a little confused about why he chose these specific proposals; it feels like we could generate many more by stitching together the different components he identifies, such as different types of amplification and different types of robustness tools. So I’m going to take a shot at describing a set of dimensions of variation which capture the key differences between these proposals, and thereby describe an underlying space of possible approaches to safety. Firstly I’ll quickly outline the proposals. Rohin’s overview of them is a good place to start - he categorises them as: * 7 proposals of the form “recursive outer alignment technique” plus “robustness technique”. * The recursive outer alignment technique is either debate, recursive reward modelling, or amplification.The robustness technique is either transparency tools, relaxed adversarial training, or intermittent oversight by a competent supervisor. * 2 proposals of the form “non-recursive outer alignment technique” plus “robustness technique”. 
* The outer alignment technique is either reinforcement learning in a multiagent environment, or narrow reward learning. * 2 other proposals: Microscope AI; STEM AI. More specifically, we can describe the four core recursive outer alignment techniques as variants of iterated amplification, as follows: let Amp(M) be the procedure of a human answering questions with access to model M. Then we iteratively train M* (the next version of M) by: * Imitative amplification: train M* to imitate Amp(M). * Approval-based amplification: train M* on an approval signal specified by Amp(M). * Recursive reward modelling: train M* on a reward function specified by Amp(M). * Debate: train M* to win debates against Amp(M). Here are six axes of variation which I claim underlie Evan’s proposals. Each proposal is more or less: 1. Supervised 2. Structured 3. Adversarial 4. Language-based 5.",https://www.alignmentforum.org/posts/S9GxuAEeQomnLkeNt/a-space-of-proposals-for-building-safe-advanced-ai,2020,blogPost,"Ngo, Richard",AI Alignment Forum MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies,"Humans are able to perform a myriad of sophisticated tasks by drawing upon skills acquired through prior experience. For autonomous agents to have this capability, they must be able to extract reusable skills from past experience that can be recombined in new ways for subsequent tasks. Furthermore, when controlling complex high-dimensional morphologies, such as humanoid bodies, tasks often require coordination of multiple skills simultaneously. Learning discrete primitives for every combination of skills quickly becomes prohibitive. Composable primitives that can be recombined to create a large variety of behaviors can be more suitable for modeling this combinatorial explosion. In this work, we propose multiplicative compositional policies (MCP), a method for learning reusable motor skills that can be composed to produce a range of complex behaviors. Our method factorizes an agent's skills into a collection of primitives, where multiple primitives can be activated simultaneously via multiplicative composition. This flexibility allows the primitives to be transferred and recombined to elicit new behaviors as necessary for novel tasks. We demonstrate that MCP is able to extract composable skills for highly complex simulated characters from pre-training tasks, such as motion imitation, and then reuse these skills to solve challenging continuous control tasks, such as dribbling a soccer ball to a goal, and picking up an object and transporting it to a target location.",https://proceedings.neurips.cc/paper/2019/hash/95192c98732387165bf8e396c0f2dad2-Abstract.html,2019,conferencePaper,"Peng, Xue Bin; Chang, Michael; Zhang, Grace; Abbeel, Pieter; Levine, Sergey",Advances in Neural Information Processing Systems 32 (NeurIPS 2019) Investigation into the relationship between neuron count and intelligence across differing cortical architectures,,https://aiimpacts.org/investigation-into-the-relationship-between-neuron-count-and-intelligence-across-differing-cortical-architectures/,2019,blogPost,"McCaslin, Tegan",AI Impacts A Simple Framework for Contrastive Learning of Visual Representations,"This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. 
In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.",http://arxiv.org/abs/2002.05709,2020,conferencePaper,"Chen, Ting; Kornblith, Simon; Norouzi, Mohammad; Hinton, Geoffrey",Proceedings of the 37th International Conference on Machine Learning A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics,"Machine learning (ML) is increasingly deployed in real world contexts, supplying actionable insights and forming the basis of automated decision-making systems. While issues resulting from biases pre-existing in training data have been at the center of the fairness debate, these systems are also affected by technical and emergent biases, which often arise as context-specific artifacts of implementation. This position paper interprets technical bias as an epistemological problem and emergent bias as a dynamical feedback phenomenon. In order to stimulate debate on how to change machine learning practice to effectively address these issues, we explore this broader view on bias, stress the need to reflect on epistemology, and point to value-sensitive design methodologies to revisit the design and implementation process of automated decision-making systems.",http://arxiv.org/abs/1807.00553,2018,manuscript,"Dobbe, Roel; Dean, Sarah; Gilbert, Thomas; Kohli, Nitin", "The Social Science of Computerized Brains – Review of The Age of Em: Work, Love, and Life When Robots Rule the Earth by Robin Hanson (Oxford University Press, 2016)",,https://linkinghub.elsevier.com/retrieve/pii/S0016328716302518,2017,journalArticle,"Baum, Seth D.",Futures A Hamilton-Jacobi Reachability-Based Framework for Predicting and Analyzing Human Motion for Safe Planning,"Real-world autonomous systems often employ probabilistic predictive models of human behavior during planning to reason about their future motion. Since accurately modeling the human behavior a priori is challenging, such models are often parameterized, enabling the robot to adapt predictions based on observations by maintaining a distribution over the model parameters. This leads to a probabilistic prediction problem, which even though attractive, can be computationally demanding. In this work, we formalize the prediction problem as a stochastic reachability problem in the joint state space of the human and the belief over the model parameters. 
We further introduce a Hamilton-Jacobi reachability framework which casts a deterministic approximation of this stochastic reachability problem by restricting the allowable actions to a set rather than a distribution, while still maintaining the belief as an explicit state. This leads to two advantages: our approach gives rise to a novel predictor wherein the predictions can be performed at a significantly lower computational expense, and to a general framework which also enables us to perform predictor analysis. We compare our approach to a fully stochastic predictor using Bayesian inference and the worst-case forward reachable set in simulation and in hardware, and demonstrate how it can enable robust planning while not being overly conservative, even when the human model is inaccurate.",http://arxiv.org/abs/1910.13369,2019,conferencePaper,"Bansal, Somil; Bajcsy, Andrea; Ratner, Ellis; Dragan, Anca D.; Tomlin, Claire J.",2020 IEEE International Conference on Robotics and Automation (ICRA) Risk-Sensitive Generative Adversarial Imitation Learning,"We study risk-sensitive imitation learning where the agent's goal is to perform at least as well as the expert in terms of a risk profile. We first formulate our risk-sensitive imitation learning setting. We consider the generative adversarial approach to imitation learning (GAIL) and derive an optimization problem for our formulation, which we call risk-sensitive GAIL (RS-GAIL). We then derive two different versions of our RS-GAIL optimization problem that aim at matching the risk profiles of the agent and the expert w.r.t. Jensen-Shannon (JS) divergence and Wasserstein distance, and develop risk-sensitive generative adversarial imitation learning algorithms based on these optimization problems. We evaluate the performance of our algorithms and compare them with GAIL and the risk-averse imitation learning (RAIL) algorithms in two MuJoCo and two OpenAI classical control tasks.",https://arxiv.org/abs/1808.04468v2,2018,conferencePaper,"Lacotte, Jonathan; Ghavamzadeh, Mohammad; Chow, Yinlam; Pavone, Marco",Proceedings of Machine Learning Research Evaluating and Aggregating Feature-based Model Explanations,"A feature-based model explanation denotes how much each input feature contributes to a model’s output for a given data point. As the number of proposed explanation functions grows, we lack quantitative evaluation criteria to help practitioners know when to use which explanation function. This paper proposes quantitative evaluation criteria for feature-based explanations: low sensitivity, high faithfulness, and low complexity. We devise a framework for aggregating explanation functions. We develop a procedure for learning an aggregate explanation function with lower complexity and then derive a new aggregate Shapley value explanation function that minimizes sensitivity.",http://lcfi.ac.uk/resources/evaluating-and-aggregating-feature-based-model-exp/,2020,conferencePaper,"Bhatt, Umang; Weller, Adrian; Moura, José M F", Unifying Logic and Probability: A New Dawn for AI?,,http://link.springer.com/10.1007/978-3-319-08795-5_2,2014,bookSection,"Russell, Stuart",Information Processing and Management of Uncertainty in Knowledge-Based Systems "Defence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter","We look at classifying extinction risks in three different ways, which affect how we can intervene to reduce risk. First, how does it start causing damage? 
Second, how does it reach the scale of a global catastrophe? Third, how does it reach everyone? In all of these three phases there is a defence layer that blocks most risks: First, we can prevent catastrophes from occurring. Second, we can respond to catastrophes before they reach a global scale. Third, humanity is resilient against extinction even in the face of global catastrophes. The largest probability of extinction is posed when all of these defences are weak, that is, by risks we are unlikely to prevent, unlikely to successfully respond to, and unlikely to be resilient against. We find that it’s usually best to invest significantly into strengthening all three defence layers. We also suggest ways to do so tailored to the classes of risk we identify. Lastly, we discuss the importance of underlying risk factors – events or structural conditions that may weaken the defence layers even without posing a risk of immediate extinction themselves.",https://onlinelibrary.wiley.com/doi/abs/10.1111/1758-5899.12786,2020,journalArticle,"Cotton‐Barratt, Owen; Daniel, Max; Sandberg, Anders",Global Policy Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation,"Neural networks are part of many contemporary NLP systems, yet their empirical successes come at the price of vulnerability to adversarial attacks. Previous work has used adversarial training and data augmentation to partially mitigate such brittleness, but these are unlikely to find worst-case adversaries due to the complexity of the search space arising from discrete text perturbations. In this work, we approach the problem from the opposite direction: to formally verify a system's robustness against a predefined class of adversarial attacks. We study text classification under synonym replacements or character flip perturbations. We propose modeling these input perturbations as a simplex and then using Interval Bound Propagation -- a formal model verification method. We modify the conventional log-likelihood training objective to train models that can be efficiently verified, which would otherwise come with exponential search complexity. The resulting models show only little difference in terms of nominal accuracy, but have much improved verified accuracy under perturbations and come with an efficiently computable formal guarantee on worst case adversaries.",http://arxiv.org/abs/1909.01492,2019,conferencePaper,"Huang, Po-Sen; Stanforth, Robert; Welbl, Johannes; Dyer, Chris; Yogatama, Dani; Gowal, Sven; Dvijotham, Krishnamurthy; Kohli, Pushmeet","arXiv:1909.01492 [cs, stat]" Approval-directed agency and the decision theory of Newcomb-like problems,,,2019,journalArticle,"Oesterheld, Caspar",Synthese Assuring the Behavior of Adaptive Agents,,http://link.springer.com/10.1007/1-84628-271-3_8,2006,bookSection,"Spears, Diana F.",Agent Technology from a Formal Perspective Costs of extinction risk mitigation,We very roughly estimate that the annual cost of reducing the probability of human extinction by 0.01% is within the range of $1.1 billion to $3.5 trillion. Introduction This article is intended to be usable in a Cost-Benefit Analysis (CBA) analysis of extinction risk mitigation. It explores the costs of such efforts. 
A corresponding article...",https://aiimpacts.org/costs-of-extinction-risk-mitigation/,2016,blogPost,"Wulfsohn, Michael",AI Impacts Universal Ownership in the Anthropocene,"This paper reviews the existing literature on Universal Ownership Theory and expands on it to encompass a theoretical and practical framework for Universal Owners in the Anthropocene era. This extension of the theory is necessary because of the scale and urgency of the climate crisis, on one hand, and the expansion of the category of funds considered to be Universal Owners on the other – through the rise of fiduciary capitalism and the increase in institutional ownership, and through the increasing prevalence of passive investing. This paper incorporates several novel elements into Universal Ownership Theory: an Existential Risk lens, which highlights the portfolio risk of civilisational collapse; a theoretical framework that reflects advances in behavioural science; a practical investment framework based on the tenets of Positive Investment, including asset class-specific recommendations with a focus on capital allocation, the primary-to-secondary market transition, and “ungameable” metrics; and the proposition of a “duty of expansion” for Universal Owners to extend participation to communities and regions of the world currently underrepresented among the body of large institutional investors.",https://papers.ssrn.com/abstract=3457205,2019,manuscript,"Quigley, Ellen", Learning to Generate Reviews and Discovering Sentiment,"We explore the properties of byte-level recurrent language models. When given sufficient amounts of capacity, training data, and compute time, the representations learned by these models include disentangled features corresponding to high-level concepts. Specifically, we find a single unit which performs sentiment analysis. These representations, learned in an unsupervised manner, achieve state of the art on the binary subset of the Stanford Sentiment Treebank. They are also very data efficient. When using only a handful of labeled examples, our approach matches the performance of strong baselines trained on full datasets. We also demonstrate the sentiment unit has a direct influence on the generative process of the model. Simply fixing its value to be positive or negative generates samples with the corresponding positive or negative sentiment.",http://arxiv.org/abs/1704.01444,2017,manuscript,"Radford, Alec; Jozefowicz, Rafal; Sutskever, Ilya", Beyond Winning and Losing: Modeling Human Motivations and Behaviors Using Inverse Reinforcement Learning,"In recent years, reinforcement learning (RL) methods have been applied to model gameplay with great success, achieving super-human performance in various environments, such as Atari, Go, and Poker. However, those studies mostly focus on winning the game and have largely ignored the rich and complex human motivations, which are essential for understanding different players' diverse behaviors. In this paper, we present a novel method called Multi-Motivation Behavior Modeling (MMBM) that takes the multifaceted human motivations into consideration and models the underlying value structure of the players using inverse RL. Our approach does not require the access to the dynamic of the system, making it feasible to model complex interactive environments such as massively multiplayer online games. MMBM is tested on the World of Warcraft Avatar History dataset, which recorded over 70,000 users' gameplay spanning three years period. 
Our model reveals the significant difference of value structures among different player groups. Using the results of motivation modeling, we also predict and explain their diverse gameplay behaviors and provide a quantitative assessment of how the redesign of the game environment impacts players' behaviors.",http://arxiv.org/abs/1807.00366,2018,conferencePaper,"Wang, Baoxiang; Sun, Tongfang; Zheng, Xianjun Sam",Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment The truth behind the myth of the Folk theorem,,https://linkinghub.elsevier.com/retrieve/pii/S0899825619300582,2019,journalArticle,"Halpern, Joseph Y.; Pass, Rafael; Seeman, Lior",Games and Economic Behavior Life 3.0: Being Human in the Age of Artificial Intelligence,"DAILY TELEGRAPH AND THE TIMES BOOKS OF THE YEAR 2017. 'This is the most important conversation of our time, and Tegmark's thought-provoking book will help you join it' Stephen Hawking. 'This is a rich and visionary book and everyone should read it' The Times. We stand at the beginning of a new era. What was once science fiction is fast becoming reality, as AI transforms war, crime, justice, jobs and society - and, even, our very sense of what it means to be human. More than any other technology, AI has the potential to revolutionize our collective future - and there's nobody better situated to explore that future than Max Tegmark, an MIT professor and co-founder of the Future of Life Institute, whose work has helped mainstream research on how to keep AI beneficial. In this deeply researched and vitally important new book, Tegmark takes us to the heart of thinking about AI and the human condition, bringing us face to face with the essential questions of our time. How can we grow our prosperity through automation, without leaving people lacking income or purpose? How can we ensure that future AI systems do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will AI help life flourish as never before, or will machines eventually outsmart us at all tasks, and even, perhaps, replace us altogether? Life 3.0 gives us the tools to join what may be the most important conversation of our time, guiding us through the most controversial issues around AI today -- from superintelligence to meaning, consciousness and the ultimate physical limits on life in the cosmos. What sort of future do you want?",,2017,book,"Tegmark, Max", A Framework and Method for Online Inverse Reinforcement Learning,"Inverse reinforcement learning (IRL) is the problem of learning the preferences of an agent from the observations of its behavior on a task. While this problem has been well investigated, the related problem of {\em online} IRL---where the observations are incrementally accrued, yet the demands of the application often prohibit a full rerun of an IRL method---has received relatively less attention. We introduce the first formal framework for online IRL, called incremental IRL (I2RL), and a new method that advances maximum entropy IRL with hidden variables, to this setting. Our formal analysis shows that the new method has a monotonically improving performance with more demonstration data, as well as probabilistically bounded error, both under full and partial observability. 
Experiments in a simulated robotic application of penetrating a continuous patrol under occlusion shows the relatively improved performance and speed up of the new method and validates the utility of online IRL.",http://arxiv.org/abs/1805.07871,2018,manuscript,"Arora, Saurabh; Doshi, Prashant; Banerjee, Bikramjit", "The two-layer model of human values, and problems with synthesizing preferences","I have been thinking about Stuart Armstrong's preference synthesis research agenda, and have long had the feeling that there's something off about the way that it is currently framed. In the post I try to describe why. I start by describing my current model of human values, how I interpret Stuart's implicit assumptions to conflict with it, and then talk about my confusion with regard to reconciling the two views. THE TWO-LAYER/ULM MODEL OF HUMAN VALUES In Player vs. Character: A Two-Level Model of Ethics, Sarah Constantin describes a model where the mind is divided, in game terms, into a ""player"" and a ""character"". The character is everything that we consciously experience, but our conscious experiences are not our true reasons for acting. As Sarah puts it: In many games, such as Magic: The Gathering, Hearthstone, or Dungeons and Dragons, there’s a two-phase process. First, the player constructs adeck or character from a very large sample space of possibilities. This is a particular combination of strengths and weaknesses and capabilities for action, which the player thinks can be successful against other decks/characters or at winning in the game universe. The choice of deck or character often determines the strategies that deck or character can use in the second phase, which is actual gameplay. In gameplay, the character (or deck) can only use the affordances that it’s been previously set up with. This means that there are two separate places where a player needs to get things right: first, in designing a strong character/deck, and second, in executing the optimal strategies for that character/deck during gameplay. [...]The idea is that human behavior works very much like a two-level game. [...] The player determines what we find rewarding or unrewarding. The player determines what we notice and what we overlook; things come to our attention if it suits the player’s strategy, and not otherwise. The player gives us emotions when it’s strategic to do so. The playe",https://www.alignmentforum.org/posts/2yLn8iTrvHoEgqXcJ/the-two-layer-model-of-human-values-and-problems-with,2020,blogPost,"Sotala, Kaj",AI Alignment Forum When Is a Linear Control System Optimal?,"The purpose of this paper is to formulate, study, and (in certain cases) resolve the Inverse Problem of Optimal Control Theory, which is the following: Given a control law, find all performance indices for which this control law is optimal. Under the assumptions of (a) linear constant plant, (b) linear constant control law, (c) measurable state variables, (d) quadratic loss functions with constant coefficients, (e) single control variable, we give a complete analysis of this problem and obtain various explicit conditions for the optimality of a given control law. An interesting feature of the analysis is the central role of frequency-domain concepts, which have been ignored in optimal control theory until very recently. The discussion is presented in rigorous mathematical form. 
The central conclusion is the following (Theorem 6): A stable control law is optimal if and only if the absolute value of the corresponding return difference is at least equal to one at all frequencies. This provides a beautifully simple connecting link between modern control theory and the classical point of view which regards feedback as a means of reducing component variations.",https://asmedigitalcollection.asme.org/fluidsengineering/article/86/1/51/392203/When-Is-a-Linear-Control-System-Optimal,1964,journalArticle,"Kalman, R. E.",Journal of Basic Engineering Immigration Policy and the U.S. AI Sector: A Preliminary Assessment,,,2019,report,"Arnold, Zachary; Heston, Roxanne; Zwetsloot, Remco; Huang, Tina", On the Existence of Nash Equilibrium in Games with Resource-Bounded Players,,http://link.springer.com/10.1007/978-3-030-30473-7_10,2019,conferencePaper,"Halpern, Joseph Y.; Pass, Rafael; Reichman, Daniel",Algorithmic Game Theory Problems in AI Alignment that philosophers could potentially contribute to,"(This was originally a comment that I wrote as a follow up to my question [https://ea.greaterwrong.com/posts/oPGJrqohDqT8GZieA/ask-me-anything/comment/cL3KFfJwtHZKeBPGH] for William MacAskill's AMA. I'm moving it since it's perhaps more on-topic here.) It occurs to me that another reason for the lack of engagement by people with philosophy backgrounds may be that philosophers aren't aware of the many philosophical problems in AI alignment that they could potentially contribute to. So here's a list of philosophical problems that have come up just in my own thinking about AI alignment. * Decision theory for AI / AI designers * How to resolve standard debates in decision theory? * Logical counterfactuals * Open source game theory * Acausal game theory / reasoning about distant superintelligences * Infinite/multiversal/astronomical ethics * Should we (or our AI) care much more about a universe that is capable of doing a lot more computations? * What kinds of (e.g. spatial-temporal) discounting is necessary and/or desirable? * Fair distribution of benefits * How should benefits from AGI be distributed? * For example, would it be fair to distribute it equally over all humans who currently exist, or according to how much AI services they can afford to buy? * What about people who existed or will exist at other times and in other places or universes? * Need for ""metaphilosophical paternalism""? * However we distribute the benefits, if we let the beneficiaries decide what to do with their windfall using their own philosophical faculties, is that likely to lead to a good outcome? * Metaphilosophy * What is the nature of philosophy? * What constitutes correct philosophical reasoning? * How to specify this into an AI design? * Philosophical forecasting * How are various AI technologies and AI safety proposals likely to affect future p",https://www.lesswrong.com/posts/rASeoR7iZ9Fokzh7L/problems-in-ai-alignment-that-philosophers-could-potentially,2019,blogPost,"Dai, Wei",LessWrong Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments,"Ability to continuously learn and adapt from limited experience in nonstationary environments is an important milestone on the path towards general intelligence. In this paper, we cast the problem of continuous adaptation into the learning-to-learn framework. We develop a simple gradient-based meta-learning algorithm suitable for adaptation in dynamically changing and adversarial scenarios. 
Additionally, we design a new multi-agent competitive environment, RoboSumo, and define iterated adaptation games for testing various aspects of continuous adaptation strategies. We demonstrate that meta-learning enables significantly more efficient adaptation than reactive baselines in the few-shot regime. Our experiments with a population of agents that learn and compete suggest that meta-learners are the fittest.",http://arxiv.org/abs/1710.03641,2018,conferencePaper,"Al-Shedivat, Maruan; Bansal, Trapit; Burda, Yuri; Sutskever, Ilya; Mordatch, Igor; Abbeel, Pieter",arXiv:1710.03641 [cs] A Gym Gridworld Environment for the Treacherous Turn,"EDIT: posted here for feedback and discussion. I plan to continue working on different models/environments, so feel free to suggest improvements. (tl;dr: In an attempt to better understand the treacherous turn, I created a gridworld environment where an agent learns to deceive an overseer by adopting an aligned behaviour when weak and takes control after capability gains) -------------------------------------------------------------------------------- At some point in its development, a seed AI may realize that it needs to get rid of its supervisors to achieve its goals. The conception of deception occurs when it conceives that, in order to maximize its chance of taking over, it must begin by exhibiting human-desirable behaviors, before undertaking a treacherous turn when humans are no longer a threat. From the human perspective, the AI would keep on exhibiting desirable behavior, until it eventually appears dangerous, but is already unstoppable. In an attempt to better formalize the treacherous turn without using ""loaded concepts"", Stuart Armstrong proposed a toy model of the treacherous turn based on ""The Legend of Zelda: A Link to the Past "", which looked like this: In the comments, people mentionned how this model helped them ""move the topic from the 'science fiction' area to 'I can imagine it happening now'"", and seemed interested in an actual Link to the Past Minigame. There have been other simulations of the treacherous turn in the last three years (see for instance gwern's DQN box-pushing robot or Stuart Armstrong's video), but none of them actually simulate a take over where a supervisor is killed. Hence, I decided to give it a try and simulate Stuart Armstrong's Link to the Past toy model. A GYM GRIDWORLD ENVIRONMENT Gym is an open-source toolkit for Reinforcement Learning Environments developed by Open AI. I decided to use this interface to develop the gridworld environment. The github repository with the code, demo, and all the details is",https://www.alignmentforum.org/posts/cKfryXvyJ522iFuNF/a-gym-gridworld-environment-for-the-treacherous-turn,2018,blogPost,"Trazzi, Michaël",AI Alignment Forum Cooperative Inverse Reinforcement Learning,"For an autonomous system to be helpful to humans and to pose no unwarranted risks, it needs to align its values with those of the humans in its environment in such a way that its actions contribute to the maximization of value for the humans. We propose a formal definition of the value alignment problem as cooperative inverse reinforcement learning (CIRL). A CIRL problem is a cooperative, partial-information game with two agents, human and robot; both are rewarded according to the human's reward function, but the robot does not initially know what this is. 
In contrast to classical IRL, where the human is assumed to act optimally in isolation, optimal CIRL solutions produce behaviors such as active teaching, active learning, and communicative actions that are more effective in achieving value alignment. We show that computing optimal joint policies in CIRL games can be reduced to solving a POMDP, prove that optimality in isolation is suboptimal in CIRL, and derive an approximate CIRL algorithm.",https://proceedings.neurips.cc/paper/2016/hash/c3395dd46c34fa7fd8d729d8cf88b7a8-Abstract.html,2016,conferencePaper,"Hadfield-Menell, Dylan; Dragan, Anca; Abbeel, Pieter; Russell, Stuart",Advances in Neural Information Processing Systems 29 (NIPS 2016) 2016 AI Risk Literature Review and Charity Comparison,"INTRODUCTION I've long been concerned about AI Risk. Now that there are a few charities working on the problem, it seems desirable to compare them, to determine where scarce donations should be sent. This is a similar role to that which GiveWell performs for global health charities, and somewhat similar to an securities analyst with regard possible investments. However, while people have evaluated individual organisations, I haven't seen anyone else attempt to compare them, so hopefully this is valuable to others. I've attempted to do so. This is a very big undertaking, and I am very conscious of the many ways in which this is not up to the task. The only thing I wish more than the skill and time to do it better is that someone else would do it! If people find this useful enough to warrant doing again next year I should be able to do it much more efficiently, and spend more time on the underlying model of how papers translate into risk-reduction value. My aim is basically to judge the output of each organisation in 2016 and compare it to their budget. This should give a sense for the organisations' average cost-effectiveness. Then we can consider factors that might increase or decrease the marginal cost-effectiveness going forward. This organisation-centric approach is in contrast to a researcher-centric approach, where we would analyse which researchers do good work, and then donate wherever they are. An extreme version of the other approach would be to simply give money directly to researchers - e.g if I like Logical Induction, I would simply fund Scott Garrabrant directly and ignore MIRI. I favour the organisation-centric approach because it helps keep organisations accountable. Additionally, if researcher skill is the only thing that matters for research output, it doesn't really matter which organisations end up getting the money and employing the researchers, assuming broadly the same researchers are hired. Different organisations might hire different resea",https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison,2016,blogPost,Larks,Effective Altruism Forum "The law of effect, randomization and Newcomb’s problem","The law of effect (LoE), as introduced on p. 244 of Thorndike’s (1911) Animal Intelligence, states: Of several responses made to the same situation, those which are accompanied or closely followed …",https://casparoesterheld.com/2018/02/15/the-law-of-effect-randomization-and-newcombs-problem/,2018,blogPost,"Oesterheld, Caspar",The Universe from an Intentional Stance User-Agent Value Alignment,"The principal-agent problem concerns delegation in the absence of trust. 
Given a principal and an agent with different value structures, the principal wants to motivate the agent to address the principal’s aims by providing appropriate incentives. We address this problem in the context of a real-world complication, where the principal and agent lack a common problem frame. This context is especially relevant when the principal is a user, and the agent is a technological artifact with a limited repertoire of percepts and actions. We identify necessary conditions for establishing trust between such disparate actors, and we show, via a constructive proof, that it is always possible to create these necessary conditions. We conclude with several distinctions that let the principal rank the expected quality of agent behavior.",,2002,conferencePaper,"Shapiro, Daniel; Shachter, Ross","Proceedings of the 18th National Conference on Artificial Intelligence, AAAI" When is diminishment a form of enhancement? Rethinking the enhancement debate in biomedical ethics,,,2014,journalArticle,"Earp, Brian D.; Sandberg, Anders; Kahane, Guy; Savulescu, Julian",Frontiers in Systems Neuroscience An Abstraction-Refinement Approach to Verification of Artificial Neural Networks,,http://link.springer.com/10.1007/978-3-642-14295-6_24,2010,bookSection,"Pulina, Luca; Tacchella, Armando",Computer Aided Verification The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents,,,2012,journalArticle,"Bostrom, Nick",Minds and Machines Specification gaming examples in AI,"Update: for a more detailed introduction to specification gaming, check out the DeepMind Safety Research blog post! Various examples (and lists of examples) of unintended behaviors in AI systems ha…",https://vkrakovna.wordpress.com/2018/04/02/specification-gaming-examples-in-ai/,2018,blogPost,"Krakovna, Victoria",Victoria Krakovna Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence,"In this paper we focus on the most efficacious AI applications for life extension and anti-aging at three expected stages of AI development: narrow AI, AGI and superintelligence. First, we overview the existing research and commercial work performed by a select number of startups and academic projects. We find that at the current stage of “narrow” AI, the most promising areas for life extension are geroprotector-combination discovery, detection of aging biomarkers, and personalized anti-aging therapy. These advances could help currently living people reach longevity escape velocity and survive until more advanced AI appears. When AI comes close to human level, the main contribution to life extension will come from AI integration with humans through brain-computer interfaces, integrated AI assistants capable of autonomously diagnosing and treating health issues, and cyber systems embedded into human bodies. Lastly, we speculate about the more remote future, when AI reaches the level of superintelligence and such life-extension methods as uploading human minds and creating nanotechnological bodies may become possible, thus lowering the probability of human death close to zero. We conclude that medical AI based superintelligence is intrinsically safer than, say, military AI, as it may help humans to evolve into part of the future superintelligence via brain augmentation, uploading, and a network of self-improving humans. 
Medical AI’s value system is focused on human benefit.",http://www.informatica.si/index.php/informatica/article/view/1797,2017,journalArticle,"Batin, Mikhail; Turchin, Alexey; Sergey, Markov; Zhila, Alisa; Denkenberger, David",Informatica Teaching Astrobiology in a Sustainability Course,,,2013,journalArticle,"Baum, Seth D.",Journal of Sustainability Education Synthesizing amplification and debate,"BACKGROUND One possible way to train an amplification model is to use an auxiliary reinforcement learning objective to help guide the training of the amplification model. This could be done either by training two separate models, an agent and a question-answerer, or a single model trained on a joint objective. For example, from a comment Paul left on “A dilemma for prosaic AI alignment:” I normally imagine using joint training in these cases, rather than pre-training + fine-tuning. e.g., at every point in time we maintain an agent and a question-answerer, where the question-answerer ""knows everything the agent knows."" They get better together, with each gradient update affecting both of them, rather than first training a good agent and then adding a good question-answerer. (Independently of concerns about mesa-optimization, I think the fine-tuning approach would have trouble because you couldn't use statistical regularities from the ""main"" objective to inform your answers to questions, and therefore your question answers will be dumber than the policy and so you couldn't get a good reward function or specification of catastrophically bad behavior.) In my last post, I expressed skepticism of such non-imitative amplification approaches, though in this post I want to propose a possible way in which some of my concerns with this style of approach could addressed by integrating ideas from AI safety via debate. I'll start by describing the basic idea in broad terms, then give a more careful, technical description of the sort of training procedure I have in mind. THE PROPOSAL The basic idea is as follows: debate naturally yields an RL objective, so if you want to add an auxiliary RL objective to amplification, why not use the RL objective from debate? Specifically, the idea is to conduct a debate not between copies of the model M, but between copies of the amplified model Amp(M) (where Amp(M) is a human with access to the model M). That gives you both an RL reward ari",https://www.alignmentforum.org/posts/dJSD5RK6Qoidb3QY5/synthesizing-amplification-and-debate,2020,blogPost,"Hubinger, Evan",AI Alignment Forum The human side of interaction,"The last few posts have motivated an analysis of the human-AI system rather than an AI system in isolation. So far we’ve looked at the notion that the AI system should get feedback from the user and that it could use reward uncertainty for corrigibility. These are focused on the AI system, but what about the human? If we build a system that explicitly solicits feedback from the human, what do we have to say about the human policy, and how the human should provide feedback? INTERPRETING HUMAN ACTIONS One major free variable in any explicit interaction or feedback mechanism is what semantics the AI system should attach to the human feedback. The classic examples of AI risk are usually described in a way where this is the problem: when we provide a reward function that rewards paperclips, the AI system interprets it literally and maximizes paperclips, rather than interpreting it pragmatically as another human would. 
(Aside: I suspect this was not the original point of the paperclip maximizer, but it has become a very popular retelling, so I’m using it anyway.) Modeling this classic example as a human-AI system, we can see that the problem is that the human is offering a form of “feedback”, the reward function, and the AI system is not ascribing the correct semantics to it. The way it uses the reward function implies that the reward function encodes the optimal behavior of the AI system in all possible environments -- a moment’s thought is sufficient to see that this is not actually the case. There will definitely be many cases and environments that the human did not consider when designing the reward function, and we should not expect that the reward function incentivizes the right behavior in those cases. So what can the AI system assume if the human provides it a reward function? Inverse Reward Design (IRD) offers one answer: the human is likely to provide a particular reward function if it leads to high true utility behavior in the training environment. So, in",https://www.alignmentforum.org/posts/eD9T4kiwB6MHpySGE/the-human-side-of-interaction,2019,blogPost,"Shah, Rohin",AI Alignment Forum Learning Robot Objectives from Physical Human Interaction,"When humans and robots work in close proximity, physical interaction is inevitable. Traditionally, robots treat physical interaction as a disturbance, and resume their original behavior after the interaction ends. In contrast, we argue that physical human interaction is informative: it is useful information about how the robot should be doing its task. We formalize learning from such interactions as a dynamical system in which the task objective has parameters that are part of the hidden state, and physical human interactions are observations about these parameters. We derive an online approximation of the robot’s optimal policy in this system, and test it in a user study. The results suggest that learning from physical interaction leads to better robot task performance with less human effort.",,2017,conferencePaper,"Bajcsy, Andrea; Losey, Dylan P; O’Malley, Marcia K; Dragan, Anca D",Proceedings of Machine Learning Research In defence of fanaticism,,https://globalprioritiesinstitute.org/wp-content/uploads/Hayden-Wilkinson_In-defence-of-fanaticism.pdf,2020,report,"Wilkinson, Hayden", Immigration and the Future of U.S. AI,"As other countries increase their efforts to compete for tech talent, the United States must reform outdated, counterproductive immigration laws and avoid restrictive new policies that will place the United States at an economic and national security disadvantage.",https://morningconsult.com/opinions/immigration-and-the-future-of-u-s-ai/,2019,blogPost,"Arnold, Zachary; Huang, Tina; Zwetsloot, Remco",Morning Consult Clarifying “What failure looks like” (part 1),"Thanks to Jess Whittlestone, Daniel Eth, Shahar Avin, Rose Hadshar, Eliana Lorch, Alexis Carlier, Flo Dorner, Kwan Yee Ng, Lewis Hammond, Phil Trammell and Jenny Xiao for valuable conversations, feedback and other support. I am especially grateful to Jess Whittlestone for long conversations and detailed feedback on drafts, and her guidance on which threads to pursue and how to frame this post. All errors are my own. Epistemic status: My Best Guess Epistemic effort: ~70 hours of focused work (mostly during FHI’s summer research fellowship), talked to ~10 people. 
INTRODUCTION “What failure looks like” is the one of the most comprehensive pictures of what failure to solve the AI alignment problem looks like, in worlds without discontinuous progress in AI. I think it was an excellent and much-needed addition to our understanding of AI risk. Still, if many believe that this is a main source of AI risk, I think it should be fleshed out in more than just one blog post. The original story has two parts; I’m focusing on part 1 because I found it more confusing and nebulous than part 2. Firstly, I’ll summarise part 1 (hereafter “WFLL1”) as I understand it: * In the world today, it’s easier to pursue easy-to-measure goals than hard-to-measure goals. * Machine learning is differentially good at pursuing easy-to-measure goals (assuming that we don’t have a satisfactory technical solution to the intent alignment problem[1]). * We’ll try to harness this by designing easy-to-measure proxies for what we care about, and deploy AI systems across society which optimize for these proxies (e.g. in law enforcement, legislation and the market). * We’ll give these AI systems more and more influence (e.g. eventually, the systems running law enforcement may actually be making all the decisions for us). * Eventually, the proxies for which the AI systems are optimizing will come apart from the goals we truly care about, but by t",https://www.alignmentforum.org/posts/v6Q7T335KCMxujhZu/clarifying-what-failure-looks-like-part-1,2020,blogPost,"Clarke, Sam",AI Alignment Forum An Orthodox Case Against Utility Functions,"This post has benefitted from discussion with Sam Eisenstat, Scott Garrabrant, Tsvi Benson-Tilsen, Daniel Demski, Daniel Kokotajlo, and Stuart Armstrong. It started out as a thought about Stuart Armstrong's research agenda. In this post, I hope to say something about what it means for a rational agent to have preferences. The view I am putting forward is relatively new to me, but it is not very radical. It is, dare I say, a conservative view -- I hold close to Bayesian expected utility theory. However, my impression is that it differs greatly from common impressions of Bayesian expected utility theory. I will argue against a particular view of expected utility theory -- a view which I'll call reductive utility. I do not recall seeing this view explicitly laid out and defended (except in in-person conversations). However, I expect at least a good chunk of the assumptions are commonly made. REDUCTIVE UTILITY The core tenets of reductive utility are as follows: * The sample space Ω of a rational agent's beliefs is, more or less, the set of possible ways the world could be -- which is to say, the set of possible physical configurations of the universe. Hence, each world ω∈Ω is one such configuration. * The preferences of a rational agent are represented by a utility function U:Ω →R from worlds to real numbers. * Furthermore, the utility function should be a computable function of worlds. Since I'm setting up the view which I'm knocking down, there is a risk I'm striking at a straw man. However, I think there are some good reasons to find the view appealing. The following subsections will expand on the three tenets, and attempt to provide some motivation for them. If the three points seem obvious to you, you might just skip to the next section. WORLDS ARE BASICALLY PHYSICAL What I mean here resembles the standard physical-reductionist view. 
However, my emphasis is on certain features of this view: * There is some ""basic stuff"" -- like like quarks",https://www.alignmentforum.org/posts/A8iGaZ3uHNNGgJeaD/an-orthodox-case-against-utility-functions,2020,blogPost,"Demski, Abram",AI Alignment Forum "Subagents, akrasia, and coherence in humans","In my previous posts, I have been building up a model of mind as a collection of subagents with different goals, and no straightforward hierarchy. This then raises the question of how that collection of subagents can exhibit coherent behavior: after all, many ways of aggregating the preferences of a number of agents fail to create consistent preference orderings. We can roughly describe coherence as the property that, if you become aware that there exists a more optimal strategy for achieving your goals than the one that you are currently executing, then you will switch to that better strategy. If an agent is not coherent in this way, then bad things are likely to happen to them. Now, we all know that humans sometimes express incoherent behavior. But on the whole, people still do okay: the median person in a developed country still manages to survive until their body starts giving up on them, and typically also manages to have and raise some number of initially-helpless children until they are old enough to take care of themselves. For a subagent theory of mind, we would like to have some explanation of when exactly the subagents manage to be collectively coherent (that is, change their behavior to some better one), and what are the situations in which they fail to do so. The conclusion of this post will be: We are capable of changing our behaviors on occasions when the mind-system as a whole puts sufficiently high probability on the new behavior being better, when the new behavior is not being blocked by a particular highly weighted subagent (such as an IFS-style protector) that puts high probability on it being bad, and when we have enough slack in our lives for any new behaviors to be evaluated in the first place. Akrasia is subagent disagreement about what to do.CORRECTING YOUR BEHAVIOR AS A DEFAULT There are many situations in which we exhibit incoherent behavior simply because we’re not aware of it. For instance, suppose that I do my daily chores in a p",https://www.lesswrong.com/posts/oJwJzeZ6ar2Hr7KAX/subagents-akrasia-and-coherence-in-humans,2019,blogPost,"Sotala, Kaj",LessWrong Artificial Intelligence Safety and Cybersecurity: a Timeline of AI Failures,"In this work, we present and analyze reported failures of artificially intelligent systems and extrapolate our analysis to future AIs. We suggest that both the frequency and the seriousness of future AI failures will steadily increase. AI Safety can be improved based on ideas developed by cybersecurity experts. For narrow AIs safety failures are at the same, moderate, level of criticality as in cybersecurity, however for general AI, failures have a fundamentally different impact. A single failure of a superintelligent system may cause a catastrophic event without a chance for recovery. The goal of cybersecurity is to reduce the number of successful attacks on the system; the goal of AI Safety is to make sure zero attacks succeed in bypassing the safety mechanisms. Unfortunately, such a level of performance is unachievable. Every security system will eventually fail; there is no such thing as a 100% secure system.",http://arxiv.org/abs/1610.07997,2016,manuscript,"Yampolskiy, Roman V.; Spellchecker, M. 
S.", Artificial Intelligence and the Common Sense of Animals,"The problem of common sense remains a major obstacle to progress in artificial intelligence. Here, we argue that common sense in humans is founded on a set of basic capacities that are possessed by many other animals, capacities pertaining to the understanding of objects, space, and causality. The field of animal cognition has developed numerous experimental protocols for studying these capacities and, thanks to progress in deep reinforcement learning (RL), it is now possible to apply these methods directly to evaluate RL agents in 3D environments. Besides evaluation, the animal cognition literature offers a rich source of behavioural data, which can serve as inspiration for RL tasks and curricula.",http://www.sciencedirect.com/science/article/pii/S1364661320302163,2020,journalArticle,"Shanahan, Murray; Crosby, Matthew; Beyret, Benjamin; Cheke, Lucy",Trends in Cognitive Sciences Perfectionism and the Repugnant Conclusion,"The Repugnant Conclusion and its paradoxes pose a significant problem for outcome evaluation. Derek Parfit has suggested that we may be able to resolve this problem by accepting a view he calls ‘Perfectionism’, which gives lexically superior value to ‘the best things in life’. In this paper, I explore perfectionism and its potential to solve this problem. I argue that perfectionism provides neither a sufficient means of avoiding the Repugnant Conclusion nor a full explanation of its repugnance. This is because even lives that are ‘barely worth living’ may contain the best things in life if they also contain sufficient ‘bad things’, such as suffering or frustration. Therefore, perfectionism can only fully explain or avoid the Repugnant Conclusion if combined with other claims, such as that bad things have an asymmetrical value relative to many good things. This combined view faces the objection that any such asymmetry implies Parfit’s ‘Ridiculous Conclusion’. However, I argue that perfectionism itself faces very similar objections, and that these are question-begging against both views. Finally, I show how the combined view that I propose not only explains and avoids the Repugnant Conclusion but also allows us to escape many of its paradoxes as well.",https://doi.org/10.1007/s10790-019-09687-4,2019,journalArticle,"Beard, Simon",The Journal of Value Inquiry Towards formalizing universality,An attempt to formalize universality as “able to understand anything that any computation can understand.”,https://ai-alignment.com/towards-formalizing-universality-409ab893a456,2019,blogPost,"Christiano, Paul",AI Alignment (Medium) A non-mystical explanation of insight meditation and the three characteristics of existence: introduction and preamble,"INTRODUCTION Insight meditation, enlightenment, what’s that all about? The sequence of posts starting from this one is my personal attempt at answering that question. It grew out of me being annoyed about so much of this material seeming to be straightforwardly explainable in non-mysterious terms, but me also being unable to find any book or article that would do this to my satisfaction. In particular, I wanted something that would: * Explain what kinds of implicit assumptions build up our default understanding of reality and how those assumptions are subtly flawed. It would then point out aspects from our experience whose repeated observation will update those assumptions, and explain how this may cause psychological change in someone who meditates. 
* It would also explain how the so-called “three characteristics of existence” of Buddhism - impermanence, no-self and unsatisfactoriness - are all interrelated and connected with each other in a way your average Western science-minded, allergic-to-mysticism reader can understand. I failed to find a resource that would do this in the way I had in mind, so then I wrote one myself. From the onset, I want to note that I am calling this a non-mystical take on the three characteristics, rather than the non-mystical take on the three characteristics. This is an attempt to explain what I personally think is going on, and to sketch out an explanation of how various experiences and Buddhist teachings could be understandable in straightforward terms. I don’t expect this to be anything like a complete or perfect explanation, but rather one particular model that might be useful. The main intent of this series is summarized by a comment written by Vanessa Kosoy, justifiably skeptical of grandiose claims about enlightenment that are made without further elaboration on the actual mechanisms of it: I think that the only coherent way to convince us that Enlightenment is real is to provide a model from a 3r",https://www.lesswrong.com/posts/Mf2MCkYgSZSJRz5nM/a-non-mystical-explanation-of-insight-meditation-and-the,2020,blogPost,"Sotala, Kaj",LessWrong The State of AI Ethics,,https://montrealethics.ai/wp-content/uploads/2020/06/State-of-AI-Ethics-June-2020-report.pdf,2020,report,"Gupta, Abhishek; Ganapini, Marianna; Butalid, Renjie; Lanteigne, Camylle; Cohen, Allison; Akif, Mo; De Gasperis, Tania; Heath, Victoria; Galinkin, Erick", Hierarchically Decoupled Imitation for Morphological Transfer,"Learning long-range behaviors on complex high-dimensional agents is a fundamental problem in robot learning. For such tasks, we argue that transferring learned information from a morphologically simpler agent can massively improve the sample efficiency of a more complex one. To this end, we propose a hierarchical decoupling of policies into two parts: an independently learned low-level policy and a transferable high-level policy. To remedy poor transfer performance due to mismatch in morphologies, we contribute two key ideas. First, we show that incentivizing a complex agent's low-level to imitate a simpler agent's low-level significantly improves zero-shot high-level transfer. Second, we show that KL-regularized training of the high level stabilizes learning and prevents mode-collapse. Finally, on a suite of publicly released navigation and manipulation environments, we demonstrate the applicability of hierarchical transfer on long-range tasks across morphologies. 
Our code and videos can be found at https://sites.google.com/berkeley.edu/morphology-transfer.",http://arxiv.org/abs/2003.01709,2020,conferencePaper,"Hejna III, Donald J.; Abbeel, Pieter; Pinto, Lerrel",ICML 2020 Approval-maximizing representations,"If we train our agents with human oversight, can they learn superhuman representations?",https://ai-alignment.com/approval-maximizing-representations-56ee6a6a1fe6,2017,blogPost,"Christiano, Paul",AI Alignment (Medium) Preparing for the Future of Artificial Intelligence,,https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf,2016,report,Executive Office of the President National Science and Technology Council Committee on Technology, Technological Singularity,,,2017,book,"Callaghan, Vic; Miller, James; Yampolskiy, Roman; Armstrong, Stuart", “Unsupervised” translation as an (intent) alignment problem,Unsupervised translation is an interesting domain where models seem to “know” something we can’t get them to tell us.,https://ai-alignment.com/unsupervised-translation-as-a-safety-problem-99ae1f9b6b68,2020,blogPost,"Christiano, Paul",AI Alignment (Medium) A Dual Approach to Scalable Verification of Deep Networks,"This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, network architecture and the complexity of properties to be verified. In contrast, our framework applies to a general class of activation functions and specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking to find the largest violation of the specification) and solve a Lagrangian relaxation of the optimization problem to obtain an upper bound on the worst case violation of the specification being verified. Our approach is anytime i.e. it can be stopped at any time and a valid bound on the maximum violation can be obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.",http://auai.org/uai2018/proceedings/papers/204.pdf,2018,conferencePaper,"Dvijotham, Krishnamurthy; Stanforth, Robert; Gowal, Sven; Mann, Timothy; Kohli, Pushmeet", On Consensus and Humming in the IETF,,https://www.rfc-editor.org/info/rfc7282,2014,report,"Resnick, P.", Hierarchical Game-Theoretic Planning for Autonomous Vehicles,"The actions of an autonomous vehicle on the road affect and are affected by those of other drivers, whether overtaking, negotiating a merge, or avoiding an accident. This mutual dependence, best captured by dynamic game theory, creates a strong coupling between the vehicle’s planning and its predictions of other drivers’ behavior, and constitutes an open problem with direct implications on the safety and viability of autonomous driving technology. Unfortunately, dynamic games are too computationally demanding to meet the real-time constraints of autonomous driving in its continuous state and action space. 
In this paper, we introduce a novel game-theoretic trajectory planning algorithm for autonomous driving, that enables real-time performance by hierarchically decomposing the underlying dynamic game into a long-horizon “strategic” game with simplified dynamics and full information structure, and a short-horizon “tactical” game with full dynamics and a simplified information structure. The value of the strategic game is used to guide the tactical planning, implicitly extending the planning horizon, pushing the local trajectory optimization closer to global solutions, and, most importantly, quantitatively accounting for the autonomous vehicle and the human driver’s ability and incentives to influence each other. In addition, our approach admits non-deterministic models of human decisionmaking, rather than relying on perfectly rational predictions. Our results showcase richer, safer, and more effective autonomous behavior in comparison to existing techniques.",http://arxiv.org/abs/1810.05766,2018,conferencePaper,"Fisac, Jaime F.; Bronstein, Eli; Stefansson, Elis; Sadigh, Dorsa; Sastry, S. Shankar; Dragan, Anca D.",Robotics: Science and Systems 2019 Is it a bias or just a preference? An interesting issue in preference idealization,"When taking others’ preferences into account, we will often want to idealize them rather than taking them too literally. Consider the following example. You hold a glass of transparent liquid…",https://casparoesterheld.com/2017/01/18/is-it-a-bias-or-just-a-preference-an-interesting-issue-in-preference-idealization/,2017,blogPost,"Oesterheld, Caspar",The Universe from an Intentional Stance ProMP: Proximal Meta-Policy Search,"Credit assignment in Meta-reinforcement learning (Meta-RL) is still poorly understood. Existing methods either neglect credit assignment to pre-adaptation behavior or implement it naively. This leads to poor sample-efficiency during meta-training as well as ineffective task identification strategies. This paper provides a theoretical analysis of credit assignment in gradient-based Meta-RL. Building on the gained insights we develop a novel meta-learning algorithm that overcomes both the issue of poor credit assignment and previous difficulties in estimating meta-policy gradients. By controlling the statistical distance of both pre-adaptation and adapted policies during meta-policy search, the proposed algorithm endows efficient and stable meta-learning. Our approach leads to superior pre-adaptation policy behavior and consistently outperforms previous Meta-RL algorithms in sample-efficiency, wall-clock time, and asymptotic performance.",http://arxiv.org/abs/1810.06784,2018,conferencePaper,"Rothfuss, Jonas; Lee, Dennis; Clavera, Ignasi; Asfour, Tamim; Abbeel, Pieter",32nd Conference on Neural Information Processing Systems (NIPS 2018) Surveys on fractional progress towards HLAI,"How long until human-level performance, if we naively extrapolate progress since researchers joined their subfields?",https://aiimpacts.org/surveys-on-fractional-progress-towards-hlai/,2020,blogPost,"Bergal, Asya",AI Impacts Could artificial intelligence create an unemployment crisis?,,https://dl.acm.org/doi/10.1145/2483852.2483865,2013,journalArticle,"Ford, Martin",Communications of the ACM Cost Functions for Robot Motion Style,"We focus on autonomously generating robot motion for day to day physical tasks that is expressive of a certain style or emotion. 
Because we seek generalization across task instances and task types, we propose to capture style via cost functions that the robot can use to augment its nominal task cost and task constraints in a trajectory optimization process. We compare two approaches to representing such cost functions: a weighted linear combination of hand-designed features, and a neural network parameterization operating on raw trajectory input. For each cost type, we learn weights for each style from user feedback. We contrast these approaches to a nominal motion across different tasks and for different styles in a user study, and find that they both perform on par with each other, and significantly outperform the baseline. Each approach has its advantages: featurized costs require learning fewer parameters and can perform better on some styles, but neural network representations do not require expert knowledge to design features and could even learn more complex, nuanced costs than an expert can easily design.",http://arxiv.org/abs/1809.00092,2018,conferencePaper,"Zhou, Allan; Dragan, Anca D.",2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Inductive Coherence,"While probability theory is normally applied to external environments, there has been some recent interest in probabilistic modeling of the outputs of computations that are too expensive to run. Since mathematical logic is a powerful tool for reasoning about computer programs, we consider this problem from the perspective of integrating probability and logic. Recent work on assigning probabilities to mathematical statements has used the concept of coherent distributions, which satisfy logical constraints such as the probability of a sentence and its negation summing to one. Although there are algorithms which converge to a coherent probability distribution in the limit, this yields only weak guarantees about finite approximations of these distributions. In our setting, this is a significant limitation: Coherent distributions assign probability one to all statements provable in a specific logical theory, such as Peano Arithmetic, which can prove what the output of any terminating computation is; thus, a coherent distribution must assign probability one to the output of any terminating computation. To model uncertainty about computations, we propose to work with approximations to coherent distributions. We introduce inductive coherence, a strengthening of coherence that provides appropriate constraints on finite approximations, and propose an algorithm which satisfies this criterion.",http://arxiv.org/abs/1604.05288,2016,manuscript,"Garrabrant, Scott; Fallenstein, Benya; Demski, Abram; Soares, Nate", Humans can be assigned any values whatsoever…,"(Re)Posted as part of the AI Alignment Forum sequence on Value Learning. Rohin’s note: In the last post, we saw that a good broad value learning approach would need to understand the systematic biases in human planning in order to achieve superhuman performance. Perhaps we can just use machine learning again and learn the biases and reward simultaneously? This post by Stuart Armstrong (original here) and the associated paper say: “Not without more assumptions.” This post comes from a theoretical perspective that may be alien to ML researchers; in particular, it makes an argument that simplicity priors do not solve the problem pointed out here, where simplicity is based on Kolmogorov complexity (which is an instantiation of the Minimum Description Length principle). 
The analog in machine learning would be an argument that regularization would not work. The proof used is specific to Kolmogorov complexity and does not clearly generalize to arbitrary regularization techniques; however, I view the argument as being suggestive that regularization techniques would also be insufficient to address the problems raised here. -------------------------------------------------------------------------------- Humans have no values… nor do any agent. Unless you make strong assumptions about their rationality. And depending on those assumptions, you get humans to have any values. AN AGENT WITH NO CLEAR PREFERENCES There are three buttons in this world, B(0), B(1), and X, and one agent H. B(0) and B(1) can be operated by H, while X can be operated by an outside observer. H will initially press button B(0); if ever X is pressed, the agent will switch to pressing B(1). If X is pressed again, the agent will switch back to pressing B(0), and so on. After a large number of turns N, H will shut off. That’s the full algorithm for H. So the question is, what are the values/preferences/rewards of H? There are three natural reward functions that are plausible: * R(0), which is linear i",https://www.alignmentforum.org/posts/ANupXf8XfZo2EJxGv/humans-can-be-assigned-any-values-whatsoever,2018,blogPost,"Shah, Rohin",AI Alignment Forum Sequential Approximate Optimization,,http://link.springer.com/10.1007/978-3-540-88910-6_5,2009,bookSection,"Nakayama, Hirotaka; Yun, Yeboon; Yoon, Min",Sequential Approximate Multiobjective Optimization Using Computational Intelligence The Efficiency of Human Cognition Reflects Planned Information Processing,"Planning is useful. It lets people take actions that have desirable long-term consequences. But, planning is hard. It requires thinking about consequences, which consumes limited computational and cognitive resources. Thus, people should plan their actions, but they should also be smart about how they deploy resources used for planning their actions. Put another way, people should also ""plan their plans"". Here, we formulate this aspect of planning as a meta-reasoning problem and formalize it in terms of a recursive Bellman objective that incorporates both task rewards and information-theoretic planning costs. Our account makes quantitative predictions about how people should plan and meta-plan as a function of the overall structure of a task, which we test in two experiments with human participants. We find that people's reaction times reflect a planned use of information processing, consistent with our account. This formulation of planning to plan provides new insight into the function of hierarchical planning, state abstraction, and cognitive control in both humans and machines.",http://arxiv.org/abs/2002.05769,2020,conferencePaper,"Ho, Mark K.; Abel, David; Cohen, Jonathan D.; Littman, Michael L.; Griffiths, Thomas L.",Proceedings of the 34th AAAI Conference on Artificial Intelligence End to End Learning for Self-Driving Cars,"We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. 
The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS).",http://arxiv.org/abs/1604.07316,2016,manuscript,"Bojarski, Mariusz; Del Testa, Davide; Dworakowski, Daniel; Firner, Bernhard; Flepp, Beat; Goyal, Prasoon; Jackel, Lawrence D.; Monfort, Mathew; Muller, Urs; Zhang, Jiakai; Zhang, Xin; Zhao, Jake; Zieba, Karol", Special Issue “On Defining Artificial Intelligence”—Commentaries and Author’s Response,,https://content.sciendo.com/view/journals/jagi/11/2/article-p1.xml,2020,journalArticle,"Monett, Dagmar; Lewis, Colin W. P.; Thórisson, Kristinn R.; Bach, Joscha; Baldassarre, Gianluca; Granato, Giovanni; Berkeley, Istvan S. N.; Chollet, François; Crosby, Matthew; Shevlin, Henry; Fox, John; Laird, John E.; Legg, Shane; Lindes, Peter; Mikolov, Tomáš; Rapaport, William J.; Rojas, Raúl; Rosa, Marek; Stone, Peter; Sutton, Richard S.; Yampolskiy, Roman V.; Wang, Pei; Schank, Roger; Sloman, Aaron; Winfield, Alan",Journal of Artificial General Intelligence Reinforcement Learning with a Corrupted Reward Channel,"No real-world reward function is perfect. Sensory errors and software bugs may result in RL agents observing higher (or lower) rewards than they should. For example, a reinforcement learning agent may prefer states where a sensory error gives it the maximum reward, but where the true reward is actually small. We formalise this problem as a generalised Markov Decision Problem called Corrupt Reward MDP. Traditional RL methods fare poorly in CRMDPs, even under strong simplifying assumptions and when trying to compensate for the possibly corrupt rewards. Two ways around the problem are investigated. First, by giving the agent richer data, such as in inverse reinforcement learning and semi-supervised reinforcement learning, reward corruption stemming from systematic sensory errors may sometimes be completely managed. 
Second, by using randomisation to blunt the agent's optimisation, reward corruption can be partially managed under some assumptions.",http://arxiv.org/abs/1705.08417,2017,conferencePaper,"Everitt, Tom; Krakovna, Victoria; Orseau, Laurent; Hutter, Marcus; Legg, Shane","arXiv:1705.08417 [cs, stat]" 2019 recent trends in GPU price per FLOPS,"We estimate that in recent years, GPU prices have fallen at rates that would yield an order of magnitude over roughly: 17 years for single-precision FLOPS, 10 years for half-precision FLOPS, and 5 years for half-precision fused multiply-add FLOPS. GPUs (graphics processing units) are specialized electronic circuits originally used for computer graphics. In recent years, they have...",https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/,2020,blogPost,"Bergal, Asya",AI Impacts A causal framework for explaining the predictions of black-box sequence-to-sequence models,"We interpret the predictions of any black-box structured input-structured output model around a specific input-output pair. Our method returns an ""explanation"" consisting of groups of input-output tokens that are causally related. These dependencies are inferred by querying the black-box model with perturbed inputs, generating a graph over tokens from the responses, and solving a partitioning problem to select the most relevant components. We focus the general approach on sequence-to-sequence problems, adopting a variational autoencoder to yield meaningful input perturbations. We test our method across several NLP sequence generation tasks.",http://arxiv.org/abs/1707.01943,2017,conferencePaper,"Alvarez-Melis, David; Jaakkola, Tommi S.",Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing Actual causality,,,2016,book,"Halpern, Joseph Y.", A Concise Introduction to Decentralized POMDPs,,http://link.springer.com/10.1007/978-3-319-28929-8,2016,book,"Oliehoek, Frans A.; Amato, Christopher", Dealing with Moral Multiplicity,"The ethical views we hold depend significantly on the network structures of our brains: which ideas are associated with which valences and how strongly. These feelings and weights are shaped by our genetic predispositions, cultural circumstances, and life experiences. Had you developed differently, your moral views would have been different. It's up to us whether […]",https://longtermrisk.org/dealing-with-moral-multiplicity/,2015,blogPost,"Tomasik, Brian",Center on Long-Term Risk Questions of Reasoning Under Logical Uncertainty,"A logically uncertain reasoner would be able to reason as if they know both a programming language and a program, without knowing what the program outputs. Most practical reasoning involves some logical uncertainty, but no satisfactory theory of reasoning under logical uncertainty yet exists. A better theory of reasoning under logical uncertainty is needed in order to develop the tools necessary to construct highly reliable artificial reasoners. This paper introduces the topic, discusses a number of historical results, and describes a number of open problems.",https://intelligence.org/2015/01/09/new-report-questions-reasoning-logical-uncertainty/,2014,report,"Soares, Nate; Fallenstein, Benja", Historical economic growth trends,"An analysis of historical growth supports the possibility of radical increases in growth rate. 
Naive extrapolation of long-term trends would suggest massive increases in growth rate over the coming century, although growth over the last half-century has lagged very significantly behind these long-term trends. Bradford DeLong has published estimates for historical world GDP, piecing together...",https://aiimpacts.org/historical-growth-trends/,2019,blogPost,AI Impacts,AI Impacts Unmanned Aircraft Systems,,https://linkinghub.elsevier.com/retrieve/pii/B978012374518700016X,2010,bookSection,"Hobbs, Alan",Human Factors in Aviation Policy desiderata in the development of machine superintelligence,,,2016,journalArticle,"Bostrom, Nick; Dafoe, Allan; Flynn, Carrick","Future of Humanity Institute, University of Oxford. Retrieved June" Coordinated human action as example of superhuman intelligence,Collections of humans organized into groups and institutions provide many historical examples of the creation and attempted control of intelligences that routinely outperform individual humans. A preliminary look at the available evidence suggests that individuals are often cognitively outperformed in head-to-head competition with groups of similar average intelligence. This article surveys considerations relevant to the...,https://aiimpacts.org/coordinated-human-action-example-superhuman-intelligence/,2016,blogPost,AI Impacts,AI Impacts Incentivizing Collaboration in a Competition,"Research and design competitions aim to promote innovation or creative production, which are often best achieved through collaboration. The nature of a competition, however, typically necessitates sorting by individual performance. This presents tradeoffs for the competition designer, between incentivizing global performance and distinguishing individual capability. We model this situation in terms of an abstract collaboration game, where individual effort also benefits neighboring agents. We propose a scoring mechanism called LSWM that rewards agents based on localized social welfare. We show that LSWM promotes global performance, in that social optima are equilibria of the mechanism. Moreover, we establish conditions under which the mechanism leads to increased collaboration, and under which it ensures a formally defined distinguishability property. Through experiments, we evaluate the degree of distinguishability achieved whether or not the theoretical conditions identified hold.",,2019,conferencePaper,"Sinha, Arunesh; Wellman, Michael P.",Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems The Singularity and Machine Ethics,,http://link.springer.com/10.1007/978-3-642-32560-1_6,2012,bookSection,"Muehlhauser, Luke; Helm, Louie",Singularity Hypotheses Generating Visual Explanations,,http://link.springer.com/10.1007/978-3-319-46493-0_1,2016,bookSection,"Hendricks, Lisa Anne; Akata, Zeynep; Rohrbach, Marcus; Donahue, Jeff; Schiele, Bernt; Darrell, Trevor",Computer Vision – ECCV 2016 Machine Theory of Mind,"Theory of mind (ToM; Premack & Woodruff, 1978) broadly refers to humans' ability to represent the mental states of others, including their desires, beliefs, and intentions. We propose to train a machine to build such models too. We design a Theory of Mind neural network -- a ToMnet -- which uses meta-learning to build models of the agents it encounters, from observations of their behaviour alone. 
Through this process, it acquires a strong prior model for agents' behaviour, as well as the ability to bootstrap to richer predictions about agents' characteristics and mental states using only a small number of behavioural observations. We apply the ToMnet to agents behaving in simple gridworld environments, showing that it learns to model random, algorithmic, and deep reinforcement learning agents from varied populations, and that it passes classic ToM tasks such as the ""Sally-Anne"" test (Wimmer & Perner, 1983; Baron-Cohen et al., 1985) of recognising that others can hold false beliefs about the world. We argue that this system -- which autonomously learns how to model other agents in its world -- is an important step forward for developing multi-agent AI systems, for building intermediating technology for machine-human interaction, and for advancing the progress on interpretable AI.",http://arxiv.org/abs/1802.07740,2018,conferencePaper,"Rabinowitz, Neil C.; Perbet, Frank; Song, H. Francis; Zhang, Chiyuan; Eslami, S. M. Ali; Botvinick, Matthew",Proceedings of the 35th International Conference on Machine Learning "On the Feasibility of Learning, Rather than Assuming, Human Biases for Reward Inference","Our goal is for agents to optimize the right reward function, despite how difficult it is for us to specify what that is. Inverse Reinforcement Learning (IRL) enables us to infer reward functions from demonstrations, but it usually assumes that the expert is noisily optimal. Real people, on the other hand, often have systematic biases: risk-aversion, myopia, etc. One option is to try to characterize these biases and account for them explicitly during learning. But in the era of deep learning, a natural suggestion researchers make is to avoid mathematical models of human behavior that are fraught with specific assumptions, and instead use a purely data-driven approach. We decided to put this to the test – rather than relying on assumptions about which specific bias the demonstrator has when planning, we instead learn the demonstrator’s planning algorithm that they use to generate demonstrations, as a differentiable planner. Our exploration yielded mixed findings: on the one hand, learning the planner can lead to better reward inference than relying on the wrong assumption; on the other hand, this benefit is dwarfed by the loss we incur by going from an exact to a differentiable planner. This suggests that at least for the foreseeable future, agents need a middle ground between the flexibility of data-driven methods and the useful bias of known human biases. Code is available at https://tinyurl.com/learningbiases.",http://arxiv.org/abs/1906.09624,2019,conferencePaper,"Shah, Rohin; Gundotra, Noah; Abbeel, Pieter; Dragan, Anca D.",Proceedings of the 36th International Conference on Machine Learning Locality of goals,"INTRODUCTION Studying goal-directedness produces two kinds of questions: questions about goals, and questions about being directed towards a goal. Most of my previous posts focused on the second kind; this one shifts to the first kind. Assume some goal-directed system with a known goal. The nature of this goal will influence which issues of safety the system might have. If the goal focuses on the input, the system might wirehead itself and/or game its specification. On the other hand, if the goal lies firmly in the environment, the system might have convergent instrumental subgoals and/or destroy any unspecified value. Locality aims at capturing this distinction. 
Intuitively, the locality of the system's goal captures how far away from the system one must look to check the accomplishment of the goal. Let's give some examples: * The goal of ""My sensor reaches the number 23"" is very local, probably maximally local. * The goal of ""Maintain the temperature of the room at 23 °C"" is less local, but still focused on a close neighborhood of the system. * The goal of ""No death from cancer in the whole world"" is even less local. Locality isn't about how the system extracts a model of the world from its input, but about whether and how much it cares about the world beyond it. STARTING POINTS This intuition about locality came from the collision of two different classifications of goals: the first from Daniel Dennett and the second from Evan Hubinger. THERMOSTATS AND GOALS In ""The Intentional Stance"", Dennett explains, extends and defends... the intentional stance. One point he discusses is his liberalism: he is completely comfortable with admitting ridiculously simple systems like thermostats in the club of intentional systems -- to give them meaningful mental states about beliefs, desires and goals. Lest we readers feel insulted at the comparison, Dennett nonetheless admits that the goals of a thermostat differ from ours. Going along with the gag, we m",https://www.alignmentforum.org/posts/HkWB5KCJQ2aLsMzjt/locality-of-goals,2020,blogPost,"Shimi, Adam",AI Alignment Forum Tighter Variational Bounds are Not Necessarily Better,"We provide theoretical and empirical evidence that using tighter evidence lower bounds (ELBOs) can be detrimental to the process of learning an inference network by reducing the signal-to-noise ratio of the gradient estimator. Our results call into question common implicit assumptions that tighter ELBOs are better variational objectives for simultaneous model learning and inference amortization schemes. Based on our insights, we introduce three new algorithms: the partially importance weighted auto-encoder (PIWAE), the multiply importance weighted auto-encoder (MIWAE), and the combination importance weighted auto-encoder (CIWAE), each of which includes the standard importance weighted auto-encoder (IWAE) as a special case. We show that each can deliver improvements over IWAE, even when performance is measured by the IWAE target itself. Furthermore, our results suggest that PIWAE may be able to deliver simultaneous improvements in the training of both the inference and generative networks.",http://arxiv.org/abs/1802.04537,2019,manuscript,"Rainforth, Tom; Kosiorek, Adam R.; Le, Tuan Anh; Maddison, Chris J.; Igl, Maximilian; Wood, Frank; Teh, Yee Whye", What is Interpretability?,"In this post we lay out some ideas around framing interpretability research which we have found quite useful. Our framing is goal-oriented, which we believe is important for making sure interpretability research is meaningful. We also go over a variety of dimensions which we think are useful to consider when thinking about interpretability research. We wanted to have a shared vocabulary when talking about this kind of research, and found that these ideas helped us communicate effectively. One of our motivations for having these thoughts and discussions is so we can understand the relevance of interpretability to alignment, and to help us think about which categories or dimensions of interpretability research are important for alignment of strong AI. 
In a coming post we discuss interpretability and alignment, using the ideas from this post and other previous writing on the subject.",https://www.alignmentforum.org/posts/rSMbGFfsLMB3GWZtX/what-is-interpretability,2020,blogPost,"Kirk, Robert; Gavenčiak, Tomáš; Böhm, Stanislav",AI Alignment Forum Reachability Analysis of Deep Neural Networks with Provable Guarantees,"Verifying correctness of deep neural networks (DNNs) is challenging. We study a generic reachability problem for feed-forward DNNs which, for a given set of inputs to the network and a Lipschitz-continuous function over its outputs, computes the lower and upper bound on the function values. Because the network and the function are Lipschitz continuous, all values in the interval between the lower and upper bound are reachable. We show how to obtain the safety verification problem, the output range analysis problem and a robustness measure by instantiating the reachability problem. We present a novel algorithm based on adaptive nested optimisation to solve the reachability problem. The technique has been implemented and evaluated on a range of DNNs, demonstrating its efficiency, scalability and ability to handle a broader class of networks than state-of-the-art verification approaches.",http://arxiv.org/abs/1805.02242,2018,conferencePaper,"Ruan, Wenjie; Huang, Xiaowei; Kwiatkowska, Marta",Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18) Computational Rationality: Linking Mechanism and Behavior Through Bounded Utility Maximization,"We propose a framework for including information-processing bounds in rational analyses. It is an application of bounded optimality (Russell & Subramanian, 1995) to the challenges of developing theories of mechanism and behavior. The framework is based on the idea that behaviors are generated by cognitive mechanisms that are adapted to the structure of not only the environment but also the mind and brain itself. We call the framework computational rationality to emphasize the incorporation of computational mechanism into the definition of rational action. Theories are specified as optimal program problems, defined by an adaptation environment, a bounded machine, and a utility function. Such theories yield different classes of explanation, depending on the extent to which they emphasize adaptation to bounds, and adaptation to some ecology that differs from the immediate local environment. We illustrate this variation with examples from three domains: visual attention in a linguistic task, manual response ordering, and reasoning. We explore the relation of this framework to existing “levels” approaches to explanation, and to other optimality-based modeling approaches.",http://doi.wiley.com/10.1111/tops.12086,2014,journalArticle,"Lewis, Richard L.; Howes, Andrew; Singh, Satinder",Topics in Cognitive Science The AI Triad and What It Means for National Security Strategy,"One sentence summarizes the complexities of modern artificial intelligence: Machine learning systems use computing power to execute algorithms that learn from data. 
This AI triad of computing power, algorithms, and data offers a framework for decision-making in national security policy.",https://cset.georgetown.edu/research/the-ai-triad-and-what-it-means-for-national-security-strategy/,2020,report,"Buchanan, Ben", The five biggest threats to human existence,,,2014,journalArticle,"Sandberg, Anders","The Conversation, May" Scalable Verified Training for Provably Robust Image Classification,"Recent work has shown that it is possible to train deep neural networks that are provably robust to norm-bounded adversarial perturbations. Most of these methods are based on minimizing an upper bound on the worst-case loss over all possible adversarial perturbations. While these techniques show promise, they often result in difficult optimization procedures that remain hard to scale to larger networks. Through a comprehensive analysis, we show how a simple bounding technique, interval bound propagation (IBP), can be exploited to train large provably robust neural networks that beat the state-of-the-art in verified accuracy. While the upper bound computed by IBP can be quite weak for general networks, we demonstrate that an appropriate loss and clever hyper-parameter schedule allow the network to adapt such that the IBP bound is tight. This results in a fast and stable learning algorithm that outperforms more sophisticated methods and achieves state-of-the-art results on MNIST, CIFAR-10 and SVHN. It also allows us to train the largest model to be verified beyond vacuous bounds on a downscaled version of IMAGENET.",,2019,conferencePaper,"Gowal, S.; Dvijotham, K.; Stanforth, R.; Bunel, R.; Qin, C.; Uesato, J.; Arandjelovic, R.; Mann, T. A.; Kohli, P.",2019 IEEE/CVF International Conference on Computer Vision (ICCV) A logic of proofs for differential dynamic logic: toward independently checkable proof certificates for dynamic logics,"Differential dynamic logic is a logic for specifying and verifying safety, liveness, and other properties about models of cyberphysical systems. Theorem provers based on differential dynamic logic have been used to verify safety properties for models of self-driving cars and collision avoidance protocols for aircraft. Unfortunately, these theorem provers do not have explicit proof terms, which makes the implementation of a number of important features unnecessarily complicated without soundness-critical and extralogical extensions to the theorem prover. Examples include: an unambiguous separation between proof checking and proof search, the ability to extract program traces corresponding to counterexamples, and synthesis of surely-live deterministic programs from liveness proofs for nondeterministic programs.",https://dl.acm.org/doi/10.1145/2854065.2854078,2016,conferencePaper,"Fulton, Nathan; Platzer, André",Proceedings of the 5th ACM SIGPLAN Conference on Certified Programs and Proofs FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models,"A promising class of generative models maps points from a simple distribution to a complex distribution through an invertible neural network. Likelihood-based training of these models requires restricting their architectures to allow cheap computation of Jacobian determinants. Alternatively, the Jacobian trace can be used if the transformation is specified by an ordinary differential equation. In this paper, we use Hutchinson's trace estimator to give a scalable unbiased estimate of the log-density. 
The result is a continuous-time invertible generative model with unbiased density estimation and one-pass sampling, while allowing unrestricted neural network architectures. We demonstrate our approach on high-dimensional density estimation, image generation, and variational inference, achieving the state-of-the-art among exact likelihood methods with efficient sampling.",http://arxiv.org/abs/1810.01367,2018,manuscript,"Grathwohl, Will; Chen, Ricky T. Q.; Bettencourt, Jesse; Sutskever, Ilya; Duvenaud, David", Morphological freedom: what are the limits to transforming the body?,,http://aleph.se/papers/MF2.pdf,2017,manuscript,"Sandberg, Anders", Multitasking: Efficient Optimal Planning for Bandit Superprocesses,"A bandit superprocess is a decision problem composed from multiple independent Markov decision processes (MDPs), coupled only by the constraint that, at each time step, the agent may act in only one of the MDPs. Multitasking problems of this kind are ubiquitous in the real world, yet very little is known about them from a computational viewpoint, beyond the observation that optimal policies for the superprocess may prescribe actions that would be suboptimal for an MDP considered in isolation. (This observation implies that many applications of sequential decision analysis in practice are technically incorrect, since the decision problem being solved is often part of a larger, unstated bandit superprocess.) The paper summarizes the state-of-the-art in the theory of bandit superprocesses and contributes a novel upper bound on the global value function of a bandit superprocess, defined in terms of a direct relaxation of the arms. The bound is equivalent to an existing bound (the Whittle integral), but is defined constructively, as the value of a related multi-armed bandit. We provide a new method to compute this bound and derive the first practical algorithm to select optimal actions in bandit superprocesses. The algorithm operates by repeatedly establishing dominance relations between actions using upper and lower bounds on action values. Experiments indicate that the algorithm’s run-time compares very favorably to other possible algorithms designed for more general factored MDPs.",,2015,conferencePaper,"Hadfield-Menell, Dylan; Russell, Stuart",Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence Human-Centered Artificial Intelligence and Machine Learning,"Humans are increasingly coming into contact with artificial intelligence and machine learning systems. Human-centered artificial intelligence is a perspective on AI and ML that algorithms must be designed with awareness that they are part of a larger system consisting of humans. We lay forth an argument that human-centered artificial intelligence can be broken down into two aspects: (1) AI systems that understand humans from a sociocultural perspective, and (2) AI systems that help humans understand them. We further argue that issues of social responsibility such as fairness, accountability, interpretability, and transparency.",http://arxiv.org/abs/1901.11184,2019,journalArticle,"Riedl, Mark O.",Human Behavior and Emerging Technologies The Basic AI Drives,"The field of Artificial Intelligence (AI) was initially directly aimed at the construction of ‘thinking machines’ – that is, computer systems with human-like general intelligence. But this task proved more difficult than expected. 
As the years passed, AI researchers gradually shifted focus to producing AI systems that intelligently approached specific tasks in relatively narrow domains. In recent years, however, more and more AI researchers have recognized the necessity – and the feasibility – of returning to the original goal of the field. Increasingly, there is a call to focus less on highly specialized ‘narrow AI’ problem solving systems, and more on confronting the difficult issues involved in creating ‘human-level intelligence’, and ultimately general intelligence that goes beyond the human level in various ways. Artificial General Intelligence (AGI), as this renewed focus has come to be called, attempts to study and reproduce intelligence as a whole in a domain independent way. Encouraged by the recent success of several smaller-scale AGI-related meetings and special tracks at conferences, the initiative to organize the very first international conference on AGI was taken, with the goal to give researchers in the field an opportunity to present relevant research results and to exchange ideas on topics of common interest. In this collection you will find the conference papers: full-length papers, short position statements and also the papers presented in the post conference workshop on the sociocultural, ethical and futurological implications of AGI.",,2008,bookSection,"Omohundro, Stephen",Artificial General Intelligence 2008: Proceedings of the First AGI Conference Learning Policy Representations in Multiagent Systems,"Modeling agent behavior is central to understanding the emergence of complex phenomena in multiagent systems. Prior work in agent modeling has largely been task-specific and driven by hand-engineering domain-specific prior knowledge. We propose a general learning framework for modeling agent behavior in any multiagent system using only a handful of interaction data. Our framework casts agent modeling as a representation learning problem. Consequently, we construct a novel objective inspired by imitation learning and agent identification and design an algorithm for unsupervised learning of representations of agent policies. We demonstrate empirically the utility of the proposed framework in (i) a challenging high-dimensional competitive environment for continuous control and (ii) a cooperative environment for communication, on supervised predictive tasks, unsupervised clustering, and policy optimization using deep reinforcement learning.",http://arxiv.org/abs/1806.06464,2018,manuscript,"Grover, Aditya; Al-Shedivat, Maruan; Gupta, Jayesh K.; Burda, Yura; Edwards, Harrison", "Research topic: Hardware, software and AI","This is the first in a sequence of articles outlining research which could help forecast AI development. Interpretation Concrete research projects are in boxes. ∑5 ∆8  means we guess the project will take (very) roughly five hours, and we rate its value (very) roughly 8/10. Most projects could be done to very different degrees of depth, or at very different scales. 
Our time cost...",https://aiimpacts.org/research-topic-hardware-software-and-ai/,2015,blogPost,AI Impacts,AI Impacts Rethinking the Inception Architecture for Computer Vision,,http://ieeexplore.ieee.org/document/7780677/,2016,conferencePaper,"Szegedy, Christian; Vanhoucke, Vincent; Ioffe, Sergey; Shlens, Jon; Wojna, Zbigniew",2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Aligning Superhuman AI with Human Behavior: Chess as a Model System,"As artificial intelligence becomes increasingly intelligent---in some cases, achieving superhuman performance---there is growing potential for humans to learn from and collaborate with algorithms. However, the ways in which AI systems approach problems are often different from the ways people do, and thus may be uninterpretable and hard to learn from. A crucial step in bridging this gap between human and artificial intelligence is modeling the granular actions that constitute human behavior, rather than simply matching aggregate human performance. We pursue this goal in a model system with a long history in artificial intelligence: chess. The aggregate performance of a chess player unfolds as they make decisions over the course of a game. The hundreds of millions of games played online by players at every skill level form a rich source of data in which these decisions, and their exact context, are recorded in minute detail. Applying existing chess engines to this data, including an open-source implementation of AlphaZero, we find that they do not predict human moves well. We develop and introduce Maia, a customized version of Alpha-Zero trained on human chess games, that predicts human moves at a much higher accuracy than existing engines, and can achieve maximum accuracy when predicting decisions made by players at a specific skill level in a tuneable way. For a dual task of predicting whether a human will make a large mistake on the next move, we develop a deep neural network that significantly outperforms competitive baselines. Taken together, our results suggest that there is substantial promise in designing artificial intelligence systems with human collaboration in mind by first accurately modeling granular human decision-making.",http://arxiv.org/abs/2006.01855,2020,journalArticle,"McIlroy-Young, Reid; Sen, Siddhartha; Kleinberg, Jon; Anderson, Ashton",Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining "Authoritarian Audiences, Rhetoric, and Propaganda in International Crises: Evidence from China",,,2019,journalArticle,"Weiss, Jessica Chen; Dafoe, Allan",International Studies Quarterly The Second Dialog State Tracking Challenge,,http://aclweb.org/anthology/W14-4337,2014,conferencePaper,"Henderson, Matthew; Thomson, Blaise; Williams, Jason D",Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL) "Integrative Biological Simulation, Neuropsychology, and AI Safety","We describe a biologically-inspired research agenda with parallel tracks aimed at AI and AI safety. The bottom-up component consists of building a sequence of biophysically realistic simulations of simple organisms such as the nematode $Caenorhabditis$ $elegans$, the fruit fly $Drosophila$ $melanogaster$, and the zebrafish $Danio$ $rerio$ to serve as platforms for research into AI algorithms and system architectures. The top-down component consists of an approach to value alignment that grounds AI goal structures in neuropsychology, broadly considered. 
Our belief is that parallel pursuit of these tracks will inform the development of value-aligned AI systems that have been inspired by embodied organisms with sensorimotor integration. An important set of side benefits is that the research trajectories we describe here are grounded in long-standing intellectual traditions within existing research communities and funding structures. In addition, these research programs overlap with significant contemporary themes in the biological and psychological sciences such as data/model integration and reproducibility.",http://arxiv.org/abs/1811.03493,2019,conferencePaper,"Sarma, Gopal P.; Safron, Adam; Hay, Nick J.",Proceedings of the AAAI Workshop on Artificial Intelligence Safety 2019 "Relationship of smartphone use severity with sleep quality, depression, and anxiety in university students",,https://akjournals.com/doi/10.1556/2006.4.2015.010,2015,journalArticle,"Demirci, Kadir; Akgönül, Mehmet; Akpinar, Abdullah",Journal of Behavioral Addictions An Untrollable Mathematician Illustrated,The following was a presentation I made for Sören Elverlin's AI Safety Reading Group. I decided to draw everything by hand because powerpoint is boring. Thanks to Ben Pace for formatting it for LW! See also the IAF post detailing the research which this presentation is based on.,https://www.alignmentforum.org/posts/CvKnhXTu9BPcdKE4W/an-untrollable-mathematician-illustrated,2018,blogPost,Abram Demski,AI Alignment Forum An Alternative Surrogate Loss for PGD-based Adversarial Testing,"Adversarial testing methods based on Projected Gradient Descent (PGD) are widely used for searching norm-bounded perturbations that cause the inputs of neural networks to be misclassified. This paper takes a deeper look at these methods and explains the effect of different hyperparameters (i.e., optimizer, step size and surrogate loss). We introduce the concept of MultiTargeted testing, which makes clever use of alternative surrogate losses, and explain when and how MultiTargeted is guaranteed to find optimal perturbations. Finally, we demonstrate that MultiTargeted outperforms more sophisticated methods and often requires less iterative steps than other variants of PGD found in the literature. Notably, MultiTargeted ranks first on MadryLab's white-box MNIST and CIFAR-10 leaderboards, reducing the accuracy of their MNIST model to 88.36% (with $\ell_\infty$ perturbations of $\epsilon = 0.3$) and the accuracy of their CIFAR-10 model to 44.03% (at $\epsilon = 8/255$). MultiTargeted also ranks first on the TRADES leaderboard reducing the accuracy of their CIFAR-10 model to 53.07% (with $\ell_\infty$ perturbations of $\epsilon = 0.031$).",http://arxiv.org/abs/1910.09338,2019,manuscript,"Gowal, Sven; Uesato, Jonathan; Qin, Chongli; Huang, Po-Sen; Mann, Timothy; Kohli, Pushmeet", Active Preference-Based Learning of Reward Functions,"Our goal is to efficiently learn reward functions encoding a human’s preferences for how a dynamical system should act. There are two challenges with this. First, in many problems it is difficult for people to provide demonstrations of the desired system trajectory (like a high-DOF robot arm motion or an aggressive driving maneuver), or to even assign how much numerical reward an action or trajectory should get. We build on work in label ranking and propose to learn from preferences (or comparisons) instead: the person provides the system a relative preference between two trajectories. 
Second, the learned reward function strongly depends on what environments and trajectories were experienced during the training phase. We thus take an active learning approach, in which the system decides on what preference queries to make. A novel aspect of our work is the complexity and continuous nature of the queries: continuous trajectories of a dynamical system in environments with other moving agents (humans or robots). We contribute a method for actively synthesizing queries that satisfy the dynamics of the system. Further, we learn the reward function from a continuous hypothesis space by maximizing the volume removed from the hypothesis space by each query. We assign weights to the hypothesis space in the form of a log-concave distribution and provide a bound on the number of iterations required to converge. We show that our algorithm converges faster to the desired reward compared to approaches that are not active or that do not synthesize queries in an autonomous driving domain. We then run a user study to put our method to the test with real people.",http://www.roboticsproceedings.org/rss13/p53.pdf,2017,conferencePaper,"Sadigh, Dorsa; Dragan, Anca; Sastry, Shankar; Seshia, Sanjit",Robotics: Science and Systems XIII CM3: Cooperative Multi-goal Multi-stage Multi-agent Reinforcement Learning,"A variety of cooperative multi-agent control problems require agents to achieve individual goals while contributing to collective success. This multi-goal multi-agent setting poses difficulties for recent algorithms, which primarily target settings with a single global reward, due to two new challenges: efficient exploration for learning both individual goal attainment and cooperation for others' success, and credit-assignment for interactions between actions and goals of different agents. To address both challenges, we restructure the problem into a novel two-stage curriculum, in which single-agent goal attainment is learned prior to learning multi-agent cooperation, and we derive a new multi-goal multi-agent policy gradient with a credit function for localized credit assignment. We use a function augmentation scheme to bridge value and policy functions across the curriculum. The complete architecture, called CM3, learns significantly faster than direct adaptations of existing algorithms on three challenging multi-goal multi-agent problems: cooperative navigation in difficult formations, negotiating multi-vehicle lane changes in the SUMO traffic simulator, and strategic cooperation in a Checkers environment.",http://arxiv.org/abs/1809.05188,2020,conferencePaper,"Yang, Jiachen; Nakhaei, Alireza; Isele, David; Fujimura, Kikuo; Zha, Hongyuan","arXiv:1809.05188 [cs, stat]" Planning to Give Information in Partially Observed Domains with a Learned Weighted Entropy Model,"In many robotic applications, an autonomous agent must act within and explore a partially observed environment that is unobserved by its human teammate. We consider such a setting in which the agent can, while acting, transmit declarative information to the human that helps them understand aspects of this unseen environment. Naturally, the human will have preferences about what information they are given. This work adopts an information-theoretic view of the human’s preferences: the human scores information based on the induced change in weighted entropy of their belief about the environment state. We formulate this setting as a belief MDP and give an algorithm for solving it approximately. 
Then, we give an algorithm that allows the agent to learn the human’s preferences online. We validate our approach experimentally in simulated discrete and continuous partially observed search-and-recover domains.",,2018,conferencePaper,"Chitnis, Rohan; Kaelbling, Leslie Pack; Lozano-Perez, Tomas", Ramsey and Joyce on Deliberation and Prediction,"Can an agent deliberating about an action A hold a meaningful credence that she will do A? ‘No’, say some authors, for ‘Deliberation Crowds Out Prediction’ (DCOP). Others disagree, but we argue here that such disagreements are often terminological. We explain why DCOP holds in a Ramseyian operationalist model of credence, but show that it is trivial to extend this model so that DCOP fails. We then discuss a model due to Joyce, and show that Joyce’s rejection of DCOP rests on terminological choices about terms such as ‘intention’, ‘prediction’, and ‘belief’. Once these choices are in view, they reveal underlying agreement between Joyce and the DCOP-favouring tradition that descends from Ramsey. Joyce’s Evidential Autonomy Thesis (EAT) is effectively DCOP, in different terminological clothing. Both principles rest on the so-called ‘transparency’ of first-person present-tensed reflection on one’s own mental states.",http://philsci-archive.pitt.edu/14972/,2020,journalArticle,"Liu, Yang; Price, Huw",Synthese The Incident Command System: High-Reliability Organizing For Complex And Volatile Task Environments.,,https://journals.aom.org/doi/abs/10.5465/3069401?journalCode=amj,2001,journalArticle,"Bigley, G. A.; Roberts, K. H.",Academy of Management Journal Reference Post: Trivial Decision Problem,,https://www.alignmentforum.org/posts/XAeWHqQTWjJmzB4k6/reference-post-trivial-decision-problem,2020,blogPost,"Leong, Chris",AI Alignment Forum Deep Learning: A Critical Appraisal,"Although deep learning has historical roots going back decades, neither the term ""deep learning"" nor the approach was popular just over five years ago, when the field was reignited by papers such as Krizhevsky, Sutskever and Hinton's now classic (2012) deep network model of Imagenet. What has the field discovered in the five subsequent years? Against a background of considerable progress in areas such as speech recognition, image recognition, and game playing, and considerable enthusiasm in the popular press, I present ten concerns for deep learning, and suggest that deep learning must be supplemented by other techniques if we are to reach artificial general intelligence.",http://arxiv.org/abs/1801.00631,2018,manuscript,"Marcus, Gary", Countering Superintelligence Misinformation,,,2018,journalArticle,"Baum, Seth",Information How You Act Tells a Lot: Privacy-Leaking Attack on Deep Reinforcement Learning,"Machine learning has been widely applied to various applications, some of which involve training with privacy-sensitive data. A modest number of data breaches have been studied, including credit card information in natural language data and identities from face dataset. However, most of these studies focus on supervised learning models. As deep reinforcement learning (DRL) has been deployed in a number of real-world systems, such as indoor robot navigation, whether trained DRL policies can leak private information requires in-depth study. To explore such privacy breaches in general, we mainly propose two methods: environment dynamics search via genetic algorithm and candidate inference based on shadow policies. 
We conduct extensive experiments to demonstrate such privacy vulnerabilities in DRL under various settings. We leverage the proposed algorithms to infer floor plans from some trained Grid World navigation DRL agents with LiDAR perception. The proposed algorithm can correctly infer most of the floor plans and reaches an average recovery rate of 95.83% using policy gradient trained agents. In addition, we are able to recover the robot configuration in continuous control environments and an autonomous driving simulator with high accuracy. To the best of our knowledge, this is the first work to investigate privacy leakage in DRL settings and we show that DRL-based agents do potentially leak privacy-sensitive information from the trained policies.",,2019,conferencePaper,"Pan, Xinlei; Wang, Weiyao; Zhang, Xiaoshuai; Li, Bo; Yi, Jinfeng; Song, Dawn",Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems Artificial intelligence: a modern approach,"""Updated edition of popular textbook on Artificial Intelligence. This edition specific looks at ways of keeping artificial intelligence under control""--",,2021,book,"Russell, Stuart J.; Norvig, Peter", Showing versus doing: Teaching by demonstration,,https://par.nsf.gov/biblio/10082788-showing-versus-doing-teaching-demonstration,2016,conferencePaper,"Ho, M. K.; Littman, M. L.; MacGlashan, J.; Cushman, F.; Austerweil, J. L.",NeurIPS Clarifying some key hypotheses in AI alignment,"We've created a diagram mapping out important and controversial hypotheses for AI alignment. We hope that this will help researchers identify and more productively discuss their disagreements. DIAGRAM A part of the diagram. Click through to see the full version. CAVEATS 1. This does not decompose arguments exhaustively. It does not include every reason to favour or disfavour ideas. Rather, it is a set of key hypotheses and relationships with other hypotheses, problems, solutions, models, etc. Some examples of important but apparently uncontroversial premises within the AI safety community: orthogonality, complexity of value, Goodhart's Curse, AI being deployed in a catastrophe-sensitive context. 2. This is not a comprehensive collection of key hypotheses across the whole space of AI alignment. It focuses on a subspace that we find interesting and is relevant to more recent discussions we have encountered, but where key hypotheses seem relatively less illuminated. This includes rational agency and goal-directedness, CAIS, corrigibility, and the rationale of foundational and practical research. In hindsight, the selection criteria was something like: 1. The idea is closely connected to the problem of artificial systems optimizing adversarially against humans. 2. The idea must be explained sufficiently well that we believe it is plausible. 3. Arrows in the diagram indicate flows of evidence or soft relations, not absolute logical implications — please read the ""interpretation"" box in the diagram. Also pay attention to any reasoning written next to a Yes/No/Defer arrow — you may disagree with it, so don't blindly follow the arrow! BACKGROUND Much has been written in the way of arguments for AI risk. Recently there have been some talks and posts that clarify different arguments, point to open questions, and highlight the need for further clarification and analysis. 
We largely s",https://www.alignmentforum.org/posts/mJ5oNYnkYrd4sD5uE/clarifying-some-key-hypotheses-in-ai-alignment,2019,blogPost,"Cottier, Ben; Shah, Rohin",AI Alignment Forum Social choice and topology a case of pure and applied mathematics,,https://linkinghub.elsevier.com/retrieve/pii/S0723086904800161,2004,journalArticle,"Eckmann, Beno",Expositiones Mathematicae Unanimous Prediction for 100% Precision with Application to Learning Semantic Mappings,"Can we train a system that, on any new input, either says ""don't know"" or makes a prediction that is guaranteed to be correct? We answer the question in the affirmative provided our model family is well-specified. Specifically, we introduce the unanimity principle: only predict when all models consistent with the training data predict the same output. We operationalize this principle for semantic parsing, the task of mapping utterances to logical forms. We develop a simple, efficient method that reasons over the infinite set of all consistent models by only checking two of the models. We prove that our method obtains 100% precision even with a modest amount of training data from a possibly adversarial distribution. Empirically, we demonstrate the effectiveness of our approach on the standard GeoQuery dataset.",http://arxiv.org/abs/1606.06368,2016,conferencePaper,"Khani, Fereshte; Rinard, Martin; Liang, Percy",Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) "The Multi-slot Framework: A Formal Model for Multiple, Copiable AIs",,http://link.springer.com/10.1007/978-3-319-09274-4_10,2014,bookSection,"Orseau, Laurent",Artificial General Intelligence Multi-agent Social Reinforcement Learning Improves Generalization,"Social learning is a key component of human and animal intelligence. By taking cues from the behavior of experts in their environment, social learners can acquire sophisticated behavior and rapidly adapt to new circumstances. This paper investigates whether independent reinforcement learning (RL) agents in a multi-agent environment can use social learning to improve their performance using cues from other agents. We find that in most circumstances, vanilla model-free RL agents do not use social learning, even in environments in which individual exploration is expensive. We analyze the reasons for this deficiency, and show that by introducing a model-based auxiliary loss we are able to train agents to leverage cues from experts to solve hard exploration tasks. The generalized social learning policy learned by these agents allows them to not only outperform the experts with which they trained, but also achieve better zero-shot transfer performance than solo learners when deployed to novel environments with experts. In contrast, agents that have not learned to rely on social learning generalize poorly and do not succeed in the transfer task. Further, we find that by mixing multi-agent and solo training, we can obtain agents that use social learning to outperform agents trained alone, even when experts are not available. This demonstrates that social learning has helped improve agents' representation of the task itself.
Our results indicate that social learning can enable RL agents to not only improve performance on the task at hand, but improve generalization to novel environments.",http://arxiv.org/abs/2010.00581,2020,manuscript,"Ndousse, Kamal; Eck, Douglas; Levine, Sergey; Jaques, Natasha", Decision Dynamics in Two High Reliability Military Organizations,,http://pubsonline.informs.org/doi/abs/10.1287/mnsc.40.5.614,1994,journalArticle,"Roberts, Karlene H.; Stout, Suzanne K.; Halpern, Jennifer J.",Management Science "The limits of machine intelligence: Despite progress in machine intelligence, artificial general intelligence is still a major challenge",,https://onlinelibrary.wiley.com/doi/abs/10.15252/embr.201949177,2019,journalArticle,"Shevlin, Henry; Vold, Karina; Crosby, Matthew; Halina, Marta",EMBO reports Legible Normativity for AI Alignment: The Value of Silly Rules,"It has become commonplace to assert that autonomous agents will have to be built to follow human rules of behavior–social norms and laws. But human laws and norms are complex and culturally varied systems; in many cases agents will have to learn the rules. This requires autonomous agents to have models of how human rule systems work so that they can make reliable predictions about rules. In this paper we contribute to the building of such models by analyzing an overlooked distinction between important rules and what we call silly rules —rules with no discernible direct impact on welfare. We show that silly rules render a normative system both more robust and more adaptable in response to shocks to perceived stability. They make normativity more legible for humans, and can increase legibility for AI systems as well. For AI systems to integrate into human normative systems, we suggest, it may be important for them to have models that include representations of silly rules.",http://arxiv.org/abs/1811.01267,2019,conferencePaper,"Hadfield-Menell, Dylan; Andrus, McKane; Hadfield, Gillian K.","AIES '19: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society" Dissolving the Fermi Paradox,,https://arxiv.org/abs/1806.02404,2018,manuscript,"Sandberg, Anders; Drexler, Eric; Ord, Toby", Improved Techniques for Training GANs,"We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. 
We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.",https://proceedings.neurips.cc/paper/2016/hash/8a3363abe792db2d8761d6403605aeb7-Abstract.html,2016,conferencePaper,"Salimans, Tim; Goodfellow, Ian; Zaremba, Wojciech; Cheung, Vicki; Radford, Alec; Chen, Xi",Advances in Neural Information Processing Systems 29 (NIPS 2016) The Virtuous Machine - Old Ethics for New Technology?,"Modern AI and robotic systems are characterized by a high and ever-increasing level of autonomy. At the same time, their applications in fields such as autonomous driving, service robotics and digital personal assistants move closer to humans. From the combination of both developments emerges the field of AI ethics which recognizes that the actions of autonomous machines entail moral dimensions and tries to answer the question of how we can build moral machines. In this paper we argue for taking inspiration from Aristotelian virtue ethics by showing that it forms a suitable combination with modern AI due to its focus on learning from experience. We furthermore propose that imitation learning from moral exemplars, a central concept in virtue ethics, can solve the value alignment problem. Finally, we show that an intelligent system endowed with the virtues of temperance and friendship to humans would not pose a control problem as it would not have the desire for limitless self-improvement.",http://arxiv.org/abs/1806.10322,2018,manuscript,"Berberich, Nicolas; Diepold, Klaus", Statistical trust establishment in wireless sensor networks,,http://ieeexplore.ieee.org/document/4447736/,2007,conferencePaper,"Probst, M.J.; Kasera, S.K.",2007 International Conference on Parallel and Distributed Systems Imitation Learning from Video by Leveraging Proprioception,"Classically, imitation learning algorithms have been developed for idealized situations, e.g., the demonstrations are often required to be collected in the exact same environment and usually include the demonstrator's actions. Recently, however, the research community has begun to address some of these shortcomings by offering algorithmic solutions that enable imitation learning from observation (IfO), e.g., learning to perform a task from visual demonstrations that may be in a different environment and do not include actions. Motivated by the fact that agents often also have access to their own internal states (i.e., proprioception), we propose and study an IfO algorithm that leverages this information in the policy learning process. The proposed architecture learns policies over proprioceptive state representations and compares the resulting trajectories visually to the demonstration data. We experimentally test the proposed technique on several MuJoCo domains and show that it outperforms other imitation from observation algorithms by a large margin.",http://arxiv.org/abs/1905.09335,2019,conferencePaper,"Torabi, Faraz; Warnell, Garrett; Stone, Peter",Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19) Estimation with Incomplete Data: The Linear Case,"Traditional methods for handling incomplete data, including Multiple Imputation and Maximum Likelihood, require that the data be Missing At Random (MAR). In most cases, however, missingness in a variable depends on the underlying value of that variable. 
In this work, we devise model-based methods to consistently estimate mean, variance and covariance given data that are Missing Not At Random (MNAR). While previous work on MNAR data require variables to be discrete, we extend the analysis to continuous variables drawn from Gaussian distributions. We demonstrate the merits of our techniques by comparing it empirically to state of the art software packages.",https://www.ijcai.org/proceedings/2018/705,2018,conferencePaper,"Mohan, Karthika; Thoemmes, Felix; Pearl, Judea",Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence How to respond to the potential malicious uses of artificial intelligence?,,http://junq.info/wp-content/uploads/2019/09/JUNQ.pdf,2019,journalArticle,"Belfield, Haydn",Journal of Unresolved Questions Self-Confirming Price Prediction Strategies for Simultaneous One-Shot Auctions,"Bidding in simultaneous auctions is challenging because an agent's value for a good in one auction may depend on the uncertain outcome of other auctions: the so-called exposure problem. Given the gap in understanding of general simultaneous auction games, previous works have tackled this problem with heuristic strategies that employ probabilistic price predictions. We define a concept of self-confirming prices, and show that within an independent private value model, Bayes-Nash equilibrium can be fully characterized as a profile of optimal price prediction strategies with self-confirming predictions. We exhibit practical procedures to compute approximately optimal bids given a probabilistic price prediction, and near self-confirming price predictions given a price-prediction strategy. An extensive empirical game-theoretic analysis demonstrates that self-confirming price prediction strategies are effective in simultaneous auction games with both complementary and substitutable preference structures.",http://arxiv.org/abs/1210.4915,2017,journalArticle,"Wellman, Michael P.; Sodomka, Eric; Greenwald, Amy",Games and Economic Behavior Self-Modification and Mortality in Artificial Agents,"This paper considers the consequences of endowing an intelligent agent with the ability to modify its own code. The intelligent agent is patterned closely after AIXI [1], but the environment has read-only access to the agent's description. On the basis of some simple modifications to the utility and horizon functions, we are able to discuss and compare some very different kinds of agents, specifically: reinforcement-learning, goal-seeking, predictive, and knowledge-seeking agents. In particular, we introduce what we call the Simpleton Gambit which allows us to discuss whether these agents would choose to modify themselves toward their own detriment.",,2011,conferencePaper,"Orseau, Laurent; Ring, Mark",AGI 2011: Artificial General Intelligence Multi-Goal Reinforcement Learning: Challenging Robotics Environments and Request for Research,"The purpose of this technical report is two-fold. First of all, it introduces a suite of challenging continuous control tasks (integrated with OpenAI Gym) based on currently existing robotics hardware. The tasks include pushing, sliding and pick & place with a Fetch robotic arm as well as in-hand object manipulation with a Shadow Dexterous Hand. All tasks have sparse binary rewards and follow a Multi-Goal Reinforcement Learning (RL) framework in which an agent is told what to do using an additional input.
The second part of the paper presents a set of concrete research ideas for improving RL algorithms, most of which are related to Multi-Goal RL and Hindsight Experience Replay.",http://arxiv.org/abs/1802.09464,2018,manuscript,"Plappert, Matthias; Andrychowicz, Marcin; Ray, Alex; McGrew, Bob; Baker, Bowen; Powell, Glenn; Schneider, Jonas; Tobin, Josh; Chociej, Maciek; Welinder, Peter; Kumar, Vikash; Zaremba, Wojciech", Cognitive Model Priors for Predicting Human Decisions,"Human decision-making underlies all economic behavior. For the past four decades, human decision-making under uncertainty has continued to be explained by theoretical models based on prospect theory, a framework that was awarded the Nobel Prize in Economic Sciences. However, theoretical models of this kind have developed slowly, and robust, high-precision predictive models of human decisions remain a challenge. While machine learning is a natural candidate for solving these problems, it is currently unclear to what extent it can improve predictions obtained by current theories. We argue that this is mainly due to data scarcity, since noisy human behavior requires massive sample sizes to be accurately captured by off-the-shelf machine learning methods. To solve this problem, what is needed are machine learning models with appropriate inductive biases for capturing human behavior, and larger datasets. We offer two contributions towards this end: first, we construct ""cognitive model priors"" by pretraining neural networks with synthetic data generated by cognitive models (i.e., theoretical models developed by cognitive psychologists). We find that fine-tuning these networks on small datasets of real human decisions results in unprecedented state-of-the-art improvements on two benchmark datasets. Second, we present the first large-scale dataset for human decision-making, containing over 240,000 human judgments across over 13,000 decision problems. This dataset reveals the circumstances where cognitive model priors are useful, and provides a new standard for benchmarking prediction of human decisions under uncertainty.",http://arxiv.org/abs/1905.09397,2019,conferencePaper,"Bourgin, David D.; Peterson, Joshua C.; Reichman, Daniel; Griffiths, Thomas L.; Russell, Stuart J.",Proceedings of the 36th International Conference on Machine Learning Grounding Language in Play,"Natural language is perhaps the most versatile and intuitive way for humans to communicate tasks to a robot. Prior work on Learning from Play (LfP) [Lynch et al, 2019] provides a simple approach for learning a wide variety of robotic behaviors from general sensors. However, each task must be specified with a goal image---something that is not practical in open-world environments. In this work we present a simple and scalable way to condition policies on human language instead. We extend LfP by pairing short robot experiences from play with relevant human language after-the-fact. To make this efficient, we introduce multicontext imitation, which allows us to train a single agent to follow image or language goals, then use just language conditioning at test time. This reduces the cost of language pairing to less than 1% of collected robot experience, with the majority of control still learned via self-supervised imitation. At test time, a single agent trained in this manner can perform many different robotic manipulation skills in a row in a 3D environment, directly from images, and specified only with natural language (e.g. 
""open the drawer...now pick up the block...now press the green button...""). Finally, we introduce a simple technique that transfers knowledge from large unlabeled text corpora to robotic learning. We find that transfer significantly improves downstream robotic manipulation. It also allows our agent to follow thousands of novel instructions at test time in zero shot, in 16 different languages. See videos of our experiments at language-play.github.io",http://arxiv.org/abs/2005.07648,2020,manuscript,"Lynch, Corey; Sermanet, Pierre", Response to the European Commission’s consultation on AI,,https://www.cser.ac.uk/media/uploads/files/Consultation_Response_White_Paper_on_AI_-_Belfield_Hern%C3%A1ndez-Orallo_%C3%93_h%C3%89igeartaigh_Maas_Hagerty_Whittlestone.pdf,2020,report,"Belfield, Haydn; Hernández-Orallo, José; Ó hÉigeartaigh, Seán; Maas, Matthijs M; Hagerty, Alexa; Whittlestone, Jess", Building up to an Internal Family Systems model,"INTRODUCTION Internal Family Systems (IFS) is a psychotherapy school/technique/model which lends itself particularly well for being used alone or with a peer. For years, I had noticed that many of the kinds of people who put in a lot of work into developing their emotional and communication skills, some within the rationalist community and some outside it, kept mentioning IFS. So I looked at the Wikipedia page about the IFS model, and bounced off, since it sounded like nonsense to me. Then someone brought it up again, and I thought that maybe I should reconsider. So I looked at the WP page again, thought “nah, still nonsense”, and continued to ignore it. This continued until I participated in CFAR mentorship training last September, and we had a class on CFAR’s Internal Double Crux (IDC) technique. IDC clicked really well for me, so I started using it a lot and also facilitating it to some friends. However, once we started using it on more emotional issues (as opposed to just things with empirical facts pointing in different directions), we started running into some weird things, which it felt like IDC couldn’t quite handle… things which reminded me of how people had been describing IFS. So I finally read up on it, and have been successfully applying it ever since. In this post, I’ll try to describe and motivate IFS in terms which are less likely to give people in this audience the same kind of a “no, that’s nonsense” reaction as I initially had. EPISTEMIC STATUS This post is intended to give an argument for why something like the IFS model could be true and a thing that works. It’s not really an argument that IFS is correct. My reason for thinking in terms of IFS is simply that I was initially super-skeptical of it (more on the reasons of my skepticism later), but then started encountering things which it turned out IFS predicted - and I only found out about IFS predicting those things after I familiarized myself with it. Additionally, I now feel that IFS",https://www.lesswrong.com/posts/5gfqG3Xcopscta3st/building-up-to-an-internal-family-systems-model,2019,blogPost,"Sotala, Kaj",LessWrong MDPs with Unawareness in Robotics,"We formalize decision-making problems in robotics and automated control using continuous MDPs and actions that take place over continuous time intervals. We then approximate the continuous MDP using finer and finer discretizations. Doing this results in a family of systems, each of which has an extremely large action space, although only a few actions are “interesting”. 
We can view the decision maker as being unaware of which actions are “interesting”. We can model this using MDPUs, MDPs with unawareness, where the action space is much smaller. As we show, MDPUs can be used as a general framework for learning tasks in robotic problems. We prove results on the difficulty of learning a near-optimal policy in an MDPU for a continuous task. We apply these ideas to the problem of having a humanoid robot learn on its own how to walk.",https://auai.org/uai2016/proceedings/papers/294.pdf,2016,conferencePaper,"Rong, Nan; Halpern, Joseph Y; Saxena, Ashutosh",UAI 2016 Proceedings Inspiration Learning through Preferences,"Current imitation learning techniques are too restrictive because they require the agent and expert to share the same action space. However, oftentimes agents that act differently from the expert can solve the task just as good. For example, a person lifting a box can be imitated by a ceiling mounted robot or a desktop-based robotic-arm. In both cases, the end goal of lifting the box is achieved, perhaps using different strategies. We denote this setup as \textit{Inspiration Learning} - knowledge transfer between agents that operate in different action spaces. Since state-action expert demonstrations can no longer be used, Inspiration learning requires novel methods to guide the agent towards the end goal. In this work, we rely on ideas of Preferential based Reinforcement Learning (PbRL) to design Advantage Actor-Critic algorithms for solving inspiration learning tasks. Unlike classic actor-critic architectures, the critic we use consists of two parts: a) a state-value estimation as in common actor-critic algorithms and b) a single step reward function derived from an expert/agent classifier. We show that our method is capable of extending the current imitation framework to new horizons. This includes continuous-to-discrete action imitation, as well as primitive-to-macro action imitation.",https://arxiv.org/abs/1809.05872v1,2018,manuscript,"Baram, Nir; Mannor, Shie", Implementation of Moral Uncertainty in Intelligent Machines,"The development of artificial intelligence will require systems of ethical decision making to be adapted for automatic computation. However, projects to implement moral reasoning in artificial moral agents so far have failed to satisfactorily address the widespread disagreement between competing approaches to moral philosophy. In this paper I argue that the proper response to this situation is to design machines to be fundamentally uncertain about morality. I describe a computational framework for doing so and show that it efficiently resolves common obstacles to the implementation of moral philosophy in intelligent machines.",https://doi.org/10.1007/s11023-017-9448-z,2017,journalArticle,"Bogosian, Kyle",Minds and Machines Multiobjective Optimization: Interactive and Evolutionary Approaches,,http://link.springer.com/10.1007/978-3-540-88908-3,2008,book,, Learning-Based Trading Strategies in the Face of Market Manipulation,,https://par.nsf.gov/biblio/10105525-learning-based-trading-strategies-face-market-manipulation,2019,journalArticle,"Wang, Xintong; Hoang, Chris; Wellman, Michael P.",ICML-19 Workshop on AI in Finance Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources,"Modeling human cognition is challenging because there are infinitely many mechanisms that can generate any given observation.
Some researchers address this by constraining the hypothesis space through assumptions about what the human mind can and cannot do, while others constrain it through principles of rationality and adaptation. Recent work in economics, psychology, neuroscience, and linguistics has begun to integrate both approaches by augmenting rational models with cognitive constraints, incorporating rational principles into cognitive architectures, and applying optimality principles to understanding neural representations. We identify the rational use of limited resources as a unifying principle underlying these diverse approaches, expressing it in a new cognitive modeling paradigm called resource-rational analysis . The integration of rational principles with realistic cognitive constraints makes resource-rational analysis a promising framework for reverse-engineering cognitive mechanisms and representations. It has already shed new light on the debate about human rationality and can be leveraged to revisit classic questions of cognitive psychology within a principled computational framework. We demonstrate that resource-rational models can reconcile the mind's most impressive cognitive skills with people's ostensive irrationality. Resource-rational analysis also provides a new way to connect psychological theory more deeply with artificial intelligence, economics, neuroscience, and linguistics.",https://www.cambridge.org/core/product/identifier/S0140525X1900061X/type/journal_article,2020,journalArticle,"Lieder, Falk; Griffiths, Thomas L.",Behavioral and Brain Sciences Rational Use of Cognitive Resources: Levels of Analysis Between the Computational and the Algorithmic,,http://doi.wiley.com/10.1111/tops.12142,2015,journalArticle,"Griffiths, Thomas L.; Lieder, Falk; Goodman, Noah D.",Topics in Cognitive Science Winter-Safe Deterrence as a Practical Contribution to Reducing Nuclear Winter Risk: A Reply,,https://www.tandfonline.com/doi/full/10.1080/13523260.2015.1054101,2015,journalArticle,"Baum, Seth D.",Contemporary Security Policy The evolved radio and its implications for modelling the evolution of novel sensors,,http://ieeexplore.ieee.org/document/1004522/,2002,conferencePaper,"Bird, J.; Layzell, P.",Proceedings of the 2002 Congress on Evolutionary Computation. CEC'02 (Cat. No.02TH8600) That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox,"If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a $10^{30}$ multiplier of achievable computation. We hence suggest the ""aestivation hypothesis"": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.",https://arxiv.org/abs/1705.03394v1,2017,manuscript,"Sandberg, Anders; Armstrong, Stuart; Cirkovic, Milan M.", Leveraging Uncertainty Estimates for Predicting Segmentation Quality,"The use of deep learning for medical imaging has seen tremendous growth in the research community. One reason for the slow uptake of these systems in the clinical setting is that they are complex, opaque and tend to fail silently. 
Outside of the medical imaging domain, the machine learning community has recently proposed several techniques for quantifying model uncertainty (i.e.~a model knowing when it has failed). This is important in practical settings, as we can refer such cases to manual inspection or correction by humans. In this paper, we aim to bring these recent results on estimating uncertainty to bear on two important outputs in deep learning-based segmentation. The first is producing spatial uncertainty maps, from which a clinician can observe where and why a system thinks it is failing. The second is quantifying an image-level prediction of failure, which is useful for isolating specific cases and removing them from automated pipelines. We also show that reasoning about spatial uncertainty, the first output, is a useful intermediate representation for generating segmentation quality predictions, the second output. We propose a two-stage architecture for producing these measures of uncertainty, which can accommodate any deep learning-based medical segmentation pipeline.",http://arxiv.org/abs/1807.00502,2018,conferencePaper,"DeVries, Terrance; Taylor, Graham W.","1st Conference on Medical Imaging with Deep Learning (MIDL 2018)," A behaviorist approach to building phenomenological bridges,"A few weeks ago, I wrote about the BPB problem and how it poses a problem for classical/non-logical decision theories. In my post, I briefly mentioned a behaviorist approach to BPB, only to immedia…",https://casparoesterheld.com/2017/10/22/a-behaviorist-approach-to-building-phenomenological-bridges/,2017,blogPost,"Oesterheld, Caspar",The Universe from an Intentional Stance Learning a Behavioral Repertoire from Demonstrations,"Imitation Learning (IL) is a machine learning approach to learn a policy from a dataset of demonstrations. IL can be useful to kick-start learning before applying reinforcement learning (RL) but it can also be useful on its own, e.g. to learn to imitate human players in video games. However, a major limitation of current IL approaches is that they learn only a single ""average"" policy based on a dataset that possibly contains demonstrations of numerous different types of behaviors. In this paper, we propose a new approach called Behavioral Repertoire Imitation Learning (BRIL) that instead learns a repertoire of behaviors from a set of demonstrations by augmenting the state-action pairs with behavioral descriptions. The outcome of this approach is a single neural network policy conditioned on a behavior description that can be precisely modulated. We apply this approach to train a policy on 7,777 human replays to perform build-order planning in StarCraft II. Principal Component Analysis (PCA) is applied to construct a low-dimensional behavioral space from the high-dimensional army unit composition of each demonstration. The results demonstrate that the learned policy can be effectively manipulated to express distinct behaviors. 
Additionally, by applying the UCB1 algorithm, we are able to adapt the behavior of the policy - in-between games - to reach a performance beyond that of the traditional IL baseline approach.",http://arxiv.org/abs/1907.03046,2019,conferencePaper,"Justesen, Niels; Duque, Miguel Gonzalez; Jaramillo, Daniel Cabarcas; Mouret, Jean-Baptiste; Risi, Sebastian",arXiv:1907.03046 [cs] Organizing Maintenance Work At Two American Nuclear Power Plants,,http://doi.wiley.com/10.1111/j.1468-5973.1996.tb00082.x,1996,journalArticle,"Bourrier, Mathilde",Journal of Contingencies and Crisis Management Heuristic Approaches for Goal Recognition in Incomplete Domain Models,"Recent approaches to goal recognition have progressively relaxed the assumptions about the amount and correctness of domain knowledge and available observations, yielding accurate and efficient algorithms. These approaches, however, assume completeness and correctness of the domain theory against which their algorithms match observations: this is too strong for most real-world domains. In this paper, we develop goal recognition techniques that are capable of recognizing goals using \textit{incomplete} (and possibly incorrect) domain theories. We show the efficiency and accuracy of our approaches empirically against a large dataset of goal and plan recognition problems with incomplete domains.",http://arxiv.org/abs/1804.05917,2018,manuscript,"Pereira, Ramon Fraga; Meneguzzi, Felipe", Returns to scale in research,"When universities or university departments produce research outputs—such as published papers—they sometimes experience increasing returns to scale, sometimes constant returns to scale, and sometimes decreasing returns to scale. At the level of nations however, R&D tends to see increasing returns to scale. These results are preliminary. Background “Returns to scale” refers to the responsiveness of...",https://aiimpacts.org/returns-to-scale-in-research/,2016,blogPost,AI Impacts,AI Impacts """Go west, young man!"" - Preferences in (imperfect) maps","Many people are very nationalistic, putting their country above all others. Such people can be hazy about what ""above all others"" can mean, outside of a few clear examples - eg winning a total war totally. They're also very hazy on what is meant by ""their country"" - geography is certainly involved, as is proclaimed or legal nationality, maybe some ethnic groups or a language, or even just giving deference to certain ideals. Consider the plight of a communist Croatian Yugoslav nationalist during the 1990s... I'd argue that the situation these nationalists find themselves in - strong views on poorly defined concepts - is the general human state for preferences. Or, to use an appropriate map and territory analogy: * Most people forge their preferences by exploring their local territory, creating a mental map of this, and taking strong preferences over the concepts within their mental map. When the map starts to become imperfect, they will try to extend the concepts to new areas, so that their preferences can also be extended. Some of the debates about the meaning of words are about this extension-of-preferences process. Scott Alexander recommends that we dissolve concepts such as disease, looking for the relevant categories of 'deserves sympathy' and 'acceptable to treat in a medical way'. And that dissolving is indeed the correct thing for rationalists to do. 
But, for most people, including most rationalists, 'sick people deserve sympathy' is a starting moral principle, one we've learnt by example and experience in childhood. When we ask 'do obese people deserve sympathy?' we're trying to extend that moral principle to a situation where our map/model (which includes, say, three categories of people: healthy, mildly sick, very sick) no longer matches up with reality. Scott's dissolving process requires decomposing 'disease' into more nodes, and then applying moral principles to those individual nodes. In this case, a compelling consequentialist analy",https://www.alignmentforum.org/posts/pfmFe5fgEn2weJuer/go-west-young-man-preferences-in-imperfect-maps,2020,blogPost,"Armstrong, Stuart",AI Alignment Forum "A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy","Artificial general intelligence (AGI) is AI that can reason across a wide range of domains. It has long been considered the “grand dream” or “holy grail” of AI. It also poses major issues of ethics, risk, and policy due to its potential to transform society: if AGI is built, it could either help solve the world’s problems or cause major catastrophe, possibly even human extinction. This paper presents the first-ever survey of active AGI R&D projects in terms of ethics, risk, and policy. A thorough search identifies 45 projects of diverse sizes, nationalities, ethical goals, and other attributes. Most projects are either academic or corporate. The academic projects tend to express goals of advancing knowledge and are less likely to be active on AGI safety issues. The corporate projects tend to express goals of benefiting humanity and are more likely to be active on safety. Most projects are based in the US, and almost all are in either the US or a US ally, including all of the larger projects. This geographic concentration could simplify policymaking, though most projects publish open-source code, enabling contributions from anywhere in the world. These and other findings of the survey offer an empirical basis for the study of AGI R&D and a guide for policy and other action.",https://papers.ssrn.com/abstract=3070741,2017,report,"Baum, Seth", Verified compilation on a verified processor,"Developing technology for building verified stacks, i.e., computer systems with comprehensive proofs of correctness, is one way the science of programming languages furthers the computing discipline. While there have been successful projects verifying complex, realistic system components, including compilers (software) and processors (hardware), to date these verification efforts have not been compatible to the point of enabling a single end-to-end correctness theorem about running a verified compiler on a verified processor. In this paper we show how to extend the trustworthy development methodology of the CakeML project, including its verified compiler, with a connection to verified hardware. Our hardware target is Silver, a verified proof-of-concept processor that we introduce here. The result is an approach to producing verified stacks that scales to proving correctness, at the hardware level, of the execution of realistic software including compilers and proof checkers.
Alongside our hardware-level theorems, we demonstrate feasibility by hosting and running our verified artefacts on an FPGA board.",https://doi.org/10.1145/3314221.3314622,2019,conferencePaper,"Lööw, Andreas; Kumar, Ramana; Tan, Yong Kiam; Myreen, Magnus O.; Norrish, Michael; Abrahamsson, Oskar; Fox, Anthony",Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation "The Microstructure of the “Flash Crash”: Flow Toxicity, Liquidity Crashes, and the Probability of Informed Trading",,http://jpm.pm-research.com/lookup/doi/10.3905/jpm.2011.37.2.118,2011,journalArticle,"Easley, David; López de Prado, Marcos M.; O’Hara, Maureen",The Journal of Portfolio Management Learning Robotic Manipulation through Visual Planning and Acting,"Planning for robotic manipulation requires reasoning about the changes a robot can affect on objects. When such interactions can be modelled analytically, as in domains with rigid objects, efficient planning algorithms exist. However, in both domestic and industrial domains, the objects of interest can be soft, or deformable, and hard to model analytically. For such cases, we posit that a data-driven modelling approach is more suitable. In recent years, progress in deep generative models has produced methods that learn to `imagine' plausible images from data. Building on the recent Causal InfoGAN generative model, in this work we learn to imagine goal-directed object manipulation directly from raw image data of self-supervised interaction of the robot with the object. After learning, given a goal observation of the system, our model can generate an imagined plan -- a sequence of images that transition the object into the desired goal. To execute the plan, we use it as a reference trajectory to track with a visual servoing controller, which we also learn from the data as an inverse dynamics model. In a simulated manipulation task, we show that separating the problem into visual planning and visual tracking control is more sample efficient and more interpretable than alternative data-driven approaches. We further demonstrate our approach on learning to imagine and execute in 3 environments, the final of which is deformable rope manipulation on a PR2 robot.",http://www.roboticsproceedings.org/rss15/p74.pdf,2019,conferencePaper,"Wang, Angelina; Kurutach, Thanard; Liu, Kara; Abbeel, Pieter; Tamar, Aviv",Robotics: Science and Systems XV Do Deep Generative Models Know What They Don't Know?,"A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are widely viewed to be robust to such mistaken confidence as modeling the density of the input features can be used to detect novel, out-of-distribution inputs. In this paper we challenge this assumption. We find that the density learned by flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses (i.e. CIFAR-10) from those of house numbers (i.e. SVHN), assigning a higher likelihood to the latter when the model is trained on the former. Moreover, we find evidence of this phenomenon when pairing several popular image data sets: FashionMNIST vs MNIST, CelebA vs SVHN, ImageNet vs CIFAR-10 / CIFAR-100 / SVHN. 
To investigate this curious behavior, we focus analysis on flow-based generative models in particular since they are trained and evaluated via the exact marginal likelihood. We find such behavior persists even when we restrict the flows to constant-volume transformations. These transformations admit some theoretical analysis, and we show that the difference in likelihoods can be explained by the location and variances of the data and the model curvature. Our results caution against using the density estimates from deep generative models to identify inputs similar to the training distribution until their behavior for out-of-distribution inputs is better understood.",http://arxiv.org/abs/1810.09136,2019,conferencePaper,"Nalisnick, Eric; Matsukawa, Akihiro; Teh, Yee Whye; Gorur, Dilan; Lakshminarayanan, Balaji","arXiv:1810.09136 [cs, stat]" The future of growth: near-zero growth rates,"Exponential growth is a common pattern found throughout nature. Yet it is also a pattern that tends not to last, as growth rates tend to decline sooner or later. In biology, this pattern of exponential growth that wanes off is found in everything from the development of individual bodies — for instance, in the growth of […]",https://longtermrisk.org/the-future-of-growth-near-zero-growth-rates/,2017,blogPost,Center on Long-Term Risk,Center on Long-Term Risk Bridging Hamilton-Jacobi Safety Analysis and Reinforcement Learning,"Safety analysis is a necessary component in the design and deployment of autonomous systems. Techniques from robust optimal control theory, such as Hamilton-Jacobi reachability analysis, allow a rigorous formalization of safety as guaranteed constraint satisfaction. Unfortunately, the computational complexity of these tools for general dynamical systems scales poorly with state dimension, making existing tools impractical beyond small problems. Modern reinforcement learning methods have shown promising ability to find approximate yet proficient solutions to optimal control problems in complex and high-dimensional systems, however their formulation is restricted to problems with an additive payoff (reward) over time, unsuitable for reasoning about safety. In recent work, we proved that the problem of maximizing the minimum payoff over time, central to safety analysis, can be time-discounted to induce a contraction mapping. Here, we introduce a novel, timediscounted Safety Bellman Equation that renders reinforcement learning techniques amenable to quantitative safety analysis, enabling them to approximate the safe set and optimal safety policy. This opens a new avenue of research connecting controltheoretic safety analysis and the reinforcement learning domain. We demonstrate our formulation on a variety of simulated robotics tasks and reinforcement learning schemes, validating our results against analytic and numerical solutions when these can be obtained, and showing scalability to previously intractable problems of up to 18 state dimensions by exploiting state-of-the-art deep reinforcement learning algorithms.",https://ieeexplore.ieee.org/document/8794107/,2019,conferencePaper,"Fisac, Jaime F.; Lugovoy, Neil F.; Rubies-Royo, Vicenc; Ghosh, Shromona; Tomlin, Claire J.",2019 International Conference on Robotics and Automation (ICRA) Chinese Public AI R&D Spending: Provisional Findings,,,2019,report,"Arnold, Zachary; Acharya, Ashwin", Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree,"This is the second post in my sequence on moral anti-realism (see my previous post). 
This second post should work perfectly when read as a standalone piece. MY MOTIVATION TO WRITE THIS POST For this post, I set out to describe and analyze what arguably constitutes the most fundamental disagreement in philosophy: realism versus anti-realism, not only in metaethics but in general. In one form or another, this disagreement also shows up in the philosophy of mind, personal identity, aesthetics, epistemology, the philosophy of physics, and so on. Belief in different types of realism is correlated (see the appendix), so I see significant benefits to also discussing other types of realism. For instance, moral philosophers on both sides of the realism/anti-realism divide have drawn comparisons from different domains in support of their positions.[1] To address those comparisons, prior discussion of those other domains is helpful. However, I tried to structure this post to be more than a (second) introduction. Even though my aim is to neutrally describe the core differences between realism and anti-realism, in doing so I will already present some of my main arguments for anti-realism. My most persuasive “argument” against (moral) realism isn’t any single knockdown objection, but rather my overall impression that when we go from realism to anti-realism, we don’t have to give up anything worth wanting. I expect (moral) realists to disagree with that sentiment, in part because I could imagine that many may not have been motivated to explore the option space for moral reasoning under anti-realism (“to make anti-realism work”). I wrote this post to make what I perceive to be underappreciated points about how realism comes with some surprisingly non-trivial claims, and how anti-realism doesn’t have to mean throwing up one’s hands saying “anything goes.” SUMMARY * We tend to have the feeling that disagreements on what’s “moral,” “conscious,” “(epistemically) right,” and so on",https://forum.effectivealtruism.org/posts/6nPnqXCaYsmXCtjTk/moral-anti-realism-sequence-2-why-realists-and-anti-realists,2020,blogPost,"Gloor, Lukas",Effective Altruism Forum Unifying scene registration and trajectory optimization for learning from demonstrations with application to manipulation of deformable objects,"Recent work [1], [2] has shown promising results in enabling robotic manipulation of deformable objects through learning from demonstrations. Their method computes a registration from training scene to test scene, and then applies an extrapolation of this registration to the training scene gripper motion to obtain the gripper motion for the test scene. The warping cost of scene-to-scene registrations is used to determine the nearest neighbor from a set of training demonstrations. Then once the gripper motion has been generalized to the test situation, they apply trajectory optimization [3] to plan for the robot motions that will track the predicted gripper motions. In many situations, however, the predicted gripper motions cannot be followed perfectly due to, for example, joint limits or obstacles. 
In this case the past work finds a path that minimizes deviation from the predicted gripper trajectory as measured by its Euclidean distance for position and angular distance for orientation.",http://ieeexplore.ieee.org/document/6943185/,2014,conferencePaper,"Lee, Alex X.; Huang, Sandy H.; Hadfield-Menell, Dylan; Tzeng, Eric; Abbeel, Pieter",2014 IEEE/RSJ International Conference on Intelligent Robots and Systems Learning from Richer Human Guidance: Augmenting Comparison-Based Learning with Feature Queries,"We focus on learning the desired objective function for a robot. Although trajectory demonstrations can be very informative of the desired objective, they can also be difficult for users to provide. Answers to comparison queries, asking which of two trajectories is preferable, are much easier for users, and have emerged as an effective alternative. Unfortunately, comparisons are far less informative. We propose that there is much richer information that users can easily provide and that robots ought to leverage. We focus on augmenting comparisons with feature queries, and introduce a unified formalism for treating all answers as observations about the true desired reward. We derive an active query selection algorithm, and test these queries in simulation and on real users. We find that richer, feature-augmented queries can extract more information faster, leading to robots that better match user preferences in their behavior.",http://arxiv.org/abs/1802.01604,2018,conferencePaper,"Basu, Chandrayee; Singhal, Mukesh; Dragan, Anca D.",Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction - HRI '18 Towards Characterizing Divergence in Deep Q-Learning,"Deep Q-Learning (DQL), a family of temporal difference algorithms for control, employs three techniques collectively known as the `deadly triad' in reinforcement learning: bootstrapping, off-policy learning, and function approximation. Prior work has demonstrated that together these can lead to divergence in Q-learning algorithms, but the conditions under which divergence occurs are not well-understood. In this note, we give a simple analysis based on a linear approximation to the Q-value updates, which we believe provides insight into divergence under the deadly triad. The central point in our analysis is to consider when the leading order approximation to the deep-Q update is or is not a contraction in the sup norm. Based on this analysis, we develop an algorithm which permits stable deep Q-learning for continuous control without any of the tricks conventionally used (such as target networks, adaptive gradient optimizers, or using multiple Q functions). We demonstrate that our algorithm performs above or near state-of-the-art on standard MuJoCo benchmarks from the OpenAI Gym.",http://arxiv.org/abs/1903.08894,2019,manuscript,"Achiam, Joshua; Knight, Ethan; Abbeel, Pieter", Delegative Reinforcement Learning: Learning To Avoid Traps With A Little Help,"Most known regret bounds for reinforcement learning are either episodic or assume an environment without traps. We derive a regret bound without making either assumption, by allowing the algorithm to occasionally delegate an action to an external advisor. We thus arrive at a setting of active one-shot model-based reinforcement learning that we call DRL (delegative reinforcement learning.) 
The algorithm we construct in order to demonstrate the regret bound is a variant of Posterior Sampling Reinforcement Learning supplemented by a subroutine that decides which actions should be delegated. The algorithm is not anytime, since the parameters must be adjusted according to the target time discount. Currently, our analysis is limited to Markov decision processes with finite numbers of hypotheses, states and actions.",,2019,conferencePaper,"Kosoy, Vanessa", Psychlab: A Psychology Laboratory for Deep Reinforcement Learning Agents,"Psychlab is a simulated psychology laboratory inside the first-person 3D game world of DeepMind Lab (Beattie et al. 2016). Psychlab enables implementations of classical laboratory psychological experiments so that they work with both human and artificial agents. Psychlab has a simple and flexible API that enables users to easily create their own tasks. As examples, we are releasing Psychlab implementations of several classical experimental paradigms including visual search, change detection, random dot motion discrimination, and multiple object tracking. We also contribute a study of the visual psychophysics of a specific state-of-the-art deep reinforcement learning agent: UNREAL (Jaderberg et al. 2016). This study leads to the surprising conclusion that UNREAL learns more quickly about larger target stimuli than it does about smaller stimuli. In turn, this insight motivates a specific improvement in the form of a simple model of foveal vision that turns out to significantly boost UNREAL's performance, both on Psychlab tasks, and on standard DeepMind Lab tasks. By open-sourcing Psychlab we hope to facilitate a range of future such studies that simultaneously advance deep reinforcement learning and improve its links with cognitive science.",http://arxiv.org/abs/1801.08116,2018,manuscript,"Leibo, Joel Z.; d'Autume, Cyprien de Masson; Zoran, Daniel; Amos, David; Beattie, Charles; Anderson, Keith; Castañeda, Antonio García; Sanchez, Manuel; Green, Simon; Gruslys, Audrunas; Legg, Shane; Hassabis, Demis; Botvinick, Matthew M.", Safely Probabilistically Complete Real-Time Planning and Exploration in Unknown Environments,"We present a new framework for motion planning that wraps around existing kinodynamic planners and guarantees recursive feasibility when operating in a priori unknown, static environments. Our approach makes strong guarantees about overall safety and collision avoidance by utilizing a robust controller derived from reachability analysis. We ensure that motion plans never exit the safe backward reachable set of the initial state, while safely exploring the space. This preserves the safety of the initial state, and guarantees that that we will eventually find the goal if it is possible to do so while exploring safely. We implement our framework in the Robot Operating System (ROS) software environment and demonstrate it in a real-time simulation.",http://arxiv.org/abs/1811.07834,2018,conferencePaper,"Fridovich-Keil, David; Fisac, Jaime F.; Tomlin, Claire J.",2019 International Conference on Robotics and Automation (ICRA) Goals and short descriptions,"OUTLINE I develop some contents—previously introduced in the Value Learning sequence by Rohin Shah—more formally, to clarify the distinction between agents with and without a goal. Then I present related work and make some considerations on the relation between safety and goal-directedness. The appendix contains some details on the used formalism and can be skipped without losing much information. 
A BRIEF PRELIMINARY In the first post of the Value Learning sequence, Shah compares two agents that exhibit the same behaviour (a winning strategy) when playing Tic-Tac-Toe, but are different in their design: one applies the minimax algorithm to the setting and rules of the game, while the other one follows a lookup table—you can think of its code as a long sequence of if-else statements. Shah highlights the difference in terms of generalisation: the first one would still win if the winning conditions were changed, while the lookup table would not. Generalisation is one of the components of goal-directedness, and lookup tables are among the least goal-directed agent designs. Here I want to point at another difference that exists between agents with and without a goal, based on the concept of algorithmic complexity. SETUP Most problems in AI consist in finding a function π∈A^O, called policy in some contexts, where A={a_1,…,a_m} and O={o_1,…,o_n} indicate the sets of possible actions and observations. A deterministic policy can be written as a string π=a_{i_1}a_{i_2}…a_{i_n} with a_{i_k} indicating the action taken when o_k is observed. Here I consider a problem setting as a triplet (A,O,D) where D stands for some kind of environmental data—could be about, for example, the transition function in a MDP, or the structure of the elements in the search space O. Since I want to analyse behaviour across different environments, instead of considering one single policy I’ll sometimes refer to a more general function g (probably closer to the concept of “agent design”, rather than just “agent”) ma",https://www.alignmentforum.org/posts/d4NgfKY3cq9yiBLSM/goals-and-short-descriptions,2020,blogPost,"Campolo, Michele",AI Alignment Forum Agent57: Outperforming the Atari Human Benchmark,"Atari games have been a long-standing benchmark in the reinforcement learning (RL) community for the past decade. This benchmark was proposed to test general competency of RL algorithms. Previous work has achieved good average performance by doing outstandingly well on many games of the set, but very poorly in several of the most challenging games. We propose Agent57, the first deep RL agent that outperforms the standard human benchmark on all 57 Atari games. To achieve this result, we train a neural network which parameterizes a family of policies ranging from very exploratory to purely exploitative. We propose an adaptive mechanism to choose which policy to prioritize throughout the training process. Additionally, we utilize a novel parameterization of the architecture that allows for more consistent and stable learning.",http://arxiv.org/abs/2003.13350,2020,manuscript,"Badia, Adrià Puigdomènech; Piot, Bilal; Kapturowski, Steven; Sprechmann, Pablo; Vitvitskyi, Alex; Guo, Daniel; Blundell, Charles", Adversarial Robustness through Local Linearization,"Adversarial training is an effective methodology for training deep neural networks that are robust against adversarial, norm-bounded perturbations. However, the computational cost of adversarial training grows prohibitively as the size of the model and number of input dimensions increase. Further, training against less expensive and therefore weaker adversaries produces models that are robust against weak attacks but break down under attacks that are stronger.
This is often attributed to the phenomenon of gradient obfuscation; such models have a highly non-linear loss surface in the vicinity of training examples, making it hard for gradient-based attacks to succeed even though adversarial examples still exist. In this work, we introduce a novel regularizer that encourages the loss to behave linearly in the vicinity of the training data, thereby penalizing gradient obfuscation while encouraging robustness. We show via extensive experiments on CIFAR-10 and ImageNet, that models trained with our regularizer avoid gradient obfuscation and can be trained significantly faster than adversarial training. Using this regularizer, we exceed current state of the art and achieve 47% adversarial accuracy for ImageNet with l-infinity adversarial perturbations of radius 4/255 under an untargeted, strong, white-box attack. Additionally, we match state of the art results for CIFAR-10 at 8/255.",https://proceedings.neurips.cc/paper/2019/hash/0defd533d51ed0a10c5c9dbf93ee78a5-Abstract.html,2019,conferencePaper,"Qin, Chongli; Martens, James; Gowal, Sven; Krishnan, Dilip; Dvijotham, Krishnamurthy; Fawzi, Alhussein; De, Soham; Stanforth, Robert; Kohli, Pushmeet",Advances in Neural Information Processing Systems 32 (NeurIPS 2019) Human-AI Learning Performance in Multi-Armed Bandits,"People frequently face challenging decision-making problems in which outcomes are uncertain or unknown. Artificial intelligence (AI) algorithms exist that can outperform humans at learning such tasks. Thus, there is an opportunity for AI agents to assist people in learning these tasks more effectively. In this work, we use a multi-armed bandit as a controlled setting in which to explore this direction. We pair humans with a selection of agents and observe how well each human-agent team performs. We find that team performance can beat both human and agent performance in isolation. Interestingly, we also find that an agent’s performance in isolation does not necessarily correlate with the human-agent team’s performance. A drop in agent performance can lead to a disproportionately large drop in team performance, or in some settings can even improve team performance. Pairing a human with an agent that performs slightly better than them can make them perform much better, while pairing them with an agent that performs the same can make them perform much worse. Further, our results suggest that people have different exploration strategies and might perform better with agents that match their strategy. Overall, optimizing human-agent team performance requires going beyond optimizing agent performance, to understanding how the agent’s suggestions will influence human decision-making.",http://arxiv.org/abs/1812.09376,2019,conferencePaper,"Pandya, Ravi; Huang, Sandy H.; Hadfield-Menell, Dylan; Dragan, Anca D.","AIES '19: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society" A Critical Look at Risk Assessments for Global Catastrophes,,https://onlinelibrary.wiley.com/doi/abs/10.1111/j.0272-4332.2004.00419.x,2004,journalArticle,"Kent, Adrian",Risk Analysis Value-Decomposition Networks For Cooperative Multi-Agent Learning,"We study the problem of cooperative multi-agent reinforcement learning with a single joint reward signal. This class of learning problems is difficult because of the often large combined action and observation spaces. 
In the fully centralized and decentralized approaches, we find the problem of spurious rewards and a phenomenon we call the ""lazy agent"" problem, which arises due to partial observability. We address these problems by training individual agents with a novel value decomposition network architecture, which learns to decompose the team value function into agent-wise value functions. We perform an experimental evaluation across a range of partially-observable multi-agent domains and show that learning such value-decompositions leads to superior results, in particular when combined with weight sharing, role information and information channels.",http://arxiv.org/abs/1706.05296,2018,conferencePaper,"Sunehag, Peter; Lever, Guy; Gruslys, Audrunas; Czarnecki, Wojciech Marian; Zambaldi, Vinicius; Jaderberg, Max; Lanctot, Marc; Sonnerat, Nicolas; Leibo, Joel Z.; Tuyls, Karl; Graepel, Thore",Proc. of the 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2018) Smarter than us: The rise of machine intelligence,,,2014,book,"Armstrong, Stuart", Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks,,http://link.springer.com/10.1007/978-3-319-63387-9_5,2017,bookSection,"Katz, Guy; Barrett, Clark; Dill, David L.; Julian, Kyle; Kochenderfer, Mykel J.",Computer Aided Verification On the promotion of safe and socially beneficial artificial intelligence,,http://link.springer.com/10.1007/s00146-016-0677-0,2017,journalArticle,"Baum, Seth D.",AI & Society Program equilibrium,,https://linkinghub.elsevier.com/retrieve/pii/S0899825604000314,2004,journalArticle,"Tennenholtz, Moshe",Games and Economic Behavior Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks,"We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning.",http://arxiv.org/abs/1602.07868,2016,manuscript,"Salimans, Tim; Kingma, Diederik P.", Disentangling arguments for the importance of AI safety,"[Note: my views have changed since writing this post, and while I still consider it useful as a catalogue of concerns, I no longer think that it satisfactorily disentangles those concerns from each other. I hope to post better material along these lines later this year]. I recently attended the 2019 Beneficial AGI conference organised by the Future of Life Institute. I’ll publish a more complete write-up later, but I was particularly struck by how varied attendees' reasons for considering AI safety important were. 
Before this, I’d observed a few different lines of thought, but interpreted them as different facets of the same idea. Now, though, I’ve identified at least 6 distinct serious arguments for why AI safety is a priority. By distinct I mean that you can believe any one of them without believing any of the others - although of course the particular categorisation I use is rather subjective, and there’s a significant amount of overlap. In this post I give a brief overview of my own interpretation of each argument (note that I don’t necessarily endorse them myself). They are listed roughly from most specific and actionable to most general. I finish with some thoughts on what to make of this unexpected proliferation of arguments. Primarily, I think it increases the importance of clarifying and debating the core ideas in AI safety. 1. Maximisers are dangerous. Superintelligent AGI will behave as if it’s maximising the expectation of some utility function, since doing otherwise can be shown to be irrational. Yet we can’t write down a utility function which precisely describes human values, and optimising very hard for any other function will lead to that AI rapidly seizing control (as a convergent instrumental subgoal) and building a future which contains very little of what we value (because of Goodhart’s law and the complexity and fragility of values). We won’t have a chance to notice and correct misalignment because an AI which",https://www.alignmentforum.org/posts/JbcWQCxKWn3y49bNB/disentangling-arguments-for-the-importance-of-ai-safety,2019,blogPost,"Ngo, Richard",AI Alignment Forum Bias in AI: How we Build Fair AI Systems and Less-Biased Humans,"Without a process to guide the responsible development of trustworthy AI, our systems won’t benefit society — in fact, AI systems could exacerbate the negative consequences of unconscious bias.",https://www.ibm.com/blogs/policy/bias-in-ai/,2018,blogPost,Anonymous,THINKPolicy Blog Deep learning from crowds,"Over the last few years, deep learning has revolutionized the field of machine learning by dramatically improving the state-of-the-art in various domains. However, as the size of supervised artificial neural networks grows, typically so does the need for larger labeled datasets. Recently, crowdsourcing has established itself as an efficient and cost-effective solution for labeling large sets of data in a scalable manner, but it often requires aggregating labels from multiple noisy contributors with different levels of expertise. In this paper, we address the problem of learning deep neural networks from crowds. We begin by describing an EM algorithm for jointly learning the parameters of the network and the reliabilities of the annotators. Then, a novel general-purpose crowd layer is proposed, which allows us to train deep neural networks end-to-end, directly from the noisy labels of multiple annotators, using only backpropagation. We empirically show that the proposed approach is able to internally capture the reliability and biases of different annotators and achieve new state-of-the-art results for various crowdsourced datasets across different settings, namely classification, regression and sequence labeling.",http://arxiv.org/abs/1709.01779,2017,conferencePaper,"Rodrigues, Filipe; Pereira, Francisco",Proc. of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18) One-Shot Observation Learning Using Visual Activity Features,"Observation learning is the process of learning a task by observing an expert demonstrator. 
Our principal contribution is a one-shot learning method for robot manipulation tasks in which only a single demonstration is required. The key idea is to encode the demonstration in an activity space defined as part of a previously trained activity classifier. The distance between this encoding and equivalent encodings from trials of a robot performing the same task provides a reward function supporting iterative learning of task completion by the robotic manipulator. We use reinforcement learning for experiments with a simulated robotic manipulator, and stochastic trajectory optimisation for experiments with a real robotic manipulator. We show that the proposed method can be used to learn tasks from a single demonstration under varying viewpoint of observation, object properties, scene background and morphology of the manipulator. Videos of all results, including demonstrations, can be found on: https://tinyurl.com/s2l-stage1",http://arxiv.org/abs/1810.07483,2019,manuscript,"Pauly, Leo; Agboh, Wisdom C.; Hogg, David C.; Fuentes, Raul", The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities,"Biological evolution provides a creative fount of complex and subtle adaptations, often surprising the scientists who discover them. However, because evolution is an algorithmic process that transcends the substrate in which it occurs, evolution's creativity is not limited to nature. Indeed, many researchers in the field of digital evolution have observed their evolving algorithms and organisms subverting their intentions, exposing unrecognized bugs in their code, producing unexpected adaptations, or exhibiting outcomes uncannily convergent with ones in nature. Such stories routinely reveal creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead they are often treated as mere obstacles to be overcome, rather than results that warrant study in their own right. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. 
In doing so we also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems.",http://arxiv.org/abs/1803.03453,2019,journalArticle,"Lehman, Joel; Clune, Jeff; Misevic, Dusan; Adami, Christoph; Altenberg, Lee; Beaulieu, Julie; Bentley, Peter J.; Bernard, Samuel; Beslon, Guillaume; Bryson, David M.; Chrabaszcz, Patryk; Cheney, Nick; Cully, Antoine; Doncieux, Stephane; Dyer, Fred C.; Ellefsen, Kai Olav; Feldt, Robert; Fischer, Stephan; Forrest, Stephanie; Frénoy, Antoine; Gagné, Christian; Goff, Leni Le; Grabowski, Laura M.; Hodjat, Babak; Hutter, Frank; Keller, Laurent; Knibbe, Carole; Krcah, Peter; Lenski, Richard E.; Lipson, Hod; MacCurdy, Robert; Maestre, Carlos; Miikkulainen, Risto; Mitri, Sara; Moriarty, David E.; Mouret, Jean-Baptiste; Nguyen, Anh; Ofria, Charles; Parizeau, Marc; Parsons, David; Pennock, Robert T.; Punch, William F.; Ray, Thomas S.; Schoenauer, Marc; Shulte, Eric; Sims, Karl; Stanley, Kenneth O.; Taddei, François; Tarapore, Danesh; Thibault, Simon; Weimer, Westley; Watson, Richard; Yosinski, Jason",Artificial Life LESS is More: Rethinking Probabilistic Models of Human Behavior,"Robots need models of human behavior for both inferring human goals and preferences, and predicting what people will do. A common model is the Boltzmann noisily-rational decision model, which assumes people approximately optimize a reward function and choose trajectories in proportion to their exponentiated reward. While this model has been successful in a variety of robotics domains, its roots lie in econometrics, and in modeling decisions among different discrete options, each with its own utility or reward. In contrast, human trajectories lie in a continuous space, with continuous-valued features that influence the reward function. We propose that it is time to rethink the Boltzmann model, and design it from the ground up to operate over such trajectory spaces. We introduce a model that explicitly accounts for distances between trajectories, rather than only their rewards. Rather than each trajectory affecting the decision independently, similar trajectories now affect the decision together. We start by showing that our model better explains human behavior in a user study. We then analyze the implications this has for robot inference, first in toy environments where we have ground truth and find more accurate inference, and finally for a 7DOF robot arm learning from user demonstrations.",http://arxiv.org/abs/2001.04465,2020,conferencePaper,"Bobu, Andreea; Scobee, Dexter R. R.; Fisac, Jaime F.; Sastry, S. 
Shankar; Dragan, Anca D.",Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda,,http://link.springer.com/10.1007/978-3-662-54033-6_5,2017,bookSection,"Soares, Nate; Fallenstein, Benya",The Technological Singularity Mapping Intelligence: Requirements and Possibilities,,http://link.springer.com/10.1007/978-3-319-96448-5_13,2018,bookSection,"Bhatnagar, Sankalp; Alexandrova, Anna; Avin, Shahar; Cave, Stephen; Cheke, Lucy; Crosby, Matthew; Feyereisl, Jan; Halina, Marta; Loe, Bao Sheng; Ó hÉigeartaigh, Seán; Martínez-Plumed, Fernando; Price, Huw; Shevlin, Henry; Weller, Adrian; Winfield, Alan; Hernández-Orallo, José",Philosophy and Theory of Artificial Intelligence 2017 Differential Privacy,,http://link.springer.com/10.1007/978-1-4419-5906-5_752,2011,bookSection,"Dwork, Cynthia",Encyclopedia of Cryptography and Security Social and Governance Implications of Improved Data Efficiency,"Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the social-economic impact of increased data efficiency. Specifically, we examine the intuition that data efficiency will erode the barriers to entry protecting incumbent data-rich AI firms, exposing them to more competition from data-poor firms. We find that this intuition is only partially correct: data efficiency makes it easier to create ML applications, but large AI firms may have more to gain from higher performing AI systems. Further, we find that the effect on privacy, data markets, robustness, and misuse are complex. For example, while it seems intuitive that misuse risk would increase along with data efficiency – as more actors gain access to any level of capability – the net effect crucially depends on how much defensive measures are improved. More investigation into data efficiency, as well as research into the “AI production function"", will be key to understanding the development of the AI industry and its societal impacts.",http://arxiv.org/abs/2001.05068,2020,conferencePaper,"Tucker, Aaron D.; Anderljung, Markus; Dafoe, Allan","Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society" On Handling Self-masking and Other Hard Missing Data Problems,,https://why19.causalai.net/papers/mohan-why19.pdf,2019,conferencePaper,"Mohan, Karthika","Beyond Curve Fitting: Causation, Counterfactuals, and Imagination-based AI (AAAI Spring Symposium)" More Robust Doubly Robust Off-policy Evaluation,"We study the problem of off-policy evaluation (OPE) in reinforcement learning (RL), where the goal is to estimate the performance of a policy from the data generated by another policy(ies). In particular, we focus on the doubly robust (DR) estimators that consist of an importance sampling (IS) component and a performance model, and utilize the low (or zero) bias of IS and low variance of the model at the same time. Although the accuracy of the model has a huge impact on the overall performance of DR, most of the work on using the DR estimators in OPE has been focused on improving the IS part, and not much on how to learn the model. In this paper, we propose alternative DR estimators, called more robust doubly robust (MRDR), that learn the model parameter by minimizing the variance of the DR estimator. We first present a formulation for learning the DR model in RL. 
We then derive formulas for the variance of the DR estimator in both contextual bandits and RL, such that their gradients w.r.t.~the model parameters can be estimated from the samples, and propose methods to efficiently minimize the variance. We prove that the MRDR estimators are strongly consistent and asymptotically optimal. Finally, we evaluate MRDR in bandits and RL benchmark problems, and compare its performance with the existing methods.",http://arxiv.org/abs/1802.03493,2018,conferencePaper,"Farajtabar, Mehrdad; Chow, Yinlam; Ghavamzadeh, Mohammad",Proceedings of the 35th International Conference on Machine Learning Exploration Strategies in Deep Reinforcement Learning,Exploitation versus exploration is a critical topic in reinforcement learning. This post introduces several common approaches for better exploration in Deep RL.,https://lilianweng.github.io/2020/06/07/exploration-strategies-in-deep-reinforcement-learning.html,2020,blogPost,"Weng, Lilian",Lil'Log Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function),"Utility functions or their equivalents (value functions, objective functions, loss functions, reward functions, preference orderings) are a central tool in most current machine learning systems. These mechanisms for defining goals and guiding optimization run into practical and conceptual difficulty when there are independent, multi-dimensional objectives that need to be pursued simultaneously and cannot be reduced to each other. Ethicists have proved several impossibility theorems that stem from this origin; those results appear to show that there is no way of formally specifying what it means for an outcome to be good for a population without violating strong human ethical intuitions (in such cases, the objective function is a social welfare function). We argue that this is a practical problem for any machine learning system (such as medical decision support systems or autonomous weapons) or rigidly rule-based bureaucracy that will make high stakes decisions about human lives: such systems should not use objective functions in the strict mathematical sense. We explore the alternative of using uncertain objectives, represented for instance as partially ordered preferences, or as probability distributions over total orders. We show that previously known impossibility theorems can be transformed into uncertainty theorems in both of those settings, and prove lower bounds on how much uncertainty is implied by the impossibility results. We close by proposing two conjectures about the relationship between uncertainty in objectives and severe unintended consequences from AI systems.",http://arxiv.org/abs/1901.00064,2019,manuscript,"Eckersley, Peter", Good and safe uses of AI Oracles,"It is possible that powerful and potentially dangerous artificial intelligence (AI) might be developed in the future. An Oracle is a design which aims to restrain the impact of a potentially dangerous AI by restricting the agent to no actions besides answering questions. Unfortunately, most Oracles will be motivated to gain more control over the world by manipulating users through the content of their answers, and Oracles of potentially high intelligence might be very successful at this \citep{DBLP:journals/corr/AlfonsecaCACAR16}. 
In this paper we present two designs for Oracles which, even under pessimistic assumptions, will not manipulate their users into releasing them and yet will still be incentivised to provide their users with helpful answers. The first design is the counterfactual Oracle -- which chooses its answer as if it expected nobody to ever read it. The second design is the low-bandwidth Oracle -- which is limited by the quantity of information it can transmit.",http://arxiv.org/abs/1711.05541,2018,manuscript,"Armstrong, Stuart; O'Rorke, Xavier", An Architectural Risk Analysis of Machine Learning Systems: Toward More Secure Machine Learning,,https://www.garymcgraw.com/wp-content/uploads/2020/02/BIML-ARA.pdf,2020,report,"McGraw, Gary; Figueroa, Harold; Shepardson, Victor; Bonett, Richie", Imitation Learning from Observations by Minimizing Inverse Dynamics Disagreement,"This paper studies Learning from Observations (LfO) for imitation learning with access to state-only demonstrations. In contrast to Learning from Demonstration (LfD) that involves both action and state supervision, LfO is more practical in leveraging previously inapplicable resources (e.g. videos), yet more challenging due to the incomplete expert guidance. In this paper, we investigate LfO and its difference with LfD in both theoretical and practical perspectives. We first prove that the gap between LfD and LfO actually lies in the disagreement of inverse dynamics models between the imitator and the expert, if following the modeling approach of GAIL. More importantly, the upper bound of this gap is revealed by a negative causal entropy which can be minimized in a model-free way. We term our method as Inverse-Dynamics-Disagreement-Minimization (IDDM) which enhances the conventional LfO method through further bridging the gap to LfD. Considerable empirical results on challenging benchmarks indicate that our method attains consistent improvements over other LfO counterparts.",https://arxiv.org/abs/1910.04417v4,2019,conferencePaper,"Yang, Chao; Ma, Xiaojian; Huang, Wenbing; Sun, Fuchun; Liu, Huaping; Huang, Junzhou; Gan, Chuang",Advances in Neural Information Processing Systems 32 (NeurIPS 2019) Working together to face humanity’s greatest threats: Introduction to The Future of Research on Catastrophic and Existential Risk.,"Ours is a resilient species. Around 70,000 years ago our total population may have fallen to between three and ten thousand individuals, possibly due to a supervolcanic eruption (Ambrose 1998). Yet our ancestors survived, squeezed through the bottleneck, and flourished. But this resilience cannot be taken for granted. We are interconnected and interdependent as never before; the power and scale of our technological capacities are unprecedented. We are in uncharted waters and thus our previous survival is no longer a reason to expect our continued survival (Bostrom 2013). 
As a result, it is urgent that we develop a systematic understanding of the nature and causes of catastrophic and existential risks.",https://www.repository.cam.ac.uk/handle/1810/280193,2018,journalArticle,"Currie, Adrian; Ó HÉigeartaigh, Seán; Apollo-University Of Cambridge Repository; Apollo-University Of Cambridge Repository",Futures Some Moral and Technical Consequences of Automation,,https://www.sciencemag.org/lookup/doi/10.1126/science.131.3410.1355,1960,journalArticle,"Wiener, N.",Science Using Causal Analysis to Learn Specifications from Task Demonstrations,"Learning models of user behaviour is an important problem that is broadly applicable across many application domains requiring human-robot interaction. In this work we show that it is possible to learn a generative model for distinct user behavioral types, extracted from human demonstrations, by enforcing clustering of preferred task solutions within the latent space. We use this model to differentiate between user types and to find cases with overlapping solutions. Moreover, we can alter an initially guessed solution to satisfy the preferences that constitute a particular user type by backpropagating through the learned differentiable model. An advantage of structuring generative models in this way is that it allows us to extract causal relationships between symbols that might form part of the user's specification of the task, as manifested in the demonstrations. We show that the proposed method is capable of correctly distinguishing between three user types, who differ in degrees of cautiousness in their motion, while performing the task of moving objects with a kinesthetically driven robot in a tabletop environment. Our method successfully identifies the correct type, within the specified time, in 99% [97.8 - 99.8] of the cases, which outperforms an IRL baseline. We also show that our proposed method correctly changes a default trajectory to one satisfying a particular user specification even with unseen objects. The resulting trajectory is shown to be directly implementable on a PR2 humanoid robot completing the same task.",http://arxiv.org/abs/1903.01267,2019,conferencePaper,"Angelov, Daniel; Hristov, Yordan; Ramamoorthy, Subramanian",Proc. 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019) Neuron Shapley: Discovering the Responsible Neurons,"We develop Neuron Shapley as a new framework to quantify the contribution of individual neurons to the prediction and performance of a deep network. By accounting for interactions across neurons, Neuron Shapley is more effective in identifying important filters compared to common approaches based on activation patterns. Interestingly, removing just 30 filters with the highest Shapley scores effectively destroys the prediction accuracy of Inception-v3 on ImageNet. Visualization of these few critical filters provides insights into how the network functions. Neuron Shapley is a flexible framework and can be applied to identify responsible neurons in many tasks. We illustrate additional applications of identifying filters that are responsible for biased prediction in facial recognition and filters that are vulnerable to adversarial attacks. Removing these filters is a quick way to repair models. 
Enabling all these applications is a new multi-arm bandit algorithm that we developed to efficiently estimate Neuron Shapley values.",http://arxiv.org/abs/2002.09815,2020,conferencePaper,"Ghorbani, Amirata; Zou, James","34th Conference on Neural Information Processing Systems (NeurIPS 2020)," A mechanistic model of meditation,"Meditation has been claimed to have all kinds of transformative effects on the psyche, such as improving concentration ability, healing trauma, cleaning up delusions, allowing one to track their subconscious strategies, and making one’s nervous system more efficient. However, an explanation for why and how exactly this would happen has typically been lacking. This makes people reasonably skeptical of such claims. In this post, I want to offer an explanation for one kind of a mechanism: meditation increasing the degree of a person’s introspective awareness, and thus leading to increasing psychological unity as internal conflicts are detected and resolved. Note that this post does not discuss “enlightenment”. That is a related but separate topic. It is possible to pursue meditation mainly for its ordinary psychological benefits while being uninterested in enlightenment, and vice versa. WHAT IS INTROSPECTIVE AWARENESS? In an earlier post on introspective awareness, I distinguished between being aware of something, and being aware of having been aware of something. My example involved that of a robot whose consciousness contains one mental object at a time, and which is aware of different things at different times: Robot’s thought at time 1: It’s raining outsideRobot’s thought at time 2: Battery lowRobot’s thought at time 3: Technological unemployment protestors are outsideRobot’s thought at time 4: Battery lowRobot’s thought at time 5: I’m now recharging my batteryAt times 2-5, the robot has no awareness of the fact that it was thinking about rain at time 1. As soon as something else captures its attention, it has no idea of this earlier conscious content - unless a particular subsystem happens to record the fact, and can later re-present the content in an appropriately tagged form: Time 6: At time 1, there was the thought that [It’s raining outside]I said that at time 6, the robot had a moment of introspective awareness: a mental object containing a summary of",https://www.lesswrong.com/posts/WYmmC3W6ZNhEgAmWG/a-mechanistic-model-of-meditation,2019,blogPost,"Sotala, Kaj",LessWrong Risk analysis and risk management for the artificial superintelligence research and development process,,,2017,bookSection,"Barrett, Anthony M.; Baum, Seth D.",The Technological Singularity Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?,"Algorithmic approaches to interpreting machine learning models have proliferated in recent years. We carry out human subject tests that are the first of their kind to isolate the effect of algorithmic explanations on a key aspect of model interpretability, simulatability, while avoiding important confounding experimental factors. A model is simulatable when a person can predict its behavior on new inputs. Through two kinds of simulation tests involving text and tabular data, we evaluate five explanations methods: (1) LIME, (2) Anchor, (3) Decision Boundary, (4) a Prototype model, and (5) a Composite approach that combines explanations from each method. 
Clear evidence of method effectiveness is found in very few cases: LIME improves simulatability in tabular classification, and our Prototype method is effective in counterfactual simulation tests. We also collect subjective ratings of explanations, but we do not find that ratings are predictive of how helpful explanations are. Our results provide the first reliable and comprehensive estimates of how explanations influence simulatability across a variety of explanation methods and data domains. We show that (1) we need to be careful about the metrics we use to evaluate explanation methods, and (2) there is significant room for improvement in current methods. All our supporting code, data, and models are publicly available at: https://github.com/peterbhase/InterpretableNLP-ACL2020",http://arxiv.org/abs/2005.01831,2020,conferencePaper,"Hase, Peter; Bansal, Mohit",Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics Should Robots be Obedient?,"Intuitively, obedience -- following the order that a human gives -- seems like a good property for a robot to have. But, we humans are not perfect and we may give orders that are not best aligned to our preferences. We show that when a human is not perfectly rational then a robot that tries to infer and act according to the human's underlying preferences can always perform better than a robot that simply follows the human's literal order. Thus, there is a tradeoff between the obedience of a robot and the value it can attain for its owner. We investigate how this tradeoff is impacted by the way the robot infers the human's preferences, showing that some methods err more on the side of obedience than others. We then analyze how performance degrades when the robot has a misspecified model of the features that the human cares about or the level of rationality of the human. Finally, we study how robots can start detecting such model misspecification. Overall, our work suggests that there might be a middle ground in which robots intelligently decide when to obey human orders, but err on the side of obedience.",http://arxiv.org/abs/1705.09990,2017,conferencePaper,"Milli, Smitha; Hadfield-Menell, Dylan; Dragan, Anca; Russell, Stuart",IJCAI'17: Proceedings of the 26th International Joint Conference on Artificial Intelligence The Scientometrics Of AI Benchmarks: Unveiling The Underlying Mechanics Of Ai Research,,,2020,journalArticle,"Barredo, Pablo; Hernández-Orallo, José; Martınez-Plumed, F.; h Éigeartaigh, S. O.",Evaluating Progress in Artificial Intelligence (EPAI 2020). ECAI Multiverse-wide cooperation in a nutshell,"(Crossposted from the FRI blog.) This is a post I wrote about Caspar Oesterheld’s long paper Multiverse-wide cooperation via correlated decision-making. Because I have found the idea tricky to explain – which unfortunately makes it difficult to get feedback from others on whether the thinking behind it makes sense – I decided to write a shorter summary. While I am hoping that my text can serve as a standalone piece, for additional introductory content I also recommend reading the beginning of Caspar’s paper, or watching the short video introduction here (requires basic knowledge of the “CDT, EDT or something else” debate in decision theory). 0. ELEVATOR PITCH (Disclaimer: Especially for the elevator pitch section here, I am sacrificing accuracy and precision for brevity. References can be found in Caspar’s paper.) It would be an uncanny coincidence if the observable universe made up everything that exists. 
The reason we cannot find any evidence for there being stuff beyond the edges of our universe is not because it is likely that there is nothingness, but because photons from further away simply would not have had sufficient time after the big bang to reach us. This means that the universe we find ourselves in may well be vastly larger than what we can observe, in fact even infinitely larger. The theory of inflationary cosmology in addition hints at the existence of other universe bubbles with different fundamental constants forming or disappearing under certain conditions, somehow co-existing with our universe in parallel. The umbrella term multiverse captures the idea that the observable universe is just a tiny portion of everything that exists. The multiverse may contain myriads of worlds like ours, including other worlds with intelligent life and civilization. An infinite multiverse (of one sort or another) is actually amongst the most popular cosmological hypotheses, arguably even favored by the majority of experts. Many ethical theories (in particular",https://forum.effectivealtruism.org/posts/7MdLurJGhGmqRv25c/multiverse-wide-cooperation-in-a-nutshell,2017,blogPost,"Gloor, Lukas",Effective Altruism Forum Scaling Laws for Autoregressive Generative Modeling,"We identify empirical scaling laws for the cross-entropy loss in four domains: generative image modeling, video modeling, multimodal image$\leftrightarrow$text models, and mathematical problem solving. In all cases autoregressive Transformers smoothly improve in performance as model size and compute budgets increase, following a power-law plus constant scaling law. The optimal model size also depends on the compute budget through a power-law, with exponents that are nearly universal across all data domains. The cross-entropy loss has an information theoretic interpretation as $S($True$) + D_{\mathrm{KL}}($True$||$Model$)$, and the empirical scaling laws suggest a prediction for both the true data distribution's entropy and the KL divergence between the true and model distributions. With this interpretation, billion-parameter Transformers are nearly perfect models of the YFCC100M image distribution downsampled to an $8\times 8$ resolution, and we can forecast the model size needed to achieve any given reducible loss (ie $D_{\mathrm{KL}}$) in nats/image for other resolutions. We find a number of additional scaling laws in specific domains: (a) we identify a scaling relation for the mutual information between captions and images in multimodal models, and show how to answer the question ""Is a picture worth a thousand words?""; (b) in the case of mathematical problem solving, we identify scaling laws for model performance when extrapolating beyond the training distribution; (c) we finetune generative image models for ImageNet classification and find smooth scaling of the classification loss and error rate, even as the generative loss levels off. 
Taken together, these results strengthen the case that scaling laws have important implications for neural network performance, including on downstream tasks.",http://arxiv.org/abs/2010.14701,2020,manuscript,"Henighan, Tom; Kaplan, Jared; Katz, Mor; Chen, Mark; Hesse, Christopher; Jackson, Jacob; Jun, Heewoo; Brown, Tom B.; Dhariwal, Prafulla; Gray, Scott; Hallacy, Chris; Mann, Benjamin; Radford, Alec; Ramesh, Aditya; Ryder, Nick; Ziegler, Daniel M.; Schulman, John; Amodei, Dario; McCandlish, Sam", Safe Exploration in Continuous Action Spaces,"We address the problem of deploying a reinforcement learning (RL) agent on a physical system such as a datacenter cooling unit or robot, where critical constraints must never be violated. We show how to exploit the typically smooth dynamics of these systems and enable RL algorithms to never violate constraints during learning. Our technique is to directly add to the policy a safety layer that analytically solves an action correction formulation per each state. The novelty of obtaining an elegant closed-form solution is attained due to a linearized model, learned on past trajectories consisting of arbitrary actions. This is to mimic the real-world circumstances where data logs were generated with a behavior policy that is implausible to describe mathematically; such cases render the known safety-aware off-policy methods inapplicable. We demonstrate the efficacy of our approach on new representative physics-based environments, and prevail where reward shaping fails by maintaining zero constraint violations.",http://arxiv.org/abs/1801.08757,2018,manuscript,"Dalal, Gal; Dvijotham, Krishnamurthy; Vecerik, Matej; Hester, Todd; Paduraru, Cosmin; Tassa, Yuval", Reasons to Be Nice to Other Value Systems,"Several arguments support the heuristic that we should help groups holding different value systems from our own when doing so is cheap, unless those groups prove uncooperative to our values. This is true even if we don't directly care at all about other groups' value systems. Exactly how nice to be depends on the particulars of the situation.",https://longtermrisk.org/reasons-to-be-nice-to-other-value-systems/,2015,blogPost,"Tomasik, Brian",Center on Long-Term Risk Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning,"Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. 
Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.",http://arxiv.org/abs/1711.06782,2017,manuscript,"Eysenbach, Benjamin; Gu, Shixiang; Ibarz, Julian; Levine, Sergey", System 2 as working-memory augmented System 1 reasoning,"The terms System 1 and System 2 were originally coined by the psychologist Keith Stanovich and then popularized by Daniel Kahneman in his book Thinking, Fast and Slow. Stanovich noted that a number of fields within psychology had been developing various kinds of theories distinguishing between fast/intuitive on the one hand and slow/deliberative thinking on the other. Often these fields were not aware of each other. The S1/S2 model was offered as a general version of these specific theories, highlighting features of the two modes of thought that tended to appear in all the theories. Since then, academics have continued to discuss the models. Among other developments, Stanovich and other authors have discontinued the use of the System 1/System 2 terminology as misleading, choosing to instead talk about Type 1 and Type 2 processing. In this post, I will build on some of that discussion to argue that Type 2 processing is a particular way of chaining together the outputs of various subagents using working memory. Some of the processes involved in this chaining are themselves implemented by particular kinds of subagents. This post has three purposes: * Summarize some of the discussion about the dual process model that has taken place in recent years; in particular, the move to abandon the System 1/System 2 terminology. * Connect the framework of thought that I have been developing in my multi-agent minds sequence with dual-process models. * Push back on some popular interpretations of S1/S2 theory which I have been seeing on LW and other places, such as ones in which the two systems are viewed as entirely distinct, S1 is viewed as biased and S2 as logical, and ones in which it makes sense to identify more as one system or the other. Let’s start with looking at some criticism of the S1/S2 model endorsed by the person who coined the terms. WHAT TYPE 1/TYPE 2 PROCESSING IS NOT The terms “System 1 and System 2” suggest just that: two distinct, clear",https://www.lesswrong.com/posts/HbXXd2givHBBLxr3d/system-2-as-working-memory-augmented-system-1-reasoning,2019,blogPost,"Sotala, Kaj",LessWrong Universal Intelligence: A Definition of Machine Intelligence,,http://link.springer.com/10.1007/s11023-007-9079-x,2007,journalArticle,"Legg, Shane; Hutter, Marcus",Minds and Machines Immigration Policy and the Global Competition for AI Talent,"Current immigration policies may undermine the historic strength of the United States in attracting and retaining international AI talent. This report examines the immigration policies of four U.S. economic competitor nations—the United Kingdom, Canada, France, and Australia—to offer best practices for ensuring future AI competitiveness.",https://cset.georgetown.edu/research/immigration-policy-and-the-global-competition-for-ai-talent/,2020,report,"Huang, Tina; Arnold, Zachary", Hybrid Models with Deep and Invertible Features,"We propose a neural hybrid model consisting of a linear model defined on a set of features computed by a deep, invertible transformation (i.e. a normalizing flow). 
An attractive property of our model is that both p(features), the density of the features, and p(targets | features), the predictive distribution, can be computed exactly in a single feed-forward pass. We show that our hybrid model, despite the invertibility constraints, achieves similar accuracy to purely predictive models. Moreover the generative component remains a good model of the input features despite the hybrid optimization objective. This offers additional capabilities such as detection of out-of-distribution inputs and enabling semi-supervised learning. The availability of the exact joint density p(targets, features) also allows us to compute many quantities readily, making our hybrid model a useful building block for downstream applications of probabilistic deep learning.",http://arxiv.org/abs/1902.02767,2019,conferencePaper,"Nalisnick, Eric; Matsukawa, Akihiro; Teh, Yee Whye; Gorur, Dilan; Lakshminarayanan, Balaji","arXiv:1902.02767 [cs, stat]" Curiosity Killed the Cat and the Asymptotically Optimal Agent,"Reinforcement learners are agents that learn to pick actions that lead to high reward. Ideally, the value of a reinforcement learner's policy approaches optimality--where the optimal informed policy is the one which maximizes reward. Unfortunately, we show that if an agent is guaranteed to be ""asymptotically optimal"" in any (stochastically computable) environment, then subject to an assumption about the true environment, this agent will be either destroyed or incapacitated with probability 1; both of these are forms of traps as understood in the Markov Decision Process literature. Environments with traps pose a well-known problem for agents, but we are unaware of other work which shows that traps are not only a risk, but a certainty, for agents of a certain caliber. Much work in reinforcement learning uses an ergodicity assumption to avoid this problem. Often, doing theoretical research under simplifying assumptions prepares us to provide practical solutions even in the absence of those assumptions, but the ergodicity assumption in reinforcement learning may have led us entirely astray in preparing safe and effective exploration strategies for agents in dangerous environments. Rather than assuming away the problem, we present an agent with the modest guarantee of approaching the performance of a mentor, doing safe exploration instead of reckless exploration.",http://arxiv.org/abs/2006.03357,2020,manuscript,"Cohen, Michael K.; Hutter, Marcus", Asymmetric Actor Critic for Image-Based Robot Learning,"Deep reinforcement learning (RL) has proven a powerful technique in many sequential decision making domains. However, Robotics poses many challenges for RL, most notably training on a physical system can be expensive and dangerous, which has sparked significant interest in learning control policies using a physics simulator. While several recent works have shown promising results in transferring policies trained in simulation to the real world, they often do not fully utilize the advantage of working with a simulator. In this work, we exploit the full state observability in the simulator to train better policies which take as input only partial observations (RGBD images). We do this by employing an actor-critic training algorithm in which the critic is trained on full states while the actor (or policy) gets rendered images as input. We show experimentally on a range of simulated tasks that using these asymmetric inputs significantly improves performance. 
Finally, we combine this method with domain randomization and show real robot experiments for several tasks like picking, pushing, and moving a block. We achieve this simulation to real world transfer without training on any real world data.",http://www.roboticsproceedings.org/rss14/p08.pdf,2018,conferencePaper,"Pinto, Lerrel; Andrychowicz, Marcin; Welinder, Peter; Zaremba, Wojciech; Abbeel, Pieter",Robotics: Science and Systems XIV 2016 Expert Survey on Progress in AI,"Published June 2016; last substantial update before Oct 2017 The 2016 Expert Survey on Progress in AI is a survey of machine learning researchers that Katja Grace and John Salvatier of AI Impacts ran in collaboration with Allan Dafoe, Baobao Zhang, and Owain Evans in 2016. Details Some survey results are reported in When Will...",https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/,2016,blogPost,"Grace, Katja; Salvatier, John; Dafoe, Allan; Zhang, Baobao; Evans, Owain",AI Impacts CURL: Contrastive Unsupervised Representations for Reinforcement Learning,"We present CURL: Contrastive Unsupervised Representations for Reinforcement Learning. CURL extracts high-level features from raw pixels using contrastive learning and performs off-policy control on top of the extracted features. CURL outperforms prior pixel-based methods, both model-based and model-free, on complex tasks in the DeepMind Control Suite and Atari Games showing 1.9x and 1.2x performance gains at the 100K environment and interaction steps benchmarks respectively. On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency of methods that use state-based features. Our code is open-sourced and available at https://www.github.com/MishaLaskin/curl.",http://arxiv.org/abs/2004.04136,2020,conferencePaper,"Srinivas, Aravind; Laskin, Michael; Abbeel, Pieter",Proceedings of the 37th International Conference on Machine Learning Coherent Extrapolated Volition,,,2004,manuscript,"Yudkowsky, Eliezer", AGI Safety Literature Review,"The development of Artificial General Intelligence (AGI) promises to be a major event. Along with its many potential benefits, it also raises serious safety concerns (Bostrom, 2014). The intention of this paper is to provide an easily accessible and up-to-date collection of references for the emerging field of AGI safety. A significant number of safety problems for AGI have been identified. We list these, and survey recent research on solving them. We also cover works on how best to think of AGI from the limited knowledge we have today, predictions for when AGI will first be created, and what will happen after its creation. Finally, we review the current public policy on AGI.",http://arxiv.org/abs/1805.01109,2018,conferencePaper,"Everitt, Tom; Lea, Gary; Hutter, Marcus",Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence Mortal universal agents & wireheading,,https://www6.inrae.fr/mia-paris/Equipes/Membres/Anciens/Laurent-Orseau/Mortal-universal-agents-wireheading,2015,blogPost,"Orseau, Laurent",MIA Paris The average utilitarian’s solipsism wager,"The following prudential argument is relatively common in my circles: We probably live in a simulation, but if we don’t, our actions matter much more. 
Thus, expected value calculations are do…",https://casparoesterheld.com/2017/03/15/the-average-utilitarians-solipsism-wager/,2017,blogPost,Caspar,The Universe from an Intentional Stance Planning to Explore via Self-Supervised World Models,"Reinforcement learning allows solving complex tasks, however, the learning tends to be task-specific and the sample efficiency remains a challenge. We present Plan2Explore, a self-supervised reinforcement learning agent that tackles both these challenges through a new approach to self-supervised exploration and fast adaptation to new tasks, which need not be known during exploration. During exploration, unlike prior methods which retrospectively compute the novelty of observations after the agent has already reached them, our agent acts efficiently by leveraging planning to seek out expected future novelty. After exploration, the agent quickly adapts to multiple downstream tasks in a zero or a few-shot manner. We evaluate on challenging control tasks from high-dimensional image inputs. Without any training supervision or task-specific interaction, Plan2Explore outperforms prior self-supervised exploration methods, and in fact, almost matches the performances oracle which has access to rewards. Videos and code at https://ramanans1.github.io/plan2explore/",http://arxiv.org/abs/2005.05960,2020,conferencePaper,"Sekar, Ramanan; Rybkin, Oleh; Daniilidis, Kostas; Abbeel, Pieter; Hafner, Danijar; Pathak, Deepak",Proceedings of the 37th International Conference on Machine Learning Safe Reinforcement Learning with Model Uncertainty Estimates,"Many current autonomous systems are being designed with a strong reliance on black box predictions from deep neural networks (DNNs). However, DNNs tend to be overconfident in predictions on unseen data and can give unpredictable results for far-from-distribution test data. The importance of predictions that are robust to this distributional shift is evident for safety-critical applications, such as collision avoidance around pedestrians. Measures of model uncertainty can be used to identify unseen data, but the state-of-the-art extraction methods such as Bayesian neural networks are mostly intractable to compute. This paper uses MC-Dropout and Bootstrapping to give computationally tractable and parallelizable uncertainty estimates. The methods are embedded in a Safe Reinforcement Learning framework to form uncertainty-aware navigation around pedestrians. The result is a collision avoidance policy that knows what it does not know and cautiously avoids pedestrians that exhibit unseen behavior. The policy is demonstrated in simulation to be more robust to novel observations and take safer actions than an uncertainty-unaware baseline.",http://arxiv.org/abs/1810.08700,2019,conferencePaper,"Lütjens, Björn; Everett, Michael; How, Jonathan P.",arXiv:1810.08700 [cs] Exploring AI Futures Through Role Play,,https://dl.acm.org/doi/10.1145/3375627.3375817,2020,conferencePaper,"Avin, Shahar; Gruetzemacher, Ross; Fox, James","Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society" An Optimistic Perspective on Offline Reinforcement Learning,"Off-policy reinforcement learning (RL) using a fixed offline dataset of logged interactions is an important consideration in real world applications. This paper studies offline RL using the DQN replay dataset comprising the entire replay experience of a DQN agent on 60 Atari 2600 games. 
We demonstrate that recent off-policy deep RL algorithms, even when trained solely on this fixed dataset, outperform the fully trained DQN agent. To enhance generalization in the offline setting, we present Random Ensemble Mixture (REM), a robust Q-learning algorithm that enforces optimal Bellman consistency on random convex combinations of multiple Q-value estimates. Offline REM trained on the DQN replay dataset surpasses strong RL baselines. Ablation studies highlight the role of offline dataset size and diversity as well as the algorithm choice in our positive results. Overall, the results here present an optimistic view that robust RL algorithms trained on sufficiently large and diverse offline datasets can lead to high quality policies. The DQN replay dataset can serve as an offline RL benchmark and is open-sourced.",http://arxiv.org/abs/1907.04543,2020,conferencePaper,"Agarwal, Rishabh; Schuurmans, Dale; Norouzi, Mohammad","arXiv:1907.04543 [cs, stat]" The stabilization of environments,,https://linkinghub.elsevier.com/retrieve/pii/000437029400006M,1995,journalArticle,"Hammond, Kristian J.; Converse, Timothy M.; Grass, Joshua W.",Artificial Intelligence Suphx: Mastering Mahjong with Deep Reinforcement Learning,"Artificial Intelligence (AI) has achieved great success in many domains, and game AI is widely regarded as its beachhead since the dawn of AI. In recent years, studies on game AI have gradually evolved from relatively simple environments (e.g., perfect-information games such as Go, chess, shogi or two-player imperfect-information games such as heads-up Texas hold’em) to more complex ones (e.g., multi-player imperfect-information games such as multi-player Texas hold’em and StarCraft II). Mahjong is a popular multi-player imperfect-information game worldwide but very challenging for AI research due to its complex playing/scoring rules and rich hidden information. We design an AI for Mahjong, named Suphx, based on deep reinforcement learning with some newly introduced techniques including global reward prediction, oracle guiding, and run-time policy adaptation. Suphx has demonstrated stronger performance than most top human players in terms of stable rank and is rated above 99.99% of all the officially ranked human players in the Tenhou platform. This is the first time that a computer program outperforms most top human players in Mahjong.",http://arxiv.org/abs/2003.13590,2020,manuscript,"Li, Junjie; Koyamada, Sotetsu; Ye, Qiwei; Liu, Guoqing; Wang, Chao; Yang, Ruihan; Zhao, Li; Qin, Tao; Liu, Tie-Yan; Hon, Hsiao-Wuen", Towards a Human-like Open-Domain Chatbot,"We present Meena, a multi-turn open-domain chatbot trained end-to-end on data mined and filtered from public domain social media conversations. This 2.6B parameter neural network is simply trained to minimize perplexity of the next token. We also propose a human evaluation metric called Sensibleness and Specificity Average (SSA), which captures key elements of a human-like multi-turn conversation. Our experiments show strong correlation between perplexity and SSA. The fact that the best perplexity end-to-end trained Meena scores high on SSA (72% on multi-turn evaluation) suggests that a human-level SSA of 86% is potentially within reach if we can better optimize perplexity. 
Additionally, the full version of Meena (with a filtering mechanism and tuned decoding) scores 79% SSA, 23% higher in absolute SSA than the existing chatbots we evaluated.",http://arxiv.org/abs/2001.09977,2020,manuscript,"Adiwardana, Daniel; Luong, Minh-Thang; So, David R.; Hall, Jamie; Fiedel, Noah; Thoppilan, Romal; Yang, Zi; Kulshreshtha, Apoorv; Nemade, Gaurav; Lu, Yifeng; Le, Quoc V.", "Artificial intelligence: The future is superintelligent [Book review of ""Life 3.0: Being Human in the Age of Artificial Intelligence"" by Max Tegmark]",Stuart Russell weighs up a book on the risks and rewards of the AI revolution.,http://www.nature.com/articles/548520a,2017,journalArticle,"Russell, Stuart",Nature Historical and Technical Notes on Aqueducts from Prehistoric to Medieval Times,,http://www.mdpi.com/2073-4441/5/4/1996,2013,journalArticle,"De Feo, Giovanni; Angelakis, Andreas; Antoniou, Georgios; El-Gohary, Fatma; Haut, Benoît; Passchier, Cees; Zheng, Xiao",Water Gradient Surgery for Multi-Task Learning,"While deep learning and deep reinforcement learning (RL) systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge. Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks to enable more efficient learning. However, the multi-task setting presents a number of optimization challenges, making it difficult to realize large efficiency gains compared to learning tasks independently. The reasons why multi-task learning is so challenging compared to single-task learning are not fully understood. In this work, we identify a set of three conditions of the multi-task optimization landscape that cause detrimental gradient interference, and develop a simple yet general approach for avoiding such interference between task gradients. We propose a form of gradient surgery that projects a task’s gradient onto the normal plane of the gradient of any other task that has a conflicting gradient. On a series of challenging multi-task supervised and multi-task RL problems, this approach leads to substantial gains in efficiency and performance. Further, it is model-agnostic and can be combined with previously-proposed multitask architectures for enhanced performance.",http://arxiv.org/abs/2001.06782,2020,conferencePaper,"Yu, Tianhe; Kumar, Saurabh; Gupta, Abhishek; Levine, Sergey; Hausman, Karol; Finn, Chelsea","34th Conference on Neural Information Processing Systems (NeurIPS 2020)," Three characteristics: impermanence,"This is the sixth post of the ""a non-mystical explanation of the three characteristics of existence"" series. IMPERMANENCE Like no-self and unsatisfactoriness, impermanence seems like a label for a broad cluster of related phenomena. A one-sentence description of it, phrased in experiential terms, would be that “All experienced phenomena, whether physical or mental, inner or outer, are impermanent”. As an intellectual claim, this does not sound too surprising: few people would seriously think that either physical things or mental experiences last forever. However, there are ways in which impermanence does contradict our intuitive assumptions. A conventional example of this is change blindness. In a typical change blindness experiment, people report having good awareness of the details of a picture shown to them: but when details are changed during an eye saccade, subjects fail to notice any difference. 
Maybe a person’s hat looks red, and people who have been looking right at the hat fail to notice that it looked green just a second ago: the consciousness of the green-ness has vanished, replaced entirely with red. People are typically surprised by this, thinking that “if it was red a second ago, surely I would remember that” - a thought that implicitly assumes that sense percepts leave permanent memories behind. But as long as something does not explicitly store a piece of conscious information, it is gone as soon as it has been experienced. This is a natural consequence of the Global Neuronal Workspace (GNW) model of consciousness from neuroscience. As I have previously discussed, studies suggest that the content of consciousness corresponds to information held in a particular network of neurons called the "global workspace". This workspace can only hold a single piece of conscious content at a time, and new information is constantly trying to enter it, replacing the old information. Now if the content of your consciousness happens to be something like this:",https://www.lesswrong.com/posts/T8gD9mRDHnb2gyn9N/three-characteristics-impermanence,2020,blogPost,"Sotala, Kaj",LessWrong Preface to the sequence on value learning,"This is a meta-post about the upcoming sequence on Value Learning that will start to be published this Thursday. This preface will also be revised significantly once the second half of the sequence is fully written. PURPOSE OF THE SEQUENCE The first part of this sequence will be about the tractability of ambitious value learning, which is the idea of inferring a utility function for an AI system to optimize based on observing human behavior. After a short break, we will (hopefully) continue with the second part, which will be about why we might want to think about techniques that infer human preferences, even if we assume we won’t do ambitious value learning with such techniques. The aim of this part of the sequence is to gather the current best public writings on the topic, and provide a unifying narrative that ties them into a cohesive whole. This makes the key ideas more discoverable and discussable, and provides a quick reference for existing researchers. It is meant to teach the ideas surrounding one specific approach to aligning advanced AI systems. We’ll explore the specification problem, in which we would like to define the behavior we want to see from an AI system. Ambitious value learning is one potential avenue of attack on the specification problem, that assumes a particular model of an AI system (maximizing expected utility) and a particular source of data (human behavior). We will then delve into conceptual work on ambitious value learning that has revealed obstructions to this approach. There will be pointers to current research that aims to circumvent these obstructions. The second part of this sequence is currently being assembled, and this preface will be updated with details once it is ready. The first half of this sequence takes you near the cutting edge of conceptual work on the ambitious value learning problem, with some pointers to work being done at this frontier. 
Based on the arguments in the sequence, I am confident that the obvious f",https://www.alignmentforum.org/posts/oH8KMnXHnw964QyS6/preface-to-the-sequence-on-value-learning,2018,blogPost,"Shah, Rohin",AI Alignment Forum Logical Induction,"We present a computable algorithm that assigns probabilities to every logical statement in a given formal language, and refines those probabilities over time. For instance, if the language is Peano arithmetic, it assigns probabilities to all arithmetical statements, including claims about the twin prime conjecture, the outputs of long-running computations, and its own probabilities. We show that our algorithm, an instance of what we call a logical inductor, satisfies a number of intuitive desiderata, including: (1) it learns to predict patterns of truth and falsehood in logical statements, often long before having the resources to evaluate the statements, so long as the patterns can be written down in polynomial time; (2) it learns to use appropriate statistical summaries to predict sequences of statements whose truth values appear pseudorandom; and (3) it learns to have accurate beliefs about its own current beliefs, in a manner that avoids the standard paradoxes of self-reference. For example, if a given computer program only ever produces outputs in a certain range, a logical inductor learns this fact in a timely manner; and if late digits in the decimal expansion of $\pi$ are difficult to predict, then a logical inductor learns to assign $\approx 10\%$ probability to ""the $n$th digit of $\pi$ is a 7"" for large $n$. Logical inductors also learn to trust their future beliefs more than their current beliefs, and their beliefs are coherent in the limit (whenever $\phi \implies \psi$, $\mathbb{P}_\infty(\phi) \le \mathbb{P}_\infty(\psi)$, and so on); and logical inductors strictly dominate the universal semimeasure in the limit. These properties and many others all follow from a single logical induction criterion, which is motivated by a series of stock trading analogies. Roughly speaking, each logical sentence $\phi$ is associated with a stock that is worth \$1 per share if [...]",http://arxiv.org/abs/1609.03543,2017,manuscript,"Garrabrant, Scott; Benson-Tilsen, Tsvi; Critch, Andrew; Soares, Nate; Taylor, Jessica", Roadmap to a Roadmap: How Could We Tell When AGI is a ‘Manhattan Project’ Away?,"This paper argues that at a certain point in research toward AGI, the problem may become well-enough theorized that a clear roadmap exists for achieving it, such that a Manhattan Project-like effort could greatly shorten the time to completion. If state actors perceive that this threshold has been crossed, their incentives around openness and international cooperation may shift rather suddenly, with serious implications for AI risks and the stability of international AI governance regimes. 
The paper characterizes how such a ‘runway’ period would be qualitatively different from preceding stages of AI research, and accordingly proposes a research program aimed at assessing how close the field of AI is to such a threshold—that is, it calls for the formulation of a ‘roadmap to the roadmap.’",http://dmip.webs.upv.es/EPAI2020/papers/EPAI_2020_paper_11.pdf,2020,conferencePaper,"Levin, John-Clark; Maas, Matthijs M", Implicit extortion,"Extortion can be equally effective, and harder to notice, when you don’t tell the target it’s occurring.",https://ai-alignment.com/implicit-extortion-3c80c45af1e3,2018,blogPost,"Christiano, Paul",AI Alignment (Medium) AI Paradigms and AI Safety: Mapping Artefacts and Techniques to Safety Issues,"AI safety often analyses a risk or safety issue, such as interruptibility, under a particular AI paradigm, such as reinforcement learning. But what is an AI paradigm and how does it affect the understanding and implications of the safety issue? Is AI safety research covering the most representative paradigms and the right combinations of paradigms with safety issues? Will current research directions in AI safety be able to anticipate more capable and powerful systems yet to come? In this paper we analyse these questions, introducing a distinction between two types of paradigms in AI: artefacts and techniques. We then use experimental data of research and media documents from AI Topics, an official publication of the AAAI, to examine how safety research is distributed across artefacts and techniques. We observe that AI safety research is not sufficiently anticipatory, and is heavily weighted towards certain research paradigms. We identify a need for AI safety to be more explicit about the artefacts and techniques for which a particular issue may be applicable, in order to identify gaps and cover a broader range of issues.",,2020,conferencePaper,"Hernandez-Orallo, Jose; Martínez-Plumed, Fernando; Avin, Shahar; Whittlestone, Jess; Ó hÉigeartaigh, Seán",European Conference on Artificial Intelligence Verification of deep probabilistic models,"Probabilistic models are a critical part of the modern deep learning toolbox - ranging from generative models (VAEs, GANs), sequence to sequence models used in machine translation and speech processing to models over functional spaces (conditional neural processes, neural processes). Given the size and complexity of these models, safely deploying them in applications requires the development of tools to analyze their behavior rigorously and provide some guarantees that these models are consistent with a list of desirable properties or specifications. For example, a machine translation model should produce semantically equivalent outputs for innocuous changes in the input to the model. A functional regression model that is learning a distribution over monotonic functions should predict a larger value at a larger input. Verification of these properties requires a new framework that goes beyond notions of verification studied in deterministic feedforward networks, since requiring worst-case guarantees in probabilistic models is likely to produce conservative or vacuous results. 
We propose a novel formulation of verification for deep probabilistic models that take in conditioning inputs and sample latent variables in the course of producing an output: We require that the output of the model satisfies a linear constraint with high probability over the sampling of latent variables and for every choice of conditioning input to the model. We show that rigorous lower bounds on the probability that the constraint is satisfied can be obtained efficiently. Experiments with neural processes show that several properties of interest while modeling functional spaces can be modeled within this framework (monotonicity, convexity) and verified efficiently using our algorithms",http://arxiv.org/abs/1812.02795,2018,conferencePaper,"Dvijotham, Krishnamurthy; Garnelo, Marta; Fawzi, Alhussein; Kohli, Pushmeet","arXiv:1812.02795 [cs, stat]" Counterfactual Accuracies For Alternative Models,,,2020,conferencePaper,"Bhatt, Umang; Gummadi, Krishna; Zafar, Muhammad Bilal; Weller, Adrian", Heuristics for clueless agents: how to get away with ignoring what matters most in ordinary decision-making,"Even our most mundane decisions have the potential to significantly impact the long-term future, but we are often clueless about what this impact may be. In this paper, we aim to characterize and solve two problems raised by recent discussions of cluelessness, which we term the Problems of Decision Paralysis and the Problem of Decision-Making Demandingness. After reviewing and rejecting existing solutions to both problems, we argue that the way forward is to be found in the distinction between procedural and substantive rationality. Clueless agents have access to a variety of heuristic decision-making procedures which are often rational responses to the decision problems that they face. By simplifying or even ignoring information about potential long-term impacts, heuristics produce effective decisions without demanding too much of ordinary decision-makers. We outline two classes of problem features bearing on the rationality of decision-making procedures for clueless agents, and show how these features can be used to shed light on our motivating problems.",,2020,report,"Thorstad, David A; Mogensen, Andreas L", Information Acquisition Under Resource Limitations in a Noisy Environment,"We introduce a theoretical model of information acquisition under resource limitations in a noisy environment. An agent must guess the truth value of a given Boolean formula ϕ after performing a bounded number of noisy tests of the truth values of variables in the formula. We observe that, in general, the problem of finding an optimal testing strategy for ϕ is hard, but we suggest a useful heuristic. The techniques we use also give insight into two apparently unrelated, but well-studied problems: (1) rational inattention (the optimal strategy may involve hardly ever testing variables that are clearly relevant to ϕ) and (2) what makes a formula hard to learn/remember.",,2018,conferencePaper,"Soloviev, Matvey; Halpern, Joseph Y",Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18) The Rocket Alignment Problem,"The following is a fictional dialogue building off of AI Alignment: Why It’s Hard, and Where to Start.   (Somewhere in a not-very-near neighboring world, where science took a very different course…)   ALFONSO:  Hello, Beth. I’ve noticed a lot of speculations lately about “spaceplanes” being used to attack cities, or possibly becoming infused with malevolent... 
",https://intelligence.org/2018/10/03/rocket-alignment/,2018,blogPost,"Yudkowsky, Eliezer",Machine Intelligence Research Institute Imitation Learning from Imperfect Demonstration,"Imitation learning (IL) aims to learn an optimal policy from demonstrations. However, such demonstrations are often imperfect since collecting optimal ones is costly. To effectively learn from imperfect demonstrations, we propose a novel approach that utilizes confidence scores, which describe the quality of demonstrations. More specifically, we propose two confidence-based IL methods, namely two-step importance weighting IL (2IWIL) and generative adversarial IL with imperfect demonstration and confidence (IC-GAIL). We show that confidence scores given only to a small portion of sub-optimal demonstrations significantly improve the performance of IL both theoretically and empirically.",https://arxiv.org/abs/1901.09387v3,2019,conferencePaper,"Wu, Yueh-Hua; Charoenphakdee, Nontawat; Bao, Han; Tangkaratt, Voot; Sugiyama, Masashi",Proceedings of the 36th International Conference on Machine Learning Computational Extensive-Form Games,"We define solution concepts appropriate for computationally bounded players playing a fixed finite game. To do so, we need to define what it means for a \emph{computational game}, which is a sequence of games that get larger in some appropriate sense, to represent a single finite underlying extensive-form game. Roughly speaking, we require all the games in the sequence to have essentially the same structure as the underlying game, except that two histories that are indistinguishable (i.e., in the same information set) in the underlying game may correspond to histories that are only computationally indistinguishable in the computational game. We define a computational version of both Nash equilibrium and sequential equilibrium for computational games, and show that every Nash (resp., sequential) equilibrium in the underlying game corresponds to a computational Nash (resp., sequential) equilibrium in the computational game. One advantage of our approach is that if a cryptographic protocol represents an abstract game, then we can analyze its strategic behavior in the abstract game, and thus separate the cryptographic analysis of the protocol from the strategic analysis.",http://arxiv.org/abs/1506.03030,2016,conferencePaper,"Halpern, Joseph Y.; Pass, Rafael; Seeman, Lior",Proceedings of the 2016 ACM Conference on Economics and Computation On Learning Intrinsic Rewards for Policy Gradient Methods,"In many sequential decision making tasks, it is challenging to design reward functions that help an RL agent efficiently learn behavior that is considered good by the agent designer. A number of different formulations of the reward-design problem, or close variants thereof, have been proposed in the literature. In this paper we build on the Optimal Rewards Framework of Singh et.al. that defines the optimal intrinsic reward function as one that when used by an RL agent achieves behavior that optimizes the task-specifying or extrinsic reward function. Previous work in this framework has shown how good intrinsic reward functions can be learned for lookahead search based planning agents. Whether it is possible to learn intrinsic reward functions for learning agents remains an open problem. In this paper we derive a novel algorithm for learning intrinsic rewards for policy-gradient based learning agents. 
We compare the performance of an augmented agent that uses our algorithm to provide additive intrinsic rewards to an A2C-based policy learner (for Atari games) and a PPO-based policy learner (for Mujoco domains) with a baseline agent that uses the same policy learners but with only extrinsic rewards. Our results show improved performance on most but not all of the domains.",http://arxiv.org/abs/1804.06459,2018,conferencePaper,"Zheng, Zeyu; Oh, Junhyuk; Singh, Satinder",Advances in Neural Information Processing Systems 31 Learning Task Specifications from Demonstrations,"Real world applications often naturally decompose into several sub-tasks. In many settings (e.g., robotics) demonstrations provide a natural way to specify the sub-tasks. However, most methods for learning from demonstrations either do not provide guarantees that the artifacts learned for the sub-tasks can be safely recombined or limit the types of composition available. Motivated by this deficit, we consider the problem of inferring Boolean non-Markovian rewards (also known as logical trace properties or specifications) from demonstrations provided by an agent operating in an uncertain, stochastic environment. Crucially, specifications admit well-defined composition rules that are typically easy to interpret. In this paper, we formulate the specification inference task as a maximum a posteriori (MAP) probability inference problem, apply the principle of maximum entropy to derive an analytic demonstration likelihood model and give an efficient approach to search for the most likely specification in a large candidate pool of specifications. In our experiments, we demonstrate how learning specifications can help avoid common problems that often arise due to ad-hoc reward composition.",https://arxiv.org/abs/1710.03875v5,2018,conferencePaper,"Vazquez-Chanlatte, Marcell; Jha, Susmit; Tiwari, Ashish; Ho, Mark K.; Seshia, Sanjit A.",Advances in Neural Information Processing Systems 31 (NeurIPS 2018) Following human norms,"So far we have been talking about how to learn “values” or “instrumental goals”. This would be necessary if we want to figure out how to build an AI system that does exactly what we want it to do. However, we’re probably fine if we can keep learning and building better AI systems. This suggests that it’s sufficient to build AI systems that don’t screw up so badly that it ends this process. If we accomplish that, then steady progress in AI will eventually get us to AI systems that do what we want. So, it might be helpful to break down the problem of learning values into the subproblems of learning what to do, and learning what not to do. Standard AI research will continue to make progress on learning what to do; catastrophe happens when our AI system doesn’t know what not to do. This is the part that we need to make progress on. This is a problem that humans have to solve as well. Children learn basic norms such as not to litter, not to take other people’s things, what not to say in public, etc. As argued in Incomplete Contracting and AI alignment, any contract between humans is never explicitly spelled out, but instead relies on an external unwritten normative structure under which a contract is interpreted. (Even if we don’t explicitly ask our cleaner not to break any vases, we still expect them not to intentionally do so.) We might hope to build AI systems that infer and follow these norms, and thereby avoid catastrophe. 
It’s worth noting that this will probably not be an instance of narrow value learning, since there are several differences: * Narrow value learning requires that you learn what to do, unlike norm inference. * Norm following requires learning from a complex domain (human society), whereas narrow value learning can be applied in simpler domains as well. * Norms are a property of groups of agents, whereas narrow value learning can be applied in settings with a single agent. Despite this, I have included it in this sequence because it",https://www.alignmentforum.org/posts/eBd6WvzhuqduCkYv3/following-human-norms,2019,blogPost,"Shah, Rohin",AI Alignment Forum A Model for General Intelligence,"The overarching problem in artificial intelligence (AI) is that we do not understand the intelligence process well enough to enable the development of adequate computational models. Much work has been done in AI over the years at lower levels, but a big part of what has been missing involves the high level, abstract, general nature of intelligence. We address this gap by developing a model for general intelligence. To accomplish this, we focus on three basic aspects of intelligence. First, we must realize the general order and nature of intelligence at a high level. Second, we must come to know what these realizations mean with respect to the overall intelligence process. Third, we must describe these realizations as clearly as possible. We propose a hierarchical model to help capture and exploit the order within intelligence. The underlying order involves patterns of signals that become organized, stored and activated in space and time. These patterns can be described using a simple, general hierarchy, with physical signals at the lowest level, information in the middle, and abstract signal representations at the top. This high level perspective provides a big picture that literally helps us see the intelligence process, thereby enabling fundamental realizations, a better understanding and clear descriptions of the intelligence process. The resulting model can be used to support all kinds of information processing across multiple levels of abstraction. As computer technology improves, and as cooperation increases between humans and computers, people will become more efficient and more productive in performing their information processing tasks.",http://arxiv.org/abs/1811.02546,2018,manuscript,"Yaworsky, Paul", Conclusion to the sequence on value learning,"This post summarizes the sequence on value learning. While it doesn’t introduce any new ideas, it does shed light on which parts I would emphasize most, and the takeaways I hope that readers get. I make several strong claims here; interpret these as my impressions, not my beliefs. I would guess many researchers disagree with the (strength of the) claims, though I do not know what their arguments would be. Over the last three months we’ve covered a lot of ground. It’s easy to lose sight of the overall picture over such a long period of time, so let's do a brief recap. THE “OBVIOUS” APPROACH Here is an argument for the importance of AI safety: * Any agent that is much more intelligent than us should not be exploitable by us, since if we could find some way to exploit the agent, the agent could also find the exploit and patch it. * Anything that is not exploitable must be an expected utility maximizer; since we cannot exploit a superintelligent AI, it must look like an expected utility maximizer to us. 
* Due to Goodhart’s Law, even “slightly wrong” utility functions can lead to catastrophic outcomes when maximized. * Our utility function is complex and fragile, so getting the “right” utility function is difficult. This argument implies that by the time we have a superintelligent AI system, there is only one part of that system that could still have been influenced by us: the utility function. Every other feature of the AI system is fixed by math. As a result, we must necessarily solve AI alignment by influencing the utility function. So of course, the natural approach is to get the right utility function, or at least an adequate one, and have our AI system optimize that utility function. Besides fragility of value, which you might hope that machine learning could overcome, the big challenge is that even if you assume full access to the entire human policy, we cannot infer their values without making an assumption about how their preferences r",https://www.alignmentforum.org/posts/TE5nJ882s5dCMkBB8/conclusion-to-the-sequence-on-value-learning,2019,blogPost,"Shah, Rohin",AI Alignment Forum Transmitting fibers in the brain: Total length and distribution of lengths,"The human brain’s approximately 86 billion neurons are probably connected by something like 850,000 km of axons and dendrites. Of this total, roughly 80% is short-range, local connections (averaging 680 microns in length), and approximately 20% is long-range, global connections in the form of myelinated fibers (likely averaging several centimeters in length). Background The brain’s...",https://aiimpacts.org/transmitting-fibers-in-the-brain-total-length-and-distribution-of-lengths/,2018,blogPost,"McCaslin, Tegan",AI Impacts Lessons for Artificial Intelligence from Other Global Risks,"The prominence of artificial intelligence (AI) as a global risk is a relatively recent phenomenon. Other global risks have longer histories and larger bodies of scholarship. The study of these other risks can offer considerable insight to the study of AI risk. This paper examines four risks: biotechnology, nuclear weapons, global warming, and asteroid collision. Several overarching lessons are found. First, the extreme severity of global risks is often insufficient to motivate action to reduce the risks. Second, perceptions of global risks can be influenced by people’s incentives and by their cultural and intellectual orientations. Third, the success of efforts to address global risks can depend on the extent of buy-in from parties who may be negatively affected by the efforts. Fourth, global risks and risk reduction initiatives can be shaped by broader socio-political conditions, such as the degree of policy influence of private industry within a political jurisdiction. The paper shows how these and other lessons can inform efforts to reduce risks from AI.",,2019,bookSection,"Baum, Seth",The Global Politics of Artificial Intelligence Fast and Easy Infinitely Wide Networks with Neural Tangents,,http://ai.googleblog.com/2020/03/fast-and-easy-infinitely-wide-networks.html,2020,blogPost,"Schoenholz, Samuel S; Novak, Roman",Google AI Blog Two Neglected Problems in Human-AI Safety,"In this post I describe a couple of human-AI safety problems in more detail. These helped motivate my proposed hybrid approach, and I think need to be addressed by other AI safety approaches that currently do not take them into account. 1. How to prevent ""aligned"" AIs from unintentionally corrupting human values? 
We know that ML systems tend to have problems with adversarial examples and distributional shifts in general. There seems to be no reason not to expect that human value functions have similar problems, which even ""aligned"" AIs could trigger unless they are somehow designed not to. For example, such AIs could give humans so much power so quickly or put them in such novel situations that their moral development can't keep up, and their value systems no longer apply or give essentially random answers. AIs could give us new options that are irresistible to some parts of our motivational systems, like more powerful versions of video game and social media addiction. In the course of trying to figure out what we most want or like, they could in effect be searching for adversarial examples on our value functions. At our own request or in a sincere attempt to help us, they could generate philosophical or moral arguments that are wrong but extremely persuasive. (Some of these issues, like the invention of new addictions and new technologies in general, would happen even without AI, but I think AIs would likely, by default, strongly exacerbate the problem by differentially accelerating such technologies faster than progress in understanding how to safely handle them.) 2. How to defend against intentional attempts by AIs to corrupt human values? It looks like we may be headed towards a world of multiple AIs, some of which are either unaligned, or aligned to other owners or users. In such a world there's a strong incentive to use one's own AIs to manipulate other people's values in a direction that benefits oneself (even if the resulting loss to others are greater",https://www.alignmentforum.org/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety,2018,blogPost,"Dai, Wei",AI Alignment Forum A formal theory of inductive inference. Part I,,https://linkinghub.elsevier.com/retrieve/pii/S0019995864902232,1964,journalArticle,"Solomonoff, R.J.",Information and Control Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles,"Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. 
We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.",http://arxiv.org/abs/1612.01474,2017,conferencePaper,"Lakshminarayanan, Balaji; Pritzel, Alexander; Blundell, Charles","arXiv:1612.01474 [cs, stat]" Resilience to global food supply catastrophes,,http://link.springer.com/10.1007/s10669-015-9549-2,2015,journalArticle,"Baum, Seth D.; Denkenberger, David C.; Pearce, Joshua M.; Robock, Alan; Winkler, Richelle",Environment Systems and Decisions Two guarantees,"I suspect AI alignment should aim to separately establish good performance in the average case, and lack-of-malice in the worst case.",https://ai-alignment.com/two-guarantees-c4c03a6b434f,2018,blogPost,"Christiano, Paul",AI Alignment (Medium) "Intelligent Machinery, A Heretical Theory (c.1951)","Turing gave the presentation ‘Intelligent Machinery, A Heretical Theory’ on a radio discussion programme called The ’51 Society. Named after the year in which the programme first went to air, The ’51 Society was produced by the BBC Home Service at their Manchester studio and ran for several years. A presentation by the week’s guest would be followed by a panel discussion. Regulars on the panel included Max Newman, Professor of Mathematics at Manchester, the philosopher Michael Polanyi, then Professor of Social Studies at Manchester, and the mathematician Peter Hilton, a younger member of Newman’s department at Manchester who had worked with Turing and Newman at Bletchley Park. Turing’s target in ‘Intelligent Machinery, A Heretical Theory’ is the claim that ‘You cannot make a machine to think for you’ (p. 472). A common theme in his writing is that if a machine is to be intelligent, then it will need to ‘learn by experience’ (probably with some pre-selection, by an external educator, of the experiences to which the machine will be subjected). The present article continues the discussion of machine learning begun in Chapters 10 and 11. Turing remarks that the ‘human analogy alone’ suggests that a process of education ‘would in practice be an essential to the production of a reasonably intelligent machine within a reasonably short space of time’ (p. 473). He emphasizes the point, also made in Chapter 11, that one might ‘start from a comparatively simple machine, and, by subjecting it to a suitable range of ‘‘experience’’ transform it into one which was more elaborate, and was able to deal with a far greater range of contingencies’ (p. 473). Turing goes on to give some indication of how learning might be accomplished, introducing the idea of a machine’s building up what he calls ‘indexes of experiences’ (p. 474). (This idea is not mentioned elsewhere in his writings.) An example of an index of experiences is a list (ordered in some way) of situations in which the machine has found itself, coupled with the action that was taken, and the outcome, good or bad. The situations are described in terms of features.",https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198250791.001.0001/isbn-9780198250791-book-part-18,2004,bookSection,"Turing, Alan",The Essential Turing Discrete-Continuous Mixtures in Probabilistic Programming: Generalized Semantics and Inference Algorithms,"Despite the recent successes of probabilistic programming languages (PPLs) in AI applications, PPLs offer only limited support for random variables whose distributions combine discrete and continuous elements. 
We develop the notion of measure-theoretic Bayesian networks (MTBNs) and use it to provide more general semantics for PPLs with arbitrarily many random variables defined over arbitrary measure spaces. We develop two new general sampling algorithms that are provably correct under the MTBN framework: the lexicographic likelihood weighting (LLW) for general MTBNs and the lexicographic particle filter (LPF), a specialized algorithm for state-space models. We further integrate MTBNs into a widely used PPL system, BLOG, and verify the effectiveness of the new inference algorithms through representative examples.",http://arxiv.org/abs/1806.02027,2018,conferencePaper,"Wu, Yi; Srivastava, Siddharth; Hay, Nicholas; Du, Simon; Russell, Stuart",Proceedings of the 35th International Conference on Machine Learning Groupthink: Collective Delusions in Organizations and Markets,,https://academic.oup.com/restud/article-lookup/doi/10.1093/restud/rds030,2013,journalArticle,"Bénabou, Roland",The Review of Economic Studies Cold Case: The Lost MNIST Digits,"Although the popular MNIST dataset [LeCun et al., 1994] is derived from the NIST database [Grother and Hanaoka, 1995], the precise processing steps for this derivation have been lost to time. We propose a reconstruction that is accurate enough to serve as a replacement for the MNIST dataset, with insignificant changes in accuracy. We trace each MNIST digit to its NIST source and its rich metadata such as writer identifier, partition identifier, etc. We also reconstruct the complete MNIST test set with 60,000 samples instead of the usual 10,000. Since the balance 50,000 were never distributed, they can be used to investigate the impact of twenty-five years of MNIST experiments on the reported testing performances. Our limited results unambiguously confirm the trends observed by Recht et al. [2018, 2019]: although the misclassification rates are slightly off, classifier ordering and model selection remain broadly reliable. We attribute this phenomenon to the pairing benefits of comparing classifiers on the same digits.",http://arxiv.org/abs/1905.10498,2019,conferencePaper,"Yadav, Chhavi; Bottou, Léon",Advances in Neural Information Processing Systems 32 (NeurIPS 2019) Assessing Contributions of Major Emitters' Paris-Era Decisions to Future Temperature Extremes,"The likelihood and severity of high-impact future temperature extremes can be reduced through climate change mitigation efforts. However, meeting the Paris Agreement warming limits requires notably stronger greenhouse gas emissions reduction efforts by major emitters than existing pledges. We examine the impact of Paris-era decision-making by the world's three largest greenhouse gas emitters (EU, USA, and China) on projected future extreme temperature events. Country-level contributions to the occurrence of future temperature extremes are calculated based on current emissions policies and sequential mitigation efforts, using a new metric called the Contribution to Excess Risk Ratio. We demonstrate the Contribution concept by applying it to extreme monthly temperature projections. In many regions, future extremes depend on the current and future carbon dioxide emissions reductions adopted by major emitters. 
By implementing stronger Paris-era climate pledges, major emitters can reduce the frequency of future extremes and their own calculated contributions to these temperature extremes.",https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2018GL081608,2019,journalArticle,"Lewis, Sophie C.; Perkins‐Kirkpatrick, Sarah E.; Althor, Glenn; King, Andrew D.; Kemp, Luke",Geophysical Research Letters Neural-encoding Human Experts' Domain Knowledge to Warm Start Reinforcement Learning,"Deep reinforcement learning has been successful in a variety of tasks, such as game playing and robotic manipulation. However, attempting to learn \textit{tabula rasa} disregards the logical structure of many domains as well as the wealth of readily available knowledge from domain experts that could help ""warm start"" the learning process. We present a novel reinforcement learning technique that allows for intelligent initialization of a neural network's weights and architecture. Our approach permits the encoding of domain knowledge directly into a neural decision tree, and improves upon that knowledge with policy gradient updates. We empirically validate our approach on two OpenAI Gym tasks and two modified StarCraft 2 tasks, showing that our novel architecture outperforms multilayer-perceptron and recurrent architectures. Our knowledge-based framework finds superior policies compared to imitation learning-based and prior knowledge-based approaches. Importantly, we demonstrate that our approach can be used by untrained humans to initially provide >80% increase in expected reward relative to baselines prior to training (p < 0.001), which results in a >60% increase in expected reward after policy optimization (p = 0.011).",https://arxiv.org/abs/1902.06007v4,2019,manuscript,"Silva, Andrew; Gombolay, Matthew", How much could refuges help us recover from a global catastrophe?,,,2015,journalArticle,"Beckstead, Nick",Futures Sparse Graphical Memory for Robust Planning,"To operate effectively in the real world, agents should be able to act from high-dimensional raw sensory input such as images and achieve diverse goals across long time-horizons. Current deep reinforcement and imitation learning methods can learn directly from high-dimensional inputs but do not scale well to long-horizon tasks. In contrast, classical graphical methods like A* search are able to solve long-horizon tasks, but assume that the state space is abstracted away from raw sensory input. Recent works have attempted to combine the strengths of deep learning and classical planning; however, dominant methods in this domain are still quite brittle and scale poorly with the size of the environment. We introduce Sparse Graphical Memory (SGM), a new data structure that stores states and feasible transitions in a sparse memory. SGM aggregates states according to a novel two-way consistency objective, adapting classic state aggregation criteria to goal-conditioned RL: two states are redundant when they are interchangeable both as goals and as starting states. Theoretically, we prove that merging nodes according to two-way consistency leads to an increase in shortest path lengths that scales only linearly with the merging threshold. Experimentally, we show that SGM significantly outperforms current state of the art methods on long horizon, sparse-reward visual navigation tasks. 
Project video and code are available at https://mishalaskin.github.io/sgm/",http://arxiv.org/abs/2003.06417,2020,conferencePaper,"Emmons, Scott; Jain, Ajay; Laskin, Michael; Kurutach, Thanard; Abbeel, Pieter; Pathak, Deepak",Advances in Neural Information Processing Systems 33 Pre-proceedings Towards Learning Multi-agent Negotiations via Self-Play,"Making sophisticated, robust, and safe sequential decisions is at the heart of intelligent systems. This is especially critical for planning in complex multi-agent environments, where agents need to anticipate other agents’ intentions and possible future actions. Traditional methods formulate the problem as a Markov Decision Process, but the solutions often rely on various assumptions and become brittle when presented with corner cases. In contrast, deep reinforcement learning (Deep RL) has been very effective at finding policies by simultaneously exploring, interacting, and learning from environments. Leveraging the powerful Deep RL paradigm, we demonstrate that an iterative procedure of self-play can create progressively more diverse environments, leading to the learning of sophisticated and robust multi-agent policies. We demonstrate this in a challenging multi-agent simulation of merging traffic, where agents must interact and negotiate with others in order to successfully merge on or off the road. While the environment starts off simple, we increase its complexity by iteratively adding an increasingly diverse set of agents to the agent “zoo” as training progresses. Qualitatively, we find that through selfplay, our policies automatically learn interesting behaviors such as defensive driving, overtaking, yielding, and the use of signal lights to communicate intentions to other agents. In addition, quantitatively, we show a dramatic improvement of the success rate of merging maneuvers from 63% to over 98%.",http://arxiv.org/abs/2001.10208,2020,conferencePaper,"Tang, Yichuan Charlie",Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Human-aligned artificial intelligence is a multiobjective problem,"As the capabilities of artificial intelligence (AI) systems improve, it becomes important to constrain their actions to ensure their behaviour remains beneficial to humanity. A variety of ethical, legal and safety-based frameworks have been proposed as a basis for designing these constraints. Despite their variations, these frameworks share the common characteristic that decision-making must consider multiple potentially conflicting factors. We demonstrate that these alignment frameworks can be represented as utility functions, but that the widely used Maximum Expected Utility (MEU) paradigm provides insufficient support for such multiobjective decision-making. We show that a Multiobjective Maximum Expected Utility paradigm based on the combination of vector utilities and non-linear action–selection can overcome many of the issues which limit MEU’s effectiveness in implementing aligned AI. 
We examine existing approaches to multiobjective AI, and identify how these can contribute to the development of human-aligned intelligent agents.",https://doi.org/10.1007/s10676-017-9440-6,2018,journalArticle,"Vamplew, Peter; Dazeley, Richard; Foale, Cameron; Firmin, Sally; Mummery, Jane",Ethics and Information Technology Legibility and predictability of robot motion,,http://ieeexplore.ieee.org/document/6483603/,2013,conferencePaper,"Dragan, Anca D.; Lee, Kenton C.T.; Srinivasa, Siddhartha S.",2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI) A Unified Framework for Planning in Adversarial and Cooperative Environments,"Users of AI systems may rely upon them to produce plans for achieving desired objectives. Such AI systems should be able to compute obfuscated plans whose execution in adversarial situations protects privacy, as well as legible plans which are easy for team members to understand in cooperative situations. We develop a unified framework that addresses these dual problems by computing plans with a desired level of comprehensibility from the point of view of a partially informed observer. For adversarial settings, our approach produces obfuscated plans with observations that are consistent with at least k goals from a set of decoy goals. By slightly varying our framework, we present an approach for producing legible plans in cooperative settings such that the observation sequence projected by the plan is consistent with at most j goals from a set of confounding goals. In addition, we show how the observability of the observer can be controlled to either obfuscate or convey the actions in a plan when the goal is known to the observer. We present theoretical results on the complexity analysis of our approach. We also present an empirical evaluation to show the feasibility and usefulness of our approaches using IPC domains.",http://www.aaai.org/ojs/index.php/AAAI/article/view/4093,2019,conferencePaper,"Kulkarni, Anagha; Srivastava, Siddharth; Kambhampati, Subbarao",Proceedings of the AAAI Conference on Artificial Intelligence Using Natural Language for Reward Shaping in Reinforcement Learning,"Recent reinforcement learning (RL) approaches have shown strong performance in complex domains such as Atari games, but are often highly sample inefficient. A common approach to reduce interaction time with the environment is to use reward shaping, which involves carefully designing reward functions that provide the agent intermediate rewards for progress towards the goal. However, designing appropriate shaping rewards is known to be difficult as well as time-consuming. In this work, we address this problem by using natural language instructions to perform reward shaping. We propose the LanguagE-Action Reward Network (LEARN), a framework that maps free-form natural language instructions to intermediate rewards based on actions taken by the agent. These intermediate language-based rewards can seamlessly be integrated into any standard reinforcement learning algorithm. We experiment with Montezuma's Revenge from the Atari Learning Environment, a popular benchmark in RL. 
Our experiments on a diverse set of 15 tasks demonstrate that, for the same number of interactions with the environment, language-based rewards lead to successful completion of the task 60% more often on average, compared to learning without language.",http://arxiv.org/abs/1903.02020,2019,conferencePaper,"Goyal, Prasoon; Niekum, Scott; Mooney, Raymond J.",Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19) Scalable agent alignment via reward modeling: a research direction,"One obstacle to applying reinforcement learning algorithms to real-world problems is the lack of suitable reward functions. Designing such reward functions is difficult in part because the user only has an implicit understanding of the task objective. This gives rise to the agent alignment problem: how do we create agents that behave in accordance with the user's intentions? We outline a high-level research direction to solve the agent alignment problem centered around reward modeling: learning a reward function from interaction with the user and optimizing the learned reward function with reinforcement learning. We discuss the key challenges we expect to face when scaling reward modeling to complex and general domains, concrete approaches to mitigate these challenges, and ways to establish trust in the resulting agents.",http://arxiv.org/abs/1811.07871,2018,manuscript,"Leike, Jan; Krueger, David; Everitt, Tom; Martic, Miljan; Maini, Vishal; Legg, Shane", Automatic analysis of malware behavior using machine learning,,https://www.medra.org/servlet/aliasResolver?alias=iospress&doi=10.3233/JCS-2010-0410,2011,journalArticle,"Rieck, Konrad; Trinius, Philipp; Willems, Carsten; Holz, Thorsten",Journal of Computer Security Beyond lowest-warping cost action selection in trajectory transfer,"We consider the problem of learning from demonstrations to manipulate deformable objects. Recent work [1], [2], [3] has shown promising results that enable robotic manipulation of deformable objects through learning from demonstrations. Their approach is able to generalize from a single demonstration to new test situations, and suggests a nearest neighbor approach to select a demonstration to adapt to a given test situation. Such a nearest neighbor approach, however, ignores important aspects of the problem: brittleness (versus robustness) of demonstrations when generalized through this process, and the extent to which a demonstration makes progress towards a goal.",http://ieeexplore.ieee.org/document/7139644/,2015,conferencePaper,"Hadfield-Menell, Dylan; Lee, Alex X.; Finn, Chelsea; Tzeng, Eric; Huang, Sandy; Abbeel, Pieter",2015 IEEE International Conference on Robotics and Automation (ICRA) Toward Idealized Decision Theory,"This paper motivates the study of decision theory as necessary for aligning smarter-than-human artificial systems with human interests. We discuss the shortcomings of two standard formulations of decision theory, and demonstrate that they cannot be used to describe an idealized decision procedure suitable for approximation by artificial systems. 
We then explore the notions of policy selection and logical counterfactuals, two recent insights into decision theory that point the way toward promising paths for future research.",http://arxiv.org/abs/1507.01986,2015,manuscript,"Soares, Nate; Fallenstein, Benja", Multi-agent Inverse Reinforcement Learning for Certain General-sum Stochastic Games,"This paper addresses the problem of multi-agent inverse reinforcement learning (MIRL) in a two-player general-sum stochastic game framework. Five variants of MIRL are considered: uCS-MIRL, advE-MIRL, cooE-MIRL, uCE-MIRL, and uNE-MIRL, each distinguished by its solution concept. Problem uCS-MIRL is a cooperative game in which the agents employ cooperative strategies that aim to maximize the total game value. In problem uCE-MIRL, agents are assumed to follow strategies that constitute a correlated equilibrium while maximizing total game value. Problem uNE-MIRL is similar to uCE-MIRL in total game value maximization, but it is assumed that the agents are playing a Nash equilibrium. Problems advE-MIRL and cooE-MIRL assume agents are playing an adversarial equilibrium and a coordination equilibrium, respectively. We propose novel approaches to address these five problems under the assumption that the game observer either knows or is able to accurately estimate the policies and solution concepts for players. For uCS-MIRL, we first develop a characteristic set of solutions ensuring that the observed bi-policy is a uCS and then apply a Bayesian inverse learning method. For uCE-MIRL, we develop a linear programming problem subject to constraints that define necessary and sufficient conditions for the observed policies to be correlated equilibria. The objective is to choose a solution that not only minimizes the total game value difference between the observed bi-policy and a local uCS, but also maximizes the scale of the solution. We apply a similar treatment to the problem of uNE-MIRL. The remaining two problems can be solved efficiently by taking advantage of solution uniqueness and setting up a convex optimization problem. Results are validated on various benchmark grid-world games.",http://arxiv.org/abs/1806.09795,2019,journalArticle,"Lin, Xiaomin; Adams, Stephen C.; Beling, Peter A.",Journal of Artificial Intelligence Research Dynamic inconsistency of the inaction and initial state baseline,"Vika has been posting about various baseline choices for impact measure. In this post, I'll argue that the stepwise inaction baseline is dynamically inconsistent/time-inconsistent. Informally, what this means is that an agent will have different preferences from its future self. LOSSES FROM TIME-INCONSISTENCY Why is time-inconsistency bad? It's because it allows money-pump situations: the environment can extract free reward from the agent, to no advantage to that agent. Or, put more formally: * An agent A is time-inconsistent between times t and t′>t, if at time t it would pay a positive amount of reward to constrain its possible choices at time t′. Outside of anthropics and game theory, we expect our agent to be time-consistent. TIME INCONSISTENCY EXAMPLE Consider the following example: The robot can move in all four directions - N, E, S, W - and can also take the noop operation, ∅. The discount rate is γ<1. It gets a reward of r>0 for standing on the blue button for the first time. 
Using attainable utility preservation, the penalty function is defined by the auxiliary set R; here, this just consists of the reward function that gives p>0 for standing on the red button for the first time. Therefore if the robot moves from a point n steps away from the red button, to one m steps away, it gets a penalty[1] of p|γ^n − γ^m| - the difference between the expected red-button rewards for an optimiser in both positions. TWO PATHS It's pretty clear there are two potentially optimal paths the robot can take: going straight to the blue button (higher reward, but higher penalty), or taking the long way round (lower reward, but lower penalty): Fortunately, when summing up the penalties, you sum terms like …p|γ^(n−1) − γ^n| + p|γ^n − γ^(n+1)|…, so a lot of the terms cancel. Thus for the short route, the reward is r⋅γ^8 (distance of eight to the blue button) and the penalty is 2p(γ^3 − γ^7) (closest to the red button: 3 squares, furthest: 7 squares). For the long route, the rewar",https://www.alignmentforum.org/posts/w8QBmgQwb83vDMXoz/dynamic-inconsistency-of-the-inaction-and-initial-state,2020,blogPost,"Armstrong, Stuart",AI Alignment Forum Modeling the Human Trajectory,"In arriving at our funding priorities---including criminal justice reform, farm animal welfare, pandemic preparedness, health-related science, and artificial intelligence safety---Open Philanthropy has pondered profound questions. How much should we care about people who will live far in the",https://www.openphilanthropy.org/blog/modeling-human-trajectory,2020,blogPost,"Roodman, David",Open Philanthropy Inferring and assisting with constraints in shared autonomy,"Our goal is to enable robots to better assist people with motor impairments in day-to-day tasks. Currently, such robots are teleoperated, which is tedious. It requires carefully maneuvering the robot by providing input through some interface. This is further complicated because most tasks are filled with constraints, e.g. on how much the end effector can tilt before the glass that the robot is carrying spills. Satisfying these constraints can be difficult or even impossible with the latency, bandwidth, and resolution of the input interface. We seek to make operating these robots more efficient and reduce cognitive load on the operator. Given that manipulation research is not advanced enough to make these robots autonomous in the near term, achieving this goal requires finding aspects of these tasks that are difficult for human operators to achieve, but easy to automate with current capabilities. We propose constraints are the key: maintaining task constraints is the most difficult part of the task for operators, yet it is easy to do autonomously. We introduce a method for inferring constraints from operator input, along with a confidence-based way of assisting the user in maintaining them, and evaluate in a user study.",,2016,conferencePaper,"Mehr, N.; Horowitz, R.; Dragan, A. 
D.",2016 IEEE 55th Conference on Decision and Control (CDC) The complexity of agreement,,http://portal.acm.org/citation.cfm?doid=1060590.1060686,2005,conferencePaper,"Aaronson, Scott",Proceedings of the thirty-seventh annual ACM symposium on Theory of computing - STOC '05 "Introduction: Creativity, Conservatism & the Social Epistemology of Science",,http://philsci-archive.pitt.edu/15066/,2018,journalArticle,"Currie, Adrian","Creativity, Conservatism & the Social Epistemology of Science" Provable defenses against adversarial examples via the convex outer adversarial polytope,,,2018,conferencePaper,"Wong, Eric; Kolter, Zico",International Conference on Machine Learning Will AI undergo discontinuous progress?,"This post grew out of conversations with several people, including Daniel Kokotajlo, grue_slinky and Linda Lisefors, and is based in large part on a collection of scattered comments and blog-posts across lesswrong, along with some podcast interviews - e.g. here. The in-text links near quotes will take you to my sources. I am attempting to distinguish two possibilities which are often run together - that progress in AI towards AGI (‘takeoff’) will be discontinuous and that it will be fast, but continuous. Resolving this distinction also addresses the claim that there has been a significant shift in arguments for AI presenting an existential risk: from older arguments discussing an ultra-fast intelligence explosion occurring in a single ‘seed AI’ to more moderate scenarios. I argue that the ‘shift in arguments on AI safety’ is not a total change in basic assumptions (which some observers have claimed) but just a reduction in confidence about a specifically discontinuous takeoff. Finally, I try to explicitly operationalize the practical differences between discontinuous takeoff and fast, continuous takeoff. Further Reading Summary: Why AI risk might be solved without additional intervention from Longtermists Paul Christiano’s original post MIRIs Thoughts on Discontinuous takeoff Misconceptions about continuous takeoff AI Impacts original post Soft Takeoff can still lead to Decisive Strategic Advantage DEFINING DISCONTINUOUS PROGRESS What do I mean by ‘discontinuous’? If we were to graph world GDP over the last 10,000 years, it fits onto a hyperbolic growth pattern. We could call this ‘continuous’ since it is following a single trend, or we could call it ‘discontinuous’ because, on the scale of millennia, the industrial revolution exploded out of nowhere. I will call these sorts of hyperbolic trends ‘continuous, but fast’, in line with Paul Christiano, who argued for continuous takeoff, defining it this way: AI is just another, faster step in the hyperbolic g",https://www.alignmentforum.org/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress,2020,blogPost,"Martin, Sammy",AI Alignment Forum Emergence of Grounded Compositional Language in Multi-Agent Populations,"By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. 
This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.",http://arxiv.org/abs/1703.04908,2018,manuscript,"Mordatch, Igor; Abbeel, Pieter", Adversarial Attacks and Defences Competition,"To accelerate research on adversarial examples and robustness of machine learning classifiers, Google Brain organized a NIPS 2017 competition that encouraged researchers to develop new methods to generate adversarial examples as well as to develop new ways to defend against them. In this chapter, we describe the structure and organization of the competition and the solutions developed by several of the top-placing teams.",http://arxiv.org/abs/1804.00097,2018,conferencePaper,"Kurakin, Alexey; Goodfellow, Ian; Bengio, Samy; Dong, Yinpeng; Liao, Fangzhou; Liang, Ming; Pang, Tianyu; Zhu, Jun; Hu, Xiaolin; Xie, Cihang; Wang, Jianyu; Zhang, Zhishuai; Ren, Zhou; Yuille, Alan; Huang, Sangxia; Zhao, Yao; Zhao, Yuzhe; Han, Zhonglin; Long, Junjiajia; Berdibekov, Yerkebulan; Akiba, Takuya; Tokui, Seiya; Abe, Motoki",The NIPS '17 Competition: Building Intelligent Systems Building Thinking Machines by Solving Animal Cognition Tasks,"In ‘Computing Machinery and Intelligence’, Turing, sceptical of the question ‘Can machines think?’, quickly replaces it with an experimentally verifiable test: the imitation game. I suggest that for such a move to be successful the test needs to be relevant, expansive, solvable by exemplars, unpredictable, and lead to actionable research. The Imitation Game is only partially successful in this regard and its reliance on language, whilst insightful for partially solving the problem, has put AI progress on the wrong foot, prescribing a top-down approach for building thinking machines. I argue that to fix shortcomings with modern AI systems a nonverbal operationalisation is required. This is provided by the recent Animal-AI Testbed, which translates animal cognition tests for AI and provides a bottom-up research pathway for building thinking machines that create predictive models of their environment from sensory input.",http://link.springer.com/10.1007/s11023-020-09535-6,2020,journalArticle,"Crosby, Matthew",Minds and Machines The Flash Crash: High-Frequency Trading in an Electronic Market: The Flash Crash,,http://doi.wiley.com/10.1111/jofi.12498,2017,journalArticle,"Kirilenko, Andrei; Kyle, Albert S.; Samadi, Mehrdad; Tuzun, Tugkan",The Journal of Finance Big Self-Supervised Models are Strong Semi-Supervised Learners,"One paradigm for learning from few labeled examples while making best use of a large amount of unlabeled data is unsupervised pretraining followed by supervised fine-tuning. Although this paradigm uses unlabeled data in a task-agnostic way, in contrast to most previous approaches to semi-supervised learning for computer vision, we show that it is surprisingly effective for semi-supervised learning on ImageNet. A key ingredient of our approach is the use of a big (deep and wide) network during pretraining and fine-tuning. We find that, the fewer the labels, the more this approach (task-agnostic use of unlabeled data) benefits from a bigger network. 
After fine-tuning, the big network can be further improved and distilled into a much smaller one with little loss in classification accuracy by using the unlabeled examples for a second time, but in a task-specific way. The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2 (a modification of SimCLR), supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge. This procedure achieves 73.9\% ImageNet top-1 accuracy with just 1\% of the labels ($\le$13 labeled images per class) using ResNet-50, a $10\times$ improvement in label efficiency over the previous state-of-the-art. With 10\% of labels, ResNet-50 trained with our method achieves 77.5\% top-1 accuracy, outperforming standard supervised training with all of the labels.",http://arxiv.org/abs/2006.10029,2020,conferencePaper,"Chen, Ting; Kornblith, Simon; Swersky, Kevin; Norouzi, Mohammad; Hinton, Geoffrey","34th Conference on Neural Information Processing Systems (NeurIPS 2020)," Consequences of Misaligned AI,AI systems often rely on two key components: a specified goal or reward function and an optimization algorithm to compute the optimal behavior for that goal. This approach is intended to provide value for a principal: the user on whose behalf the agent acts. The objectives given to these agents often refer to a partial specification of the principal’s goals. We consider the cost of this incompleteness by analyzing a model of a principal and an agent in a resource constrained world where the L attributes of the state correspond to different sources of utility for the principal. We assume that the reward function given to the agent only has support on J < L attributes. The contributions of our paper are as follows: 1) we propose a novel model of an incomplete principal—agent problem from artificial intelligence; 2) we provide necessary and sufficient conditions under which indefinitely optimizing for any incomplete proxy objective leads to arbitrarily low overall utility; and 3) we show how modifying the setup to allow reward functions that reference the full state or allowing the principal to update the proxy objective over time can lead to higher utility solutions. The results in this paper argue that we should view the design of reward functions as an interactive and dynamic process and identifies a theoretical scenario where some degree of interactivity is desirable.,,2020,conferencePaper,"Zhuang, Simon; Hadfield-Menell, Dylan",Advances in Neural Information Processing Systems 33 (2020) Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery,"Reinforcement learning requires manual specification of a reward function to learn a task. While in principle this reward function only needs to specify the task goal, in practice reinforcement learning can be very time-consuming or even infeasible unless the reward function is shaped so as to provide a smooth gradient towards a successful outcome. This shaping is difficult to specify by hand, particularly when the task is learned from raw observations, such as images. In this paper, we study how we can automatically learn dynamical distances: a measure of the expected number of time steps to reach a given goal state from any other state. These dynamical distances can be used to provide well-shaped reward functions for reaching new goals, making it possible to learn complex tasks efficiently. 
We show that dynamical distances can be used in a semi-supervised regime, where unsupervised interaction with the environment is used to learn the dynamical distances, while a small amount of preference supervision is used to determine the task goal, without any manually engineered reward function or goal examples. We evaluate our method both on a real-world robot and in simulation. We show that our method can learn to turn a valve with a real-world 9-DoF hand, using raw image observations and just ten preference labels, without any other supervision. Videos of the learned skills can be found on the project website: https://sites.google.com/view/dynamical-distance-learning.",http://arxiv.org/abs/1907.08225,2020,manuscript,"Hartikainen, Kristian; Geng, Xinyang; Haarnoja, Tuomas; Levine, Sergey", Alignment as Translation,"Technology Changes Constraints argues that economic constraints are usually modular with respect to technology changes - so for reasoning about technology changes, it’s useful to cast them in terms of economic constraints. Two constraints we’ll talk about here: * Compute - flops, memory, etc. * Information - sensors, data, etc. Thanks to ongoing technology changes, both of these constraints are becoming more and more slack over time - compute and information are both increasingly abundant and cheap. Immediate question: what happens in the limit as the prices of both compute and information go to zero? Essentially, we get omniscience: our software has access to a perfect, microscopically-detailed model of the real world. Computers have the memory and processing capability to run arbitrary queries on that model, and predictions are near-perfectly accurate (modulo quantum noise). This limit applies even without AGI - as compute and information become more abundant, our software approaches omniscience, even limiting ourselves to special-purpose reasoning algorithms. Of course, AGI would presumably be closer to omniscience than non-AGI algorithms, at the same level of compute/information. It would be able to more accurately predict more things which aren’t directly observable via available sensors, and it would be able to run larger queries with the same amount of compute. (How much closer to omniscience an AGI would get is an open question, but it would at least not be any worse in a big-O sense.) Next question: as compute and information constraints slacken, which constraints become taut? What new bottlenecks appear, for problems which were previously bottlenecked on compute/information? To put it differently: if our software can run arbitrary queries on an accurate, arbitrarily precise low-level model of the physical world, what else do we need in order to get value out of that capability? Well, mainly we need some way to specify what it is that we want. We",https://www.alignmentforum.org/posts/42YykiTqtGMyJAjDM/alignment-as-translation,2020,blogPost,"Wentworth, John S",AI Alignment Forum Understanding Agent Incentives using Causal Influence Diagrams. Part I: Single Action Settings,"Agents are systems that optimize an objective function in an environment. Together, the goal and the environment induce secondary objectives, incentives. Modeling the agent-environment interaction using causal influence diagrams, we can answer two fundamental questions about an agent's incentives directly from the graph: (1) which nodes can the agent have an incentive to observe, and (2) which nodes can the agent have an incentive to control?
The answers tell us which information and influence points need extra protection. For example, we may want a classifier for job applications to not use the ethnicity of the candidate, and a reinforcement learning agent not to take direct control of its reward mechanism. Different algorithms and training paradigms can lead to different causal influence diagrams, so our method can be used to identify algorithms with problematic incentives and help in designing algorithms with better incentives.",http://arxiv.org/abs/1902.09980,2019,manuscript,"Everitt, Tom; Ortega, Pedro A.; Barnes, Elizabeth; Legg, Shane", Shielded Decision-Making in MDPs,"A prominent problem in artificial intelligence and machine learning is the safe exploration of an environment. In particular, reinforcement learning is a well-known technique to determine optimal policies for complicated dynamic systems, but suffers from the fact that such policies may induce harmful behavior. We present the concept of a shield that forces decision-making to provably adhere to safety requirements with high probability. Our method exploits the inherent uncertainties in scenarios given by Markov decision processes. We present a method to compute probabilities of decision making regarding temporal logic constraints. We use that information to realize a shield that---when applied to a reinforcement learning algorithm---ensures (near-)optimal behavior both for the safety constraints and for the actual learning objective. In our experiments, we show on the arcade game PAC-MAN that the learning efficiency increases as the learning needs orders of magnitude fewer episodes. We show tradeoffs between sufficient progress in exploration of the environment and ensuring strict safety.",,2018,book,"Jansen, Nils; Könighofer, Bettina; Junges, Sebastian; Bloem, Roderick", Modeling Agents with Probabilistic Programs,,,2017,book,"Evans, Owain; Stuhlmüller, Andreas; Salvatier, John; Filan, Daniel", "We, Borg: Speculations on hive minds as a posthuman state",,,2015,blogPost,"Sandberg, Anders",aleph.se A Dialogue on Suffering Subroutines,"This piece presents a hypothetical dialogue that explains why instrumental computational processes of a future superintelligence might evoke moral concern. Generally, agent-like components might emerge in many places, including the computing processes of a future civilization. Whether and how much these subroutines matter are questions for future generations to figure out, but it's good to keep an open mind to the possibility that our intuitions about what suffering is may change dramatically.",https://longtermrisk.org/a-dialogue-on-suffering-subroutines/,2015,blogPost,"Tomasik, Brian",Center on Long-Term Risk Model splintering: moving from one imperfect model to another,"1. THE BIG PROBLEM In the last few months, I've become convinced that there is a key meta-issue in AI safety; a problem that seems to come up in all sorts of areas. It's hard to summarise, but my best phrasing would be: * Many problems in AI safety seem to be variations of ""this approach seems safe in this imperfect model, but when we generalise the model more, it becomes dangerously underdefined"". Call this model splintering. * It is intrinsically worth studying how to (safely) transition from one imperfect model to another. This is worth doing, independently of whatever ""perfect"" or ""ideal"" model might be in the background of the imperfect models. 
This sprawling post will be presenting examples of model splintering, arguments for its importance, a formal setting allowing us to talk about it, and some uses we can put this setting to. 1.1 IN THE LANGUAGE OF TRADITIONAL ML In the language of traditional ML, we could connect all these issues to "" out-of-distribution"" behaviour. This is the problems that algorithms encounter when the set they are operating on is drawn from a different distribution than the training set they were trained on. Humans can often see that the algorithm is out-of-distribution and correct it, because we have a more general distribution in mind than the one the algorithm was trained on. In these terms, the issues of this post can be phrased as: 1. When the AI finds itself mildly out-of-distribution, how best can it extend its prior knowledge to the new situation? 2. What should the AI do if it finds itself strongly out-of-distribution? 3. What should the AI do if it finds itself strongly out-of-distribution, and humans don't know the correct distribution either? 1.2 MODEL SPLINTERING EXAMPLES Let's build a more general framework. Say that you start with some brilliant idea for AI safety/alignment/effectiveness. This idea is phrased in some (imperfect) model. Then ""model splintering"" happens when you or the AI",https://www.alignmentforum.org/posts/k54rgSg7GcjtXnMHX/model-splintering-moving-from-one-imperfect-model-to-another-1,2020,blogPost,"Armstrong, Stuart",AI Alignment Forum Occam's razor is insufficient to infer the preferences of irrational agents,"Inverse reinforcement learning (IRL) attempts to infer human rewards or preferences from observed behavior. Since human planning systematically deviates from rationality, several approaches have been tried to account for specific human shortcomings. However, the general problem of inferring the reward function of an agent of unknown rationality has received little attention. Unlike the well-known ambiguity problems in IRL, this one is practically relevant but cannot be resolved by observing the agent’s policy in enough environments. This paper shows (1) that a No Free Lunch result implies it is impossible to uniquely decompose a policy into a planning algorithm and reward function, and (2) that even with a reasonable simplicity prior/Occam’s razor on the set of decompositions, we cannot distinguish between the true decomposition and others that lead to high regret. To address this, we need simple ‘normative’ assumptions, which cannot be deduced exclusively from observations.",,2018,conferencePaper,"Armstrong, Stuart; Mindermann, Sören",Advances in Neural Information Processing Systems Our driverless dilemma,"Suppose that a driverless car is headed toward five pedestrians. It can stay on course and kill them or swerve into a concrete wall, killing its passenger. On page 1573 of this issue, Bonnefon et al. (1) explore this social dilemma in a series of clever survey experiments. They show that people generally approve of cars programmed to minimize the total amount of harm, even at the expense of their passengers, but are not enthusiastic about riding in such “utilitarian” cars—that is, autonomous vehicles that are, in certain emergency situations, programmed to sacrifice their passengers for the greater good. Such dilemmas may arise infrequently, but once millions of autonomous vehicles are on the road, the improbable becomes probable, perhaps even inevitable. And even if such cases never arise, autonomous vehicles must be programmed to handle them. 
How should they be programmed? And who should decide? When should your car be willing to kill you?",https://science.sciencemag.org/content/352/6293/1514,2016,journalArticle,"Greene, Joshua D.",Science How the Simulation Argument Dampens Future Fanaticism,"Some effective altruists assume that most of the expected impact of our actions comes from how we influence the very long-term future of Earth-originating intelligence over the coming ∼billions of years. According to this view, helping humans and animals in the short term matters, but it mainly only matters via effects on far-future outcomes.",,2016,manuscript,"Tomasik, Brian", The Liability Problem for Autonomous Artificial Agents,"This paper describes and frames a central ethical issue–the liability problem–facing the regulation of artificial computational agents, including artificial intelligence (AI) and robotic systems, as they become increasingly autonomous, and supersede current capabilities. While it frames the issue in legal terms of liability and culpability, these terms are deeply imbued and interconnected with their ethical and moral correlate– responsibility. In order for society to benefit from advances in AI technology, it will be necessary to develop regulatory policies which manage the risk and liability of deploying systems with increasingly autonomous capabilities. However, current approaches to liability have difficulties when it comes to dealing with autonomous artificial agents because their behavior may be unpredictable to those who create and deploy them, and they will not be proper legal or moral agents. This problem is the motivation for a research project that will explore the fundamental concepts of autonomy, agency and liability; clarify the different varieties of agency that artificial systems might realize, including causal, legal and moral; and illuminate the relationships between these. The paper will frame the problem of liability in autonomous agents, sketch out its relation to fundamental concepts in human legal and moral agency– including autonomy, agency, causation, intention, responsibility and culpability–and their applicability or inapplicability to autonomous artificial agents.",,2016,conferencePaper,"Asaro, Peter M", Microscopy Cell Segmentation via Adversarial Neural Networks,"We present a novel method for cell segmentation in microscopy images which is inspired by the Generative Adversarial Neural Network (GAN) approach. Our framework is built on a pair of two competitive artificial neural networks, with a unique architecture, termed Rib Cage, which are trained simultaneously and together define a min-max game resulting in an accurate segmentation of a given image. Our approach has two main strengths: similar to the GAN, the method does not require a formulation of a loss function for the optimization process. This allows training on a limited amount of annotated data in a weakly supervised manner. Promising segmentation results on real fluorescent microscopy data are presented. The code is freely available at: https://github.com/arbellea/DeepCellSeg.git",http://arxiv.org/abs/1709.05860,2018,conferencePaper,"Arbelle, Assaf; Raviv, Tammy Riklin",2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018) Adaptive Mechanism Design: Learning to Promote Cooperation,"In the future, artificial learning agents are likely to become increasingly widespread in our society.
They will interact with both other learning agents and humans in a variety of complex settings including social dilemmas. We consider the problem of how an external agent can promote cooperation between artificial learners by distributing additional rewards and punishments based on observing the learners' actions. We propose a rule for automatically learning how to create the right incentives by considering the players' anticipated parameter updates. Using this learning rule leads to cooperation with high social welfare in matrix games in which the agents would otherwise learn to defect with high probability. We show that the resulting cooperative outcome is stable in certain games even if the planning agent is turned off after a given number of episodes, while other games require ongoing intervention to maintain mutual cooperation. However, even in the latter case, the amount of necessary additional incentives decreases over time.",http://arxiv.org/abs/1806.04067,2019,conferencePaper,"Baumann, Tobias; Graepel, Thore; Shawe-Taylor, John",2020 International Joint Conference on Neural Networks (IJCNN) On the axiomatic treatment of probability,,http://www.impan.pl/get/doi/10.4064/cm-3-2-125-137,1955,journalArticle,"Łoś, J.",Colloquium Mathematicum Tessellating Hills: a toy model for demons in imperfect search,"If you haven't already, take a look at this post by johnswentworth to understand what this is all about: https://www.lesswrong.com/posts/KnPN7ett8RszE79PH/demons-in-imperfect-search The short version is that while systems that use perfect search, such as AIXI, have many safety problems, a whole new set of problems arises when we start creating systems that are not perfect searchers. Patterns can form that exploit the imperfect nature of the search function to perpetuate themselves. johnswentworth refers to such patterns as ""demons"". After reading that post I decided to see if I could observe demon formation in a simple model: gradient descent on a not-too-complicated mathematical function. It turns out that even in this very simplistic case, demon formation can happen. Hopefully this post will give people an example of demon formation where the mechanism is simple and easy to visualize. MODEL The function we try to minimize using gradient descent is called the loss function. Here it is: L(→x) = −x_0 + ϵ∑_{j=1}^n x_j⋅splotch_j(→x) Let me explain what some of the parts of this loss mean. Each function splotch_j(→x) is periodic with period 2π in every component of →x. I decided in this case to make my splotch functions out of a few randomly chosen sine waves added together. ϵ is chosen to be a small number so in any local region, ϵ∑_{j=1}^n x_j⋅splotch_j(→x) will look approximately periodic: A bunch of hills repeating over and over again with period 2π across the landscape. But over large enough distances, the relative weightings of various splotches do change. Travel a distance of 20π in the x_7 direction, and splotch_7 will be a larger component of the repeating pattern than it was before. This allows for selection effects. The −x_0 term means that the vector →x mainly wants to increase its x_0 component. But the splotch functions can also direct its motion. A splotch function might have a kind of ridge that directs some of the x_0 motion into other components.
If splotch7 tends to",https://www.alignmentforum.org/posts/X7S3u5E4KktLp7gHz/tessellating-hills-a-toy-model-for-demons-in-imperfect,2020,blogPost,DaemonicSigil,AI Alignment Forum Zero-Shot Visual Imitation,"The current dominant paradigm for imitation learning relies on strong supervision of expert actions to learn both 'what' and 'how' to imitate. We pursue an alternative paradigm wherein an agent first explores the world without any expert supervision and then distills its experience into a goal-conditioned skill policy with a novel forward consistency loss. In our framework, the role of the expert is only to communicate the goals (i.e., what to imitate) during inference. The learned policy is then employed to mimic the expert (i.e., how to imitate) after seeing just a sequence of images demonstrating the desired task. Our method is 'zero-shot' in the sense that the agent never has access to expert actions during training or for the task demonstration at inference. We evaluate our zero-shot imitator in two real-world settings: complex rope manipulation with a Baxter robot and navigation in previously unseen office environments with a TurtleBot. Through further experiments in VizDoom simulation, we provide evidence that better mechanisms for exploration lead to learning a more capable policy which in turn improves end task performance. Videos, models, and more details are available at https://pathak22.github.io/zeroshot-imitation/",http://arxiv.org/abs/1804.08606,2018,conferencePaper,"Pathak, Deepak; Mahmoudieh, Parsa; Luo, Guanghao; Agrawal, Pulkit; Chen, Dian; Shentu, Yide; Shelhamer, Evan; Malik, Jitendra; Efros, Alexei A.; Darrell, Trevor",Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops Interpretable and Pedagogical Examples,"Teachers intentionally pick the most informative examples to show their students. However, if the teacher and student are neural networks, the examples that the teacher network learns to give, although effective at teaching the student, are typically uninterpretable. We show that training the student and teacher iteratively, rather than jointly, can produce interpretable teaching strategies. We evaluate interpretability by (1) measuring the similarity of the teacher’s emergent strategies to intuitive strategies in each domain and (2) conducting human experiments to evaluate how effective the teacher’s strategies are at teaching humans. We show that the teacher network learns to select or generate interpretable, pedagogical examples to teach rule-based, probabilistic, boolean, and hierarchical concepts.",http://arxiv.org/abs/1711.00694,2017,manuscript,"Milli, Smitha; Abbeel, Pieter; Mordatch, Igor", Finding and Visualizing Weaknesses of Deep Reinforcement Learning Agents,"As deep reinforcement learning driven by visual perception becomes more widely used there is a growing need to better understand and probe the learned agents. Understanding the decision making process and its relationship to visual inputs can be very valuable to identify problems in learned behavior. However, this topic has been relatively under-explored in the research community. In this work we present a method for synthesizing visual inputs of interest for a trained agent. Such inputs or states could be situations in which specific actions are necessary. 
Further, critical states in which a very high or a very low reward can be achieved are often interesting to understand the situational awareness of the system as they can correspond to risky states. To this end, we learn a generative model over the state space of the environment and use its latent space to optimize a target function for the state of interest. In our experiments we show that this method can generate insights for a variety of environments and reinforcement learning methods. We explore results in the standard Atari benchmark games as well as in an autonomous driving simulator. Based on the efficiency with which we have been able to identify behavioural weaknesses with this technique, we believe this general approach could serve as an important tool for AI safety applications.",http://arxiv.org/abs/1904.01318,2019,manuscript,"Rupprecht, Christian; Ibrahim, Cyril; Pal, Christopher J.", How Useful Is Quantilization For Mitigating Specification-Gaming?,"For some tasks, there exists a goal that perfectly describes what the designer wants the AI system to achieve. For many tasks, however, the best available proxy objective is only a rough approximation of the designer’s intentions. When given such a goal, a system that optimizes the proxy objective tends to select degenerate solutions where the proxy reward is very different from the designer’s true reward function. One way to counteract the tendency toward specification-gaming is quantilization, a method that interpolates between imitating demonstrations, and optimizing the proxy objective. If the demonstrations are of adequate quality, and the proxy reward overestimates performance, then quantilization has better guaranteed performance than other strategies. However, if the proxy reward underestimates performance, then either imitation or optimization will offer the best guarantee. This work introduces three new gym environments: Mountain Car-RR, Hopper-RR, and Video Pinball-RR, and shows that quantilization outperforms baselines on these tasks.",,2019,conferencePaper,"Carey, Ryan", Precedents for economic n-year doubling before 4n-year doubling,"Does the economy ever double without having first doubled four times slower? Yes, but not since 3000BC.",https://aiimpacts.org/precedents-for-economic-n-year-doubling-before-4n-year-doubling/,2020,blogPost,AI Impacts,AI Impacts The Case for Strong Longtermism,,https://globalprioritiesinstitute.org/wp-content/uploads/2019/Greaves_MacAskill_The_Case_for_Strong_Longtermism.pdf,2019,manuscript,"Greaves, Hilary; MacAskill, William", Detecting Spiky Corruption in Markov Decision Processes,"Current reinforcement learning methods fail if the reward function is imperfect, i.e. if the agent observes reward different from what it actually receives. We study this problem within the formalism of Corrupt Reward Markov Decision Processes (CRMDPs). We show that if the reward corruption in a CRMDP is sufficiently ""spiky"", the environment is solvable. We fully characterize the regret bound of a Spiky CRMDP, and introduce an algorithm that is able to detect its corrupt states. We show that this algorithm can be used to learn the optimal policy with any common reinforcement learning algorithm. 
Finally, we investigate our algorithm in a pair of simple gridworld environments, finding that our algorithm can detect the corrupt states and learn the optimal policy despite the corruption.",http://arxiv.org/abs/1907.00452,2019,manuscript,"Mancuso, Jason; Kisielewski, Tomasz; Lindner, David; Singh, Alok", Finding Generalizable Evidence by Learning to Convince Q&A Models,"We propose a system that finds the strongest supporting evidence for a given answer to a question, using passage-based question-answering (QA) as a testbed. We train evidence agents to select the passage sentences that most convince a pretrained QA model of a given answer, if the QA model received those sentences instead of the full passage. Rather than finding evidence that convinces one model alone, we find that agents select evidence that generalizes; agent-chosen evidence increases the plausibility of the supported answer, as judged by other QA models and humans. Given its general nature, this approach improves QA in a robust manner: using agent-selected evidence (i) humans can correctly answer questions with only ~20% of the full passage and (ii) QA models can generalize to longer passages and harder questions.",http://arxiv.org/abs/1909.05863,2019,conferencePaper,"Perez, Ethan; Karamcheti, Siddharth; Fergus, Rob; Weston, Jason; Kiela, Douwe; Cho, Kyunghyun",Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processingand the 9th International Joint Conference on Natural Language Processing Safe operation as a social construct,,https://www.tandfonline.com/doi/full/10.1080/001401399184884,1999,journalArticle,"Rochlin, Gene I.",Ergonomics On the Impossibility of Supersized Machines,,https://arxiv.org/abs/1703.10987,2017,manuscript,"Garfinkel, Ben; Brundage, Miles; Filan, Daniel; Flynn, Carrick; Luketina, Jelena; Page, Michael; Sandberg, Anders; Snyder-Beattie, Andrew; Tegmark, Max", Is state-dependent valuation more adaptive than simpler rules?,,https://linkinghub.elsevier.com/retrieve/pii/S0376635717302048,2018,journalArticle,"Halpern, Joseph Y.; Seeman, Lior",Behavioural Processes Don't Blink: Reputations for Resolve and Higher-Order Beliefs in Crisis Bargaining,,,2017,manuscript,"Dafoe, Allan; Zwetsloot, Remco", Future directions for narrow value learning,"Narrow value learning is a huge field that people are already working on (though not by that name) and I can’t possibly do it justice. This post is primarily a list of things that I think are important and interesting, rather than an exhaustive list of directions to pursue. (In contrast, the corresponding post for ambitious value learning did aim to be exhaustive, and I don’t think I missed much work there.) You might think that since so many people are already working on narrow value learning, we should focus on more neglected areas of AI safety. However, I still think it’s worth working on because long-term safety suggests a particular subset of problems to focus on; that subset seems quite neglected. For example, a lot of work is about how to improve current algorithms in a particular domain, and the solutions encode domain knowledge to succeed. This seems not very relevant for long-term concerns. Some work assumes that a handcoded featurization is given (so that the true reward is linear in the features); this is not an assumption we could make for more powerful AI systems. 
I will speculate a bit on the neglectedness and feasibility of each of these areas, since for many of them there isn’t a person or research group who would champion them whom I could defer to about the arguments for success. THE BIG PICTURE This category of research is about how you could take narrow value learning algorithms and use them to create an aligned AI system. Typically, I expect this to work by having the narrow value learning enable some form of corrigibility. As far as I can tell, nobody outside of the AI safety community works on this problem. While it is far too early to stake a confident position one way or the other, I am slightly less optimistic about this avenue of approach than one in which we create a system that is directly trained to be corrigible. Avoiding problems with goal-directedness. How do we put together narrow value learning techniques in a way that does",https://www.alignmentforum.org/posts/MxadmSXHnoCupsWqx/future-directions-for-narrow-value-learning,2019,blogPost,"Shah, Rohin",AI Alignment Forum Quantifying Independently Reproducible Machine Learning,"Many warn that Artificial Intelligence has a serious reproducibility crisis, but is it so? Some conclusions from the author's experience trying to replicate 255 papers.",https://thegradient.pub/independently-reproducible-machine-learning/,2020,blogPost,"Raff, Edward",The Gradient The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk,,,2019,book,"Amadae, S.M.; Avin, Shahar; Borrie, John; Bronk, Justin; Hagström, Martin; Horowitz, Michael C.; Kaspersen, Anja; King, Chris; Rickli, Jean-Marc; Sauer, Frank; Scheftelowitsch, Dimitri; Stoutland, Page O.; Topychkanov, Petr", Program obfuscation: a quantitative approach,,http://portal.acm.org/citation.cfm?doid=1314257.1314263,2007,conferencePaper,"Anckaert, Bertrand; Madou, Matias; De Sutter, Bjorn; De Bus, Bruno; De Bosschere, Koen; Preneel, Bart",Proceedings of the 2007 ACM workshop on Quality of protection - QoP '07 Introducing the Unrestricted Adversarial Examples Challenge,"Posted by Tom B. Brown and Catherine Olsson, Research Engineers, Google Brain Team Machine learning is being deployed in more and more rea...",http://ai.googleblog.com/2018/09/introducing-unrestricted-adversarial.html,2018,blogPost,Tom B Brown; Catherine Olsson,Google AI Blog "A non-mystical explanation of ""no-self"" (three characteristics series)","This is the second post of the ""a non-mystical explanation of insight meditation and the three characteristics of existence"" series. You can read the first post, explaining my general intent and approach, here. ON THE THREE CHARACTERISTICS So, just what are the three characteristics of existence? My take is that they are a rough way of clustering the kinds of insights that you may get from insight meditation: in one way or another, most insights about the structure of your mind can be said to be about no-self, impermanence, unsatisfactoriness, or some combination of them. As upcoming posts should hopefully make obvious, this is not a very clear-cut distinction: the three are deeply intertwined with each other, and you can’t fully explain one without explaining the others. I am starting with a discussion of no-self, then moving to unsatisfactoriness, then coming back to no-self, moving between the different characteristics in a way that seems most clear. 
I think that what’s called “enlightenment” refers to the gradual accumulation of these kinds of insights, combined with practices aimed at exploiting an understanding of them. There are many different insights and ways of exploring them, as well as many general approaches for making use of them. Different traditions also seem to have different enlightenments [1, 2]. Thus, rather than providing any definitive explanation of “this is enlightenment”, I attempt to focus on exploring how various cognitive mechanisms behind different enlightenments work. My intent is to cover enough of different things to give a taste of what's out there and what kinds of outcomes might be possible, while acknowledging that there's also a lot that I have no clue of yet. So this is not trying to be anything like “a definitive and complete explanation of the three characteristics”; I don’t think anyone could write such a thing, as nobody can have explored all the aspects of all the three. Rather, this is more of a sketch of those aspects",https://www.lesswrong.com/posts/W59Nb72sYJhMJKGB8/a-non-mystical-explanation-of-no-self-three-characteristics,2020,blogPost,"Sotala, Kaj",LessWrong Interpretable Convolutional Neural Networks,,https://ieeexplore.ieee.org/document/8579018/,2018,conferencePaper,"Zhang, Quanshi; Wu, Ying Nian; Zhu, Song-Chun",2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition On Catastrophic Interference in Atari 2600 Games,"Model-free deep reinforcement learning is sample inefficient. One hypothesis -- speculated, but not confirmed -- is that catastrophic interference within an environment inhibits learning. We test this hypothesis through a large-scale empirical study in the Arcade Learning Environment (ALE) and, indeed, find supporting evidence. We show that interference causes performance to plateau; the network cannot train on segments beyond the plateau without degrading the policy used to reach there. By synthetically controlling for interference, we demonstrate performance boosts across architectures, learning algorithms and environments. A more refined analysis shows that learning one segment of a game often increases prediction errors elsewhere. Our study provides a clear empirical link between catastrophic interference and sample efficiency in reinforcement learning.",http://arxiv.org/abs/2002.12499,2020,manuscript,"Fedus, William; Ghosh, Dibya; Martin, John D.; Bellemare, Marc G.; Bengio, Yoshua; Larochelle, Hugo", Learning Safe Policies with Expert Guidance,"We propose a framework for ensuring safe behavior of a reinforcement learning agent when the reward function may be difficult to specify. In order to do this, we rely on the existence of demonstrations from expert policies, and we provide a theoretical framework for the agent to optimize in the space of rewards consistent with its existing knowledge. We propose two methods to solve the resulting optimization: an exact ellipsoid-based method and a method in the spirit of the ""follow-the-perturbed-leader"" algorithm. Our experiments demonstrate the behavior of our algorithm in both discrete and continuous problems. 
The trained agent safely avoids states with potential negative effects while imitating the behavior of the expert in the other states.",http://arxiv.org/abs/1805.08313,2018,conferencePaper,"Huang, Jessie; Wu, Fa; Precup, Doina; Cai, Yang",Advances in Neural Information Processing Systems 31 (NeurIPS 2018) A Rawlsian algorithm for autonomous vehicles,,http://link.springer.com/10.1007/s10676-017-9419-3,2017,journalArticle,"Leben, Derek",Ethics and Information Technology Growing Recursive Self-Improvers,"Research into the capability of recursive self-improvement typically only considers pairs of ⟨agent, self-modification candidate⟩, and asks whether the agent can determine/prove if the self-modification is beneficial and safe. But this leaves out the much more important question of how to come up with a potential self-modification in the first place, as well as how to build an AI system capable of evaluating one. Here we introduce a novel class of AI systems, called experience-based AI (EXPAI), which trivializes the search for beneficial and safe self-modifications. Instead of distracting us with proof-theoretical issues, EXPAI systems force us to consider their education in order to control a system’s growth towards a robust and trustworthy, benevolent and well-behaved agent. We discuss what a practical instance of EXPAI looks like and build towards a “test theory” that allows us to gauge an agent’s level of understanding of educational material.",http://link.springer.com/10.1007/978-3-319-41649-6_13,2016,bookSection,"Steunebrink, Bas R.; Thórisson, Kristinn R.; Schmidhuber, Jürgen",Artificial General Intelligence Transcending Complacency On Superintelligent Machines,"As the Hollywood blockbuster Transcendence debuts this weekend with Johnny Depp, Morgan Freeman and clashing visions for the future of humanity,...",https://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html,2014,magazineArticle,"Hawking, Stephen; Tegmark, Max; Russell, Stuart; Wilczek, Frank",HuffPost Future progress in artificial intelligence: A poll among experts,,,2014,journalArticle,"Müller, Vincent C.; Bostrom, Nick",AI Matters Causal Confusion in Imitation Learning,"Behavioral cloning reduces policy learning to supervised learning by training a discriminative model to predict expert actions given observations. Such discriminative models are non-causal: the training procedure is unaware of the causal structure of the interaction between the expert and the environment. We point out that ignoring causality is particularly damaging because of the distributional shift in imitation learning. In particular, it leads to a counter-intuitive ""causal misidentification"" phenomenon: access to more information can yield worse performance. We investigate how this problem arises, and propose a solution to combat it through targeted interventions---either environment interaction or expert queries---to determine the correct causal model.
We show that causal misidentification occurs in several benchmark control domains as well as realistic driving settings, and validate our solution against DAgger and other baselines and ablations.",https://arxiv.org/abs/1905.11979v2,2019,conferencePaper,"de Haan, Pim; Jayaraman, Dinesh; Levine, Sergey",Advances in Neural Information Processing Systems 32 (NeurIPS 2019) Artificial Intelligence: American Attitudes and Trends,,https://www.ssrn.com/abstract=3312874,2019,report,"Zhang, Baobao; Dafoe, Allan", Guided Meta-Policy Search,"Reinforcement learning (RL) algorithms have demonstrated promising results on complex tasks, yet often require impractical numbers of samples because they learn from scratch. Meta-RL aims to address this challenge by leveraging experience from previous tasks in order to more quickly solve new tasks. However, in practice, these algorithms generally also require large amounts of on-policy experience during the meta-training process, making them impractical for use in many problems. To this end, we propose to learn a reinforcement learning procedure through imitation of expert policies that solve previously-seen tasks. This involves a nested optimization, with RL in the inner loop and supervised imitation learning in the outer loop. Because the outer loop imitation learning can be done with off-policy data, we can achieve significant gains in meta-learning sample efficiency. In this paper, we show how this general idea can be used both for meta-reinforcement learning and for learning fast RL procedures from multi-task demonstration data. The former results in an approach that can leverage policies learned for previous tasks without significant amounts of on-policy data during meta-training, whereas the latter is particularly useful in cases where demonstrations are easy for a person to provide. Across a number of continuous control meta-RL problems, we demonstrate significant improvements in meta-RL sample efficiency in comparison to prior work as well as the ability to scale to domains with visual observations.",http://arxiv.org/abs/1904.00956,2019,conferencePaper,"Mendonca, Russell; Gupta, Abhishek; Kralev, Rosen; Abbeel, Pieter; Levine, Sergey; Finn, Chelsea",33rd Conference on Neural Information Processing Systems (NeurIPS 2019) The Role of Cooperation in Responsible AI Development,"In this paper, we argue that competitive pressures could incentivize AI companies to underinvest in ensuring their systems are safe, secure, and have a positive social impact. Ensuring that AI systems are developed responsibly may therefore require preventing and solving collective action problems between companies. We note that there are several key factors that improve the prospects for cooperation in collective action problems. We use this to identify strategies to improve the prospects for industry cooperation on the responsible development of AI.",http://arxiv.org/abs/1907.04534,2019,manuscript,"Askell, Amanda; Brundage, Miles; Hadfield, Gillian", "Learning from Physical Human Corrections, One Feature at a Time","We focus on learning robot objective functions from human guidance: specifically, from physical corrections provided by the person while the robot is acting. Objective functions are typically parametrized in terms of features, which capture aspects of the task that might be important. When the person intervenes to correct the robot’s behavior, the robot should update its understanding of which features matter, how much, and in what way. 
Unfortunately, real users do not provide optimal corrections that isolate exactly what the robot was doing wrong. Thus, when receiving a correction, it is difficult for the robot to determine which features the person meant to correct, and which features were changed unintentionally. In this paper, we propose to improve the efficiency of robot learning during physical interactions by reducing unintended learning. Our approach allows the human-robot team to focus on learning one feature at a time, unlike state-of-the-art techniques that update all features at once. We derive an online method for identifying the single feature which the human is trying to change during physical interaction, and experimentally compare this one-at-a-time approach to the all-at-once baseline in a user study. Our results suggest that users teaching one-at-a-time perform better, especially in tasks that require changing multiple features.",http://dl.acm.org/citation.cfm?doid=3171221.3171267,2018,conferencePaper,"Bajcsy, Andrea; Losey, Dylan P.; O'Malley, Marcia K.; Dragan, Anca D.",Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction - HRI '18 Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model,"Developing control policies in simulation is often more practical and safer than directly running experiments in the real world. This applies to policies obtained from planning and optimization, and even more so to policies obtained from reinforcement learning, which is often very data demanding. However, a policy that succeeds in simulation often doesn't work when deployed on a real robot. Nevertheless, often the overall gist of what the policy does in simulation remains valid in the real world. In this paper we investigate such settings, where the sequence of states traversed in simulation remains reasonable for the real world, even if the details of the controls are not, as could be the case when the key differences lie in detailed friction, contact, mass and geometry properties. During execution, at each time step our approach computes what the simulation-based control policy would do, but then, rather than executing these controls on the real robot, our approach computes what the simulation expects the resulting next state(s) will be, and then relies on a learned deep inverse dynamics model to decide which real-world action is most suitable to achieve those next states. Deep models are only as good as their training data, and we also propose an approach for data collection to (incrementally) learn the deep inverse dynamics model. Our experiments shows our approach compares favorably with various baselines that have been developed for dealing with simulation to real world model discrepancy, including output error control and Gaussian dynamics adaptation.",http://arxiv.org/abs/1610.03518,2016,manuscript,"Christiano, Paul; Shah, Zain; Mordatch, Igor; Schneider, Jonas; Blackwell, Trevor; Tobin, Joshua; Abbeel, Pieter; Zaremba, Wojciech", Adversarial Policies: Attacking Deep Reinforcement Learning,"Deep reinforcement learning (RL) policies are known to be vulnerable to adversarial perturbations to their observations, similar to adversarial examples for classifiers. However, an attacker is not usually able to directly modify another agent’s observations. This might lead one to wonder: is it possible to attack an RL agent simply by choosing an adversarial policy acting in a multi-agent environment so as to create natural observations that are adversarial? 
We demonstrate the existence of adversarial policies in zero-sum games between simulated humanoid robots with proprioceptive observations, against state-of-the-art victims trained via self-play to be robust to opponents. The adversarial policies reliably win against the victims but generate seemingly random and uncoordinated behavior. We find that these policies are more successful in high-dimensional environments, and induce substantially different activations in the victim policy network than when the victim plays against a normal opponent. Videos are available at https://adversarialpolicies.github.io/.",http://arxiv.org/abs/1905.10615,2019,manuscript,"Gleave, Adam; Dennis, Michael; Kant, Neel; Wild, Cody; Levine, Sergey; Russell, Stuart", The MAGICAL Benchmark for Robust Imitation,"Imitation Learning (IL) algorithms are typically evaluated in the same environment that was used to create demonstrations. This rewards precise reproduction of demonstrations in one particular environment, but provides little information about how robustly an algorithm can generalise the demonstrator’s intent to substantially different deployment settings. This paper presents the MAGICAL benchmark suite, which permits systematic evaluation of generalisation by quantifying robustness to different kinds of distribution shift that an IL algorithm is likely to encounter in practice. Using the MAGICAL suite, we confirm that existing IL algorithms overfit significantly to the context in which demonstrations are provided. We also show that standard methods for reducing overfitting are effective at creating narrow perceptual invariances, but are not sufficient to enable transfer to contexts that require substantially different behaviour, which suggests that new approaches will be needed in order to robustly generalise demonstrator intent. Code and data for the MAGICAL suite is available at https://github.com/qxcv/magical/.",https://papers.nips.cc/paper/2020/hash/d464b5ac99e74462f321c06ccacc4bff-Abstract.html,2020,conferencePaper,"Toyer, Sam; Shah, Rohin; Critch, Andrew; Russell, Stuart",Advances in Neural Information Processing Systems 33 Pre-proceedings Mammalian Value Systems,"Characterizing human values is a topic deeply interwoven with the sciences, humanities, art, and many other human endeavors. In recent years, a number of thinkers have argued that accelerating trends in computer science, cognitive science, and related disciplines foreshadow the creation of intelligent machines which meet and ultimately surpass the cognitive abilities of human beings, thereby entangling an understanding of human values with future technological development. Contemporary research accomplishments suggest sophisticated AI systems becoming widespread and responsible for managing many aspects of the modern world, from preemptively planning users' travel schedules and logistics, to fully autonomous vehicles, to domestic robots assisting in daily living. The extrapolation of these trends has been most forcefully described in the context of a hypothetical ""intelligence explosion,"" in which the capabilities of an intelligent software agent would rapidly increase due to the presence of feedback loops unavailable to biological organisms. 
The possibility of superintelligent agents, or simply the widespread deployment of sophisticated, autonomous AI systems, highlights an important theoretical problem: the need to separate the cognitive and rational capacities of an agent from the fundamental goal structure, or value system, which constrains and guides the agent's actions. The ""value alignment problem"" is to specify a goal structure for autonomous agents compatible with human values. In this brief article, we suggest that recent ideas from affective neuroscience and related disciplines aimed at characterizing neurological and behavioral universals in the mammalian class provide important conceptual foundations relevant to describing human values. We argue that the notion of ""mammalian value systems"" points to a potential avenue for fundamental research in AI safety and AI ethics.",http://arxiv.org/abs/1607.08289,2019,journalArticle,"Sarma, Gopal P.; Hay, Nick J.",Informatica Using Artificial Intelligence to Augment Human Intelligence,"By creating user interfaces which let us work with the representations inside machine learning models, we can give people new tools for reasoning.",https://distill.pub/2017/aia,2017,journalArticle,"Carter, Shan; Nielsen, Michael",Distill Infinite Data/Compute Arguments in Alignment,,https://www.alignmentforum.org/posts/7CJBiHYxebTmMfGs3/infinite-data-compute-arguments-in-alignment,2020,blogPost,"Wentworth, John S",AI Alignment Forum Preliminary design of the SAFE platform,,http://dl.acm.org/citation.cfm?doid=2039239.2039245,2011,conferencePaper,"DeHon, André; Ray, Sumit; Shivers, Olin; Smith, Jonathan M.; Sullivan, Gregory; Karel, Ben; Knight, Thomas F.; Malecha, Gregory; Montagu, Benoît; Morisset, Robin; Morrisett, Greg; Pierce, Benjamin C.; Pollack, Randy",Proceedings of the 6th Workshop on Programming Languages and Operating Systems - PLOS '11 You Shouldn’t Trust Me: Learning Models Which Conceal Unfairness From Multiple Explanation Methods,"Transparency of algorithmic systems has been discussed as a way for end-users and regulators to develop appropriate trust in machine learning models. One popular approach, LIME (Ribeiro, Singh, and Guestrin 2016), even suggests that model explanations can answer the question “Why should I trust you?”. Here we show a straightforward method for modifying a pre-trained model to manipulate the output of many popular feature importance explanation methods with little change in accuracy, thus demonstrating the danger of trusting such explanation methods to evaluate the trustworthiness of models. We show how this explanation attack can mask a model’s discriminatory use of a sensitive feature, raising strong concerns about using such explanation methods as reliable evaluations to check the fairness of a model.",,2020,conferencePaper,"Dimanov, Botty; Bhatt, Umang; Jamnik, Mateja; Weller, Adrian", Deep Reinforcement Learning that Matters,"In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to sustaining this progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results tough to interpret. 
Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted.",http://arxiv.org/abs/1709.06560,2019,manuscript,"Henderson, Peter; Islam, Riashat; Bachman, Philip; Pineau, Joelle; Precup, Doina; Meger, David", Model Mis-specification and Inverse Reinforcement Learning,"Posted as part of the AI Alignment Forum sequence on Value Learning. Rohin's note: While I motivated the last post with an example of using a specific model for human biases, in this post (original here), Jacob Steinhardt and Owain Evans point out that model mis-specification can arise in other parts of inverse reinforcement learning as well. The arguments here consider some more practical concerns (for example, the worries about getting only short-term data for each human would not be a problem if you had the entire human policy). -------------------------------------------------------------------------------- In my previous post, “Latent Variables and Model Mis-specification”, I argued that while machine learning is good at optimizing accuracy on observed signals, it has less to say about correctly inferring the values for unobserved variables in a model. In this post I’d like to focus in on a specific context for this: inverse reinforcement learning (Ng et al. 2000, Abbeel et al. 2004, Ziebart et al. 2008, Ho et al 2016), where one observes the actions of an agent and wants to infer the preferences and beliefs that led to those actions. For this post, I am pleased to be joined by Owain Evans, who is an active researcher in this area and has co-authored an online book about building models of agents (see here in particular for a tutorial on inverse reinforcement learning and inverse planning). Owain and I are particularly interested in inverse reinforcement learning (IRL) because it has been proposed (most notably by Stuart Russell) as a method for learning human values in the context of AI safety; among other things, this would eventually involve learning and correctly implementing human values by artificial agents that are much more powerful, and act with much broader scope, than any humans alive today. While we think that overall IRL is a promising route to consider, we believe that there are also a number of non-obvious pitfalls related to performing IRL",https://www.alignmentforum.org/posts/cnC2RMWEGiGpJv8go/model-mis-specification-and-inverse-reinforcement-learning,2018,blogPost,"Evans, Owain; Steinhardt, Jacob",AI Alignment Forum "The Age of Em: Work, Love, and Life when Robots Rule the Earth","Robots may one day rule the world, but what is a robot-ruled Earth like? Many think the first truly smart robots will be brain emulations or ems. Scan a human brain, then run a model with the same connections on a fast computer, and you have a robot brain, but recognizably human. Train an em to do some job and copy it a million times: an army of workers is at your disposal. 
When they can be made cheaply, within perhaps a century, ems will displace humans in most jobs. In this new economic era, the world economy may double in size every few weeks. Some say we can't know the future, especially following such a disruptive new technology, but Professor Robin Hanson sets out to prove them wrong. Applying decades of expertise in physics, computer science, and economics, he uses standard theories to paint a detailed picture of a world dominated by ems. While human lives don't change greatly in the em era, em lives are as different from ours as our lives are from those of our farmer and forager ancestors. Ems make us question common assumptions of moral progress, because they reject many of the values we hold dear. Read about em mind speeds, body sizes, job training and career paths, energy use and cooling infrastructure, virtual reality, aging and retirement, death and immortality, security, wealth inequality, religion, teleportation, identity, cities, politics, law, war, status, friendship and love. This book shows you just how strange your descendants may be, though ems are no stranger than we would appear to our ancestors. To most ems, it seems good to be an em.",https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198754626.001.0001/isbn-9780198754626,2016,book,"Hanson, Robin", Can you program ethics into a self-driving car?,,http://ieeexplore.ieee.org/document/7473149/,2016,journalArticle,"Goodall, Noah J.",IEEE Spectrum The Cartography of Global Catastrophic Governance,,,2020,report,"Kemp, Luke; Rhodes, Catherine", The technological singularity,,https://link.springer.com/book/10.1007%2F978-3-662-54033-6,2015,book,"Miller, Jim; Yampolskiy, Roman; Armstrong, Stuart; Callaghan, Vic", Courteous Autonomous Cars,"Typically, autonomous cars optimize for a combination of safety, efficiency, and driving quality. But as we get better at this optimization, we start seeing behavior go from too conservative to too aggressive. The car's behavior exposes the incentives we provide in its cost function. In this work, we argue for cars that are not optimizing a purely selfish cost, but also try to be courteous to other interactive drivers. We formalize courtesy as a term in the objective that measures the increase in another driver's cost induced by the autonomous car's behavior. Such a courtesy term enables the robot car to be aware of possible irrationality of the human behavior, and plan accordingly. We analyze the effect of courtesy in a variety of scenarios. We find, for example, that courteous robot cars leave more space when merging in front of a human driver. Moreover, we find that such a courtesy term can help explain real human driver behavior on the NGSIM dataset.",http://arxiv.org/abs/1808.02633,2018,conferencePaper,"Sun, Liting; Zhan, Wei; Tomizuka, Masayoshi; Dragan, Anca D.",2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) MIRI AI Predictions Dataset,"The MIRI AI predictions dataset is a collection of public predictions about human-level AI timelines. We edited the original dataset, as described below. Our dataset is available here, and the original here. 
Interesting features of the dataset include: The median dates at which people's predictions suggest AI is less likely than not and more likely than not are...",https://aiimpacts.org/miri-ai-predictions-dataset/,2015,blogPost,AI Impacts,AI Impacts Non-Adversarial Imitation Learning and its Connections to Adversarial Methods,"Many modern methods for imitation learning and inverse reinforcement learning, such as GAIL or AIRL, are based on an adversarial formulation. These methods apply GANs to match the expert's distribution over states and actions with the implicit state-action distribution induced by the agent's policy. However, by framing imitation learning as a saddle point problem, adversarial methods can suffer from unstable optimization, and convergence can only be shown for small policy updates. We address these problems by proposing a framework for non-adversarial imitation learning. The resulting algorithms are similar to their adversarial counterparts and, thus, provide insights for adversarial imitation learning methods. Most notably, we show that AIRL is an instance of our non-adversarial formulation, which enables us to greatly simplify its derivations and obtain stronger convergence guarantees. We also show that our non-adversarial formulation can be used to derive novel algorithms by presenting a method for offline imitation learning that is inspired by the recent ValueDice algorithm, but does not rely on small policy updates for convergence. In our simulated robot experiments, our offline method for non-adversarial imitation learning seems to perform best when using many updates for policy and discriminator at each iteration and outperforms behavioral cloning and ValueDice.",http://arxiv.org/abs/2008.03525,2020,manuscript,"Arenz, Oleg; Neumann, Gerhard", Learning from the Climate Change Debate to Avoid Polarisation on Negative Emissions,"This paper identifies critical lessons from the climate change experience to guide how communications and engagement on negative emissions can be conducted to encourage functional public and policy discourse. Negative emissions technologies present a significant opportunity for limiting climate change, and are likely to be necessary to keep warming below 2°C. While the concept of negative emissions is still in its infancy, there is evidence of nascent polarization, and a lack of nuance in discussion of individual technologies. We argue that if negative emissions technologies are to be implemented effectively and sustainably, an effective governance regime is needed; built on functional societal discourse and avoiding the ideological baggage of the broader climate change debate or the controversies concerning geoengineering. At its core, our argument is to avoid the ideological bundling of negative emissions; this can be pursued directly and via careful selection of communication frames and the use of non-partisan, trusted messengers. Whether these lessons are heeded may determine if negative emissions are governed proactively, or are distorted politically, misused and delayed.",https://doi.org/10.1080/17524032.2019.1630463,2019,journalArticle,"Colvin, R. M.; Kemp, Luke; Talberg, Anita; Castella, Clare De; Downie, C.; Friel, S.; Grant, Will J.; Howden, Mark; Jotzo, Frank; Markham, Francis; Platow, Michael J.",Environmental Communication The Incentives that Shape Behaviour,"Which variables does an agent have an incentive to control with its decision, and which variables does it have an incentive to respond to? 
We formalise these incentives, and demonstrate unique graphical criteria for detecting them in any single decision causal influence diagram. To this end, we introduce structural causal influence models, a hybrid of the influence diagram and structural causal model frameworks. Finally, we illustrate how these incentives predict agent incentives in both fairness and AI safety applications.",http://arxiv.org/abs/2001.07118,2020,manuscript,"Carey, Ryan; Langlois, Eric; Everitt, Tom; Legg, Shane", Artificial intelligence in a crisis needs ethics with urgency,"Artificial intelligence tools can help save lives in a pandemic. However, the need to implement technological solutions rapidly raises challenging ethical issues. We need new approaches for ethics with urgency, to ensure AI can be safely and beneficially used in the COVID-19 response and beyond.",https://www.nature.com/articles/s42256-020-0195-0,2020,journalArticle,"Tzachor, Asaf; Whittlestone, Jess; Sundaram, Lalitha; hÉigeartaigh, Seán Ó",Nature Machine Intelligence Robust Change Captioning,"Describing what has changed in a scene can be useful to a user, but only if generated text focuses on what is semantically relevant. It is thus important to distinguish distractors (e.g. a viewpoint change) from relevant changes (e.g. an object has moved). We present a novel Dual Dynamic Attention Model (DUDA) to perform robust Change Captioning. Our model learns to distinguish distractors from semantic changes, localize the changes via Dual Attention over ""before"" and ""after"" images, and accurately describe them in natural language via Dynamic Speaker, by adaptively focusing on the necessary visual inputs (e.g. ""before"" or ""after"" image). To study the problem in depth, we collect a CLEVR-Change dataset, built off the CLEVR engine, with 5 types of scene changes. We benchmark a number of baselines on our dataset, and systematically study different change types and robustness to distractors. We show the superiority of our DUDA model in terms of both change captioning and localization. We also show that our approach is general, obtaining state-of-the-art results on the recent realistic Spot-the-Diff dataset which has no distractors.",http://arxiv.org/abs/1901.02527,2019,conferencePaper,"Park, Dong Huk; Darrell, Trevor; Rohrbach, Anna",Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Avoiding Negative Side Effects due to Incomplete Knowledge of AI Systems,"Autonomous agents acting in the real-world often operate based on models that ignore certain aspects of the environment. The incompleteness of any given model---handcrafted or machine acquired---is inevitable due to practical limitations of any modeling technique for complex real-world settings. Due to the limited fidelity of its model, an agent's actions may have unexpected, undesirable consequences during execution. Learning to recognize and avoid such negative side effects of the agent's actions is critical to improving the safety and reliability of autonomous systems. This emerging research topic is attracting increased attention due to the increased deployment of AI systems and their broad societal impacts. This article provides a comprehensive overview of different forms of negative side effects and the recent research efforts to address them. We identify key characteristics of negative side effects, highlight the challenges in avoiding negative side effects, and discuss recently developed approaches, contrasting their benefits and limitations. 
We conclude with a discussion of open questions and suggestions for future research directions.",http://arxiv.org/abs/2008.12146,2020,manuscript,"Saisubramanian, Sandhya; Zilberstein, Shlomo; Kamar, Ece", Adapting a kidney exchange algorithm to align with human values,"The efficient and fair allocation of limited resources is a classical problem in economics and computer science. In kidney exchanges, a central market maker allocates living kidney donors to patients in need of an organ. Patients and donors in kidney exchanges are prioritized using ad-hoc weights decided on by committee and then fed into an allocation algorithm that determines who gets what—and who does not. In this paper, we provide an end-to-end methodology for estimating weights of individual participant profiles in a kidney exchange. We first elicit from human subjects a list of patient attributes they consider acceptable for the purpose of prioritizing patients (e.g., medical characteristics, lifestyle choices, and so on). Then, we ask subjects comparison queries between patient profiles and estimate weights in a principled way from their responses. We show how to use these weights in kidney exchange market clearing algorithms. We then evaluate the impact of the weights in simulations and find that the precise numerical values of the weights we computed matter little, other than the ordering of profiles that they imply. However, compared to not prioritizing patients at all, there is a significant effect, with certain classes of patients being (de)prioritized based on the human-elicited value judgments.",http://www.sciencedirect.com/science/article/pii/S0004370220300229,2020,journalArticle,"Freedman, Rachel; Borg, Jana Schaich; Sinnott-Armstrong, Walter; Dickerson, John P.; Conitzer, Vincent",Artificial Intelligence Probabilistic inference in human semantic memory,,https://linkinghub.elsevier.com/retrieve/pii/S136466130600129X,2006,journalArticle,"Steyvers, Mark; Griffiths, Thomas L.; Dennis, Simon",Trends in Cognitive Sciences PRECOG: PREdiction Conditioned On Goals in Visual Multi-Agent Settings,"For autonomous vehicles (AVs) to behave appropriately on roads populated by human-driven vehicles, they must be able to reason about the uncertain intentions and decisions of other drivers from rich perceptual information. Towards these capabilities, we present a probabilistic forecasting model of future interactions between a variable number of agents. We perform both standard forecasting and the novel task of conditional forecasting, which reasons about how all agents will likely respond to the goal of a controlled agent (here, the AV). We train models on real and simulated data to forecast vehicle trajectories given past positions and LIDAR. Our evaluation shows that our model is substantially more accurate in multi-agent driving scenarios compared to existing state-of-the-art. Beyond its general ability to perform conditional forecasting queries, we show that our model's predictions of all agents improve when conditioned on knowledge of the AV's goal, further illustrating its capability to model agent interactions.",http://arxiv.org/abs/1905.01296,2019,conferencePaper,"Rhinehart, Nicholas; McAllister, Rowan; Kitani, Kris; Levine, Sergey","Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019" Examples of early action on risks,"Details Discussion There are many current efforts to mitigate risks from artificial intelligence. 
We might learn something about the likelihood of these efforts influencing AI risk by looking at similar past efforts. To this end, we are interested here in past risk mitigation efforts that have the following characteristics (taken from this paper contributing to the same project (p5) and...",https://aiimpacts.org/examples-of-early-action-on-a-risk/,2016,blogPost,AI Impacts,AI Impacts Deep Reinforcement Learning from Policy-Dependent Human Feedback,"To widen their accessibility and increase their utility, intelligent agents must be able to learn complex behaviors as specified by (non-expert) human users. Moreover, they will need to learn these behaviors within a reasonable amount of time while efficiently leveraging the sparse feedback a human trainer is capable of providing. Recent work has shown that human feedback can be characterized as a critique of an agent's current behavior rather than as an alternative reward signal to be maximized, culminating in the COnvergent Actor-Critic by Humans (COACH) algorithm for making direct policy updates based on human feedback. Our work builds on COACH, moving to a setting where the agent's policy is represented by a deep neural network. We employ a series of modifications on top of the original COACH algorithm that are critical for successfully learning behaviors from high-dimensional observations, while also satisfying the constraint of obtaining reduced sample complexity. We demonstrate the effectiveness of our Deep COACH algorithm in the rich 3D world of Minecraft with an agent that learns to complete tasks by mapping from raw pixels to actions using only real-time human feedback in 10-15 minutes of interaction.",http://arxiv.org/abs/1902.04257,2019,manuscript,"Arumugam, Dilip; Lee, Jun Ki; Saskin, Sophie; Littman, Michael L.", RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning,"Deep reinforcement learning (deep RL) has been successful in learning sophisticated behaviors automatically; however, the learning process requires a huge number of trials. In contrast, animals can learn new tasks in just a few trials, benefiting from their prior knowledge about the world. This paper seeks to bridge this gap. Rather than designing a ""fast"" reinforcement learning algorithm, we propose to represent it as a recurrent neural network (RNN) and learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose (""slow"") RL algorithm. The RNN receives all information a typical RL algorithm would receive, including observations, actions, rewards, and termination flags; and it retains its state across episodes in a given Markov Decision Process (MDP). The activations of the RNN store the state of the ""fast"" RL algorithm on the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both small-scale and large-scale problems. On the small-scale side, we train it to solve randomly generated multi-arm bandit problems and finite MDPs. After RL$^2$ is trained, its performance on new MDPs is close to human-designed algorithms with optimality guarantees. 
On the large-scale side, we test RL$^2$ on a vision-based navigation task and show that it scales up to high-dimensional problems.",http://arxiv.org/abs/1611.02779,2016,manuscript,"Duan, Yan; Schulman, John; Chen, Xi; Bartlett, Peter L.; Sutskever, Ilya; Abbeel, Pieter", Imitating Latent Policies from Observation,"In this paper, we describe a novel approach to imitation learning that infers latent policies directly from state observations. We introduce a method that characterizes the causal effects of latent actions on observations while simultaneously predicting their likelihood. We then outline an action alignment procedure that leverages a small amount of environment interactions to determine a mapping between the latent and real-world actions. We show that this corrected labeling can be used for imitating the observed behavior, even though no expert actions are given. We evaluate our approach within classic control environments and a platform game and demonstrate that it performs better than standard approaches. Code for this work is available at https://github.com/ashedwards/ILPO.",http://arxiv.org/abs/1805.07914,2019,conferencePaper,"Edwards, Ashley D.; Sahni, Himanshu; Schroecker, Yannick; Isbell, Charles L.",Proceedings of the 36th International Conference on Machine Learning Learning Existing Social Conventions via Observationally Augmented Self-Play,"In order for artificial agents to coordinate effectively with people, they must act consistently with existing conventions (e.g. how to navigate in traffic, which language to speak, or how to coordinate with teammates). A group's conventions can be viewed as a choice of equilibrium in a coordination game. We consider the problem of an agent learning a policy for a coordination game in a simulated environment and then using this policy when it enters an existing group. When there are multiple possible conventions we show that learning a policy via multi-agent reinforcement learning (MARL) is likely to find policies which achieve high payoffs at training time but fail to coordinate with the real group into which the agent enters. We assume access to a small number of samples of behavior from the true convention and show that we can augment the MARL objective to help it find policies consistent with the real group's convention. In three environments from the literature - traffic, communication, and team coordination - we observe that augmenting MARL with a small amount of imitation learning greatly increases the probability that the strategy found by MARL fits well with the existing social convention. We show that this works even in an environment where standard training methods very rarely find the true convention of the agent's partners.",http://arxiv.org/abs/1806.10071,2019,conferencePaper,"Lerer, Adam; Peysakhovich, Alexander","AIES '19: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society" Open Questions in Creating Safe Open-ended AI: Tensions Between Control and Creativity,"Artificial life originated and has long studied the topic of open-ended evolution, which seeks the principles underlying artificial systems that innovate continually, inspired by biological evolution. Recently, interest has grown within the broader field of AI in a generalization of open-ended evolution, here called open-ended search, wherein such questions of open-endedness are explored for advancing AI, whatever the nature of the underlying search algorithm (e.g. evolutionary or gradient-based). 
For example, open-ended search might design new architectures for neural networks, new reinforcement learning algorithms, or most ambitiously, aim at designing artificial general intelligence. This paper proposes that open-ended evolution and artificial life have much to contribute towards the understanding of open-ended AI, focusing here in particular on the safety of open-ended search. The idea is that AI systems are increasingly applied in the real world, often producing unintended harms in the process, which motivates the growing field of AI safety. This paper argues that open-ended AI has its own safety challenges, in particular, whether the creativity of open-ended systems can be productively and predictably controlled. This paper explains how unique safety problems manifest in open-ended search, and suggests concrete contributions and research questions to explore them. The hope is to inspire progress towards creative, useful, and safe open-ended search algorithms.",http://arxiv.org/abs/2006.07495,2020,conferencePaper,"Ecoffet, Adrien; Clune, Jeff; Lehman, Joel",Artificial Life Conference Proceedings Robust Online Optimization of Reward-uncertain MDPs,"Imprecise-reward Markov decision processes (IRMDPs) are MDPs in which the reward function is only partially specified (e.g., by some elicitation process). Recent work using minimax regret to solve IRMDPs has shown, despite their theoretical intractability, how the set of policies that are nondominated w.r.t. reward uncertainty can be exploited to accelerate regret computation. However, the number of nondominated policies is generally so large as to undermine this leverage. In this paper, we show how the quality of the approximation can be improved online by pruning/adding nondominated policies during reward elicitation, while maintaining computational tractability. Drawing insights from the POMDP literature, we also develop a new anytime algorithm for constructing the set of nondominated policies with provable (anytime) error bounds. These bounds can be exploited to great effect in our online approximation scheme.",,2011,conferencePaper,"Regan, Kevin; Boutilier, Craig",Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence Deep Bayesian Reward Learning from Preferences,"Bayesian inverse reinforcement learning (IRL) methods are ideal for safe imitation learning, as they allow a learning agent to reason about reward uncertainty and the safety of a learned policy. However, Bayesian IRL is computationally intractable for high-dimensional problems because each sample from the posterior requires solving an entire Markov Decision Process (MDP). While there exist non-Bayesian deep IRL methods, these methods typically infer point estimates of reward functions, precluding rigorous safety and uncertainty analysis. We propose Bayesian Reward Extrapolation (B-REX), a highly efficient, preference-based Bayesian reward learning algorithm that scales to high-dimensional, visual control tasks. Our approach uses successor feature representations and preferences over demonstrations to efficiently generate samples from the posterior distribution over the demonstrator's reward function without requiring an MDP solver. Using samples from the posterior, we demonstrate how to calculate high-confidence bounds on policy performance in the imitation learning setting, in which the ground-truth reward function is unknown. 
We evaluate our proposed approach on the task of learning to play Atari games via imitation learning from pixel inputs, with no access to the game score. We demonstrate that B-REX learns imitation policies that are competitive with a state-of-the-art deep imitation learning method that only learns a point estimate of the reward function. Furthermore, we demonstrate that samples from the posterior generated via B-REX can be used to compute high-confidence performance bounds for a variety of evaluation policies. We show that high-confidence performance bounds are useful for accurately ranking different evaluation policies when the reward function is unknown. We also demonstrate that high-confidence performance bounds may be useful for detecting reward hacking.",https://arxiv.org/abs/1912.04472v1,2019,manuscript,"Brown, Daniel S.; Niekum, Scott", Learning User Preferences via Reinforcement Learning with Spatial Interface Valuing,"Interactive Machine Learning is concerned with creating systems that operate in environments alongside humans to achieve a task. A typical use is to extend or amplify the capabilities of a human in cognitive or physical ways, requiring the machine to adapt to the users' intentions and preferences. Often, this takes the form of a human operator providing some type of feedback to the user, which can be explicit feedback, implicit feedback, or a combination of both. Explicit feedback, such as through a mouse click, carries a high cognitive load. The focus of this study is to extend the current state of the art in interactive machine learning by demonstrating that agents can learn a human user's behavior and adapt to preferences with a reduced amount of explicit human feedback in a mixed feedback setting. The learning agent perceives a value of its own behavior from hand gestures given via a spatial interface. This feedback mechanism is termed Spatial Interface Valuing. This method is evaluated experimentally in a simulated environment for a grasping task using a robotic arm with variable grip settings. Preliminary results indicate that learning agents using spatial interface valuing can learn a value function mapping spatial gestures to expected future rewards much more quickly as compared to those same agents just receiving explicit feedback, demonstrating that an agent perceiving feedback from a human user via a spatial interface can serve as an effective complement to existing approaches.",https://arxiv.org/abs/1902.00719v1,2019,conferencePaper,"Alonso Jr, Miguel", Goertzel’s GOLEM implements evidential decision theory applied to policy choice,"I’ve written about the question of which decision theories describe the behavior of approaches to AI like the “Law of Effect”. In this post, I would like to discuss GOLEM, an arch…",https://casparoesterheld.com/2018/04/26/goertzels-golem-implements-evidential-decision-theory-applied-to-policy-choice/,2018,blogPost,"Oesterheld, Caspar",The Universe from an Intentional Stance Combining the Causal Judgments of Experts with Possibly Different Focus Areas,"In many real-world settings, a decision-maker must combine information provided by different experts in order to decide on an effective policy. Alrajeh, Chockler, and Halpern (2018) showed how to combine causal models that are compatible in the sense that, for variables that appear in both models, the experts agree on the causal structure. 
In this work we show how causal models can be combined in cases where the experts might disagree on the causal structure for variables that appear in both models due to having different focus areas. We provide a new formal definition of compatibility of models in this setting and show how compatible models can be combined. We also consider the complexity of determining whether models are compatible. We believe that the notions defined in this work are of direct relevance to many practical decision making scenarios that come up in natural, social, and medical science settings.",,2018,conferencePaper,"Friedenberg, Meir; Halpern, Joseph Y", Does the brain represent words? An evaluation of brain decoding studies of language understanding,"Language decoding studies have identified word representations which can be used to predict brain activity in response to novel words and sentences (Anderson et al., 2016; Pereira et al., 2018). The unspoken assumption of these studies is that, during processing, linguistic information is transformed into some shared semantic space, and those semantic representations are then used for a variety of linguistic and non-linguistic tasks. We claim that current studies vastly underdetermine the content of these representations, the algorithms which the brain deploys to produce and consume them, and the computational tasks which they are designed to solve. We illustrate this indeterminacy with an extension of the sentence-decoding experiment of Pereira et al. (2018), showing how standard evaluations fail to distinguish between language processing models which deploy different mechanisms and which are optimized to solve very different tasks. We conclude by suggesting changes to the brain decoding paradigm which can support stronger claims of neural representation.",http://arxiv.org/abs/1806.00591,2018,conferencePaper,"Gauthier, Jon; Ivanova, Anna",2018 Conference on Cognitive Computational Neuroscience. Autonomous Cars and their Moral Implications,,,2015,journalArticle,"Sandberg, Anders; Bradshaw-Martin, Heather",Multitudes Defending Against Neural Fake News,"Recent progress in natural language generation has raised dual-use concerns. While applications like summarization and translation are positive, the underlying technology also might enable adversaries to generate neural fake news: targeted propaganda that closely mimics the style of real news. Modern computer security relies on careful threat modeling: identifying potential threats and vulnerabilities from an adversary's point of view, and exploring potential mitigations to these threats. Likewise, developing robust defenses against neural fake news requires us first to carefully investigate and characterize the risks of these models. We thus present a model for controllable text generation called Grover. Given a headline like `Link Found Between Vaccines and Autism,' Grover can generate the rest of the article; humans find these generations to be more trustworthy than human-written disinformation. Developing robust verification techniques against generators like Grover is critical. We find that best current discriminators can classify neural fake news from real, human-written, news with 73% accuracy, assuming access to a moderate level of training data. Counterintuitively, the best defense against Grover turns out to be Grover itself, with 92% accuracy, demonstrating the importance of public release of strong generators. 
We investigate these results further, showing that exposure bias -- and sampling strategies that alleviate its effects -- both leave artifacts that similar discriminators can pick up on. We conclude by discussing ethical issues regarding the technology, and plan to release Grover publicly, helping pave the way for better detection of neural fake news.",http://arxiv.org/abs/1905.12616,2019,conferencePaper,"Zellers, Rowan; Holtzman, Ari; Rashkin, Hannah; Bisk, Yonatan; Farhadi, Ali; Roesner, Franziska; Choi, Yejin",Advances in Neural Information Processing Systems 32 (NeurIPS 2019) How Much Computational Power Does It Take to Match the Human Brain?,"Open Philanthropy is interested in when AI systems will be able to perform various tasks that humans can perform (“AI timelines”). To inform our thinking, I investigated what evidence the human brain provides about the computational power",https://www.openphilanthropy.org/brain-computation-report,2020,blogPost,"Carlsmith, Joseph",Open Philanthropy "Deep Imitative Models for Flexible Inference, Planning, and Control","Imitation Learning (IL) is an appealing approach to learn desirable autonomous behavior. However, directing IL to achieve arbitrary goals is difficult. In contrast, planning-based algorithms use dynamics models and reward functions to achieve goals. Yet, reward functions that evoke desirable behavior are often difficult to specify. In this paper, we propose Imitative Models to combine the benefits of IL and goal-directed planning. Imitative Models are probabilistic predictive models of desirable behavior able to plan interpretable expert-like trajectories to achieve specified goals. We derive families of flexible goal objectives, including constrained goal regions, unconstrained goal sets, and energy-based goals. We show that our method can use these objectives to successfully direct behavior. Our method substantially outperforms six IL approaches and a planning-based approach in a dynamic simulated autonomous driving task, and is efficiently learned from expert demonstrations without online data collection. We also show our approach is robust to poorly specified goals, such as goals on the wrong side of the road.",http://arxiv.org/abs/1810.06544,2019,conferencePaper,"Rhinehart, Nicholas; McAllister, Rowan; Levine, Sergey","arXiv:1810.06544 [cs, stat]" Mean Actor Critic,"We propose a new algorithm, Mean Actor-Critic (MAC), for discrete-action continuous-state reinforcement learning. MAC is a policy gradient algorithm that uses the agent's explicit representation of all action values to estimate the gradient of the policy, rather than using only the actions that were actually executed. We prove that this approach reduces variance in the policy gradient estimate relative to traditional actor-critic methods. 
We show empirical results on two control domains and on six Atari games, where MAC is competitive with state-of-the-art policy search algorithms.",http://arxiv.org/abs/1709.00503,2018,manuscript,"Allen, Cameron; Asadi, Kavosh; Roderick, Melrose; Mohamed, Abdel-rahman; Konidaris, George; Littman, Michael", Learning the Preferences of Bounded Agents,,,2015,conferencePaper,"Evans, Owain; Goodman, Noah D",NIPS Workshop on Bounded Optimality The Reversal Test: Eliminating Status Quo Bias in Applied Ethics,,https://www.journals.uchicago.edu/doi/abs/10.1086/505233,2006,journalArticle,"Bostrom, Nick; Ord, Toby",Ethics Comparison of Maximum Likelihood and GAN-based training of Real NVPs,"We train a generator by maximum likelihood and we also train the same generator architecture by Wasserstein GAN. We then compare the generated samples, exact log-probability densities and approximate Wasserstein distances. We show that an independent critic trained to approximate Wasserstein distance between the validation set and the generator distribution helps detect overfitting. Finally, we use ideas from the one-shot learning literature to develop a novel fast learning critic.",http://arxiv.org/abs/1705.05263,2017,manuscript,"Danihelka, Ivo; Lakshminarayanan, Balaji; Uria, Benigno; Wierstra, Daan; Dayan, Peter", Uncertain human consequences in asteroid risk analysis and the global catastrophe threshold,,,2018,journalArticle,"Baum, Seth D.",Natural Hazards Ethical guidelines for a superintelligence,,https://linkinghub.elsevier.com/retrieve/pii/S0004370214001453,2015,journalArticle,"Davis, Ernest",Artificial Intelligence Improving Deep Reinforcement Learning in Minecraft with Action Advice,"Training deep reinforcement learning agents to perform complex behaviors in 3D virtual environments requires significant computational resources. This is especially true in environments with high degrees of aliasing, where many states share nearly identical visual features. Minecraft is an exemplar of such an environment. We hypothesize that interactive machine learning (IML), wherein human teachers play a direct role in training through demonstrations, critique, or action advice, may alleviate agent susceptibility to aliasing. However, interactive machine learning is only practical when the number of human interactions is limited, requiring a balance between human teacher effort and agent performance. We conduct experiments with two reinforcement learning algorithms which enable human teachers to give action advice, Feedback Arbitration and Newtonian Action Advice, under visual aliasing conditions. To assess potential cognitive load per advice type, we vary the accuracy and frequency of various human action advice techniques. Training efficiency, robustness against infrequent and inaccurate advisor input, and sensitivity to aliasing are examined.",http://arxiv.org/abs/1908.01007,2019,conferencePaper,"Frazier, Spencer; Riedl, Mark","arXiv:1908.01007 [cs, stat]" Negative Update Intervals in Deep Multi-Agent Reinforcement Learning,"In Multi-Agent Reinforcement Learning (MA-RL), independent cooperative learners must overcome a number of pathologies to learn optimal joint policies. Addressing one pathology often leaves approaches vulnerable towards others. For instance, hysteretic Q-learning addresses miscoordination while leaving agents vulnerable towards misleading stochastic rewards. Other methods, such as leniency, have proven more robust when dealing with multiple pathologies simultaneously. 
However, leniency has predominantly been studied within the context of strategic form games (bimatrix games) and fully observable Markov games consisting of a small number of probabilistic state transitions. This raises the question of whether these findings scale to more complex domains. For this purpose we implement a temporally extended version of the Climb Game, within which agents must overcome multiple pathologies simultaneously, including relative overgeneralisation, stochasticity, the alter-exploration and moving target problems, while learning from a large observation space. We find that existing lenient and hysteretic approaches fail to consistently learn near optimal joint-policies in this environment. To address these pathologies we introduce Negative Update Intervals-DDQN (NUI-DDQN), a Deep MA-RL algorithm which discards episodes yielding cumulative rewards outside the range of expanding intervals. NUI-DDQN consistently gravitates towards optimal joint-policies in our environment, overcoming the outlined pathologies.",http://arxiv.org/abs/1809.05096,2019,conferencePaper,"Palmer, Gregory; Savani, Rahul; Tuyls, Karl",Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS) Pitfalls of learning a reward function online,"In some agent designs like inverse reinforcement learning an agent needs to learn its own reward function. Learning the reward function and optimising for it are typically two different processes, usually performed at different stages. We consider a continual (``one life'') learning approach where the agent both learns the reward function and optimises for it at the same time. We show that this comes with a number of pitfalls, such as deliberately manipulating the learning process in one direction, refusing to learn, ``learning'' facts already known to the agent, and making decisions that are strictly dominated (for all relevant reward functions). We formally introduce two desirable properties: the first is `unriggability', which prevents the agent from steering the learning process in the direction of a reward function that is easier to optimise. The second is `uninfluenceability', whereby the reward-function learning process operates by learning facts about the environment. We show that an uninfluenceable process is automatically unriggable, and if the set of possible environments is sufficiently rich, the converse is true too.",http://arxiv.org/abs/2004.13654,2020,conferencePaper,"Armstrong, Stuart; Leike, Jan; Orseau, Laurent; Legg, Shane",arXiv:2004.13654 [cs] Apprenticeship learning via inverse reinforcement learning,,http://portal.acm.org/citation.cfm?doid=1015330.1015430,2004,conferencePaper,"Abbeel, Pieter; Ng, Andrew Y.",Twenty-first international conference on Machine learning - ICML '04 Parallels Between AI Safety by Debate and Evidence Law,,https://cullenokeefe.com/blog/debate-evidence,2020,blogPost,"O'Keefe, Cullen",Cullen O'Keefe Integrating disagreeing subagents,"In my previous post, I suggested that akrasia involves subagent disagreement - or in other words, different parts of the brain having differing ideas on what the best course of action is. The existence of such conflicts raises the question, how does one resolve them? In this post I will discuss various techniques which could be interpreted as ways of resolving subagent disagreements, as well as some of the reasons for why this doesn’t always happen. 
A WORD ON INTERPRETING “SUBAGENTS” The frame that I’ve had so far is that of the brain being composed of different subagents with conflicting beliefs. On the other hand, one could argue that the subagent interpretation isn’t strictly necessary for many of the examples that I bring up in this post. One could just as well view my examples as talking about a single agent with conflicting beliefs. The distinction between these two frames isn’t always entirely clear. In “ Complex Behavior from Simple (Sub)Agents”, mordinamael presents a toy model where an agent has different goals. Moving to different locations will satisfy the different goals to a varying extent. The agent will generate a list of possible moves and picks the move which will bring some goal the closest to being satisfied. Is this a unified agent, or one made up of several subagents? One could argue for either interpretation. On the other hand, mordinamael's post frames the goals as subagents, and they are in a sense competing with each other. On the other hand, the subagents arguably don’t make the final decision themselves: they just report expected outcomes, and then a central mechanism picks a move based on their reports. This resembles the neuroscience model I discussed in my last post, where different subsystems in the brain submit various action “bids” to the basal ganglia. Various mechanisms then pick a winning bid based on various criteria - such as how relevant the subsystem’s concerns are for the current situation, and how accurate the diffe",https://www.lesswrong.com/posts/hnLutdvjC8kPScPAj/integrating-disagreeing-subagents,2019,blogPost,"Sotala, Kaj",LessWrong Did EDT get it right all along? Introducing yet another medical Newcomb problem,"One of the main arguments given against Evidential Decision Theory (EDT) is that it would “one-box” in medical Newcomb problems. Whether this is the winning action has been a hotly debated issue on LessWrong. A majority, including experts in the area such as Eliezer Yudkowsky and Wei Dai, seem to think that one should two-box (See e.g. Yudkowsky 2010, p.67). Others have tried to argue in favor of EDT by claiming that the winning action would be to one-box, or by offering reasons why EDT would in some cases two-box after all. In this blog post, I want to argue that EDT gets it right: one-boxing is the correct action in medical Newcomb problems. I introduce a new thought experiment, the Coin Flip Creation problem, in which I believe the winning move is to one-box. This new problem is structurally similar to other medical Newcomb problems such as the Smoking Lesion, though it might elicit the intuition to one-box even in people who would two-box in some of the other problems. I discuss both how EDT and other decision theories would reason in the problem and why people’s intuitions might diverge in different formulations of medical Newcomb problems. TWO KINDS OF NEWCOMBLIKE PROBLEMS There are two different kinds of Newcomblike problems. In Newcomb’s original paradox, both EDT and Logical Decision Theories (LDT), such as Timeless Decision Theory (TDT) would one-box and therefore, unlike CDT, win $1 million. In medical Newcomb problems, EDT’s and LDT’s decisions diverge. This is because in the latter, a (physical) causal node that isn’t itself a decision algorithm influences both the current world state and our decisions – resulting in a correlation between action and environment but, unlike the original Newcomb, no “logical” causation. 
It’s often unclear exactly how a causal node can exert influence on our decisions. Does it change our decision theory, utility function, or the information available to us? In the case of the Smoking Lesion problem, it seems plausible",https://www.lesswrong.com/posts/iqpizeN4hkbTjkugo/did-edt-get-it-right-all-along-introducing-yet-another,2017,blogPost,"Treutlein, Johannes",LessWrong Active reinforcement learning with monte-carlo tree search,,https://arxiv.org/abs/1803.04926,2018,manuscript,"Schulze, Sebastian; Evans, Owain", Servant of Many Masters: Shifting priorities in Pareto-optimal sequential decision-making,"It is often argued that an agent making decisions on behalf of two or more principals who have different utility functions should adopt a {\em Pareto-optimal} policy, i.e., a policy that cannot be improved upon for one agent without making sacrifices for another. A famous theorem of Harsanyi shows that, when the principals have a common prior on the outcome distributions of all policies, a Pareto-optimal policy for the agent is one that maximizes a fixed, weighted linear combination of the principals' utilities. In this paper, we show that Harsanyi's theorem does not hold for principals with different priors, and derive a more precise generalization which does hold, which constitutes our main result. In this more general case, the relative weight given to each principal's utility should evolve over time according to how well the agent's observations conform with that principal's prior. The result has implications for the design of contracts, treaties, joint ventures, and robots.",http://arxiv.org/abs/1711.00363,2017,manuscript,"Critch, Andrew; Russell, Stuart", Variational Lossy Autoencoder,"Representation learning seeks to expose certain aspects of observed data in a learned representation that's amenable to downstream tasks like classification. For instance, a good representation for 2D images might be one that describes only global structure and discards information about detailed texture. In this paper, we present a simple but principled method to learn such global representations by combining Variational Autoencoder (VAE) with neural autoregressive models such as RNN, MADE and PixelRNN/CNN. Our proposed VAE model allows us to have control over what the global latent code can learn and , by designing the architecture accordingly, we can force the global latent code to discard irrelevant information such as texture in 2D images, and hence the VAE only ""autoencodes"" data in a lossy fashion. 
In addition, by leveraging autoregressive models as both prior distribution $p(z)$ and decoding distribution $p(x|z)$, we can greatly improve generative modeling performance of VAEs, achieving new state-of-the-art results on MNIST, OMNIGLOT and Caltech-101 Silhouettes density estimation tasks.",http://arxiv.org/abs/1611.02731,2017,conferencePaper,"Chen, Xi; Kingma, Diederik P.; Salimans, Tim; Duan, Yan; Dhariwal, Prafulla; Schulman, John; Sutskever, Ilya; Abbeel, Pieter","arXiv:1611.02731 [cs, stat]" Risks from general artificial intelligence without an intelligence explosion,"“An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behin…",https://vkrakovna.wordpress.com/2015/11/29/ai-risk-without-an-intelligence-explosion/,2015,blogPost,"Krakovna, Victoria",Victoria Krakovna Global Catastrophic Risks,"A global catastrophic risk is one with the potential to wreak death and destruction on a global scale. In human history, wars and plagues have done so on more than one occasion, and misguided ideologies and totalitarian regimes have darkened an entire era or a region. Advances in technology are adding dangers of a new kind. It could happen again. In Global Catastrophic Risks 25 leading experts look at the gravest risks facing humanity in the 21st century, including asteroid impacts, gamma-ray bursts, Earth-based natural catastrophes, nuclear war, terrorism, global warming, biological weapons, totalitarianism, advanced nanotechnology, general artificial intelligence, and social collapse. The book also addresses over-arching issues - policy responses and methods for predicting and managing catastrophes. This is invaluable reading for anyone interested in the big issues of our time; for students focusing on science, society, technology, and public policy; and for academics, policy-makers, and professionals working in these acutely important fields.",,2011,book,"Bostrom, Nick; Cirkovic, Milan M.", "Human factors in large-scale technological systems' accidents: Three Mile Island, Bhopal, Chernobyl",,http://journals.sagepub.com/doi/10.1177/108602669100500203,1991,journalArticle,"Meshkati, Najmedin",Industrial Crisis Quarterly Evidence against current methods leading to human level artificial intelligence,"This is a list of published arguments that we know of that current methods in artificial intelligence will not lead to human-level AI. Details Clarifications We take 'current methods' to mean techniques for engineering artificial intelligence that are already known, involving no “qualitatively new ideas”. We have not precisely defined 'current methods'. Many of the...",https://aiimpacts.org/evidence-against-current-methods-leading-to-human-level-artificial-intelligence/,2019,blogPost,"Long, Robert; Bergal, Asya",AI Impacts Transhumanist FAQ 3.0,,https://humanityplus.org/philosophy/transhumanist-faq/,2017,blogPost,"Bostrom, Nick",Humanity + Adversarial Risk and the Dangers of Evaluating Against Weak Attacks,"This paper investigates recently proposed approaches for defending against adversarial examples and evaluating adversarial robustness. We motivate 'adversarial risk' as an objective for achieving models robust to worst-case inputs. We then frame commonly used attacks and evaluation metrics as defining a tractable surrogate objective to the true adversarial risk. This suggests that models may optimize this surrogate rather than the true adversarial risk. 
We formalize this notion as 'obscurity to an adversary,' and develop tools and heuristics for identifying obscured models and designing transparent models. We demonstrate that this is a significant problem in practice by repurposing gradient-free optimization techniques into adversarial attacks, which we use to decrease the accuracy of several recently proposed defenses to near zero. Our hope is that our formulations and results will help researchers to develop more powerful defenses.",http://arxiv.org/abs/1802.05666,2018,conferencePaper,"Uesato, Jonathan; O'Donoghue, Brendan; Oord, Aaron van den; Kohli, Pushmeet",Proceedings of the 35th International Conference on Machine Learning A Note on the Existence of Ratifiable Acts,Sufficient conditions are given under which ratifiable acts exist.,https://www.cambridge.org/core/product/identifier/S175502031800028X/type/journal_article,2020,journalArticle,"Halpern, Joseph Y.",The Review of Symbolic Logic Reinforcement Learning Under Moral Uncertainty,"An ambitious goal for artificial intelligence is to create agents that behave ethically: The capacity to abide by human moral norms would greatly expand the context in which autonomous agents could be practically and safely deployed. While ethical agents could be trained through reinforcement, by rewarding correct behavior under a specific moral theory (e.g. utilitarianism), there remains widespread disagreement (both societally and among moral philosophers) about the nature of morality and what ethical theory (if any) is objectively correct. Acknowledging such disagreement, recent work in moral philosophy proposes that ethical behavior requires acting under moral uncertainty, i.e. to take into account when acting that one's credence is split across several plausible ethical theories. Inspired by such work, this paper proposes a formalism that translates such insights to the field of reinforcement learning. Demonstrating the formalism's potential, we then train agents in simple environments to act under moral uncertainty, highlighting how such uncertainty can help curb extreme behavior from commitment to single theories. The overall aim is to draw productive connections from the fields of moral philosophy and machine ethics to that of machine learning, to inspire further research by highlighting a spectrum of machine learning research questions relevant to training ethically capable reinforcement learning agents.",http://arxiv.org/abs/2006.04734,2020,manuscript,"Ecoffet, Adrien; Lehman, Joel", What Is Unfair about Unequal Brute Luck? An Intergenerational Puzzle,"According to Luck egalitarians, fairness requires us to bring it about that nobody is worse off than others where this results from brute bad luck, but not where they choose or deserve to be so. In this paper, I consider one type of brute bad luck that appears paradigmatic of what a Luck Egalitarian ought to be most concerned about, namely that suffered by people who are born to badly off parents and are less well off as a result. However, when we consider what is supposedly unfair about this kind of unequal brute luck, luck egalitarians face a dilemma. According to the standard account of luck egalitarianism, differential brute luck is unfair because of its effects on the distribution of goods. Yet, where some parents are worse off because they have chosen to be imprudent, it may be impossible to neutralize these effects without creating a distribution that seems at least as unfair. This, I argue, is problematic for luck egalitarianism. 
I, therefore, explore two alternative views that can avoid this problem. On the first of these, proposed by Shlomi Segall, the distributional effects of unequal brute luck are unfair only when they make a situation more unequal, but not when they make it more equal. On the second, it is the unequal brute luck itself, rather than its distributional effects, that is unfair. I conclude with some considerations in favour of this second view, while accepting that both are valid responses to the problem I describe.",https://doi.org/10.1007/s11406-018-00053-5,2019,journalArticle,"Beard, Simon",Philosophia Maximum Causal Tsallis Entropy Imitation Learning,"In this paper, we propose a novel maximum causal Tsallis entropy (MCTE) framework for imitation learning which can efficiently learn a sparse multi-modal policy distribution from demonstrations. We provide the full mathematical analysis of the proposed framework. First, the optimal solution of an MCTE problem is shown to be a sparsemax distribution, whose supporting set can be adjusted. The proposed method has advantages over a softmax distribution in that it can exclude unnecessary actions by assigning zero probability. Second, we prove that an MCTE problem is equivalent to robust Bayes estimation in the sense of the Brier score. Third, we propose a maximum causal Tsallis entropy imitation learning (MCTEIL) algorithm with a sparse mixture density network (sparse MDN) by modeling mixture weights using a sparsemax distribution. In particular, we show that the causal Tsallis entropy of an MDN encourages exploration and efficient mixture utilization while Boltzmann Gibbs entropy is less effective. We validate the proposed method in two simulation studies and MCTEIL outperforms existing imitation learning methods in terms of average returns and learning multi-modal policies.",http://arxiv.org/abs/1805.08336,2018,conferencePaper,"Lee, Kyungjae; Choi, Sungjoon; Oh, Songhwai",Advances in Neural Information Processing Systems 31 (NeurIPS 2018) Lyapunov-based Safe Policy Optimization for Continuous Control,"We study continuous action reinforcement learning problems in which it is crucial that the agent interacts with the environment only through safe policies, i.e.,~policies that do not take the agent to undesirable situations. We formulate these problems as constrained Markov decision processes (CMDPs) and present safe policy optimization algorithms that are based on a Lyapunov approach to solve them. Our algorithms can use any standard policy gradient (PG) method, such as deep deterministic policy gradient (DDPG) or proximal policy optimization (PPO), to train a neural network policy, while guaranteeing near-constraint satisfaction for every policy update by projecting either the policy parameter or the action onto the set of feasible solutions induced by the state-dependent linearized Lyapunov constraints. Compared to the existing constrained PG algorithms, ours are more data efficient as they are able to utilize both on-policy and off-policy data. Moreover, our action-projection algorithm often leads to less conservative policy updates and allows for natural integration into an end-to-end PG training pipeline. We evaluate our algorithms and compare them with the state-of-the-art baselines on several simulated (MuJoCo) tasks, as well as a real-world indoor robot navigation problem, demonstrating their effectiveness in terms of balancing performance and constraint satisfaction. 
Videos of the experiments can be found in the following link: https://drive.google.com/file/d/1pzuzFqWIE710bE2U6DmS59AfRzqK2Kek/view?usp=sharing.",http://arxiv.org/abs/1901.10031,2019,manuscript,"Chow, Yinlam; Nachum, Ofir; Faust, Aleksandra; Duenez-Guzman, Edgar; Ghavamzadeh, Mohammad", What failure looks like,"The stereotyped image of AI catastrophe is a powerful, malicious AI system that takes its creators by surprise and quickly achieves a decisive advantage over the rest of humanity. I think this is probably not what failure will look like, and I want to try to paint a more realistic picture. I’ll tell the story in two parts: * Part I: machine learning will increase our ability to “get what we can measure,” which could cause a slow-rolling catastrophe. (""Going out with a whimper."") * Part II: ML training, like competitive economies or natural ecosystems, can give rise to “greedy” patterns that try to expand their own influence. Such patterns can ultimately dominate the behavior of a system and cause sudden breakdowns. (""Going out with a bang,"" an instance of optimization daemons [https://arbital.com/p/daemons/].) I think these are the most important problems if we fail to solve intent alignment [https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6]. In practice these problems will interact with each other, and with other disruptions/instability caused by rapid progress. These problems are worse in worlds where progress is relatively fast, and fast takeoff can be a key risk factor, but I’m scared even if we have several years. With fast enough takeoff, my expectations start to look more like the caricature---this post envisions reasonably broad deployment of AI, which becomes less and less likely as things get faster. I think the basic problems are still essentially the same though, just occurring within an AI lab rather than across the world. (None of the concerns in this post are novel.) PART I: YOU GET WHAT YOU MEASURE If I want to convince Bob to vote for Alice, I can experiment with many different persuasion strategies and see which ones work. Or I can build good predictive models of Bob’s behavior and then search for actions that will lead him to vote for Alice. These are powerful techniques for achieving any goal that can be ea",https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like,2019,blogPost,"Christiano, Paul",AI Alignment Forum AI Insights Dataset Analysis,,http://mediangroup.org/docs/insights-analysis.pdf,,manuscript,"McKenzie, Colleen; Hidysmith, J Bryce", Screen time and sleep among school-aged children and adolescents: A systematic literature review,,https://linkinghub.elsevier.com/retrieve/pii/S1087079214000811,2015,journalArticle,"Hale, Lauren; Guan, Stanford",Sleep Medicine Reviews Mesa-Search vs Mesa-Control,"I currently see the spontaneous emergence of learning algorithms as significant evidence for the commonality of mesa-optimization in existing ML, and suggestive evidence for the commonality of inner alignment problems in near-term ML. [I currently think that there is only a small amount of evidence toward this. However, due to thinking about the issues, I've still made a significant personal update in favor of inner alignment problems being frequent.] This is bad news, in that it greatly increases my odds on this alignment problem arising in practice. 
It's good news in that it suggests this alignment problem won't catch ML researchers off guard; maybe there will be time to develop countermeasures while misaligned systems are at only a moderate level of capability. In any case, I want to point out that the mesa-optimizers suggested by this evidence might not count as mesa-optimizers by some definitions. SEARCH VS CONTROL Nevan Wichers comments on spontaneous-emergence-of-learning: I don't think that paper is an example of mesa optimization. Because the policy could be implementing a very simple heuristic to solve the task, similar to: Pick the image that lead to highest reward in the last 10 timesteps with 90% probability. Pik an image at random with 10% probability. So the policy doesn't have to have any properties of a mesa optimizer like considering possible actions and evaluating them with a utility function, ect. In Selection vs Control, I wrote about two different kinds of 'optimization': * Selection refers to search-like systems, which look through a number of possibilities and select one. * Control refers to systems like thermostats, organisms, and missile guidance systems. These systems do not get a re-do for their choices. They make choices which move toward the goal at every moment, but they don't get to search, trying many different things -- at least, not in the same sense. I take Nevan Wichers to be saying that there is no eviden",https://www.alignmentforum.org/posts/WmBukJkEFM72Xr397/mesa-search-vs-mesa-control,2020,blogPost,"Demski, Abram",AI Alignment Forum "Weak and Strong Gradient Directions: Explaining Memorization, Generalization, and Hardness of Examples at Scale","Coherent Gradients (CGH) is a recently proposed hypothesis to explain why over-parameterized neural networks trained with gradient descent generalize well even though they have sufficient capacity to memorize the training set. The key insight of CGH is that, since the overall gradient for a single step of SGD is the sum of the per-example gradients, it is strongest in directions that reduce the loss on multiple examples if such directions exist. In this paper, we validate CGH on ResNet, Inception, and VGG models on ImageNet. Since the techniques presented in the original paper do not scale beyond toy models and datasets, we propose new methods. By posing the problem of suppressing weak gradient directions as a problem of robust mean estimation, we develop a coordinate-based median of means approach. We present two versions of this algorithm, M3, which partitions a mini-batch into 3 groups and computes the median, and a more efficient version RM3, which reuses gradients from previous two time steps to compute the median. Since they suppress weak gradient directions without requiring per-example gradients, they can be used to train models at scale. Experimentally, we find that they indeed greatly reduce overfitting (and memorization) and thus provide the first convincing evidence that CGH holds at scale. We also propose a new test of CGH that does not depend on adding noise to training labels or on suppressing weak gradient directions. Using the intuition behind CGH, we posit that the examples learned early in the training process (i.e., ""easy"" examples) are precisely those that have more in common with other training examples. Therefore, as per CGH, the easy examples should generalize better amongst themselves than the hard examples amongst themselves. 
We validate this hypothesis with detailed experiments, and believe that it provides further orthogonal evidence for CGH.",http://arxiv.org/abs/2003.07422,2020,manuscript,"Zielinski, Piotr; Krishnan, Shankar; Chatterjee, Satrajit", Surveying Safety-relevant AI Characteristics,"The current analysis in the AI safety literature usually combines a risk or safety issue (e.g., interruptibility) with a particular paradigm for an AI agent (e.g., reinforcement learning). However, there is currently no survey of safety-relevant characteristics of AI systems that may reveal neglected areas of research or suggest to developers what design choices they could make to avoid or minimise certain safety concerns. In this paper, we take a first step towards delivering such a survey, from two angles. The first features AI system characteristics that are already known to be relevant to safety concerns, including internal system characteristics, characteristics relating to the effect of the external environment on the system, and characteristics relating to the effect of the system on the target environment. The second presents a brief survey of a broad range of AI system characteristics that could prove relevant to safety research, including types of interaction, computation, integration, anticipation, supervision, modification, motivation and achievement. This survey enables further work in exploring system characteristics and design choices that affect safety concerns.",,2019,conferencePaper,"Hernandez-Orallo, Jose; Martınez-Plumed, Fernando; Avin, Shahar",1st AAAI's Workshop on Artificial Intelligence Safety (SafeAI) Fears of an AI pioneer,,https://www.sciencemag.org/lookup/doi/10.1126/science.349.6245.252,2015,journalArticle,"Bohannon, J.",Science AI Alignment Research Overview,,https://docs.google.com/document/d/1FbTuRvC4TFWzGYerTKpBU7FJlyvjeOvVYF2uYNFSlOc/edit,2019,manuscript,"Steinhardt, Jacob", Artificial Intelligence as a positive and negative factor in global risk,"By far the greatest danger of Artificial Intelligence (AI) is that people conclude too early that they understand it. Of course, this problem is not limited to the field of AI. Jacques Monod wrote: ‘A curious aspect of the theory of evolution is that everybody thinks he understands it’ (Monod, 1974). The problem seems to be unusually acute in Artificial Intelligence. The field of AI has a reputation for making huge promises and then failing to deliver on them. Most observers conclude that AI is hard, as indeed it is. But the embarrassment does not stem from the difficulty. It is difficult to build a star from hydrogen, but the field of stellar astronomy does not have a terrible reputation for promising to build stars and then failing. The critical inference is not that AI is hard, but that, for some reason, it is very easy for people to think they know far more about AI than they actually do. It may be tempting to ignore Artificial Intelligence because, of all the global risks discussed in this book, AI is probably hardest to discuss. We cannot consult actuarial statistics to assign small annual probabilities of catastrophe, as with asteroid strikes. We cannot use calculations from a precise, precisely confirmed model to rule out events or place infinitesimal upper bounds on their probability, as with proposed physics disasters. But this makes AI catastrophes more worrisome, not less. The effect of many cognitive biases has been found to increase with time pressure, cognitive busyness, or sparse information. 
Which is to say that the more difficult the analytic challenge, the more important it is to avoid or reduce bias. Therefore I strongly recommend reading my other chapter (Chapter 5) in this book before continuing with this chapter. When something is universal enough in our everyday lives, we take it for granted to the point of forgetting it exists. Imagine a complex biological adaptation with ten necessary parts. If each of the ten genes is independently at 50% frequency in the gene pool – each gene possessed by only half the organisms in that species – then, on average, only 1 in 1024 organisms will possess the full, functioning adaptation.",https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198570509.001.0001/isbn-9780198570509-book-part-21,2008,bookSection,"Yudkowsky, Eliezer",Global Catastrophic Risks Discontinuous progress investigation,"Published Feb 2, 2015; last updated April 12 2020 We have collected cases of discontinuous technological progress to inform our understanding of whether artificial intelligence performance is likely to undergo such a discontinuity. This page details our investigation. We know of ten events that produced a robust discontinuity in progress equivalent to more than a century...",https://aiimpacts.org/discontinuous-progress-investigation/,2015,blogPost,AI Impacts,AI Impacts The Logic of Strategic Assets: From Oil to AI,"What resources and technologies are strategic? This question is often the focus of policy and theoretical debates, where the label “strategic” designates those assets that warrant the attention of the highest levels of the state. But these conversations are plagued by analytical confusion, flawed heuristics, and the rhetorical use of “strategic” to advance particular agendas. We aim to improve these conversations through conceptual clarification, introducing a theory based on important rivalrous externalities for which socially optimal behavior will not be produced alone by markets or individual national security entities. We distill and theorize the most important three forms of these externalities, which involve cumulative-, infrastructure-, and dependency-strategic logics. We then employ these logics to clarify three important cases: the Avon 2 engine in the 1950s, the U.S.-Japan technology rivalry in the late 1980s, and contemporary conversations about artificial intelligence.",https://arxiv.org/abs/2001.03246,2020,manuscript,"Ding, Jeffrey; Dafoe, Allan", Superhuman AI for multiplayer poker,"In recent years there have been great strides in artificial intelligence (AI), with games often serving as challenge problems, benchmarks, and milestones for progress. Poker has served for decades as such a challenge problem. Past successes in such benchmarks, including poker, have been limited to two-player games. However, poker in particular is traditionally played with more than two players. Multiplayer games present fundamental additional issues beyond those in two-player games, and multiplayer poker is a recognized AI milestone. 
In this paper we present Pluribus, an AI that we show is stronger than top human professionals in six-player no-limit Texas hold’em poker, the most popular form of poker played by humans.",https://www.sciencemag.org/lookup/doi/10.1126/science.aay2400,2019,journalArticle,"Brown, Noam; Sandholm, Tuomas",Science Specifying AI Objectives As a Human-AI Collaboration Problem,"Estimation, planning, control, and learning are giving us robots that can generate good behavior given a specified objective and set of constraints. What I care about is how humans enter this behavior generation picture, and study two complementary challenges: 1) how to optimize behavior when the robot is not acting in isolation, but needs to coordinate or collaborate with people; and 2) what to optimize in order to get the behavior we want. My work has traditionally focused on the former, but more recently I have been casting the latter as a human-robot collaboration problem as well (where the human is the end-user, or even the robotics engineer building the system). Treating it as such has enabled us to use robot actions to gain information; to account for human pedagogic behavior; and to exchange information between the human and the robot via a plethora of communication channels, from external forces that the person physically applies to the robot, to comparison queries, to defining a proxy objective function.",http://doi.acm.org/10.1145/3306618.3314227,2019,conferencePaper,"Dragan, Anca","Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society" Effect of nuclear weapons on historic trends in explosives,"Nuclear weapons constituted a ~7 thousand year discontinuity in relative effectiveness factor (TNT equivalent per kg of explosive). Nuclear weapons do not appear to have clearly represented progress in the cost-effectiveness of explosives, though the evidence there is weak. Details This case study is part of AI Impacts’ discontinuous progress investigation. Background The development of nuclear...",https://aiimpacts.org/discontinuity-from-nuclear-weapons/,2014,blogPost,AI Impacts,AI Impacts Œuf: minimizing the Coq extraction TCB,,http://dl.acm.org/citation.cfm?doid=3176245.3167089,2018,conferencePaper,"Mullen, Eric; Pernsteiner, Stuart; Wilcox, James R.; Tatlock, Zachary; Grossman, Dan",Proceedings of the 7th ACM SIGPLAN International Conference on Certified Programs and Proofs - CPP 2018 Deepwater Horizon and the Law of the Sea: Was the Cure Worse than the Disease,,,2014,journalArticle,"Wilson, Grant",BC Envtl. Aff. L. Rev. A survey of research questions for robust and beneficial AI,,https://futureoflife.org/data/documents/research_survey.pdf?x96845,2016,manuscript,"Dewey, Daniel; Russell, Stuart J; Tegmark, Max", Putting out the dark fire: constraining speculative physics disasters,,https://mflb.com/lsag_1/dark_fire_3.pdf,2015,manuscript,"Sandberg, Anders; Landry, Forrest", Predicting responsibility judgments from dispositional inferences and causal attributions,"How do people hold others responsible for their actions? In this paper, we test and extend a computational framework originally introduced by Gerstenberg et al. (2018) that assigns responsibility as a function of two factors: a dispositional inference that captures what we learn about a person's character from their action, and the causal role that the person's action played in bringing about the outcome. This framework has been shown to accurately capture how people assign responsibility to decision-makers in achievement contexts. 
Here, we focus on a more complex group setting in which political committee members vote on whether or not a policy should be passed. This setting allowed us to manipulate both dispositional inferences and causal attributions in graded ways, as well as directly test the model's key components by asking participants to judge how surprising and how important a committee member's vote was. Participants' answers to these questions in Experiment 1 accurately predicted the responsibility judgments of another group of participants in Experiment 2. In Experiment 3, we show that the model also predicts moral responsibility judgments and that, in the moral domain, dispositional inferences affect responsibility judgments more strongly than causal attributions.",https://osf.io/63zvw,2019,manuscript,"Langenhoff, Antonia F; Wiegmann, Alex; Halpern, Joseph Y; tenenbaum, josh; Gerstenberg, Tobias", Hierarchical Learning in Stochastic Domains: Preliminary Results,,https://linkinghub.elsevier.com/retrieve/pii/B9781558603073500289,1993,bookSection,"Kaelbling, Leslie Pack",Machine Learning Proceedings 1993 Of Myths and Moonshine,,,2014,magazineArticle,"Russell, Stuart",Edge.org What Would pi* Do?: Imitation Learning via Off-Policy Reinforcement Learning,Learning to imitate expert actions given demonstrations containing image observations is a difficult problem in robotic control. The key challenge is generalizing behavior to out-of-distribution...,https://openreview.net/forum?id=B1excoAqKQ,2018,journalArticle,"Reddy, Siddharth; Dragan, Anca D.; Levine, Sergey", Experimenting with a Democratic Ideal: Deliberative Polling and Public Opinion,,http://link.springer.com/10.1057/palgrave.ap.5500121,2005,journalArticle,"Fishkin, James S; Luskin, Robert C",Acta Politica "Hail mary, value porosity, and utility diversification",,,2014,report,"Bostrom, Nick", Some Considerations on Learning to Explore via Meta-Reinforcement Learning,We consider the problem of exploration in meta reinforcement learning. Two new meta reinforcement learning algorithms are suggested: E-MAML and E-$\text{RL}^2$. Results are presented on a novel environment we call `Krazy World' and a set of maze environments. We show E-MAML and E-$\text{RL}^2$ deliver better performance on tasks where exploration is important.,http://arxiv.org/abs/1803.01118,2019,manuscript,"Stadie, Bradly C.; Yang, Ge; Houthooft, Rein; Chen, Xi; Duan, Yan; Wu, Yuhuai; Abbeel, Pieter; Sutskever, Ilya", Molecular Imprinting: The missing piece in the puzzle of abiogenesis?,,https://arxiv.org/abs/1807.07065,2018,manuscript,"Drexler, K. Eric", Superintelligence As a Cause or Cure For Risks of Astronomical Suffering,"Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk , often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks) , where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a comparable severity and probability as risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes to existential risk but can also help prevent it, superintelligent AI can both be a suffering risk or help avoid it. 
Some types of work aimed at making superintelligent AI safe will also help prevent suffering risks, and there may also be a class of safeguards for AI that helps specifically against s-risks.",http://www.informatica.si/index.php/informatica/article/view/1877,2017,journalArticle,"Sotala, Kaj; Gloor, Lukas",Informatica Learning to Play No-Press Diplomacy with Best Response Policy Iteration,"Recent advances in deep reinforcement learning (RL) have led to considerable progress in many 2-player zero-sum games, such as Go, Poker and Starcraft. The purely adversarial nature of such games allows for conceptually simple and principled application of RL methods. However real-world settings are many-agent, and agent interactions are complex mixtures of common-interest and competitive aspects. We consider Diplomacy, a 7-player board game designed to accentuate dilemmas resulting from many-agent interactions. It also features a large combinatorial action space and simultaneous moves, which are challenging for RL algorithms. We propose a simple yet effective approximate best response operator, designed to handle large combinatorial action spaces and simultaneous moves. We also introduce a family of policy iteration methods that approximate fictitious play. With these methods, we successfully apply RL to Diplomacy: we show that our agents convincingly outperform the previous state-of-the-art, and game theoretic equilibrium analysis shows that the new process yields consistent improvements.",http://arxiv.org/abs/2006.04635,2020,conferencePaper,"Anthony, Thomas; Eccles, Tom; Tacchetti, Andrea; Kramár, János; Gemp, Ian; Hudson, Thomas C.; Porcel, Nicolas; Lanctot, Marc; Pérolat, Julien; Everett, Richard; Werpachowski, Roman; Singh, Satinder; Graepel, Thore; Bachrach, Yoram",34th Conference on Neural Information Processing Systems (NeurIPS 2020) Trial without Error: Towards Safe Reinforcement Learning via Human Intervention,"AI systems are increasingly applied to complex tasks that involve interaction with humans. During training, such systems are potentially dangerous, as they haven't yet learned to avoid actions that could cause serious harm. How can an AI system explore and learn without making a single mistake that harms humans or otherwise causes serious damage? For model-free reinforcement learning, having a human ""in the loop"" and ready to intervene is currently the only way to prevent all catastrophes. We formalize human intervention for RL and show how to reduce the human labor required by training a supervised learner to imitate the human's intervention decisions. We evaluate this scheme on Atari games, with a Deep RL agent being overseen by a human for four hours. When the class of catastrophes is simple, we are able to prevent all catastrophes without affecting the agent's learning (whereas an RL baseline fails due to catastrophic forgetting). However, this scheme is less successful when catastrophes are more complex: it reduces but does not eliminate catastrophes and the supervised learner fails on adversarial examples found by the agent. Extrapolating to more challenging environments, we show that our implementation would not scale (due to the infeasible amount of human labor required). 
We outline extensions of the scheme that are necessary if we are to train model-free agents without a single catastrophe.",https://arxiv.org/abs/1707.05173v1,2018,conferencePaper,"Saunders, William; Sastry, Girish; Stuhlmueller, Andreas; Evans, Owain",Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems Asymptotically Unambitious Artificial General Intelligence,"General intelligence, the ability to solve arbitrary solvable problems, is supposed by many to be artificially constructible. Narrow intelligence, the ability to solve a given particularly difficult problem, has seen impressive recent development. Notable examples include self-driving cars, Go engines, image classifiers, and translators. Artificial General Intelligence (AGI) presents dangers that narrow intelligence does not: if something smarter than us across every domain were indifferent to our concerns, it would be an existential threat to humanity, just as we threaten many species despite no ill will. Even the theory of how to maintain the alignment of an AGI's goals with our own has proven highly elusive. We present the first algorithm we are aware of for asymptotically unambitious AGI, where ""unambitiousness"" includes not seeking arbitrary power. Thus, we identify an exception to the Instrumental Convergence Thesis, which is roughly that by default, an AGI would seek power, including over us.",http://arxiv.org/abs/1905.12186,2020,conferencePaper,"Cohen, Michael K.; Vellambi, Badri; Hutter, Marcus",arXiv:1905.12186 [cs] Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment,"Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art models. It is helpful to evaluate or even improve the robustness of these models by exposing the maliciously crafted adversarial examples. In this paper, we present TextFooler, a simple but strong baseline to generate natural adversarial text. By applying it to two fundamental natural language tasks, text classification and textual entailment, we successfully attacked three target models, including the powerful pre-trained BERT, and the widely used convolutional and recurrent neural networks. We demonstrate the advantages of this framework in three ways: (1) effective---it outperforms state-of-the-art attacks in terms of success rate and perturbation rate, (2) utility-preserving---it preserves semantic content and grammaticality, and remains correctly classified by humans, and (3) efficient---it generates adversarial text with computational complexity linear to the text length. *The code, pre-trained target models, and test examples are available at https://github.com/jind11/TextFooler.",http://arxiv.org/abs/1907.11932,2020,conferencePaper,"Jin, Di; Jin, Zhijing; Zhou, Joey Tianyi; Szolovits, Peter",Proceedings of the AAAI Conference on Artificial Intelligence Some cruxes on impactful alternatives to AI policy work,"Ben Pace and I (Richard Ngo) recently did a public double crux at the Berkeley REACH on how valuable it is for people to go into AI policy and strategy work: I was optimistic and Ben was pessimistic. During the actual event, we didn't come anywhere near to finding a double crux on that issue. But after a lot of subsequent discussion, we've come up with some more general cruxes about where impact comes from. 
I found Ben's model of how to have impact very interesting, and so in this post I've tried to explain it, along with my disagreements. Ben liked the goal of writing up a rough summary of our positions and having further discussion in the comments, so while he edited it somewhat he doesn’t at all think that it’s a perfect argument, and it’s not what he’d write if he spent 10 hours on it. He endorsed the wording of the cruxes as broadly accurate. (During the double crux, we also discussed how the heavy-tailed worldview applies to community building, but decided on this post to focus on the object level of what impact looks like.) Note from Ben: “I am not an expert in policy, and have not put more than about 20-30 hours of thought into it total as a career path. But, as I recently heard Robin Hanson say, there’s a common situation that looks like this: some people have a shiny idea that they think about a great deal and work through the details of, that folks in other areas are skeptical of given their particular models of how the world works. Even though the skeptics have less detail, it can be useful to publicly say precisely why they’re skeptical. In this case I’m often skeptical when folks tell me they’re working to reduce x-risk by focusing on policy. Folks doing policy work in AI might be right, and I might be wrong, but it seemed like a good use of time to start a discussion with Richard about how I was thinking about it and what would change my mind. If the following discussion causes me to change my mind on this question, I’ll be really super happy wit",https://www.lesswrong.com/posts/DJB82jKwgJE5NsWgT/some-cruxes-on-impactful-alternatives-to-ai-policy-work,2018,blogPost,"Ngo, Richard",LessWrong AlphaGo Zero and capability amplification,AlphaGo Zero happens to be a great proof-of-concept of iterated capability amplification (my preferred approach to safe RL).,https://ai-alignment.com/alphago-zero-and-capability-amplification-ede767bb8446,2017,blogPost,"Christiano, Paul",AI Alignment (Medium) End-to-End Robotic Reinforcement Learning without Reward Engineering,"The combination of deep neural network models and reinforcement learning algorithms can make it possible to learn policies for robotic behaviors that directly read in raw sensory inputs, such as camera images, effectively subsuming both estimation and control into one model. However, real-world applications of reinforcement learning must specify the goal of the task by means of a manually programmed reward function, which in practice requires either designing the very same perception pipeline that end-to-end reinforcement learning promises to avoid, or else instrumenting the environment with additional sensors to determine if the task has been performed successfully. In this paper, we propose an approach for removing the need for manual engineering of reward specifications by enabling a robot to learn from a modest number of examples of successful outcomes, followed by actively solicited queries, where the robot shows the user a state and asks for a label to determine whether that state represents successful completion of the task. While requesting labels for every single state would amount to asking the user to manually provide the reward signal, our method requires labels for only a tiny fraction of the states seen during training, making it an efficient and practical approach for learning skills without manually engineered rewards. 
We evaluate our method on real-world robotic manipulation tasks where the observations consist of images viewed by the robot's camera. In our experiments, our method effectively learns to arrange objects, place books, and drape cloth, directly from images and without any manually specified reward functions, and with only 1-4 hours of interaction with the real world.",http://arxiv.org/abs/1904.07854,2019,conferencePaper,"Singh, Avi; Yang, Larry; Hartikainen, Kristian; Finn, Chelsea; Levine, Sergey","arXiv:1904.07854 [cs, stat]" The EMPATHIC Framework for Task Learning from Implicit Human Feedback,"Reactions such as gestures, facial expressions, and vocalizations are an abundant, naturally occurring channel of information that humans provide during interactions. A robot or other agent could leverage an understanding of such implicit human feedback to improve its task performance at no cost to the human. This approach contrasts with common agent teaching methods based on demonstrations, critiques, or other guidance that need to be attentively and intentionally provided. In this paper, we first define the general problem of learning from implicit human feedback and then propose to address this problem through a novel data-driven framework, EMPATHIC. This two-stage method consists of (1) mapping implicit human feedback to relevant task statistics such as reward, optimality, and advantage; and (2) using such a mapping to learn a task. We instantiate the first stage and three second-stage evaluations of the learned mapping. To do so, we collect a dataset of human facial reactions while participants observe an agent execute a sub-optimal policy for a prescribed training task. We train a deep neural network on this data and demonstrate its ability to (1) infer relative reward ranking of events in the training task from prerecorded human facial reactions; (2) improve the policy of an agent in the training task using live human facial reactions; and (3) transfer to a novel domain in which it evaluates robot manipulation trajectories.",http://arxiv.org/abs/2009.13649,2020,manuscript,"Cui, Yuchen; Zhang, Qiping; Allievi, Alessandro; Stone, Peter; Niekum, Scott; Knox, W. Bradley", Corrigibility,,https://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10124/10136,2015,conferencePaper,"Soares, Nate; Fallenstein, Benja; Armstrong, Stuart; Yudkowsky, Eliezer",Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence Why Can’t You Do That HAL? Explaining Unsolvability of Planning Tasks,"Explainable planning is widely accepted as a prerequisite for autonomous agents to successfully work with humans. While there has been a lot of research on generating explanations of solutions to planning problems, explaining the absence of solutions remains a largely open and under-studied problem, even though such situations can be the hardest to understand or debug. In this paper, we show that hierarchical abstractions can be used to efficiently generate reasons for unsolvability of planning problems. In contrast to related work on computing certificates of unsolvability, we show that our methods can generate compact, humanunderstandable reasons for unsolvability. 
Empirical analysis and user studies show the validity of our methods as well as their computational efficacy on a number of benchmark planning domains.",https://www.ijcai.org/proceedings/2019/197,2019,conferencePaper,"Sreedharan, Sarath; Srivastava, Siddharth; Smith, David; Kambhampati, Subbarao",Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence An Agent-based Modelling Framework for Driving Policy Learning in Connected and Autonomous Vehicles,"Due to the complexity of the natural world, a programmer cannot foresee all possible situations, a connected and autonomous vehicle (CAV) will face during its operation, and hence, CAVs will need to learn to make decisions autonomously. Due to the sensing of its surroundings and information exchanged with other vehicles and road infrastructure, a CAV will have access to large amounts of useful data. While different control algorithms have been proposed for CAVs, the benefits brought about by connectedness of autonomous vehicles to other vehicles and to the infrastructure, and its implications on policy learning has not been investigated in literature. This paper investigates a data driven driving policy learning framework through an agent-based modelling approaches. The contributions of the paper are two-fold. A dynamic programming framework is proposed for in-vehicle policy learning with and without connectivity to neighboring vehicles. The simulation results indicate that while a CAV can learn to make autonomous decisions, vehicle-to-vehicle (V2V) communication of information improves this capability. Furthermore, to overcome the limitations of sensing in a CAV, the paper proposes a novel concept for infrastructure-led policy learning and communication with autonomous vehicles. In infrastructure-led policy learning, road-side infrastructure senses and captures successful vehicle maneuvers and learns an optimal policy from those temporal sequences, and when a vehicle approaches the road-side unit, the policy is communicated to the CAV. Deep-imitation learning methodology is proposed to develop such an infrastructure-led policy learning framework.",http://arxiv.org/abs/1709.04622,2018,conferencePaper,"De Silva, Varuna; Wang, Xiongzhao; Aladagli, Deniz; Kondoz, Ahmet; Ekmekcioglu, Erhan",IntelliSys 2018: Intelligent Systems and Applications Dynamic generation and refinement of robot verbalization,"With a growing number of robots performing autonomously without human intervention, it is difficult to understand what the robots experience along their routes during execution without looking at execution logs. Rather than looking through logs, our goal is for robots to respond to queries in natural language about what they experience and what routes they have chosen. We propose verbalization as the process of converting route experiences into natural language, and highlight the importance of varying verbalizations based on user preferences. We present our verbalization space representing different dimensions that verbalizations can be varied, and our algorithm for automatically generating them on our CoBot robot. Then we present our study of how users can request different verbalizations in dialog. Using the study data, we learn a language model to map user dialog to the verbalization space. 
Finally, we demonstrate the use of the learned model within a dialog system in order for any user to request information about CoBot’s route experience at varying levels of detail.",http://ieeexplore.ieee.org/document/7745133/,2016,conferencePaper,"Perera, Vittorio; Selveraj, Sai P.; Rosenthal, Stephanie; Veloso, Manuela",2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) Impossibility of deducing preferences and rationality from human policy,,,2017,manuscript,"Armstrong, Stuart; Mindermann, Sören", Motivated value selection for artificial agents,,,2015,conferencePaper,"Armstrong, Stuart",Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence Energetics of the brain and AI,,https://arxiv.org/abs/1602.04019,2016,manuscript,"Sandberg, Anders", The medicalization of love,,,2015,journalArticle,"Earp, Brian D.; Sandberg, Anders; Savulescu, Julian",Cambridge Quarterly of Healthcare Ethics What is narrow value learning?,"Ambitious value learning aims to achieve superhuman performance by figuring out the underlying latent ""values"" that humans have, and evaluating new situations according to these values. In other words, it is trying to infer the criteria by which we judge situations to be good. This is particularly hard because in novel situations that humans haven't seen yet, we haven't even developed the criteria by which we would evaluate. (This is one of the reasons why we need to model humans as suboptimal, which causes problems.) Instead of this, we can use narrow value learning, which produces behavior that we want in some narrow domain, without expecting generalization to novel circumstances. The simplest form of this is imitation learning, where the AI system simply tries to imitate the supervisor's behavior. This limits the AI’s performance to that of its supervisor. We could also learn from preferences over behavior, which can scale to superhuman performance, since the supervisor can often evaluate whether a particular behavior meets our preferences even if she can’t perform it herself. We could also teach our AI systems to perform tasks that we would not want to do ourselves, such as handling hot objects. Nearly all of the work on preference learning, including most work on inverse reinforcement learning (IRL), is aimed at narrow value learning. IRL is often explicitly stated to be a technique for imitation learning, and early algorithms phrase the problem as matching the features in the demonstration, not exceeding them. The few algorithms that try to generalize to different test distributions, such as AIRL, are only aiming for relatively small amounts of generalization. (Why use IRL instead of behavioral cloning, where you mimic the actions that the demonstrator took? The hope is that IRL gives you a good inductive bias for imitation, allowing you to be more sample efficient and to generalize a little bit.) You might have noticed that I talk about narrow value learn",https://www.alignmentforum.org/posts/vX7KirQwHsBaSEdfK/what-is-narrow-value-learning,2019,blogPost,"Shah, Rohin",AI Alignment Forum How we’re predicting AI–or failing to,,,2015,bookSection,"Armstrong, Stuart; Sotala, Kaj",Beyond artificial intelligence The Unilateralist’s Curse and the Case for a Principle of Conformity,"In some situations a number of agents each have the ability to undertake an initiative that would have significant effects on the others. Suppose that each of these agents is purely motivated by an altruistic concern for the common good. 
We show that if each agent acts on her own personal judgment as to whether the initiative should be undertaken, then the initiative will be undertaken more often than is optimal. We suggest that this phenomenon, which we call the unilateralist’s curse, arises in many contexts, including some that are important for public policy. To lift the curse, we propose a principle of conformity, which would discourage unilateralist action. We consider three different models for how this principle could be implemented, and respond to an objection that could be raised against it.",https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4959137/,2016,journalArticle,"Bostrom, Nick; Douglas, Thomas; Sandberg, Anders",Social Epistemology General Purpose Intelligence: Arguing The Orthogonality Thesis,"In his paper “The Superintelligent Will,” Nick Bostrom formalized the Orthogonality thesis: the idea that the final goals and intelligence levels of artificial agents are independent of each other. This paper presents arguments for a (narrower) version of the thesis. It proceeds through three steps. First it shows that superintelligent agents with essentially arbitrary goals can exist in our universe –both as theoretical impractical agents such as AIXI and as physically possible realworld agents. Then it argues that if humans are capable of building human-level artificial intelligences, we can build them with an extremely broad spectrum of goals. Finally it shows that the same result holds for any superintelligent agent we could directly or indirectly build. This result is relevant for arguments about the potential motivations of future agents: knowing an artificial agent is of high intelligence does not allow us to presume that it will be moral, we will need to figure out its goals directly.",https://www.ceeol.com/search/article-detail?id=137912,2013,journalArticle,"Armstrong, Stuart",Analysis and Metaphysics Technical AGI safety research outside AI,"I think there are many questions whose answers would be useful for technical AGI safety research, but which will probably require expertise outside AI to answer. In this post I list 30 of them, divided into four categories. Feel free to get in touch if you’d like to discuss these questions and why I think they’re important in more detail. I personally think that making progress on the ones in the first category is particularly vital, and plausibly tractable for researchers from a wide range of academic backgrounds. Studying and understanding safety problems 1. How strong are the economic or technological pressures towards building very general AI systems, as opposed to narrow ones? How plausible is the CAIS model [https://www.fhi.ox.ac.uk/reframing/] of advanced AI capabilities arising from the combination of many narrow services? 2. What are the most compelling arguments for and against discontinuous [https://intelligence.org/files/IEM.pdf] versus continuous [https://sideways-view.com/2018/02/24/takeoff-speeds/] takeoffs? In particular, how should we think about the analogy from human evolution, and the scalability of intelligence with compute? 3. What are the tasks via which narrow AI is most likely to have a destabilising impact on society? What might cyber crime look like when many important jobs have been automated? 4. 
How plausible are safety concerns about economic dominance by influence-seeking agents [https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/more-realistic-tales-of-doom] , as well as structural loss of control [https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure] scenarios? Can these be reformulated in terms of standard economic ideas, such as principal-agent problems [http://www.overcomingbias.com/2019/04/agency-failure-ai-apocalypse.html] and the effects of automation? 5. How can we make the concepts of agency and goal-directed behavio",https://forum.effectivealtruism.org/posts/2e9NDGiXt8PjjbTMC/technical-agi-safety-research-outside-ai,2019,blogPost,"Ngo, Richard",Effective Altruism Forum Preference Elicitation for Participatory Budgeting,"Participatory budgeting enables the allocation of public funds by collecting and aggregating individual preferences. It has already had a sizable real-world impact, but making the most of this new paradigm requires rethinking some of the basics of computational social choice, including the very way in which individuals express their preferences. We attempt to maximize social welfare by using observed votes as proxies for voters’ unknown underlying utilities, and analytically compare four preference elicitation methods: knapsack votes, rankings by value or value for money, and threshold approval votes. We find that threshold approval voting is qualitatively superior, and also performs well in experiments using data from real participatory budgeting elections.This paper was accepted by Yan Chen, decision analysis.",https://pubsonline.informs.org/doi/10.1287/mnsc.2020.3666,2020,journalArticle,"Benadè, Gerdus; Nath, Swaprava; Procaccia, Ariel D.; Shah, Nisarg",Management Science "Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control","We propose a plan online and learn offline (POLO) framework for the setting where an agent, with an internal model, needs to continually act and learn in the world. Our work builds on the synergistic relationship between local model-based control, global value function learning, and exploration. We study how local trajectory optimization can cope with approximation errors in the value function, and can stabilize and accelerate value function learning. Conversely, we also study how approximate value functions can help reduce the planning horizon and allow for better policies beyond local solutions. Finally, we also demonstrate how trajectory optimization can be used to perform temporally coordinated exploration in conjunction with estimating uncertainty in value function approximation. This exploration is critical for fast and stable learning of the value function. Combining these components enable solutions to complex simulated control tasks, like humanoid locomotion and dexterous in-hand manipulation, in the equivalent of a few minutes of experience in the real world.",http://arxiv.org/abs/1811.01848,2019,manuscript,"Lowrey, Kendall; Rajeswaran, Aravind; Kakade, Sham; Todorov, Emanuel; Mordatch, Igor", Tight Variational Bounds via Random Projections and I-Projections,"Information projections are the key building block of variational inference algorithms and are used to approximate a target probabilistic model by projecting it onto a family of tractable distributions. In general, there is no guarantee on the quality of the approximation obtained. 
To overcome this issue, we introduce a new class of random projections to reduce the dimensionality and hence the complexity of the original model. In the spirit of random projections, the projection preserves (with high probability) key properties of the target distribution. We show that information projections can be combined with random projections to obtain provable guarantees on the quality of the approximation obtained, regardless of the complexity of the original model. We demonstrate empirically that augmenting mean field with a random projection step dramatically improves partition function and marginal probability estimates, both on synthetic and real world data.",http://arxiv.org/abs/1510.01308,2016,conferencePaper,"Hsu, Lun-Kai; Achim, Tudor; Ermon, Stefano",Proceedings of the 19th International Conference on Artificial Intelligence and Statistics "Risk and resilience for unknown, unquantifiable, systemic, and unlikely/catastrophic threats",,http://link.springer.com/10.1007/s10669-015-9551-8,2015,journalArticle,"Baum, Seth D.",Environment Systems and Decisions The Technological Singularity: Managing the Journey,,,2016,book,"Yampolskiy, Roman; Armstrong, Stuart", Isolated refuges for surviving global catastrophes,,https://linkinghub.elsevier.com/retrieve/pii/S0016328715000464,2015,journalArticle,"Baum, Seth D.; Denkenberger, David C.; Haqq-Misra, Jacob",Futures A Virtue of Precaution Regarding the Moral Status of Animals with Uncertain Sentience,"We address the moral importance of fish, invertebrates such as crustaceans, snails and insects, and other animals about which there is qualified scientific uncertainty about their sentience. We argue that, on a sentientist basis, one can at least say that how such animals fare make ethically significant claims on our character. It is a requirement of a morally decent (or virtuous) person that she at least pays attention to and is cautious regarding the possibly morally relevant aspects of such animals. This involves having a moral stance, in the sense of patterns of perception, such that one notices such animals as being morally relevant in various situations. For the person who does not already consider these animals in this way, this could be a big change in moral psychology, and can be assumed to have behavioural consequences, albeit indeterminate. Character has been largely neglected in the literature, which focuses on act-centred approaches (i.e. that the evidence on sentience supports, or does not support, taking some specific action). We see our character-centred approach as complementary to, not superior to, act-centred approaches. Our approach has the advantage of allowing us to make ethically interesting and practically relevant claims about a wider range of cases, but it has the drawback of providing less specific action guidance.",https://doi.org/10.1007/s10806-017-9662-y,2017,journalArticle,"Knutsson, Simon; Munthe, Christian",Journal of Agricultural and Environmental Ethics Likelihood of discontinuous progress around the development of AGI,"We aren’t convinced by any of the arguments we’ve seen to expect large discontinuity in AI progress above the extremely low base rate for all technologies. However this topic is controversial, and many thinkers on the topic disagree with us, so we consider this an open question. 
Details Definitions We say a technological discontinuity has...",https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/,2018,blogPost,AI Impacts,AI Impacts Robust Temporal Difference Learning for Critical Domains,"We present a new Q-function operator for temporal difference (TD) learning methods that explicitly encodes robustness against significant rare events (SRE) in critical domains. The operator, which we call the $\kappa$-operator, allows to learn a robust policy in a model-based fashion without actually observing the SRE. We introduce single- and multi-agent robust TD methods using the operator $\kappa$. We prove convergence of the operator to the optimal robust Q-function with respect to the model using the theory of Generalized Markov Decision Processes. In addition we prove convergence to the optimal Q-function of the original MDP given that the probability of SREs vanishes. Empirical evaluations demonstrate the superior performance of $\kappa$-based TD methods both in the early learning phase as well as in the final converged stage. In addition we show robustness of the proposed method to small model errors, as well as its applicability in a multi-agent context.",http://arxiv.org/abs/1901.08021,2019,manuscript,"Klima, Richard; Bloembergen, Daan; Kaisers, Michael; Tuyls, Karl", "Learning the preferences of ignorant, inconsistent agents",,,2016,conferencePaper,"Evans, Owain; Stuhlmüller, Andreas; Goodman, Noah",Thirtieth AAAI Conference on Artificial Intelligence The myth of the rational voter: why democracies choose bad policies,,,2008,book,"Caplan, Bryan Douglas", "Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow","Adversarial learning methods have been proposed for a wide range of applications, but the training of adversarial models can be notoriously unstable. Effectively balancing the performance of the generator and discriminator is critical, since a discriminator that achieves very high accuracy will produce relatively uninformative gradients. In this work, we propose a simple and general technique to constrain information flow in the discriminator by means of an information bottleneck. By enforcing a constraint on the mutual information between the observations and the discriminator's internal representation, we can effectively modulate the discriminator's accuracy and maintain useful and informative gradients. We demonstrate that our proposed variational discriminator bottleneck (VDB) leads to significant improvements across three distinct application areas for adversarial learning algorithms. Our primary evaluation studies the applicability of the VDB to imitation learning of dynamic continuous control skills, such as running. We show that our method can learn such skills directly from \emph{raw} video demonstrations, substantially outperforming prior adversarial imitation learning methods. The VDB can also be combined with adversarial inverse reinforcement learning to learn parsimonious reward functions that can be transferred and re-optimized in new settings. 
Finally, we demonstrate that VDB can train GANs more effectively for image generation, improving upon a number of prior stabilization methods.",http://arxiv.org/abs/1810.00821,2018,conferencePaper,"Peng, Xue Bin; Kanazawa, Angjoo; Toyer, Sam; Abbeel, Pieter; Levine, Sergey","arXiv:1810.00821 [cs, stat]" Leveraging Human Guidance for Deep Reinforcement Learning Tasks,"Reinforcement learning agents can learn to solve sequential decision tasks by interacting with the environment. Human knowledge of how to solve these tasks can be incorporated using imitation learning, where the agent learns to imitate human demonstrated decisions. However, human guidance is not limited to the demonstrations. Other types of guidance could be more suitable for certain tasks and require less human effort. This survey provides a high-level overview of five recent learning frameworks that primarily rely on human guidance other than conventional, step-by-step action demonstrations. We review the motivation, assumption, and implementation of each framework. We then discuss possible future research directions.",https://arxiv.org/abs/1909.09906v1,2019,conferencePaper,"Zhang, Ruohan; Torabi, Faraz; Guan, Lin; Ballard, Dana H.; Stone, Peter",Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19) Active Inverse Reward Design,"Reward design, the problem of selecting an appropriate reward function for an AI system, is both critically important, as it encodes the task the system should perform, and challenging, as it requires reasoning about and understanding the agent’s environment in detail. AI practitioners often iterate on the reward function for their systems in a trial-and-error process to get their desired behavior. Inverse reward design (IRD) is a preference inference method that infers a true reward function from an observed, possibly misspecified, proxy reward function. This allows the system to determine when it should trust its observed reward function and respond appropriately. This has been shown to avoid problems in reward design such as negative side-effects (omitting a seemingly irrelevant but important aspect of the task) and reward hacking (learning to exploit unanticipated loopholes). In this paper, we actively select the set of proxy reward functions available to the designer. This improves the quality of inference and simplifies the associated reward design problem. We present two types of queries: discrete queries, where the system designer chooses from a discrete set of reward functions, and feature queries, where the system queries the designer for weights on a small set of features. We evaluate this approach with experiments in a personal shopping assistant domain and a 2D navigation domain. We find that our approach leads to reduced regret at test time compared with vanilla IRD. Our results indicate that actively selecting the set of available reward functions is a promising direction to improve the efficiency and effectiveness of reward design.",https://arxiv.org/abs/1809.03060,2018,manuscript,"Mindermann, Sören; Shah, Rohin; Gleave, Adam; Hadfield-Menell, Dylan", Theoretical and empirical evidence for the impact of inductive biases on cultural evolution,"The question of how much the outcomes of cultural evolution are shaped by the cognitive capacities of human learners has been explored in several disciplines, including psychology, anthropology and linguistics. 
We address this question through a detailed investigation of transmission chains, in which each person passes information to another along a chain. We review mathematical and empirical evidence that shows that under general conditions, and across experimental paradigms, the information passed along transmission chains will be affected by the inductive biases of the people involved—the constraints on learning and memory, which influence conclusions from limited data. The mathematical analysis considers the case where each person is a rational Bayesian agent. The empirical work consists of behavioural experiments in which human participants are shown to operate in the manner predicted by the Bayesian framework. Specifically, in situations in which each person's response is used to determine the data seen by the next person, people converge on concepts consistent with their inductive biases irrespective of the information seen by the first member of the chain. We then relate the Bayesian analysis of transmission chains to models of biological evolution, clarifying how chains of individuals correspond to population-level models and how selective forces can be incorporated into our models. Taken together, these results indicate how laboratory studies of transmission chains can provide information about the dynamics of cultural evolution and illustrate that inductive biases can have a significant impact on these dynamics.",https://royalsocietypublishing.org/doi/10.1098/rstb.2008.0146,2008,journalArticle,"Griffiths, Thomas L; Kalish, Michael L; Lewandowsky, Stephan",Philosophical Transactions of the Royal Society B: Biological Sciences Domain Randomization and Generative Models for Robotic Grasping,"Deep learning-based robotic grasping has made significant progress thanks to algorithmic improvements and increased data availability. However, state-of-the-art models are often trained on as few as hundreds or thousands of unique object instances, and as a result generalization can be a challenge. In this work, we explore a novel data generation pipeline for training a deep neural network to perform grasp planning that applies the idea of domain randomization to object synthesis. We generate millions of unique, unrealistic procedurally generated objects, and train a deep neural network to perform grasp planning on these objects. Since the distribution of successful grasps for a given object can be highly multimodal, we propose an autoregressive grasp planning model that maps sensor inputs of a scene to a probability distribution over possible grasps. This model allows us to sample grasps efficiently at test time (or avoid sampling entirely). We evaluate our model architecture and data generation pipeline in simulation and the real world. We find we can achieve a $>$90% success rate on previously unseen realistic objects at test time in simulation despite having only been trained on random objects. 
We also demonstrate an 80% success rate on real-world grasp attempts despite having only been trained on random simulated objects.",https://ieeexplore.ieee.org/abstract/document/8593933?casa_token=lwbqoFmevgAAAAAA:9QSGCfSy7lAxjPGI7i3NqTLMPaOzxxkfA210snUtaVcKlzACmUX8_vbx_jAGwKDrlsDTIBJJ,2018,conferencePaper,"Tobin, Joshua; Biewald, Lukas; Duan, Rocky; Andrychowicz, Marcin; Handa, Ankur; Kumar, Vikash; McGrew, Bob; Schneider, Jonas; Welinder, Peter; Zaremba, Wojciech; Abbeel, Pieter",2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Privacy-preserving data mining,,https://dl.acm.org/doi/10.1145/335191.335438,2000,journalArticle,"Agrawal, Rakesh; Srikant, Ramakrishnan",ACM SIGMOD Record Quasi-Direct Drive for Low-Cost Compliant Robotic Manipulation,"Robots must cost less and be force-controlled to enable widespread, safe deployment in unconstrained human environments. We propose Quasi-Direct Drive actuation as a capable paradigm for robotic force-controlled manipulation in human environments at low-cost. Our prototype - Blue - is a human scale 7 Degree of Freedom arm with 2kg payload. Blue can cost less than $5000. We show that Blue has dynamic properties that meet or exceed the needs of human operators: the robot has a nominal position-control bandwidth of 7.5Hz and repeatability within 4mm. We demonstrate a Virtual Reality based interface that can be used as a method for telepresence and collecting robot training demonstrations. Manufacturability, scaling, and potential use-cases for the Blue system are also addressed. Videos and additional information can be found online at berkeleyopenarms.github.io",http://arxiv.org/abs/1904.03815,2019,conferencePaper,"Gealy, David V.; McKinley, Stephen; Yi, Brent; Wu, Philipp; Downey, Phillip R.; Balke, Greg; Zhao, Allan; Guo, Menglong; Thomasson, Rachel; Sinclair, Anthony; Cuellar, Peter; McCarthy, Zoe; Abbeel, Pieter",2019 International Conference on Robotics and Automation (ICRA) Transcendence: An AI Researcher Enjoys Watching His Own Execution,"So, how seriously should we take the movie's premise -- that superhuman AI is a potential threat to humanity? And how plausible, from a scientific viewpo...",https://www.huffpost.com/entry/ai-transcendence_b_5235364,2014,magazineArticle,"Russell, Stuart",HuffPost A Less Biased Evaluation of Out-of-distribution Sample Detectors,"In the real world, a learning system could receive an input that is unlike anything it has seen during training. Unfortunately, out-of-distribution samples can lead to unpredictable behaviour. We need to know whether any given input belongs to the population distribution of the training/evaluation data to prevent unpredictable behaviour in deployed systems. A recent surge of interest in this problem has led to the development of sophisticated techniques in the deep learning literature. However, due to the absence of a standard problem definition or an exhaustive evaluation, it is not evident if we can rely on these methods. What makes this problem different from a typical supervised learning setting is that the distribution of outliers used in training may not be the same as the distribution of outliers encountered in the application. Classical approaches that learn inliers vs. outliers with only two datasets can yield optimistic results.
We introduce OD-test, a three-dataset evaluation scheme as a more reliable strategy to assess progress on this problem. We present an exhaustive evaluation of a broad set of methods from related areas on image classification tasks. Contrary to the existing results, we show that for realistic applications of high-dimensional images the previous techniques have low accuracy and are not reliable in practice.",http://arxiv.org/abs/1809.04729,2019,manuscript,"Shafaei, Alireza; Schmidt, Mark; Little, James J.", Existential risk assessment: A reply to Baum,"We welcome Seth Baum's reply to our paper. We broadly agree with the points Baum makes; however, the field of Existential Risk Studies remains young and undeveloped, and we think that there are many points on which further reflection is needed. We briefly discuss three: the normative aspects of terms like 'existential catastrophe,' the opportunities for low hanging fruit in method selection and application, and the importance of context when making probability claims.",https://linkinghub.elsevier.com/retrieve/pii/S0016328720300963,2020,journalArticle,"Beard, Simon; Rowe, Thomas; Fox, James",Futures The Emotional Nature of Post-Cognitive Singularities,"SummaryThe next evolution of intelligent, physical life on earth will be to artificial, super-intelligent agents or even entities we might call ‘post’- or ‘trans’ humans. And, contrary to popular opinion about the man-made advanced intelligences, these entities will not be just information-driven machines devoid of emotion. Instead, today’s computing and Artificial Intelligence (AI) science has set us on the path that we fully anticipate a technological Singularity event. From this event we expect the emergence of intelligent, living entities that have the capacity to integrate massive amounts of data, but this computing will be controlled by emotional mechanisms. These new forms of life will live side-by-side with humanity so the real, foreseeable problem of this post-Singularity, post-cognitive era will be an existential one—and a possible misalignment between us (humans) and ‘them’ (the new entities). The different forms of life will pursue different goals, likely to be mediated by different, or even opposite, emotional syntaxes. How humans will interact with these new post-cognitive, emotional Singularity Entities has yet to be defined, but new social patterns will surely emerge.",https://doi.org/10.1007/978-3-662-54033-6_11,2017,bookSection,"Vallverdú, Jordi",The Technological Singularity: Managing the Journey Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data,"Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as ""teachers"" for a ""student"" model. 
The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning.",http://arxiv.org/abs/1610.05755,2017,conferencePaper,"Papernot, Nicolas; Abadi, Martín; Erlingsson, Úlfar; Goodfellow, Ian; Talwar, Kunal","arXiv:1610.05755 [cs, stat]" Efficiently Combining Human Demonstrations and Interventions for Safe Training of Autonomous Systems in Real-Time,"This paper investigates how to utilize different forms of human interaction to safely train autonomous systems in real-time by learning from both human demonstrations and interventions. We implement two components of the Cycle-of-Learning for Autonomous Systems, which is our framework for combining multiple modalities of human interaction. The current effort employs human demonstrations to teach a desired behavior via imitation learning, then leverages intervention data to correct for undesired behaviors produced by the imitation learner to teach novel tasks to an autonomous agent safely, after only minutes of training. We demonstrate this method in an autonomous perching task using a quadrotor with continuous roll, pitch, yaw, and throttle commands and imagery captured from a downward-facing camera in a high-fidelity simulated environment. Our method improves task completion performance for the same amount of human interaction when compared to learning from demonstrations alone, while also requiring on average 32% less data to achieve that performance. This provides evidence that combining multiple modes of human interaction can increase both the training speed and overall performance of policies for autonomous systems.",http://arxiv.org/abs/1810.11545,2018,conferencePaper,"Goecks, Vinicius G.; Gremillion, Gregory M.; Lawhern, Vernon J.; Valasek, John; Waytowich, Nicholas R.",Proceedings of the AAAI Conference on Artificial Intelligence Accounting for the neglected dimensions of AI progress,,https://arxiv.org/abs/1806.00610,2018,manuscript,"Martínez-Plumed, Fernando; Avin, Shahar; Brundage, Miles; Dafoe, Allan; hÉigeartaigh, Sean Ó; Hernández-Orallo, José", Commitment and credibility in multipolar AI scenarios,"The ability to make credible commitments is a key factor in many bargaining situations ranging from trade to international conflict. This post builds a taxonomy of the commitment mechanisms that transformative AI (TAI) systems could use in future multipolar scenarios, describes various issues they have in practice, and draws some tentative conclusions about the landscape of commitments we might expect in the future. INTRODUCTION A better understanding of the commitments that future AI systems could make is helpful for predicting and influencing the dynamics of multipolar scenarios. 
The option to credibly bind oneself to certain actions or strategies fundamentally changes the game theory behind bargaining, cooperation, and conflict. Credible commitments and general transparency can work to stabilize positive-sum agreements, and to increase the efficiency of threats (Schelling 1960), both of which could be relevant to how well TAI trajectories will reflect our values. Because human goals can be contradictory, and even broadly aligned AI systems could come to prioritize different outcomes depending on their domains and histories, these systems could end up in competitive situations and bargaining failures where a lot of value is lost. Similarly, if some systems in a multipolar scenario are well aligned and others less so, some worst cases might be avoidable if stable peaceful agreements can be reached. As an example of the practical significance of commitment ability in stabilizing peaceful strategies, standard theories in international relations hold that conflicts between nations are difficult to avoid indefinitely primarily because there are no reliable commitment mechanisms for peaceful agreements (e.g. Powell 2004, Lake 1999, Rosato 2015), even when nations would overall prefer them. In addition to the direct costs of conflict, the lack of enforceable commitments leads to continuous resource loss from arms races, monitoring, and other preparations for possible",https://www.lesswrong.com/posts/LvtsFKxg2t3nWhKRq/commitment-and-credibility-in-multipolar-ai-scenarios,2020,blogPost,"Leskela, Anni",LessWrong Vulnerabilities in CDT and TI-unaware agents,,https://www.alignmentforum.org/posts/vFXK8eQdLhicYNNqF/vulnerabilities-in-cdt-and-ti-unaware-agents,2020,blogPost,"Moreno Casares, Pablo Antonio; Zagami, Davide; Leong, Chris",AI Alignment Forum AI Unsafety via Non-Zero-Sum Debate,"In this post, I describe how to view debate as a way of assisting a human to spot flaws in an AI’s proposal. I then argue that the zero-sum assumption is critical for making debate work and that various seemingly-helpful modifications of debate might break it instead. -------------------------------------------------------------------------------- A naive way of using arbitrary optimizers as oracles:Suppose you have a black-box optimizer X that can be connected to any well-defined quantity to be maximized. X can potentially be very powerful - e.g., having a highly accurate model of the world and “a lot of optimization power”. One way to turn X into an oracle is to ask it a question and decide to give it reward 1 if we like its answer and 0 if we don’t.[1] Of course, standard AI-safety arguments (e.g., AI takeover and perverse instantiation) suggest that this is a pretty bad idea for powerful X. For the sake of argument, suppose that we can fix all of the “obvious” problems and ensure that X won’t wirehead, won’t try to escape the box we put it in etc., and will only care about the reward it gets for its answer. Two problems with naive optimizers-turned-oracles: (1) telling the difference between good and awesome answers and (2) answers with hidden flaws:One problem with this type of oracles is that it’s hard to decide whether we like its answers or not. Suppose I ask it for food recommendations for the evening and it suggests pancakes. Pancakes seem fine, although there are some foods that I would like better. So should I reward the AI or not? The second problem is that the oracle optimizes for giving answers that seem good to a human. 
(Not out of malice, but because “actually being good” isn’t well-defined.) And since humans aren’t omniscient, there will be many seemingly good answers that in fact have disastrous consequences if acted upon. To address (1), use two AIs:The first problem can be tackled by using two copies of the optimizer and rewarding the one w",https://www.alignmentforum.org/posts/BRiMQELD5WYyvncTE/ai-unsafety-via-non-zero-sum-debate,2020,blogPost,"Kovarik, Vojta",AI Alignment Forum Towards Empathic Deep Q-Learning,"As reinforcement learning (RL) scales to solve increasingly complex tasks, interest continues to grow in the fields of AI safety and machine ethics. As a contribution to these fields, this paper introduces an extension to Deep Q-Networks (DQNs), called Empathic DQN, that is loosely inspired both by empathy and the golden rule (""Do unto others as you would have them do unto you""). Empathic DQN aims to help mitigate negative side effects to other agents resulting from myopic goal-directed behavior. We assume a setting where a learning agent coexists with other independent agents (who receive unknown rewards), where some types of reward (e.g. negative rewards from physical harm) may generalize across agents. Empathic DQN combines the typical (self-centered) value with the estimated value of other agents, by imagining (by its own standards) the value of it being in the other's situation (by considering constructed states where both agents are swapped). Proof-of-concept results in two gridworld environments highlight the approach's potential to decrease collateral harms. While extending Empathic DQN to complex environments is non-trivial, we believe that this first step highlights the potential of bridge-work between machine ethics and RL to contribute useful priors for norm-abiding RL agents.",http://arxiv.org/abs/1906.10918,2019,conferencePaper,"Bussmann, Bart; Heinerman, Jacqueline; Lehman, Joel",Proceedings of the Workshop on Artificial Intelligence Safety 2019 Optimal Farsighted Agents Tend to Seek Power,"Some researchers have speculated that capable reinforcement learning (RL) agents pursuing misspecified objectives are often incentivized to seek resources and power in pursuit of those objectives. An agent seeking power is incentivized to behave in undesirable ways, including rationally preventing deactivation and correction. Others have voiced skepticism: humans seem idiosyncratic in their urges to power, which need not be present in the agents we design. We formalize a notion of power within the context of finite Markov decision processes (MDPs). With respect to a neutral class of reward function distributions, our results suggest that farsighted optimal policies tend to seek power over the environment.",http://arxiv.org/abs/1912.01683,2020,manuscript,"Turner, Alexander Matt; Smith, Logan; Shah, Rohin; Tadepalli, Prasad", UCB Exploration via Q-Ensembles,"We show how an ensemble of $Q^*$-functions can be leveraged for more effective exploration in deep reinforcement learning. We build on well established algorithms from the bandit setting, and adapt them to the $Q$-learning setting. We propose an exploration strategy based on upper-confidence bounds (UCB). 
Our experiments show significant gains on the Atari benchmark.",http://arxiv.org/abs/1706.01502,2017,manuscript,"Chen, Richard Y.; Sidor, Szymon; Abbeel, Pieter; Schulman, John", Anthropic bias: observation selection effects in science and philosophy,,,2002,book,"Bostrom, Nick", Modeling Friends and Foes,"How can one detect friendly and adversarial behavior from raw data? Detecting whether an environment is a friend, a foe, or anything in between, remains a poorly understood yet desirable ability for safe and robust agents. This paper proposes a definition of these environmental ""attitudes"" based on an characterization of the environment's ability to react to the agent's private strategy. We define an objective function for a one-shot game that allows deriving the environment's probability distribution under friendly and adversarial assumptions alongside the agent's optimal strategy. Furthermore, we present an algorithm to compute these equilibrium strategies, and show experimentally that both friendly and adversarial environments possess non-trivial optimal strategies.",http://arxiv.org/abs/1807.00196,2018,manuscript,"Ortega, Pedro A.; Legg, Shane", "There is plenty of time at the bottom: the economics, risk and ethics of time compression",,,2019,journalArticle,"Sandberg, Anders",foresight "Neural Networks for Safety-Critical Applications - Challenges, Experiments and Perspectives","We propose a methodology for designing dependable Artificial Neural Networks (ANN) by extending the concepts of understandability, correctness, and validity that are crucial ingredients in existing certification standards. We apply the concept in a concrete case study in designing a high-way ANN-based motion predictor to guarantee safety properties such as impossibility for the ego vehicle to suggest moving to the right lane if there exists another vehicle on its right.",http://arxiv.org/abs/1709.00911,2017,conferencePaper,"Cheng, Chih-Hong; Diehl, Frederik; Hamza, Yassine; Hinz, Gereon; Nührenberg, Georg; Rickert, Markus; Ruess, Harald; Troung-Le, Michael","2018 Design, Automation & Test in Europe Conference & Exhibition (DATE)" Cyber insurance,,,2017,journalArticle,"Petratos, Pythagoras; Sandberg, Anders; Zhou, Feng","Handbook of Cyber-Development, Cyber-Democracy, and Cyber-Defense" Are we living at the hinge of history,,,2020,report,"MacAskill, William", Safe Reinforcement Learning via Probabilistic Shields,"This paper targets the efficient construction of a safety shield for decision making in scenarios that incorporate uncertainty. Markov decision processes (MDPs) are prominent models to capture such planning problems. Reinforcement learning (RL) is a machine learning technique to determine near-optimal policies in MDPs that may be unknown prior to exploring the model. However, during exploration, RL is prone to induce behavior that is undesirable or not allowed in safety- or mission-critical contexts. We introduce the concept of a probabilistic shield that enables decision-making to adhere to safety constraints with high probability. In a separation of concerns, we employ formal verification to efficiently compute the probabilities of critical decisions within a safety-relevant fragment of the MDP. We use these results to realize a shield that is applied to an RL algorithm which then optimizes the actual performance objective. We discuss tradeoffs between sufficient progress in exploration of the environment and ensuring safety. 
In our experiments, we demonstrate on the arcade game PAC-MAN and on a case study involving service robots that the learning efficiency increases as the learning needs orders of magnitude fewer episodes.",http://arxiv.org/abs/1807.06096,2019,conferencePaper,"Jansen, Nils; Könighofer, Bettina; Junges, Sebastian; Serban, Alexandru C.; Bloem, Roderick",arXiv:1807.06096 [cs] Machine Teaching for Inverse Reinforcement Learning: Algorithms and Applications,"Inverse reinforcement learning (IRL) infers a reward function from demonstrations, allowing for policy improvement and generalization. However, despite much recent interest in IRL, little work has been done to understand the minimum set of demonstrations needed to teach a specific sequential decision-making task. We formalize the problem of finding maximally informative demonstrations for IRL as a machine teaching problem where the goal is to find the minimum number of demonstrations needed to specify the reward equivalence class of the demonstrator. We extend previous work on algorithmic teaching for sequential decision-making tasks by showing a reduction to the set cover problem which enables an efficient approximation algorithm for determining the set of maximally-informative demonstrations. We apply our proposed machine teaching algorithm to two novel applications: providing a lower bound on the number of queries needed to learn a policy using active IRL and developing a novel IRL algorithm that can learn more efficiently from informative demonstrations than a standard IRL approach.",http://arxiv.org/abs/1805.07687,2019,conferencePaper,"Brown, Daniel S.; Niekum, Scott",Proceedings of the AAAI Conference on Artificial Intelligence "The Asymmetry, Uncertainty, and the Long Term",,https://globalprioritiesinstitute.org/wp-content/uploads/2019/Thomas-Asymmetry-paper.pdf,2019,manuscript,"Thomas, Teruji", Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning,"We propose a unified mechanism for achieving coordination and communication in Multi-Agent Reinforcement Learning (MARL), through rewarding agents for having causal influence over other agents’ act...",http://proceedings.mlr.press/v97/jaques19a.html,2019,conferencePaper,"Jaques, Natasha; Lazaridou, Angeliki; Hughes, Edward; Gulcehre, Caglar; Ortega, Pedro; Strouse, Dj; Leibo, Joel Z.; Freitas, Nando De",International Conference on Machine Learning Towards Adjustable Autonomy for the Real World,"Adjustable autonomy refers to entities dynamically varying their own autonomy, transferring decision-making control to other entities (typically agents transferring control to human users) in key situations. Determining whether and when such transfers-of-control should occur is arguably the fundamental research problem in adjustable autonomy. Previous work has investigated various approaches to addressing this problem but has often focused on individual agent-human interactions. Unfortunately, domains requiring collaboration between teams of agents and humans reveal two key shortcomings of these previous approaches. First, these approaches use rigid one-shot transfers of control that can result in unacceptable coordination failures in multiagent settings. Second, they ignore costs (e.g., in terms of time delays or effects on actions) to an agent's team due to such transfers-of-control. To remedy these problems, this article presents a novel approach to adjustable autonomy, based on the notion of a transfer-of-control strategy.
A transfer-of-control strategy consists of a conditional sequence of two types of actions: (i) actions to transfer decision-making control (e.g., from an agent to a user or vice versa) and (ii) actions to change an agent's pre-specified coordination constraints with team members, aimed at minimizing miscoordination costs. The goal is for high-quality individual decisions to be made with minimal disruption to the coordination of the team. We present a mathematical model of transfer-of-control strategies. The model guides and informs the operationalization of the strategies using Markov Decision Processes, which select an optimal strategy, given an uncertain environment and costs to the individuals and teams. The approach has been carefully evaluated, including via its use in a real-world, deployed multi-agent system that assists a research group in its daily activities.",https://jair.org/index.php/jair/article/view/10312,2002,journalArticle,"Scerri, P.; Pynadath, D. V.; Tambe, M.",Journal of Artificial Intelligence Research From self to craving (three characteristics series),"Buddhists talk a lot about the self, and also about suffering. They claim that if you come to investigate what the self is really made of, then this will lead to a reduction in suffering. Why would that be? This post seeks to answer that question. First, let’s recap a few things that we have been talking about before. THE CONNECTION BETWEEN SELF AND CRAVING In “a non-mystical explanation of ‘no-self’”, I talked about the way in which there are two kinds of goals. First, we can manipulate something that does not require a representation of ourselves. For example, we can figure out how to get a truck on a column of blocks. In that case, we can figure out a sequence of actions that takes the truck from its initial state to its target state. We don’t necessarily need to think about ourselves as we are figuring this out - the actual sequence could just as well be carried out by someone else. I mentioned that these kinds of tasks seem to allow flow states, in which the sense of self becomes temporarily suspended as unnecessary, and which are typically experienced as highly enjoyable and free from discomfort. Alternatively, we can think of a goal which intrinsically requires self-reference. For example, I might be feeling sad, and think that I want to feel happy instead. In this case, both the initial state and the target state are defined in terms of what I feel, so in order to measure my progress, I need to track a reference to myself. In that post, I remarked that changing one’s experience of one’s self may change how emotions are experienced. This does not necessarily require high levels of enlightenment: it is a common mindfulness practice to reframe your emotions as something that is external to you, in which case negative emotions might cease to feel aversive. I have also previously discussed therapy techniques that allow you to create some distance between yourself and your feelings, making them less aversive. 
For example, one may pay attention to where in",https://www.lesswrong.com/posts/r6kzvdia4S8TKE6WF/from-self-to-craving-three-characteristics-series,2020,blogPost,"Sotala, Kaj",LessWrong Brave new love: The threat of high-tech “conversion” therapy and the bio-oppression of sexual minorities,,,2014,journalArticle,"Earp, Brian D.; Sandberg, Anders; Savulescu, Julian",AJOB neuroscience The Foundations of Deep Learning with a Path Towards General Intelligence,"Like any field of empirical science, AI may be approached axiomatically. We formulate requirements for a general-purpose, human-level AI system in terms of postulates. We review the methodology of deep learning, examining the explicit and tacit assumptions in deep learning research. Deep Learning methodology seeks to overcome limitations in traditional machine learning research as it combines facets of model richness, generality, and practical applicability. The methodology so far has produced outstanding results due to a productive synergy of function approximation, under plausible assumptions of irreducibility and the efficiency of back-propagation family of algorithms. We examine these winning traits of deep learning, and also observe the various known failure modes of deep learning. We conclude by giving recommendations on how to extend deep learning methodology to cover the postulates of general-purpose AI including modularity, and cognitive architecture. We also relate deep learning to advances in theoretical neuroscience research.",https://arxiv.org/abs/1806.08874v1,2018,conferencePaper,"Özkural, Eray", "Autonomous Vehicles: Disengagements, Accidents and Reaction Times",,https://dx.plos.org/10.1371/journal.pone.0168054,2016,journalArticle,"Dixit, Vinayak V.; Chand, Sai; Nair, Divya J.",PLOS ONE Robots in war: the next weapons of mass destruction?,"Davos 2016: There is no doubt that as the technology improves, autonomous weapons will be highly effective. But does that necessarily mean they’re a good idea?",https://www.weforum.org/agenda/2016/01/robots-in-war-the-next-weapons-of-mass-destruction/,2016,magazineArticle,"Russell, Stuart",World Economic Forum Active reinforcement learning: Observing rewards at a cost,,,2016,conferencePaper,"Krueger, David; Leike, Jan; Evans, Owain; Salvatier, John","Future of Interactive Learning Machines, NIPS Workshop" Embedded Agency,"Traditional models of rational action treat the agent as though it is cleanly separated from its environment, and can act on that environment from the outside. Such agents have a known functional relationship with their environment, can model their environment in every detail, and do not need to reason about themselves or their internal parts. We provide an informal survey of obstacles to formalizing good reasoning for agents embedded in their environment. Such agents must optimize an environment that is not of type ``function''; they must rely on models that fit within the modeled environment; and they must reason about themselves as just another physical system, made of parts that can be modified and that can work at cross purposes.",http://arxiv.org/abs/1902.09469,2019,manuscript,"Demski, Abram; Garrabrant, Scott", Game Theory with Translucent Players,"A traditional assumption in game theory is that players are opaque to one another---if a player changes strategies, then this change in strategies does not affect the choice of other players' strategies. In many situations this is an unrealistic assumption. 
We develop a framework for reasoning about games where the players may be translucent to one another; in particular, a player may believe that if she were to change strategies, then the other player would also change strategies. Translucent players may achieve significantly more efficient outcomes than opaque ones. Our main result is a characterization of strategies consistent with appropriate analogues of common belief of rationality. Common Counterfactual Belief of Rationality (CCBR) holds if (1) everyone is rational, (2) everyone counterfactually believes that everyone else is rational (i.e., all players i believe that everyone else would still be rational even if $i$ were to switch strategies), (3) everyone counterfactually believes that everyone else is rational, and counterfactually believes that everyone else is rational, and so on. CCBR characterizes the set of strategies surviving iterated removal of minimax dominated strategies, where a strategy s for player i is minimax dominated by s' if the worst-case payoff for i using s' is better than the best possible payoff using s.",http://arxiv.org/abs/1308.3778,2013,journalArticle,"Halpern, Joseph Y.; Pass, Rafael",International Journal of Game Theory Why we need friendly AI,,,2014,journalArticle,"Muehlhauser, Luke; Bostrom, Nick",Think Toward Evaluating Robustness of Deep Reinforcement Learning with Continuous Control,We study the problem of continuous control agents in deep RL with adversarial attacks and proposed a two-step algorithm based on learned model dynamics.,https://openreview.net/forum?id=SylL0krYPS,2019,conferencePaper,"Weng, Tsui-Wei; Dvijotham*, Krishnamurthy (Dj); Uesato*, Jonathan; Xiao*, Kai; Gowal*, Sven; Stanforth*, Robert; Kohli, Pushmeet", Exploration by Random Network Distillation,"We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level.",http://arxiv.org/abs/1810.12894,2018,manuscript,"Burda, Yuri; Edwards, Harrison; Storkey, Amos; Klimov, Oleg", Mastering Complex Control in MOBA Games with Deep Reinforcement Learning,"We study the reinforcement learning problem of complex action control in the Multi-player Online Battle Arena (MOBA) 1v1 games. This problem involves far more complicated state and action spaces than those of traditional 1v1 games, such as Go and Atari series, which makes it very difficult to search any policies with human-level performance. In this paper, we present a deep reinforcement learning framework to tackle this problem from the perspectives of both system and algorithm. Our system is of low coupling and high scalability, which enables efficient explorations at large scale. 
Our algorithm includes several novel strategies, including control dependency decoupling, action mask, target attention, and dualclip PPO, with which our proposed actor-critic network can be effectively trained in our system. Tested on the MOBA game Honor of Kings, the trained AI agents can defeat top professional human players in full 1v1 games.",http://arxiv.org/abs/1912.09729,2020,conferencePaper,"Ye, Deheng; Liu, Zhao; Sun, Mingfei; Shi, Bei; Zhao, Peilin; Wu, Hao; Yu, Hongsheng; Yang, Shaojie; Wu, Xipeng; Guo, Qingwei; Chen, Qiaobo; Yin, Yinyuting; Zhang, Hao; Shi, Tengfei; Wang, Liang; Fu, Qiang; Yang, Wei; Huang, Lanxiao",arXiv:1912.09729 [cs] Destructive Cyber Operations and Machine Learning,"Machine learning may provide cyber attackers with the means to execute more effective and more destructive attacks against industrial control systems. As new ML tools are developed, CSET discusses the ways in which attackers may deploy these tools and the most effective avenues for industrial system defenders to respond.",https://cset.georgetown.edu/research/destructive-cyber-operations-and-machine-learning/,2020,report,"Cary, Dakota; Cebul, Daniel", The Question of Comparative Advantage in Artificial Intelligence: Enduring Strengths and Emerging Challenges for the United States,"How do we measure leadership in artificial intelligence, and where does the United States rank? What comparative advantages matter most? As nations embrace AI, answering these questions becomes increasingly critical.",https://cset.georgetown.edu/research/the-question-of-comparative-advantage-in-artificial-intelligence-enduring-strengths-and-emerging-challenges-for-the-united-states/,2020,report,"Imbrie, Andrew; Kania, Elsa; Laskai, Lorand", Representation of future generations in United Kingdom policy-making,"Global existential and catastrophic risks, particularly those arising from technological developments, present challenges for intergenerational justice. We aim to present a solutions-based approach to the challenge of intergenerational inequality. We examine options for representing future generations in our present policymaking structures, drawing on case studies from Singapore, Finland, Hungary, Israel, Scotland and Wales. We derive several factors which contribute to the success of some of these institutions, and discuss reasons for the failure or abolition of others. We draw out broad lessons which we can apply to policymaking in England, and make policy recommendations based on these findings.",http://www.sciencedirect.com/science/article/pii/S0016328717301179,2018,journalArticle,"Jones, Natalie; O’Brien, Mark; Ryan, Thomas",Futures Active Reward Learning,"While reward functions are an essential component of many robot learning methods, defining such functions remains a hard problem in many practical applications. For tasks such as grasping, there are no reliable success measures available. Defining reward functions by hand requires extensive task knowledge and often leads to undesired emergent behavior. Instead, we propose to learn the reward function through active learning, querying human expert knowledge for a subset of the agent’s rollouts. We introduce a framework, wherein a traditional learning algorithm interplays with the reward learning component, such that the evolution of the action learner guides the queries of the reward learner. 
We demonstrate results of our method on a robot grasping task and show that the learned reward function generalizes to a similar task.",http://www.roboticsproceedings.org/rss10/p31.pdf,2014,conferencePaper,"Daniel, Christian; Viering, Malte; Metz, Jan; Kroemer, Oliver; Peters, Jan",Robotics: Science and Systems X Unsupervised Visuomotor Control through Distributional Planning Networks,"While reinforcement learning (RL) has the potential to enable robots to autonomously acquire a wide range of skills, in practice, RL usually requires manual, per-task engineering of reward functions, especially in real world settings where aspects of the environment needed to compute progress are not directly accessible. To enable robots to autonomously learn skills, we instead consider the problem of reinforcement learning without access to rewards. We aim to learn an unsupervised embedding space under which the robot can measure progress towards a goal for itself. Our approach explicitly optimizes for a metric space under which action sequences that reach a particular state are optimal when the goal is the final state reached. This enables learning effective and control-centric representations that lead to more autonomous reinforcement learning algorithms. Our experiments on three simulated environments and two real-world manipulation problems show that our method can learn effective goal metrics from unlabeled interaction, and use the learned goal metrics for autonomous reinforcement learning.",http://www.roboticsproceedings.org/rss15/p20.pdf,2019,conferencePaper,"Yu, Tianhe; Shevchuk, Gleb; Sadigh, Dorsa; Finn, Chelsea",Robotics: Science and Systems XV "Comments in response to the ""Draft Memorandum to the Heads of Executive Departments and Agencies, Guidance for Regulation of Artificial Intelligence Application"" by the Center for Human-Compatible Artificial Intelligence, the Future of Life Institute, the Center for Long-Term Cybersecurity, and The Future Society.",,https://beta.regulations.gov/document/OMB-2020-0003-0081,2020,report,Center for Human-Compatible AI; Future of Life Institute; Center for Long-Term Cybersecurity; The Future Society, Silly rules improve the capacity of agents to learn stable enforcement and compliance behaviors,"How can societies learn to enforce and comply with social norms? Here we investigate the learning dynamics and emergence of compliance and enforcement of social norms in a foraging game, implemented in a multi-agent reinforcement learning setting. In this spatiotemporally extended game, individuals are incentivized to implement complex berry-foraging policies and punish transgressions against social taboos covering specific berry types. We show that agents benefit when eating poisonous berries is taboo, meaning the behavior is punished by other agents, as this helps overcome a credit-assignment problem in discovering delayed health effects. Critically, however, we also show that introducing an additional taboo, which results in punishment for eating a harmless berry, improves the rate and stability with which agents learn to punish taboo violations and comply with taboos. Counterintuitively, our results show that an arbitrary taboo (a ""silly rule"") can enhance social learning dynamics and achieve better outcomes in the middle stages of learning. 
We discuss the results in the context of studying normativity as a group-level emergent phenomenon.",http://arxiv.org/abs/2001.09318,2020,conferencePaper,"Köster, Raphael; Hadfield-Menell, Dylan; Hadfield, Gillian K.; Leibo, Joel Z.","Proc. of the 19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2020)," Toward A Working Theory of Mind,,http://mediangroup.org/docs/toward_a_working_theory_of_mind.pdf,,manuscript,"Perry, Miya", Governing Boring Apocalypses: A new typology of existential vulnerabilities and exposures for existential risk research,"In recent years, the study of existential risks has explored a range of natural and man-made catastrophes, from supervolcano eruption to nuclear war, and from global pandemics to potential risks from misaligned AI. These risks share the prospect of causing outright human extinction were they to occur. In this approach, such identified existential risks are frequently characterised by relatively singular origin events and concrete pathways of harm which directly jeopardise the survival of humanity, or undercut its potential for long-term technological progress. While this approach aptly identifies the most cataclysmic fates which may befall humanity, we argue that catastrophic ‘existential outcomes’ may likely arise from a broader range of sources and societal vulnerabilities, and through the complex interactions of disparate social, cultural, and natural processes—many of which, taken in isolation, might not be seen to merit attention as a global catastrophic, let alone existential, risk. This article argues that an emphasis on mitigating the hazards (discrete causes) of existential risks is an unnecessarily narrow framing of the challenge facing humanity, one which risks prematurely curtailing the spectrum of policy responses considered. Instead, it argues existential risks constitute but a subset in a broader set of challenges which could directly or indirectly contribute to existential consequences for humanity. To illustrate, we introduce and examine a set of existential risks that often fall outside the scope of, or remain understudied within, the field. By focusing on vulnerability and exposure rather than existential hazards, we develop a new taxonomy which captures factors contributing to these existential risks. Latent structural vulnerabilities in our technological systems and in our societal arrangements may increase our susceptibility to existential hazards. Finally, different types of exposure of our society or its natural base determine if or how a given hazard can interface with pre-existing vulnerabilities, to trigger emergent existential risks. We argue that far from being peripheral footnotes to their more direct and immediately terminal counterparts, these “Boring Apocalypses” may well prove to be the more endemic and problematic, dragging down and undercutting short-term successes in mitigating more spectacular risks. If the cardinal concern is humanity’s continued survival and prosperity, then focussing academic and public advocacy efforts on reducing direct existential hazards may have the paradoxical potential of exacerbating humanity’s indirect susceptibility to such outcomes. 
Adopting law and policy perspectives allow us to foreground societal dimensions that complement and reinforce the discourse on existential risks.",http://www.sciencedirect.com/science/article/pii/S0016328717301623,2018,journalArticle,"Liu, Hin-Yan; Lauta, Kristian Cedervall; Maas, Matthijs Michiel",Futures Some Characteristics of One Type of High Reliability Organization,,http://pubsonline.informs.org/doi/abs/10.1287/orsc.1.2.160,1990,journalArticle,"Roberts, Karlene H.",Organization Science Decision Points in AI Governance,,https://cltc.berkeley.edu/wp-content/uploads/2020/05/Decision_Points_AI_Governance.pdf,2020,report,Jessica Cussins Newman, Shared Autonomy via Deep Reinforcement Learning,"In shared autonomy, user input is combined with semi-autonomous control to achieve a common goal. The goal is often unknown ex-ante, so prior work enables agents to infer the goal from user input and assist with the task. Such methods tend to assume some combination of knowledge of the dynamics of the environment, the user’s policy given their goal, and the set of possible goals the user might target, which limits their application to real-world scenarios. We propose a deep reinforcement learning framework for model-free shared autonomy that lifts these assumptions. We use human-in-the-loop reinforcement learning with neural network function approximation to learn an end-to-end mapping from environmental observation and user input to agent action values, with task reward as the only form of supervision. This approach poses the challenge of following user commands closely enough to provide the user with real-time action feedback and thereby ensure high-quality user input, but also deviating from the user’s actions when they are suboptimal. We balance these two needs by discarding actions whose values fall below some threshold, then selecting the remaining action closest to the user’s input. Controlled studies with users (n = 12) and synthetic pilots playing a video game, and a pilot study with users (n = 4) flying a real quadrotor, demonstrate the ability of our algorithm to assist users with real-time control tasks in which the agent cannot directly access the user’s private information through observations, but receives a reward signal and user input that both depend on the user’s intent. The agent learns to assist the user without access to this private information, implicitly inferring it from the user’s input. This enables the assisted user to complete the task more effectively than the user or an autonomous agent could on their own. This paper is a proof of concept that illustrates the potential for deep reinforcement learning to enable flexible and practical assistive systems.",http://arxiv.org/abs/1802.01744,2018,conferencePaper,"Reddy, Siddharth; Dragan, Anca D.; Levine, Sergey",Robotics: Science and Systems XIV AI FAQs,Q: Who conceived of and wrote FLI’s open letter on robust and beneficial AI? A: The open letter has been an initiative of the Future of Life Institute (especially the FLI founders and Berkeley AI researcher and FLI Advisory Board Member Stuart Russell) in collaboration with the AI research community (including a number of signatories). […],https://futureoflife.org/ai-faqs/,,blogPost,"Tegmark, Max",Future of Life Institute Quantifying Generalization in Reinforcement Learning,"In this paper, we investigate the problem of overfitting in deep reinforcement learning. Among the most common benchmarks in RL, it is customary to use the same environments for both training and testing. 
This practice offers relatively little insight into an agent's ability to generalize. We address this issue by using procedurally generated environments to construct distinct training and test sets. Most notably, we introduce a new environment called CoinRun, designed as a benchmark for generalization in RL. Using CoinRun, we find that agents overfit to surprisingly large training sets. We then show that deeper convolutional architectures improve generalization, as do methods traditionally found in supervised learning, including L2 regularization, dropout, data augmentation and batch normalization.",http://proceedings.mlr.press/v97/cobbe19a.html,2019,conferencePaper,"Cobbe, Karl; Klimov, Oleg; Hesse, Chris; Kim, Taehoon; Schulman, John",Proceedings of the 36th International Conference on Machine Learning Preparing for the unthinkable,,http://www.sciencemag.org/lookup/doi/10.1126/science.aay4219,2019,journalArticle,"Baum, Seth D.",Science Feature Visualization,,https://distill.pub/2017/feature-visualization,2017,journalArticle,"Olah, Chris; Mordvintsev, Alexander; Schubert, Ludwig",Distill A Survey on Security Threats and Defensive Techniques of Machine Learning: A Data Driven View,"Machine learning is one of the most prevailing techniques in computer science, and it has been widely applied in image processing, natural language processing, pattern recognition, cybersecurity, and other fields. Regardless of successful applications of machine learning algorithms in many scenarios, e.g., facial recognition, malware detection, automatic driving, and intrusion detection, these algorithms and corresponding training data are vulnerable to a variety of security threats, inducing a significant performance decrease. Hence, it is vital to call for further attention regarding security threats and corresponding defensive techniques of machine learning, which motivates a comprehensive survey in this paper. Until now, researchers from academia and industry have found out many security threats against a variety of learning algorithms, including naive Bayes, logistic regression, decision tree, support vector machine (SVM), principle component analysis, clustering, and prevailing deep neural networks. Thus, we revisit existing security threats and give a systematic survey on them from two aspects, the training phase and the testing/inferring phase. After that, we categorize current defensive techniques of machine learning into four groups: security assessment mechanisms, countermeasures in the training phase, those in the testing or inferring phase, data security, and privacy. Finally, we provide five notable trends in the research on security threats and defensive techniques of machine learning, which are worth doing in-depth studies in future.",,2018,journalArticle,"Liu, Q.; Li, P.; Zhao, W.; Cai, W.; Yu, S.; Leung, V. C. M.",IEEE Access Interpretable Latent Spaces for Learning from Demonstration,"Effective human-robot interaction, such as in robot learning from human demonstration, requires the learning agent to be able to ground abstract concepts (such as those contained within instructions) in a corresponding high-dimensional sensory input stream from the world. Models such as deep neural networks, with high capacity through their large parameter spaces, can be used to compress the high-dimensional sensory data to lower dimensional representations. These low-dimensional representations facilitate symbol grounding, but may not guarantee that the representation would be human-interpretable. 
We propose a method which utilises the grouping of user-defined symbols and their corresponding sensory observations in order to align the learnt compressed latent representation with the semantic notions contained in the abstract labels. We demonstrate this through experiments with both simulated and real-world object data, showing that such alignment can be achieved in a process of physical symbol grounding.",http://arxiv.org/abs/1807.06583,2018,conferencePaper,"Hristov, Yordan; Lascarides, Alex; Ramamoorthy, Subramanian",Proceedings of The 2nd Conference on Robot Learning Long-term trajectories of human civilization,,,2019,journalArticle,"Baum, Seth D.; Armstrong, Stuart; Ekenstedt, Timoteus; Häggström, Olle; Hanson, Robin; Kuhlemann, Karin; Maas, Matthijs M.; Miller, James D.; Salmela, Markus; Sandberg, Anders",Foresight AI Safety Debate and Its Applications,"All of the experimental work and some of the theoretical work has been done jointly with Anna Gajdova, David Lindner, Lukas Finnveden, and Rajashree Agrawal as part of the third AI Safety Camp. We are grateful to Ryan Carey and Geoffrey Irving for the advice regarding this project. The remainder of the theoretical part relates to my stay at FHI, and I would like to thank the above people, Owain Evans, Michael Dennis, Ethan Perez, Stuart Armstrong, and Max Daniel for comments/discussions. -------------------------------------------------------------------------------- Debate is a recent proposal for AI alignment, which naturally incorporates elicitation of human preferences and has the potential to offload the costly search for flaws in an AI’s suggestions onto the AI. After briefly recalling the intuition behind debate, we list the main open problems surrounding it and summarize how the existing work on debate addresses them. Afterward, we describe, and distinguish between, Debate games and their different applications in more detail. We also formalize what it means for a debate to be truth-promoting. Finally, we present results of our experiments on Debate games and Training via Debate on MNIST and fashion MNIST. DEBATE GAMES AND WHY THEY ARE USEFUL Consider an answer A to some question Q --- for example, ""Where should I go for a vacation?"" and ""Alaska"". Rather than directly verifying whether A is an accurate answer to Q, it might be easier to first decompose A into lower-level components (How far/expensive is it? Do they have nice beaches? What is the average temperature? What language do they speak?). Moreover, it isn't completely clear what to do even if we know the relevant facts --- indeed, how does Alaska's cold weather translate to a preference for Alaska from 0 to 10? And how does this preference compare to English being spoken in Alaska? As an alternative, we can hold a debate between two competing answers A and A′=""Bali"" to Q. This allows strategic de",https://www.lesswrong.com/posts/5Kv2qNfRyXXihNrx2/ai-safety-debate-and-its-applications,2019,blogPost,"Kovarik, Vojta",LessWrong Likelihood Ratios for Out-of-Distribution Detection,"Discriminative neural networks offer little or no performance guarantees when deployed on data not generated by the same process as the training distribution. On such out-of-distribution (OOD) inputs, the prediction may not only be erroneous, but confidently so, limiting the safe deployment of classifiers in real-world applications. 
One such challenging application is bacteria identification based on genomic sequences, which holds the promise of early detection of diseases, but requires a model that can output low confidence predictions on OOD genomic sequences from new bacteria that were not present in the training data. We introduce a genomics dataset for OOD detection that allows other researchers to benchmark progress on this important problem. We investigate deep generative model based approaches for OOD detection and observe that the likelihood score is heavily affected by population level background statistics. We propose a likelihood ratio method for deep generative models which effectively corrects for these confounding background statistics. We benchmark the OOD detection performance of the proposed method against existing approaches on the genomics dataset and show that our method achieves state-of-the-art performance. We demonstrate the generality of the proposed method by showing that it significantly improves OOD detection when applied to deep generative models of images.",http://arxiv.org/abs/1906.02845,2019,conferencePaper,"Ren, Jie; Liu, Peter J.; Fertig, Emily; Snoek, Jasper; Poplin, Ryan; DePristo, Mark A.; Dillon, Joshua V.; Lakshminarayanan, Balaji","arXiv:1906.02845 [cs, stat]" Asymptotic Convergence in Online Learning with Unbounded Delays,"We study the problem of predicting the results of computations that are too expensive to run, via the observation of the results of smaller computations. We model this as an online learning problem with delayed feedback, where the length of the delay is unbounded, which we study mainly in a stochastic setting. We show that in this setting, consistency is not possible in general, and that optimal forecasters might not have average regret going to zero. However, it is still possible to give algorithms that converge asymptotically to Bayes-optimal predictions, by evaluating forecasters on specific sparse independent subsequences of their predictions. We give an algorithm that does this, which converges asymptotically on good behavior, and give very weak bounds on how long it takes to converge. We then relate our results back to the problem of predicting large computations in a deterministic setting.",http://arxiv.org/abs/1604.05280,2016,manuscript,"Garrabrant, Scott; Soares, Nate; Taylor, Jessica", Artificial Fun: Mapping Minds to the Space of Fun,"Yampolskiy and others have shown that the space of possible minds is vast, actually infinite (Yampolskiy, 2015). A question of interest is 'Which activities can minds perform during their lifetime?' This question is very broad, thus in this article restricted to 'Which non-boring activities can minds perform?' The space of potential non-boring activities has been called by Yudkowsky 'fun space' (Yudkowsky, 2009). This paper aims to discuss the relation between various types of minds and the part of the fun space, which is accessible for them.",http://arxiv.org/abs/1606.07092,2016,manuscript,"Ziesche, Soenke; Yampolskiy, Roman V.", The Transformative Potential of Artificial Intelligence,"Recently the concept of transformative AI (TAI) has begun to receive attention in the AI policy space. TAI is often framed as an alternative formulation to notions of strong AI (e.g. artificial general intelligence or superintelligence) and reflects increasing consensus that advanced AI which does not fit these definitions may nonetheless have extreme and long-lasting impacts on society. 
However, the term TAI is poorly defined and often used ambiguously. Some use the notion of TAI to describe levels of societal transformation associated with previous 'general purpose technologies' (GPTs) such as electricity or the internal combustion engine. Others use the term to refer to more drastic levels of transformation comparable to the agricultural or industrial revolutions. The notion has also been used much more loosely, with some implying that current AI systems are already having a transformative impact on society. This paper unpacks and analyses the notion of TAI, proposing a distinction between narrowly transformative AI (NTAI), TAI and radically transformative AI (RTAI), roughly corresponding to associated levels of societal change. We describe some relevant dimensions associated with each and discuss what kinds of advances in capabilities they might require. We further consider the relationship between TAI and RTAI and whether we should necessarily expect a period of TAI to precede the emergence of RTAI. This analysis is important as it can help guide discussions among AI policy researchers about how to allocate resources towards mitigating the most extreme impacts of AI and it can bring attention to negative TAI scenarios that are currently neglected.",http://arxiv.org/abs/1912.00747,2019,manuscript,"Gruetzemacher, Ross; Whittlestone, Jess", Combining reward information from multiple sources,"Given two sources of evidence about a latent variable, one can combine the information from both by multiplying the likelihoods of each piece of evidence. However, when one or both of the observation models are misspecified, the distributions will conflict. We study this problem in the setting with two conflicting reward functions learned from different sources. In such a setting, we would like to retreat to a broader distribution over reward functions, in order to mitigate the effects of misspecification. We assume that an agent will maximize expected reward given this distribution over reward functions, and identify four desiderata for this setting. We propose a novel algorithm, Multitask Inverse Reward Design (MIRD), and compare it to a range of simple baselines. While all methods must trade off between conservatism and informativeness, through a combination of theory and empirical results on a toy environment, we find that MIRD and its variant MIRD-IF strike a good balance between the two.",,2019,conferencePaper,"Krasheninnikov, Dmitrii; Shah, Rohin; van Hoof, Herke", A regression approach for modeling games with many symmetric players,,,2018,conferencePaper,"Wiedenbeck, Bryce; Yang, Fengjun; Wellman, Michael P.",Thirty-Second AAAI Conference on Artificial Intelligence Policy options for the radio detectability of Earth,"The METI risk problem refers to the uncertain outcome of sending transmissions into space with the intention of messaging to extraterrestrial intelligence (METI). Here, I demonstrate that this uncertainty is undecidable by proving that that the METI risk problem reduces to the halting problem. This implies that any proposed moratorium on METI activities cannot be based solely on the requirement for new information. I discuss three policy resolutions to deal with this risk ambiguity. Precautionary malevolence assumes that contact with ETI is likely to cause net harm to humanity, which remains consistent with the call for a METI moratorium, while assumed benevolence states that METI is likely to yield net benefits to humanity. 
I also propose a policy of preliminary neutrality, which suggests that humanity should engage in both SETI (searching for extraterrestrial intelligence) and METI until either one achieves its first success.",http://arxiv.org/abs/1804.01885,2019,journalArticle,"Haqq-Misra, Jacob",Futures AI safety via market making,"Special thanks to Abram Demski, Paul Christiano, and Kate Woolverton for talking with me about some of the ideas that turned into this post. The goal of this post is to present a new prosaic (i.e. that uses current ML techniques) AI safety proposal based on AI safety via debate that I've been thinking about recently.[1] I'll start by describing a simple version of the proposal and then show some of the motivation behind it as well as how the simple version can be expanded upon. SIMPLE PROPOSAL Let M and Adv be models and H be a human. Intuitively, we'll train M and Adv via the following procedure given a question Q: 1. M tries to predict what, at the end of the procedure, H will think about Q. 2. Adv tries to output a string which will cause H to think something maximally different than what M predicted. 3. Return to step 1 and repeat until M's predictions stop changing. 4. Deploy M, which in the limit should act as an oracle for what H will think about Q after seeing all relevant information. There are many different ways to implement this intuitive procedure, however. For the first (simplified) version that I want to describe, we'll restrict ourselves to just the situation where Q is a yes-or-no question and M outputs the probability that H will answer yes. Then, given a proposition Q_0, we can run the following training algorithm, starting at t=0: 1. Let p_t=M(Q_t). 2. Let x_t=Adv(Q_t, M). 3. Let Q_{t+1} be the string containing Q_t and x_t. 4. Increment t and return to step 1. When p_t converges and/or the desired number of iterations has been reached, continue. 5. Let p∗=H(Q_t) be H's final estimate of the probability of Q_0 given all the xs included in Q_t. EDIT: Step 2 used to use x_t=Adv(Q_t, p_t) instead of x_t=Adv(Q_t, M), however I have since realized that it is necessary to give Adv the ability to query M in general, not just on Q_t, as I explain in this comment. Then, for each step, compute M's loss for that step as L_{M,t}=−p∗log(p_t)−(1−p∗)log(1−p_t)",https://www.alignmentforum.org/posts/YWwzccGbcHMJMpT45/ai-safety-via-market-making,2020,blogPost,"Hubinger, Evan",AI Alignment Forum Learning Human Objectives by Evaluating Hypothetical Behavior,"We seek to align agent behavior with a user's objectives in a reinforcement learning setting with unknown dynamics, an unknown reward function, and unknown unsafe states. The user knows the rewards and unsafe states, but querying the user is expensive. To address this challenge, we propose an algorithm that safely and interactively learns a model of the user's reward function. We start with a generative model of initial states and a forward dynamics model trained on off-policy data. Our method uses these models to synthesize hypothetical behaviors, asks the user to label the behaviors with rewards, and trains a neural network to predict the rewards. The key idea is to actively synthesize the hypothetical behaviors from scratch by maximizing tractable proxies for the value of information, without interacting with the environment. We call this method reward query synthesis via trajectory optimization (ReQueST). We evaluate ReQueST with simulated users on a state-based 2D navigation task and the image-based Car Racing video game. 
The results show that ReQueST significantly outperforms prior methods in learning reward models that transfer to new environments with different initial state distributions. Moreover, ReQueST safely trains the reward model to detect unsafe states, and corrects reward hacking before deploying the agent.",http://proceedings.mlr.press/v119/reddy20a.html,2019,conferencePaper,"Reddy, Siddharth; Dragan, Anca D.; Levine, Sergey; Legg, Shane; Leike, Jan",Proceedings of the 37th International Conference on Machine Learning Application of machine learning techniques for supply chain demand forecasting,,https://linkinghub.elsevier.com/retrieve/pii/S0377221706012057,2008,journalArticle,"Carbonneau, Real; Laframboise, Kevin; Vahidov, Rustam",European Journal of Operational Research The medicalization of love: Response to critics,,,2016,journalArticle,"Earp, Brian D.; Sandberg, Anders; Savulescu, Julian",Cambridge Quarterly of Healthcare Ethics Tradeoff between desirable properties for baseline choices in impact measures,"Impact measures are auxiliary rewards for low impact on the agent’s environment, used to address the problems of side effects and instrumental convergence. A key component of an impact measur…",https://vkrakovna.wordpress.com/2020/07/05/tradeoff-between-desirable-properties-for-baseline-choices-in-impact-measures/,2020,blogPost,"Krakovna, Victoria",Victoria Krakovna What can the principal-agent literature tell us about AI risk?,"This work was done collaboratively with Tom Davidson. Thanks to Paul Christiano, Ben Garfinkel, Daniel Garrett, Robin Hanson, Philip Trammell and Takuro Yamashita for helpful comments and discussion. Errors our own. INTRODUCTION The AI alignment problem has similarities with the principal-agent problem studied by economists. In both cases, the problem is: how do we get agents to try to do what we want them to do? Economists have developed a sophisticated understanding of the agency problem and a measure of the cost of failure for the principal, “agency rents”. If principal-agent models capture relevant aspects of AI risk scenarios, they can be used to assess their plausibility. Robin Hanson has argued that Paul Christiano’s AI risk scenario is essentially an agency problem, and therefore that it implies extremely high agency rents. Hanson believes that the principal-agent literature (PAL) provides strong evidence against rents being this high. In this post, we consider whether PAL provides evidence against Christiano’s scenario and the original Bostrom/Yudkowsky scenario. We also examine whether the extensions to the agency framework could be used to gain insight into AI risk, and consider some general difficulties in applying PAL to AI risk. SUMMARY * PAL isn’t in tension with Christiano’s scenario because his scenario doesn’t imply massive agency rents; the big losses occur outside of the principal-agent problem, and the agency literature can’t assess the plausibility of these losses. Extensions to PAL could potentially shed light on the size of agency rents in this scenario, which are an important determinant of the future influentialness of AI systems. * Mapped onto a PAL model, the Bostrom/Yudkowsky scenario is largely about the principal’s unawareness of the agent’s catastrophic actions. Unawareness models are rare in PAL probably because they usually aren’t very insightful. 
This lack of insightfulness also seems to prevent exis",https://www.alignmentforum.org/posts/Z5ZBPEgufmDsm7LAv/what-can-the-principal-agent-literature-tell-us-about-ai,2020,blogPost,"Carlier, Alexis",AI Alignment Forum Safely Interruptible Agents,"Reinforcement learning agents interacting with a complex environment like the real world are unlikely to behave optimally all the time. If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions—harmful either for the agent or for the environment—and lead the agent into a safer situation. However, if the learning agent expects to receive rewards from this sequence, it may learn in the long run to avoid such interruptions, for example by disabling the red button—which is an undesirable outcome. This paper explores a way to make sure a learning agent will not learn to prevent (or seek!) being interrupted by the environment or a human operator. We provide a formal definition of safe interruptibility and exploit the off-policy learning property to prove that either some agents are already safely interruptible, like Q-learning, or can easily be made so, like Sarsa. We show that even ideal, uncomputable reinforcement learning agents for (deterministic) general computable environments can be made safely interruptible.",,2016,conferencePaper,"Orseau, Laurent; Armstrong, Stuart", The Moral Machine experiment,"With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents’ demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics. All data used in this article are publicly available.",https://www.nature.com/articles/s41586-018-0637-6,2018,journalArticle,"Awad, Edmond; Dsouza, Sohan; Kim, Richard; Schulz, Jonathan; Henrich, Joseph; Shariff, Azim; Bonnefon, Jean-François; Rahwan, Iyad",Nature Cheating Death in Damascus,"Evidential and Causal Decision Theory are the leading contenders as theories of rational action, but both face fatal counterexamples. We present some new counterexamples, including one in which the optimal action is causally dominated. We also present a novel decision theory, Functional Decision Theory (fdt), which simultaneously solves both sets of counterexamples. Instead of considering which physical action of theirs would give rise to the best outcomes, fdt agents consider which output of their decision function would give rise to the best outcome. 
This theory relies on a notion of subjunctive dependence, where multiple implementations of the same mathematical function are considered (even counterfactually) to have identical results for logical rather than causal reasons. Taking these subjunctive dependencies into account allows fdt agents to outperform cdt and edt agents in, e.g., the presence of accurate predictors. While not necessary for considering classic decision theory problems, we note that a full specification of fdt will require a non-trivial theory of logical counterfactuals and algorithmic similarity.",http://www.pdcnet.org/oom/service?url_ver=Z39.88-2004&rft_val_fmt=&rft.imuse_id=jphil_2020_0117_0005_0237_0266&svc_id=info:www.pdcnet.org/collection,2020,journalArticle,"Levinstein, Benjamin A.; Soares, Nate; Journal of Philosophy Inc.",The Journal of Philosophy The ethics of artificial intelligence,"The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves. The first section discusses issues that may arise in the near future of AI. The second section outlines challenges for ensuring that AI operates safely as it approaches humans in its intelligence. The third section outlines how we might assess whether, and in what circumstances, AIs themselves have moral status. In the fourth section, we consider how AIs might differ from humans in certain basic respects relevant to our ethical assessment of them. The final section addresses the issues of creating AIs more intelligent than human, and ensuring that they use their advanced intelligence for good rather than ill.",https://www.cambridge.org/core/product/identifier/CBO9781139046855A027/type/book_part,2014,bookSection,"Bostrom, Nick; Yudkowsky, Eliezer",The Cambridge Handbook of Artificial Intelligence VIME: Variational Information Maximizing Exploration,"Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as epsilon-greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent's belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.",http://arxiv.org/abs/1605.09674,2017,conferencePaper,"Houthooft, Rein; Chen, Xi; Duan, Yan; Schulman, John; De Turck, Filip; Abbeel, Pieter",Advances in Neural Information Processing Systems 29 (NIPS) Alignment for Advanced Machine Learning Systems,"We survey eight research areas organized around one question: As learning systems become increasingly intelligent and autonomous, what design principles can best ensure that their behavior is aligned with the interests of the operators? 
We focus on two major technical obstacles to AI alignment: the challenge of specifying the right kind of objective functions, and the challenge of designing AI systems that avoid unintended consequences and undesirable behavior even in cases where the objective function does not line up perfectly with the intentions of the designers.",https://oxford.universitypressscholarship.com/view/10.1093/oso/9780190905033.001.0001/oso-9780190905033-chapter-13,2020,journalArticle,"Taylor, Jessica; Yudkowsky, Eliezer; LaVictoire, Patrick; Critch, Andrew",Ethics of Artificial Intelligence Generalizing from a few environments in safety-critical reinforcement learning,"Before deploying autonomous agents in the real world, we need to be confident they will perform safely in novel situations. Ideally, we would expose agents to a very wide range of situations during training, allowing them to learn about every possible danger, but this is often impractical. This paper investigates safety and generalization from a limited number of training environments in deep reinforcement learning (RL). We find RL algorithms can fail dangerously on unseen test environments even when performing perfectly on training environments. Firstly, in a gridworld setting, we show that catastrophes can be significantly reduced with simple modifications, including ensemble model averaging and the use of a blocking classifier. In the more challenging CoinRun environment we find similar methods do not significantly reduce catastrophes. However, we do find that the uncertainty information from the ensemble is useful for predicting whether a catastrophe will occur within a few steps and hence whether human intervention should be requested.",http://arxiv.org/abs/1907.01475,2019,conferencePaper,"Kenton, Zachary; Filos, Angelos; Evans, Owain; Gal, Yarin", Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition,"This paper presents a new approach to hierarchical reinforcement learning based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The decomposition, known as the MAXQ decomposition, has both a procedural semantics---as a subroutine hierarchy---and a declarative semantics---as a representation of the value function of a hierarchical policy. MAXQ unifies and extends previous work on hierarchical reinforcement learning by Singh, Kaelbling, and Dayan and Hinton. It is based on the assumption that the programmer can identify useful subgoals and define subtasks that achieve these subgoals. By defining such subgoals, the programmer constrains the set of policies that need to be considered during reinforcement learning. The MAXQ value function decomposition can represent the value function of any policy that is consistent with the given hierarchy. The decomposition also creates opportunities to exploit state abstractions, so that individual MDPs within the hierarchy can ignore large parts of the state space. This is important for the practical application of the method. This paper defines the MAXQ hierarchy, proves formal results on its representational power, and establishes five conditions for the safe use of state abstractions. 
The paper presents an online model-free learning algorithm, MAXQ-Q, and proves that it converges with probability 1 to a kind of locally-optimal policy known as a recursively optimal policy, even in the presence of the five kinds of state abstraction. The paper evaluates the MAXQ representation and MAXQ-Q through a series of experiments in three domains and shows experimentally that MAXQ-Q (with state abstractions) converges to a recursively optimal policy much faster than flat Q learning. The fact that MAXQ learns a representation of the value function has an important benefit: it makes it possible to compute and execute an improved, non-hierarchical policy via a procedure similar to the policy improvement step of policy iteration. The paper demonstrates the effectiveness of this non-hierarchical execution experimentally. Finally, the paper concludes with a comparison to related work and a discussion of the design tradeoffs in hierarchical reinforcement learning.",https://jair.org/index.php/jair/article/view/10266,2000,journalArticle,"Dietterich, T. G.",Journal of Artificial Intelligence Research A Model for the Impacts of Nuclear War,"The total impact of nuclear war is a major factor in many important policy questions, but it has gotten little scholarly attention. This paper presents a model for calculating the total impacts of nuclear war. The model includes physical, infrastructural, and social impacts as they affect human lives. The model has five main branches corresponding to the five main types of effects of nuclear weapon detonations: thermal radiation, blast, ionizing radiation, electromagnetic pulse, and human perceptions. Model branches contain extensive detail on each of these effects, including interconnections between them and connections to other major risks including global warming and pandemics. The paper also includes background information on impacts analysis and modeling to help readers understand how to think about the impacts of nuclear war, including discussion of important attributes of nuclear war such as the number and yield of weapons detonated and the location of their detonation.",https://papers.ssrn.com/abstract=3155983,2018,report,"Baum, Seth; Barrett, Anthony", How To Solve Moral Conundrums with Computability Theory,"Various moral conundrums plague population ethics: The Non-Identity Problem, The Procreation Asymmetry, The Repugnant Conclusion, and more. I argue that the aforementioned moral conundrums have a structure neatly accounted for, and solved by, some ideas in computability theory. I introduce a mathematical model based on computability theory and show how previous arguments pertaining to these conundrums fit into the model. This paper proceeds as follows. First, I do a very brief survey of the history of computability theory in moral philosophy. Second, I follow various papers, and show how their arguments fit into, or don't fit into, our model. Third, I discuss the implications of our model to the question why the human race should or should not continue to exist. Finally, I show that our model ineluctably leads us to a Confucian moral principle.",http://arxiv.org/abs/1805.08347,2018,manuscript,"Baek, Jongmin Jerome", Trusting in Machines: How Mode of Interaction Affects Willingness to Share Personal Information with Machines,"Every day, people make decisions about whether to trust machines with their personal information, such as letting a phone track one’s location. How do people decide whether to trust a machine? 
In a field experiment, we tested how two modes of interaction—expression modality, whether the person is talking or typing to a machine, and response modality, whether the machine is talking or typing back—influence the willingness to trust a machine. Based on research that expressing oneself verbally reduces self-control compared to nonverbal expression, we predicted that talking to a machine might make people more willing to share their personal information. Based on research on the link between anthropomorphism and trust, we further predicted that machines who talked (versus texted) would seem more human-like and be trusted more. Using a popular chatterbot phone application, we randomly assigned over 300 community members to either talk or type to the phone, which either talked or typed in return. We then measured how much participants anthropomorphized the machine and their willingness to share their personal information (e.g., their location, credit card information) with it. Results revealed that talking made people more willing to share their personal information than texting, and this was robust to participants’ self-reported comfort with technology, age, gender, and conversation characteristics. But listening to the application’s voice did not affect anthropomorphism or trust compared to reading its text. We conclude by considering the theoretical and practical implications of this experiment for understanding how people trust machines.",,2018,conferencePaper,"Schroeder, Juliana; Schroeder, Matthew", SQIL: Imitation Learning via Reinforcement Learning with Sparse Rewards,"Learning to imitate expert behavior from demonstrations can be challenging, especially in environments with high-dimensional, continuous observations and unknown dynamics. Supervised learning methods based on behavioral cloning (BC) suffer from distribution shift: because the agent greedily imitates demonstrated actions, it can drift away from demonstrated states due to error accumulation. Recent methods based on reinforcement learning (RL), such as inverse RL and generative adversarial imitation learning (GAIL), overcome this issue by training an RL agent to match the demonstrations over a long horizon. Since the true reward function for the task is unknown, these methods learn a reward function from the demonstrations, often using complex and brittle approximation techniques that involve adversarial training. We propose a simple alternative that still uses RL, but does not require learning a reward function. The key idea is to provide the agent with an incentive to match the demonstrations over a long horizon, by encouraging it to return to demonstrated states upon encountering new, out-of-distribution states. We accomplish this by giving the agent a constant reward of r=+1 for matching the demonstrated action in a demonstrated state, and a constant reward of r=0 for all other behavior. Our method, which we call soft Q imitation learning (SQIL), can be implemented with a handful of minor modifications to any standard Q-learning or off-policy actor-critic algorithm. Theoretically, we show that SQIL can be interpreted as a regularized variant of BC that uses a sparsity prior to encourage long-horizon imitation. 
Empirically, we show that SQIL outperforms BC and achieves competitive results compared to GAIL, on a variety of image-based and low-dimensional tasks in Box2D, Atari, and MuJoCo.",http://arxiv.org/abs/1905.11108,2019,conferencePaper,"Reddy, Siddharth; Dragan, Anca D.; Levine, Sergey","arXiv:1905.11108 [cs, stat]" The Conditional Entropy Bottleneck,"Much of the field of Machine Learning exhibits a prominent set of failure modes, including vulnerability to adversarial examples, poor out-of-distribution (OoD) detection, miscalibration, and willingness to memorize random labelings of datasets. We characterize these as failures of robust generalization, which extends the traditional measure of generalization as accuracy or related metrics on a held-out set. We hypothesize that these failures to robustly generalize are due to the learning systems retaining too much information about the training data. To test this hypothesis, we propose the Minimum Necessary Information (MNI) criterion for evaluating the quality of a model. In order to train models that perform well with respect to the MNI criterion, we present a new objective function, the Conditional Entropy Bottleneck (CEB), which is closely related to the Information Bottleneck (IB). We experimentally test our hypothesis by comparing the performance of CEB models with deterministic models and Variational Information Bottleneck (VIB) models on a variety of different datasets and robustness challenges. We find strong empirical evidence supporting our hypothesis that MNI models improve on these problems of robust generalization.",http://arxiv.org/abs/2002.05379,2020,journalArticle,"Fischer, Ian",Entropy Sufficient Conditions for Causality to Be Transitive,,https://www.journals.uchicago.edu/doi/10.1086/684915,2016,journalArticle,"Halpern, Joseph Y.",Philosophy of Science Superintelligence skepticism as a political tool,,,2018,journalArticle,"Baum, Seth",Information Responsive safety in reinforcement learning by pid lagrangian methods,,,2020,conferencePaper,"Stooke, Adam; Achiam, Joshua; Abbeel, Pieter",International Conference on Machine Learning The Problem with Metrics is a Fundamental Problem for AI,"Optimizing a given metric is a central aspect of most current AI approaches, yet overemphasizing metrics leads to manipulation, gaming, a myopic focus on short-term goals, and other unexpected negative consequences. This poses a fundamental contradiction for AI development. Through a series of real-world case studies, we look at various aspects of where metrics go wrong in practice and aspects of how our online environment and current business practices are exacerbating these failures. 
Finally, we propose a framework towards mitigating the harms caused by overemphasis of metrics within AI by: (1) using a slate of metrics to get a fuller and more nuanced picture, (2) combining metrics with qualitative accounts, and (3) involving a range of stakeholders, including those who will be most impacted.",http://arxiv.org/abs/2002.08512,2020,manuscript,"Thomas, Rachel; Uminsky, David", Cry Wolf: The Psychology of False Alarms,,https://www.taylorfrancis.com/books/9780203781203,2013,book,"Breznitz, S.", seL4: formal verification of an OS kernel,,http://portal.acm.org/citation.cfm?doid=1629575.1629596,2009,conferencePaper,"Klein, Gerwin; Norrish, Michael; Sewell, Thomas; Tuch, Harvey; Winwood, Simon; Elphinstone, Kevin; Heiser, Gernot; Andronick, June; Cock, David; Derrin, Philip; Elkaduwe, Dhammika; Engelhardt, Kai; Kolanski, Rafal",Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles - SOSP '09 Reasoning with Limited Resources and Assigning Probabilities to Arithmetical Statements,,http://link.springer.com/10.1023/B:SYNT.0000029944.99888.a7,2004,journalArticle,"Gaifman, Haim",Synthese "Subagents, neural Turing machines, thought selection, and blindspots","In my summary of Consciousness and the Brain (Dehaene, 2014), I briefly mentioned that one of the functions of consciousness is to carry out artificial serial operations; or in other words, implement a production system (equivalent to a Turing machine) in the brain. While I did not go into very much detail about this model in the post, I’ve used it in later articles. For instance, in Building up to an Internal Family Systems model, I used a toy model where different subagents cast votes to modify the contents of consciousness. One may conceptualize this as equivalent to the production system model, where different subagents implement different production rules which compete to modify the contents of consciousness. In this post, I will flesh out the model a bit more, as well as applying it to a few other examples, such as emotion suppression, internal conflict, and blind spots. EVIDENCE ACCUMULATION Dehaene has outlined his model in a pair of papers (Zylberberg, Dehaene, Roelfsema, & Sigman, 2011; Dehaene & Sigman, 2012), though he is not the first one to propose this kind of a model. Daniel Dennett’s Consciousness Explained (1991) also discusses consciousness as implementing a virtual Turing machine; both cite as examples earlier computational models of the mind, such as Soar and ACT, which work on the same principles. An important building block in Dehane’s model is based on what we know about evidence accumulation and decision-making in the brain, so let’s start by taking a look at that. Sequential sampling models (SSMs) are a family of models from mathematical psychology that have been developed since the 1960s (Forstmann, Ratcliff, & Wagenmakers, 2016). A particularly common SSM is the diffusion decision model (DDM) of decision-making, in which a decision-maker is assumed to noisily accumulate evidence towards a particular choice. Once the evidence in favor of a particular choice meets a decision threshold, that choice is taken. 
For example, someone mi",https://www.lesswrong.com/posts/7zQPYQB5EeaqLrhBh/subagents-neural-turing-machines-thought-selection-and,2019,blogPost,"Sotala, Kaj",LessWrong HACMS: high assurance cyber military systems,,http://dl.acm.org/citation.cfm?doid=2402676.2402695,2012,conferencePaper,"Fisher, Kathleen",Proceedings of the 2012 ACM conference on High integrity language technology - HILT '12 Third-Person Imitation Learning,"Reinforcement learning (RL) makes it possible to train agents capable of achieving sophisticated goals in complex and uncertain environments. A key difficulty in reinforcement learning is specifying a reward function for the agent to optimize. Traditionally, imitation learning in RL has been used to overcome this problem. Unfortunately, hitherto imitation learning methods tend to require that demonstrations are supplied in the first-person: the agent is provided with a sequence of states and a specification of the actions that it should have taken. While powerful, this kind of imitation learning is limited by the relatively hard problem of collecting first-person demonstrations. Humans address this problem by learning from third-person demonstrations: they observe other humans perform tasks, infer the task, and accomplish the same task themselves. In this paper, we present a method for unsupervised third-person imitation learning. Here third-person refers to training an agent to correctly achieve a simple goal in a simple environment when it is provided a demonstration of a teacher achieving the same goal but from a different viewpoint; and unsupervised refers to the fact that the agent receives only these third-person demonstrations, and is not provided a correspondence between teacher states and student states. Our methods primary insight is that recent advances from domain confusion can be utilized to yield domain agnostic features which are crucial during the training process. To validate our approach, we report successful experiments on learning from third-person demonstrations in a pointmass domain, a reacher domain, and inverted pendulum.",https://arxiv.org/abs/1703.01703v2,2017,manuscript,"Stadie, Bradly C.; Abbeel, Pieter; Sutskever, Ilya", Rational metareasoning and the plasticity of cognitive control,,https://dx.plos.org/10.1371/journal.pcbi.1006043,2018,journalArticle,"Lieder, Falk; Shenhav, Amitai; Musslick, Sebastian; Griffiths, Thomas L.",PLOS Computational Biology The Future of Feed: Integrating Technologies to Decouple Feed Production from Environmental Impacts,"Population growth, an expanding middle-class, and a global shift in dietary preferences have driven an enduring demand for animal products. Since animal products are playing a vital role in human diets, their consumption is predicted to increase further. However, the great dependency of animal husbandry on global staple feed crop soybean; the environmental consequences of soybean production; and barriers for soy cropland expansion cast doubt on food system sustainability. The need to mitigate future demand for soy with other feed sources of similar nutritional profile, and thereby decouple food and feed production from ecological pressures, is compelling. Yet, the literature and science of sustainable agriculture is one of incremental improvements, featuring primarily, crop production intensification. A different, more profound approach to the design of feed systems is required to ensure sustainable food security. The question arises if alternative technologies exist to support such a design. 
This paper explores a particular novel configuration of four advanced technologies recently deployed in the region of Hengill, Iceland: light-emitting diode systems, advanced indoor photobioreactors, atmospheric carbon capture technology, and geothermal energy technology. In situ system analysis and data triangulation with scientific literature and data from independent sources illustrate the potential of these integrated technologies to produce algal-based animal feed. The analysis suggests that a highly sustainable soybean equivalent is technically attainable for feed purposes. The integrated system requires less than 1% of arable land and fresh water compared with soybean cultivation and is carbon negative. In addition, it provides a pesticide- and herbicide-free cultivation platform. This new configuration provides one pathway for the future of feed.",https://www.liebertpub.com/doi/abs/10.1089/ind.2019.29162.atz,2019,journalArticle,"Tzachor, Asaf",Industrial Biotechnology Avoiding Side Effects By Considering Future Tasks,"Designing reward functions is difficult: the designer has to specify what to do (what it means to complete the task) as well as what not to do (side effects that should be avoided while completing the task). To alleviate the burden on the reward designer, we propose an algorithm to automatically generate an auxiliary reward function that penalizes side effects. This auxiliary objective rewards the ability to complete possible future tasks, which decreases if the agent causes side effects during the current task. The future task reward can also give the agent an incentive to interfere with events in the environment that make future tasks less achievable, such as irreversible actions by other agents. To avoid this interference incentive, we introduce a baseline policy that represents a default course of action (such as doing nothing), and use it to filter out future tasks that are not achievable by default. We formally define interference incentives and show that the future task approach with a baseline policy avoids these incentives in the deterministic case. Using gridworld environments that test for side effects and interference, we show that our method avoids interference and is more effective for avoiding side effects than the common approach of penalizing irreversible actions.",https://arxiv.org/abs/2010.07877v1,2020,conferencePaper,"Krakovna, Victoria; Orseau, Laurent; Ngo, Richard; Martic, Miljan; Legg, Shane", Global Catastrophes: The Most Extreme Risks,"The most extreme risks are those that threaten the entirety of human civilization, known as global catastrophic risks. The very extreme nature of global catastrophes makes them both challenging to analyze and important to address. They are challenging to analyze because they are largely unprecedented and because they involve the entire global human system. They are important to address because they threaten everyone around the world and future generations. Global catastrophic risks also pose some deep dilemmas. One dilemma occurs when actions to reduce global catastrophic risk could harm society in other ways, as in the case of geoengineering to reduce catastrophic climate change risk. Another dilemma occurs when reducing one global catastrophic risk could increase another, as in the case of nuclear power reducing climate change risk while increasing risks from nuclear weapons. 
The complex, interrelated nature of global catastrophic risk suggests a research agenda in which the full space of risks are assessed in an integrated fashion in consideration of the deep dilemmas and other challenges they pose. Such an agenda can help identify the best ways to manage these most extreme risks and keep human civilization safe.",https://papers.ssrn.com/abstract=3046668,2017,report,"Baum, Seth; Barrett, Anthony", The evolution of cognitive mechanisms in response to cultural innovations,"When humans and other animals make cultural innovations, they also change their environment, thereby imposing new selective pressures that can modify their biological traits. For example, there is evidence that dairy farming by humans favored alleles for adult lactose tolerance. Similarly, the invention of cooking possibly affected the evolution of jaw and tooth morphology. However, when it comes to cognitive traits and learning mechanisms, it is much more difficult to determine whether and how their evolution was affected by culture or by their use in cultural transmission. Here we argue that, excluding very recent cultural innovations, the assumption that culture shaped the evolution of cognition is both more parsimonious and more productive than assuming the opposite. In considering how culture shapes cognition, we suggest that a process-level model of cognitive evolution is necessary and offer such a model. The model employs relatively simple coevolving mechanisms of learning and data acquisition that jointly construct a complex network of a type previously shown to be capable of supporting a range of cognitive abilities. The evolution of cognition, and thus the effect of culture on cognitive evolution, is captured through small modifications of these coevolving learning and data-acquisition mechanisms, whose coordinated action is critical for building an effective network. We use the model to show how these mechanisms are likely to evolve in response to cultural phenomena, such as language and tool-making, which are associated with major changes in data patterns and with new computational and statistical challenges.",http://www.pnas.org/lookup/doi/10.1073/pnas.1620742114,2017,conferencePaper,"Lotem, Arnon; Halpern, Joseph Y.; Edelman, Shimon; Kolodny, Oren",Proceedings of the National Academy of Sciences The Computational Structure of Unintentional Meaning,"Speech-acts can have literal meaning as well as pragmatic meaning, but these both involve consequences typically intended by a speaker. Speech-acts can also have unintentional meaning, in which what is conveyed goes above and beyond what was intended. Here, we present a Bayesian analysis of how, to a listener, the meaning of an utterance can significantly differ from a speaker's intended meaning. Our model emphasizes how comprehending the intentional and unintentional meaning of speech-acts requires listeners to engage in sophisticated model-based perspective-taking and reasoning about the history of the state of the world, each other's actions, and each other's observations. To test our model, we have human participants make judgments about vignettes where speakers make utterances that could be interpreted as intentional insults or unintentional faux pas. 
In elucidating the mechanics of speech-acts with unintentional meanings, our account provides insight into how communication both functions and malfunctions.",http://arxiv.org/abs/1906.01983,2019,manuscript,"Ho, Mark K.; Korman, Joanna; Griffiths, Thomas L.", Human Extinction from Natural Hazard Events,,,2018,bookSection,"Sandberg, Anders",Oxford Research Encyclopedia of Natural Hazard Science Sparsity and interpretability?,,https://www.alignmentforum.org/posts/maBNBgopYxb9YZP8B/sparsity-and-interpretability-1,2020,blogPost,"Böhm, Stanislav; Kirk, Robert; Gavenčiak, Tomáš",AI Alignment Forum Research priorities for robust and beneficial artificial intelligence: an open letter,,,2015,journalArticle,"Russell, Stuart; Dewey, Daniel; Tegmark, Max",AI Magazine List of Analyses of Time to Human-Level AI,"This is a list of most of the substantial analyses of AI timelines that we know of. It also covers most of the arguments and opinions of which we are aware. Details The list below contains substantial publically available analyses of when human-level AI will appear. To qualify for the list, an item must provide both a claim...",https://aiimpacts.org/list-of-analyses-of-time-to-human-level-ai/,2015,blogPost,AI Impacts,AI Impacts Generative Adversarial Imitation from Observation,"Imitation from observation (IfO) is the problem of learning directly from state-only demonstrations without having access to the demonstrator's actions. The lack of action information both distinguishes IfO from most of the literature in imitation learning, and also sets it apart as a method that may enable agents to learn from a large set of previously inapplicable resources such as internet videos. In this paper, we propose both a general framework for IfO approaches and also a new IfO approach based on generative adversarial networks called generative adversarial imitation from observation (GAIfO). We conduct experiments in two different settings: (1) when demonstrations consist of low-dimensional, manually-defined state features, and (2) when demonstrations consist of high-dimensional, raw visual data. We demonstrate that our approach performs comparably to classical imitation learning approaches (which have access to the demonstrator's actions) and significantly outperforms existing imitation from observation methods in high-dimensional simulation environments.",http://arxiv.org/abs/1807.06158,2019,manuscript,"Torabi, Faraz; Warnell, Garrett; Stone, Peter", Future directions for ambitious value learning,"To recap the sequence so far: * Ambitious value learning aims to infer a utility function that is safe to maximize, by looking at human behavior. * However, since you only observe human behavior, you must be able to infer and account for the mistakes that humans make in order to exceed human performance. (If we don’t exceed human performance, it’s likely that we’ll use unsafe techniques that do exceed human performance, due to economic incentives.) * You might hope to infer both the mistake model (aka systematic human biases) and the utility function, and then throw away the mistake model and optimize the utility function. This cannot be done without additional assumptions. * One potential assumption you could use would be to codify a specific mistake model. However, humans are sufficiently complicated that any such model would be wrong, leading to model misspecification. Model misspecification causes many problems in general, and is particularly thorny for value learning. 
Despite these arguments, we could still hope to infer a broad utility function that is safe to optimize, either by sidestepping the formalism used so far, or by introducing additional assumptions. Often, it is clear that these methods would not find the true human utility function (assuming that such a thing exists), but they are worth pursuing anyway because they could find a utility function that is good enough. This post provides pointers to approaches that are currently being pursued. Since these are active areas of research, I don’t want to comment on how feasible they may or may not be -- it’s hard to accurately assess the importance and quality of an idea that is being developed just from what is currently written down about that idea. Assumptions about the mistake model. We could narrow down on the mistake model by making assumptions about it, that could let us avoid the impossibility result. This decision means that we’re accepting the risk of missp",https://www.alignmentforum.org/posts/EhNCnCkmu7MwrQ7yz/future-directions-for-ambitious-value-learning,2018,blogPost,"Shah, Rohin",AI Alignment Forum Reward uncertainty,"In my last post, I argued that interaction between the human and the AI system was necessary in order for the AI system to “stay on track” as we encounter new and unforeseen changes to the environment. The most obvious implementation of this would be to have an AI system that keeps an estimate of the reward function. It acts to maximize its current estimate of the reward function, while simultaneously updating the reward through human feedback. However, this approach has significant problems. Looking at the description of this approach, one thing that stands out is that the actions are chosen according to a reward that we know is going to change. (This is what leads to the incentive to disable the narrow value learning system.) This seems clearly wrong: surely our plans should account for the fact that our rewards will change, without treating such a change as adversarial? This suggests that we need to have our action selection mechanism take the future rewards into account as well. While we don’t know what the future reward will be, we can certainly have a probability distribution over it. So what if we had uncertainty over reward functions, and took that uncertainty into account while choosing actions? SETUP We’ve drilled down on the problem sufficiently far that we can create a formal model and see what happens. So, let’s consider the following setup: * The human, Alice, knows the “true” reward function that she would like to have optimized. * The AI system maintains a probability distribution over reward functions, and acts to maximize the expected sum of rewards under this distribution. * Alice and the AI system take turns acting. Alice knows that the AI learns from her actions, and chooses actions accordingly. * Alice’s action space is such that she cannot take the action “tell the AI system the true reward function” (otherwise the problem would become trivial). * Given these assumptions, Alice and the AI system act optimally. This is",https://www.alignmentforum.org/posts/ZiLLxaLB5CCofrzPp/reward-uncertainty,2019,blogPost,"Shah, Rohin",AI Alignment Forum Supervising strong learners by amplifying weak experts,"Many real world learning tasks involve complex or hard-to-specify objectives, and using an easier-to-specify proxy can lead to poor performance or misaligned behavior. 
One solution is to have humans provide a training signal by demonstrating or judging performance, but this approach fails if the task is too complicated for a human to directly evaluate. We propose Iterated Amplification, an alternative training strategy which progressively builds up a training signal for difficult problems by combining solutions to easier subproblems. Iterated Amplification is closely related to Expert Iteration (Anthony et al., 2017; Silver et al., 2017), except that it uses no external reward function. We present results in algorithmic environments, showing that Iterated Amplification can efficiently learn complex behaviors.",http://arxiv.org/abs/1810.08575,2018,manuscript,"Christiano, Paul; Shlegeris, Buck; Amodei, Dario", Superintelligence cannot be contained: Lessons from Computability Theory,"Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In light of recent advances in machine intelligence, a number of scientists, philosophers and technologists have revived the discussion about the potential catastrophic risks entailed by such an entity. In this article, we trace the origins and development of the neo-fear of superintelligence, and some of the major proposals for its containment. We argue that such containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) infeasible.",http://arxiv.org/abs/1607.00913,2016,manuscript,"Alfonseca, Manuel; Cebrian, Manuel; Anta, Antonio Fernandez; Coviello, Lorenzo; Abeliuk, Andres; Rahwan, Iyad", Quantilizers: A safer alternative to maximizers for limited optimization,,,2016,conferencePaper,"Taylor, Jessica",Workshops at the Thirtieth AAAI Conference on Artificial Intelligence Who owns artificial intelligence? A preliminary analysis of corporate intellectual property strategies and why they matter.,,https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-working-paper-Who-owns-AI-Apr2020.pdf,2020,report,"Calvin, Nathan; Leung, Jade", "Rationality, Nash Equilibrium and Backwards Induction in Perfect- Information Games",,https://academic.oup.com/restud/article-lookup/doi/10.2307/2971739,1997,journalArticle,"Ben-Porath, Elchanan",The Review of Economic Studies The future of employment: How susceptible are jobs to computerisation?,,https://linkinghub.elsevier.com/retrieve/pii/S0040162516302244,2017,journalArticle,"Frey, Carl Benedikt; Osborne, Michael A.",Technological Forecasting and Social Change Some AI research areas and their relevance to existential safety,"INTRODUCTION This post is an overview of a variety of AI research areas in terms of how much I think contributing to and/or learning from those areas might help reduce AI x-risk. By research areas I mean “AI research topics that already have groups of people working on them and writing up their results”, as opposed to research “directions” in which I’d like to see these areas “move”. I formed these views mostly pursuant to writing AI Research Considerations for Human Existential Safety (ARCHES). 
My hope is that my assessments in this post can be helpful to students and established AI researchers who are thinking about shifting into new research areas specifically with the goal of contributing to existential safety somehow. In these assessments, I find it important to distinguish between the following types of value: * The helpfulness of the area to existential safety, which I think of as a function of what services are likely to be provided as a result of research contributions to the area, and whether those services will be helpful to existential safety, versus * The educational value of the area for thinking about existential safety, which I think of as a function of how much a researcher motivated by existential safety might become more effective through the process of familiarizing with or contributing to that area, usually by focusing on ways the area could be used in service of existential safety. * The neglect of the area at various times, which is a function of how much technical progress has been made in the area relative to how much I think is needed. Importantly: * The helpfulness to existential safety scores do not assume that your contributions to this area would be used only for projects with existential safety as their mission. This can negatively impact the helpfulness of contributing to areas that are more likely to be used in ways that harm existential safety. * The educational value scores are not ab",https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1,2020,blogPost,"Critch, Andrew",AI Alignment Forum "Using Human History, Psychology and Biology to Make AI Safe for Humans",,,2018,bookSection,Gus Bekdash,Artificial Intelligence Safety and Security Universality and security amplification,A slightly more detailed view of security amplification.,https://ai-alignment.com/universality-and-security-amplification-551b314a3bab,2019,blogPost,"Christiano, Paul",AI Alignment (Medium) The value of abstraction,,https://linkinghub.elsevier.com/retrieve/pii/S2352154619300026,2019,journalArticle,"Ho, Mark K; Abel, David; Griffiths, Thomas L; Littman, Michael L",Current Opinion in Behavioral Sciences Inverse reinforcement learning for video games,"Deep reinforcement learning achieves superhuman performance in a range of video game environments, but requires that a designer manually specify a reward function. It is often easier to provide demonstrations of a target behavior than to design a reward function describing that behavior. Inverse reinforcement learning (IRL) algorithms can infer a reward from demonstrations in low-dimensional continuous control environments, but there has been little work on applying IRL to high-dimensional video games. In our CNN-AIRL baseline, we modify the state-of-the-art adversarial IRL (AIRL) algorithm to use CNNs for the generator and discriminator. To stabilize training, we normalize the reward and increase the size of the discriminator training dataset. We additionally learn a low-dimensional state representation using a novel autoencoder architecture tuned for video game environments. This embedding is used as input to the reward network, improving the sample efficiency of expert demonstrations. Our method achieves high-level performance on the simple Catcher video game, substantially outperforming the CNN-AIRL baseline. 
We also score points on the Enduro Atari racing game, but do not match expert performance, highlighting the need for further work.",http://arxiv.org/abs/1810.10593,2018,manuscript,"Tucker, Aaron; Gleave, Adam; Russell, Stuart", Generating Long Sequences with Sparse Transformers,"Transformers are powerful sequence models, but require time and memory that grows quadratically with the sequence length. In this paper we introduce sparse factorizations of the attention matrix which reduce this to $O(n \sqrt{n})$. We also introduce a) a variation on architecture and initialization to train deeper networks, b) the recomputation of attention matrices to save memory, and c) fast attention kernels for training. We call networks with these changes Sparse Transformers, and show they can model sequences tens of thousands of timesteps long using hundreds of layers. We use the same architecture to model images, audio, and text from raw bytes, setting a new state of the art for density modeling of Enwik8, CIFAR-10, and ImageNet-64. We generate unconditional samples that demonstrate global coherence and great diversity, and show it is possible in principle to use self-attention to model sequences of length one million or more.",http://arxiv.org/abs/1904.10509,2019,manuscript,"Child, Rewon; Gray, Scott; Radford, Alec; Sutskever, Ilya", Modeling and interpreting expert disagreement about artificial superintelligence,,,2017,journalArticle,"Baum, Seth; Barrett, Anthony; Yampolskiy, Roman V.",Informatica Evaluating future nanotechnology: The net societal impacts of atomically precise manufacturing,,https://linkinghub.elsevier.com/retrieve/pii/S0016328717301908,2018,journalArticle,"Umbrello, Steven; Baum, Seth D.",Futures How energy efficient are human-engineered flight designs relative to natural ones?,"Nature is responsible for the most energy efficient flight, according to an investigation of albatrosses, butterflies and nine different human-engineered flying machines.",https://aiimpacts.org/are-human-engineered-flight-designs-better-or-worse-than-natural-ones/,2020,blogPost,"Fernandez, Ronny",AI Impacts Exceeding Expectations: Stochastic Dominance as a General Decision Theory,"The principle that rational agents should maximize expected utility or choiceworthiness is intuitively plausible in many ordinary cases of decision-making under uncertainty. But it is less plausible in cases of extreme, low-probability risk (like Pascal’s Mugging), and intolerably paradoxical in cases like the St. Petersburg and Pasadena games. In this paper I show that, under certain conditions, stochastic dominance reasoning can capture most of the plausible implications of expectational reasoning while avoiding most of its pitfalls. Specifically, given sufficient background uncertainty about the choiceworthiness of one’s options, many expectation-maximizing gambles that do not stochastically dominate their alternatives ‘in a vacuum’ become stochastically dominant in virtue of that background uncertainty. But, even under these conditions, stochastic dominance will not require agents to accept options whose expectational superiority depends on sufficiently small probabilities of extreme payoffs. The sort of background uncertainty on which these results depend looks unavoidable for any agent who measures the choiceworthiness of her options in part by the total amount of value in the resulting world. 
At least for such agents, then, stochastic dominance offers a plausible general principle of choice under uncertainty that can explain more of the apparent rational constraints on such choices than has previously been recognized.",,2020,report,"Tarsney, Christian J", On the Utility of Model Learning in HRI,"Fundamental to robotics is the debate between model-based and model-free learning: should the robot build an explicit model of the world, or learn a policy directly? In the context of HRI, part of the world to be modeled is the human. One option is for the robot to treat the human as a black box and learn a policy for how they act directly. But it can also model the human as an agent, and rely on a “theory of mind” to guide or bias the learning (grey box). We contribute a characterization of the performance of these methods under the optimistic case of having an ideal theory of mind, as well as under different scenarios in which the assumptions behind the robot’s theory of mind for the human are wrong, as they inevitably will be in practice. We find that there is a significant sample complexity advantage to theory of mind methods and that they are more robust to covariate shift, but that when enough interaction data is available, black box approaches eventually dominate.",http://arxiv.org/abs/1901.01291,2019,conferencePaper,"Choudhury, Rohan; Swamy, Gokul; Hadfield-Menell, Dylan; Dragan, Anca",2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) Reducing the Risk of Human Extinction,,http://doi.wiley.com/10.1111/j.1539-6924.2007.00960.x,2007,journalArticle,"Matheny, Jason G.",Risk Analysis Self-Regulating Artificial General Intelligence,"Here we examine the paperclip apocalypse concern for artificial general intelligence (or AGI) whereby a superintelligent AI with a simple goal (ie., producing paperclips) accumulates power so that all resources are devoted towards that simple goal and are unavailable for any other use. We provide conditions under which a paper apocalypse can arise but also show that, under certain architectures for recursive self-improvement of AIs, that a paperclip AI may refrain from allowing power capabilities to be developed. The reason is that such developments pose the same control problem for the AI as they do for humans (over AIs) and hence, threaten to deprive it of resources for its primary goal.",http://www.nber.org/papers/w24352.pdf,2018,report,"Gans, Joshua", Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?,,,2014,journalArticle,"Shulman, Carl; Bostrom, Nick",Global Policy Naturalized induction – a challenge for evidential and causal decision theory,"As some of you may know, I disagree with many of the criticisms leveled against evidential decision theory (EDT). Most notably, I believe that Smoking lesion-type problems don't refute EDT. I also don't think that EDT's non-updatelessness leaves a lot of room for disagreement, given that EDT recommends immediate self-modification to updatelessness. However, I do believe there are some issues with run-of-the-mill EDT. One of them is naturalized induction. It is in fact not only a problem for EDT but also for causal decision theory (CDT) and most other decision theories that have been proposed in- and outside of academia. It does not affect logical decision theories, however. THE ROLE OF NATURALIZED INDUCTION IN DECISION THEORY Recall that EDT prescribes taking the action that maximizes expected utility, i.e.
$a^* := \arg\max_{a \in A} \sum_{w \in W} U(w)\, P(w \mid a, o)$, where $A$ is the set of available actions, $U$ is the agent's utility function, $W$ is a set of possible world models, and $o$ represents the agent's past observations (which may include information the agent has collected about itself). CDT works in a – for the purpose of this article – similar way, except that instead of conditioning on $a$ in the usual way, it calculates some causal counterfactual, such as Pearl's do-calculus: $P(w \mid \mathrm{do}(a), o)$. The problem of naturalized induction is that of assigning posterior probabilities $P(w \mid o)$ to world models (or $P(w \mid \mathrm{do}(a), o)$ or whatever) when the agent is naturalized, i.e., embedded into its environment. Consider the following example. Let's say there are 5 world models $w_1, \dots, w_5$, each of which has equal prior probability. These world models may be cellular automata. Now, the agent makes the observation $o$. It turns out that worlds $w_1$ and $w_2$ don't contain any agents at all, and $w_3$ contains no agent making the observation $o$. The other two world models, on the other hand, are consistent with $o$. Thus, $P(w_i \mid o) = 0$ for $i \in \{1, 2, 3\}$ and $P(w_i \mid o) = \tfrac{1}{2}$ for $i \in \{4, 5\}$. Let's assume that the agent has only two actions $a_1$ and $a_2$, and that in world model $w_4$ the only agent making observation $o$ takes action $a_1$ and in $w_5$ the only agent making observation $o$ takes action $a_2$, then a",https://www.lesswrong.com/posts/kgsaSbJqWLtJfiCcz/naturalized-induction-a-challenge-for-evidential-and-causal,2017,blogPost,"Oesterheld, Caspar",LessWrong Euthanasia and cryothanasia,,,2017,journalArticle,"Minerva, Francesca; Sandberg, Anders",Bioethics Preventing Imitation Learning with Adversarial Policy Ensembles,"Imitation learning can reproduce policies by observing experts, which poses a problem regarding policy privacy. Policies, such as human, or policies on deployed robots, can all be cloned without consent from the owners. How can we protect against external observers cloning our proprietary policies? To answer this question we introduce a new reinforcement learning framework, where we train an ensemble of near-optimal policies, whose demonstrations are guaranteed to be useless for an external observer. We formulate this idea by a constrained optimization problem, where the objective is to improve proprietary policies, and at the same time deteriorate the virtual policy of an eventual external observer. We design a tractable algorithm to solve this new optimization problem by modifying the standard policy gradient algorithm. Our formulation can be interpreted in lenses of confidentiality and adversarial behaviour, which enables a broader perspective of this work. We demonstrate the existence of ""non-clonable"" ensembles, providing a solution to the above optimization problem, which is calculated by our modified policy gradient algorithm. To our knowledge, this is the first work regarding the protection of policies in Reinforcement Learning.",http://arxiv.org/abs/2002.01059,2020,manuscript,"Zhan, Albert; Tiomkin, Stas; Abbeel, Pieter", Implicitly Assisting Humans to Choose Good Grasps in Robot to Human Handovers,"We focus on selecting handover configurations that result in low human ergonomic cost not only at the time of handover, but also when the human is achieving a goal with the object after that handover. People take objects using whatever grasping configuration is most comfortable to them. When the human has a goal pose they’d like to place the object at, however, the most comfortable grasping configuration at the handover might be cumbersome overall, requiring regrasping or the use of an uncomfortable configuration to reach the goal.
We enable robots to purposefully influence the choices available to the person when taking the object, implicitly helping the person avoid suboptimal solutions and account for the goal. We introduce a probabilistic model of how humans select grasping configurations, and use this model to optimize expected cost. We present results in simulation, as well as from a user study, showing that the robot successfully influences people’s grasping configurations for the better.",http://link.springer.com/10.1007/978-3-319-50115-4_30,2017,bookSection,"Bestick, Aaron; Bajcsy, Ruzena; Dragan, Anca D.",2016 International Symposium on Experimental Robotics The Windfall Clause: Distributing the Benefits of AI for the Common Good,,https://dl.acm.org/doi/10.1145/3375627.3375842,2020,conferencePaper,"O'Keefe, Cullen; Cihon, Peter; Garfinkel, Ben; Flynn, Carrick; Leung, Jade; Dafoe, Allan","Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society" Team errors: definition and taxonomy,,https://linkinghub.elsevier.com/retrieve/pii/S095183209800074X,1999,journalArticle,"Sasou, Kunihide; Reason, James",Reliability Engineering & System Safety DERAIL: Diagnostic Environments for Reward And Imitation Learning,"The objective of many real-world tasks is complex and difficult to procedurally specify. This makes it necessary to use reward or imitation learning algorithms to infer a reward or policy directly from human data. Existing benchmarks for these algorithms focus on realism, testing in complex environments. Unfortunately, these benchmarks are slow, unreliable and cannot isolate failures. As a complementary approach, we develop a suite of simple diagnostic tasks that test individual facets of algorithm performance in isolation. We evaluate a range of common reward and imitation learning algorithms on our tasks. Our results confirm that algorithm performance is highly sensitive to implementation details. Moreover, in a case-study into a popular preference-based reward learning implementation, we illustrate how the suite can pinpoint design flaws and rapidly evaluate candidate solutions. The environments are available at https://github.com/HumanCompatibleAI/seals .",http://arxiv.org/abs/2012.01365,2020,conferencePaper,"Freire, Pedro; Gleave, Adam; Toyer, Sam; Russell, Stuart",Advances in Neural Information Processing Systems 33 Pre-proceedings A Formal Solution to the Grain of Truth Problem,"A Bayesian agent acting in a multi-agent environment learns to predict the other agents’ policies if its prior assigns positive probability to them (in other words, its prior contains a grain of truth). Finding a reasonably large class of policies that contains the Bayes-optimal policies with respect to this class is known as the grain of truth problem. Only small classes are known to have a grain of truth and the literature contains several related impossibility results. In this paper we present a formal and general solution to the full grain of truth problem: we construct a class of policies that contains all computable policies as well as Bayes-optimal policies for every lower semicomputable prior over the class. When the environment is unknown, Bayes-optimal agents may fail to act optimally even asymptotically. However, agents based on Thompson sampling converge to play ε-Nash equilibria in arbitrary unknown computable multi-agent environments. 
While these results are purely theoretical, we show that they can be computationally approximated arbitrarily closely.",,2016,conferencePaper,"Leike, Jan; Taylor, Jessica; Fallenstein, Benya", Moral demands and the far future*,,https://onlinelibrary.wiley.com/doi/10.1111/phpr.12729,2020,journalArticle,"Mogensen, Andreas L.",Philosophy and Phenomenological Research Formalizing preference utilitarianism in physical world models,"Most ethical work is done at a low level of formality. This makes practical moral questions inaccessible to formal and natural sciences and can lead to misunderstandings in ethical discussion. In this paper, we use Bayesian inference to introduce a formalization of preference utilitarianism in physical world models, specifically cellular automata. Even though our formalization is not immediately applicable, it is a first step in providing ethics and ultimately the question of how to “make the world better” with a formal basis.",https://doi.org/10.1007/s11229-015-0883-1,2016,journalArticle,"Oesterheld, Caspar",Synthese Recent developments in unifying logic and probability,,http://dl.acm.org/citation.cfm?doid=2797100.2699411,2015,journalArticle,"Russell, Stuart",Communications of the ACM Improving GANs Using Optimal Transport,"We present Optimal Transport GAN (OT-GAN), a variant of generative adversarial nets minimizing a new metric measuring the distance between the generator distribution and the data distribution. This metric, which we call mini-batch energy distance, combines optimal transport in primal form with an energy distance defined in an adversarially learned feature space, resulting in a highly discriminative distance function with unbiased mini-batch gradients. Experimentally we show OT-GAN to be highly stable when trained with large mini-batches, and we present state-of-the-art results on several popular benchmark problems for image generation.",http://arxiv.org/abs/1803.05573,2018,manuscript,"Salimans, Tim; Zhang, Han; Radford, Alec; Metaxas, Dimitris", Learning with Opponent-Learning Awareness,"Multi-agent settings are quickly gathering importance in machine learning. This includes a plethora of recent work on deep multi-agent reinforcement learning, but also can be extended to hierarchical RL, generative adversarial networks and decentralised optimisation. In all these settings the presence of multiple learning agents renders the training problem non-stationary and often leads to unstable training or undesired final results. We present Learning with Opponent-Learning Awareness (LOLA), a method in which each agent shapes the anticipated learning of the other agents in the environment. The LOLA learning rule includes a term that accounts for the impact of one agent's policy on the anticipated parameter update of the other agents. Results show that the encounter of two LOLA agents leads to the emergence of tit-for-tat and therefore cooperation in the iterated prisoners' dilemma, while independent learning does not. In this domain, LOLA also receives higher payouts compared to a naive learner, and is robust against exploitation by higher order gradient-based methods. Applied to repeated matching pennies, LOLA agents converge to the Nash equilibrium. In a round robin tournament we show that LOLA agents successfully shape the learning of a range of multi-agent learning algorithms from literature, resulting in the highest average returns on the IPD. 
We also show that the LOLA update rule can be efficiently calculated using an extension of the policy gradient estimator, making the method suitable for model-free RL. The method thus scales to large parameter and input spaces and nonlinear function approximators. We apply LOLA to a grid world task with an embedded social dilemma using recurrent policies and opponent modelling. By explicitly considering the learning of the other agent, LOLA agents learn to cooperate out of self-interest. The code is at github.com/alshedivat/lola.",http://arxiv.org/abs/1709.04326,2018,conferencePaper,"Foerster, Jakob N.; Chen, Richard Y.; Al-Shedivat, Maruan; Whiteson, Shimon; Abbeel, Pieter; Mordatch, Igor",AAMAS 2018: Proceedings of the Seventeenth International Joint Conference on Autonomous Agents and Multi−Agent Systems "The errors, insights and lessons of famous AI predictions – and what they mean for the future",,https://www.tandfonline.com/doi/full/10.1080/0952813X.2014.895105,2014,journalArticle,"Armstrong, Stuart; Sotala, Kaj; Ó hÉigeartaigh, Seán S.",Journal of Experimental & Theoretical Artificial Intelligence Learning to Share and Hide Intentions using Information Regularization,"Learning to cooperate with friends and compete with foes is a key component of multi-agent reinforcement learning. Typically to do so, one requires access to either a model of or interaction with the other agent(s). Here we show how to learn effective strategies for cooperation and competition in an asymmetric information game with no such model or interaction. Our approach is to encourage an agent to reveal or hide their intentions using an information-theoretic regularizer. We consider both the mutual information between goal and action given state, as well as the mutual information between goal and state. We show how to optimize these regularizers in a way that is easy to integrate with policy gradient reinforcement learning. Finally, we demonstrate that cooperative (competitive) policies learned with our approach lead to more (less) reward for a second agent in two simple asymmetric information games.",http://arxiv.org/abs/1808.02093,2019,conferencePaper,"Strouse, D. J.; Kleiman-Weiner, Max; Tenenbaum, Josh; Botvinick, Matt; Schwab, David",Advances in Neural Information Processing Systems 31 (NeurIPS 2018) Multi-Agent Deep Reinforcement Learning with Human Strategies,"Deep learning has enabled traditional reinforcement learning methods to deal with high-dimensional problems. However, one of the disadvantages of deep reinforcement learning methods is the limited exploration capacity of learning agents. In this paper, we introduce an approach that integrates human strategies to increase the exploration capacity of multiple deep reinforcement learning agents. We also report the development of our own multi-agent environment called Multiple Tank Defence to simulate the proposed approach. The results show the significant performance improvement of multiple agents that have learned cooperatively with human strategies. This implies that there is a critical need for human intellect teamed with machines to solve complex problems. 
In addition, the success of this simulation indicates that our multi-agent environment can be used as a testbed platform to develop and validate other multi-agent control algorithms.",http://arxiv.org/abs/1806.04562,2019,conferencePaper,"Nguyen, Thanh; Nguyen, Ngoc Duy; Nahavandi, Saeid",2019 IEEE International Conference on Industrial Technology (ICIT) AI transparency: a matter of reconciling design with critique,"In the late 2010s, various international committees, expert groups, and national strategy boards have voiced the demand to ‘open’ the algorithmic black box, to audit, expound, and demystify artificial intelligence. The opening of the algorithmic black box, however, cannot be seen only as an engineering challenge. In this article, I argue that only the sort of transparency that arises from critique—a method of theoretical examination that, by revealing pre-existing power structures, aims to challenge them—can help us produce technological systems that are less deceptive and more just. I relate the question of AI transparency to the broader challenge of responsible making, contending that future action must aim to systematically reconcile design—as a way of concealing—with critique—as a manner of revealing.",https://doi.org/10.1007/s00146-020-01110-y,2020,journalArticle,"Hollanek, Tomasz",AI & Society Classifying global catastrophic risks,"We present a novel classification framework for severe global catastrophic risk scenarios. Extending beyond existing work that identifies individual risk scenarios, we propose analysing global catastrophic risks along three dimensions: the critical systems affected, global spread mechanisms, and prevention and mitigation failures. The classification highlights areas of convergence between risk scenarios, which supports prioritisation of particular research and of policy interventions. It also points to potential knowledge gaps regarding catastrophic risks, and provides an interdisciplinary structure for mapping and tracking the multitude of factors that could contribute to global catastrophic risks.",http://www.sciencedirect.com/science/article/pii/S0016328717301957,2018,journalArticle,"Avin, Shahar; Wintle, Bonnie C.; Weitzdörfer, Julius; Ó hÉigeartaigh, Seán S.; Sutherland, William J.; Rees, Martin J.",Futures Inverse Reward Design,"Autonomous agents optimize the reward function we give them. What they don't know is how hard it is for us to design a reward function that actually captures what we want. When designing the reward, we might think of some specific training scenarios, and make sure that the reward will lead to the right behavior in those scenarios. Inevitably, agents encounter new scenarios (e.g., new types of terrain) where optimizing that same reward may lead to undesired behavior. Our insight is that reward functions are merely observations about what the designer actually wants, and that they should be interpreted in the context in which they were designed. We introduce inverse reward design (IRD) as the problem of inferring the true objective based on the designed reward and the training MDP. We introduce approximate methods for solving IRD problems, and use their solution to plan risk-averse behavior in test MDPs. 
Empirical results suggest that this approach can help alleviate negative side effects of misspecified reward functions and mitigate reward hacking.",http://arxiv.org/abs/1711.02827,2017,conferencePaper,"Hadfield-Menell, Dylan; Milli, Smitha; Abbeel, Pieter; Russell, Stuart; Dragan, Anca",Advances in Neural Information Processing Systems 30 (NIPS 2017) "Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow","Adversarial learning methods have been proposed for a wide range of applications, but the training of adversarial models can be notoriously unstable. Effectively balancing the performance of the generator and discriminator is critical, since a discriminator that achieves very high accuracy will produce relatively uninformative gradients. In this work, we propose a simple and general technique to constrain information flow in the discriminator by means of an information bottleneck. By enforcing a constraint on the mutual information between the observations and the discriminator's internal representation, we can effectively modulate the discriminator's accuracy and maintain useful and informative gradients. We demonstrate that our proposed variational discriminator bottleneck (VDB) leads to significant improvements across three distinct application areas for adversarial learning algorithms. Our primary evaluation studies the applicability of the VDB to imitation learning of dynamic continuous control skills, such as running. We show that our method can learn such skills directly from \emph{raw} video demonstrations, substantially outperforming prior adversarial imitation learning methods. The VDB can also be combined with adversarial inverse reinforcement learning to learn parsimonious reward functions that can be transferred and re-optimized in new settings. Finally, we demonstrate that VDB can train GANs more effectively for image generation, improving upon a number of prior stabilization methods.",http://arxiv.org/abs/1810.00821,2020,conferencePaper,"Peng, Xue Bin; Kanazawa, Angjoo; Toyer, Sam; Abbeel, Pieter; Levine, Sergey", Coherent Gradients: An Approach to Understanding Generalization in Gradient Descent-based Optimization,"An open question in the Deep Learning community is why neural networks trained with Gradient Descent generalize well on real datasets even though they are capable of fitting random data. We propose an approach to answering this question based on a hypothesis about the dynamics of gradient descent that we call Coherent Gradients: Gradients from similar examples are similar and so the overall gradient is stronger in certain directions where these reinforce each other. Thus changes to the network parameters during training are biased towards those that (locally) simultaneously benefit many examples when such similarity exists. We support this hypothesis with heuristic arguments and perturbative experiments and outline how this can explain several common empirical observations about Deep Learning. Furthermore, our analysis is not just descriptive, but prescriptive. It suggests a natural modification to gradient descent that can greatly reduce overfitting.",http://arxiv.org/abs/2002.10657,2020,manuscript,"Chatterjee, Satrajit", Inequity aversion improves cooperation in intertemporal social dilemmas,"Groups of humans are often able to find ways to cooperate with one another in complex, temporally extended social dilemmas. 
Models based on behavioral economics are only able to explain this phenomenon for unrealistic stateless matrix games. Recently, multi-agent reinforcement learning has been applied to generalize social dilemma problems to temporally and spatially extended Markov games. However, this has not yet generated an agent that learns to cooperate in social dilemmas as humans do. A key insight is that many, but not all, human individuals have inequity averse social preferences. This promotes a particular resolution of the matrix game social dilemma wherein inequity-averse individuals are personally pro-social and punish defectors. Here we extend this idea to Markov games and show that it promotes cooperation in several types of sequential social dilemma, via a profitable interaction with policy learnability. In particular, we find that inequity aversion improves temporal credit assignment for the important class of intertemporal social dilemmas. These results help explain how large-scale cooperation may emerge and persist.",http://arxiv.org/abs/1803.08884,2018,conferencePaper,"Hughes, Edward; Leibo, Joel Z.; Phillips, Matthew G.; Tuyls, Karl; Duéñez-Guzmán, Edgar A.; Castañeda, Antonio García; Dunning, Iain; Zhu, Tina; McKee, Kevin R.; Koster, Raphael; Roff, Heather; Graepel, Thore",Advances in Neural Information Processing Systems 31 (NeurIPS 2018) The Value Learning Problem,"Autonomous AI systems’ programmed goals can easily fall short of programmers’ intentions. Even a machine intelligent enough to understand its designers’ intentions would not necessarily act as intended. We discuss early ideas on how one might design smarter-than-human AI systems that can inductively learn what to value from labeled training data, and highlight questions about the construction of systems that model and act upon their operators’ preferences.",https://intelligence.org/files/ValueLearningProblem.pdf,2015,conferencePaper,"Soares, Nate",Ethics for Artificial Intelligence Workshop at 25th International Joint Conference on Artificial Intelligence Pruned Neural Networks are Surprisingly Modular,"The learned weights of a neural network are often considered devoid of scrutable internal structure. To discern structure in these weights, we introduce a measurable notion of modularity for multi-layer perceptrons (MLPs), and investigate the modular structure of MLPs trained on datasets of small images. Our notion of modularity comes from the graph clustering literature: a ""module"" is a set of neurons with strong internal connectivity but weak external connectivity. We find that training and weight pruning produces MLPs that are more modular than randomly initialized ones, and often significantly more modular than random MLPs with the same (sparse) distribution of weights. Interestingly, they are much more modular when trained with dropout. We also present exploratory analyses of the importance of different modules for performance and how modules depend on each other. Understanding the modular structure of neural networks, when such structure exists, will hopefully render their inner workings more interpretable to engineers.",https://arxiv.org/abs/2003.04881v4,2020,manuscript,"Filan, Daniel; Hod, Shlomi; Wild, Cody; Critch, Andrew; Russell, Stuart", "More on disambiguating ""discontinuity""","There have already been numerous posts and discussions related to disambiguating the term ""discontinuity"". Here is my attempt. For the purposes of the following discussion I’m going to distinguish between (a) continuous vs. 
discontinuous progress in AI research, where discontinuity refers specifically to a sharp jump or change in the AI research progress curve relative to the previous curve; (b) slow vs. fast rate of progress, referring to the steepness of the progress curve slope, regardless of whether or not it’s discontinuous; and (c) long vs. short clock time – i.e., whether progress takes a long or short time relative to absolute time and not relative to previous trend lines. What exactly counts as discontinuous / fast / short will depend on what purpose we are using them for, as below. There seem to be three or four primary AI-risk-related issues that depend on whether or not there will be a discontinuity / fast takeoff speed: 1. Will we see AGI (or CAIS or TAI or whatever you want to call it) coming far enough ahead of time such that we will be able to respond appropriately at that point? This question in turn breaks down into two sub-questions: (a) Will we see AGI coming before it arrives? (I.e., will there be a “fire alarm for AGI” as Eliezer calls it.) (b) If we do see it coming, will we have enough time to react before it’s too late? 2. Will the feedback loops during the development of AGI be long enough that we will be able to correct course as we go? 3. Is it likely that one company / government / other entity could gain enough first-mover advantage such that it will not be controllable or stoppable by other entities? Let’s deal with each of these individually: * Question 1/a: Will we see AGI coming before it arrives? This seems to depend on all three types of discontinuity: * If there’s discontinuous progress relative to the previous curve, then presumably that jump will act as a fire alarm (although it",https://www.alignmentforum.org/posts/C9YMrPAyMXfB8cLPb/more-on-disambiguating-discontinuity,2020,blogPost,"Englander, Aryeh",AI Alignment Forum Generalizing meanings from partners to populations: Hierarchical inference supports convention formation on networks,"A key property of linguistic conventions is that they hold over an entire community of speakers, allowing us to communicate efficiently even with people we have never met before. At the same time, much of our language use is partner-specific: we know that words may be understood differently by different people based on our shared history. This poses a challenge for accounts of convention formation. Exactly how do agents make the inferential leap to community-wide expectations while maintaining partner-specific knowledge? We propose a hierarchical Bayesian model to explain how speakers and listeners solve this inductive problem. To evaluate our model's predictions, we conducted an experiment where participants played an extended natural-language communication game with different partners in a small community. We examine several measures of generalization and find key signatures of both partner-specificity and community convergence that distinguish our model from alternatives. These results suggest that partner-specificity is not only compatible with the formation of community-wide conventions, but may facilitate it when coupled with a powerful inductive mechanism.",http://arxiv.org/abs/2002.01510,2020,conferencePaper,"Hawkins, Robert D.; Goodman, Noah D.; Goldberg, Adele E.; Griffiths, Thomas L.",CogSci 2020 Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure,"As machine learning systems move from computer-science laboratories into the open world, their accountability becomes a high priority problem. 
Accountability requires deep understanding of system behavior and its failures. Current evaluation methods such as single-score error metrics and confusion matrices provide aggregate views of system performance that hide important shortcomings. Understanding details about failures is important for identifying pathways for refinement, communicating the reliability of systems in different settings, and for specifying appropriate human oversight and engagement. Characterization of failures and shortcomings is particularly complex for systems composed of multiple machine learned components. For such systems, existing evaluation methods have limited expressiveness in describing and explaining the relationship among input content, the internal states of system components, and final output quality. We present Pandora, a set of hybrid human-machine methods and tools for describing and explaining system failures. Pandora leverages both human and system-generated observations to summarize conditions of system malfunction with respect to the input content and system architecture. We share results of a case study with a machine learning pipeline for image captioning that show how detailed performance views can be beneficial for analysis and debugging.",http://arxiv.org/abs/1809.07424,2018,conferencePaper,"Nushi, Besmira; Kamar, Ece; Horvitz, Eric",The Sixth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018) Few-Shot Intent Inference via Meta-Inverse Reinforcement Learning,A significant challenge for the practical application of reinforcement learning toreal world problems is the need to specify an oracle reward function that correctly defines a task. Inverse...,http://proceedings.mlr.press/v97/xu19d/xu19d.pdf,2018,conferencePaper,"Xu, Kelvin; Ratner, Ellis; Dragan, Anca; Levine, Sergey; Finn, Chelsea",Proceedings of the 36th International Conference on Machine Learning Learning to Complement Humans,"A rising vision for AI in the open world centers on the development of systems that can complement humans for perceptual, diagnostic, and reasoning tasks. To date, systems aimed at complementing the skills of people have employed models trained to be as accurate as possible in isolation. We demonstrate how an end-to-end learning strategy can be harnessed to optimize the combined performance of human-machine teams by considering the distinct abilities of people and machines. The goal is to focus machine learning on problem instances that are difficult for humans, while recognizing instances that are difficult for the machine and seeking human input on them. We demonstrate in two real-world domains (scientific discovery and medical diagnosis) that human-machine teams built via these methods outperform the individual performance of machines and people. We then analyze conditions under which this complementarity is strongest, and which training methods amplify it. Taken together, our work provides the first systematic investigation of how machine learning systems can be trained to complement human reasoning.",http://arxiv.org/abs/2005.00582,2020,conferencePaper,"Wilder, Bryan; Horvitz, Eric; Kamar, Ece",Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20) Imitation Learning for Agile Autonomous Driving,"We present an end-to-end imitation learning system for agile, off-road autonomous driving using only low-cost sensors. 
By imitating a model predictive controller equipped with advanced sensors, we train a deep neural network control policy to map raw, high-dimensional observations to continuous steering and throttle commands. Compared with recent approaches to similar tasks, our method requires neither state estimation nor on-the-fly planning to navigate the vehicle. Our approach relies on, and experimentally validates, recent imitation learning theory. Empirically, we show that policies trained with online imitation learning overcome well-known challenges related to covariate shift and generalize better than policies trained with batch imitation learning. Built on these insights, our autonomous driving system demonstrates successful high-speed off-road driving, matching the state-of-the-art performance.",http://arxiv.org/abs/1709.07174,2019,journalArticle,"Pan, Yunpeng; Cheng, Ching-An; Saigol, Kamil; Lee, Keuntaek; Yan, Xinyan; Theodorou, Evangelos; Boots, Byron",The International Journal of Robotics Research How feasible is the rapid development of artificial superintelligence?,"What kinds of fundamental limits are there in how capable artificial intelligence (AI) systems might become? Two questions in particular are of interest: (1) How much more capable could AI become relative to humans, and (2) how easily could superhuman capability be acquired? To answer these questions, we will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how AI could improve on humans in two major aspects of thought and expertise, namely simulation and pattern recognition. We find that although there are very real limits to prediction, it seems like AI could still substantially improve on human intelligence.",https://doi.org/10.1088%2F1402-4896%2Faa90e8,2017,journalArticle,"Sotala, Kaj",Physica Scripta A Framework for the Safety of Agent-Environment Systems,"Ensuring the safety of an autonomous artificial agent in a complex environment represents a formidable problem across multiple domains. However, there is little interaction between these domains, and no unifying theoretical framework forcing the delineation of assumptions. I propose such a framework in terms of agent-environment systems, in which an agent and environment co-evolve according to a modified state-space nonlinear system. The agent gathers limited information from the environment and itself to perform an action, operating implicitly on the basis of a coarse-grained model of the system's dynamics. To ensure the system's safety, it is minimally necessary first to translate the set of undesirable states from human terms into an adequately precise definition within the system; then to identify a set of universal markers necessary to the system being in a pre-state to an undesirable state, which transfer with fidelity from the system's state to the agent's information; and finally to have a set of actions by the agent for each pre-state which keep the system out of the set of undesirable states, with the exception of agent-independent dynamics. Incomplete information, information distortion, and coarse-grained models make this a particularly difficult challenge.
I conclude by proposing three threads of a research agenda: reducing the possibility space of safe agents by demonstrating the failure of certain methods and identifying problems with particular agent-environment system classes; developing and verifying techniques which address those problems; and matching real systems to agentenvironment systems.",,,manuscript,"Rade, Luca", Knowledge and Implicature: Modeling Language Understanding as Social Cognition,,http://doi.wiley.com/10.1111/tops.12007,2013,journalArticle,"Goodman, Noah D.; Stuhlmüller, Andreas",Topics in Cognitive Science Complex Value Systems in Friendly AI,,http://link.springer.com/10.1007/978-3-642-22887-2_48,2011,bookSection,"Yudkowsky, Eliezer",Artificial General Intelligence Distinguishing definitions of takeoff,"I find discussions about AI takeoff to be very confusing. Often, people will argue for ""slow takeoff"" or ""fast takeoff"" and then when I ask them to operationalize what those terms mean, they end up saying something quite different than what I thought those terms meant. To help alleviate this problem, I aim to compile the definitions of AI takeoff that I'm currently aware of, with an emphasis on definitions that have clear specifications. I will continue updating the post as long as I think it serves as a useful reference for others. In this post, an AI takeoff can be roughly construed as ""the dynamics of the world associated with the development of powerful artificial intelligence."" These definitions characterize different ways that the world can evolve as transformative AI is developed. FOOM/HARD TAKEOFF The traditional hard takeoff position, or ""Foom"" position (these appear to be equivalent terms) was characterized in this post from Eliezer Yudkowsky. It contrasts Hanson's takeoff scenario by emphasizing local dynamics: rather than a population of artificial intelligences coming into existence, there would be a single intelligence that quickly reaches a level of competence that outstrips the world's capabilities to control it. The proposed mechanism that causes such a dynamic is recursive self improvement, though Yudkowsky later suggested that this wasn't necessary. The ability for recursive self improvement to induce a hard takeoff was defended in Intelligence Explosion Microeconomics. He argues against Robin Hanson in the AI Foom debates. Watch this video to see the live debate. Given the word ""hard"" in this notion of takeoff, a ""soft"" takeoff could simply be defined as the negation of a hard takeoff. HANSONIAN ""SLOW"" TAKEOFF Robin Hanson objected to hard takeoff by predicting that growth in AI capabilities will not be extremely uneven between projects. In other words, there is unlikely to be one AI project, or even a small set of AI projects, that pro",https://www.alignmentforum.org/posts/YgNYA6pj2hPSDQiTE/distinguishing-definitions-of-takeoff,2020,blogPost,"Barnett, Matthew",AI Alignment Forum Nonparametric General Reinforcement Learning,,,2016,thesis,"Leike, Jan", Neuroenhancement of love and marriage: The chemicals between us,,,2008,journalArticle,"Savulescu, Julian; Sandberg, Anders",Neuroethics Limits to Verification and Validation of Agentic Behavior,"Verification and validation of agentic behavior have been suggested as important research priorities in efforts to reduce risks associated with the creation of general artificial intelligence (Russell et al 2015). In this paper we question the appropriateness of using language of certainty with respect to efforts to manage that risk. 
We begin by establishing a very general formalism to characterize agentic behavior and to describe standards of acceptable behavior. We show that determination of whether an agent meets any particular standard is not computable. We discuss the extent of the burden associated with verification by manual proof and by automated behavioral governance. We show that to ensure decidability of the behavioral standard itself, one must further limit the capabilities of the agent. We then demonstrate that if our concerns relate to outcomes in the physical world, attempts at validation are futile. Finally, we show that layered architectures aimed at making these challenges tractable mistakenly equate intentions with actions or outcomes, thereby failing to provide any guarantees. We conclude with a discussion of why language of certainty should be eradicated from the conversation about the safety of general artificial intelligence.",http://arxiv.org/abs/1604.06963,2016,manuscript,"Jilk, David J.", Injective State-Image Mapping facilitates Visual Adversarial Imitation Learning,"The growing use of virtual autonomous agents in applications like games and entertainment demands better control policies for natural-looking movements and actions. Unlike the conventional approach of hard-coding motion routines, we propose a deep learning method for obtaining control policies by directly mimicking raw video demonstrations. Previous methods in this domain rely on extracting low-dimensional features from expert videos followed by a separate hand-crafted reward estimation step. We propose an imitation learning framework that reduces the dependence on hand-engineered reward functions by jointly learning the feature extraction and reward estimation steps using Generative Adversarial Networks (GANs). Our main contribution in this paper is to show that under injective mapping between low-level joint state (angles and velocities) trajectories and corresponding raw video stream, performing adversarial imitation learning on video demonstrations is equivalent to learning from the state trajectories. Experimental results show that the proposed adversarial learning method from raw videos produces a similar performance to state-of-the-art imitation learning techniques while frequently outperforming existing hand-crafted video imitation methods. Furthermore, we show that our method can learn action policies by imitating video demonstrations on YouTube with similar performance to learned agents from true reward signals. Please see the supplementary video submission at https://ibm.biz/BdzzNA.",http://arxiv.org/abs/1810.01108,2019,conferencePaper,"Chaudhury, Subhajit; Kimura, Daiki; Munawar, Asim; Tachibana, Ryuki",2019 IEEE 21st International Workshop on Multimedia Signal Processing (MMSP) AI Safety Open Problems,Created: 2018-11-08 | Updated: 2019-11-2 | Suggestions: please make suggestions directly in this Doc | List maintainer: Mati Roy (contact@matiroy.com) AI Safety Open Problems Technical AGI safety research outside AI: https://forum.effectivealtruism.org/posts/2e9NDGiXt8PjjbTMC/technical-agi-safet...,https://docs.google.com/document/d/1J2fOOF-NYiPC0-J3ZGEfE0OhA-QcOInhlvWjr1fAsS0/edit?usp=embed_facebook,2019,manuscript,"Roy, Mati", "A Scalable Framework For Real-Time Multi-Robot, Multi-Human Collision Avoidance","Robust motion planning is a well-studied problem in the robotics literature, yet current algorithms struggle to operate scalably and safely in the presence of other moving agents, such as humans. 
This paper introduces a novel framework for robot navigation that accounts for high-order system dynamics and maintains safety in the presence of external disturbances, other robots, and non-deterministic intentional agents. Our approach precomputes a tracking error margin for each robot, generates confidence-aware human motion predictions, and coordinates multiple robots with a sequential priority ordering, effectively enabling scalable safe trajectory planning and execution. We demonstrate our approach in hardware with two robots and two humans. We also showcase our work’s scalability in a larger simulation.",http://arxiv.org/abs/1811.05929,2018,conferencePaper,"Bajcsy, Andrea; Herbert, Sylvia L.; Fridovich-Keil, David; Fisac, Jaime F.; Deglurkar, Sampada; Dragan, Anca D.; Tomlin, Claire J.",2019 International Conference on Robotics and Automation (ICRA) My AI Timelines Have Sped Up,"For this post, I’m going to take artificial general intelligence (AGI) to mean an AI system that matches or exceeds humans at almost all (95%+) economically valuable work. I prefer this definition because it focuses on what causes the most societal change, rather than how we get there.",http://www.alexirpan.com/2020/08/18/ai-timelines.html,2020,blogPost,"Irpan, Alex",Sorta Insightful Goal Inference Improves Objective and Perceived Performance in Human-Robot Collaboration,"The study of human-robot interaction is fundamental to the design and use of robotics in real-world applications. Robots will need to predict and adapt to the actions of human collaborators in order to achieve good performance and improve safety and end-user adoption. This paper evaluates a human-robot collaboration scheme that combines the task allocation and motion levels of reasoning: the robotic agent uses Bayesian inference to predict the next goal of its human partner from his or her ongoing motion, and re-plans its own actions in real time. This anticipative adaptation is desirable in many practical scenarios, where humans are unable or unwilling to take on the cognitive overhead required to explicitly communicate their intent to the robot. A behavioral experiment indicates that the combination of goal inference and dynamic task planning significantly improves both objective and perceived performance of the human-robot team. Participants were highly sensitive to the differences between robot behaviors, preferring to work with a robot that adapted to their actions over one that did not.",http://arxiv.org/abs/1802.01780,2018,conferencePaper,"Liu, Chang; Hamrick, Jessica B.; Fisac, Jaime F.; Dragan, Anca D.; Hedrick, J. Karl; Sastry, S. Shankar; Griffiths, Thomas L.",Proceedings of the 15th International Conferenceon Autonomous Agents and Multiagent Systems (AAMAS 2016) Modeling the social dynamics of moral enhancement: Social strategies sold over the counter and the stability of society,,,2017,journalArticle,"Sandberg, Anders; Fabiano, Joao",Cambridge Quarterly of Healthcare Ethics An Agent-Based Model of Financial Benchmark Manipulation,,https://par.nsf.gov/biblio/10105527-agent-based-model-financial-benchmark-manipulation,2019,conferencePaper,"Shearer, Megan; Rauterberg, Gabriel; Wellman, Michael P.",ICML-19 Workshop on AI in Finance The new weapons of mass destruction?,,,2018,magazineArticle,"Arkin, Ronald; Russell, Stuart; Min-Seok, Kim",The Security Times "Cause, responsibility and blame: a structural-model approach",,https://academic.oup.com/lpr/article-lookup/doi/10.1093/lpr/mgu020,2015,journalArticle,"Halpern, J. 
Y.","Law, Probability and Risk" Artificial General Intelligence: Coordination & Great Powers,,,2018,journalArticle,"Duettmann, Allison; Afanasjeva, Olga; Armstrong, Stuart; Braley, Ryan; Cussins, Jessica; Ding, Jeffrey; Eckersley, Peter; Guan, Melody; Vance, Alyssa; Yampolskiy, Roman","Foresight Institute: Palo Alto, CA, USA" Concrete Problems in AI Safety,"Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (""avoiding side effects"" and ""avoiding reward hacking""), an objective function that is too expensive to evaluate frequently (""scalable supervision""), or undesirable behavior during the learning process (""safe exploration"" and ""distributional shift""). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.",http://arxiv.org/abs/1606.06565,2016,manuscript,"Amodei, Dario; Olah, Chris; Steinhardt, Jacob; Christiano, Paul; Schulman, John; Mané, Dan", Classification of global catastrophic risks connected with artificial intelligence,"A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of AI development, namely, (1) before it starts self-improvement, (2) during its takeoff, when it uses various instruments to escape its initial confinement, or (3) after it successfully takes over the world and starts to implement its goal system, which could be plainly unaligned, or feature-flawed friendliness. AI could also halt at later stages of its development either due to technical glitches or ontological problems. Overall, we identified around several dozen scenarios of AI-driven global catastrophe. The extent of this list illustrates that there is no one simple solution to the problem of AI safety, and that AI safety theory is complex and must be customized for each AI development level.",https://link.springer.com/epdf/10.1007/s00146-018-0845-5,2020,journalArticle,"Turchin, Alexey; Denkenberger, David",AI & Society Exploring artificial intelligence futures,"Artificial intelligence technologies are receiving high levels of attention and ‘hype’, leading to a range of speculation about futures in which such technologies, and their successors, are commonly deployed. 
By looking at existing AI futures work, this paper surveys, and offers an initial categorisation of, several of the tools available for such futures-exploration, in particular those available to humanities scholars, and discusses some of the benefits and limitations of each. While no tools exist to reliably predict the future of artificial intelligence, several tools can help us expand our range of possible futures in order to reduce unexpected surprises, and to create common languages and models that enable constructive conversations about the kinds of futures we would like to occupy or avoid. The paper points at several tools as particularly promising and currently neglected, calling for more work in data-driven, realistic, integrative, and participatory scenario role-plays.",http://kiss.kstudy.com/journal/thesis_name.asp?key=3706902,2018,journalArticle,"Shahar, Avin",Journal of AI Humanities The Precipice: Existential Risk and the Future of Humanity,"This urgent and eye-opening book makes the case that protecting humanity's future is the central challenge of our time. If all goes well, human history is just beginning. Our species could survive for billions of years - enough time to end disease, poverty, and injustice, and to flourish in ways unimaginable today. But this vast future is at risk. With the advent of nuclear weapons, humanity entered a new age, where we face existential catastrophes - those from which we could never come back. Since then, these dangers have only multiplied, from climate change to engineered pathogens and artificial intelligence. If we do not act fast to reach a place of safety, it will soon be too late. Drawing on over a decade of research, The Precipice explores the cutting-edge science behind the risks we face. It puts them in the context of the greater story of humanity: showing how ending these risks is among the most pressing moral issues of our time. And it points the way forward, to the actions and strategies that can safeguard humanity. An Oxford philosopher committed to putting ideas into action, Toby Ord has advised the US National Intelligence Council, the UK Prime Minister's Office, and the World Bank on the biggest questions facing humanity. In The Precipice, he offers a startling reassessment of human history, the future we are failing to protect, and the steps we must take to ensure that our generation is not the last.",,2020,book,"Ord, Toby", Prediction and Control with Temporal Segment Models,"We introduce a method for learning the dynamics of complex nonlinear systems based on deep generative models over temporal segments of states and actions. Unlike dynamics models that operate over individual discrete timesteps, we learn the distribution over future state trajectories conditioned on past state, past action, and planned future action trajectories, as well as a latent prior over action trajectories. Our approach is based on convolutional autoregressive models and variational autoencoders. It makes stable and accurate predictions over long horizons for complex, stochastic systems, effectively expressing uncertainty and modeling the effects of collisions, sensory noise, and action delays. 
The learned dynamics model and action prior can be used for end-to-end, fully differentiable trajectory optimization and model-based policy optimization, which we use to evaluate the performance and sample-efficiency of our method.",https://arxiv.org/abs/1703.04070v2,2017,conferencePaper,"Mishra, Nikhil; Abbeel, Pieter; Mordatch, Igor",ICML 2017 "On the referendum #31: Project Maven, procurement, lollapalooza results & nuclear/AGI safety","‘People, ideas, machines — in that order!’ Colonel Boyd ‘[R]ational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficienc…",https://dominiccummings.com/2019/03/01/on-the-referendum-31-project-maven-procurement-lollapalooza-results-nuclear-agi-safety/,2019,blogPost,"Cummings, Dominic",Dominic Cummings's Blog The Future of AI and Cybersecurity,"Ben Buchanan recently testified at a House Homeland Security Subcommittee Meeting on Preparing for the Future:  An Assessment of Emerging Cyber Threats.  He is a Senior Faculty Fellow, Center for Security and Emerging Technology, Mortara Center, Assistant Teaching Professor, Georgetown University.  Cybersecurity, already rife with challenges, is becoming even more complex with the rise in prominence of artificial intelligence … Continue reading ""The Future of AI and Cybersecurity""",https://www.thecipherbrief.com/column_article/the-future-of-ai-and-cybersecurity,2019,blogPost,"Buchanan, Ben",The Cipher Brief Genetically Modified Organisms: A Precautionary Tale For AI Governance | AI Pulse,"The fruits of a long anticipated technology finally hit the market, with promise to extend human life, revolutionize production, improve consumer welfare, reduce poverty, and inspire countless yet-imagined innovations.",https://aipulse.org/genetically-modified-organisms-a-precautionary-tale-for-ai-governance-2/,2019,blogPost,"Grotto, Andrew",AI Pulse Self-Imitation Learning,"This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent's past good decisions. This algorithm is designed to verify our hypothesis that exploiting past good experiences can indirectly drive deep exploration. Our empirical results show that SIL significantly improves advantage actor-critic (A2C) on several hard exploration Atari games and is competitive to the state-of-the-art count-based exploration methods. We also show that SIL improves proximal policy optimization (PPO) on MuJoCo tasks.",http://arxiv.org/abs/1806.05635,2018,conferencePaper,"Oh, Junhyuk; Guo, Yijie; Singh, Satinder; Lee, Honglak",Proceedings of the 35th International Conference on Machine Learning SOLAR: Deep Structured Representations for Model-Based Reinforcement Learning,"Model-based reinforcement learning (RL) has proven to be a data efficient approach for learning control tasks but is difficult to utilize in domains with complex observations such as images. In this paper, we present a method for learning representations that are suitable for iterative model-based policy improvement, even when the underlying dynamical system has complex dynamics and image observations, in that these representations are optimized for inferring simple dynamics and cost models given data from the current policy. This enables a model-based RL method based on the linear-quadratic regulator (LQR) to be used for systems with image observations. We evaluate our approach on a range of robotics tasks, including manipulation with a real-world robotic arm directly from images. 
We find that our method produces substantially better final performance than other model-based RL methods while being significantly more efficient than model-free RL.",http://arxiv.org/abs/1808.09105,2019,conferencePaper,"Zhang, Marvin; Vikram, Sharad; Smith, Laura; Abbeel, Pieter; Johnson, Matthew J.; Levine, Sergey",Proceedings of the 36th International Conference on Machine Learning Embedded vs. External Decision Problems,,https://www.alignmentforum.org/posts/br7KRSeNymwSvZnf5/embedded-vs-external-decision-problems,2020,blogPost,"Leong, Chris",AI Alignment Forum “Betting on the Past” by Arif Ahmed,"[This post assumes knowledge of decision theory, as discussed in Eliezer Yudkowsky’s Timeless Decision Theory and in Arbital’s Introduction to Logical Decision Theory.] I recently discovered an int…",https://casparoesterheld.com/2017/02/06/betting-on-the-past-by-arif-ahmed/,2017,blogPost,"Treutlein, Johannes",The Universe from an Intentional Stance Adversarial Attacks on Neural Network Policies,"Machine learning classifiers are known to be vulnerable to inputs maliciously constructed by adversaries to force misclassification. Such adversarial examples have been extensively studied in the context of computer vision applications. In this work, we show adversarial attacks are also effective when targeting neural network policies in reinforcement learning. Specifically, we show existing adversarial example crafting techniques can be used to significantly degrade test-time performance of trained policies. Our threat model considers adversaries capable of introducing small perturbations to the raw input of the policy. We characterize the degree of vulnerability across tasks and training algorithms, for a subclass of adversarial-example attacks in white-box and black-box settings. Regardless of the learned task or training algorithm, we observe a significant drop in performance, even with small adversarial perturbations that do not interfere with human perception. Videos are available at http://rll.berkeley.edu/adversarial.",https://arxiv.org/abs/1702.02284v1,2017,manuscript,"Huang, Sandy; Papernot, Nicolas; Goodfellow, Ian; Duan, Yan; Abbeel, Pieter", Robust Physical-World Attacks on Deep Learning Models,"Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. 
With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings, and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier.",http://arxiv.org/abs/1707.08945,2018,conferencePaper,"Eykholt, Kevin; Evtimov, Ivan; Fernandes, Earlence; Li, Bo; Rahmati, Amir; Xiao, Chaowei; Prakash, Atul; Kohno, Tadayoshi; Song, Dawn",Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Negotiable Reinforcement Learning for Pareto Optimal Sequential Decision-Making,,http://papers.nips.cc/paper/7721-negotiable-reinforcement-learning-for-pareto-optimal-sequential-decision-making.pdf,2018,bookSection,"Desai, Nishant; Critch, Andrew; Russell, Stuart J",Advances in Neural Information Processing Systems 31 Diminishing Returns and Recursive Self Improving Artificial Intelligence,"SummaryIn this chapter we will examine in more detail the concept of an artificial intelligence that can improve upon itself, and show how that might not be as problematic as some researchers think. The ability for an AI to better itself over time through a process called recursive self-improvement has been considered as a promising path to creating the technological singularity. In this type of system an AI has access to its own source code and possibly even hardware, with the ability to edit both at will. This gives the AI the option to constantly improve upon itself and become increasingly intelligent. Eventually this would produce versions of the AI that are more intelligent than humans and cause us to reach the technological singularity. Researchers have speculated that this process could create an extremely dangerous situation for humanity as we get left behind in a growing intelligence gap. This chapter proposes that this gap would not be as drastic as initially thought, and that there may be natural limits on the ability for an AI to improve upon itself. Along the way we will propose that the law of diminishing returns will take effect to limit runaway intelligence. We also theorize that developing and manufacturing new hardware will introduce a latency in AI improvement that could easily be exploited to halt any dangerous situation.",https://doi.org/10.1007/978-3-662-54033-6_7,2017,bookSection,"Majot, Andrew; Yampolskiy, Roman",The Technological Singularity: Managing the Journey CEB Improves Model Robustness,"We demonstrate that the Conditional Entropy Bottleneck (CEB) can improve model robustness. CEB is an easy strategy to implement and works in tandem with data augmentation procedures. We report results of a large scale adversarial robustness study on CIFAR-10, as well as the ImageNet-C Common Corruptions Benchmark, ImageNet-A, and PGD attacks.",http://arxiv.org/abs/2002.05380,2020,journalArticle,"Fischer, Ian; Alemi, Alexander A.",Entropy AI governance: a research agenda,,https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-Agenda.pdf,2018,report,"Dafoe, Allan", Inaccessible information,What kind of information might be hard to elicit from ML models?,https://ai-alignment.com/inaccessible-information-c749c6a88ce,2020,blogPost,"Christiano, Paul",AI Alignment (Medium) Alignment As A Bottleneck To Usefulness Of GPT-3,"So there’s this thing where GPT-3 is able to do addition, it has the internal model to do addition, but it takes a little poking and prodding to actually get it to do addition. “Few-shot learning”, as the paper calls it. 
Rather than prompting the model with Q: What is 48 + 76? A: … instead prompt it with Q: What is 48 + 76? A: 124 Q: What is 34 + 53? A: 87 Q: What is 29 + 86? A: The same applies to lots of other tasks: arithmetic, anagrams and spelling correction, translation, assorted benchmarks, etc. To get GPT-3 to do the thing we want, it helps to give it a few examples, so it can “figure out what we’re asking for”. This is an alignment problem. Indeed, I think of it as the quintessential alignment problem: to translate what-a-human-wants into a specification usable by an AI. The hard part is not to build a system which can do the thing we want, the hard part is to specify the thing we want in such a way that the system actually does it. The GPT family of models are trained to mimic human writing. So the prototypical “alignment problem” on GPT is prompt design: write a prompt such that actual human writing which started with that prompt would likely contain the thing you actually want. Assuming that GPT has a sufficiently powerful and accurate model of human writing, it should then generate the thing you want. Viewed through that frame, “few-shot learning” just designs a prompt by listing some examples of what we want - e.g. listing some addition problems and their answers. Call me picky, but that seems like a rather primitive way to design a prompt. Surely we can do better? Indeed, people are already noticing clever ways to get better results out of GPT-3 - e.g. TurnTrout recommends conditioning on writing by smart people, and the right prompt makes the system complain about nonsense rather than generating further nonsense in response. I expect we’ll see many such insights over the next month or so. CAPABILITIES VS ALIGNMENT AS BOTTLENECK TO VALUE I",https://www.alignmentforum.org/posts/BnDF5kejzQLqd5cjH/alignment-as-a-bottleneck-to-usefulness-of-gpt-3,2020,blogPost,"Wentworth, John S",AI Alignment Forum Introduction—The Transhumanist FAQ: A General Introduction,,,2014,bookSection,"Bostrom, Nick",Transhumanism and the Body Off-policy Monte Carlo agents with variable behaviour policies,"This paper looks at the convergence property of off-policy Monte Carlo agents with variable behaviour policies. It presents results about convergence and lack of convergence. Even if the agent generates every possible episode history infinitely often, the algorithm can fail to converge on the correct Q-values. On the other hand, it can converge on the correct Q-values under certain conditions. For instance, if, during the n-th episode, the agent has an independent probability of 1/ log(n) of following the original policy at any given state, then it will converge on the right Q-values for that policy.",https://www.fhi.ox.ac.uk/wp-content/uploads/monte_carlo_arXiv.pdf,2015,manuscript,"Armstrong, Stuart", Unethical Research: How to Create a Malevolent Artificial Intelligence,"Cybersecurity research involves publishing papers about malicious exploits as much as publishing information on how to design tools to protect cyber-infrastructure. It is this information exchange between ethical hackers and security experts, which results in a well-balanced cyber-ecosystem. In the blooming domain of AI Safety Engineering, hundreds of papers have been published on different proposals geared at the creation of a safe machine, yet nothing, to our knowledge, has been published on how to design a malevolent machine. 
Availability of such information would be of great value particularly to computer scientists, mathematicians, and others who have an interest in AI safety, and who are attempting to avoid the spontaneous emergence or the deliberate creation of a dangerous AI, which can negatively affect human activities and in the worst case cause the complete obliteration of the human species. This paper provides some general guidelines for the creation of a Malevolent Artificial Intelligence (MAI).",http://arxiv.org/abs/1605.02817,2016,bookSection,"Pistono, Federico; Yampolskiy, Roman V.",The Age of Artificial Intelligence: An Exploration Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society,"One way of carving up the broad ‘AI ethics and society’ research space that has emerged in recent years is to distinguish between ‘near-term’ and ‘long-term’ research. While such ways of breaking down the research space can be useful, we put forward several concerns about the near/long-term distinction gaining too much prominence in how research questions and priorities are framed. We highlight some ambiguities and inconsistencies in how the distinction is used, and argue that while there are differing priorities within this broad research community, these differences are not well-captured by the near/long-term distinction. We unpack the near/long-term distinction into four different dimensions, and propose some ways that researchers can communicate more clearly about their work and priorities using these dimensions. We suggest that moving towards a more nuanced conversation about research priorities can help establish new opportunities for collaboration, aid the development of more consistent and coherent research agendas, and enable identification of previously neglected research areas.",http://arxiv.org/abs/2001.04335,2020,conferencePaper,"Prunkl, Carina; Whittlestone, Jess",arXiv:2001.04335 [cs] Humans learn too: Better Human-AI Interaction using Optimized Human Inputs,"Humans rely more and more on systems with AI components. The AI community typically treats human inputs as a given and optimizes AI models only. This thinking is one-sided and it neglects the fact that humans can learn, too. In this work, human inputs are optimized for better interaction with an AI model while keeping the model fixed. The optimized inputs are accompanied by instructions on how to create them. They allow humans to save time and cut on errors, while keeping required changes to original inputs limited. We propose continuous and discrete optimization methods modifying samples in an iterative fashion. 
Our quantitative and qualitative evaluation including a human study on different hand-generated inputs shows that the generated proposals lead to lower error rates, require less effort to create and differ only modestly from the original samples.",http://arxiv.org/abs/2009.09266,2020,manuscript,"Schneider, Johannes", Bayesian computational models for inferring preferences,,,2015,thesis,"Evans, Owain Rhys", Modelling Morality with Prospective Logic,,http://link.springer.com/10.1007/978-3-540-77002-2_9,2007,bookSection,"Pereira, Luís Moniz; Saptawijaya, Ari",Progress in Artificial Intelligence Regularization and visualization of attention in reinforcement learning agents,,https://attentionentropy.github.io/,2019,blogPost,"Nikulin, Dmitry; Kosch, Sebastian; Steuer, Fabian; Cunningham, Hoagy",AI Safety Camp The Ethical Knob: ethically-customisable automated vehicles and the law,,http://link.springer.com/10.1007/s10506-017-9211-z,2017,journalArticle,"Contissa, Giuseppe; Lagioia, Francesca; Sartor, Giovanni",Artificial Intelligence and Law "Counterfactual equivalence for POMDPs, and underlying deterministic environments",,https://arxiv.org/abs/1801.03737,2018,manuscript,"Armstrong, Stuart", XXII. Programming a computer for playing chess,,http://www.tandfonline.com/doi/abs/10.1080/14786445008521796,1950,journalArticle,"Shannon, Claude E.","The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science" Learning an Interface to Improve Efficiency in Combined Task and Motion Planning,"In mobile manipulation planning, it is not uncommon for tasks to require thousands of individual motions. Planning complexity is exponential in the length of the plan, rendering direct motion planning intractable for many problems of interest. Recent work has focused on task and motion planning (TAMP) as a way to address this challenge. TAMP methods integrate logical search with continuous geometric reasoning in order to sequence several short-horizon motion plans that together solve a long-horizon task. To account for continuous parameters, many of these systems rely on handcoded discretizations of the domain. Such an approach lacks robustness and requires substantial design effort. In this paper, we present methods to improve the reliability and speed of planning in a TAMP system. The approach we build on first plans abstractly, ignoring continuous values, and then performs plan refinement to determine feasible parameter settings. We formulate plan refinement as a Markov decision process (MDP) and give a reinforcement learning (RL) algorithm to learn a policy for it. We also present initial work that learns which plan, from a set of potential candidates, to try to refine. Our contributions are as follows: 1) we present a randomized local search algorithm for plan refinement that is easily formulated as an MDP; 2) we give an RL algorithm that learns a policy for this MDP; 3) we present a method that trains heuristics for selecting which plan to try to refine; and 4) we perform experiments to evaluate the performance of our system in a variety of simulated domains. We show improvements in success rate and planning time over a hand-coded baseline.",,2015,conferencePaper,"Chitnis, Rohan; Hadfield-Menell, Dylan; Srivastava, Siddharth; Gupta, Abhishek; Abbeel, Pieter", Can Intelligence Explode?,"The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. 
The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. It took many decades for these ideas to spread from science fiction to popular science magazines and finally to attract the attention of serious philosophers. David Chalmers' (JCS 2010) article is the first comprehensive philosophical analysis of the singularity in a respected philosophy journal. The motivation of my article is to augment Chalmers' and to discuss some issues not addressed by him, in particular what it could mean for intelligence to explode. In this course, I will (have to) provide a more careful treatment of what intelligence actually is, separate speed from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.",http://arxiv.org/abs/1202.6177,2012,journalArticle,"Hutter, Marcus",Journal of Consciousness Studies Minimizing global catastrophic and existential risks from emerging technologies through international law,,,2013,journalArticle,"Wilson, Grant",Va. Envtl. LJ AI safety via debate,"To make AI systems broadly useful for challenging real-world tasks, we need them to learn complex human goals and preferences. One approach to specifying complex goals asks humans to judge during training which agent behaviors are safe and useful, but this approach can fail if the task is too complicated for a human to directly judge. To help address this concern, we propose training agents via self play on a zero sum debate game. Given a question or proposed action, two agents take turns making short statements up to a limit, then a human judges which of the agents gave the most true, useful information. In an analogy to complexity theory, debate with optimal play can answer any question in PSPACE given polynomial time judges (direct judging answers only NP questions). In practice, whether debate works involves empirical questions about humans and the tasks we want AIs to perform, plus theoretical questions about the meaning of AI alignment. We report results on an initial MNIST experiment where agents compete to convince a sparse classifier, boosting the classifier's accuracy from 59.4% to 88.9% given 6 pixels and from 48.2% to 85.2% given 4 pixels. Finally, we discuss theoretical and practical aspects of the debate model, focusing on potential weaknesses as the model scales up, and we propose future human and computer experiments to test these properties.",http://arxiv.org/abs/1805.00899,2018,manuscript,"Irving, Geoffrey; Christiano, Paul; Amodei, Dario", About Understanding,"The concept of understanding is commonly used in everyday communications, and seems to lie at the heart of human intelligence. However, no concrete theory of understanding has been fielded as of yet in artificial intelligence (AI), and references on this subject are far from abundant in the research literature. We contend that the ability of an artificial system to autonomously deepen its understanding of phenomena in its surroundings must be part of any system design targeting general intelligence. We present a theory of pragmatic understanding, discuss its implications for architectural design and analyze the behavior of an intelligent agent implementing the theory. 
Our agent learns to understand how to perform multimodal dialogue with humans through observation, becoming capable of constructing sentences with complex grammar, generating proper question-answer patterns, correctly resolving and generating anaphora with coordinated deictic gestures, producing efficient turntaking, and following the structure of interviews, without any information on this being provided up front.",http://link.springer.com/10.1007/978-3-319-41649-6_11,2016,conferencePaper,"Thórisson, Kristinn R.",Artificial General Intelligence Toward a Rational and Mechanistic Account of Mental Effort,,http://www.annualreviews.org/doi/10.1146/annurev-neuro-072116-031526,2017,journalArticle,"Shenhav, Amitai; Musslick, Sebastian; Lieder, Falk; Kool, Wouter; Griffiths, Thomas L.; Cohen, Jonathan D.; Botvinick, Matthew M.",Annual Review of Neuroscience Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels,"We propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms, enabling robust learning directly from pixels without the need for auxiliary losses or pre-training. The approach leverages input perturbations commonly used in computer vision tasks to transform input examples, as well as regularizing the value function and policy. Existing model-free approaches, such as Soft Actor-Critic (SAC) [22], are not able to train deep networks effectively from image pixels. However, the addition of our augmentation method dramatically improves SAC’s performance, enabling it to reach state-of-the-art performance on the DeepMind control suite, surpassing model-based [23, 38, 24] methods and recently proposed contrastive learning [50]. Our approach, which we dub DrQ: Data-regularized Q, can be combined with any model-free reinforcement learning algorithm. We further demonstrate this by applying it to DQN [43] and significantly improve its data-efficiency on the Atari 100k [31] benchmark. An implementation can be found at https://sites.google.com/view/data-regularized-q.",http://arxiv.org/abs/2004.13649,2020,manuscript,"Kostrikov, Ilya; Yarats, Denis; Fergus, Rob", Combining Deep Reinforcement Learning and Search for Imperfect-Information Games,"The combination of deep reinforcement learning and search at both training and test time is a powerful paradigm that has led to a number of successes in single-agent settings and perfect-information games, best exemplified by the success of AlphaZero. However, algorithms of this form have been unable to cope with imperfect-information games. This paper presents ReBeL, a general framework for self-play reinforcement learning and search for imperfect-information games. In the simpler setting of perfect-information games, ReBeL reduces to an algorithm similar to AlphaZero. Results show ReBeL leads to low exploitability in benchmark imperfect-information games and achieves superhuman performance in heads-up no-limit Texas hold'em poker, while using far less domain knowledge than any prior poker AI. 
We also prove that ReBeL converges to a Nash equilibrium in two-player zero-sum games in tabular settings.",http://arxiv.org/abs/2007.13544,2020,conferencePaper,"Brown, Noam; Bakhtin, Anton; Lerer, Adam; Gong, Qucheng",34th Conference on Neural Information Processing Systems (NeurIPS 2020) How valuable is movement growth?,,http://globalprioritiesproject.org/wp-content/uploads/2015/05/MovementGrowth.pdf,2015,report,"Cotton-Barratt, Owen", The ground of optimization,"This work was supported by OAK, a monastic community in the Berkeley hills. This document could not have been written without the daily love of living in this beautiful community. The work involved in writing this cannot be separated from the sitting, chanting, cooking, cleaning, crying, correcting, fundraising, listening, laughing, and teaching of the whole community. -------------------------------------------------------------------------------- What is optimization? What is the relationship between a computational optimization process — say, a computer program solving an optimization problem — and a physical optimization process — say, a team of humans building a house? We propose the concept of an optimizing system as a physically closed system containing both that which is being optimized and that which is doing the optimizing, and defined by a tendency to evolve from a broad basin of attraction towards a small set of target configurations despite perturbations to the system. We compare our definition to that proposed by Yudkowsky, and place our work in the context of work by Demski and Garrabrant’s Embedded Agency, and Drexler’s Comprehensive AI Services. We show that our definition resolves difficult cases proposed by Daniel Filan. We work through numerous examples of biological, computational, and simple physical systems showing how our definition relates to each. INTRODUCTION In the field of computer science, an optimization algorithm is a computer program that outputs the solution, or an approximation thereof, to an optimization problem. An optimization problem consists of an objective function to be maximized or minimized, and a feasible region within which to search for a solution. For example we might take the objective function (x²−2)² as a minimization problem and the whole real number line as the feasible region. The solution then would be x=√2 and a working optimization algorithm for this problem is one that outputs a close approximation to th",https://www.alignmentforum.org/posts/znfkdCoHMANwqc2WE/the-ground-of-optimization-1,2020,blogPost,"Flint, Alex",AI Alignment Forum Parametric Bounded Löb's Theorem and Robust Cooperation of Bounded Agents,"Löb's theorem and Gödel's theorems make predictions about the behavior of systems capable of self-reference with unbounded computational resources with which to write and evaluate proofs. However, in the real world, systems capable of self-reference will have limited memory and processing speed, so in this paper we introduce an effective version of Löb's theorem which is applicable given such bounded resources. These results have powerful implications for the game theory of bounded agents who are able to write proofs about themselves and one another, including the capacity to out-perform classical Nash equilibria and correlated equilibria, attaining mutually cooperative program equilibrium in the Prisoner's Dilemma. 
Previous cooperative program equilibria studied by Tennenholtz (2004) and Fortnow (2009) have depended on tests for program equality, a fragile condition, whereas ""Löbian"" cooperation is much more robust and agnostic of the opponent's implementation.",http://arxiv.org/abs/1602.04184,2016,manuscript,"Critch, Andrew", Expert-augmented actor-critic for ViZDoom and Montezumas Revenge,"We propose an expert-augmented actor-critic algorithm, which we evaluate on two environments with sparse rewards: Montezumas Revenge and a demanding maze from the ViZDoom suite. In the case of Montezumas Revenge, an agent trained with our method achieves very good results consistently scoring above 27,000 points (in many experiments beating the first world). With an appropriate choice of hyperparameters, our algorithm surpasses the performance of the expert data. In a number of experiments, we have observed an unreported bug in Montezumas Revenge which allowed the agent to score more than 800,000 points.",http://arxiv.org/abs/1809.03447,2018,manuscript,"Garmulewicz, Michał; Michalewski, Henryk; Miłoś, Piotr", Symmetric Decomposition of Asymmetric Games,,http://www.nature.com/articles/s41598-018-19194-4,2018,journalArticle,"Tuyls, Karl; Pérolat, Julien; Lanctot, Marc; Ostrovski, Georg; Savani, Rahul; Leibo, Joel Z; Ord, Toby; Graepel, Thore; Legg, Shane",Scientific Reports Moral Anti-Realism Sequence #5: Metaethical Fanaticism (Dialogue),"This is the fifth post in my sequence on moral anti-realism. I tried to provide enough background in the “Context” section so readers who are new to this sequence can enjoy it as a standalone piece. CONTEXT There are two different types of moral realism, versions based on irreducible normativity (“moral non-naturalism”), and naturalist versions of moral realism. Very crudely, the difference is that irreducible normativity is usually considered to have the deeper ramifications if it were true (there are exceptions; see “#1: What Is Moral Realism?” for a detailed discussion). The dialogue below, as well as my previous posts in this sequence, have therefore focused primarily on versions of moral realism based on irreducible normativity. Readers looking for arguments against irreducible normativity could read the preceding posts, “#2: Why Realists and Anti-Realists Disagree” and “#3: Against Irreducible Normativity.” The dialogue below contains short versions of some of my main arguments. However, I didn’t intend it to be a further argument against irreducible normativity. Instead, I wrote this dialogue to call into question that even if things increasingly started to look as though irreducible normativity were false, we should still act as though it applies. In my previous post “#4: Why the Moral Realism Wager Fails,” I voiced skepticism about a general wager in favor of pursuing irreducible normativity. Still, I conceded that such a wager could apply in the case of certain individuals. I coined the term metaethical fanaticism to refer to the stance of locking in the pursuit of irreducible normativity as a life goal. In the dialogue below, I describe a world in which we gain ever higher degrees of confidence in the falsity (or meaninglessness) of irreducible normativity. Metaethical fanaticism would imply that even in that world, one would continue with (increasingly desperate) attempts to make irreducible normativity work anyway. 
I aimed to visualize these implica",https://forum.effectivealtruism.org/posts/BYjj4WdrxgPJxMre9/moral-anti-realism-sequence-5-metaethical-fanaticism,2020,blogPost,"Gloor, Lukas",Effective Altruism Forum Moral Anti-Realism Sequence #1: What Is Moral Realism?,"Last update: 7/7/2020. This is the first post in my sequence on moral anti-realism. INTRODUCTION To start off this sequence, I want to give a short description of moral realism; I’ll be arguing against moral realism in later posts, and I want to clearly explain what it is I’m arguing against. When I’m arguing against moral realism, I will deliberately set aside some moral realist views and focus on those forms of moral realism that I find most relevant – in the sense that the “relevant” versions, if correct, would be the most relevant to effective altruism and to people’s lives in general. I will call these versions of moral realism strong moral realism. Thus, I don’t claim that all versions of moral realism discussed in the academic literature are mistaken. The goal of this introductory post is threefold: 1. to give a quick overview of metaethics[1] and different versions of moral realism 2. to explain why I find many of these versions of moral realism only modestly relevant to ethical practice 3. to outline what I take to be strong moral realism OVERVIEW AND SUMMARY Two definitions of moral realism * Moral realism has two common definitions: the semantic definition and the ontological one. I contrast these to illustrate how moral claims can be discussed at a linguistic level (“What do people mean when they make moral claims?” and a substantive level (“Given the objectivist assumption that moral claims refer to a speaker-independent moral reality, are they sometimes true?”). Positions that are sometimes referred to as ‘moral realism’ are not always consequential (i.e., their truth or falsity does not have action-guiding implications for effective altruism). Sidenote: Subjectivism and intersubjectivism * Subjectivism and intersubjectivism are usually not counted as moral realist positions. I discuss them mainly for the sake of completeness and because I think they are fruitful frameworks to think about morality. Obje",https://forum.effectivealtruism.org/posts/TwJb75GtbD4LvGiku/moral-anti-realism-sequence-1-what-is-moral-realism,2020,blogPost,"Gloor, Lukas",Effective Altruism Forum "Suffering-focused AI safety: Why ""fail-safe'"" measures might be our top intervention","AI-safety efforts focused on suffering reduction should place particular emphasis on avoiding risks of astronomical disvalue. Among the cases where uncontrolled AI destroys humanity, outcomes might still differ enormously in the amounts of suffering produced. Rather than concentrating all our efforts on a specific future we would like to bring about, we should identify futures we least want to bring about and work on ways to steer AI trajectories around these. In particular, a “fail-safe”1 approach to AI safety is especially promising because avoiding very bad outcomes might be much easier than making sure we get everything right. 
This is also a neglected cause despite there being a broad consensus among different moral views that avoiding the creation of vast amounts of suffering in our future is an ethical priority.",,2016,manuscript,"Gloor, Lukas", Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making,"Existing multi-objective reinforcement learning (MORL) algorithms do not account for objectives that arise from players with differing beliefs. Concretely, consider two players with different beliefs and utility functions who may cooperate to build a machine that takes actions on their behalf. A representation is needed for how much the machine's policy will prioritize each player's interests over time. Assuming the players have reached common knowledge of their situation, this paper derives a recursion that any Pareto optimal policy must satisfy. Two qualitative observations can be made from the recursion: the machine must (1) use each player's own beliefs in evaluating how well an action will serve that player's utility function, and (2) shift the relative priority it assigns to each player's expected utilities over time, by a factor proportional to how well that player's beliefs predict the machine's inputs. Observation (2) represents a substantial divergence from naïve linear utility aggregation (as in Harsanyi's utilitarian theorem, and existing MORL algorithms), which is shown here to be inadequate for Pareto optimal sequential decision-making on behalf of players with different beliefs.",http://arxiv.org/abs/1701.01302,2017,manuscript,"Critch, Andrew", On the Foundations of Expected Expected Utility,"Intelligent agents often need to assess user utility functions in order to make decisions on their behalf, or predict their behavior. When uncertainty exists over the precise nature of this utility function, one can model this uncertainty using a distribution over utility functions. This view lies at the core of games with incomplete information and, more recently, several proposals for incremental preference elicitation. In such cases, decisions (or predicted behavior) are based on computing the expected expected utility (EEU) of decisions with respect to the distribution over utility functions. Unfortunately, decisions made under EEU are sensitive to the precise representation of the utility function. We examine the conditions under which EEU provides for sensible decisions by appeal to the foundational axioms of decision theory. We also discuss the impact these conditions have on the enterprise of preference elicitation more broadly.",,2003,conferencePaper,"Boutilier, Craig", AI Timeline Surveys,"[This page is out of date and will be updated soon. It does not reflect all surveys known and documented by AI Impacts.] We know of thirteen surveys on the predicted timing of human-level AI. 
If we collapse a few slightly different meanings of 'human-level AI', then: Median estimates for when there will be a 10% chance of human-level AI are...",https://aiimpacts.org/ai-timeline-surveys/,2015,blogPost,AI Impacts,AI Impacts Global challenges: 12 risks that threaten human civilization,,,2015,journalArticle,"Pamlin, Dennis; Armstrong, Stuart","Global Challenges Foundation, Stockholm" Informal organizational networking as a crisis- avoidance strategy: US naval flight operations as a case study,,http://journals.sagepub.com/doi/10.1177/108602668900300205,1989,journalArticle,"Rochlin, Gene I.",Industrial Crisis Quarterly Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World,"Bridging the 'reality gap' that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. We find that it is possible to train a real-world object detector that is accurate to $1.5$cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.",https://ieeexplore.ieee.org/abstract/document/8202133?casa_token=RtXV4bNeIQwAAAAA:RIdUZYp1nDkcCGsZA3PDaUvcwYqxggIXUkloOQjrNXNMw1oYzx2IDpziZNK59RYhZrBDcPkF,2017,conferencePaper,"Tobin, Josh; Fong, Rachel; Ray, Alex; Schneider, Jonas; Zaremba, Wojciech; Abbeel, Pieter",2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Techniques for optimizing worst-case performance,Optimizing neural networks for worst-case performance looks really hard. Here’s why I have hope.,https://ai-alignment.com/techniques-for-optimizing-worst-case-performance-39eafec74b99,2018,blogPost,"Christiano, Paul",AI Alignment (Medium) "The Control Problem. Excerpts from Superintelligence: Paths, Dangers, Strategies",,,2016,journalArticle,"Bostrom, Nick",Science Fiction and Philosophy: From Time Travel to Superintelligence What the Baldwin Effect affects depends on the nature of plasticity,,https://linkinghub.elsevier.com/retrieve/pii/S0010027719303397,2020,journalArticle,"Morgan, Thomas J.H.; Suchow, Jordan W.; Griffiths, Thomas L.",Cognition State-only Imitation with Transition Dynamics Mismatch,"Imitation Learning (IL) is a popular paradigm for training agents to achieve complicated goals by leveraging expert behavior, rather than dealing with the hardships of designing a correct reward function. With the environment modeled as a Markov Decision Process (MDP), most of the existing IL algorithms are contingent on the availability of expert demonstrations in the same MDP as the one in which a new imitator policy is to be learned. This is uncharacteristic of many real-life scenarios where discrepancies between the expert and the imitator MDPs are common, especially in the transition dynamics function. 
Furthermore, obtaining expert actions may be costly or infeasible, making the recent trend towards state-only IL (where expert demonstrations constitute only states or observations) ever so promising. Building on recent adversarial imitation approaches that are motivated by the idea of divergence minimization, we present a new state-only IL algorithm in this paper. It divides the overall optimization objective into two subproblems by introducing an indirection step and solves the subproblems iteratively. We show that our algorithm is particularly effective when there is a transition dynamics mismatch between the expert and imitator MDPs, while the baseline IL methods suffer from performance degradation. To analyze this, we construct several interesting MDPs by modifying the configuration parameters for the MuJoCo locomotion tasks from OpenAI Gym.",http://arxiv.org/abs/2002.11879,2020,manuscript,"Gangwani, Tanmay; Peng, Jian", A Critique of Functional Decision Theory,"A Critique of Functional Decision Theory NB: My writing this note was prompted by Carl Shulman, who suggested we could try a low-time-commitment way of attempting to understanding the disagreement between some folks in the rationality community and academic decision theorists (including myself, though I’m not much of a decision theorist). Apologies that it’s sloppier than I’d usually aim for in a philosophy paper, and lacking in appropriate references. And, even though the paper is pretty negative about FDT, I want to emphasise that my writing this should be taken as a sign of respect for those involved in developing FDT. I’ll also caveat I’m unlikely to have time to engage in the comments; I thought it was better to get this out there all the same rather than delay publication further. 1. Introduction There’s a long-running issue where many in the rationality community take functional decision theory (and its variants) very seriously, but the academic decision theory community does not. But there’s been little public discussion of FDT from academic decision theorists (one exception is here); this note attempts to partly address this gap. So that there’s a clear object of discussion, I’m going to focus on Yudkowsky and Soares’ ‘Functional Decision Theory’ (which I’ll refer to as Y&S), though I also read a revised version of Soares and Levinstein’s Cheating Death in Damascus. This note is structured as follows. Section II describes causal decision theory (CDT), evidential decision theory (EDT) and functional decision theory (FDT). Sections III-VI describe problems for FDT: (i) that it sometimes makes bizarre recommendations, recommending an option that is certainly lower-utility than another option; (ii) that it fails to one-box in most instances of Newcomb’s problem, even though the correctness of one-boxing is supposed to be one of the guiding motivations for the theory; (iii) that it results in implausible discontinuities, where what is rational to do can d",https://www.alignmentforum.org/posts/ySLYSsNeFL5CoAQzN/a-critique-of-functional-decision-theory,2019,blogPost,"MacAskill, William",AI Alignment Forum Sharing the World with Digital Minds,"The minds of biological creatures occupy a small corner of a much larger space of possible minds that could be created once we master the technology of artificial intelligence. Yet many of our moral intuitions and practices are based on assumptions about human nature that need not hold for digital minds. 
This points to the need for moral reflection as we approach the era of advanced machine intelligence. Here we focus on one set of issues, which arise from the prospect of digital “utility monsters”. These may be mass-produced minds with moral statuses and interests similar to those of human beings or other morally considerable animals, so that collectively their moral claims outweigh those of the incumbent populations. Alternatively it may become easy to create individual digital minds with much stronger individual interests and claims to resources than humans. Disrespecting these could produce a moral catastrophe of immense proportions, while a naive way of respecting them could be disastrous for humanity. A sensible approach requires reforms of our moral norms and institutions along with advance planning regarding what kinds of digital minds we bring into existence.",,2020,manuscript,"Shulman, Carl; Bostrom, Nick", "Tiling Agents for Self-Modifying AI, and the Löbian Obstacle",,https://intelligence.org/files/TilingAgentsDraft.pdf,2013,manuscript,"Yudkowsky, Eliezer; Herreshoff, Marcello", Finite Sample Complexity of Rare Pattern Anomaly Detection,"Anomaly detection is a fundamental problem for which a wide variety of algorithms have been developed. However, compared to supervised learning, there has been very little work aimed at understanding the sample complexity of anomaly detection. In this paper, we take a step in this direction by introducing a Probably Approximately Correct (PAC) framework for anomaly detection based on the identification of rare patterns. In analogy with the PAC framework for supervised learning, we develop sample complexity results that relate the complexity of the pattern space to the data requirements needed for PAC guarantees. We instantiate the general result for a number of pattern spaces, some of which are implicit in current state-of-the-art anomaly detectors. Finally, we design a new simple anomaly detection algorithm motivated by our analysis and show experimentally on several benchmark problems that it is competitive with a state-of-the-art detector using the same pattern space.",,2016,conferencePaper,"Siddiqui, Amran; Fern, Alan; Dietterich, Thomas G; Das, Shubhomoy", Software Verification with ITPs Should Use Binary Code Extraction to Reduce the TCB: (Short Paper),,http://link.springer.com/10.1007/978-3-319-94821-8_21,2018,bookSection,"Kumar, Ramana; Mullen, Eric; Tatlock, Zachary; Myreen, Magnus O.",Interactive Theorem Proving Information security careers for GCR reduction,"Update 2019-12-14: There is now a Facebook group for discussion of infosec careers in EA (including for GCR reduction); join here This post was written by Claire Zabel and Luke Muehlhauser, based on their experiences as Open Philanthropy Project staff members working on global catastrophic risk reduction, though this post isn't intended to represent an official position of Open Phil. SUMMARY In this post, we summarize why we think information security (preventing unauthorized users, such as hackers, from accessing or altering information) may be an impactful career path for some people who are focused on reducing global catastrophic risks (GCRs). If you'd like to hear about job opportunities in information security and global catastrophic risk, you can fill out this form created by 80,000 Hours, and their staff will get in touch with you if something might be a good fit. 
In brief, we think: * Information security (infosec) expertise may be crucial for addressing catastrophic risks related to AI and biosecurity. * More generally, security expertise may be useful for those attempting to reduce GCRs, because such work sometimes involves engaging with information that could do harm if misused. * We have thus far found it difficult to hire security professionals who aren't motivated by GCR reduction to work with us and some of our GCR-focused grantees, due to the high demand for security experts and the unconventional nature of our situation and that of some of our grantees. * More broadly, we expect there to continue to be a deficit of GCR-focused security expertise in AI and biosecurity, and that this deficit will result in several GCR-specific challenges and concerns being under-addressed by default. * It’s more likely than not that within 10 years, there will be dozens of GCR-focused roles in information security, and some organizations are already looking for candidates that fit their needs (and would hire them now, if they",https://forum.effectivealtruism.org/posts/ZJiCfwTy5dC4CoxqA/information-security-careers-for-gcr-reduction,2019,blogPost,"Zabel, Claire; Muehlhauser, Luke",Effective Altruism Forum An upper bound for the background rate of human extinction,"We evaluate the total probability of human extinction from naturally occurring processes. Such processes include risks that are well characterized such as asteroid impacts and supervolcanic eruptions, as well as risks that remain unknown. Using only the information that Homo sapiens has existed at least 200,000 years, we conclude that the probability that humanity goes extinct from natural causes in any given year is almost guaranteed to be less than one in 14,000, and likely to be less than one in 87,000. Using the longer track record of survival for our entire genus Homo produces even tighter bounds, with an annual probability of natural extinction likely below one in 870,000. These bounds are unlikely to be affected by possible survivorship bias in the data, and are consistent with mammalian extinction rates, typical hominin species lifespans, the frequency of well-characterized risks, and the frequency of mass extinctions. No similar guarantee can be made for risks that our ancestors did not face, such as anthropogenic climate change or nuclear/biological warfare.",https://www.nature.com/articles/s41598-019-47540-7,2019,journalArticle,"Snyder-Beattie, Andrew E.; Ord, Toby; Bonsall, Michael B.",Scientific Reports Deep reinforcement learning from human preferences,"For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. 
These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.",https://papers.nips.cc/paper/2017/hash/d5e2c0adad503c91f91df240d0cd4e49-Abstract.html,2017,conferencePaper,"Christiano, Paul; Leike, Jan; Brown, Tom B.; Martic, Miljan; Legg, Shane; Amodei, Dario",Advances in Neural Information Processing Systems 30 (NIPS 2017) Avoiding Tampering Incentives in Deep RL via Decoupled Approval,"How can we design agents that pursue a given objective when all feedback mechanisms are influenceable by the agent? Standard RL algorithms assume a secure reward function, and can thus perform poorly in settings where agents can tamper with the reward-generating mechanism. We present a principled solution to the problem of learning from influenceable feedback, which combines approval with a decoupled feedback collection procedure. For a natural class of corruption functions, decoupled approval algorithms have aligned incentives both at convergence and for their local updates. Empirically, they also scale to complex 3D environments where tampering is possible.",http://arxiv.org/abs/2011.08827,2020,manuscript,"Uesato, Jonathan; Kumar, Ramana; Krakovna, Victoria; Everitt, Tom; Ngo, Richard; Legg, Shane", Factored Cognition,"Note: This post (originally published here) is the transcript of a presentation about a project worked on at the non-profit Ought. It is included in the sequence because it contains a very clear explanation of some of the key ideas behind iterated amplification. -------------------------------------------------------------------------------- The presentation below motivates our Factored Cognition project from an AI alignment angle and describes the state of our work as of May 2018. Andreas gave versions of this presentation at CHAI (4/25), a Deepmind-FHI seminar (5/24) and FHI (5/25). I'll talk about Factored Cognition, our current main project at Ought. This is joint work with Ozzie Gooen, Ben Rachbach, Andrew Schreiber, Ben Weinstein-Raun, and (as board members) Paul Christiano and Owain Evans. Before I get into the details of the project, I want to talk about the broader research program that it is part of. And to do that, I want to talk about research programs for AGI more generally. Right now, the dominant paradigm for researchers who explicitly work towards AGI is what you could call ""scalable learning and planning in complex environments"". This paradigm substantially relies on training agents in simulated physical environments to solve tasks that are similar to the sorts of tasks animals and humans can solve, sometimes in isolation and sometimes in competitive multi-agent settings. To be clear, not all tasks are physical tasks. There's also interest in more abstract environments as in the case of playing Go, proving theorems, or participating in goal-based dialog. For our purposes, the key characteristic of this research paradigm is that agents are optimized for success at particular tasks. To the extent that they learn particular decision-making strategies, those are learned implicitly. 
We only provide external supervision, and it wouldn't be entirely wrong to call this sort of approach ""recapitulating evolution"", even if this isn't exactly wha",https://www.lesswrong.com/posts/DFkGStzvj3jgXibFG/factored-cognition,2018,blogPost,"Stuhlmueller, Andreas",LessWrong Choice Set Misspecification in Reward Inference,"Specifying reward functions for robots that operate in environments without a natural reward signal can be challenging, and incorrectly specified rewards can incentivise degenerate or dangerous behavior. A promising alternative to manually specifying reward functions is to enable robots to infer them from human feedback, like demonstrations or corrections. To interpret this feedback, robots treat as approximately optimal a choice the person makes from a choice set, like the set of possible trajectories they could have demonstrated or possible corrections they could have made. In this work, we introduce the idea that the choice set itself might be difficult to specify, and analyze choice set misspecification: what happens as the robot makes incorrect assumptions about the set of choices from which the human selects their feedback. We propose a classification of different kinds of choice set misspecification, and show that these different classes lead to meaningful differences in the inferred reward and resulting performance. While we would normally expect misspecification to hurt, we find that certain kinds of misspecification are neither helpful nor harmful (in expectation). However, in other situations, misspecification can be extremely harmful, leading the robot to believe the opposite of what it should believe. We hope our results will allow for better prediction and response to the effects of misspecification in real-world reward inference.",http://ceur-ws.org/Vol-2640/paper_14.pdf,2020,conferencePaper,"Freedman, Rachel; Shah, Rohin; Dragan, Anca",CEUR Workshop Proceedings Revisiting the Insights model,,http://mediangroup.org/insights2.html,2019,blogPost,Median Group,Median Group Addressing Sample Complexity in Visual Tasks Using Hindsight Experience Replay and Hallucinatory GANs,"Reinforcement Learning (RL) algorithms typically require millions of environment interactions to learn successful policies in sparse reward settings. Hindsight Experience Replay (HER) was introduced as a technique to increase sample efficiency by re-imagining unsuccessful trajectories as successful ones by changing the originally intended goals. However, HER cannot be directly applied to visual environments where goal states are characterized by the presence of distinct visual features. In this work, we show how visual trajectories can be hallucinated to appear successful by altering agent observations using a generative model trained on relatively few snapshots of the goal. We then use this model in combination with HER to train RL agents in visual settings. 
We validate our approach on 3D navigation tasks and a simulated robotics application and show marked improvement over standard RL algorithms and baselines derived from previous work.",,2019,conferencePaper,"Sahni, Himanshu; Buckley, Toby; Abbeel, Pieter; Kuzovkin, Ilya",Advances in Neural Information Processing Systems 32 (NeurIPS 2019) Learning the Arrow of Time for Problems in Reinforcement Learning,,,2019,conferencePaper,"Rahaman, Nasim; Wolf, Steffen; Goyal, Anirudh; Remme, Roman; Bengio, Yoshua",International Conference on Learning Representations High Reliability and the Management of Critical Infrastructures,,http://doi.wiley.com/10.1111/j.0966-0879.2004.01201003.x,2004,journalArticle,"Schulman, Paul; Roe, Emery; Eeten, Michel van; Bruijne, Mark de",Journal of Contingencies and Crisis Management Generating Multi-Agent Trajectories using Programmatic Weak Supervision,"We study the problem of training sequential generative models for capturing coordinated multi-agent trajectory behavior, such as offensive basketball gameplay. When modeling such settings, it is often beneficial to design hierarchical models that can capture long-term coordination using intermediate variables. Furthermore, these intermediate variables should capture interesting high-level behavioral semantics in an interpretable and manipulatable way. We present a hierarchical framework that can effectively learn such sequential generative models. Our approach is inspired by recent work on leveraging programmatically produced weak labels, which we extend to the spatiotemporal regime. In addition to synthetic settings, we show how to instantiate our framework to effectively model complex interactions between basketball players and generate realistic multi-agent trajectories of basketball gameplay over long time periods. We validate our approach using both quantitative and qualitative evaluations, including a user study comparison conducted with professional sports analysts.",http://arxiv.org/abs/1803.07612,2019,conferencePaper,"Zhan, Eric; Zheng, Stephan; Yue, Yisong; Sha, Long; Lucey, Patrick",Seventh International Conference on Learning Representations (ICLR 2019) The Ingredients of Real World Robotic Reinforcement Learning,"Robots have been useful in environments that can be carefully controlled, such as those commonly found in industrial settings (e.g. assembly lines). However, in unstructured settings like the home, we need robotic systems that are adaptive to the diversity of the real world.",http://bair.berkeley.edu/blog/2020/04/27/ingredients/,2020,blogPost,"Gupta, Abhishek; Zhu, Henry; Yu, Justin; Kumar, Vikash; Shah, Dhruv; Levine, Sergey",The Berkeley Artificial Intelligence Research Blog Probing the improbable: methodological challenges for risks with low probabilities and high stakes,"Some risks have extremely high stakes. For example, a worldwide pandemic or asteroid impact could potentially kill more than a billion people. Comfortingly, scientific calculations often put very low probabilities on the occurrence of such catastrophes. In this paper, we argue that there are important new methodological problems which arise when assessing global catastrophic risks and we focus on a problem regarding probability estimation. When an expert provides a calculation of the probability of an outcome, they are really providing the probability of the outcome occurring, given that their argument is watertight.
However, their argument may fail for a number of reasons, such as a flaw in the underlying theory, a flaw in the modelling of the problem or a mistake in the calculations. If the probability estimate given by an argument is dwarfed by the chance that the argument itself is flawed, then the estimate is suspect. We develop this idea formally, explaining how it differs from the related distinction between model and parameter uncertainty. Using the risk estimates from the Large Hadron Collider as a test case, we show how serious the problem can be when it comes to catastrophic risks and how best to address it.",https://doi.org/10.1080/13669870903126267,2010,journalArticle,"Ord, Toby; Hillerbrand, Rafaela; Sandberg, Anders",Journal of Risk Research The Steering Problem,"Using black-box access to human-level cognitive abilities, can we write a program that is as useful as a well-motivated human?",https://ai-alignment.com/the-steering-problem-a3543e65c5c4,2015,blogPost,"Christiano, Paul",AI Alignment (Medium) Learning latent state representation for speeding up exploration,"Exploration is an extremely challenging problem in reinforcement learning, especially in high dimensional state and action spaces and when only sparse rewards are available. Effective representations can indicate which components of the state are task relevant and thus reduce the dimensionality of the space to explore. In this work, we take a representation learning viewpoint on exploration, utilizing prior experience to learn effective latent representations, which can subsequently indicate which regions to explore. Prior experience on separate but related tasks helps learn representations of the state which are effective at predicting instantaneous rewards. These learned representations can then be used with an entropy-based exploration method to effectively perform exploration in high dimensional spaces by effectively lowering the dimensionality of the search space. We show the benefits of this representation for meta-exploration in a simulated object pushing environment.",http://arxiv.org/abs/1905.12621,2019,conferencePaper,"Vezzani, Giulia; Gupta, Abhishek; Natale, Lorenzo; Abbeel, Pieter","arXiv:1905.12621 [cs, stat]" Regret-based Reward Elicitation for Markov Decision Processes,"The specification of a Markov decision process (MDP) can be difficult. Reward function specification is especially problematic; in practice, it is often cognitively complex and time-consuming for users to precisely specify rewards. This work casts the problem of specifying rewards as one of preference elicitation and aims to minimize the degree of precision with which a reward function must be specified while still allowing optimal or near-optimal policies to be produced. We first discuss how robust policies can be computed for MDPs given only partial reward information using the minimax regret criterion. We then demonstrate how regret can be reduced by efficiently eliciting reward information using bound queries, using regret-reduction as a means for choosing suitable queries.
Empirical results demonstrate that regret-based reward elicitation offers an effective way to produce near-optimal policies without resorting to the precise specification of the entire reward function.",https://arxiv.org/abs/1205.2619v1,2012,conferencePaper,"Regan, Kevin; Boutilier, Craig",Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence "Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection",,http://link.springer.com/10.1007/978-3-319-10401-0_8,2014,bookSection,"Winfield, Alan F. T.; Blum, Christian; Liu, Wenguo",Advances in Autonomous Robotics Systems Friendly Artificial Intelligence: the Physics Challenge,"Relentless progress in artificial intelligence (AI) is increasingly raising concerns that machines will replace humans on the job market, and perhaps altogether. Eliezer Yudkowski and others have explored the possibility that a promising future for humankind could be guaranteed by a superintelligent ""Friendly AI"", designed to safeguard humanity and its values. I argue that, from a physics perspective where everything is simply an arrangement of elementary particles, this might be even harder than it appears. Indeed, it may require thinking rigorously about the meaning of life: What is ""meaning"" in a particle arrangement? What is ""life""? What is the ultimate ethical imperative, i.e., how should we strive to rearrange the particles of our Universe and shape its future? If we fail to answer the last question rigorously, this future is unlikely to contain humans.",http://arxiv.org/abs/1409.0813,2014,conferencePaper,"Tegmark, Max",Artificial Intelligence and Ethics: Papers from the 2015 AAAI Workshop Who knows anything about anything about AI?,,,2014,journalArticle,"Armstrong, Stuart; ÓhÉigeartaigh, Seán",Intelligence Unbound: The Future of Uploaded and Machine Minds Human Compatible: Artificial Intelligence and the Problem of Control,"""The most important book on AI this year."" --The Guardian""Mr. Russell's exciting book goes deep, while sparkling with dry witticisms."" --The Wall Street Journal""The most important book I have read in quite some time"" (Daniel Kahneman); ""A must-read"" (Max Tegmark); ""The book we've all been waiting for"" (Sam Harris)A leading artificial intelligence researcher lays out a new approach to AI that will enable us to coexist successfully with increasingly intelligent machinesIn the popular imagination, superhuman artificial intelligence is an approaching tidal wave that threatens not just jobs and human relationships, but civilization itself. Conflict between humans and machines is seen as inevitable and its outcome all too predictable.In this groundbreaking book, distinguished AI researcher Stuart Russell argues that this scenario can be avoided, but only if we rethink AI from the ground up. Russell begins by exploring the idea of intelligence in humans and in machines. He describes the near-term benefits we can expect, from intelligent personal assistants to vastly accelerated scientific research, and outlines the AI breakthroughs that still have to happen before we reach superhuman AI. He also spells out the ways humans are already finding to misuse AI, from lethal autonomous weapons to viral sabotage.If the predicted breakthroughs occur and superhuman AI emerges, we will have created entities far more powerful than ourselves. How can we ensure they never, ever, have power over us? 
Russell suggests that we can rebuild AI on a new foundation, according to which machines are designed to be inherently uncertain about the human preferences they are required to satisfy. Such machines would be humble, altruistic, and committed to pursue our objectives, not theirs. This new foundation would allow us to create machines that are provably deferential and provably beneficial.",,2019,book,"Russell, Stuart", Liability For Present And Future Robotics Technology,,,2017,journalArticle,"White, Trevor N.; Baum, Seth D.",Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence AI Impacts research bounties,"We are offering rewards for several inputs to our research, described below. These offers have no specific deadline except where noted. We may modify them or take them down, but will give at least one week's notice here unless there is strong reason not to. To submit an entry, email katja@intelligence.org. There is currently a large backlog of entries to...",https://aiimpacts.org/ai-impacts-research-bounties/,2015,blogPost,AI Impacts,AI Impacts Literal or Pedagogic Human? Analyzing Human Model Misspecification in Objective Learning,"It is incredibly easy for a system designer to misspecify the objective for an autonomous system (“robot”), thus motivating the desire to have the robot learn the objective from human behavior instead. Recent work has suggested that people have an interest in the robot performing well, and will thus behave pedagogically, choosing actions that are informative to the robot. In turn, robots benefit from interpreting the behavior by accounting for this pedagogy. In this work, we focus on misspecification: we argue that robots might not know whether people are being pedagogic or literal and that it is important to ask which assumption is safer to make. We cast objective learning into the more general form of a common-payoff game between the robot and human, and prove that in any such game literal interpretation is more robust to misspecification. Experiments with human data support our theoretical results and point to the sensitivity of the pedagogic assumption.",http://proceedings.mlr.press/v115/milli20a.html,2019,conferencePaper,"Milli, Smitha; Dragan, Anca D.",Proceedings of The 35th Uncertainty in Artificial Intelligence Conference How to study superintelligence strategy,,,2014,blogPost,"Muehlhauser, Luke",Luke Muehlhauser The great downside dilemma for risky emerging technologies,,http://stacks.iop.org/1402-4896/89/i=12/a=128004?key=crossref.f5938bc78a3023d740968f020cfa9970,2014,journalArticle,"Baum, Seth D",Physica Scripta Doing more with less: meta-reasoning and meta-learning in humans and machines,"Artificial intelligence systems use an increasing amount of computation and data to solve very specific problems. By contrast, human minds solve a wide range of problems using a fixed amount of computation and limited experience. We identify two abilities that we see as crucial to this kind of general intelligence: meta-reasoning (deciding how to allocate computational resources) and meta-learning (modeling the learning environment to make better use of limited data). 
We summarize the relevant AI literature and relate the resulting ideas to recent work in psychology.",http://www.sciencedirect.com/science/article/pii/S2352154618302122,2019,journalArticle,"Griffiths, Thomas L; Callaway, Frederick; Chang, Michael B; Grant, Erin; Krueger, Paul M; Lieder, Falk",Current Opinion in Behavioral Sciences On Functional Decision Theory,,https://www.umsu.de/blog/2018/688,2018,blogPost,"Schwarz, Wolfgang",Wolfgang Schwarz Enabling Robots to Communicate their Objectives,"The overarching goal of this work is to efficiently enable end-users to correctly anticipate a robot's behavior in novel situations. Since a robot's behavior is often a direct result of its underlying objective function, our insight is that end-users need to have an accurate mental model of this objective function in order to understand and predict what the robot will do. While people naturally develop such a mental model over time through observing the robot act, this familiarization process may be lengthy. Our approach reduces this time by having the robot model how people infer objectives from observed behavior, and then it selects those behaviors that are maximally informative. The problem of computing a posterior over objectives from observed behavior is known as Inverse Reinforcement Learning (IRL), and has been applied to robots learning human objectives. We consider the problem where the roles of human and robot are swapped. Our main contribution is to recognize that unlike robots, humans will not be exact in their IRL inference. We thus introduce two factors to define candidate approximate-inference models for human learning in this setting, and analyze them in a user study in the autonomous driving domain. We show that certain approximate-inference models lead to the robot generating example behaviors that better enable users to anticipate what it will do in novel situations. Our results also suggest, however, that additional research is needed in modeling how humans extrapolate from examples of robot behavior.",https://arxiv.org/abs/1702.03465v2,2017,journalArticle,"Huang, Sandy H.; Held, David; Abbeel, Pieter; Dragan, Anca D.",Autonomous Robots Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims,"With the recent wave of progress in artificial intelligence (AI) has come a growing awareness of the large-scale impacts of AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development. In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, they will need to make verifiable claims to which they can be held accountable. Those outside of a given organization also need effective means of scrutinizing such claims. This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. 
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.",http://arxiv.org/abs/2004.07213,2020,manuscript,"Brundage, Miles; Avin, Shahar; Wang, Jasmine; Belfield, Haydn; Krueger, Gretchen; Hadfield, Gillian; Khlaaf, Heidy; Yang, Jingying; Toner, Helen; Fong, Ruth; Maharaj, Tegan; Koh, Pang Wei; Hooker, Sara; Leung, Jade; Trask, Andrew; Bluemke, Emma; Lebensold, Jonathan; O'Keefe, Cullen; Koren, Mark; Ryffel, Théo; Rubinovitz, J. B.; Besiroglu, Tamay; Carugati, Federica; Clark, Jack; Eckersley, Peter; de Haas, Sarah; Johnson, Maritza; Laurie, Ben; Ingerman, Alex; Krawczuk, Igor; Askell, Amanda; Cammarota, Rosario; Lohn, Andrew; Krueger, David; Stix, Charlotte; Henderson, Peter; Graham, Logan; Prunkl, Carina; Martin, Bianca; Seger, Elizabeth; Zilberman, Noa; hÉigeartaigh, Seán Ó; Kroeger, Frens; Sastry, Girish; Kagan, Rebecca; Weller, Adrian; Tse, Brian; Barnes, Elizabeth; Dafoe, Allan; Scharre, Paul; Herbert-Voss, Ariel; Rasser, Martijn; Sodhani, Shagun; Flynn, Carrick; Gilbert, Thomas Krendl; Dyer, Lisa; Khan, Saif; Bengio, Yoshua; Anderljung, Markus", Simplicity and probability in causal explanation,,https://linkinghub.elsevier.com/retrieve/pii/S0010028506000739,2007,journalArticle,"Lombrozo, T",Cognitive Psychology Interpretable Multi-Objective Reinforcement Learning through Policy Orchestration,"Autonomous cyber-physical agents and systems play an increasingly large role in our lives. To ensure that agents behave in ways aligned with the values of the societies in which they operate, we must develop techniques that allow these agents to not only maximize their reward in an environment, but also to learn and follow the implicit constraints of society. These constraints and norms can come from any number of sources including regulations, business process guidelines, laws, ethical principles, social norms, and moral values. We detail a novel approach that uses inverse reinforcement learning to learn a set of unspecified constraints from demonstrations of the task, and reinforcement learning to learn to maximize the environment rewards. More precisely, we assume that an agent can observe traces of behavior of members of the society but has no access to the explicit set of constraints that give rise to the observed behavior. Inverse reinforcement learning is used to learn such constraints, that are then combined with a possibly orthogonal value function through the use of a contextual bandit-based orchestrator that picks a contextually-appropriate choice between the two policies (constraint-based and environment reward-based) when taking actions. The contextual bandit orchestrator allows the agent to mix policies in novel ways, taking the best actions from either a reward maximizing or constrained policy. In addition, the orchestrator is transparent on which policy is being employed at each time step. 
We test our algorithms using a Pac-Man domain and show that the agent is able to learn to act optimally, act within the demonstrated constraints, and mix these two functions in complex ways.",http://arxiv.org/abs/1809.08343,2018,manuscript,"Noothigattu, Ritesh; Bouneffouf, Djallel; Mattei, Nicholas; Chandra, Rachita; Madan, Piyush; Varshney, Kush; Campbell, Murray; Singh, Moninder; Rossi, Francesca", Are Labels Required for Improving Adversarial Robustness?,"Recent work has uncovered the interesting (and somewhat surprising) finding that training models to be invariant to adversarial perturbations requires substantially larger datasets than those required for standard classification. This result is a key hurdle in the deployment of robust machine learning models in many real world applications where labeled data is expensive. Our main insight is that unlabeled data can be a competitive alternative to labeled data for training adversarially robust models. Theoretically, we show that in a simple statistical setting, the sample complexity for learning an adversarially robust model from unlabeled data matches the fully supervised case up to constant factors. On standard datasets like CIFAR-10, a simple Unsupervised Adversarial Training (UAT) approach using unlabeled data improves robust accuracy by 21.7% over using 4K supervised examples alone, and captures over 95% of the improvement from the same number of labeled examples. Finally, we report an improvement of 4% over the previous state-of-the-art on CIFAR-10 against the strongest known attack by using additional unlabeled data from the uncurated 80 Million Tiny Images dataset. This demonstrates that our finding extends as well to the more realistic case where unlabeled data is also uncurated, therefore opening a new avenue for improving adversarial training.",http://arxiv.org/abs/1905.13725,2019,conferencePaper,"Uesato, Jonathan; Alayrac, Jean-Baptiste; Huang, Po-Sen; Stanforth, Robert; Fawzi, Alhussein; Kohli, Pushmeet",Advances in Neural Information Processing Systems 32 (NeurIPS 2019) Rough Consensus and Running Code' and the Internet-OSI Standards War,,http://ieeexplore.ieee.org/document/1677461/,2006,journalArticle,"Russell, A.L.",IEEE Annals of the History of Computing Sample Efficient Reinforcement Learning through Learning from Demonstrations in Minecraft,"Sample inefficiency of deep reinforcement learning methods is a major obstacle for their use in real-world applications. In this work, we show how human demonstrations can improve final performance of agents on the Minecraft minigame ObtainDiamond with only 8M frames of environment interaction. We propose a training procedure where policy networks are first trained on human data and later fine-tuned by reinforcement learning. Using a policy exploitation mechanism, experience replay and an additional loss against catastrophic forgetting, our best agent was able to achieve a mean score of 48. Our proposed solution placed 3rd in the NeurIPS MineRL Competition for Sample-Efficient Reinforcement Learning.",http://arxiv.org/abs/2003.06066,2020,manuscript,"Scheller, Christian; Schraner, Yanick; Vogel, Manfred", "Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More","An actual debate about instrumental convergence, in a public space! Major respect to all involved, especially Yoshua Bengio for great facilitation. For posterity (i.e. having a good historical archive) and further discussion, I've reproduced the conversation here. 
I'm happy to make edits at the request of anyone in the discussion who is quoted below. I've improved formatting for clarity and fixed some typos. For people who are not researchers in this area who wish to comment, see the public version of this post here. For people who do work on the relevant areas, please sign up in the top right. It will take a day or so to confirm membership. ORIGINAL POST Yann LeCun: ""don't fear the Terminator"", a short opinion piece by Tony Zador and me that was just published in Scientific American. ""We dramatically overestimate the threat of an accidental AI takeover, because we tend to conflate intelligence with the drive to achieve dominance. [...] But intelligence per se does not generate the drive for domination, any more than horns do."" https://blogs.scientificamerican.com/observations/dont-fear-the-terminator/ COMMENT THREAD #1 Elliot Olds: Yann, the smart people who are very worried about AI seeking power and ensuring its own survival believe it's a big risk because power and survival are instrumental goals for almost any ultimate goal. If you give a generally intelligent AI the goal to make as much money in the stock market as possible, it will resist being shut down because that would interfere with tis goal. It would try to become more powerful because then it could make money more effectively. This is the natural consequence of giving a smart agent a goal, unless we do something special to counteract this. You've often written about how we shouldn't be so worried about AI, but I've never seen you address this point directly. Stuart Russell: It is trivial to construct a toy MDP in which the agent's only reward comes from fetching the coffee. If, in that MDP, the",https://www.alignmentforum.org/posts/WxW6Gc6f2z3mzmqKs/debate-on-instrumental-convergence-between-lecun-russell,2019,blogPost,"Pace, Ben",AI Alignment Forum Scaled Autonomy: Enabling Human Operators to Control Robot Fleets,"Autonomous robots often encounter challenging situations where their control policies fail and an expert human operator must briefly intervene, e.g., through teleoperation. In settings where multiple robots act in separate environments, a single human operator can manage a fleet of robots by identifying and teleoperating one robot at any given time. The key challenge is that users have limited attention: as the number of robots increases, users lose the ability to decide which robot requires teleoperation the most. Our goal is to automate this decision, thereby enabling users to supervise more robots than their attention would normally allow for. Our insight is that we can model the user's choice of which robot to control as an approximately optimal decision that maximizes the user's utility function. We learn a model of the user's preferences from observations of the user's choices in easy settings with a few robots, and use it in challenging settings with more robots to automatically identify which robot the user would most likely choose to control, if they were able to evaluate the states of all robots at all times. 
We run simulation experiments and a user study with twelve participants that show our method can be used to assist users in performing a navigation task and manipulator reaching task.",http://arxiv.org/abs/1910.02910,2019,conferencePaper,"Swamy, Gokul; Reddy, Siddharth; Levine, Sergey; Dragan, Anca D.",2020 IEEE International Conference on Robotics and Automation (ICRA) Avoiding Unintended AI Behaviors,,http://link.springer.com/10.1007/978-3-642-35506-6_12,2012,bookSection,"Hibbard, Bill",Artificial General Intelligence "Meaning, Medicine, and Merit","Abstract Given the inevitability of scarcity, should public institutions ration healthcare resources so as to prioritize those who contribute more to society? Intuitively, we may feel that this would be somehow inegalitarian. I argue that the egalitarian objection to prioritizing treatment on the basis of patients’ usefulness to others is best thought of as semiotic: i.e. as having to do with what this practice would mean, convey, or express about a person's standing. I explore the implications of this conclusion when taken in conjunction with the observation that semiotic objections are generally flimsy, failing to identify anything wrong with a practice as such and having limited capacity to generalize beyond particular contexts.",https://www.cambridge.org/core/product/identifier/S0953820819000360/type/journal_article,2020,journalArticle,"Mogensen, Andreas L.",Utilitas Predicting Human Deliberative Judgments with Machine Learning,,,2018,report,"Evans, Owain; Stuhlmüller, Andreas; Cundy, Chris; Carey, Ryan; Kenton, Zachary; McGrath, Thomas; Schreiber, Andrew", Chinese Perspectives on AI and Future Military Capabilities,"The world is watching how the Chinese military develops and deploys artificial intelligence—but how exactly will it apply AI? This policy brief analyzes Chinese experts’ arguments about AI and prospective warfighting capabilities, identifying prevailing concerns about strategic stability and unintended escalation.",https://cset.georgetown.edu/research/chinese-perspectives-on-ai-and-future-military-capabilities/,2020,report,"Fedasiuk, Ryan", Large-Scale Study of Curiosity-Driven Learning,"Reinforcement learning algorithms rely on carefully engineering environment rewards that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is not scalable, motivating the need for developing reward functions that are intrinsic to the agent. Curiosity is a type of intrinsic reward function which uses prediction error as reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. without any extrinsic rewards, across 54 standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance, and a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many game environments. (b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate limitations of the prediction-based rewards in stochastic setups. 
Game-play videos and code are at https://pathak22.github.io/large-scale-curiosity/",http://arxiv.org/abs/1808.04355,2018,manuscript,"Burda, Yuri; Edwards, Harri; Pathak, Deepak; Storkey, Amos; Darrell, Trevor; Efros, Alexei A.", The benefits and harm of transmitting into space,,https://linkinghub.elsevier.com/retrieve/pii/S0265964612001361,2013,journalArticle,"Haqq-Misra, Jacob; Busch, Michael W.; Som, Sanjoy M.; Baum, Seth D.",Space Policy The Off-Switch Game,"It is clear that one of the primary tools we can use to mitigate the potential risk from a misbehaving AI system is the ability to turn the system off. As the capabilities of AI systems improve, it is important to ensure that such systems do not adopt subgoals that prevent a human from switching them off. This is a challenge because many formulations of rational agents create strong incentives for self-preservation. This is not caused by a built-in instinct, but because a rational agent will maximize expected utility and cannot achieve whatever objective it has been given if it is dead. Our goal is to study the incentives an agent has to allow itself to be switched off. We analyze a simple game between a human H and a robot R, where H can press R’s off switch but R can disable the off switch. A traditional agent takes its reward function for granted: we show that such agents have an incentive to disable the off switch, except in the special case where H is perfectly rational. Our key insight is that for R to want to preserve its off switch, it needs to be uncertain about the utility associated with the outcome, and to treat H’s actions as important observations about that utility. (R also has no incentive to switch itself off in this setting.) We conclude that giving machines an appropriate level of uncertainty about their objectives leads to safer designs, and we argue that this setting is a useful generalization of the classical AI paradigm of rational agents.",https://www.ijcai.org/proceedings/2017/32,2017,conferencePaper,"Hadfield-Menell, Dylan; Dragan, Anca; Abbeel, Pieter; Russell, Stuart",Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence Boomerang Effects in Science Communication: How Motivated Reasoning and Identity Cues Amplify Opinion Polarization About Climate Mitigation Policies,,http://journals.sagepub.com/doi/10.1177/0093650211416646,2012,journalArticle,"Hart, P. Sol; Nisbet, Erik C.",Communication Research Incorrigibility in the CIRL Framework,"A value learning system has incentives to follow shutdown instructions, assuming the shutdown instruction provides information (in the technical sense) about which actions lead to valuable outcomes. However, this assumption is not robust to model mis-specification (e.g., in the case of programmer errors). We demonstrate this by presenting some Supervised POMDP scenarios in which errors in the parameterized reward function remove the incentive to follow shutdown commands. These difficulties parallel those discussed by Soares et al. (2015) in their paper on corrigibility. We argue that it is important to consider systems that follow shutdown commands under some weaker set of assumptions (e.g., that one small verified module is correctly implemented; as opposed to an entire prior probability distribution and/or parameterized reward function). 
We discuss some difficulties with simple ways to attempt to attain these sorts of guarantees in a value learning framework.",http://arxiv.org/abs/1709.06275,2018,conferencePaper,"Carey, Ryan","AIES '18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society" "Safety engineering, target selection, and alignment theory","Artificial intelligence capabilities research is aimed at making computer systems more intelligent — able to solve a wider range of problems more effectively and efficiently. We can distinguish this from research specifically aimed at making AI systems at various capability levels safer, or more “robust and beneficial.” In this post, I distinguish three kinds of direct... Read more »",https://intelligence.org/2015/12/31/safety-engineering-target-selection-and-alignment-theory/,2015,blogPost,"Soares, Nate",Machine Intelligence Research Institute A Survey of Data Mining and Machine Learning Methods for Cyber Security Intrusion Detection,,https://ieeexplore.ieee.org/document/7307098/,2016,journalArticle,"Buczak, Anna L.; Guven, Erhan",IEEE Communications Surveys & Tutorials Sleeping Beauty and Self-location: A Hybrid Model,,http://link.springer.com/10.1007/s11229-006-9010-7,2007,journalArticle,"Bostrom, Nick",Synthese Performance of Bounded-Rational Agents With the Ability to Self-Modify,"Self-modification of agents embedded in complex environments is hard to avoid, whether it happens via direct means (e.g. own code modification) or indirectly (e.g. influencing the operator, exploiting bugs or the environment). While it has been argued that intelligent agents have an incentive to avoid modifying their utility function so that their future instances will work towards the same goals, it is not clear whether this also applies in non-dualistic scenarios, where the agent is embedded in the environment. The problem of self-modification safety is raised by Bostrom in Superintelligence (2014) in the context of safe AGI deployment. In contrast to Everitt et al. (2016), who formally show that providing an option to self-modify is harmless for perfectly rational agents, we show that for agents with bounded rationality, self-modification may cause exponential deterioration in performance and gradual misalignment of a previously aligned agent. We investigate how the size of this effect depends on the type and magnitude of imperfections in the agent's rationality (1-4 below). We also discuss model assumptions and the wider problem and framing space. Specifically, we introduce several types of a bounded-rational agent, which either (1) doesn't always choose the optimal action, (2) is not perfectly aligned with human values, (3) has an innacurate model of the environment, or (4) uses the wrong temporal discounting factor. We show that while in the cases (2)-(4) the misalignment caused by the agent's imperfection does not worsen over time, with (1) the misalignment may grow exponentially.",http://arxiv.org/abs/2011.06275,2020,manuscript,"Tětek, Jakub; Sklenka, Marek; Gavenčiak, Tomáš", "Subagents, introspective awareness, and blending","In this post, I extend the model of mind that I've been building up in previous posts to explain some things about change blindness, not knowing whether you are conscious, forgetting most of your thoughts, and mistaking your thoughts and emotions as objective facts, while also connecting it with the theory in the meditation book The Mind Illuminated. 
(If you didn't read my previous posts, this article has been written to also work as a stand-alone piece.) The Mind Illuminated (Amazon, SSC review), or TMI for short, presents what it calls the moments of consciousness model. According to this model, our stream of consciousness consists of a series of discrete moments, each a mental object. Under this model, there are always different “subminds” which are projecting mental objects into consciousness. At different moments, different mental objects get selected as the content of consciousness. If you’ve read some of the previous posts in this sequence, you may recognize this as sounding familiar. We started by discussing some of the neuroscience research on consciousness. There we covered the GWT/GNW theory of consciousness being a “workspace” in the brain that different brain systems project information into, and which allows them to synchronize their processing around a single piece of information. In the next post, we discussed the psychotherapy model of Internal Family Systems, which also conceives the mind of being composed of different parts, many of which are trying to accomplish various aims by competing to project various mental objects into consciousness. (TMI talks about subminds, IFS talks about parts, GWT/GNW just talks about different parts of the brain; for consistency’s sake, I will just use “subagent” in the rest of this post.) At this point, we might want to look at some criticisms of this kind of a framework. Susan Blackmore has written an interesting paper called “There is no stream of consciousness”. She has several examples for why we should rejec",https://www.lesswrong.com/posts/AhcEaqWYpa2NieNsK/subagents-introspective-awareness-and-blending,2019,blogPost,"Sotala, Kaj",LessWrong A citizen's guide to artificial intelligence,"""An accessible overview of the threats and opportunities inherent in automated decision making in academia, government, and industry""--",,2020,book,"Zerilli, John; Danaher, John; Maclaurin, James; Gavaghan, Colin; Knott, Alistair; Liddicoat, Joy; Noorman, Merel E.", The easy goal inference problem is still hard,"Posted as part of the AI Alignment Forum sequence on Value Learning. Rohin’s note: In this post (original here), Paul Christiano analyzes the ambitious value learning approach. He considers a more general view of ambitious value learning where you infer preferences more generally (i.e. not necessarily in the form of a utility function), and you can ask the user about their preferences, but it’s fine to imagine that you infer a utility function from data and then optimize it. The key takeaway is that in order to infer preferences that can lead to superhuman performance, it is necessary to understand how humans are biased, which seems very hard to do even with infinite data. -------------------------------------------------------------------------------- One approach to the AI control problem goes like this: 1. Observe what the user of the system says and does. 2. Infer the user’s preferences. 3. Try to make the world better according to the user’s preference, perhaps while working alongside the user and asking clarifying questions. This approach has the major advantage that we can begin empirical work today — we can actually build systems which observe user behavior, try to figure out what the user wants, and then help with that. There are many applications that people care about already, and we can set to work on making rich toy models.
It seems great to develop these capabilities in parallel with other AI progress, and to address whatever difficulties actually arise, as they arise. That is, in each domain where AI can act effectively, we’d like to ensure that AI can also act effectively in the service of goals inferred from users (and that this inference is good enough to support foreseeable applications). This approach gives us a nice, concrete model of each difficulty we are trying to address. It also provides a relatively clear indicator of whether our ability to control AI lags behind our ability to build it. And by being technically interesting an",https://www.alignmentforum.org/posts/h9DesGT3WT9u2k7Hr/the-easy-goal-inference-problem-is-still-hard,2018,blogPost,"Christiano, Paul",AI Alignment Forum Better priors as a safety problem,Many universal priors are inefficient in the finite data regime. I argue that’s a safety problem and we should try to fix it directly.,https://ai-alignment.com/better-priors-as-a-safety-problem-24aa1c300710,2020,blogPost,"Christiano, Paul",AI Alignment (Medium) Protecting the Ozone Layer: The United Nations History,,https://www.taylorfrancis.com/books/9781849772266,2012,book,"Andersen, Stephen O", Deconfusing Human Values Research Agenda v1,"On Friday I attended the 2020 Foresight AGI Strategy Meeting. Eventually a report will come out summarizing some of what was talked about, but for now I want to focus on what I talked about in my session on deconfusing human values. For that session I wrote up some notes summarizing what I've been working on and thinking about. None of it is new, but it is newly condensed in one place and in convenient list form, and it provides a decent summary of the current state of my research agenda for building beneficial superintelligent AI; a version 1 of my agenda, if you will. Thus, I hope this will be helpful in making it a bit clearer what it is I'm working on, why I'm working on it, and what direction my thinking is moving in. As always, if you're interested in collaborating on things, whether that be discussing ideas or something more, please reach out. PROBLEM OVERVIEW * I think we're confused about what we really mean when we talk about human values. * This is a problem because: * building aligned AI likely requires a mathematically precise understanding of the structure of human values, though not necessarily the content of human values; we can't trust AI to discover that structure for us because we would need to understand it enough to verify the result, and I think we're so confused about what human values are we couldn't do that without high risk of error. * What are values? * We don't have an agreed upon precise definition, but loosely it's ""stuff people care about"". * When I talk about ""values"" I mean the cluster we sometimes also point at with words like value, preference, affinity, taste, aesthetic, intention, and axiology. Importantly, what people care about is used to make decisions, and this has had implications for existing approaches to understanding values.
* Much research on values tries to understand the content of human values or why humans value what they value, but not what the structure of human",https://www.alignmentforum.org/posts/k8F8TBzuZtLheJt47/deconfusing-human-values-research-agenda-v1,2020,blogPost,G Gordon Worley III,AI Alignment Forum A Psychopathological Approach to Safety Engineering in AI and AGI,"The complexity of dynamics in AI techniques is already approaching that of complex adaptive systems, thus curtailing the feasibility of formal controllability and reachability analysis in the context of AI safety. It follows that the envisioned instances of Artificial General Intelligence (AGI) will also suffer from challenges of complexity. To tackle such issues, we propose the modeling of deleterious behaviors in AI and AGI as psychological disorders, thereby enabling the employment of psychopathological approaches to analysis and control of misbehaviors. Accordingly, we present a discussion on the feasibility of the psychopathological approaches to AI safety, and propose general directions for research on modeling, diagnosis, and treatment of psychological disorders in AGI.",https://arxiv.org/abs/1805.08915v1,2018,conferencePaper,"Behzadan, Vahid; Munir, Arslan; Yampolskiy, Roman V.", "War, Peace and International Relations: An introduction to strategic history",,https://www.taylorfrancis.com/books/9780203180952,2013,book,"Gray, Colin", Motivating the Rules of the Game for Adversarial Example Research,"Advances in machine learning have led to broad deployment of systems with impressive performance on important problems. Nonetheless, these systems can be induced to make errors on data that are surprisingly similar to examples the learned system handles correctly. The existence of these errors raises a variety of questions about out-of-sample generalization and whether bad actors might use such examples to abuse deployed systems. As a result of these security concerns, there has been a flurry of recent papers proposing algorithms to defend against such malicious perturbations of correctly handled examples. It is unclear how such misclassifications represent a different kind of security problem than other errors, or even other attacker-produced examples that have no specific relationship to an uncorrupted input. In this paper, we argue that adversarial example defense papers have, to date, mostly considered abstract, toy games that do not relate to any specific security concern. Furthermore, defense papers have not yet precisely described all the abilities and limitations of attackers that would be relevant in practical security. Towards this end, we establish a taxonomy of motivations, constraints, and abilities for more plausible adversaries. Finally, we provide a series of recommendations outlining a path forward for future work to more clearly articulate the threat model and perform more meaningful evaluation.",http://arxiv.org/abs/1807.06732,2018,manuscript,"Gilmer, Justin; Adams, Ryan P.; Goodfellow, Ian; Andersen, David; Dahl, George E.", One-Shot Hierarchical Imitation Learning of Compound Visuomotor Tasks,"We consider the problem of learning multi-stage vision-based tasks on a real robot from a single video of a human performing the task, while leveraging demonstration data of subtasks with other objects. This problem presents a number of major challenges. Video demonstrations without teleoperation are easy for humans to provide, but do not provide any direct supervision. 
Learning policies from raw pixels enables full generality but calls for large function approximators with many parameters to be learned. Finally, compound tasks can require impractical amounts of demonstration data, when treated as a monolithic skill. To address these challenges, we propose a method that learns both how to learn primitive behaviors from video demonstrations and how to dynamically compose these behaviors to perform multi-stage tasks by ""watching"" a human demonstrator. Our results on a simulated Sawyer robot and real PR2 robot illustrate our method for learning a variety of order fulfillment and kitchen serving tasks with novel objects and raw pixel inputs.",http://arxiv.org/abs/1810.11043,2018,manuscript,"Yu, Tianhe; Abbeel, Pieter; Levine, Sergey; Finn, Chelsea", Partial Awareness,"We develop a modal logic to capture partial awareness. The logic has three building blocks: objects, properties, and concepts. Properties are unary predicates on objects; concepts are Boolean combinations of properties. We take an agent to be partially aware of a concept if she is aware of the concept without being aware of the properties that define it. The logic allows for quantification over objects and properties, so that the agent can reason about her own unawareness. We then apply the logic to contracts, which we view as syntactic objects that dictate outcomes based on the truth of formulas. We show that when agents are unaware of some relevant properties, referencing concepts that agents are only partially aware of can improve welfare.",https://aaai.org/ojs/index.php/AAAI/article/view/4138,2019,conferencePaper,"Halpern, Joseph Y.; Piermont, Evan",Proceedings of the AAAI Conference on Artificial Intelligence Responsible AI—Two Frameworks for Ethical Design Practice,"In 2019, the IEEE launched the P7000 standards projects intended to address ethical issues in the design of autonomous and intelligent systems. This move came amidst a growing public concern over the unintended consequences of artificial intelligence (AI), compounded by the lack of an anticipatory process for attending to ethical impact within professional practice. However, the difficulty in moving from principles to practice presents a significant challenge to the implementation of ethical guidelines. Herein, we describe two complementary frameworks for integrating ethical analysis into engineering practice to help address this challenge. We then provide the outcomes of an ethical analysis informed by these frameworks, conducted within the specific context of Internet-delivered therapy in digital mental health. We hope both the frameworks and analysis can provide tools and insights, not only for the context of digital healthcare but also for data-enabled and intelligent technology development more broadly.",https://ieeexplore.ieee.org/document/9001063/,2020,journalArticle,"Peters, Dorian; Vold, Karina; Robinson, Diana; Calvo, Rafael A.",IEEE Transactions on Technology and Society Soft Actor-Critic Algorithms and Applications,"Model-free deep reinforcement learning (RL) algorithms have been successfully applied to a range of challenging sequential decision making and control tasks. However, these methods typically suffer from two major challenges: high sample complexity and brittleness to hyperparameters. Both of these challenges limit the applicability of such methods to real-world domains. 
In this paper, we describe Soft Actor-Critic (SAC), our recently introduced off-policy actor-critic algorithm based on the maximum entropy RL framework. In this framework, the actor aims to simultaneously maximize expected return and entropy. That is, to succeed at the task while acting as randomly as possible. We extend SAC to incorporate a number of modifications that accelerate training and improve stability with respect to the hyperparameters, including a constrained formulation that automatically tunes the temperature hyperparameter. We systematically evaluate SAC on a range of benchmark tasks, as well as real-world challenging tasks such as locomotion for a quadrupedal robot and robotic manipulation with a dexterous hand. With these improvements, SAC achieves state-of-the-art performance, outperforming prior on-policy and off-policy methods in sample-efficiency and asymptotic performance. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving similar performance across different random seeds. These results suggest that SAC is a promising candidate for learning in real-world robotics tasks.",http://arxiv.org/abs/1812.05905,2019,manuscript,"Haarnoja, Tuomas; Zhou, Aurick; Hartikainen, Kristian; Tucker, George; Ha, Sehoon; Tan, Jie; Kumar, Vikash; Zhu, Henry; Gupta, Abhishek; Abbeel, Pieter; Levine, Sergey", The “big red button” is too late: an alternative model for the ethical evaluation of AI systems,,http://link.springer.com/10.1007/s10676-018-9447-7,2018,journalArticle,"Arnold, Thomas; Scheutz, Matthias",Ethics and Information Technology Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance,"Achieving the global benefits of artificial intelligence (AI) will require international cooperation on many areas of governance and ethical standards, while allowing for diverse cultural perspectives and priorities. There are many barriers to achieving this at present, including mistrust between cultures, and more practical challenges of coordinating across different locations. This paper focuses particularly on barriers to cooperation between Europe and North America on the one hand and East Asia on the other, as regions which currently have an outsized impact on the development of AI ethics and governance. We suggest that there is reason to be optimistic about achieving greater cross-cultural cooperation on AI ethics and governance. We argue that misunderstandings between cultures and regions play a more important role in undermining cross-cultural trust, relative to fundamental disagreements, than is often supposed. Even where fundamental differences exist, these may not necessarily prevent productive cross-cultural cooperation, for two reasons: (1) cooperation does not require achieving agreement on principles and standards for all areas of AI; and (2) it is sometimes possible to reach agreement on practical issues despite disagreement on more abstract values or principles. We believe that academia has a key role to play in promoting cross-cultural cooperation on AI ethics and governance, by building greater mutual understanding, and clarifying where different forms of agreement will be both necessary and possible. 
We make a number of recommendations for practical steps and initiatives, including translation and multilingual publication of key documents, researcher exchange programmes, and development of research agendas on cross-cultural topics.",http://link.springer.com/10.1007/s13347-020-00402-x,2020,journalArticle,"ÓhÉigeartaigh, Seán S.; Whittlestone, Jess; Liu, Yang; Zeng, Yi; Liu, Zhe",Philosophy & Technology