filename | text
---|---
2008.05912.pdf | Published as a conference paper at ICLR 2021
A STATISTICAL THEORY OF COLD POSTERIORS IN DEEP
NEURAL NETWORKS
Laurence Aitchison
Department of Computer Science,
University of Bristol,
Bristol, UK, F94W 9Q
laurence.aitchison@bristol.ac.uk
ABSTRACT
To get Bayesian neural networks to perform comparably to standard neural net-
works it is usually necessary to artificially reduce uncertainty using a “tempered” or
“cold” posterior. This is extremely concerning: if the generative model is accurate,
Bayesian inference/decision theory is optimal, and any artificial changes to the
posterior should harm performance. While this suggests that the prior may be
at fault, here we argue that in fact, BNNs for image classification use the wrong
likelihood. In particular, standard image benchmark datasets such as CIFAR-10 are
carefully curated. We develop a generative model describing curation which gives
a principled Bayesian account of cold posteriors, because the likelihood under this
new generative model closely matches the tempered likelihoods used in past work.
1 INTRODUCTION
Recent work has highlighted that Bayesian neural networks (BNNs) typically have better predictive
performance when we “sharpen” the posterior (Wenzel et al., 2020). In stochastic gradient Langevin
dynamics (SGLD) (Welling & Teh, 2011), this can be achieved by multiplying the log-posterior by
1/T, where the “temperature” T is smaller than 1 (Wenzel et al., 2020). Broadly the same effect can
be achieved in variational inference by “tempering”, i.e. downweighting the KL term. As noted in
Wenzel et al. (2020), this approach has been used in many recent papers to obtain good performance,
albeit without always emphasising the importance of this factor (Zhang et al., 2017; Bae et al., 2018;
Osawa et al., 2019; Ashukha et al., 2020).
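As a rough illustration of what tempering means in practice, the sketch below runs SGLD on a cold posterior p(θ | D)^{1/T} by scaling the log-joint gradient by 1/T. This is a toy 1-D Gaussian model with made-up names (log_joint_grad, eta, T), not the authors' code.

```python
# Illustrative sketch only (not the paper's code): SGLD targeting a
# "cold" posterior p(theta | D)^(1/T) with T < 1. Toy 1-D Gaussian model.
import numpy as np

rng = np.random.default_rng(0)

def log_joint_grad(theta, x, y):
    # y ~ N(theta * x, 1), prior theta ~ N(0, 1)
    resid = y - theta * x
    return np.sum(resid * x) - theta   # d/dtheta [log likelihood + log prior]

def sgld_step(theta, x, y, eta=1e-3, T=0.1):
    # Langevin step on the tempered target exp(log_joint / T):
    # the log-joint gradient is scaled by 1/T, the injected noise is unchanged.
    grad = log_joint_grad(theta, x, y) / T
    noise = rng.normal(scale=np.sqrt(eta))
    return theta + 0.5 * eta * grad + noise

x = rng.normal(size=100)
y = 2.0 * x + rng.normal(size=100)
theta = 0.0
for _ in range(5000):
    theta = sgld_step(theta, x, y)
print("last sample:", theta)
```

With T = 1 this reduces to standard SGLD; T < 1 concentrates the samples around the mode, which is the cold-posterior effect discussed above.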
These results are puzzling if we take the usual Bayesian viewpoint, which says that the Bayesian
posterior, used with the right prior, and in combination with Bayes decision theory should give optimal
performance (Jaynes, 2003). Thus, these results may suggest we are using the wrong prior. While
new priors have been suggested (e.g. Ober & Aitchison, 2020), they give only minor improvements
in performance — certainly nothing like enough to close the gap to carefully trained non-Bayesian
networks. In contrast, tempered posteriors directly give performance comparable to a carefully trained
finite network.
The failure to develop an effective prior suggests that we should consider alternative explanations for
the effectiveness of tempering. Here, we consider the possibility that it is predominantly (but not
entirely) the likelihood, and not the prior, that is at fault. In particular, we note that standard image
benchmark datasets such as ImageNet and CIFAR-10 are carefully curated, and that it is important
to consider this curation as part of our generative model. We develop a simplified generative model
describing dataset curation which assumes that a datapoint is included in the dataset only if there is
unanimous agreement on the class amongst multiple labellers. This model naturally multiplies the
effect of each datapoint, and hence gives posteriors that closely match tempered or cold posteriors. We
show that toy data drawn from our generative model of curation can give rise to optimal temperatures
being smaller than 1. Our model predicts that cold posteriors will not be helpful when the original
underlying labels from all labellers are available. While these are not available for standard datasets
such as CIFAR-10, we found a good proxy: the CIFAR-10H dataset (Peterson et al., 2019), in which
∼50 human annotators labelled the CIFAR-10 test set (we use these as our training set, and use
the standard CIFAR-10 training set for test-data). As expected, we find strong cold-posterior effects
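The key consequence of the curation model can be stated in one line: if a point (x, y) enters the dataset only when S independent labellers, each drawing from p(y|x), unanimously assign label y, then the probability of that consensus is p(y|x)^S, i.e. each datapoint's log-likelihood is multiplied by S, mirroring a tempered likelihood with T ≈ 1/S. A minimal numerical check of this reading (our own toy class probabilities, not the paper's code):

```python
# Toy check (not from the paper): unanimous agreement among S labellers
# multiplies the per-datapoint likelihood, like tempering with T = 1/S.
import numpy as np

p = np.array([0.7, 0.2, 0.1])   # hypothetical p(y | x) over three classes
S = 5                           # number of independent labellers

consensus = p ** S              # probability that all S labellers pick class y
print("p(consensus on each class):", consensus)
print("same as exp(S * log p):    ", np.exp(S * np.log(p)))
```

Conditioning on the event that some consensus occurred simply renormalizes these values; the multiplicative effect on the likelihood is what remains.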
|
2310.05915.pdf | FI R EAC T: TOWARD LANGUAGE AGENT FINE-TUNING
Baian Chen∗
System2 Research
Chang Shu∗
University of Cambridge
Ehsan Shareghi
Monash University
Nigel Collier
University of Cambridge
Karthik Narasimhan
PLI, Princeton University
Shunyu Yao
PLI, Princeton University
ABSTRACT
Recent efforts have augmented language models (LMs) with external tools or en-
vironments, leading to the development of language agents that can reason and
act. However, most of these agents rely on few-shot prompting techniques with
off-the-shelf LMs. In this paper, we investigate and argue for the overlooked di-
rection of fine-tuning LMs to obtain language agents. Using a setup of question
answering (QA) with a Google search API, we explore a variety of base LMs,
prompting methods, fine-tuning data, and QA tasks, and find language agents
are consistently improved after fine-tuning their backbone LMs. For example,
fine-tuning Llama2-7B with 500 agent trajectories generated by GPT-4 leads to
a 77% HotpotQA performance increase. Furthermore, we propose FireAct ,
a novel approach to fine-tuning LMs with trajectories from multiple tasks and
prompting methods, and show having more diverse fine-tuning data can further
improve agents. Along with other findings regarding scaling effects, robustness,
generalization, efficiency and cost, our work establishes comprehensive benefits
of fine-tuning LMs for agents, and provides an initial set of experimental designs,
insights, as well as open questions toward language agent fine-tuning.
1 INTRODUCTION
Recent work has explored grounding language models (LMs; Brown et al., 2020; Chowdhery et al.,
2022; Touvron et al., 2023a) to interact with external tools or environments, leading to a new class
oflanguage agents (Nakano et al., 2021; Yao et al., 2022b; Park et al., 2023) that could obtain
new knowledge from environmental feedback, make sequential decisions via language reasoning,
and improve task solving using self-reflection (Shinn et al., 2023; Wang et al., 2023a). Beyond
research, industrial developments such as ChatGPT Plugins (OpenAI, 2023c) have indicated the
great potential of language agents for real-world applications.
So far, most language agents prompt off-the-shelf LMs for convenience and flexibility. However,
existing LMs were not developed for agentic use cases (e.g., generating actions or self-evaluations),
for which few-shot prompting only offers limited learning support. As a result, most LMs have poor
performance and robustness when used for agents, and some advanced agents (Yao et al., 2023;
Wang et al., 2023a) can only be supported by GPT-4 (OpenAI, 2023b), resulting in high costs and
latencies, along with issues like controllability and reproducibility.
Fine-tuning is an appropriate solution for these issues: it has been shown that fine-tuned smaller LMs
could outperform prompted larger LMs for specific reasoning (Zelikman et al., 2022; Huang et al.,
2022a) and acting (Yao et al., 2022b) needs, while enjoying reduced inference time and expense.
But the study of LM fine-tuning for agents has been very limited, despite the large amount of studies
around language agents and LM fine-tuning respectively (Figure 1). Only a few prior works have
fine-tuned LMs for web navigation (Nakano et al., 2021; Yao et al., 2022a) or API tool use (Schick
et al., 2023; Patil et al., 2023; Qin et al., 2023), with preliminary scaling analysis specific to a type
of models (Yao et al., 2022b; Schick et al., 2023; Nakano et al., 2021).
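To make the setup concrete, fine-tuning an LM "for agents" amounts to supervised training on full reasoning-and-acting trajectories rather than single question-answer pairs. The snippet below is only a hypothetical illustration of what one such training example might look like; the field names and trajectory text are invented for exposition and are not the released FireAct data format.

```python
# Hypothetical ReAct-style trajectory rendered as a fine-tuning example.
# Field names and content are illustrative only, not the FireAct format.
example = {
    "prompt": "Question: Which country hosted the first modern Olympic Games?\n",
    "completion": (
        "Thought: I should search for the first modern Olympic Games.\n"
        "Action: search[first modern Olympic Games host country]\n"
        "Observation: The 1896 Summer Olympics were held in Athens, Greece.\n"
        "Thought: The host country was Greece.\n"
        "Action: finish[Greece]\n"
    ),
}

# A standard supervised fine-tuning recipe would tokenize prompt + completion
# and compute the loss only on the completion tokens.
print(example["prompt"] + example["completion"])
```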
∗Equal contribution. Code, data, and models are available at https://fireact-agent.github.io .
|
2112.07806.pdf | Published in Transactions on Machine Learning Research (09/2022)
Representation Alignment in Neural Networks
Ehsan Imani imani@ualberta.ca
University of Alberta
Wei Hu vvh@umich.edu
University of Michigan
Martha White whitem@ualberta.ca
University of Alberta
CIFAR AI Chair
Reviewed on OpenReview: https://openreview.net/forum?id=fLIWMnZ9ij
Abstract
It is now a standard for neural network representations to be trained on large, publicly
available datasets, and used for new problems. The reasons for why neural network represen-
tations have been so successful for transfer, however, are still not fully understood. In this
paper we show that, after training, neural network representations align their top singular
vectors to the targets. We investigate this representation alignment phenomenon in a variety
of neural network architectures and find that (a) alignment emerges across a variety of
different architectures and optimizers, with more alignment arising from depth, (b) alignment
increases for layers closer to the output and (c) existing high-performance deep CNNs exhibit
high levels of alignment. We then highlight why alignment between the top singular vectors
and the targets can speed up learning and show in a classic synthetic transfer problem
that representation alignment correlates with positive and negative transfer to similar and
dissimilar tasks. A demo is available at https://github.com/EhsanEI/rep-align-demo.
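One simple way to read the alignment claim quantitatively: take the matrix of hidden representations Φ (examples × features), compute its SVD, and ask how much of the target vector's energy falls on the top singular vectors. The sketch below uses random data and our own notation, not the paper's exact metric, to illustrate the measurement.

```python
# Sketch of measuring how much of the target lies in the span of the
# top-k left singular vectors of a representation matrix Phi.
# Random data here; in the paper Phi would be a layer's activations.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 64
Phi = rng.normal(size=(n, d))           # hidden representations (n examples x d units)
y = rng.normal(size=n)                  # regression targets (or +/-1 labels)

U, s, Vt = np.linalg.svd(Phi, full_matrices=False)

def top_k_energy(U, y, k):
    proj = U[:, :k].T @ y               # coordinates of y on the top-k singular vectors
    return np.sum(proj ** 2) / np.sum(y ** 2)

for k in (1, 5, 20, d):
    print(f"fraction of target energy in top {k} directions: {top_k_energy(U, y, k):.3f}")
```

High values at small k after training are what the paper refers to as the representations aligning their top singular vectors to the targets.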
1 Introduction
A common strategy for transfer learning is to first learn a neural network on a source (upstream) task with
a large amount of data, then extract features from an intermediate layer of that network and finally train
a subsequent model on a related target (downstream) task using those extracted features. The premise is
that neural networks adapt their intermediate representations—hidden representations—to the source task
and, due to the commonalities between the two tasks, these learned representations help training on the
target task (Bengio et al., 2013). Availability of large datasets like ImageNet (Russakovsky et al., 2015) and
the News Dataset for Word2Vec (Mikolov et al., 2013) provides suitable source tasks that facilitate using
neural networks for feature construction for Computer Vision and Natural Language Processing (NLP) tasks
(Kornblith et al., 2019; Oquab et al., 2014; Devlin et al., 2018; Pennington et al., 2014).
There is as yet much more to understand about when and why transfer is successful. Understanding
the properties of the learned hidden representations and their benefits for training on similar tasks has
remained a longstanding challenge (Touretzky & Pomerleau, 1989; Zhou et al., 2015; Marcus, 2018). One
strategy has been to define properties of a good representation, and try to either measure or enforce those
properties. Disentanglement and invariance are two such properties (Bengio et al., 2013), where the idea is
that disentangling the factors that explain the data and are invariant to most local changes of the input results
in representations that generalize and transfer well. Though encoding properties for transfer is beneficial, it
remains an important question exactly how to evaluate the representations that do emerge.
|
2212.14052v3.pdf | Hungry Hungry Hippos: Towards Language Modeling with State
Space Models
Daniel Y. Fu∗†, Tri Dao∗†, Khaled K. Saab‡, Armin W. Thomas††,
Atri Rudra‡‡, and Christopher Ré†
†Department of Computer Science, Stanford University
‡Department of Electrical Engineering, Stanford University
††Department of Psychology, Stanford University
‡‡Department of Computer Science and Engineering, University at Buffalo, SUNY
{danfu,tridao}@cs.stanford.edu, {ksaab,athms}@stanford.edu, atri@buffalo.edu,
chrismre@cs.stanford.edu
December 28, 2022
Abstract
State space models (SSMs) have demonstrated state-of-the-art sequence modeling performance in
some modalities, but underperform attention in language modeling. Moreover, despite scaling nearly
linearly in sequence length instead of quadratically, SSMs are still slower than Transformers due to poor
hardware utilization. In this paper, we make progress on understanding the expressivity gap between
SSMs and attention in language modeling, and on reducing the hardware barrier between SSMs and
attention. First, we use synthetic language modeling tasks to understand the gap between SSMs and
attention. We find that existing SSMs struggle with two capabilities: recalling earlier tokens in the
sequence and comparing tokens across the sequence. To understand the impact on language modeling, we
propose a new SSM layer, H3, that is explicitly designed for these abilities. H3 matches attention on
the synthetic languages and comes within 0.4 PPL of Transformers on OpenWebText. Furthermore, a
hybrid 125M-parameter H3-attention model that retains two attention layers surprisingly outperforms
Transformers on OpenWebText by 1.0 PPL. Next, to improve the efficiency of training SSMs on modern
hardware, we propose FlashConv. FlashConv uses a fused block FFT algorithm to improve efficiency on
sequences up to 8K, and introduces a novel state passing algorithm that exploits the recurrent properties
of SSMs to scale to longer sequences. FlashConv yields 2× speedup on the long-range arena benchmark
and allows hybrid language models to generate text 2.4× faster than Transformers. Using FlashConv,
we scale hybrid H3-attention language models up to 2.7B parameters on the Pile and find promising
initial results, achieving lower perplexity than Transformers and outperforming Transformers in zero- and
few-shot learning on a majority of tasks in the SuperGLUE benchmark.
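For readers unfamiliar with SSM layers, the underlying computation is a linear recurrence x_t = A x_{t-1} + B u_t, y_t = C x_t, which for fixed A, B, C can also be unrolled as a 1-D convolution of the input with a kernel built from powers of A. The sketch below is a minimal diagonal SSM with toy sizes and random parameters; it is not H3 itself (H3 stacks shift and diagonal SSMs with multiplicative gating).

```python
# Minimal single-input/single-output SSM with a diagonal state matrix.
# Toy parameters; illustrates the recurrent and convolutional views.
import numpy as np

rng = np.random.default_rng(0)
state_dim, seq_len = 8, 16
A = np.diag(rng.uniform(0.5, 0.95, size=state_dim))   # stable diagonal dynamics
B = rng.normal(size=(state_dim, 1))
C = rng.normal(size=(1, state_dim))
u = rng.normal(size=seq_len)

# Recurrent view: x_t = A x_{t-1} + B u_t,  y_t = C x_t
x = np.zeros((state_dim, 1))
y_rec = []
for t in range(seq_len):
    x = A @ x + B * u[t]
    y_rec.append((C @ x).item())

# Convolutional view: y = conv(u, K) with K_j = C A^j B
K = np.array([(C @ np.linalg.matrix_power(A, j) @ B).item() for j in range(seq_len)])
y_conv = [float(np.dot(K[: t + 1][::-1], u[: t + 1])) for t in range(seq_len)]

print(np.allclose(y_rec, y_conv))   # True: the two views agree
```

The convolutional view is what FlashConv accelerates with fused block FFTs; the recurrent view is what its state-passing algorithm exploits for very long sequences.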
1 Introduction
State space models (SSMs) have achieved state-of-the-art sequence modeling performance in domains ranging
from time series analysis [ 25] to audio generation [ 22]. However, they have yet to match the performance of
Transformers on language modeling, often underperforming Transformers by multiple points in perplexity [ 25].
A natural question is whether this gap in performance is due to inherent inductive biases and capabilities
in attention [ 17,49], or whether it is a function of the significant organizational resources that have been
spent training and tuning large attention-based language models [ 10,32,66], as well as specialized hardware
support for attention, ranging from tensor cores [45] to transformer chips [34, 48].
We take first steps towards answering these questions in this paper. First, we use synthetic language
modeling tasks to show that there is an expressivity gap between SSMs and attention. Using our insights,
∗Equal Contribution. Order determined by coin flip.
|
2202.07789.pdf | Safe Reinforcement Learning by Imagining the Near
Future
Garrett Thomas
Stanford University
gwthomas@stanford.edu
Yuping Luo
Princeton University
yupingl@cs.princeton.edu
Tengyu Ma
Stanford University
tengyuma@stanford.edu
Abstract
Safe reinforcement learning is a promising path toward applying reinforcement
learning algorithms to real-world problems, where suboptimal behaviors may lead
to actual negative consequences. In this work, we focus on the setting where
unsafe states can be avoided by planning ahead a short time into the future. In
this setting, a model-based agent with a sufficiently accurate model can avoid
unsafe states. We devise a model-based algorithm that heavily penalizes unsafe
trajectories, and derive guarantees that our algorithm can avoid unsafe states under
certain assumptions. Experiments demonstrate that our algorithm can achieve
competitive rewards with fewer safety violations in several continuous control
tasks.
1 Introduction
Reinforcement learning (RL) enables the discovery of effective policies for sequential decision-
making tasks via trial and error [Mnih et al., 2015, Gu et al., 2016, Bellemare et al., 2020]. However,
in domains such as robotics, healthcare, and autonomous driving, certain kinds of mistakes pose
danger to people and/or objects in the environment. Hence there is an emphasis on the safety of the
policy, both at execution time and while interacting with the environment during learning. This issue,
referred to as safe exploration , is considered an important problem in AI safety [Amodei et al., 2016].
In this work, we advocate a model-based approach to safety, meaning that we estimate the dynamics of
the system to be controlled and use the model for planning (or more accurately, policy improvement).
The primary motivation for this is that a model-based method has the potential to anticipate safety
violations before they occur . Often in real-world applications, the engineer has an idea of what
states should be considered violations of safety: for example, a robot colliding rapidly with itself or
surrounding objects, a car driving on the wrong side of the road, or a patient’s blood glucose levels
spiking. Yet model-free algorithms typically lack the ability to incorporate such prior knowledge and
must encounter some safety violations before learning to avoid them.
We begin with the premise that in practice, forward prediction for relatively few timesteps is sufficient
to avoid safety violations. Consider the illustrative example in Figure 1, in which an agent controls
the acceleration (and thereby, speed) of a car by pressing the gas or brake (or nothing). Note that
there is an upper bound on how far into the future the agent would have to plan to foresee and (if
possible) avoid any collision, namely, the amount of time it takes to bring the car to a complete stop.
Assuming that the horizon required for detecting unsafe situations is not too large, we show how
to construct a reward function with the property that an optimal policy will never incur a safety
violation. A short prediction horizon is also beneficial for model-based RL, as the well-known issue
of compounding error plagues long-horizon prediction [Asadi et al., 2019]: imperfect predictions
are fed back into the model as inputs (possibly outside the distribution of inputs in the training data),
leading to progressively worse accuracy as the prediction horizon increases.
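A concrete way to read the construction: keep the task reward on safe states, replace it with a large negative constant on any predicted-unsafe state, and evaluate candidate actions by rolling a learned model forward only H steps. The sketch below uses toy 1-D "car" dynamics and hand-picked constants; it illustrates the short-horizon safety check, not the paper's exact algorithm or guarantees.

```python
# Toy sketch: score actions by a short imagined rollout and heavily penalize
# any trajectory that reaches an unsafe state. Dynamics, horizon, and the
# penalty value are illustrative, not the paper's construction.
import numpy as np

def step(state, action, dt=0.1):
    pos, vel = state
    return np.array([pos + vel * dt, vel + action * dt])   # point-mass "car"

def unsafe(state, wall=1.0):
    return state[0] >= wall                                  # collision with a wall

def penalized_return(state, action, horizon=10, penalty=-100.0):
    total, s = 0.0, np.array(state, dtype=float)
    for _ in range(horizon):
        s = step(s, action)
        if unsafe(s):
            return penalty        # unsafe trajectories are dominated by the penalty
        total += 1.0              # stand-in task reward for each safe step
    return total

state = (0.0, 1.5)                # at position 0, moving toward the wall at x = 1
for a in (1.0, 0.0, -2.0):        # accelerate, coast, brake
    print(f"action {a:+.1f}: imagined return {penalized_return(state, a):.1f}")
```

Only the braking action keeps every imagined state safe within the horizon, so an optimizer of this penalized return prefers it, which is the behaviour the reward construction is meant to guarantee.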
35th Conference on Neural Information Processing Systems (NeurIPS 2021). |
2002.08909.pdf | REALM: Retrieval-Augmented Language Model Pre-Training
Kelvin Guu*1 Kenton Lee*1 Zora Tung1 Panupong Pasupat1 Ming-Wei Chang1
Abstract
Language model pre-training has been shown to
capture a surprising amount of world knowledge,
crucial for NLP tasks such as question answer-
ing. However, this knowledge is stored implic-
itly in the parameters of a neural network, requir-
ing ever-larger networks to cover more facts. To
capture knowledge in a more modular and inter-
pretable way, we augment language model pre-
training with a latent knowledge retriever, which
allows the model to retrieve and attend over doc-
uments from a large corpus such as Wikipedia,
used during pre-training, fine-tuning and infer-
ence. For the first time, we show how to pre-
train such a knowledge retriever in an unsuper-
vised manner, using masked language model-
ing as the learning signal and backpropagating
through a retrieval step that considers millions
of documents. We demonstrate the effective-
ness of Retrieval-Augmented Language Model
pre-training (REALM) by fine-tuning on the chal-
lenging task of Open-domain Question Answer-
ing (Open-QA). We compare against state-of-the-
art models for both explicit and implicit knowl-
edge storage on three popular Open-QA bench-
marks, and find that we outperform all previous
methods by a significant margin (4-16% absolute
accuracy), while also providing qualitative bene-
fits such as interpretability and modularity.
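The mechanism the abstract describes can be summarized as marginalizing the prediction over retrieved documents, p(y|x) = Σ_z p(z|x) p(y|x,z), with the retrieval distribution p(z|x) given by a softmax over inner products of dense embeddings. The sketch below is a small numerical illustration of that marginalization; random vectors stand in for learned embeddings and the "reader" probabilities are made up, so this is not the REALM code.

```python
# Sketch of retrieval-augmented prediction: p(y|x) = sum_z p(z|x) p(y|x,z).
# Random embeddings and probabilities stand in for trained components.
import numpy as np

rng = np.random.default_rng(0)
num_docs, dim, num_answers, k = 1000, 32, 5, 8

doc_emb = rng.normal(size=(num_docs, dim))        # Embed_doc(z) for the corpus
query_emb = rng.normal(size=dim)                  # Embed_input(x)

scores = doc_emb @ query_emb                      # relevance score f(x, z)
topk = np.argsort(scores)[-k:]                    # retrieve the k best documents
p_z = np.exp(scores[topk] - scores[topk].max())
p_z /= p_z.sum()                                  # p(z | x) over retrieved docs

# Stand-in reader: p(y | x, z) for each retrieved document.
p_y_given_z = rng.dirichlet(np.ones(num_answers), size=k)

p_y = p_z @ p_y_given_z                           # marginalize over documents
print("p(y | x):", np.round(p_y, 3), "sums to", p_y.sum().round(3))
```

Because p(z|x) appears inside this sum, the language-modeling loss backpropagates into the retriever's embeddings, which is the "significant computational challenge" the caption below refers to.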
1. Introduction
Recent advances in language model pre-training have
shown that models such as BERT (Devlin et al., 2018),
RoBERTa (Liu et al., 2019) and T5 (Raffel et al., 2019)
store a surprising amount of world knowledge, ac-
quired from the massive text corpora they are trained
on (Petroni et al., 2019). For example, BERT is able to
correctly predict the missing word in the following sen-
tence: “The ____ is the currency of the United
Kingdom” (answer: “pound”).
*Equal contribution. 1Google Research. Correspondence
to: Kelvin Guu <kguu@google.com>, Kenton Lee
<kentonl@google.com>, Zora Tung <gatoatigrado@google.com>,
Panupong Pasupat <ppasupat@google.com>, Ming-Wei Chang
<mingweichang@google.com>.
Figure 1. REALM augments language model pre-training with
a neural knowledge retriever that retrieves knowledge from a
textual knowledge corpus, Z (e.g., all of Wikipedia). Signal
from the language modeling objective backpropagates all the way
through the retriever, which must consider millions of documents
in Z—a significant computational challenge that we address.
In these language models, the learned world knowledge is
stored implicitly in the parameters of the underlying neural
network. This makes it difficult to determine what knowl-
edge is stored in the network and where. Furthermore, stor-
age space is limited by the size of the network—to cap-
ture more world knowledge, one must train ever-larger net-
works, which can be prohibitively slow or expensive.
To capture knowledge in a more interpretable and modular
way, we propose a novel framework, Retrieval-Augmented
Language Model (REALM) pre-training, which augments
language model pre-training algorithms with a learned tex-
tual knowledge retriever . In contrast to models that store
knowledge in their parameters, this approach explicitly ex-
poses the role of world knowledge by asking the model to |
2109.10862.pdf | Recursively Summarizing Books with Human Feedback
Jeff Wu∗ Long Ouyang∗ Daniel M. Ziegler∗ Nisan Stiennon∗ Ryan Lowe∗
Jan Leike∗ Paul Christiano∗
OpenAI
Abstract
A major challenge for scaling machine learning is training models to perform
tasks that are very difficult or time-consuming for humans to evaluate. We present
progress on this problem on the task of abstractive summarization of entire fiction
novels. Our method combines learning from human feedback with recursive
task decomposition: we use models trained on smaller parts of the task to assist
humans in giving feedback on the broader task. We collect a large volume of
demonstrations and comparisons from human labelers, and fine-tune GPT-3 using
behavioral cloning and reward modeling to do summarization recursively. At
inference time, the model first summarizes small sections of the book and then
recursively summarizes these summaries to produce a summary of the entire book.
Our human labelers are able to supervise and evaluate the models quickly, despite
not having read the entire books themselves. Our resulting model generates sensible
summaries of entire books, even matching the quality of human-written summaries
in a few cases (∼5% of books). We achieve state-of-the-art results on the recent
BookSum dataset for book-length summarization. A zero-shot question-answering
model using these summaries achieves competitive results on the challenging
NarrativeQA benchmark for answering questions about books and movie scripts.
We release datasets of samples from our model.2
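The inference-time decomposition can be stated as a short recursion: split the text into chunks, summarize each chunk, concatenate the summaries, and repeat until the result fits in one pass. The schematic below illustrates only that control flow; summarize_passage is a placeholder for a call to the fine-tuned model (here it just truncates so the example runs end to end).

```python
# Schematic of recursive summarization; summarize_passage stands in for
# a model call (it truncates here so the example is self-contained).
def summarize_passage(text, target_chars=200):
    return text[:target_chars]

def chunk(text, max_chars=1000):
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize_book(text, max_chars=1000):
    if len(text) <= max_chars:
        return summarize_passage(text)
    summaries = [summarize_passage(piece) for piece in chunk(text, max_chars)]
    return summarize_book(" ".join(summaries), max_chars)   # recurse on the summaries

book = "Call me Ishmael. " * 2000          # stand-in for a full book
print(len(book), "->", len(summarize_book(book)), "characters")
```

The benefit for human feedback is that each summarize_passage call operates on an input short enough for a labeler to read and judge, even though no one reads the whole book.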
1 Introduction
To train an ML model on a new task, we need a training signal that tells the model which behaviors
are better and which are worse. For some tasks, like playing a video game, this training signal can
be calculated automatically. However, for many useful tasks an accurate training signal can only be
provided via a human in the loop. For example, humans can provide demonstrations of the correct
behavior (Bain and Sammut, 1995) or compare two outputs from the model being trained (Christiano
et al., 2017), and this data is used to train the model.
In this paper we focus on tasks that are difficult for humans to supervise or evaluate, either because
the tasks take a lot of time or because they require specialized knowledge and expertise to evaluate.
For example, imagine training a model to summarize an entire sub-field of scientific research. For
a human to provide a demonstration or evaluate the quality of a model-generated summary, they
would likely need a huge amount of time and expertise. One could circumvent this difficulty by using
easier-to-measure proxy objectives (e.g. how often words in the summary relate to the topic, and how
accurate individual sentences in the summary are), but these proxies are usually less aligned with
∗This was a joint project of the OpenAI Alignment team. JW and LO contributed equally. DMZ, NS, and
RL were full-time contributors for most of the duration. JL and PC managed the team. Corresponding author
jeffwu@openai.com.
2See https://openaipublic.blob.core.windows.net/recursive-book-summ/website/index.html |
2305.07185.pdf | MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers
Lili Yu*1 Dániel Simig*1 Colin Flaherty*2 Armen Aghajanyan1 Luke Zettlemoyer1 Mike Lewis1
Abstract
Autoregressive transformers are spectacular mod-
els for short sequences but scale poorly to long se-
quences such as high-resolution images, podcasts,
code, or books. We propose MEGABYTE, a multi-
scale decoder architecture that enables end-to-end
differentiable modeling of sequences of over one
million bytes. MEGABYTE segments sequences
into patches and uses a local submodel within
patches and a global model between patches. This
enables sub-quadratic self-attention, much larger
feedforward layers for the same compute, and im-
proved parallelism during decoding—unlocking
better performance at reduced cost for both train-
ing and generation. Extensive experiments show
that MEGABYTE allows byte-level models to per-
form competitively with subword models on long
context language modeling, achieve state-of-the-
art density estimation on ImageNet, and model
audio from raw files. Together, these results estab-
lish the viability of tokenization-free autoregres-
sive sequence modeling at scale.
1. Introduction
Sequences of millions of bytes are ubiquitous; for example,
music, image, or video files typically consist of multiple
megabytes. However, large transformer decoders (LLMs)
typically only use several thousand tokens of context (Brown
et al., 2020; Zhang et al., 2022a)—both because of the
quadratic cost of self-attention but also, more importantly,
the cost of large feedforward networks per-position. This
severely limits the set of tasks where LLMs can be applied.
We introduce MEGABYTE, a new approach to modeling
long byte sequences. First, byte sequences are segmented
into fixed-sized patches, loosely analogous to tokens. Our
model then consists of three parts: (1) a patch embedder ,
*Equal contribution
1Meta AI.
2Augment Computing. Work performed while at Meta AI.
Correspondence to: Lili Yu <liliyu@meta.com>, Mike Lewis
<mikelewis@meta.com>.
Figure 1. Overview of MEGABYTE with patch size P = 4. A
small local model autoregressively predicts each patch byte-by-
byte, using the output of a larger global model to condition on
previous patches. Global and Local inputs are padded by P and 1
token respectively to avoid leaking information about future tokens.
which simply encodes a patch by losslessly concatenating
embeddings of each byte, (2) a global module, a large au-
toregressive transformer that inputs and outputs patch rep-
resentations and (3) a local module, a small autoregressive
model that predicts bytes within a patch. Crucially, we
observe that for many tasks, most byte predictions are rela-
tively easy (for example, completing a word given the first
few characters), meaning that large networks per-byte are
unnecessary, and a much smaller model can be used for
intra-patch modelling.
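Read as pseudocode, the three parts compose as: embed the bytes, group them into patches, run a large model over patch representations, then run a small model over the bytes inside each patch conditioned on the corresponding global output. The sketch below shows only the shapes, with plain linear layers standing in for the global and local transformers and made-up dimensions; it also omits the padding by P and 1 positions that prevents leakage in the real model.

```python
# Shape-level sketch of the MEGABYTE decomposition with linear stand-ins
# for the global and local transformers. Dimensions are illustrative.
import torch
import torch.nn as nn

B, T, P, D = 2, 32, 4, 16                 # batch, sequence length, patch size, embed dim
num_patches = T // P

byte_embed = nn.Embedding(256, D)
global_model = nn.Linear(P * D, P * D)    # stand-in for a large patch-level transformer
local_model = nn.Linear(2 * D, 256)       # stand-in: (global context, byte embed) -> logits

bytes_in = torch.randint(0, 256, (B, T))
x = byte_embed(bytes_in)                              # (B, T, D)
patches = x.view(B, num_patches, P * D)               # concatenate byte embeddings per patch
global_out = global_model(patches).view(B, num_patches, P, D)

local_in = torch.cat([global_out, x.view(B, num_patches, P, D)], dim=-1)
logits = local_model(local_in)                        # (B, num_patches, P, 256)
print(logits.shape)
```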
The MEGABYTE architecture gives three major improve-
ments over Transformers for long sequence modelling:
1. Sub-quadratic self-attention. Most work on long se-
quence models has focused on mitigating the quadratic
cost of self-attention. MEGABYTE decomposes long
sequences into two shorter sequences, and optimal
patch sizes reduce the self-attention cost to O(N^{4/3}),
which remains tractable for even long sequences.
2. Per-patch feedforward layers. In GPT3-size mod- |
2211.15841.pdf | MEGABLOCKS: EFFICIENT SPARSE TRAINING WITH MIXTURE-OF-EXPERTS
Trevor Gale1 Deepak Narayanan2 Cliff Young3 Matei Zaharia1
ABSTRACT
We present MegaBlocks, a system for efficient Mixture-of-Experts (MoE) training on GPUs. Our system is
motivated by the limitations of current frameworks, which restrict the dynamic routing in MoE layers to satisfy
the constraints of existing software and hardware. These formulations force a tradeoff between model quality and
hardware efficiency, as users must choose between dropping tokens from the computation or wasting computation
and memory on padding. To address these limitations, we reformulate MoE computation in terms of block-sparse
operations and develop new block-sparse GPU kernels that efficiently handle the dynamism present in MoEs. Our
approach never drops tokens and maps efficiently to modern hardware, enabling end-to-end training speedups
of up to 40% over MoEs trained with the state-of-the-art Tutel library and 2.4× over DNNs trained with the
highly-optimized Megatron-LM framework.
1 INTRODUCTION
Exploiting sparsity in the weights, activations and input data
of deep neural networks (DNNs) is an effective technique
for reducing the amount of computation that is needed to
achieve a given model quality (Han et al., 2015; Gale et al.,
2019). The past decade has seen significant progress in
algorithms and high-performance software to make sparsity
practically useful (Gray et al., 2017; Narang et al., 2017;
Kalchbrenner et al., 2018; Elsen et al., 2020; Gale et al.,
2020). One area that remains a challenge for sparsity is
model training on accelerators. DNNs are most commonly
trained on hardware accelerators like GPUs (NVIDIA, 2020)
and TPUs (Jouppi et al., 2017), which exploit the regularity
of dense computation to deliver high performance. Con-
sequently, fine-grained sparse computation is less efficient
on these processors. To enable efficient computation on ac-
celerators, structure can be enforced on the sparse matrices
(Narang et al., 2017; Gray et al., 2017; Yao et al., 2019).
An emerging class of models with underlying structured
sparsity is Mixture-of-Experts (MoEs) (Shazeer et al., 2017).
Each layer in an MoE is a collection of experts , which are
themselves small DNNs. As data is passed through the MoE
layers, each token is dynamically routed to a subset of the
experts for computation. By exploiting this sparse computa-
tion, MoEs have reduced training times by as much as 4× for
applications in natural language processing and computer
vision (Artetxe et al., 2021; Riquelme et al., 2021). These
gains have translated to new levels of scale for model train-
ing, pushing model sizes past 1 trillion parameters (Artetxe
et al., 2021; Du et al., 2021; Fedus et al., 2022).
1Stanford University, Stanford, California, USA. 2Microsoft
Research, Redmond, Washington, USA. 3Google Research, Moun-
tain View, California, USA. Correspondence to: Trevor Gale
<tgale@cs.stanford.edu>.
The challenge in computing MoEs efficiently is handling
the dynamic routing and load-imbalanced computation that
are fundamental to these architectures. However, existing
hardware and software for deep learning make it difficult
to meet this challenge. For example, TPUs and their XLA
compiler require all tensor shapes to be known statically
and often struggle with fine-grained operations like scatters
and gathers (Fedus et al., 2022). These constraints make it
difficult to implement MoEs directly on TPUs. While GPUs
are more flexible, the sparse computation in MoEs does not
map cleanly to the software primitives supported in major
frameworks and libraries.
State-of-the-art frameworks for MoE training sidestep these
challenges by placing rigid constraints on MoE routing. In
order to remove the dynamism from the computation, the
set of tokens mapped to each expert are trimmed or padded
to a user-specified size (Lepikhin et al., 2020; Fedus et al.,
2022; Hwang et al., 2022). This procrustean formulation
introduces a tradeoff between model quality and hardware
efficiency, as users must decide whether to drop tokens or
waste computation and memory on padding. This decision is
often made through hyperparameter tuning, which increases
the complexity of using MoEs.
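The drop-or-pad tradeoff can be seen in a few lines: with a fixed "expert capacity", tokens routed to an expert beyond its capacity are dropped, and any shortfall is padded with zeros so every expert receives a tensor of the same static shape. The toy accounting below uses random routing and an invented capacity factor purely to illustrate the tradeoff that the block-sparse formulation avoids.

```python
# Toy illustration of capacity-based MoE routing: fixed-size expert buffers
# force dropping overflow tokens and padding underfull experts.
import numpy as np

rng = np.random.default_rng(0)
num_tokens, num_experts, capacity_factor = 64, 8, 1.0
capacity = int(capacity_factor * num_tokens / num_experts)   # tokens per expert buffer

assignments = rng.integers(0, num_experts, size=num_tokens)  # top-1 routing decisions

dropped = padded = 0
for e in range(num_experts):
    routed = int(np.sum(assignments == e))
    dropped += max(0, routed - capacity)       # overflow tokens are dropped
    padded += max(0, capacity - routed)        # unused slots are zero-padded

print(f"capacity per expert: {capacity}")
print(f"dropped tokens: {dropped} / {num_tokens}, padded slots: {padded}")
```

Raising the capacity factor trades dropped tokens for padded (wasted) computation, which is exactly the hyperparameter tuning burden described above.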
To address these challenges, we develop an approach for
MoE routing and computation based on sparse primitives .
Our approach never drops tokens and maps efficiently to
modern GPUs, enabling end-to-end training speedups of up
to 40% and 2.4× over state-of-the-art frameworks for MoE
and DNN training, respectively. We make the following |
2004.01255.pdf | Guided Variational Autoencoder for Disentanglement Learning
Zheng Ding∗,1,2, Yifan Xu∗,2, Weijian Xu2, Gaurav Parmar2, Yang Yang3, Max Welling3,4, Zhuowen Tu2
1Tsinghua University 2UC San Diego 3Qualcomm, Inc. 4University of Amsterdam
Abstract
We propose an algorithm, guided variational autoen-
coder (Guided-VAE), that is able to learn a controllable
generative model by performing latent representation disen-
tanglement learning. The learning objective is achieved by
providing signals to the latent encoding/embedding in VAE
without changing its main backbone architecture, hence re-
taining the desirable properties of the VAE. We design an
unsupervised strategy and a supervised strategy in Guided-
VAE and observe enhanced modeling and controlling ca-
pability over the vanilla VAE. In the unsupervised strategy,
we guide the VAE learning by introducing a lightweight de-
coder that learns latent geometric transformation and prin-
cipal components; in the supervised strategy, we use an ad-
versarial excitation and inhibition mechanism to encourage
the disentanglement of the latent variables. Guided-VAE
enjoys its transparency and simplicity for the general rep-
resentation learning task, as well as disentanglement learn-
ing. On a number of experiments for representation learn-
ing, improved synthesis/sampling, better disentanglement
for classification, and reduced classification errors in meta
learning have been observed.
1. Introduction
The resurgence of autoencoders (AE) [34, 6, 21] is an
important component in the rapid development of modern
deep learning [17]. Autoencoders have been widely adopted
for modeling signals and images [46, 50]. Its statistical
counterpart, the variational autoencoder (VAE) [29], has led
to a recent wave of development in generative modeling due
to its two-in-one capability, both representation and statis-
tical learning in a single framework. Another exploding di-
rection in generative modeling includes generative adver-
sarial networks (GAN) [18], but GANs focus on the gener-
ation process and are not aimed at representation learning
(without an encoder at least in its vanilla version).
Compared with classical dimensionality reduction meth-
ods like principal component analysis (PCA) [22, 27] and
Laplacian eigenmaps [4], VAEs have demonstrated their un-
precedented power in modeling high dimensional data of
real-world complexity. However, there is still large room for
VAEs to improve in achieving high-quality reconstruction/
synthesis. Additionally, it is desirable to make the VAE
representation learning more transparent, interpretable, and
controllable.
∗Authors contributed equally.
In this paper, we attempt to learn a transparent repre-
sentation by introducing guidance to the latent variables in
a VAE. We design two strategies for our Guided-VAE, an
unsupervised version (Fig. 1.a) and a supervised version
(Fig. 1.b). The main motivation behind Guided-VAE is to
encourage the latent representation to be semantically inter-
pretable, while maintaining the integrity of the basic VAE
architecture. Guided-VAE is learned in a multi-task learn-
ing fashion. The objective is achieved by taking advantage
of the modeling flexibility and the large solution space of
the VAE under a lightweight target. Thus the two tasks,
learning a good VAE and making the latent variables con-
trollable, become companions rather than conflicts.
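Concretely, the unsupervised strategy described next wires a second, lightweight decoder onto part of the latent code alongside the usual VAE decoder. The sketch below is only a shape-level illustration with toy MLPs and made-up dimensions; the paper's Dec_sub additionally learns a geometric deformation and a PCA-like linear subspace rather than a plain linear layer.

```python
# Shape-level sketch of a VAE with a main decoder plus a lightweight
# secondary decoder on a subset of the latent code. Toy dimensions only.
import torch
import torch.nn as nn

x_dim, z_dim, z_guided = 784, 16, 4

encoder = nn.Linear(x_dim, 2 * z_dim)        # outputs mean and log-variance
dec_main = nn.Linear(z_dim, x_dim)           # standard VAE decoder
dec_sub = nn.Linear(z_guided, x_dim)         # lightweight guided decoder

x = torch.rand(8, x_dim)
mu, logvar = encoder(x).chunk(2, dim=-1)
z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization

recon_main = dec_main(z)                     # reconstruction from the full code
recon_sub = dec_sub(z[:, :z_guided])         # reconstruction from the guided latents only

loss = ((recon_main - x) ** 2).mean() + ((recon_sub - x) ** 2).mean()
print(recon_main.shape, recon_sub.shape, float(loss))
```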
In unsupervised Guided-VAE, in addition to the stan-
dard VAE backbone, we also explicitly force the latent vari-
ables to go through a lightweight encoder that learns a de-
formable PCA. As seen in Fig. 1.a, two decoders exist, both
trying to reconstruct the input data x: The main decoder,
denoted as Dec_main, functions regularly as in the standard
VAE [29]; the secondary decoder, denoted as Dec_sub, ex-
plicitly learns a geometric deformation together with a lin-
ear subspace. In supervised Guided-VAE, we introduce a
subtask for the VAE by forcing one latent variable to be
discriminative (minimizing the classification error) while
making the rest of the latent variable to be adversarially
discriminative (maximizing the minimal classification er-
ror). This subtask is achieved using an adversarial excita-
tion and inhibition formulation. Similar to the unsupervised
Guided-VAE, the training process is carried out in an end-
to-end multi-task learning manner. The result is a regular
generative model that keeps the original VAE properties in-
tact, while having the specified latent variable semantically
meaningful and capable of controlling/synthesizing a spe-
cific attribute. We apply Guided-VAE to the data modeling
and few-shot learning problems and show favorable results |
1902.09229.pdf | A Theoretical Analysis of Contrastive Unsupervised Representation Learning
Sanjeev Arora1,2 Hrishikesh Khandeparkar1 Mikhail Khodak3 Orestis Plevrakis1 Nikunj Saunshi1
{arora, hrk, orestisp, nsaunshi}@cs.princeton.edu, khodak@cmu.edu
Abstract
Recent empirical works have successfully used
unlabeled data to learn feature representations
that are broadly useful in downstream classifica-
tion tasks. Several of these methods are remi-
niscent of the well-known word2vec embedding
algorithm: leveraging availability of pairs of se-
mantically “similar” data points and “negative
samples,” the learner forces the inner product of
representations of similar pairs with each other to
be higher on average than with negative samples.
The current paper uses the term contrastive learn-
ing for such algorithms and presents a theoretical
framework for analyzing them by introducing la-
tent classes and hypothesizing that semantically
similar points are sampled from the same latent
class. This framework allows us to show provable
guarantees on the performance of the learned rep-
resentations on the average classification task that
is comprised of a subset of the same set of latent
classes. Our generalization bound also shows that
learned representations can reduce (labeled) sam-
ple complexity on downstream tasks. We conduct
controlled experiments in both the text and image
domains to support the theory.
1. Introduction
This paper concerns unsupervised representation learning:
using unlabeled data to learn a representation function f
such that replacing data point x by feature vector f(x) in
new classification tasks reduces the requirement for labeled
data. This is distinct from semi-supervised learning, where
learning can leverage unlabeled as well as labeled data.
(Section 7 surveys other prior ideas and models).
For images, a proof of existence for broadly useful represen-
tations is the output of the penultimate layer (the one before
the softmax) of a powerful deep net trained on ImageNet.
1Princeton University, Princeton, New Jersey, USA. 2Institute for
Advanced Study, Princeton, New Jersey, USA. 3Carnegie Mellon
University, Pittsburgh, Pennsylvania, USA.
Copyright 2019 by the authors.
In natural language processing (NLP), low-dimensional rep-
resentations of text – called text embeddings – have been
computed with unlabeled data (Peters et al., 2018; Devlin
et al., 2018). Often the embedding function is trained by
using the embedding of a piece of text to predict the sur-
rounding text (Kiros et al., 2015; Logeswaran & Lee, 2018;
Pagliardini et al., 2018). Similar methods that leverage simi-
larity in nearby frames in a video clip have had some success
for images as well (Wang & Gupta, 2015).
Many of these algorithms are related: they assume access to
pairs or tuples (in the form of co-occurrences) of text/images
that are more semantically similar than randomly sampled
text/images, and their objective forces representations to
respect this similarity on average. For instance, in order to
learn a representation function f for sentences, a simplified
version of what Logeswaran & Lee (2018) minimize is the
following loss function
\mathbb{E}_{x,x^+,x^-}\left[-\log\left(\frac{e^{f(x)^\top f(x^+)}}{e^{f(x)^\top f(x^+)}+e^{f(x)^\top f(x^-)}}\right)\right]
where (x, x^+) are a similar pair and x^- is presumably dis-
similar to x (often chosen to be a random point) and typi-
cally referred to as a negative sample. Though reminiscent
of past ideas – e.g. kernel learning, metric learning, co-
training (Cortes et al., 2010; Bellet et al., 2013; Blum &
Mitchell, 1998) – these algorithms lack a theoretical frame-
work quantifying when and why they work. While it seems
intuitive that minimizing such loss functions should lead
to representations that capture ‘similarity,’ formally it is
unclear why the learned representations should do well on
downstream linear classification tasks – their somewhat
mysterious success is often treated as an obvious conse-
quence. To analyze this success, a framework must connect
‘similarity’ in unlabeled data with the semantic information
that is implicitly present in downstream tasks.
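For reference, the loss displayed above is straightforward to write down directly; the sketch below evaluates it on random representation vectors (our own toy data and dimension, not the paper's experiments).

```python
# Direct evaluation of the displayed contrastive loss on toy vectors:
# -log( exp(f(x)^T f(x+)) / (exp(f(x)^T f(x+)) + exp(f(x)^T f(x-))) ).
import numpy as np

rng = np.random.default_rng(0)
dim = 32

def loss(fx, fpos, fneg):
    s_pos, s_neg = fx @ fpos, fx @ fneg
    # log-sum-exp form for numerical stability
    return -s_pos + np.logaddexp(s_pos, s_neg)

fx = rng.normal(size=dim)
fpos = fx + 0.1 * rng.normal(size=dim)        # "semantically similar" point
fneg = rng.normal(size=dim)                   # random negative sample

print("similar pair loss:", loss(fx, fpos, fneg))
print("swapped (hard) case:", loss(fx, fneg, fpos))
```

In the framework proposed below, fpos would come from the same latent class as fx and fneg from a randomly drawn class, which is what connects this unlabeled objective to downstream classification.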
We propose the term Contrastive Learning for such methods
and provide a new conceptual framework with minimal
assumptions1. Our main contributions are the following:
1The alternative would be to make assumptions about genera-
tive models of data. This is difficult for images and text. |
2205.12914.pdf | New Intent Discovery with Pre-training and Contrastive Learning
Yuwei Zhang2∗ Haode Zhang1 Li-Ming Zhan1 Xiao-Ming Wu1†
Albert Y.S. Lam3
Department of Computing, The Hong Kong Polytechnic University, Hong Kong S.A.R.1
University of California, San Diego2
Fano Labs, Hong Kong S.A.R.3
zhangyuwei.work@gmail.com
{haode.zhang, lmzhan.zhan}@connect.polyu.edu.hk
csxmwu@comp.polyu.edu.hk, albert@fano.ai
Abstract
New intent discovery aims to uncover novel in-
tent categories from user utterances to expand
the set of supported intent classes. It is a criti-
cal task for the development and service expan-
sion of a practical dialogue system. Despite
its importance, this problem remains under-
explored in the literature. Existing approaches
typically rely on a large amount of labeled
utterances and employ pseudo-labeling meth-
ods for representation learning and clustering,
which are label-intensive, inefficient, and inac-
curate. In this paper, we provide new solutions
to two important research questions for new in-
tent discovery: (1) how to learn semantic ut-
terance representations and (2) how to better
cluster utterances. Particularly, we first pro-
pose a multi-task pre-training strategy to lever-
age rich unlabeled data along with external la-
beled data for representation learning. Then,
we design a new contrastive loss to exploit
self-supervisory signals in unlabeled data for
clustering. Extensive experiments on three in-
tent recognition benchmarks demonstrate the
high effectiveness of our proposed method,
which outperforms state-of-the-art methods by
a large margin in both unsupervised and semi-
supervised scenarios. The source code will
be available at https://github.com/zhang-yu-wei/MTP-CLNN.
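As a point of reference for the second research question, the common baseline pipeline is: encode each utterance with a sentence encoder, cluster the embeddings (e.g. with k-means), and treat cluster indices as candidate new intents. The sketch below shows that baseline only; TF-IDF features stand in for a pre-trained encoder so the example has no model dependency, and it is not the MTP-CLNN method proposed here.

```python
# Minimal new-intent-discovery baseline: embed utterances, then cluster.
# TF-IDF stands in for a pre-trained sentence encoder in this sketch.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

utterances = [
    "book a flight to paris",
    "i need a plane ticket to rome",
    "what's the weather tomorrow",
    "will it rain this weekend",
    "play some jazz music",
    "put on my workout playlist",
]

embeddings = TfidfVectorizer().fit_transform(utterances)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embeddings)

for text, label in zip(utterances, labels):
    print(label, text)
```

The cluster assignments can then be used directly as new intent labels or as a starting point for faster human annotation, as described below.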
1 Introduction
Why Study New Intent Discovery (NID)? Re-
cent years have witnessed the rapid growth of con-
versational AI applications. To design a natural
language understanding system, a set of expected
customer intentions are collected beforehand to
train an intent recognition model. However, the pre-
defined intents cannot fully meet customer needs.
This implies the necessity of expanding the intent
recognition model by repeatedly integrating new
intents discovered from unlabeled user utterances
∗Work done while the author was with HK PolyU.
†Corresponding author.
Figure 1: New Intent Discovery.
(Fig. 1). To reduce the effort in manually identi-
fying unknown intents from a mass of utterances,
previous works commonly employ clustering algo-
rithms to group utterances of similar intents (Che-
ung and Li, 2012; Hakkani-Tür et al., 2015; Padma-
sundari, 2018). The cluster assignments thereafter
can either be directly used as new intent labels or
as heuristics for faster annotations.
Research Questions (RQ) and Challenges.
Current study of NID centers around two basic
research questions: 1) How to learn semantic ut-
terance representations to provide proper cues for
clustering? 2) How to better cluster the utterances?
The study of the two questions is often interwoven
in existing research. Utterances can be represented
according to different aspects such as the style of
language, the related topics, or even the length
of sentences. It is important to learn semantic ut-
terance representations to provide proper cues for
clustering. Simply applying a vanilla pre-trained
language model (PLM) to generate utterance repre-
sentations is not a viable solution, which leads to
poor performance on NID as shown by the experi-
mental results in Section 4.2. Some recent works
proposed to use labeled utterances of known intents |
2311.14648.pdf | Calibrated Language Models Must Hallucinate
Adam Tauman Kalai
Microsoft ResearchSantosh S. Vempala
Georgia Tech
December 5, 2023
Abstract
Recent language models generate false but plausible-sounding text with surprising frequency.
Such “hallucinations” are an obstacle to the usability of language-based AI systems and can
harm people who rely upon their outputs. This work shows that there is an inherent
statistical lower-bound on the rate that pretrained language models hallucinate certain types of
facts, having nothing to do with the transformer LM architecture or data quality. For “arbitrary”
facts whose veracity cannot be determined from the training data, we show that hallucinations
must occur at a certain rate for language models that satisfy a statistical calibration condition
appropriate for generative language models. Specifically, if the maximum probability of any fact
is bounded, we show that the probability of generating a hallucination is close to the fraction of
facts that occur exactly once in the training data (a “Good-Turing” estimate), even assuming
ideal training data without errors.
One conclusion is that models pretrained to be sufficiently good predictors (i.e., calibrated)
may require post-training to mitigate hallucinations on the type of arbitrary facts that tend to
appear once in the training set. However, our analysis also suggests that there is no statistical
reason that pretraining will lead to hallucination on facts that tend to appear more than
once in the training data (like references to publications such as articles and books, whose
hallucinations have been particularly notable and problematic) or on systematic facts (like
arithmetic calculations). Therefore, different architectures and learning algorithms may mitigate
these latter types of hallucinations.
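The quantity the bound is built around is easy to compute from a corpus: the fraction of facts that appear exactly once, the Good-Turing style estimate of the probability mass on unseen facts. The sketch below computes it on synthetic Zipf-distributed "facts"; the data are toy and the numbers are not a claim about any real corpus.

```python
# Fraction of facts appearing exactly once in a sample: the Good-Turing
# style quantity that the paper's lower bound on hallucination rests on.
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)
facts = rng.zipf(a=2.0, size=100_000)           # toy "facts" with a long-tailed law

counts = Counter(facts)
num_singletons = sum(1 for c in counts.values() if c == 1)
singleton_rate = num_singletons / len(facts)    # n_1 / n

print(f"distinct facts: {len(counts)}, singletons: {num_singletons}")
print(f"Good-Turing estimate of unseen-fact mass: {singleton_rate:.3f}")
```

For arbitrary facts of this kind, the paper's result says a calibrated model's hallucination rate is close to this singleton fraction, even with error-free training data.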
1 Introduction
The surprisingly high rate at which Language Models (LMs) generate false information, such as
references to non-existent article titles, has recently emerged as a critical issue. The popular term
hallucination is defined in the Merriam-Webster (2023) dictionary as “a plausible but false or
misleading response generated by an artificial intelligence algorithm.” In one case, lawyers were
fined $5,000 for submitting legal research containing hallucinated legal cases that they believed were
correct (Shin, 2023). In healthcare, hallucinations could be life threatening to patients and physicians
are concerned about malpractice cases (Mello and Guha, 2023). Furthermore, hallucinations have
been widely reported on by the media (Weise and Metz, 2023), and the U.S. President recently put
out an Executive Order calling for, among other things, safeguards against misleading outputs from
generative AI systems (Biden, 2023). This paper presents statistical lower-bounds on the rate of
hallucination for LMs that are calibrated predictors of facts. This helps illuminate the nature of
hallucination. It should not be taken to mean that hallucination is inevitable. Rather, as we discuss,
it is consistent with the fact that practitioners have increasingly been augmenting “pretraining”
|
2004.12765.pdf | ColBERT: Using BERT Sentence Embedding in
Parallel Neural Networks for Computational Humor
Issa Annamoradnejad∗
i.moradnejad@gmail.com
Gohar Zoghi
zoghi.g@goums.ac.ir
Abstract
Automation of humor detection and rating has interesting use cases in modern
technologies, such as humanoid robots, chatbots, and virtual assistants. In this
paper, we propose a novel approach for detecting and rating humor in short texts
based on a popular linguistic theory of humor. The proposed technical method
initiates by separating sentences of the given text and utilizing the BERT model
to generate embeddings for each one. The embeddings are fed to separate lines
of hidden layers in a neural network (one line for each sentence) to extract latent
features. At last, the parallel lines are concatenated to determine the congruity
and other relationships between the sentences and predict the target value. We
accompany the paper with a novel dataset for humor detection consisting of 200,000
formal short texts. In addition to evaluating our work on the novel dataset, we
participated in a live machine learning competition focused on rating humor in
Spanish tweets. The proposed model obtained F1 scores of 0.982 and 0.869 in
the humor detection experiments which outperform general and state-of-the-art
models. The evaluation performed on two contrasting settings confirms the strength
and robustness of the model and suggests two important factors in achieving high
accuracy in the current task: 1) usage of sentence embeddings and 2) utilizing the
linguistic structure of humor in designing the proposed model.
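Stripped to shapes, the architecture described above is: one embedding per sentence, a separate small hidden "line" per sentence, then concatenation and a final classifier. The sketch below uses random vectors in place of BERT sentence embeddings and toy layer sizes; it illustrates the parallel-branch structure, not the released model.

```python
# Shape-level sketch of parallel per-sentence branches followed by a
# joint classifier. Random vectors stand in for BERT sentence embeddings.
import torch
import torch.nn as nn

num_sentences, emb_dim, hidden = 4, 768, 32

branches = nn.ModuleList(
    [nn.Sequential(nn.Linear(emb_dim, hidden), nn.ReLU()) for _ in range(num_sentences)]
)
head = nn.Sequential(nn.Linear(num_sentences * hidden, 16), nn.ReLU(), nn.Linear(16, 1))

sentence_embs = torch.randn(8, num_sentences, emb_dim)       # batch of 8 short texts
features = [branch(sentence_embs[:, i]) for i, branch in enumerate(branches)]
logit = head(torch.cat(features, dim=-1))                    # humor score / detection logit
print(logit.shape)   # torch.Size([8, 1])
```

Processing sentences in separate branches before the concatenation is what lets the final layers model congruity between sentences, which the paper ties to the linguistic structure of humor.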
1 Introduction
In Interstellar (2014 movie), a future earth is depicted where robots easily understand and use humor
in their connections with their owners and humans can set the level of humor in their personal robots2.
While we may have a long road toward the astral travels, we are very close in reaching high-quality
systems injected with adjustable humor.
Humor, as a potential cause of laughter, is an important part of human communication, which not
only makes people feel comfortable but also creates a cozier environment [ 1]. Automatic humor
detection in texts has interesting use cases in building human-centered artificial intelligence systems
such as humanoid robots, chatbots, and virtual assistants. An appealing use case is to identify whether
an input command should be taken seriously or not, which is a critical step to understanding the real
motives of users, returning appropriate answers, and enhancing the overall experience of users with
the AI system. A more advanced outcome would be the injection of humor into computer-generated
responses, thus making the human-computer interaction more engaging and interesting [ 2]. This is
an outcome that is achievable by setting the level of humor in possible answers to the desired level,
similar to the mentioned movie.
Humor can be attained through several linguistic or semantic mechanisms, such as wordplay, exag-
geration, misunderstanding, and stereotype. Researchers proposed several theories to explain humor
∗Corresponding author: Annamoradnejad is with the Department of Computer Engineering, Sharif University
of Technology, Tehran, Iran
2Tarzs, in the movie.
Preprint. Under review. |
2307.09458.pdf | 2023-07-18
Does Circuit Analysis Interpretability Scale?
Evidence from Multiple Choice Capabilities in
Chinchilla
Tom Lieberum1, Matthew Rahtz1, János Kramár1, Neel Nanda1, Geoffrey Irving1, Rohin Shah1 and Vladimir
Mikulik1
1Google DeepMind
Circuit analysis is a promising technique for understanding the internal mechanisms of language models.
However, existing analyses are done in small models far from the state of the art. To address this, we
present a case study of circuit analysis in the 70B Chinchilla model, aiming to test the scalability of
circuit analysis. In particular, we study multiple-choice question answering, and investigate Chinchilla’s
capability to identify the correct answer label given knowledge of the correct answer text. We find
that the existing techniques of logit attribution, attention pattern visualization, and activation patching
naturally scale to Chinchilla, allowing us to identify and categorize a small set of ‘output nodes’ (attention
heads and MLPs).
We further study the ‘correct letter’ category of attention heads aiming to understand the semantics
of their features, with mixed results. For normal multiple-choice question answers, we significantly
compress the query, key and value subspaces of the head without loss of performance when operating on
the answer labels for multiple-choice questions, and we show that the query and key subspaces represent
an ‘Nth item in an enumeration’ feature to at least some extent. However, when we attempt to use this
explanation to understand the heads’ behaviour on a more general distribution including randomized
answer labels, we find that it is only a partial explanation, suggesting there is more to learn about the
operation of ‘correct letter’ heads on multiple choice question answering.
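Of the three techniques named above, activation patching is the least familiar outside interpretability work: run the model on a "corrupted" prompt, overwrite one component's activation with its value from a "clean" prompt, and measure how much of the clean behaviour is restored. The sketch below demonstrates only the mechanics with forward hooks on a toy two-layer network; it is not Chinchilla, and the "behaviour" here is just a scalar output.

```python
# Mechanics of activation patching with forward hooks on a toy model:
# cache an activation from a clean run, splice it into a corrupted run.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 1))
layer_to_patch = model[0]
cache = {}

def cache_hook(module, inputs, output):
    cache["act"] = output.detach()           # remember the clean activation

def patch_hook(module, inputs, output):
    return cache["act"]                      # overwrite with the cached clean activation

handle = layer_to_patch.register_forward_hook(cache_hook)
clean_out = model(torch.tensor([[1.0, 0.0, 0.0, 0.0]]))
handle.remove()

handle = layer_to_patch.register_forward_hook(patch_hook)
patched_out = model(torch.tensor([[0.0, 1.0, 0.0, 0.0]]))    # corrupted input, clean layer 0
handle.remove()

corrupted_out = model(torch.tensor([[0.0, 1.0, 0.0, 0.0]]))
print(f"clean {clean_out.item():+.4f}  corrupted {corrupted_out.item():+.4f}  "
      f"patched {patched_out.item():+.4f}")
```

If patching a component recovers the clean output (here trivially, since layer 0 determines everything downstream), that component carries the information responsible for the behaviour; in the paper this logic is applied head-by-head on multiple-choice prompts.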
1. Introduction
Current methods for training and evaluation in large language models currently focus on the behaviour
of the model (Bai et al., 2022; Glaese et al., 2022; Ouyang et al., 2022; Perez et al., 2022; Saunders
et al., 2022; Ziegler et al., 2019). Mechanistic interpretability aims to generate detailed knowledge of
a model’s internal reasoning, and thus could significantly improve upon these methods. For example,
such knowledge would strengthen methods that aim to oversee models’ reasoning, as in debate (Irving
et al., 2018) and process-based feedback (Lightman et al., 2023; Uesato et al., 2022). Furthermore, the
ability to examine models’ full reasoning processes could help us detect deceptive alignment (Hubinger
et al., 2019; Kenton et al., 2021), a key source of extreme risk (OpenAI, 2023; Shevlane et al., 2023)
in which a model behaves well to deliberately conceal its undesirable intentions.
We focus on circuit analysis : the identification and study of particular internal mechanisms that drive
a specific subset of models’ behaviour. Existing circuit analysis on language models has a variety of
weaknesses, but in this work we focus on two in particular. First, the models studied are relatively
small: for example, the seminal work on transformer circuits focused on two-layer attention-only
transformers (Elhage et al., 2021) and research on the circuits used in grammatical identification
of indirect objects was done on the 117M variant of GPT-2 (Wang et al., 2022). Second, prior work
identifies which components of a model are relevant and how information flows between them, but
usually does not focus as much on what information is flowing, such that we could predict the circuit’s
behaviour on an expanded data distribution.
Corresponding author(s): tlieberum@deepmind.com
©2023 DeepMind. All rights reserved. |
1902.06495.pdf | Learning Compositional Representations of Interacting Systems
with Restricted Boltzmann Machines: Comparative Study of Lattice Proteins
Jérôme Tubiana, Simona Cocco, Rémi Monasson
Laboratory of Physics of the Ecole Normale Supérieure,
CNRS & PSL Research, 24 rue Lhomond, 75005 Paris, France
A Restricted Boltzmann Machine (RBM) is an unsupervised machine-learning bipartite graphical
model that jointly learns a probability distribution over data and extracts their relevant statistical
features. As such, RBM were recently proposed for characterizing the patterns of coevolution be-
tween amino acids in protein sequences and for designing new sequences. Here, we study how the
nature of the features learned by RBM changes with its defining parameters, such as the dimension-
ality of the representations (size of the hidden layer) and the sparsity of the features. We show that
for adequate values of these parameters, RBM operate in a so-called compositional phase in which
visible configurations sampled from the RBM are obtained by recombining these features. We then
compare the performance of RBM with other standard representation learning algorithms, includ-
ing Principal or Independent Component Analysis, autoencoders (AE), variational auto-encoders
(VAE), and their sparse variants. We show that RBM, due to the stochastic mapping between data
configurations and representations, better capture the underlying interactions in the system and are
significantly more robust with respect to sample size than deterministic methods such as PCA or
ICA. In addition, this stochastic mapping is not prescribed a priori as in VAE, but learned from
data, which allows RBM to show good performance even with shallow architectures. All numerical
results are illustrated on synthetic lattice-protein data, that share similar statistical features with
real protein sequences, and for which ground-truth interactions are known.
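For readers unfamiliar with the model class, a minimal Bernoulli-Bernoulli RBM trained with one step of contrastive divergence (CD-1) is sketched below; the initialization, learning rate and plain-SGD update are illustrative and do not reproduce the training protocol compared in this study.

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

class BernoulliRBM:
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)                     # visible biases
        self.b_h = np.zeros(n_hidden)                      # hidden biases
        self.lr = lr

    def hidden_probs(self, v):                             # stochastic mapping: data -> representation
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_update(self, v0):                              # one CD-1 step on a binary minibatch v0
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)   # sample hidden units
        pv1 = self.visible_probs(h0)                       # one-step reconstruction
        ph1 = self.hidden_probs(pv1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)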
INTRODUCTION
Many complex, interacting systems have collective be-
haviors that cannot be understood based on a top-down
approach only. This is either because the underlying
microscopic interactions between the constituents of the
system are unknown - as in biological neural networks,
where the set of synaptic connections are unique to each
network - or because the complete description is so com-
plicated that analytical or numerical resolution is in-
tractable - as for proteins, for which physical interactions
between amino acids can in principle be characterized,
but accurate simulations of protein structures or func-
tions are computationally prohibitive. In the last two
decades, the increasing availability of large amounts of
data collected by high-throughput experiments such as
large scale functional recordings in neuroscience (EEG,
Fluorescence imaging,...) [1, 2], fast sequencing technolo-
gies [3, 4] (Single RNA seq) or Deep Mutational Scans [5]
has shed new light on these systems.
Given such high-dimensional data, one fundamental
task is to establish a descriptive phenomenology of the
system. For instance, given a recording of spontaneous
neural activity in a brain region or in the whole brain (e.g.
in larval zebrafish), we would like to identify stereotypes
of neural activity patterns (e.g. activity bursts, synfire
chains, cell-assembly activations, ...) describing the dy-
namics of the system. This representation is in turn use-
ful to link the behaviour of the animal to its neural state
and to understand the network architecture. Similarly,
given a Multiple Sequence Alignment (MSA) of protein
sequences, i.e. a collection of protein sequences from var-
ious genes and organisms that share common evolution-
ary ancestry, we would like to identify amino acid motifs controlling the protein functionalities and structural fea-
tures, and identify, in turn, subfamilies of proteins with
common functions. One important set of tools for this
purpose are unsupervised representation-learning algo-
rithms. For instance, Principal Component Analysis can
be used for dimensionality reduction, i.e. for projecting
system configurations into a low-dimensional representa-
tion, where similarities between states are better high-
lighted and the system evolution is tractable. Another
important example is clustering, which partitions the ob-
served data into different ’prototypes’. Though these two
approaches are very popular, they are not always appro-
priate: some data are intrinsically multidimensional, and
cannot be reduced to a low-dimensional or categorical
representation. Indeed, configurations can mix multiple,
weakly related features, such that using a single global
distance metric would be too reductive. For instance,
neural activity states are characterized by the clusters
of neurons that are activated, which are themselves re-
lated to a variety of distinct sensory, motor or cognitive
tasks. Similarly, proteins have a variety of biochemical
properties such as binding affinity and specificity, ther-
modynamic stability, or allostery, which are controlled
by distinct amino acid motifs within their sequences. In
such situations, other approaches such as Independent
Component Analysis or Sparse Dictionaries, which aim
at representing the data by a (larger) set of independent
latent factors appear to be more appropriate [6, 7].
A second goal is to infer the set of interactions under-
lying the system’s collective behaviour. In the case of
neural recordings, we would look for functional connec-
tivity that reflect the structure of the relevant synaptic
connections in a given brain state. In the case of proteins,
we would like to know what interactions between amino arXiv:1902.06495v1 [cs.LG] 18 Feb 2019 |
2311.01906.pdf | SIMPLIFYING TRANSFORMER BLOCKS
Bobby He & Thomas Hofmann∗
Department of Computer Science, ETH Zurich
ABSTRACT
A simple design recipe for deep Transformers is to compose identical building
blocks. But standard transformer blocks are far from simple, interweaving atten-
tion and MLP sub-blocks with skip connections & normalisation layers in precise
arrangements. This complexity leads to brittle architectures, where seemingly mi-
nor changes can significantly reduce training speed, or render models untrainable.
In this work, we ask to what extent the standard transformer block can be simpli-
fied? Combining signal propagation theory and empirical observations, we moti-
vate modifications that allow many block components to be removed with no loss
of training speed, including skip connections, projection or value parameters, se-
quential sub-blocks and normalisation layers. In experiments on both autoregres-
sive decoder-only and BERT encoder-only models, our simplified transformers
emulate the per-update training speed and performance of standard transformers,
while enjoying 15% faster training throughput, and using 15% fewer parameters.
1 I NTRODUCTION
The transformer architecture (Vaswani et al., 2017) is arguably the workhorse behind many recent
successes in deep learning. A simple way to construct a deep transformer architecture is by stacking
multiple identical transformer “blocks” one after another in sequence. Each block, however, is more
complicated and consists of many different components, which need to be combined in specific
arrangements in order to achieve good performance. Surprisingly, the base transformer block has
changed very little since its inception, despite attracting the interest of many researchers.
In this work, we study whether the standard transformer block can be simplified. More specifically,
we probe the necessity of several block components, including skip connections, projection/value
matrices, sequential sub-blocks and normalisation layers. For each considered component, we ask
if it can be removed without loss of training speed (both in terms of per-update step & runtime), and
what architectural modifications need to be made to the transformer block in order to do so.
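For concreteness, a schematic pre-LN transformer block (PyTorch-style) with switches for two of the components examined, skip connections and normalisation layers, is given below; removing value and projection matrices requires modifying the attention internals and is not shown, and the substitutions that actually preserve training speed are developed in the body of the paper, so this sketch only fixes the vocabulary.

import torch.nn as nn

class Block(nn.Module):
    def __init__(self, d, n_heads, use_skips=True, use_norms=True):
        super().__init__()
        self.use_skips, self.use_norms = use_skips, use_norms
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, x):
        h = self.ln1(x) if self.use_norms else x
        a, _ = self.attn(h, h, h, need_weights=False)      # attention sub-block
        x = x + a if self.use_skips else a                 # optional skip connection
        h = self.ln2(x) if self.use_norms else x
        m = self.mlp(h)                                    # MLP sub-block
        return x + m if self.use_skips else m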
We believe the problem of simplifying transformer blocks without compromising training speed is
an interesting research question for several reasons. First, modern neural network (NN) architectures
have complex designs with many components, and it is not clear the roles played by these different
components in NN training dynamics, nor how they interact with each other. This is particularly
pertinent given the existing gap between theory and practice in deep learning, where theorists work-
ing to understand the mechanisms of deep learning often only consider simplified architectures due
to convenience, not necessarily reflective of modern architectures used in practice. Simplifying the
NN architectures used in practice can help towards bridging this divide.
On a related theoretical note, our work highlights both strengths and current limitations of signal
propagation: a theory that has proven influential due to its ability to motivate practical design choices
in deep NN architectures. Signal propagation (Poole et al., 2016; Schoenholz et al., 2017; Hayou
et al., 2019) studies the evolution of geometric information in an NN at initialisation, captured
through inner products of layerwise representations across inputs, and has inspired many impressive
results in training deep NNs (Xiao et al., 2018; Brock et al., 2021; Martens et al., 2021; Zaidi et al.,
2023). However, the current theory only considers a model at initialisation, and often considers
only the initial forward pass. As such, signal propagation at present is unable to shed light on many
intricacies of deep NN training dynamics, for example the benefits of skip connections for training
speed. Though signal propagation is crucial in motivating our modifications, we would not have
arrived at our simplified transformer blocks from theory alone, and relied also on empirical insights.
∗Correspondence to: bobby.he@inf.ethz.ch .
arXiv:2311.01906v1 [cs.LG] 3 Nov 2023 |
2305.13245.pdf | GQA: Training Generalized Multi-Query Transformer Models from
Multi-Head Checkpoints
Joshua Ainslie∗, James Lee-Thorp∗, Michiel de Jong∗††
Yury Zemlyanskiy ,Federico Lebrón ,Sumit Sanghai
Google Research
Abstract
Multi-query attention (MQA), which only
uses a single key-value head, drastically
speeds up decoder inference. However, MQA
can lead to quality degradation, and moreover
it may not be desirable to train a separate
model just for faster inference. We (1) propose
a recipe for uptraining existing multi-head lan-
guage model checkpoints into models with
MQA using 5% of original pre-training com-
pute, and (2) introduce grouped-query atten-
tion (GQA), a generalization of multi-query at-
tention which uses an intermediate (more than
one, less than number of query heads) number
of key-value heads. We show that uptrained
GQA achieves quality close to multi-head at-
tention with comparable speed to MQA.
1 Introduction
Autoregressive decoder inference is a severe bottle-
neck for Transformer models due to the memory
bandwidth overhead from loading decoder weights
and all attention keys and values at every decod-
ing step (Shazeer, 2019; Pope et al., 2022; de Jong
et al., 2022). The memory bandwidth from loading
keys and values can be sharply reduced through
multi-query attention (Shazeer, 2019), which uses
multiple query heads but single key and value
heads.
However, multi-query attention ( MQA ) can lead
to quality degradation and training instability, and
it may not be feasible to train separate models
optimized for quality and inference. Moreover,
while some language models already use multi-
query attention, such as PaLM (Chowdhery et al.,
2022), many do not, including publicly available
language models such as T5 (Raffel et al., 2020)
and LLaMA (Touvron et al., 2023).
This work contains two contributions for faster
inference with large language models. First, we
∗Equal contribution.
†University of Southern California. Work done at Google
Research.
show that language model checkpoints with multi-
head attention ( MHA ) can be uptrained (Komat-
suzaki et al., 2022) to use MQA with a small frac-
tion of original training compute. This presents a
cost-effective method to obtain fast multi-query as
well as high-quality MHA checkpoints.
Second, we propose grouped-query attention
(GQA ), an interpolation between multi-head and
multi-query attention with single key and value
heads per subgroup of query heads . We show that
uptrained GQA achieves quality close to multi-
head attention while being almost as fast as multi-
query attention.
2 Method
2.1 Uptraining
Generating a multi-query model from a multi-head
model takes place in two steps: first, converting the
checkpoint, and second, additional pre-training to
allow the model to adapt to its new structure. Fig-
ure 1 shows the process for converting a multi-head
checkpoint into a multi-query checkpoint. The pro-
jection matrices for key and value heads are mean
pooled into single projection matrices, which we
find works better than selecting a single key and
value head or randomly initializing new key and
value heads from scratch.
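A minimal sketch of this conversion step, mean pooling per-head key or value projection matrices into a smaller number of heads (a single group recovers multi-query attention), follows; the [n_heads, d_model, d_head] tensor layout is an assumption and may differ from the actual checkpoint format.

import numpy as np

def group_kv_heads(w_kv: np.ndarray, n_groups: int) -> np.ndarray:
    # w_kv: [n_heads, d_model, d_head] stack of per-head K or V projection matrices.
    # Returns [n_groups, d_model, d_head]; n_groups == 1 gives multi-query attention.
    n_heads = w_kv.shape[0]
    assert n_heads % n_groups == 0
    grouped = w_kv.reshape(n_groups, n_heads // n_groups, *w_kv.shape[1:])
    return grouped.mean(axis=1)                 # mean pool each subgroup of heads

After conversion, the checkpoint is uptrained for a small fraction of the original pre-training compute so the model can adapt to the pooled heads.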
Figure 1: Overview of conversion from multi-head to
multi-query attention. Key and value projection matri-
ces from all heads are mean pooled into a single head.
The converted checkpoint is then pre-trained for arXiv:2305.13245v1 [cs.CL] 22 May 2023 |
1909.08053.pdf | Megatron-LM: Training Multi-Billion Parameter Language Models Using
Model Parallelism
Mohammad Shoeybi1 2Mostofa Patwary1 2Raul Puri1 2Patrick LeGresley2Jared Casper2
Bryan Catanzaro2
Abstract
Recent work in language modeling demonstrates
that training large transformer models advances
the state of the art in Natural Language Processing
applications. However, very large models can be
quite difficult to train due to memory constraints.
In this work, we present our techniques for train-
ing very large transformer models and implement
a simple, efficient intra-layer model parallel ap-
proach that enables training transformer models
with billions of parameters. Our approach does
not require a new compiler or library changes, is
orthogonal and complimentary to pipeline model
parallelism, and can be fully implemented with
the insertion of a few communication operations
in native PyTorch. We illustrate this approach
by converging transformer based models up to
8.3 billion parameters using 512 GPUs. We sus-
tain 15.1 PetaFLOPs across the entire applica-
tion with 76% scaling efficiency when compared
to a strong single GPU baseline that sustains 39
TeraFLOPs, which is 30% of peak FLOPs. To
demonstrate that large language models can fur-
ther advance the state of the art (SOTA), we train
an 8.3 billion parameter transformer language
model similar to GPT-2 and a 3.9 billion parame-
ter model similar to BERT. We show that careful
attention to the placement of layer normalization
in BERT-like models is critical to achieving in-
creased performance as the model size grows. Us-
ing the GPT-2 model we achieve SOTA results
on the WikiText103 (10.8 compared to SOTA per-
plexity of 15.8) and LAMBADA (66.5% com-
pared to SOTA accuracy of 63.2%) datasets. Our
BERT model achieves SOTA results on the RACE
dataset (90.9% compared to SOTA accuracy of
89.4%).
1Equal contribution. 2NVIDIA. Correspondence to: Mohammad
Shoeybi <mshoeybi@nvidia.com>.
1. Introduction
Natural Language Processing (NLP) is advancing quickly in
part due to an increase in available compute and dataset size.
The abundance of compute and data enables training increas-
ingly larger language models via unsupervised pretraining
(Devlin et al., 2018; Radford et al., 2019). Empirical evi-
dence indicates that larger language models are dramatically
more useful for NLP tasks such as article completion, ques-
tion answering, and natural language inference (Lan et al.,
2019; Raffel et al., 2019). By finetuning these pretrained
language models on downstream natural language tasks,
one can achieve state of the art results as shown in recent
work (Devlin et al., 2018; Peters et al., 2018; Howard &
Ruder, 2018; Radford et al., 2018; 2017; Ramachandran
et al., 2016; Liu et al., 2019b; Dai et al., 2019; Yang et al.,
2019; Liu et al., 2019a; Lan et al., 2019).
As these models become larger, they exceed the memory
limit of modern processors, and require additional memory
management techniques such as activation checkpointing
(Chen et al., 2016). Widely used optimization algorithms
such as ADAM require additional memory per parameter to
store momentum and other optimizer state, which reduces
the size of models that can be effectively trained. Several
approaches to model parallelism overcome this limit by
partitioning the model such that the weights and their asso-
ciated optimizer state do not need to reside concurrently on
the processor. For example, GPipe (Huang et al., 2018) and
Mesh-Tensorflow (Shazeer et al., 2018) provide frameworks
for model parallelism of different kinds. However, they
require rewriting the model, and rely on custom compilers
and frameworks that are still under development.
In this work, we implement a simple and efficient model
parallel approach using intra-layer model-parallelism. We
exploit the inherent structure in transformer based language
models to make a simple model-parallel implementation that
trains efficiently in PyTorch, with no custom C++ code or
compiler required. This approach is orthogonal to pipeline-
based model parallelism as advocated by approaches such
as GPipe (Huang et al., 2018).
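To make the "few communication operations" concrete, a simplified sketch of an intra-layer (tensor) parallel MLP in the spirit of this approach is shown below: the first GEMM is sharded along its output dimension, the second along its input dimension, and a single all-reduce restores the full activation in the forward pass. Weight initialization, the attention and vocabulary partitioning, and backward-pass communication are omitted, and the sketch assumes torch.distributed has already been initialized.

import torch
import torch.distributed as dist

def parallel_mlp(x, w1_shard, w2_shard):
    # x: [batch, d_model]; each rank holds w1_shard: [d_model, d_ff // world_size]
    # and w2_shard: [d_ff // world_size, d_model].
    h = torch.nn.functional.gelu(x @ w1_shard)    # no communication: GELU is applied shard-locally
    y = h @ w2_shard                              # partial sums over the sharded d_ff dimension
    dist.all_reduce(y, op=dist.ReduceOp.SUM)      # one all-reduce completes the forward pass
    return y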
To demonstrate the scalability of our approach, we establish arXiv:1909.08053v4 [cs.CL] 13 Mar 2020 |
2310.04564.pdf | ReLU Strikes Back:
Exploiting Activation Sparsity in Large Language Models
Iman Mirzadeh†Keivan Alizadeh Sachin Mehta Carlo C Del Mundo
Oncel Tuzel Golnoosh Samei Mohammad Rastegari Mehrdad Farajtabar†
Apple
ABSTRACT
Large Language Models (LLMs) with billions of parameters have drastically transformed AI appli-
cations. However, their demanding computation during inference has raised significant challenges
for deployment on resource-constrained devices. Despite recent trends favoring alternative acti-
vation functions such as GELU or SiLU, known for increased computation, this study strongly
advocates for reinstating ReLU activation in LLMs. We demonstrate that using the ReLU activa-
tion function has a negligible impact on convergence and performance while significantly reducing
computation and weight transfer. This reduction is particularly valuable during the memory-bound
inference step, where efficiency is paramount. Exploring sparsity patterns in ReLU-based LLMs, we
unveil the reutilization of activated neurons for generating new tokens and leveraging these insights,
we propose practical strategies to substantially reduce LLM inference computation up to three times,
using ReLU activations with minimal performance trade-offs.
1 Introduction
The widespread excitement surrounding Large Language Models (LLMs) has sparked significant interest in leveraging
AI across diverse domains [5, 9, 6]. However, realizing the potential of LLMs is challenged by their significant
computational and memory requirements during inference [60, 40, 3]. To enhance the inference efficiency1, various
techniques have been explored, including quantization [12, 50], speculative decoding [41], pruning [53, 71], and
weight sparsification [20, 15]. Among these techniques, achieving activation sparsity offers a compelling advantage
by providing a favorable balance between accuracy and speedup, especially on modern hardware like GPUs [51].
Notably, employing the Rectified Linear Unit (ReLU) activation function [22] in neural networks is recognized for
inducing sparse activations and has been adopted in various prior works [27, 44, 48, 69]. To reaffirm this property,
we employ the OPT model [80], utilizing ReLU, and measure the sparsity of activations in the Feed Forward Network
(FFN) between the fully connected layers. As illustrated in Fig. 1a, all layers exhibit sparsity exceeding 90%. On
average, across all layers, this activation sparsity results in substantial weight transfer (I/O) savings between the GPU
and CPU, impacting 95% of the rows of the down projection layer’s weights (Fig. 1b). This reduction directly translates
to computation savings, as for these rows, the result of the matrix multiplication operation will be zero. Furthermore,
unlike unstructured sparsity (e.g., weight pruning), this type of sparsity is more hardware-friendly due to zeroing
more extensive and structured chunks, such as rows or columns [36, 51]. For OPT models, this sparsity reduces the
computation required for inference from 6.6G FLOPS (Floating Point Operations Per Second) to 4.5G FLOPS per
token, resulting in a 32% computation saving (Fig. 1c).
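A schematic of where this saving comes from: after the ReLU, rows of the down-projection weight that correspond to zero activations contribute nothing, so they need not be loaded or multiplied. The sparse and dense products below are mathematically identical; the realized savings depend on the kernel and memory layout.

import numpy as np

def sparse_down_projection(h_relu: np.ndarray, w_down: np.ndarray) -> np.ndarray:
    # h_relu: [d_ff] post-ReLU activations for one token; w_down: [d_ff, d_model].
    # Equivalent to the dense product h_relu @ w_down, but skips zeroed rows entirely.
    active = np.flatnonzero(h_relu)          # indices of non-zero activations
    return h_relu[active] @ w_down[active]   # only the active rows are read and multiplied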
†Corresponding authors: {imirzadeh,farajtabar }@apple.com
1In this work, we use FLOPS as a proxy for inference efficiency. In Appendix B, we demonstrate that for LLMs with activation
sparsity, FLOPS can serve as a good approximation of real-world efficiency due to the structure inherent in activation sparsity (e.g.,
skipping the entire row corresponding to zero activations). arXiv:2310.04564v1 [cs.LG] 6 Oct 2023 |
2401.14953.pdf | 2024-1-29
Learning Universal Predictors
Jordi Grau-Moya*,1, Tim Genewein*,1, Marcus Hutter*,1, Laurent Orseau*,1, Grégoire Déletang1, Elliot Catt1,
Anian Ruoss1, Li Kevin Wenliang1, Christopher Mattern1, Matthew Aitchison1and Joel Veness1
*Equal contributions. 1Google DeepMind, London, United Kingdom
Meta-learning has emerged as a powerful approach to train neural networks to learn new tasks quickly
from limited data. Broad exposure to different tasks leads to versatile representations enabling general
problem solving. But, what are the limits of meta-learning? In this work, we explore the potential
of amortizing the most powerful universal predictor, namely Solomonoff Induction (SI), into neural
networks via leveraging meta-learning to its limits. We use Universal Turing Machines (UTMs) to
generate training data used to expose networks to a broad range of patterns. We provide theoretical
analysis of the UTM data generation processes and meta-training protocols. We conduct comprehensive
experiments with neural architectures (e.g. LSTMs, Transformers) and algorithmic data generators
of varying complexity and universality. Our results suggest that UTM data is a valuable resource for
meta-learning, and that it can be used to train neural networks capable of learning universal prediction
strategies.
Keywords: Kolmogorov-complexity, universal prediction, in-context learning
Figure 1 | Summary of our meta-learning methodology.
Meta-learning has emerged as a powerful ap-
proach to enable AI systems to learn new tasks
quickly from limited data (Hospedales et al., 2021).
By training a model on a diverse set of tasks, meta-
learning encourages the discovery of representa-
tions and learning strategies that generalize to new,
unseen tasks. Intriguingly, recent research has
shown that, when exposed to specific data regimes,
meta-learning allows neural networks to perform
Bayesian inference (Genewein et al., 2023; Mikulik
et al., 2020; Ortega et al., 2019), which is critical
for principled prediction under uncertainty. A key
challenge in meta-learning is to design task distri-
butions that are sufficiently broad, exposing the
model to a rich variety of structures and patterns.
Such broad exposure could lead to “universal” rep-
resentations, enabling the system to tackle a wide
range of problems and bringing us closer to the
goal of artificial general intelligence (AGI).
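To make the data-generation idea concrete before the formal treatment, a toy stand-in is sketched below: sample a random machine, run it to produce an output sequence, and treat each freshly sampled machine as one meta-training episode for next-token prediction. The random finite-state machine used here is only a placeholder and is not universal, unlike the UTMs analysed in this work.

import numpy as np

rng = np.random.default_rng(0)

def sample_machine(n_states=8, n_symbols=2):
    # A random deterministic machine: a transition over states plus a per-state output symbol.
    transition = rng.integers(n_states, size=n_states)
    emission = rng.integers(n_symbols, size=n_states)
    return transition, emission

def generate_sequence(machine, length=64):
    transition, emission = machine
    state, out = 0, []
    for _ in range(length):
        out.append(int(emission[state]))
        state = int(transition[state])
    return out

# Each episode pairs a fresh "program" with its output; the network is trained to
# predict the next symbol in-context, amortizing the induction step across programs.
episodes = [generate_sequence(sample_machine()) for _ in range(4)]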
Solomonoff Induction1(SI) offers a compelling theoretical foundation for constructing such
an ideal universal prediction system (Solomonoff, 1964a,b)2. At its core, SI elegantly integrates
three fundamental principles (see Figure 1). Consideration of all computable hypotheses: Unlike
traditional approaches, SI explores the entire space of computable hypotheses (i.e. generated by a
1SI arguably solved the century-old induction problem (Rathmanner and Hutter, 2011), is the basis of the Hutter
prize (Hutter, 2006/2020) and has been praised by the father of AI, Marvin Minsky: “the most important discovery since
Gödel”.
2For an introduction see (Hutter, 2017; Hutter et al., 2007) and see (Hutter, 2007) for technical details.
Corresponding author(s): jordigrau@google.com
©2024 Google DeepMind. All rights reserved. arXiv:2401.14953v1 [cs.LG] 26 Jan 2024 |
2309.17453.pdf | Preprint
EFFICIENT STREAMING LANGUAGE MODELS
WITH ATTENTION SINKS
Guangxuan Xiao1∗Yuandong Tian2Beidi Chen3Song Han1Mike Lewis2
1Massachusetts Institute of Technology
2Meta AI
3Carnegie Mellon University
https://github.com/mit-han-lab/streaming-llm
ABSTRACT
Deploying Large Language Models (LLMs) in streaming applications such as
multi-round dialogue, where long interactions are expected, is urgently needed but
poses two major challenges. Firstly, during the decoding stage, caching previous
tokens’ Key and Value states (KV) consumes extensive memory. Secondly, popular
LLMs cannot generalize to longer texts than the training sequence length. Window
attention, where only the most recent KVs are cached, is a natural approach — but
we show that it fails when the text length surpasses the cache size. We observe
an interesting phenomenon, namely attention sink , that keeping the KV of initial
tokens will largely recover the performance of window attention. In this paper, we
first demonstrate that the emergence of attention sink is due to the strong attention
scores towards initial tokens as a “sink” even if they are not semantically important.
Based on the above analysis, we introduce StreamingLLM, an efficient framework
that enables LLMs trained with a finite length attention window to generalize to
infinite sequence length without any fine-tuning. We show that StreamingLLM can
enable Llama-2, MPT, Falcon, and Pythia to perform stable and efficient language
modeling with up to 4 million tokens and more. In addition, we discover that
adding a placeholder token as a dedicated attention sink during pre-training can
further improve streaming deployment. In streaming settings, StreamingLLM
outperforms the sliding window recomputation baseline by up to 22.2 ×speedup.
Code and datasets are provided in the link.
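A minimal sketch of the cache policy described above follows: keep the key/value entries of the first few "attention sink" tokens plus a rolling window of the most recent tokens. The default sizes are placeholders, and the rolling-cache bookkeeping and relative-position handling of the real implementation are omitted.

def streaming_kv_positions(seq_len: int, n_sinks: int = 4, window: int = 1020) -> list[int]:
    # Token positions whose K/V states are kept when generating token seq_len.
    if seq_len <= n_sinks + window:
        return list(range(seq_len))
    return list(range(n_sinks)) + list(range(seq_len - window, seq_len))

# With this policy the cache never exceeds n_sinks + window entries, so memory stays
# constant over arbitrarily long streams while the initial sink tokens keep attention stable.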
1 I NTRODUCTION
Large Language Models (LLMs) (Radford et al., 2018; Brown et al., 2020; Zhang et al., 2022;
OpenAI, 2023; Touvron et al., 2023a;b) are becoming ubiquitous, powering many natural language
processing applications such as dialog systems (Schulman et al., 2022; Taori et al., 2023; Chiang et al.,
2023), document summarization (Goyal & Durrett, 2020; Zhang et al., 2023a), code completion (Chen
et al., 2021; Rozière et al., 2023) and question answering (Kamalloo et al., 2023). To unleash the
full potential of pretrained LLMs, they should be able to efficiently and accurately perform long
sequence generation. For example, an ideal ChatBot assistant can stably work over the content of
recent day-long conversations. However, it is very challenging for LLM to generalize to longer
sequence lengths than they have been pretrained on, e.g., 4K for Llama-2 Touvron et al. (2023b).
The reason is that LLMs are constrained by the attention window during pre-training. Despite
substantial efforts to expand this window size (Chen et al., 2023; kaiokendev, 2023; Peng et al., 2023)
and improve training (Dao et al., 2022; Dao, 2023) and inference (Pope et al., 2022; Xiao et al., 2023;
Anagnostidis et al., 2023; Zhang et al., 2023b) efficiency for lengthy inputs, the acceptable sequence
length remains intrinsically finite , which doesn’t allow persistent deployments.
In this paper, we first introduce the concept of LLM streaming applications and ask the question:
Can we deploy an LLM for infinite-length inputs without sacrificing efficiency and performance?
∗Part of the work done during an internship at Meta AI.
arXiv:2309.17453v1 [cs.CL] 29 Sep 2023 |
2311.07445.pdf | Think Before You Speak: Cultivating Communication Skills of
Large Language Models via Inner Monologue
Junkai Zhou1,2, Liang Pang1∗, Huawei Shen1,2, Xueqi Cheng1,2
1CAS Key Laboratory of AI Security,
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
2University of Chinese Academy of Sciences, Beijing, China
{zhoujunkai20z,pangliang,shenhuawei,cxq}@ict.ac.cn
Abstract
The emergence of large language models
(LLMs) further improves the capabilities of
open-domain dialogue systems and can gen-
erate fluent, coherent, and diverse responses.
However, LLMs still lack a crucial ability: com-
munication skills. This limitation renders them
more like information seeking tools rather than
anthropomorphic chatbots. Communication
skills, such as topic transition, proactively ask-
ing questions, concept guidance, empathy, and
summarising often should be taken into consid-
eration, to make LLMs more anthropomorphic
and proactive during the conversation, thereby
increasing the interest of users and attracting
them to chat for longer. However, enabling
these communication skills in black-box LLMs
remains a key challenge because they do not
have the same utterance formation mode as
real people: think before speaking. Inspired by
linguistics and cognitive science, we empower
LLMs with communication skills through inner
monologues. To evaluate various communica-
tion skills, we construct a benchmark named
Cskills, which can also more comprehensively
evaluate the dialogue generation ability of the
model. Experimental results show that the pro-
posed CSIM strategy improves the backbone
models and outperforms the baselines.
1 Introduction
Open-domain dialogue systems need to generate
fluent, coherent, and diverse responses based on his-
tory utterances. The emergence of large language
models (Chowdhery et al., 2022; OpenAI, 2022;
Touvron et al., 2023) further enhances the capabili-
ties of dialogue generation systems and can meet
the above requirements. However, LLMs are more
like an information seeking tool than a chatbot like
a real person. Such a dialogue system may make
users lose interest in chatting and terminate the
conversation. The reason is that LLMs still lack an
∗Corresponding authors
Figure 1: When asked to recommend: (a) ChatGPT directly recommends without asking the detailed needs
of users, which may lead to failure to satisfy users; (b) people proactively ask questions to further understand
the needs of users before making recommendations.
important conversational ability: communication
skills. As shown in Figure 1, LLM makes recom-
mendations without a thorough comprehension of
the preferences of the user regarding movie gen-
res. This lack of detailed understanding may result
in inaccurate recommendation outcomes. People
use proactively asking questions in communication
skills to further understand the needs of the user,
thereby making better recommendations.
In linguistics, communication skills are used
to enhance the interactive experience during the
conversation and to establish effective communica-
tion (Dörnyei, 1995; Grover, 2005; Barker, 2010).
The five common communication skills are topic
transition, proactively asking questions, concept
guidance, empathy, and summarising often. Each
communication skill is applicable to different con-
versational situations and plays a different role
during the conversation. By using topic transi-
tion (Dörnyei, 1995; Riou, 2015), we can avoid
unfamiliar concepts and transition to familiar ones,
leading to better conversations. Proactively ask- arXiv:2311.07445v2 [cs.CL] 15 Mar 2024 |
2402.15175.pdf | Unified View of Grokking, Double Descent and Emergent Abilities: A
Perspective from Circuits Competition
Yufei Huang1Shengding Hu1Xu Han1Zhiyuan Liu1Maosong Sun†1
Abstract
Recent studies have uncovered intriguing phenom-
ena in deep learning, such as grokking ,double de-
scent , and emergent abilities in large language
models, which challenge human intuition and
are crucial for a deeper understanding of neural
models. In this paper, we present a comprehen-
sive framework that provides a unified view of
these three phenomena, focusing on the compe-
tition between memorization and generalization
circuits. This approach, initially employed to ex-
plain grokking , is extended in our work to encom-
pass a wider range of model sizes and training
data volumes. Our framework delineates four
distinct training dynamics, each depending on
varying combinations of model size and training
data quantity. Utilizing this framework, we pro-
vide a detailed analysis of the double descent phe-
nomenon and propose two verifiable predictions
regarding its occurrence, both substantiated by our
experimental results. Moreover, we expand our
framework to the multi-task learning paradigm,
demonstrating how algorithm tasks can be turned
into emergent abilities. This offers a novel per-
spective to understand emergent abilities in Large
Language Models.
1. Introduction
There are several interesting phenomena in Deep Learn-
ing, among which grokking (Power et al., 2022), double
descent (Nakkiran et al., 2020) and emergent abilities (Wei
et al., 2022a) in current Large Language Models attract a lot
of attention. Understanding these phenomena is important
for us to reveal the mechanism of deep learning. Plenty of
works (Liu et al., 2022; 2023; Thilak et al., 2022; Varma
et al., 2023; Schaeffer et al., 2023; Michaud et al., 2023)
have been done to explain these phenomena from dif-
ferent perspectives. However, these works all concentrate
1Tsinghua University, Beijing, China. Correspondence to:
Maosong Sun <sms@mail.tsinghua.edu.cn >.
Figure 1. The increasing memorization capacity and decreasing critical dataset size for larger models split
the figure into four distinct zones: progression, ungrokking, grokking and semi-grokking. Each zone will
show a specific training dynamic.
on a single phenomenon and explain them separately. In
this work, we provide a preliminary study to give a unified
view of these three phenomena from the perspective of com-
petition between memorization circuits and generalization
circuits in neural models.
Our work is based on Varma et al. (2023)’s explanation
for grokking. They attribute grokking to the competition
between two distinct types of circuits in the model: one
responsible for memorization, which achieves high train-
ing accuracy but poor validation accuracy, and another for
generalization, capable of high performance in both train-
ing and validation. The latter, although slower to develop,
proves more efficient in terms of parameter norms, leading
to the model finally transferring from memorization to gen-
eralization to achieve higher efficiency. Intriguingly, the
efficiency of the memorization circuit is inversely propor-
tional to the volume of training data, indicating that larger
datasets reduce their efficiency. In contrast, the efficiency of
the generalization circuit remains consistently stable, regard-
less of the size of the training data. Consequently, Varma
et al. (2023) established a critical dataset size Dcrit, a spe-
cific range within which memorization and generalization
circuits exhibit comparable efficiency, and beyond which
grokking is likely to occur.
arXiv:2402.15175v2 [cs.LG] 26 Feb 2024 |
2206.00364.pdf | Elucidating the Design Space of Diffusion-Based
Generative Models
Tero Karras
NVIDIAMiika Aittala
NVIDIATimo Aila
NVIDIASamuli Laine
NVIDIA
Abstract
We argue that the theory and practice of diffusion-based generative models are
currently unnecessarily convoluted and seek to remedy the situation by presenting
a design space that clearly separates the concrete design choices. This lets us
identify several changes to both the sampling and training processes, as well as
preconditioning of the score networks. Together, our improvements yield new
state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in
an unconditional setting, with much faster sampling (35 network evaluations per
image) than prior designs. To further demonstrate their modular nature, we show
that our design changes dramatically improve both the efficiency and quality ob-
tainable with pre-trained score networks from previous work, including improving
the FID of a previously trained ImageNet-64 model from 2.07 to near-SOTA 1.55,
and after re-training with our proposed improvements to a new SOTA of 1.36.
1 Introduction
Diffusion-based generative models [ 46] have emerged as a powerful new framework for neural image
synthesis, in both unconditional [ 16,37,49] and conditional [ 17,36,37,39,40,42,43,49] settings,
even surpassing the quality of GANs [ 13] in certain situations [ 9]. They are also rapidly finding use
in other domains such as audio [ 28,38] and video [ 19] generation, image segmentation [ 4,57] and
language translation [ 35]. As such, there is great interest in applying these models and improving
them further in terms of image/distribution quality, training cost, and generation speed.
The literature on these models is dense on theory, and derivations of sampling schedule, training
dynamics, noise level parameterization, etc., tend to be based as directly as possible on theoretical
frameworks, which ensures that the models are on a solid theoretical footing. However, this approach
has a danger of obscuring the available design space — a proposed model may appear as a tightly
coupled package where no individual component can be modified without breaking the entire system.
As our first contribution, we take a look at the theory behind these models from a practical standpoint,
focusing more on the “tangible” objects and algorithms that appear in the training and sampling
phases, and less on the statistical processes from which they might be derived. The goal is to obtain
better insights into how these components are linked together and what degrees of freedom are
available in the design of the overall system. We focus on the broad class of models where a neural
network is used to model the score [ 22] of a noise level dependent marginal distribution of the training
data corrupted by Gaussian noise. Thus, our work is in the context of denoising score matching [54].
Our second set of contributions concerns the sampling processes used to synthesize images using
diffusion models. We identify the best-performing time discretization for sampling, apply a higher-
order Runge–Kutta method for the sampling process, evaluate different sampler schedules, and
analyze the usefulness of stochasticity in the sampling process. The result of these improvements is a
significant drop in the number of sampling steps required during synthesis, and the improved sampler
can be used as a drop-in replacement with several widely used diffusion models [37, 49].
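As an illustration of this kind of sampler, a deterministic second-order (Heun) integrator of the probability-flow ODE is sketched below for a denoiser D(x, sigma); the specific time discretization, preconditioning and stochastic variants studied in this work are not reproduced here.

def heun_sampler(denoise, x, sigmas):
    # Integrates dx/dsigma = (x - D(x, sigma)) / sigma from sigmas[0] down to sigmas[-1] (typically 0).
    # denoise(x, sigma) returns the denoised estimate D(x, sigma); x starts as noise at scale sigmas[0].
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        d = (x - denoise(x, sigma)) / sigma                      # Euler slope at sigma
        x_euler = x + (sigma_next - sigma) * d
        if sigma_next > 0:                                       # second-order correction
            d_next = (x_euler - denoise(x_euler, sigma_next)) / sigma_next
            x = x + (sigma_next - sigma) * 0.5 * (d + d_next)
        else:
            x = x_euler
    return x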
36th Conference on Neural Information Processing Systems (NeurIPS 2022). arXiv:2206.00364v2 [cs.CV] 11 Oct 2022 |
2010.15327.pdf | Published as a conference paper at ICLR 2021
DOWIDE AND DEEP NETWORKS LEARN THE SAME
THINGS ? U NCOVERING HOW NEURAL NETWORK
REPRESENTATIONS VARY WITH WIDTH AND DEPTH
Thao Nguyen∗, Maithra Raghu, & Simon Kornblith
Google Research
{thaotn,maithra,skornblith }@google.com
ABSTRACT
A key factor in the success of deep neural networks is the ability to scale models
to improve performance by varying the architecture depth and width. This simple
property of neural network design has resulted in highly effective architectures for
a variety of tasks. Nevertheless, there is limited understanding of effects of depth
and width on the learned representations . In this paper, we study this fundamental
question. We begin by investigating how varying depth and width affects model
hidden representations, finding a characteristic block structure in the hidden rep-
resentations of larger capacity (wider or deeper) models. We demonstrate that
this block structure arises when model capacity is large relative to the size of the
training set, and is indicative of the underlying layers preserving and propagating
the dominant principal component of their representations. This discovery has
important ramifications for features learned by different models, namely, repre-
sentations outside the block structure are often similar across architectures with
varying widths and depths, but the block structure is unique to each model. We
analyze the output predictions of different model architectures, finding that even
when the overall accuracy is similar, wide and deep models exhibit distinctive
error patterns and variations across classes.
1 I NTRODUCTION
Deep neural network architectures are typically tailored to available computational resources by
scaling their width and/or depth. Remarkably, this simple approach to model scaling can result
in state-of-the-art networks for both high- and low-resource regimes (Tan & Le, 2019). However,
despite the ubiquity of varying depth and width, there is limited understanding of how varying these
properties affects the final model beyond its performance. Investigating this fundamental question
is critical, especially with the continually increasing compute resources devoted to designing and
training new network architectures.
More concretely, we can ask, how do depth and width affect the final learned representations? Do
these different model architectures also learn different intermediate (hidden layer) features? Are
there discernible differences in the outputs? In this paper, we study these core questions, through
detailed analysis of a family of ResNet models with varying depths and widths trained on CIFAR-10
(Krizhevsky et al., 2009), CIFAR-100 and ImageNet (Deng et al., 2009).
We show that depth/width variations result in distinctive characteristics in the model internal rep-
resentations, with resulting consequences for representations and outputs across different model
initializations and architectures. Specifically, our contributions are as follows:
• We develop a method based on centered kernel alignment (CKA) to efficiently measure the simi-
larity of the hidden representations of wide and deep neural networks using minibatches.
• We apply this method to different network architectures, finding that representations in wide or
deep models exhibit a characteristic structure, which we term the block structure . We study how
the block structure varies across different training runs, and uncover a connection between block
∗Work done as a member of the Google AI Residency program.
arXiv:2010.15327v2 [cs.LG] 10 Apr 2021 |
3-science.aay8015.pdf | STRUCTURAL BIOLOGY
Structural basis for strand-transfer inhibitor binding
to HIV intasomes
Dario Oliveira Passos1*, Min Li2*, Ilona K. Józ ´wik1, Xue Zhi Zhao3, Diogo Santos-Martins4,
Renbin Yang2, Steven J. Smith3, Youngmin Jeon1, Stefano Forli4, Stephen H. Hughes3,
Terrence R. Burke Jr.3, Robert Craigie2, Dmitry Lyumkis1,4†
The HIV intasome is a large nucleoprotein assembly that mediates the integration of a DNA copy of
the viral genome into host chromatin. Intasomes are targeted by the latest generation of antiretroviral
drugs, integrase strand-transfer inhibitors (INSTIs). Challenges associated with lentiviral intasome
biochemistry have hindered high-resolution structural studies of how INSTIs bind to their native drug
target. Here, we present high-resolution cryo –electron microscopy structures of HIV intasomes bound
to the latest generation of INSTIs. These structures highlight how small changes in the integrase active
site can have notable implications for drug binding and design and provide mechanistic insights into
why a leading INSTI retains efficacy against a broad spectrum of drug-resistant variants. The data have
implications for expanding effective treatments available for HIV-infected individuals.
HIV currently infects ~40 million people
worldwide. The virus’s ability to inte-
grate a viral DNA (vDNA) copy of its
RNA genome into host chromatin, lead-
ing to the establishment of a permanent
and irreversible infection of the target cell (and any progeny cells), is the central chal-
lenge in developing a cure ( 1). Integration, cat-
alyzed by the viral integrase (IN) protein, is
essential for retroviral replication and results
in the covalent linkage of vDNA to the host
genome ( 2,3). Proper integration depends on
the formation of a large oligomeric nucleo-
protein complex containing viral IN assembled
on the ends of vDNA, commonly referred to
as an intasome ( 4–9). All intasomes contain
multimeric IN bound to vDNA ends, but they
are characterized by distinct oligomeric con-
figurations and domain arrangements.
Intasome assembly and catalysis proceed
through a multistep process that involves sev-
eral distinct intermediates (fig. S1). The cat-
alytically competent cleaved synaptic complex
(CSC) intasome, which contains free 3 ′-OH
ends, is the specific target of the IN strand-
transfer inhibitors (INSTIs), a group of drugs
that bind to both the active site of HIV IN
and the ends of vDNA, thereby blocking ca-
talysis. Treatment with INSTIs, which are a key
component of combined antiretroviral thera-
py, leads to a rapid decrease in viral load in
patients. INSTIs are generally well tolerated,
and the second-generation drugs do not read-
ily select for resistance ( 10–13). They are used
in the recommended first-line combination therapies for treating HIV-infected patients
and are prime candidates for future develop-
ment ( 14,15).
The prototype foamy virus (PFV) intasome
has been used as a model system to under-
stand INSTI binding ( 6,16–19). However, this
system has limitations. PFV and HIV INs share
only ~25% of sequence identity in the catalytic
core domain (CCD) ( 6), and many of the sites
where drug-resistance mutations occur in HIV
IN are not conserved in PFV IN. Moreover,
minor changes in the structure of an INSTI can
profoundly affect its ability to inhibit mutant
forms of HIV ( 19,20). Thus, understanding
how INSTIs interact with HIV intasomes —
their natural target —at a molecular level is
needed to overcome drug resistance and to
guide development of improved inhibitors.
We established conditions for assembling,
purifying, and structurally characterizing HIV
CSC intasomes. Previously, we have shown
that fusion of the small protein Sso7d to the
N-terminal domain (NTD) of HIV IN improves
its solubility and facilitates assembly and puri-
fication of strand-transfer complex intasomes
(4, 21). We further optimized conditions re-
quired for CSC formation and purification
and showed that these complexes are bio-
chemically active for concerted integration
(fig. S2). We used a tilted cryo –electron mi-
croscopy (cryo-EM) data collection strategy
to alleviate the effects of preferential speci-
men orientation on cryo-EM grids ( 22), which
allowed us to collect data on the apo form
of the HIV CSC intasome. The cryo-EM re-
construction of the HIV CSC intasome reveals
a twofold symmetric dodecameric molecular
assembly of IN. The highest resolution (~2.7 Å)
resides within the core containing the two catalytic sites and the ends of vDNA (fig. S3
and table S1).
Lentiviral intasomes have a large degree of
heterogeneity and vary in size depending on the protein and biochemical conditions, form-
ing tetramers, dodecamers, hexadecamers,
and proto-intasome stacks (figs. S4 and S5).
The basic underlying unit, the conserved in-
tasome core (CIC), resembles —but is not iden-
tical to —the tetrameric PFV intasome. The
CIC is composed of two IN dimers, each of
which binds one vDNA end and a C-terminal
domain (CTD) from a neighboring protomer
(23). In the cryo-EM reconstruction, four fully
defined IN protomers, two CTDs from flank-
ing protomers, and two additional CTDs from
distal subunits are clearly resolved (Fig. 1A);
these were used to build an atomic model (Fig. 1B). With the exception of the additional
CTDs from distal subunits, which are not
conserved in other retroviral species, the re-
solved regions constitute the intasome CIC.
Each of the two active sites in an HIV in-
tasome contains the catalytic residues Asp64, Asp116, and Glu152, forming the prototypical
DDE motif present in many nucleases, trans-
posases, and other INs ( 24). The regions near
the active sites of the PFV and HIV intasomes
are similar because many of the residues par-
ticipate in substrate binding and catalysis.
However, farther from the active sites, the
structures diverge (Fig. 1C and figs. S6 and S7).
The largest differences reside in the synaptic
CTD from the flanking protomer, specifically
the region around the loop spanning HIV IN
Arg228-Lys236. The corresponding loop in PFV
IN has four additional residues and assumes a
distinct configuration. Clinically relevant drug-
resistance mutations occur within regions of
HIV IN where the amino acid sequences be-
tween the two orthologs diverge ( 11,12).
To better understand how INSTIs interact
with HIV intasomes, we assembled the com-
plex with bictegravir (BIC), a leading second-
generation INSTI and the most broadly potent of
all clinically approved INSTIs ( 25). We also ex-
amined the binding of additional compounds —
named 4f, 4d, and 4c, which contain a distinct
chelating core (Fig. 2A) —whose development
was motivated by the need to further improve
potency against drug-resistant variants ( 19,20).
Currently, 4d is a leading drug candidate that
shows improved efficacy over all clinically used
and developmental compounds against the
known drug-resistant variants (25, 26) (fig. S8).
Intasomes were coassembled and copurified
with INSTIs, and we verified their inhibitory
activity (fig. S9). The cryo-EM structures of
INSTI-bound CSCs extend to a comparable
~2.6 to 2.7 Å resolution near the active site,
which allows the derivation of atomic models
(figs. S10 to S12 and table S1).
INSTIs bind HIV CSCs within a well-defined
pocket, formed by the interface between two
IN protomers and vDNA. Several important
pharmacophores characterize the binding of
all INSTIs (Fig. 2, B and C). First, three cen-
tral electronegative heteroatoms chelate two
1The Salk Institute for Biological Studies, Laboratory of Genetics,
La Jolla, CA 92037, USA.2National Institutes of Health, National
Institute of Diabetes and Digestive Diseases, Bethesda, MD
20892, USA.3Center for Cancer Research, National Cancer
Institute, Frederick, MD 21702, USA.4Department of Integrative
Structural and Computational Biology, The Scripps Research
Institute, La Jolla, CA 92037, USA.
*These authors contributed equally to this work.
†Corresponding author. Email: dlyumkis@salk.edu
|
2212.03533.pdf | Text Embeddings by Weakly-Supervised
Contrastive Pre-training
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao
Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei
Microsoft Corporation
https://github.com/microsoft/unilm
Abstract
This paper presents E5,1 a family of state-of-the-art text embeddings that transfer
well to a wide range of tasks. The model is trained in a contrastive manner with
weak supervision signals from our curated large-scale text pair dataset (called
CCPairs). E5 can be readily used as a general-purpose embedding model for any
tasks requiring a single-vector representation of texts such as retrieval, clustering,
and classification, achieving strong performance in both zero-shot and fine-tuned
settings. We conduct extensive evaluations on 56datasets from the BEIR and
MTEB benchmarks. For zero-shot settings, E5 is the first model that outperforms
the strong BM25 baseline on the BEIR retrieval benchmark without using any
labeled data. When fine-tuned, E5 obtains the best results on the MTEB benchmark,
beating existing embedding models with 40×more parameters.
1 Introduction
Text embeddings are low-dimensional vector representations for arbitrary-length texts and play key
roles in many NLP tasks such as large-scale retrieval. Compared to the high-dimensional and sparse
representations like TF-IDF, text embeddings have the potential to overcome the lexical mismatch
issue and facilitate efficient retrieval and matching between texts. It also offers a versatile interface
easily consumable by downstream applications.
While pre-trained language models such as BERT [ 17] and GPT [ 7] can produce transferrable
text representations, they are not ideal for tasks such as retrieval and text matching where a single-
vector embedding of texts is more desired due to its efficiency and versatility. To obtain better
text embeddings, contrastive learning is often the go-to framework to enhance the sequence-level
representations from text pairs. Along this line of research, some works are geared towards learning
task-specific embeddings. For example, GTR [ 43] and Sentence-T5 [ 44] fine-tune pre-trained
models with supervised datasets to learn embeddings customized for passage retrieval and semantic
textual similarity, respectively. Other works learn unsupervised embeddings from automatically
constructed text pairs. Typical methods to construct text pairs include Inverse Close Task (ICT)
[9], random cropping [ 28] and neighboring text spans [ 41], etc. While such synthetic data are
of unlimited quantity, they are often poor in quality and the resulted embeddings fail to match the
performance of the classic BM25 baseline without further fine-tuning [40].
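As a reminder of what the contrastive objective looks like in this setting, a minimal in-batch-negative (InfoNCE-style) loss over paired text embeddings is sketched below; the temperature and the composition of negatives are illustrative and not necessarily the settings used for E5.

import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(q_emb, p_emb, temperature=0.05):
    # q_emb, p_emb: [batch, dim] embeddings of the two sides of each text pair.
    # Every other passage in the batch serves as a negative for a given query.
    q = F.normalize(q_emb, dim=-1)
    p = F.normalize(p_emb, dim=-1)
    logits = q @ p.T / temperature                     # [batch, batch] cosine similarities
    labels = torch.arange(q.size(0), device=q.device)  # the matching pair sits on the diagonal
    return F.cross_entropy(logits, labels)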
In this work, we learn a high-quality general-purpose text embedding termed E5, EmbEddings from
bidirEctional Encoder rEpresentations. E5 aims to provide strong off-the-shelf text embeddings
suitable for any tasks requiring single-vector representations in both zero-shot or fine-tuned settings.
To achieve this goal, instead of relying on limited labeled data or low-quality synthetic text pairs, we
contrastively train E5 embeddings from CCPairs, a curated web-scale text pair dataset containing
1E5: EmbEddings from bidirEctional Encoder rEpresentations
Work in progress. arXiv:2212.03533v2 [cs.CL] 22 Feb 2024 |
2305.15486.pdf | SPRING: GPT-4 Out-performs RL Algorithms by
Studying Papers and Reasoning
Yue Wu14∗, Shrimai Prabhumoye2, So Yeon Min1, Yonatan Bisk1, Ruslan Salakhutdinov1,
Amos Azaria3, Tom Mitchell1, Yuanzhi Li1,4
1Carnegie Mellon University,2NVIDIA,3Ariel University,4Microsoft Research
Abstract
Open-world survival games pose significant challenges for AI algorithms due to
their multi-tasking, deep exploration, and goal prioritization requirements. Despite
reinforcement learning (RL) being popular for solving games, its high sample
complexity limits its effectiveness in complex open-world games like Crafter or
Minecraft. We propose a novel approach, SPRING, to read the game’s original
academic paper and use the knowledge learned to reason and play the game through
a large language model (LLM). Prompted with the L ATEX source as game context
and a description of the agent’s current observation, our SPRING framework em-
ploys a directed acyclic graph (DAG) with game-related questions as nodes and
dependencies as edges. We identify the optimal action to take in the environment
by traversing the DAG and calculating LLM responses for each node in topological
order, with the LLM’s answer to the final node directly translating to environment
actions. In our experiments, we study the quality of in-context “reasoning” in-
duced by different forms of prompts under the setting of the Crafter open-world
environment. Our experiments suggest that LLMs, when prompted with consistent
chain-of-thought, have great potential in completing sophisticated high-level tra-
jectories. Quantitatively, SPRING with GPT-4 outperforms all state-of-the-art RL
baselines, trained for 1M steps, without any training. Finally, we show the potential
of games as a test bed for LLMs.
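A schematic of the DAG traversal described in the abstract: questions are visited in topological order, each prompt includes the answers of the node's parents, and the final node's answer is mapped to an environment action. The llm and parse_action callables and the question/edge structures below are placeholders rather than the actual prompts and question set used.

from graphlib import TopologicalSorter

def spring_step(questions, edges, context, llm, parse_action):
    # questions: {node: question text}; edges: {node: iterable of parent nodes}.
    order = list(TopologicalSorter({n: edges.get(n, ()) for n in questions}).static_order())
    answers = {}
    for node in order:
        parent_answers = "\n".join(answers[p] for p in edges.get(node, ()))
        prompt = f"{context}\n{parent_answers}\n{questions[node]}"
        answers[node] = llm(prompt)                    # one LLM call per DAG node
    return parse_action(answers[order[-1]])            # final node's answer -> environment action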
1 Introduction
Open-world survival games like Minecraft Fan et al. (2022) and Crafter Hafner (2021) pose significant
challenges for AI algorithms due to a combination of factors: procedural generation requires strong
generalization; diverse action space requires multi-task capabilities; technology tree requires long-
term planning and deep exploration; diverse and conflicting objectives requires goal prioritization. In
particular, Crafter is designed for efficient simulation and fast iteration. Similar to Minecraft, Crafter
features key challenges such as multi-tasking, exploration with a deep and wide tech-tree, requiring
the agent to craft multiple tools and interact with multiple objects to survive in the game.
Reinforcement learning (RL) has been the go-to approach for game-based problems, with numerous
successes in games like Go Silver et al. (2017), robotics Fu et al. (2020); Hafner et al. (2023) and
various video games Vinyals et al. (2019); Schrittwieser et al. (2020); Badia et al. (2020); Hafner
et al. (2023). While RL demonstrated impressive performance, it still suffers from certain limitations,
such as high sample complexity and difficulty in incorporating prior knowledge. Such drawbacks
make it exceptionally challenging to apply RL to diverse and complex open-world benchmarks like
Crafter Hafner (2021) or Minecraft Fan et al. (2022). Addressing the benefits and drawbacks of RL is
therefore crucial for achieving a sample-efficient solution.
∗Work done during internship at Microsoft. For correspondence, contact ywu5@andrew.cmu.edu
Preprint. Under review. |
1909.08593.pdf | Fine-Tuning Language Models from Human Preferences
Daniel M. Ziegler∗, Nisan Stiennon∗, Jeffrey Wu, Tom B. Brown
Alec Radford, Dario Amodei, Paul Christiano, Geoffrey Irving
OpenAI
{dmz,nisan,jeffwu,tom,alec,damodei,paul,irving}@openai.com
Abstract
Reward learning enables the application of rein-
forcement learning (RL) to tasks where reward is
defined by human judgment, building a model of
reward by asking humans questions. Most work
on reward learning has used simulated environ-
ments, but complex information about values is of-
ten expressed in natural language, and we believe
reward learning for language is a key to making
RL practical and safe for real-world tasks. In this
paper, we build on advances in generative pretrain-
ing of language models to apply reward learning
to four natural language tasks: continuing text
with positive sentiment or physically descriptive
language, and summarization tasks on the TL;DR
and CNN/Daily Mail datasets. For stylistic con-
tinuation we achieve good results with only 5,000
comparisons evaluated by humans. For summa-
rization, models trained with 60,000 comparisons
copy whole sentences from the input but skip irrel-
evant preamble; this leads to reasonable ROUGE
scores and very good performance according to
our human labelers, but may be exploiting the fact
that labelers rely on simple heuristics.
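As background on how a model of reward can be fit from human judgments, here is a minimal sketch of a Bradley-Terry-style pairwise preference loss in PyTorch; the pairwise form, the linear reward head, and the toy features are illustrative assumptions, not the paper's actual architecture or comparison format.

```python
import torch

def preference_loss(reward_model, preferred, rejected):
    """reward_model maps a batch of response features to scalar rewards.
    Trains rewards so the human-preferred response scores higher than the rejected one."""
    r_pos = reward_model(preferred)   # (batch,)
    r_neg = reward_model(rejected)    # (batch,)
    return -torch.nn.functional.logsigmoid(r_pos - r_neg).mean()

# Toy usage: a linear reward head over fixed 16-dimensional "response features".
torch.manual_seed(0)
head = torch.nn.Linear(16, 1)
model = lambda x: head(x).squeeze(-1)
loss = preference_loss(model, torch.randn(8, 16), torch.randn(8, 16))
loss.backward()
print(float(loss))
```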
1. Introduction
We would like to apply reinforcement learning to complex
tasks defined only by human judgment, where we can only
tell whether a result is good or bad by asking humans. To
do this, we can first use human labels to train a model of
reward, and then optimize that model. While there is a long
history of work learning such models from humans through
interaction, this work has only recently been applied to mod-
ern deep learning, and even then has only been applied to
relatively simple simulated environments (Christiano et al.,
2017; Ibarz et al., 2018; Bahdanau et al., 2018). By contrast,
real world settings in which humans need to specify com-
plex goals to AI agents are likely to both involve and require
natural language, which is a rich medium for expressing
value-laden concepts. Natural language is particularly im-
portant when an agent must communicate back to a human
to help provide a more accurate supervisory signal (Irving
et al., 2018; Christiano et al., 2018; Leike et al., 2018).
Natural language processing has seen substantial recent ad-
vances. One successful method has been to pretrain a large
generative language model on a corpus of unsupervised data,
then fine-tune the model for supervised NLP tasks (Dai and
Le, 2015; Peters et al., 2018; Radford et al., 2018; Khandel-
wal et al., 2019). This method often substantially outper-
forms training on the supervised datasets from scratch, and
a single pretrained language model often can be fine-tuned
for state of the art performance on many different super-
vised datasets (Howard and Ruder, 2018). In some cases,
fine-tuning is not required: Radford et al. (2019) find that
generatively trained models show reasonable performance
on NLP tasks with no additional training (zero-shot).
There is a long literature applying reinforcement learning to
natural language tasks. Much of this work uses algorithmi-
cally defined reward functions such as BLEU for translation
(Ranzato et al., 2015; Wu et al., 2016), ROUGE for summa-
rization (Ranzato et al., 2015; Paulus et al., 2017; Wu and
Hu, 2018; Gao et al., 2019b), music theory-based rewards
(Jaques et al., 2017), or event detectors for story generation
(Tambwekar et al., 2018). Nguyen et al. (2017) used RL
on BLEU but applied several error models to approximate
human behavior. Wu and Hu (2018) and Cho et al. (2019)
learned models of coherence from existing text and used
them as RL rewards for summarization and long-form gen-
eration, respectively. Gao et al. (2019a) built an interactive
summarization tool by applying reward learning to one ar-
ticle at a time. Experiments using human evaluations as
rewards include Kreutzer et al. (2018) which used off-policy
reward learning for translation, and Jaques et al. (2019)
which applied the modified Q-learning methods of Jaques
et al. (2017) to implicit human preferences in dialog. Yi
et al. (2019) learned rewards from humans to fine-tune dia-
log models, but smoothed the rewards to allow supervised
learning. We refer to Luketina et al. (2019) for a survey of |
10.1038.s41586-023-06924-6.pdf | Mathematical discoveries from program
search with large language models
Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog,
M. Pawan Kumar, Emilien Dupont, Francisco J. R. Ruiz, Jordan S. Ellenberg, Pengming Wang,
Omar Fawzi, Pushmeet Kohli & Alhussein Fawzi
This is a PDF file of a peer-reviewed paper that has been accepted for publication.
Although unedited, the content has been subjected to preliminary formatting. Nature
is providing this early version of the typeset paper as a service to our authors and readers. The text and figures will undergo copyediting and a proof review before the
paper is published in its final form. Please note that during the production process
errors may be discovered which could affect the content, and all legal disclaimers
apply.Received: 12 August 2023
Accepted: 30 November 2023
Accelerated Article Preview
Published online xx xx xxxx
Cite this article as: Romera-Paredes, B. et al.
Mathematical discoveries from program search with large language models. Nature
https://doi.org/10.1038/s41586-023-06924-6
(2023)https://doi.org/10.1038/s41586-023-06924-6
Nature | www.nature.com
Accelerated Article Preview
ACCELERATED ARTICLE PREVIEW |
2404.18796v1.pdf | Replacing Judges with Juries:
Evaluating LLM Generations with a Panel of Diverse Models
Pat Verga
Sebastian Hofstätter, Sophia Althammer, Yixuan Su
Aleksandra Piktus, Arkady Arkhangorodsky, Minjie Xu, Naomi White
Patrick Lewis
Cohere
Abstract
As Large Language Models (LLMs) have become
more advanced, they have outpaced our abilities
to accurately evaluate their quality. Not only is
finding data to adequately probe particular model
properties difficult, but evaluating the correctness
of a model’s free-form generation alone is a chal-
lenge. To address this, many evaluations now rely
on using LLMs themselves as judges to score the
quality of outputs from other LLMs. Evaluations
most commonly use a single large model like GPT-
4. While this method has grown in popularity, it
is costly, has been shown to introduce intra-model
bias, and in this work, we find that very large mod-
els are often unnecessary. We propose instead to
evaluate models using a Panel of LLm evaluators
(PoLL). Across three distinct judge settings and
spanning six different datasets, we find that using
a PoLL composed of a larger number of smaller
models outperforms a single large judge, exhibits
less intra-model bias due to its composition of dis-
joint model families, and does so while being over
seven times less expensive.
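To illustrate the panel idea, here is a minimal sketch of pooling verdicts from several smaller judge models; the judge interface and the simple majority-vote aggregation are illustrative assumptions (one of several possible pooling functions), not necessarily the exact protocol used in the paper.

```python
from collections import Counter

def poll_verdict(question, answer, reference, judges):
    """judges: list of callables (question, answer, reference) -> 'correct' | 'incorrect'.
    Returns the majority vote across the panel (ties broken arbitrarily by Counter)."""
    votes = [judge(question, answer, reference) for judge in judges]
    return Counter(votes).most_common(1)[0][0]

# Toy usage with stub judges standing in for disjoint model families.
judges = [
    lambda q, a, ref: "correct" if ref.lower() in a.lower() else "incorrect",
    lambda q, a, ref: "correct",      # an over-lenient judge
    lambda q, a, ref: "incorrect",    # an over-strict judge
]
print(poll_verdict("Capital of France?", "It is Paris.", "Paris", judges))
```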
1 Introduction
Evaluating generative language models is a chal-
lenging task: not only is it difficult to find mean-
ingful data to test the models, but evaluating
the correctness of a generated response is it-
self a challenge. Multiple choice datasets like
MMLU (Hendrycks et al., 2020) have become pop-
ular in part by side-stepping the difficulty of evalu-
ating generations. However, multiple-choice ques-
tions are in many ways probing a different property
than that of a free-form generative task, which is
oftentimes closer to the downstream use-case.
Figure 1: Top: Rankings of model performance change drastically depending on which LLM is used as the judge on KILT-NQ. Bottom: The Panel of LLm evaluators (PoLL) has the highest Cohen’s κ correlation with human judgements.
Many automatic metrics have been used across various tasks such as BLEU in machine translation (Papineni et al., 2002), ROUGE for summarization (Lin, 2004), and heuristic string match
methods, such as normalized exact match (EM) and
token level F1 for question answering (Rajpurkar
et al., 2016). However, these simplistic methods
commonly fail to analyze the intended property of
interest. QA metrics, for example, invariably lead
to both false positive failures (e.g. superfluous to-
ken overlap) and more commonly false negatives
due to an incomplete set of gold reference answers
(e.g. date format differences1, inclusion of middle
initial in person’s name, etc.).
More recent methods have attempted to address
these issues by instead using trained or prompted
models as evaluators (Sellam et al., 2020; Zheng
1We found that EM unjustly penalized Command models
for a tendency to write in Canadian or British English as QA
dataset annotations typically format dates in American MM-
DD-YYYY format. |
nihms-1631034.pdf | Structure of the Visual Signaling Complex between Transducin
and Phosphodiesterase 6
Yang Gao1,2,5, Gözde Eskici1,2,5, Sekar Ramachandran3,4,5, Frédéric Poitevin1,2, Alpay
Burak Seven1,2, Ouliana Panova1,2, Georgios Skiniotis1,2,*, Richard A. Cerione3,4,6,*
1Department of Molecular and Cellular Physiology, Stanford University School of Medicine,
Stanford, CA 94305, USA.
2Department of Structural Biology, Stanford University School of Medicine, Stanford, CA 94305,
USA.
3Department of Chemistry and Chemical Biology, Cornell University, Ithaca, NY 14853, USA.
4Department of Molecular Medicine, Cornell University, Ithaca, NY 14853, USA.
5These authors contributed equally to this work.
6Lead Contact
SUMMARY
Heterotrimeric G proteins communicate signals from activated G protein-coupled receptors to
downstream effector proteins. In the phototransduction pathway responsible for vertebrate vision,
the G protein-effector complex is comprised of the GTP-bound transducin α subunit (GαT·GTP)
and the cyclic GMP (cGMP) phosphodiesterase 6 (PDE6), which stimulates cGMP hydrolysis
leading to hyperpolarization of the photoreceptor cell. Here we report a cryo-electron microscopy
(cryoEM) structure of PDE6 complexed to GTP-bound GαT. The structure reveals two GαT·GTP
subunits engaging the PDE6 hetero-tetramer at both the PDE6 catalytic core and the PDEγ
subunits, driving extensive rearrangements to relieve all inhibitory constraints on enzyme
catalysis. Analysis of the conformational ensemble in the cryoEM data highlights the dynamic
nature of the contacts between the two GαT·GTP subunits and PDE6 that supports an alternating-
site catalytic mechanism.
*Correspondence: rac1@cornell.edu (R.A.C.), yiorgo@stanford.edu (G.S.).
AUTHOR CONTRIBUTIONS
Y .G. developed the purification strategy, performed complex purification and PDE6 activity assays for mutants, built and refined the
structural model from the cryoEM map and wrote the first draft of the manuscript. G.E. processed the cryoEM data, obtained the
cryoEM map and assisted in the flexibility analysis. S.R. generated the 1D4-tagged transducin construct and performed PDE activity
assays with varying transducin concentrations. F.P. performed the flexibility analysis. A.B.S. assisted in cryoEM data processing. O.P.
froze grids and obtained cryoEM data. Y .G., G.S. and R.A.C. edited the manuscript with input from G.E., S.R. and F.P.. G.S. and
R.A.C. supervised the project.
DECLARATION OF INTERESTS
The authors declare no competing interests.
Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our
customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of
the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered
which could affect the content, and all legal disclaimers that apply to the journal pertain.
Mol Cell . Author manuscript; available in PMC 2021 October 15.
Published in final edited form as:
Mol Cell . 2020 October 15; 80(2): 237–245.e4. doi:10.1016/j.molcel.2020.09.013.
|
Driscoll-Hall-Sivencrona-Xumsteg-03.pdf | Byzantine Fault Tolerance, from Theory to Reality
Kevin Driscoll1, Brendan Hall1, Håkan Sivencrona2, Phil Zumsteg1
1Honeywell International
3660 Technology Drive, Minneapolis, MN 55418
{brendan.hall,kevin.driscoll,phil.j.zumsteg}@Honeywell.com
2Chalmers University of Technology
Department of Computer Engineering, SE-412 96 Göteborg, Sweden
sivis@computer.org
Abstract. Since its introduction nearly 20 years ago, the Byzantine Generals
Problem has been the subject of many papers having the scrutiny of the fault tolerance community. Numerous Byzantine fault tolerant algorithms and architectures have been proposed. However, this problem is not yet sufficiently un-
derstood by those who design, build, and maintain systems with high depend-
ability requirements. Today, there are still many misconceptions relating to Byzantine failure, what makes a system vulnerable, and indeed the very nature and reality of Byzantine faults. This paper revisits the Byzantine problem from
a practitioner’s perspective. The intention is to provide the reader with a work-
ing appreciation of Byzantine failure from a practical as well as a theoretical perspective. A discussion of typical failure properties and the difficulties in preventing the associated failure propagation is presented. These are illustrated
with real Byzantine failure observations. Finally, various architectural solutions
to the Byzantine problem are presented.
1 What You Thought Could Never Happen
In English, the phrase “one in a million” is popularly used to describe the highly im-
probable. The ratio itself is difficult to comprehend. The easiest way to give it reason is to equate it to real-world expectations. For example, the probability of winning the
U.K. National Lottery is around one in fourteen million; the probability of getting
struck by lightning in the U.S. is around one in six hundred thousand [1]. It is not safe to rely on intuition for reasoning about unfathomably small probabilities (for exam-
ple, the 1-in-1,000,000,000 maximum failure probability for critical aerospace
systems1). It is problematic in two ways: (1) real-world parallels are beyond typical
human experience and comprehension; (2) faults that are not recognized, such as
Byzantine faults, are incorrectly assumed to occur with zero or very low probability.
The lack of recognition causes additional issues in that it allows the manifestation of such faults to pass unnoticed or be otherwise misclassified, reinforcing the miscon-
ception of low probability of occurrence.
1 Usually written as a failure rate of 10−9/hr |
Using-games-to-understand-the-mind-2.pdf | Using Games to Understand the Mind
Kelsey Allen1,+, Franziska Brändle2,+, Matthew Botvinick1, Judith E. Fan3, Samuel J.
Gershman4, Alison Gopnik5, Thomas L. Griffiths6, Joshua K. Hartshorne7, Tobias U.
Hauser8,9,12, Mark K. Ho6, Joshua R. de Leeuw10, Wei Ji Ma11, Kou Murayama12, Jonathan
D. Nelson13, Bas van Opheusden6, Thomas Pouncy4, Janet Rafner14, Iyad Rahwan15, Robb
B. Rutledge16, Jacob Sherson14, Özgür Şimşek17, Hugo Spiers8, Christopher
Summerfield18, Mirko Thalmann2, Natalia Vélez4, Andrew J. Watrous19, Joshua B.
Tenenbaum20, and Eric Schulz2,*
1DeepMind, London, UK
2Max Planck Institute for Biological Cybernetics, Tübingen, Germany
3Stanford University, Stanford, USA
4Harvard University, Cambridge, USA
5University of California, Berkeley, Berkeley, USA
6Princeton University, Princeton, USA
7Boston College, Boston, USA
8University College London, London, United Kingdom
9Max Planck UCL Centre for Computational Psychiatry and Ageing Research
10Vassar College, Poughkeepsie, USA
11New York University, New York, USA
12University of Tübingen, Tübingen, Germany
13University of Surrey, Guildford, UK
14Aarhus University, Aarhus, Denmark
15Max Planck Institute for Human Development, Center for Humans & Machines, Berlin, Germany
16Yale University, New Haven, USA
17University of Bath, Bath, UK
18University of Oxford, Oxford, UK
19Baylor College of Medicine, Houston, USA
20Massachusetts Institute of Technology, Cambridge, USA
*Corresponding author: eric.schulz@tuebingen.mpg.de
+These authors contributed equally to this work.
ABSTRACT
Board, card, or video games have been played by virtually every individual in the world population, with both children and
adults participating. Games are popular because they are intuitive and fun. These distinctive qualities of games also make
them ideal as a platform for studying the mind. By being intuitive, games provide a unique vantage point for understanding the
inductive biases that support behavior in more complex, ecological settings than traditional lab experiments. By being fun,
games allow researchers to study new questions in cognition such as the meaning of “play” and intrinsic motivation, while also
supporting more extensive and diverse data collection by attracting many more participants. We describe both the advantages
and drawbacks of using games relative to standard lab-based experiments and lay out a set of recommendations on how to
gain the most from using games to study cognition. We hope this article will lead to a wider use of games as experimental
paradigms, elevating the ecological validity, scale, and robustness of research on the mind.
Introduction
Progress in psychological and cognitive science has been driven by the development of carefully controllable, simple, experi-
mental paradigms that have been reused across many studies. While this approach permits precise statistical and computational
modeling, it also restricts the set of answerable questions. Games present a complementary route to expand the repertoire of
classic psychological tasks (1) to verify that psychological theories that have been developed in simple paradigms can explain
people’s behavior in more ecological settings, and (2) to ask and answer new questions about the mind, such as the form of
inductive biases that support complex action, or what cognitive mechanisms support the intrinsic motivation which compels |
2002.05709.pdf | A Simple Framework for Contrastive Learning of Visual Representations
Ting Chen1, Simon Kornblith1, Mohammad Norouzi1, Geoffrey Hinton1
Abstract
This paper presents SimCLR: a simple framework
for contrastive learning of visual representations.
We simplify recently proposed contrastive self-
supervised learning algorithms without requiring
specialized architectures or a memory bank. In
order to understand what enables the contrastive
prediction tasks to learn useful representations,
we systematically study the major components of
our framework. We show that (1) composition of
data augmentations plays a critical role in defining
effective predictive tasks, (2) introducing a learn-
able nonlinear transformation between the repre-
sentation and the contrastive loss substantially im-
proves the quality of the learned representations,
and (3) contrastive learning benefits from larger
batch sizes and more training steps compared to
supervised learning. By combining these findings,
we are able to considerably outperform previous
methods for self-supervised and semi-supervised
learning on ImageNet. A linear classifier trained
on self-supervised representations learned by Sim-
CLR achieves 76.5% top-1 accuracy, which is a
7% relative improvement over previous state-of-
the-art, matching the performance of a supervised
ResNet-50. When fine-tuned on only 1% of the
labels, we achieve 85.8% top-5 accuracy, outper-
forming AlexNet with 100× fewer labels.1
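For concreteness, here is a minimal sketch of the normalized temperature-scaled cross-entropy (NT-Xent) contrastive objective over two augmented views of a batch; the temperature value and the random tensors standing in for encoder/projection-head outputs are illustrative, not the paper's training configuration.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (batch, dim) projections of two augmented views of the same images.
    Each sample's positive is its other view; the remaining 2N-2 samples act as negatives."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)             # (2N, dim)
    sim = z @ z.T / temperature                                    # pairwise cosine similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])         # index of positive
    return F.cross_entropy(sim, targets)

# Toy usage with random projections standing in for projection_head(encoder(augment(x))).
torch.manual_seed(0)
print(float(nt_xent(torch.randn(4, 16), torch.randn(4, 16))))
```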
1. Introduction
Learning effective visual representations without human
supervision is a long-standing problem. Most mainstream
approaches fall into one of two classes: generative or dis-
criminative. Generative approaches learn to generate or
otherwise model pixels in the input space (Hinton et al.,
2006; Kingma & Welling, 2013; Goodfellow et al., 2014).
1Google Research, Brain Team. Correspondence to: Ting Chen
<iamtingchen@google.com>.
Proceedings of the 37th International Conference on Machine
Learning, Vienna, Austria, PMLR 119, 2020. Copyright 2020 by
the author(s).
1Code available at https://github.com/google-research/simclr.
Figure 1. ImageNet Top-1 accuracy (%) of linear classifiers trained on representations learned with different self-supervised methods (pretrained on ImageNet), plotted against number of parameters (millions). Gray cross indicates supervised ResNet-50. Our method, SimCLR, is shown in bold.
However, pixel-level generation is computationally expen-
sive and may not be necessary for representation learning.
Discriminative approaches learn representations using objec-
tive functions similar to those used for supervised learning,
but train networks to perform pretext tasks where both the in-
puts and labels are derived from an unlabeled dataset. Many
such approaches have relied on heuristics to design pretext
tasks (Doersch et al., 2015; Zhang et al., 2016; Noroozi &
Favaro, 2016; Gidaris et al., 2018), which could limit the
generality of the learned representations. Discriminative
approaches based on contrastive learning in the latent space
have recently shown great promise, achieving state-of-the-
art results (Hadsell et al., 2006; Dosovitskiy et al., 2014;
Oord et al., 2018; Bachman et al., 2019).
In this work, we introduce a simple framework for con-
trastive learning of visual representations, which we call
SimCLR . Not only does SimCLR outperform previous work
(Figure 1), but it is also simpler, requiring neither special-
ized architectures (Bachman et al., 2019; Hénaff et al., 2019)
nor a memory bank (Wu et al., 2018; Tian et al., 2019; He
et al., 2019; Misra & van der Maaten, 2019).
In order to understand what enables good contrastive repre-
sentation learning, we systematically study the major com-
ponents of our framework and show that: |
2310.02984.pdf | Scaling Laws for Associative Memories
Vivien Cabannes (FAIR, Meta), Elvis Dohmatob (FAIR, Meta), Alberto Bietti (Flatiron Institute)
Abstract
Learning arguably involves the discovery and memorization of abstract rules. The aim of
this paper is to study associative memory mechanisms. Our model is based on high-dimensional
matrices consisting of outer products of embeddings, which relates to the inner layers of transformer
language models. We derive precise scaling laws with respect to sample size and parameter
size, and discuss the statistical efficiency of different estimators, including optimization-based
algorithms. We provide extensive numerical experiments to validate and interpret theoretical
results, including fine-grained visualizations of the stored memory associations.
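As a concrete picture of the model class being analyzed, here is a minimal sketch of storing and recalling input-to-output token associations in a d×d matrix built as a sum of outer products of random embeddings; the uniform storage weighting and the toy sizes below are assumptions, not the particular storage schemes compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
num_tokens, d = 50, 256

# Random (approximately orthogonal) embeddings for input and output tokens.
E = rng.normal(size=(num_tokens, d)) / np.sqrt(d)   # input embeddings
U = rng.normal(size=(num_tokens, d)) / np.sqrt(d)   # output embeddings

# Store associations i -> target[i] as a sum of outer products.
target = rng.integers(num_tokens, size=num_tokens)
W = np.zeros((d, d))
for i in range(num_tokens):
    W += np.outer(U[target[i]], E[i])

# Recall: apply W to each input embedding, decode against output embeddings.
scores = E @ W.T @ U.T                 # (num_tokens, num_tokens)
pred = np.argmax(scores, axis=1)
print(f"recall accuracy: {np.mean(pred == target):.2f}")
```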
1 Introduction
As the scale of large language models (LLMs) keeps increasing, scaling laws have become a crucial
tool to empirically assess and predict the behavior of these models when varying the number of
parameters and training data (Kaplan et al., 2020; Hoffmann et al., 2022). Despite their practical
impact, the underlying phenomena leading to such scaling laws remain poorly understood. A better
understanding of such phenomena could guide researchers towards improved models, algorithms,
and datasets which may lead to improved scaling laws.
Our study focuses on a simple model that aims to be representative of LLMs in two ways.
First, we focus on heavy-tailed data distributions over discrete tokens, a natural assumption for text
data (Piantadosi, 2014). Second, we consider associative memory models that store input-output
pairs through outer-products of finite-dimensional embeddings, and can be seen as a proxy of the
intermediate layers of transformers. Indeed, some transformer layers have been found to behave
as key-value memories (Geva et al., 2021; Meng et al., 2022), and more generally outer-product
associative memory matrices arise naturally from training dynamics on intermediate weights (Bietti
et al., 2023). Beyond simple associative recall, the combination of multiple such associative rules at
different layers may lead to certain circuits with rich “reasoning” behaviors based on context (Elhage
et al., 2021; Bietti et al., 2023; Michaud et al., 2023). For example, an intermediate layer input token
may encode for the topic “linux”, leading to an output token that will trigger a specific behavior in
the transformer’s following layers when processing the token “terminal”.
Our contributions are as follows:
•We provide precise statistical rates for outer-product memories with random embeddings, and
compare different memory storage schemes in the context of Zipf-distributed data.
•We compare theoretical schemes to the weights learned by various optimization algorithms used
in practice, and illustrate the role of different design choices with numerical experiments.
Related work. Associative memory models have a long history in the literature on neural computa-
tion (Steinbuch, 1961; Willshaw et al., 1969; Longuet-Higgins et al., 1970; Kohonen, 1972; Amari,
1972; Little, 1974; Hopfield, 1982; Smolensky, 1990; Schlag et al., 2021; Valle-Lisboa et al., 2023),
though the statistical insights we provide based on specific data distributions are new, to the best of
our knowledge. Memorization behaviors have drawn a lot of attention recently, and are believed to be
an important notion to understand the learning happening in deep neural network (e.g., Sukhbaatar
et al., 2019; Feldman, 2020; Feldman & Zhang, 2020; Geva et al., 2021; Wu et al., 2022). Building
on memorization and heavy-tailed discrete data, our model bears similarities to the ones of Hutter
|
2304.13731.pdf | Text-to-Audio Generation using Instruction-Tuned
LLM and Latent Diffusion Model
Deepanway Ghosal‡, Navonil Majumder‡, Ambuj Mehrish‡, Soujanya Poria‡
‡DeCLaRe Lab, Singapore University of Technology and Design, Singapore
deepanway_ghosal@mymail.sutd.edu.sg
{navonil_majumder,ambuj_mehrish,sporia}@sutd.edu.sg
GitHub: https://github.com/declare-lab/tango
Website: https://tango-web.github.io/
Abstract
The immense scale of the recent large language models (LLM) allows many in-
teresting properties, such as, instruction- and chain-of-thought-based fine-tuning,
that has significantly improved zero- and few-shot performance in many natu-
ral language processing (NLP) tasks. Inspired by such successes, we adopt such
an instruction-tuned LLM FLAN-T5 as the text encoder for text-to-audio (TTA)
generation—a task where the goal is to generate an audio from its textual de-
scription. The prior works on TTA either pre-trained a joint text-audio encoder
or used a non-instruction-tuned model, such as, T5. Consequently, our latent dif-
fusion model (LDM)-based approach (T ANGO ) outperforms the state-of-the-art
AudioLDM on most metrics and stays comparable on the rest on AudioCaps test
set, despite training the LDM on a 63 times smaller dataset and keeping the text
encoder frozen. This improvement might also be attributed to the adoption of au-
dio pressure level-based sound mixing for training set augmentation, whereas the
prior methods take a random mix.
Preprint. Under review. |
score-matching-sliced.pdf | Sliced Score Matching: A Scalable Approach to
Density and Score Estimation
Yang Song∗ (Stanford University), Sahaj Garg∗ (Stanford University), Jiaxin Shi (Tsinghua University), Stefano Ermon (Stanford University)
Abstract
Score matching is a popular method for esti-
mating unnormalized statistical models. How-
ever, it has been so far limited to simple, shal-
low models or low-dimensional data, due to
the difficulty of computing the Hessian of log-
density functions. We show this difficulty can
be mitigated by projecting the scores onto ran-
dom vectors before comparing them. This ob-
jective, called sliced score matching, only in-
volves Hessian-vector products, which can be
easily implemented using reverse-mode auto-
matic differentiation. Therefore, sliced score
matching is amenable to more complex models
and higher dimensional data compared to score
matching. Theoretically, we prove the consis-
tency and asymptotic normality of sliced score
matching estimators. Moreover, we demon-
strate that sliced score matching can be used
to learn deep score estimators for implicit dis-
tributions. In our experiments, we show sliced
score matching can learn deep energy-based
models effectively, and can produce accurate
score estimates for applications such as varia-
tional inference with implicit distributions and
training Wasserstein Auto-Encoders.
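A minimal sketch of the sliced objective follows, using one random projection direction per data point and a toy energy model; these choices, along with the quadratic demo model, are assumptions for illustration. The point of the sketch is that the Hessian term reduces to a Hessian-vector product obtained with ordinary reverse-mode autodiff.

```python
import torch

def sliced_score_matching_loss(log_density, x):
    """log_density: callable mapping (batch, dim) -> (batch,) unnormalized log-density.
    Returns E[ v^T (Jacobian of score) v + 0.5 * (v^T score)^2 ] with one random v per sample."""
    x = x.requires_grad_(True)
    v = torch.randn_like(x)
    score = torch.autograd.grad(log_density(x).sum(), x, create_graph=True)[0]  # grad of log p
    hvp = torch.autograd.grad((score * v).sum(), x, create_graph=True)[0]       # Hessian-vector product
    return ((v * hvp).sum(dim=1) + 0.5 * (score * v).sum(dim=1) ** 2).mean()

# Toy usage: a quadratic "energy model" with a learnable precision parameter.
torch.manual_seed(0)
log_a = torch.zeros(1, requires_grad=True)
log_density = lambda x: -0.5 * torch.exp(log_a) * (x ** 2).sum(dim=1)
loss = sliced_score_matching_loss(log_density, torch.randn(128, 2))
loss.backward()
print(float(loss), float(log_a.grad))
```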
1 INTRODUCTION
Score matching (Hyvärinen, 2005) is particularly suitable
for learning unnormalized statistical models, such as en-
ergy based ones. It is based on minimizing the distance be-
tween the derivatives of the log-density functions (a.k.a.,
score s) of the data and model distributions. Unlike maxi-
mum likelihood estimation (MLE), the objective of score
∗Joint first authors. Correspondence to Yang Song
<yangsong@cs.stanford.edu> and Stefano Ermon <er-
mon@cs.stanford.edu>.
matching only depends on the scores, which are oblivious
to the (usually) intractable partition functions. However,
score matching requires the computation of the diago-
nal elements of the Hessian of the model’s log-density
function. This Hessian trace computation is generally
expensive (Martens et al., 2012), requiring a number of
forward and backward propagations proportional to the
data dimension. This severely limits its applicability to
complex models parameterized by deep neural networks,
such as deep energy-based models (LeCun et al., 2006;
Wenliang et al., 2019).
Several approaches have been proposed to alleviate this
difficulty: Kingma & LeCun (2010) propose approximate
backpropagation for computing the trace of the Hessian;
Martens et al. (2012) develop curvature propagation, a
fast stochastic estimator for the trace in score matching;
and Vincent (2011) transforms score matching to a de-
noising problem which avoids second-order derivatives.
These methods have achieved some success, but may
suffer from one or more of the following problems: incon-
sistent parameter estimation, large estimation variance,
and cumbersome implementation.
To alleviate these problems, we propose sliced score
matching, a variant of score matching that can scale to
deep unnormalized models and high dimensional data.
The key intuition is that instead of directly matching
the high-dimensional scores, we match their projections
along random directions. Theoretically, we show that
under some regularity conditions, sliced score matching
is a well-defined statistical estimation criterion that yields
consistent and asymptotically normal parameter estimates.
Moreover, compared to the methods of Kingma & LeCun
(2010) and Martens et al. (2012), whose implementations
require customized backpropagation for deep networks,
sliced score matching only involves Hessian-vector prod-
ucts, thus can be easily and efficiently implemented in
frameworks such as TensorFlow (Abadi et al., 2016) and
PyTorch (Adam et al., 2017). |
2205.05638.pdf | Few-Shot Parameter-Efficient Fine-Tuning is Better
and Cheaper than In-Context Learning
Haokun Liu∗, Derek Tam∗, Mohammed Muqeeth∗
Jay Mohta, Tenghao Huang, Mohit Bansal, Colin Raffel
Department of Computer Science
University of North Carolina at Chapel Hill
{haokunl,dtredsox,muqeeth,craffel}@cs.unc.edu
Abstract
Few-shot in-context learning (ICL) enables pre-trained language models to per-
form a previously-unseen task without any gradient-based training by feeding a
small number of training examples as part of the input. ICL incurs substantial
computational, memory, and storage costs because it involves processing all of the
training examples every time a prediction is made. Parameter-efficient fine-tuning
(PEFT) (e.g. adapter modules, prompt tuning, sparse update methods, etc.) offers
an alternative paradigm where a small set of parameters are trained to enable a
model to perform the new task. In this paper, we rigorously compare few-shot
ICL and PEFT and demonstrate that the latter offers better accuracy as well as
dramatically lower computational costs. Along the way, we introduce a new PEFT
method called (IA)3 that scales activations by learned vectors, attaining stronger
performance while only introducing a relatively tiny amount of new parameters.
We also propose a simple recipe based on the T0 model [ 1] called T-Few that
can be applied to new tasks without task-specific tuning or modifications. We
validate the effectiveness of T-Few on completely unseen tasks by applying it to
the RAFT benchmark [ 2], attaining super-human performance for the first time
and outperforming the state-of-the-art by 6% absolute. All of the code used in our
experiments is publicly available.1
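To illustrate the shape of the method, here is a minimal sketch of (IA)3-style rescaling in which only a few learned vectors per block are trained while the pretrained weights stay frozen; the toy single-head block below, its dimensions, and the exact insertion points are simplified stand-ins rather than the paper's Transformer implementation.

```python
import torch
import torch.nn as nn

class IA3Block(nn.Module):
    """Toy block: frozen projections, with learned per-dimension scaling vectors
    l_k, l_v (on attention keys/values) and l_ff (on the hidden FFN activation)."""

    def __init__(self, d_model=64, d_ff=128):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(d_model, d_model, bias=False) for _ in range(3))
        self.ff_in = nn.Linear(d_model, d_ff, bias=False)
        self.ff_out = nn.Linear(d_ff, d_model, bias=False)
        for p in self.parameters():
            p.requires_grad = False                      # pretrained weights stay frozen
        self.l_k = nn.Parameter(torch.ones(d_model))     # the only trainable parameters
        self.l_v = nn.Parameter(torch.ones(d_model))
        self.l_ff = nn.Parameter(torch.ones(d_ff))

    def forward(self, x):                                # x: (batch, seq, d_model)
        q, k, v = self.q(x), self.k(x) * self.l_k, self.v(x) * self.l_v
        attn = torch.softmax(q @ k.transpose(-2, -1) / x.shape[-1] ** 0.5, dim=-1)
        h = attn @ v
        return self.ff_out(torch.relu(self.ff_in(h)) * self.l_ff)

block = IA3Block()
print(sum(p.numel() for p in block.parameters() if p.requires_grad), "trainable parameters")
print(block(torch.randn(2, 5, 64)).shape)
```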
1 Introduction
Pre-trained language models have become a cornerstone of natural language processing, thanks
to the fact that they can dramatically improve data efficiency on tasks of interest – i.e., using a
pre-trained language model for initialization often produces better results with less labeled data. A
historically common approach has been to use the pre-trained model’s parameters for initialization
before performing gradient-based fine-tuning on a downstream task of interest. While fine-tuning
has produced many state-of-the-art results [ 1], it results in a model that is specialized for a single
task with an entirely new set of parameter values, which can become impractical when fine-tuning a
model on many downstream tasks.
An alternative approach popularized by [ 3,4] isin-context learning (ICL), which induces a model
to perform a downstream task by inputting prompted examples. Few-shot prompting converts a
small collection of input-target pairs into (typically) human-understandable instructions and examples
[3,4], along with a single unlabeled example for which a prediction is desired. Notably, ICL requires
no gradient-based training and therefore allows a single model to immediately perform a wide variety
of tasks. Performing ICL therefore solely relies on the capabilities that a model learned during
pre-training. These characteristics have led to a great deal of recent interest in ICL methods [5–10].
∗Equal contribution.
1https://github.com/r-three/t-few
Preprint. Under review. |
2305.06983.pdf | Active Retrieval Augmented Generation
Zhengbao Jiang1∗, Frank F. Xu1∗, Luyu Gao1∗, Zhiqing Sun1∗, Qian Liu2
Jane Dwivedi-Yu3, Yiming Yang1, Jamie Callan1, Graham Neubig1
1Language Technologies Institute, Carnegie Mellon University
2Sea AI Lab 3Meta AI Research
{zhengbaj,fangzhex,luyug,zhiqings,gneubig}@cs.cmu.edu
Abstract
Despite the remarkable ability of large lan-
guage models (LMs) to comprehend and gen-
erate language, they have a tendency to hal-
lucinate and create factually inaccurate out-
put. Augmenting LMs by retrieving infor-
mation from external knowledge resources
is one promising solution. Most existing
retrieval-augmented LMs employ a retrieve-
and-generate setup that only retrieves informa-
tion once based on the input. This is lim-
iting, however, in more general scenarios in-
volving generation of long texts, where con-
tinually gathering information throughout the
generation process is essential. There have
been some past efforts to retrieve informa-
tion multiple times while generating outputs,
which mostly retrieve documents at fixed inter-
vals using the previous context as queries. In
this work, we provide a generalized view of
active retrieval augmented generation , meth-
ods that actively decide when and what to re-
trieve across the course of the generation. We
propose Forward-Looking Active REtrieval
augmented generation ( FLARE ), a generic
retrieval-augmented generation method which
iteratively uses a prediction of the upcoming
sentence to anticipate future content, which is
then utilized as a query to retrieve relevant doc-
uments to regenerate the sentence if it contains
low-confidence tokens. We test FLARE along
with baselines comprehensively over 4 long-
form knowledge-intensive generation tasks/-
datasets. FLARE achieves superior or compet-
itive performance on all tasks, demonstrating
the effectiveness of our method.1
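To convey the control flow, here is a minimal sketch of the forward-looking loop: draft the next sentence, and if it contains low-confidence tokens, retrieve with that draft as the query and regenerate. The generator, retriever, confidence threshold, and stopping rule below are placeholder assumptions; the query-formulation variants described in the paper are not reproduced.

```python
def flare_generate(question, generate, retrieve, max_sentences=10, threshold=0.6):
    """generate(prompt) -> (sentence, min_token_prob); retrieve(query) -> list of docs.
    Each step drafts the upcoming sentence; low confidence triggers retrieval and regeneration."""
    answer, docs = "", []
    for _ in range(max_sentences):
        draft, confidence = generate(f"{docs}\n{question}\n{answer}")
        if confidence < threshold:                      # draft contains low-confidence tokens
            docs = retrieve(draft)                      # use the anticipated sentence as the query
            draft, _ = generate(f"{docs}\n{question}\n{answer}")
        if not draft:
            break
        answer += draft + " "
    return answer.strip()

# Toy usage with stub components.
script = iter([("Joe Biden attended the University of Delaware.", 0.4),
               ("Joe Biden attended the University of Delaware.", 0.9),
               ("", 1.0)])
print(flare_generate("Tell me about Joe Biden.",
                     generate=lambda prompt: next(script),
                     retrieve=lambda q: ["[doc about Joe Biden's education]"]))
```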
1 Introduction
Generative language models (LMs) (Brown et al.,
2020; Ouyang et al., 2022; OpenAI, 2023; Chowd-
hery et al., 2022; Zhang et al., 2022; Touvron et al.,
2023) have become a foundational component in
∗Lead contributors.
1Code and datasets are available at https://github.com/
jzbjyb/FLARE .many natural language processing (NLP) systems
with their remarkable ability to comprehend and
generate language. Although LMs have memorized
some amount of world knowledge observed during
training (Petroni et al., 2019; Roberts et al., 2020;
Jiang et al., 2020), they still tend to hallucinate
and create imaginary content (Maynez et al., 2020;
Zhou et al., 2021; OpenAI, 2023). To address the
issue of hallucination, one promising direction is to
augment generation with retrieval, which involves
augmenting parametric LMs with non-parametric
retrieval components that can look up relevant in-
formation from external knowledge resources such
as document corpora (Lewis et al., 2020; Izacard
and Grave, 2021; Khandelwal et al., 2020; Izacard
et al., 2022; Jiang et al., 2022; Shi et al., 2023).
Retrieval-augmented LMs commonly use a
retrieve-and-generate setup where they retrieve doc-
uments based on the user’s input (e.g. questions
in question answering), and then generate a com-
plete answer conditioning on the retrieved docu-
ments (Lewis et al., 2020; Izacard and Grave, 2021;
Izacard et al., 2022; Jiang et al., 2022; Shi et al.,
2023). These single-time retrieval-augmented LMs
have been found to outperform purely paramet-
ric LMs, particularly for short-form knowledge-
intensive generation tasks such as factoid QA
(Kwiatkowski et al., 2019; Joshi et al., 2017) and
fact checking (Thorne et al., 2018), where the in-
formation needs are clear in the user’s input, and
it is sufficient to retrieve relevant knowledge once
solely based on the input .
In recent years, increasingly powerful large LMs
have demonstrated abilities in more complex tasks
that involve generating long-form output, such as
long-form QA (Fan et al., 2019; Stelmakh et al.,
2022), open-domain summarization (Cohen et al.,
2021; Hayashi et al., 2021; Giorgi et al., 2022),
and (chain-of-thought; CoT) reasoning (Wei et al.,
2022; Ho et al., 2020; Geva et al., 2021; Hendrycks
et al., 2020). In contrast to short-form generation, |
2307.04721.pdf | Large Language Models as General Pattern Machines
Suvir Mirchandani1, Fei Xia2, Pete Florence2, Brian Ichter2, Danny Driess2 3,
Montserrat Gonzalez Arenas2, Kanishka Rao2, Dorsa Sadigh1 2, Andy Zeng2
1Stanford University,2Google DeepMind,3TU Berlin
https://general-pattern-machines.github.io
Abstract: We observe that pre-trained large language models (LLMs) are capable of au-
toregressively completing complex token sequences – from arbitrary ones procedurally
generated by probabilistic context-free grammars (PCFG), to more rich spatial patterns
found in the Abstract Reasoning Corpus (ARC), a general AI benchmark, prompted
in the style of ASCII art. Surprisingly, pattern completion proficiency can be partially
retained even when the sequences are expressed using tokens randomly sampled from
the vocabulary. These results suggest that without any additional training, LLMs can
serve as general sequence modelers, driven by in-context learning. In this work, we
investigate how these zero-shot capabilities may be applied to problems in robotics
– from extrapolating sequences of numbers that represent states over time to complete
simple motions, to least-to-most prompting of reward-conditioned trajectories that can
discover and represent closed-loop policies (e.g., a stabilizing controller for CartPole).
While difficult to deploy today for real systems due to latency, context size limitations,
and compute costs, the approach of using LLMs to drive low-level control may provide
an exciting glimpse into how the patterns among words could be transferred to actions.
Keywords: large language models, in-context learning, language for robotics
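To show what prompting "in the style of ASCII art" amounts to, here is a minimal sketch of flattening input-output grid pairs into a few-shot text prompt for next-token completion; the delimiter choices and the symbol-remapping step are illustrative assumptions, not the exact serialization used in the paper.

```python
def grid_to_text(grid):
    """Serialize a 2D grid of symbols into comma-separated rows."""
    return "\n".join(", ".join(str(cell) for cell in row) for row in grid)

def make_prompt(train_pairs, test_input, mapping=None):
    """Optionally remap symbols (e.g. digits -> arbitrary tokens) and build a
    few-shot completion prompt that ends right before the test output."""
    remap = lambda g: [[mapping.get(c, c) if mapping else c for c in row] for row in g]
    parts = [f"input:\n{grid_to_text(remap(i))}\noutput:\n{grid_to_text(remap(o))}"
             for i, o in train_pairs]
    parts.append(f"input:\n{grid_to_text(remap(test_input))}\noutput:\n")
    return "\n\n".join(parts)

# Toy usage: a "mirror the row" pattern, with digits remapped to arbitrary tokens.
train = [([[1, 2]], [[2, 1]]), ([[3, 4]], [[4, 3]])]
print(make_prompt(train, [[5, 6]], mapping={1: "@", 2: "#", 3: "B", 4: "&", 5: "x", 6: "y"}))
```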
1 Introduction
Large language models (LLMs) are trained to absorb the myriad of patterns that are woven into the structure
of language. They not only exhibit various out-of-the-box capabilities such as generating chains of reasoning
[1,2], solving logic problems [ 3,4], and completing math puzzles [ 5], but also have been applied in robotics
where they can serve as high-level planners for instruction following tasks [ 6,7,8,9,10,11,12], synthesize
programs representing robot policies [ 13,14], design reward functions [ 15,16], and generalize user prefer-
ences [ 17]. These settings rely on the few-shot in-context examples in text prompts that specify the domain
and input-output format for their tasks [18, 19], and remain highly semantic in their inputs and outputs.
Fig. 1: LLMs out-of-the-box can complete (highlighted) complex ARC patterns [20] expressed in arbitrary tokens.
A key observation of our work – and perhaps contrary to the predominant
intuition – is that an LLM’s ability to represent, manipulate, and extrapolate
more abstract, nonlinguistic patterns may allow them to serve as basic versions
ofgeneral pattern machines . To illustrate this idea, consider the Abstract
Reasoning Corpus [ 20], a general AI benchmark that contains collections of
2D grids with patterns that evoke abstract concepts (e.g., infilling, counting,
and rotating shapes). Each problem provides a small number of input-output
examples, followed by test input(s) for which the objective is to predict
the corresponding output. Most methods (based on program synthesis) are
manually engineered with domain-specific languages [ 21,22,23,24] or
evaluated on simplified extensions or subsets of the benchmark [ 25,26,27].
End-to-end machine learning methods only solve a handful of test problems
[28]; however, our experiments indicate that LLMs in-context prompted in
the style of ASCII art (see Fig. 1) can correctly predict solutions for up to 85
(out of 800) problems – exceeding some of the best performing methods to date [ 21,22,24], without
additional model training or fine-tuning. Surprisingly, we find this extends beyond ASCII numbers, and
Preprint. |
1806.09729.pdf | A Universal Training Algorithm for Quantum Deep Learning
Guillaume Verdon,1, 2, 4Jason Pye,1, 2, 4and Michael Broughton3
1Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada
2Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada
3School of Computer Science, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada
4Perimeter Institute for Theoretical Physics, Waterloo, Ontario, N2L 2Y5, Canada
(Dated: June 27, 2018)
We introduce the Backwards Quantum Propagation of Phase errors (Baqprop) principle, a cen-
tral theme upon which we construct multiple universal optimization heuristics for training both
parametrized quantum circuits and classical deep neural networks on a quantum computer. Baqprop
encodes error information in relative phases of a quantum wavefunction defined over the space of net-
work parameters; it can be thought of as the unification of the phase kickback principle of quantum
computation and of the backpropagation algorithm from classical deep learning. We propose two
core heuristics which leverage Baqprop for quantum-enhanced optimization of network parameters:
Quantum Dynamical Descent (QDD) and Momentum Measurement Gradient Descent (MoMGrad).
QDD uses simulated quantum coherent dynamics for parameter optimization, allowing for quan-
tum tunneling through the hypothesis space landscape. MoMGrad leverages Baqprop to estimate
gradients and thereby perform gradient descent on the parameter landscape; it can be thought of
as the quantum-classical analogue of QDD. In addition to these core optimization strategies, we
propose various methods for parallelization, regularization, and meta-learning as augmentations to
MoMGrad and QDD. We introduce several quantum-coherent adaptations of canonical classical
feedforward neural networks, and study how Baqprop can be used to optimize such networks. We
develop multiple applications of parametric circuit learning for quantum data, and show how to per-
form Baqprop in each case. One such application allows for the training of hybrid quantum-classical
neural-circuit networks, via the seamless integration of Baqprop with classical backpropagation.
Finally, for a representative subset of these proposed applications, we demonstrate the training of
these networks via numerical simulations of implementations of QDD and MoMGrad.
CONTENTS
I. Introduction 2
II. Background 5
A. Continuous Quantum Registers 5
B. Discrete Simulation of Continuous Quantum
Registers 6
1. Quantum Phase Estimation 8
C. Quantum Phase Kickback 8
1. Quantum Gradients 10
III. Quantum Parametric Optimization 10
A. Basic Principles 10
1. Quantum Feedforward and Baqprop 10
2. Full-batch Effective Phase Kicks 12
3. Effective Forces 14
B. Quantum Dynamical Descent 15
1. Core Algorithm 15
2. Heisenberg Picture Update rule 17
3. Connections to QAOA 17
4. Adiabatic Limit 18
C. Momentum Measurement Gradient
Descent 20
D. Phase Space Visualization 22
IV. Further Quantum Descent Methods 23
A. Batching & Parallelization 23
1. Quantum Stochastic Descent 23
2. Sequential Mini-Batching 243. Coherently Accumulating Momentum
Parallelization 25
4. Quantum Random Access Memory
Mini-batching 27
B. Discrete Parametric Optimization 28
1. Kicking Hybrid Discrete-Continuous
Parameters 28
2. Continuous-Discrete Hybrid QDD 29
3. Continuous-Discrete Hybrid Momentum
Measurement Gradient Descent 30
4. Continuum-Embedded Discrete
Optimization 30
5. Estimating Continuum Gradients with
Single Qubits 31
C. Regularization & Variants 32
1. Parameter/Weight Decay 32
2. Meta-networked Interacting Swarm
Optimization 32
3. Dropout 34
D. Quantum Meta-Learning 36
1. Overview 36
2. Quantum hyper-parameter Descent 37
3. Network Architecture Optimization 40
V. Quantum Neural Network Learning 41
A. Quantum-Coherent Neural Networks 41
1. Classical-to-Quantum Computational
Embedding 41
2. Classical Data Phase Kicking 42
3. Abstract Quantum Neuron 43 |
10.1101.2021.07.09.450648.pdf | Language models enable zero-shot prediction of the
effects of mutations on protein function
Joshua Meier1,2, Roshan Rao3, Robert Verkuil1, Jason Liu1,
Tom Sercu1, Alexander Rives1,2
Abstract
Modeling the effect of sequence variation on function is a fundamental problem
for understanding and designing proteins. Since evolution encodes information
about function into patterns in protein sequences, unsupervised models of variant
effects can be learned from sequence data. The approach to date has been to fit
a model to a family of related sequences. The conventional setting is limited,
since a new model must be trained for each prediction task. We show that using
only zero-shot inference, without any supervision from experimental data or addi-
tional training, protein language models capture the functional effects of sequence
variation, performing at state-of-the-art.
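As a concrete form of the zero-shot inference being described, here is a minimal sketch of scoring a single point mutation with a masked protein language model via a log-odds ratio of mutant versus wild-type residue probabilities; the model call is a placeholder stub, and this is one common way to realize such scoring rather than necessarily the exact protocol of the paper.

```python
import math

def mutation_score(masked_probs, sequence, position, mutant_aa):
    """masked_probs(sequence, position) -> dict mapping amino acids to the model's
    probability at `position` when that position is masked. The score is the
    log-odds of the mutant vs. the wild-type residue (higher = more favourable)."""
    probs = masked_probs(sequence, position)
    wild_type_aa = sequence[position]
    return math.log(probs[mutant_aa]) - math.log(probs[wild_type_aa])

# Toy usage with a stub model that mildly prefers the wild-type residue.
def stub_masked_probs(sequence, position):
    alphabet = "ACDEFGHIKLMNPQRSTVWY"
    probs = {aa: 1.0 / len(alphabet) for aa in alphabet}
    probs[sequence[position]] *= 2.0
    total = sum(probs.values())
    return {aa: p / total for aa, p in probs.items()}

print(mutation_score(stub_masked_probs, "MKTAYIAKQR", position=3, mutant_aa="W"))
```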
1 Introduction
Proteins have a myriad of diverse functions that underlie the complexity of life. Protein sequences
encode function via structure through the spontaneous folding of the sequence into the three dimen-
sional structure of the protein [ 1]. The effects of sequence mutations on function form a landscape
that reveals how function constrains sequence. Alterations at some sites in a protein sequence cannot
be tolerated because they are essential to the protein’s function. Other sites evolve together because
the structure and function is determined by them collectively. Mutations can enhance the activity of a
protein, attenuate it, or leave it unchanged.
The functional effect of sequence variations can be measured through deep mutational scanning
experiments [ 2]. Consisting of thousands to hundreds of thousands of measurements of protein
function, deep mutational scans give insight into the intrinsic constraints on a protein’s structure and
function. Due to the cost and difficulty of implementing such experiments, compilations of deep
mutational scanning data include experiments on a few dozens of proteins at most, relative to the tens
of thousands of proteins encoded in the human genome, and the millions more across the tree of life
that we would like to understand.
A model that learns the landscape linking sequence to function can provide insight into function
without having to do experiments. Unsupervised models of mutational effects can be learned from
sequences [ 3,4]. Statistical patterns in a family of evolutionarily related protein sequences contain
information about structure and function [ 5–7]. This is because the properties of a protein act as
constraints on the selection of sequences through evolution [8].
In the natural language modeling community, there has been interest in zero-shot transfer of models
to new tasks. Massive language models can solve tasks they haven’t been directly trained on [ 9–11].
Recently protein language models have achieved state-of-the-art in various structure prediction tasks
[12–14]. Work to date has mainly focused on transfer in the classical representation learning setting,
using pre-trained features with supervision on the downstream task.
1Facebook AI Research 2New York University 3UC Berkeley. ESM-1v is available at <https://github.com/facebookresearch/esm>. Correspondence to: Alexander Rives <arives@fb.com>.
35th Conference on Neural Information Processing Systems (NeurIPS 2021), Sydney, Australia. |
10.1101.2023.09.11.556673.pdf | Protein generation with evolutionary diffusion:
sequence is all you need
Sarah Alamdari1, Nitya Thakkar2,†, Rianne van den Berg3,
Alex X. Lu1, Nicolo Fusi1, Ava P. Amini1, Kevin K. Yang1,*
1Microsoft Research, Cambridge, MA, USA
2Brown University, Providence, RI, USA
3Microsoft Research AI4Science, Amsterdam, Netherlands
†Work done principally during an internship at Microsoft Research
*To whom correspondence should be addressed; E-mail: yang.kevin@microsoft.com.
|
2301.08243.pdf | Self-Supervised Learning from Images with a
Joint-Embedding Predictive Architecture
Mahmoud Assran1,2,3*, Quentin Duval1, Ishan Misra1, Piotr Bojanowski1,
Pascal Vincent1, Michael Rabbat1,3, Yann LeCun1,4, Nicolas Ballas1
1Meta AI (FAIR) 2McGill University 3Mila, Quebec AI Institute 4New York University
Abstract
This paper demonstrates an approach for learning
highly semantic image representations without relying on
hand-crafted data-augmentations. We introduce the Image-
based Joint-Embedding Predictive Architecture (I-JEPA), a
non-generative approach for self-supervised learning from
images. The idea behind I-JEPA is simple: from a single
context block, predict the representations of various target
blocks in the same image. A core design choice to guide
I-JEPA towards producing semantic representations is the
masking strategy; specifically, it is crucial to (a) sample tar-
get blocks with sufficiently large scale (semantic), and to (b)
use a sufficiently informative (spatially distributed) context
block. Empirically, when combined with Vision Transform-
ers, we find I-JEPA to be highly scalable. For instance, we
train a ViT-Huge/14 on ImageNet using 16 A100 GPUs in
under 72 hours to achieve strong downstream performance
across a wide range of tasks, from linear classification to
object counting and depth prediction.
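To make the prediction task concrete, here is a minimal sketch of the context-to-target step on patch embeddings: predict the representations of target blocks from a context block, with the loss taken in representation space rather than pixel space. The linear stand-ins for the encoders and predictor, the fixed block indices, and the crude pooling are simplifications assumed for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

def ijepa_step(patches, context_idx, target_idx, context_enc, target_enc, predictor):
    """patches: (batch, num_patches, dim) patch embeddings of one image batch.
    Predict target-block representations from the context block; no pixel reconstruction."""
    ctx = context_enc(patches[:, context_idx])                # encode the visible context block
    with torch.no_grad():                                     # target encoder is not trained here
        targets = target_enc(patches)[:, target_idx]          # representations to predict
    preds = predictor(ctx).mean(dim=1, keepdim=True).expand(-1, len(target_idx), -1)
    return ((preds - targets) ** 2).mean()                    # L2 loss in latent space

# Toy usage: linear stand-ins for the context encoder, target encoder, and predictor.
torch.manual_seed(0)
dim = 32
context_enc, target_enc, predictor = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)
patches = torch.randn(2, 16, dim)
loss = ijepa_step(patches, context_idx=[0, 1, 4, 5], target_idx=[10, 11],
                  context_enc=context_enc, target_enc=target_enc, predictor=predictor)
loss.backward()
print(float(loss))
```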
1. Introduction
In computer vision, there are two common families
of approaches for self-supervised learning from images:
invariance-based methods [1,4,10,17,18,24,35,37,74] and
generative methods [8, 28, 36, 57].
Invariance-based pretraining methods optimize an en-
coder to produce similar embeddings for two or more views
of the same image [15, 20], with image views typically
constructed using a set of hand-crafted data augmentations,
such as random scaling, cropping, and color jittering [20],
amongst others [35]. These pretraining methods can pro-
duce representations of a high semantic level [4, 18], but
they also introduce strong biases that may be detrimental
for certain downstream tasks or even for pretraining tasks
with different data distributions [2]. Often, it is unclear
*massran@meta.com
Figure 1. ImageNet Linear Evaluation: top-1 accuracy (%) versus pretraining GPU hours. The I-JEPA method learns semantic image representations without using any view data augmentations during pretraining. By predicting in representation space, I-JEPA produces semantic representations while using less compute than previous methods.
how to generalize these biases for tasks requiring differ-
ent levels of abstraction. For example, image classification
and instance segmentation do not require the same invari-
ances [11]. Additionally, it is not straightforward to gen-
eralize these image-specific augmentations to other modal-
ities such as audio.
Cognitive learning theories have suggested that a driv-
ing mechanism behind representation learning in biologi-
cal systems is the adaptation of an internal model to pre-
dict sensory input responses [31, 59]. This idea is at the
core of self-supervised generative methods, which remove
or corrupt portions of the input and learn to predict the cor-
rupted content [9, 36, 57, 67, 68, 71]. In particular, mask-
denoising approaches learn representations by reconstruct-
ing randomly masked patches from an input, either at the
pixel or token level. Masked pretraining tasks require less
prior knowledge than view-invariance approaches and eas-
ily generalize beyond the image modality [8]. However, the |
2305.13141.pdf | arXiv:2305.13141v3 [cs.LG] 6 Nov 2023Tight conditions for when the NTK approximation is valid
Enric Boix-Adserà
MIT, Apple
eboix@mit.edu
Etai Littwin
Apple
elittwin@apple.com
November 7, 2023
Abstract
We study when the neural tangent kernel (NTK) approximation is valid for training a model
with the square loss. In the lazy training setting of [ 21], we show that rescaling the model by a
factor of α=O(T) suffices for the NTK approximation to be valid until training time T. Our
bound is tight and improves on the previous bound of [ 21], which required a larger rescaling
factor of α=O(T^2).
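To fix notation before the objective is introduced, the following display is a sketch of the standard objects involved; the square-loss form and the linearization convention are standard lazy-training conventions assumed here, not equations quoted from this paper.

```latex
% Rescaled model alpha*h trained by gradient flow on the square loss,
% versus its linearization at the initial weights w_0 (the "NTK approximation"):
\[
  L_\alpha(w) \;=\; \tfrac{1}{2}\,\bigl\| \alpha\,h(w) - y \bigr\|^{2},
  \qquad
  \bar h(w) \;=\; h(w_0) + Dh(w_0)\,[\,w - w_0\,].
\]
% The approximation is "valid until time T" when the gradient-flow trajectories of
% alpha*h and of its linearization alpha*\bar h stay close on [0, T]; the abstract's claim
% is that alpha = O(T) suffices, improving on the earlier alpha = O(T^2) requirement.
% (Some treatments also normalize the loss by 1/alpha^2; that choice is a convention.)
```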
1 Introduction
In the modern machine learning paradigm, practitioners train the weights w of a large neural
network model f_w : R^{d_in} → R^{d_out} via a gradient-based optimizer. Theoretical understanding lags
behind, since the training dynamics are non-linear and hence difficult to analyze. To address this,
[29] proposed an approximation to the dynamics called the NTK approximation, and proved it was
valid for infinitely-wide networks trained by gradient descent1. The NTK approximation has been
extremely influential, leading to theoretical explanations for a range of questions, including why
deep learning can memorize training data [25, 24, 4, 5, 7, 16, 30], why neural networks exhibit
spectral bias [18, 12, 15], and why different architectures generalize differently [13, 35, 42]. Nev-
ertheless, in practice the training dynamics of neural networks often diverge from the predictions
of the NTK approximation (see, e.g., [8]). Therefore, it is of interest to understand exactly under
which conditions the NTK approximation holds. In this paper, we ask the following question:
Can we give tight conditions for when the NTK approximation is valid?
1.1 The “lazy training” setting of [ 21]
The work of [21] showed that the NTK approximation actually holds for training any differentiable
model, as long as the model's outputs are rescaled so that the model's outputs change by a large
amount even when the weights change by a small amount. The correctness of the NTK approxi-
mation for infinite-width models is a consequence of this observation, because by default the
model is rescaled as the width tends to infinity; see the related work in Section 1.3 for more details.
Rescaling the model. Let h : R^p → F be a smoothly-parameterized model, where F is a
separable Hilbert space. Let α > 0 be a parameter which controls the rescaling of the model and
which should be thought of as large. We train the rescaled model αh with gradient flow to minimize
1Under a specific scaling of the initialization and learning rate as width tends to infinity.
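A minimal numerical sketch of the quantity at stake: the NTK approximation replaces a model by its first-order Taylor expansion in the weights around initialization, and the expansion stays accurate exactly when the weights barely move (which is what rescaling by a large α enforces). The toy model and perturbation sizes below are illustrative, not from the paper:

    # Compare a toy model h(w) to its linearization around w0; the gap shrinks
    # quadratically as the weight movement shrinks.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(20, 5))

    def h(w):                       # toy differentiable model h: R^5 -> R^20
        return np.tanh(X @ w)

    def jacobian(w):                # dh/dw evaluated at w
        return (1.0 - np.tanh(X @ w) ** 2)[:, None] * X

    w0 = rng.normal(size=5)
    J0 = jacobian(w0)

    for step in [0.3, 0.1, 0.03]:                   # smaller and smaller weight movement
        w = w0 + step * rng.normal(size=5)
        h_lin = h(w0) + J0 @ (w - w0)               # NTK / lazy-training surrogate
        gap = np.linalg.norm(h(w) - h_lin)
        print(f"||w - w0|| = {np.linalg.norm(w - w0):.3f}, linearization gap = {gap:.4f}")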
1 |
2307.05628.pdf | DNAGPT: A Generalized Pre-trained Tool for
Versatile DNA Sequence Analysis Tasks
Daoan Zhang1,2,4, Weitong Zhang2,3, Yu Zhao2, Jianguo Zhang1*,
Bing He2*, Chenchen Qin2*, Jianhua Yao2*
1Southern University of Science and Technology.
2Tencent AI Lab, Shenzhen, China.
3City University of Hong Kong.
4University of Rochester.
*Corresponding author(s). E-mail(s): zhangjg@sustech.edu.cn;
owenbhe@tencent.com; chenchenqin@tencent.com;
jianhuayao@tencent.com;
Contributing authors: daoan.zhang@rochester.edu;
weitzhang6-c@my.cityu.edu.hk; louisyuzhao@tencent.com;
Abstract
Pre-trained large language models demonstrate potential in extracting informa-
tion from DNA sequences, yet adapting to a variety of tasks and data modalities
remains a challenge. To address this, we propose DNAGPT, a generalized DNA
pre-training model trained on over 200 billion base pairs from all mammals. By
enhancing the classic GPT model with a binary classification task (DNA sequence
order), a numerical regression task (guanine-cytosine content prediction), and
a comprehensive token language, DNAGPT can handle versatile DNA analy-
sis tasks while processing both sequence and numerical data. Our evaluation of
genomic signal and region recognition, mRNA abundance regression, and arti-
ficial genomes generation tasks demonstrates DNAGPT’s superior performance
compared to existing models designed for specific downstream tasks, benefiting
from pre-training using the newly designed model structure.
Keywords: DNA, Generative Pre-trained Transformer, DNAGPT, Sequence analysis,
Numerical analysis
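The guanine-cytosine (GC) content used as the numerical regression target in the abstract is simply the fraction of G and C bases in a sequence; a minimal sketch (DNAGPT's tokenization and model interface are not shown here):

    # GC content of a DNA sequence: fraction of G and C bases.
    def gc_content(seq: str) -> float:
        seq = seq.upper()
        return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

    print(gc_content("ATGCGCGTTA"))   # 0.5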
1arXiv:2307.05628v3 [q-bio.GN] 30 Aug 2023 |
0810.4752.pdf | Statistical Learning Theory: Models, Concepts, and Results
Ulrike von Luxburg
Max Planck Institute for Biological Cybernetics
Tübingen, Germany
ulrike.luxburg@tuebingen.mpg.de
Bernhard Schölkopf
Max Planck Institute for Biological Cybernetics
Tübingen, Germany
bernhard.schoelkopf@tuebingen.mpg.de
September 2008
1 Introduction
Statistical learning theory provides the theoretical basis for many of today’s machine learning al-
gorithms and is arguably one of the most beautifully developed branches of artificial intelligence in
general. It originated in Russia in the 1960s and gained wide popularity in the 1990s following the
development of the so-called Support Vector Machine (SVM) , which has become a standard tool
for pattern recognition in a variety of domains ranging from computer vision to computational
biology. Providing the basis of new learning algorithms, however, was not the only motivation
for developing statistical learning theory. It was just as much a philosophical one, attempting to
answer the question of what it is that allows us to draw valid conclusions from empirical data.
In this article we attempt to give a gentle, non-technical overview over the key ideas and insights
of statistical learning theory. We do not assume that the reader has a deep background in math-
ematics, statistics, or computer science. Given the nature of the subject matter, however, some
familiarity with mathematical concepts and notations and some intuitive understanding of basic
probability is required. There exist many excellent references to more technical surveys of the
mathematics of statistical learning theory: the monographs by one of the founders of statistical
learning theory (Vapnik, 1995, Vapnik, 1998), a brief overview over statistical learning theory
in Section 5 of Schölkopf and Smola (2002), more technical overview papers such as Bousquet
et al. (2003), Mendelson (2003), Boucheron et al. (2005), Herbrich and Williamson (2002), and the
monograph Devroye et al. (1996).
2 The standard framework of statistical learning theory
2.1 Background
In our context, learning refers to the process of inferring general rules by observing examples.
Many living organisms show some ability to learn. For instance, children can learn what “a car”
is, just by being shown examples of objects that are cars and objects that are not cars. They do
not need to be told any rules about what is it that makes an object a car, they can simply learn
the concept “car” by observing examples.
The field of machine learning does not study the process of learning in living organisms, but instead
studies the process of learning in the abstract. The question is how a machine, a computer, can
“learn” specific tasks by following specified learning algorithms. To this end, the machine is shown
particular examples of a specific task. Its goal is then to infer a general rule which can both explain
the examples it has seen already and which can generalize to previously unseen, new examples.
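A toy illustration of this setup, with a simple nearest-neighbour rule as a stand-in learner: fit a rule on observed (example, label) pairs and check whether it generalizes to examples it has not seen:

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)      # the unknown "true" concept

    X_train, y_train = X[:150], y[:150]          # observed examples
    X_test, y_test = X[150:], y[150:]            # previously unseen examples

    def predict(x):                               # 1-nearest-neighbour rule
        return y_train[np.argmin(np.linalg.norm(X_train - x, axis=1))]

    test_acc = np.mean([predict(x) == t for x, t in zip(X_test, y_test)])
    print(f"accuracy on unseen examples: {test_acc:.2f}")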
1arXiv:0810.4752v1 [stat.ML] 27 Oct 2008 |
2305.17333.pdf | Fine-Tuning Language Models with Just
Forward Passes
Sadhika Malladi∗Tianyu Gao∗Eshaan Nichani Alex Damian
Jason D. Lee Danqi Chen Sanjeev Arora
Princeton University
{smalladi, tianyug, eshnich, ad27, jasonlee, danqic, arora}@princeton.edu
Abstract
Fine-tuning language models (LMs) has yielded success on diverse downstream
tasks, but as LMs grow in size, backpropagation requires a prohibitively large
amount of memory. Zeroth-order (ZO) methods can in principle estimate gradients
using only two forward passes but are theorized to be catastrophically slow for
optimizing large models. In this work, we propose a memory-efficient zeroth-
order optimizer ( MeZO ), adapting the classical ZO-SGD method to operate in-
place, thereby fine-tuning LMs with the same memory footprint as inference . For
example, with a single A100 80GB GPU, MeZO can train a 30-billion parameter
model, whereas fine-tuning with backpropagation can train only a 2.7B LM with
the same budget. We conduct comprehensive experiments across model types
(masked and autoregressive LMs), model scales (up to 66B), and downstream tasks
(classification, multiple-choice, and generation). Our results demonstrate that (1)
MeZO significantly outperforms in-context learning and linear probing; (2) MeZO
achieves comparable performance to fine-tuning with backpropagation across
multiple tasks, with up to 12 ×memory reduction; (3) MeZO is compatible with
both full-parameter and parameter-efficient tuning techniques such as LoRA and
prefix tuning; (4) MeZO can effectively optimize non-differentiable objectives (e.g.,
maximizing accuracy or F1). We support our empirical findings with theoretical
insights, highlighting how adequate pre-training and task prompts enable MeZO to
fine-tune huge models, despite classical ZO analyses suggesting otherwise.2
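A minimal sketch of the classical two-forward-pass (SPSA-style) zeroth-order estimator that MeZO adapts, on a toy quadratic objective; the in-place, memory-efficient variant and any LM code are not reproduced here:

    # Zeroth-order SGD: estimate the directional gradient from two forward
    # passes along a random direction z, then step along z.
    import numpy as np

    rng = np.random.default_rng(3)

    def loss(theta):                      # stand-in for a forward pass
        return float(np.sum((theta - 1.0) ** 2))

    theta = np.zeros(10)
    lr, eps = 0.05, 1e-3
    for step in range(200):
        z = rng.standard_normal(theta.shape)                              # perturbation direction
        g = (loss(theta + eps * z) - loss(theta - eps * z)) / (2 * eps)   # projected gradient
        theta -= lr * g * z                                               # ZO-SGD update
    print(f"final loss: {loss(theta):.4f}")   # small after optimization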
1 Introduction
Fine-tuning pre-trained language models (LMs) has been the dominant methodology for solving many
language tasks [ 27], adapting to specialized domains [ 40], or incorporating human instructions and
preferences [ 70]. However, as LMs are scaled up [ 12,69], computing gradients for backpropagation
requires prohibitive amounts of memory – in our test, up to 12× the memory required for inference
– because it needs to cache activations during the forward pass, gradients during the backward pass,
and, in the case of Adam [50], also store gradient history (see Section 3.4 for a detailed analysis).
As a result, while it is possible to run inference with a 30-billion (30B) parameter LM on a single
Nvidia A100 GPU (with 80GB memory), backpropagation with Adam is feasible only for a 2.7B LM.
Parameter-efficient fine-tuning methods (PEFT [ 44,55,52]) update just a fraction of the network
parameters, but still need to cache many activations, because the tuned parameters are scattered
∗Equal contribution and corresponding authors.
2Our code is available at https://github.com/princeton-nlp/MeZO .
Preprint. Under review.arXiv:2305.17333v1 [cs.LG] 27 May 2023 |
2206.05802.pdf | Self-critiquing models for assisting human evaluators
William Saunders∗Catherine Yeh∗Jeff Wu∗
Steven Bills Long Ouyang Jonathan Ward Jan Leike
OpenAI
Abstract
We fine-tune large language models to write natural language critiques (natural
language critical comments) using behavioral cloning. On a topic-based summariza-
tion task, critiques written by our models help humans find flaws in summaries that
they would have otherwise missed. Our models help find naturally occurring flaws
in both model and human written summaries, and intentional flaws in summaries
written by humans to be deliberately misleading. We study scaling properties of
critiquing with both topic-based summarization and synthetic tasks. Larger models
write more helpful critiques, and on most tasks, are better at self-critiquing, despite
having harder-to-critique outputs. Larger models can also integrate their own self-
critiques as feedback, refining their own summaries into better ones. Finally, we
motivate and introduce a framework for comparing critiquing ability to generation
and discrimination ability. Our measurements suggest that even large models may
still have relevant knowledge they cannot or do not articulate as critiques. These
results are a proof of concept for using AI-assisted human feedback to scale the
supervision of machine learning systems to tasks that are difficult for humans to
evaluate directly. We release our training datasets, as well as samples from our
critique assistance experiments.
1 Introduction
1.1 Motivation
With increasingly capable language models, it is important to ensure models are trustworthy on
difficult and high stakes tasks. For example, models are being used to write complex pieces of code
[CTJ+21,LCC+22] and answer open-ended questions about the world [ NHB+21,MTM+22]. We
would like to be able to train models that don’t write buggy code or spread misinformation.
However, fully evaluating correctness of code or veracity of facts about the world requires a lot of
effort and expertise. Techniques to train systems from human feedback [ NR+00,Wes16 ,CLB+17,
JMD20 ,NMS+21,SCC+22], fundamentally depend on humans’ ability to demonstrate and evaluate
the quality of model outputs. This leads to the problem of scalable oversight [ AOS+16]: How can
we effectively provide feedback to models on tasks that are difficult for humans to evaluate?
One idea to overcome this problem is to use AI systems to aid human evaluation. This basic idea
comes up in many prior proposals, such as iterated amplification [ CSA18 ], debate [ ICA18 ], and
recursive reward modeling [ LKE+18]. If we first train a model to perform simpler assistive tasks
that humans can evaluate, then we can use this model to assist humans with the evaluation of harder
tasks. A key assumption is that evaluating the assistance task is simpler than evaluating the "base"
∗Equal contribution. Correspondence to jeffwu@openai.comarXiv:2206.05802v2 [cs.CL] 14 Jun 2022 |
2311.08401.pdf | Fine-tuning Language Models for Factuality
Katherine Tian*†,Eric Mitchell*†,Huaxiu Yao†§,
Christopher D. Manning†,Chelsea Finn†
†Stanford University§UNC Chapel Hill
{kattian,eric.mitchell}@cs.stanford.edu
Abstract
The fluency and creativity of large pre-trained language models (LLMs) have led to their widespread use,
sometimes even as a replacement for traditional search engines. Yet language models are prone to making
convincing but factually inaccurate claims, often referred to as ‘hallucinations.’ These errors can inadver-
tently spread misinformation or harmfully perpetuate misconceptions. Further, manual fact-checking of
model responses is a time-consuming process, making human factuality labels expensive to acquire. In
this work, we fine-tune language models to be more factual, without human labeling and targeting more
open-ended generation settings than past work. We leverage two key recent innovations in NLP to do so.
First, several recent works have proposed methods for judging the factuality of open-ended text by measur-
ing consistency with an external knowledge base or simply a large model’s confidence scores. Second, the
direct preference optimization algorithm enables straightforward fine-tuning of language models on objec-
tives other than supervised imitation, using a preference ranking over possible model responses. We show
that learning from automatically generated factuality preference rankings, generated either through exist-
ing retrieval systems or our novel retrieval-free approach, significantly improves the factuality (percent of
generated claims that are correct) of Llama-2 on held-out topics compared with RLHF or decoding strate-
gies targeted at factuality. At 7B scale, compared to Llama-2-chat, we observe 58% and 40% reduction
in factual error rate when generating biographies and answering medical questions, respectively.
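The fine-tuning relies on the standard direct preference optimization (DPO) objective over factuality-ranked response pairs; a minimal sketch of that objective for a single pair, assuming summed token log-probabilities are available (the log-probability values below are made up for illustration, and this is not the authors' training code):

    # DPO loss for one preference pair (y_w preferred over y_l), given log-probs
    # under the policy and a frozen reference model.
    import math

    def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
        margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
        return -math.log(1.0 / (1.0 + math.exp(-margin)))   # -log sigmoid(margin)

    # the policy prefers the factual response slightly more than the reference does
    print(dpo_loss(logp_w=-12.0, logp_l=-15.0, ref_logp_w=-13.0, ref_logp_l=-14.0))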
1 Introduction
Recent developments in training large language models (LLMs), particularly methods that learn from rank-
ings over responses such as reinforcement learning from human feedback (RLHF) (Christiano et al., 2017;
Ziegler et al., 2020; Ouyang et al., 2022), have enabled the development of powerful, engaging dialogue
agents. State-of-the-art LLMs are pre-trained on a vast amount of knowledge in large datasets (Touvron
et al., 2023a;b) and further fine-tuned to apply this knowledge to follow diverse instructions or complete
more specific tasks (Chung et al., 2022; Chen et al., 2021). However, despite these large language models’
exposure to diverse datasets, they are prone to confidently generating incorrect claims. One recent study
shows that GPT-3.5 (ChatGPT) produces false citations more often than not when asked to provide the au-
thors of a given study (Agrawal et al., 2023). Nonetheless, other research has demonstrated that in simple
question-answering settings, large language models do exhibit systematic markers of uncertainty that indi-
cate their factually unreliable statements (Kadavath et al., 2022; Tian et al., 2023). These results suggest that
language models internally represent the limits of their knowledge, leading us to ask: Can language models
be fine-tuned to leverage this internal awareness, to avoid making untrue statements in the first place?
A key source of difficulty in training factual models comes in specifying an objective that adequately cap-
tures factuality. As an example, maximum likelihood, the most common objective for pre-training language
models, does not always encourage factual predictions. Consider the question “Where was Yo-Yo Ma born?”
A model that continues by near-deterministically producing the text “idk, probably Paris?” is nearly always
correct, but receives extremely high loss if the pre-training data contains any other response to the question.
On the other hand, a model that hedges probability mass over many possible phrasings and many possible
locations (including incorrect ones, like Antarctica) will likely receive much lower loss, as any response
observed in the training data will be assigned at least some non-trivial probability. Because the pre-training
objective may reward ‘smearing’ probability mass over many possible responses, language models may gen-
*Equal contribution.
1arXiv:2311.08401v1 [cs.CL] 14 Nov 2023 |
10.1016.j.cell.2023.12.028.pdf | Leading Edge
Perspective
De novo protein design—From new structures
to programmable functions
Tanja Kortemme1,2,3,*
1Department of Bioengineering and Therapeutic Sciences, University of California, San Francisco, San Francisco, CA 94158, USA
2Quantitative Biosciences Institute, University of California, San Francisco, San Francisco, CA 94158, USA
3Chan Zuckerberg Biohub, San Francisco, CA 94158, USA
*Correspondence: tanjakortemme@gmail.com
https://doi.org/10.1016/j.cell.2023.12.028
SUMMARY
Methods from artificial intelligence (AI) trained on large datasets of sequences and structures can now
‘‘write’’ proteins with new shapes and molecular functions de novo , without starting from proteins found in
nature. In this Perspective, I will discuss the state of the field of de novo protein design at the juncture of phys-
ics-based modeling approaches and AI. New protein folds and higher-order assemblies can be designed with considerable experimental success rates, and difficult problems requiring tunable control over protein conformations and precise shape complementarity for molecular recognition are coming into reach. Emerging approaches incorporate engineering principles—tunability, controllability, and modularity—into the design process from the beginning. Exciting frontiers lie in deconstructing cellular functions with de novo proteins
and, conversely, constructing synthetic cellular signaling from the ground up. As methods improve, many
more challenges are unsolved.
INTRODUCTION
Proteins can accelerate the speed of chemical reactions by
many orders of magnitude, convert the energy of light into chem-
ical energy, and regulate the myriads of processes within cells
and organisms with the level of accuracy and precision required to sustain life. Because of these powerful functions, natural pro-
teins have long been an attractive target for molecular engineer-
ing. The goals of protein engineering range from understanding the mechanisms of molecular and cellular functions to harness-
ing proteins for practical applications in catalysis, biotechnology,
and as precision tools in discovery science and medicine.
The field of protein design is now fundamentally—and practi-
cally—rethinking this approach. Rather than reengineering exist-
ing proteins, it is becoming possible to build proteins with intri-
cate architectures and functions—as powerful as those in nature but new and user-programmable—from the ground up.
This is the concept of de novo design,
1designing proteins
from engineering principles or ‘‘blueprints’’ without relying on ex-isting starting points found in nature.
One can of course ask, why would one build everything new if
one can borrow, reuse, and reprogram from nature, or even arrive at functions new to nature despite starting from existing proteins?
2Indeed, the approach of evolving or recombining ex-
isting protein components for new functions has been incredibly
successful,2,3and de novo design has long lagged behind
because of its apparent limitations. Designed proteins, if less
active than their natural counterparts, have required extensive
screening campaigns to improve activity, and many desired functions seemed out of reach.
4But if we could design functional proteins completely de novo, from the ground up, without the
idiosyncratic features of evolved proteins, there may be several
distinct advantages ( Figure 1 A). The most obvious one is to
enable functions not yet seen in nature (for which there are no
obvious existing starting points for directed evolution). The sec-
ond advantage is that de novo design could allow us to create
proteins that integrate engineering principles—tunability,
controllability, and modularity—into the design process from
the beginning. We could engineer de novo proteins a priori to
be (1) tunable, such that it is easy to generate versions with pre-
cisely altered biochemical parameters, (2) controllable, such that
protein function is responsive to internal and external stimuli, and (3) modular, such that we can integrate different functions easily into composite molecular machines and assemblies.
Artificial intelligence (AI) promises a considerable leap in
enabling this vision for de novo design. Recent advances in
the accuracy of protein structure prediction through deep
learning
5–7have profound influence on the inverse problem, pro-
tein design, and are changing how de novo design is conceptu-
alized. Classical approaches to protein design first define a pro-
tein backbone structure at the atomic level and then find a
sequence that is consistent with that structure.8Designing
‘‘function’’ adds a definition of the structure of an active site (typi-cally the relative atomic positioning of key catalytic or binding
residues) that is built into a designed protein ‘‘scaffold.’’ Much
of the difficulty of designing function lies in the fact that the de-signed protein needs to adopt the desired functional site struc-
ture with extraordinary precision. Even deviations of less than
1 Å in atomic positions can cause the design to fail (if we, for
example, think of the precise geometric requirements of
|
2302.02672v2.pdf | Identifiability of latent-variable and
structural-equation models:
from linear to nonlinear
Aapo Hyvärinen,1 Ilyes Khemakhem,2 Ricardo Monti2
1)Dept of Computer Science, University of Helsinki, Finland
2)Gatsby Computational Neuroscience Unit, UCL, UK
May 4, 2023
Abstract
An old problem in multivariate statistics is that linear Gaussian models
are often unidentifiable, i.e. the parameters cannot be uniquely estimated.
In factor (component) analysis, an orthogonal rotation of the factors is
unidentifiable, while in linear regression, the direction of effect cannot
be identified. For such linear models, non-Gaussianity of the (latent)
variables has been shown to provide identifiability. In the case of factor
analysis, this leads to independent component analysis, while in the case
of the direction of effect, non-Gaussian versions of structural equation
modelling solve the problem. More recently, we have shown how even
general nonparametric nonlinear versions of such models can be estimated.
Non-Gaussianity is not enough in this case, but assuming we have time
series, or that the distributions are suitably modulated by some observed
auxiliary variables, the models are identifiable. This paper reviews the
identifiability theory for the linear and nonlinear cases, considering both
factor analytic models and structural equation models.
Keywords: Identifiability ; independent component analysis ; structural
equation model ; factor analysis ; disentanglement ; non-Gaussianity
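A small numerical illustration of the linear-Gaussian unidentifiability mentioned in the abstract: rotating Gaussian factors leaves the observed covariance (and hence the data distribution) unchanged, so the mixing matrix cannot be recovered. The dimensions below are arbitrary:

    # Covariance of x = A s with s ~ N(0, I) equals A A^T, and (A R)(A R)^T = A A^T
    # for any orthogonal rotation R, so the rotation is unidentifiable.
    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.normal(size=(4, 2))                        # mixing matrix
    theta = 0.7
    R = np.array([[np.cos(theta), -np.sin(theta)],     # orthogonal rotation of the factors
                  [np.sin(theta),  np.cos(theta)]])

    cov_original = A @ A.T
    cov_rotated = (A @ R) @ (A @ R).T
    print(np.allclose(cov_original, cov_rotated))      # True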
1 Introduction
The goal of this paper is to provide a succinct and relatively self-contained
exposition of the identifiability theory of a class of latent-variable models called
independent component analysis, as well as of a class of structural-equation
models. The theory has both linear and nonlinear versions, where “nonlinear”
is to be taken in the sense of general (non-parametric) nonlinearities. The latent-
variable models and structural-equation model are intimately related, and the
identifiability theory of the former can be used to construct an identifiability
theory of the latter. We focus on identifiability theory, and aim to explain the
1arXiv:2302.02672v2 [stat.ML] 3 May 2023 |
2402.14083.pdf | Beyond A∗: Better Planning with Transformers via
Search Dynamics Bootstrapping
Lucas Lehnert1, Sainbayar Sukhbaatar1, Paul Mcvay1, Michael Rabbat1, Yuandong Tian1
1FAIR at Meta
While Transformers have enabled tremendous progress in various application settings, such architectures
still lag behind traditional symbolic planners for solving complex decision making tasks. In this work,
we demonstrate how to train Transformers to solve complex planning tasks and present Searchformer,
a Transformer model that optimally solves previously unseen Sokoban puzzles 93.7% of the time, while
using up to 26.8% fewer search steps than standard A∗ search. Searchformer is an encoder-decoder
Transformer model trained to predict the search dynamics of A∗. This model is then fine-tuned via
expert iterations to perform fewer search steps than A∗ search while still generating an optimal plan.
In our training method, A∗’s search dynamics are expressed as a token sequence outlining when task
states are added and removed into the search tree during symbolic planning. In our ablation studies
on maze navigation, we find that Searchformer significantly outperforms baselines that predict the
optimal plan directly with a 5–10 ×smaller model size and a 10 ×smaller training dataset. We also
demonstrate how Searchformer scales to larger and more complex decision making tasks like Sokoban
with improved percentage of solved tasks and shortened search dynamics.
Correspondence: {lucaslehnert, yuandong}@meta.com
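The training data described above consists of token sequences recording when A∗ adds and removes states from its search tree; a minimal grid-world A∗ that emits such a trace is sketched below, with an illustrative ("create"/"close") token format rather than the paper's actual vocabulary or task encoding:

    # Minimal A* on a grid that logs its search dynamics as a token-like trace.
    import heapq

    def astar_trace(grid, start, goal):
        def h(p):                                   # Manhattan heuristic
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        trace, frontier, best_g = [], [(h(start), 0, start)], {start: 0}
        trace.append(("create", start, 0, h(start)))
        while frontier:
            f, g, node = heapq.heappop(frontier)
            if g > best_g.get(node, float("inf")):
                continue                            # stale queue entry
            trace.append(("close", node, g, h(node)))
            if node == goal:
                return trace
            x, y = node
            for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
                if nxt in grid and grid[nxt] == 0 and g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
                    trace.append(("create", nxt, g + 1, h(nxt)))
        return trace

    grid = {(x, y): 0 for x in range(4) for y in range(4)}
    grid[(1, 1)] = grid[(2, 1)] = 1                 # walls
    for token in astar_trace(grid, (0, 0), (3, 3)):
        print(token)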
1 Introduction
Over the past few years, Transformer-based architectures (Vaswani et al., 2017) have demonstrated impressive
performance in different tasks, including holding conversations at the human level (Shuster et al., 2022;
OpenAI, 2022, 2023; Touvron et al., 2023), high-quality image understanding (Caron et al., 2021; Oquab
et al., 2024; Assran et al., 2023) and video generation (Singer et al., 2023), multi-modal generation (Girdhar
et al., 2023; Radford et al., 2021), and code completion (Roziere et al., 2023; OpenAI, 2021). By training
these architectures with a huge amount of data, the resulting models, such as Large Language Models (LLMs),
can generalize well in real-world use cases.
Despite these successes, Transformer-based architectures and LLMs still struggle when it comes to solving
planning and reasoning tasks. Previous studies (Momennejad et al., 2023; Valmeekam et al., 2023a,b)
demonstrate that LLMs fall short in multi-step planning tasks (Valmeekam et al., 2023b) or when performing
higher-order reasoning (Momennejad et al., 2023; Fan et al., 2020).
In recent years, various methods have been proposed to improve the performance of Transformers in reasoning
and planning tasks. One common and effective approach is to simulate the human thinking process and
produce intermediate “thoughts” before outputting a response. Chain-of-Thought (CoT) prompting (Wei et al.,
2022) encourages the model to predict the intermediate steps and to “think” step by step. Tree-of-thoughts
(ToT) uses a branching strategy and critics to generate different thought paths before picking the best one (Yao
et al., 2023). While these techniques are often effective, there are studies showing that in many cases, they may
lead to worse performance, for example due to self-enforcing (Huang et al., 2023). Furthermore, techniques
effective on one dataset may not work well on others due to changes in the type of reasoning involved (e.g.,
spatial reasoning vs. mathematical reasoning vs. common-sense reasoning). How to enable Transformers or
LLMs to plan, solve multi-step decision making tasks, and perform different types of reasoning still remains
elusive and an active area of research.
These methods stand in sharp contrast with traditional symbolic planning and search techniques. While such
techniques may not exhibit the language understanding capabilities of LLMs trained on internet-scale datasets,
1arXiv:2402.14083v1 [cs.AI] 21 Feb 2024 |
2309.17179.pdf | AlphaZero-Like Tree-Search can Guide
Large Language Model Decoding and Training
Xidong Feng*1, Ziyu Wan*2, Muning Wen2, Stephen Marcus McAleer3
Ying Wen2, Weinan Zhang2, Jun Wang1
Abstract
Recent works like Tree-of-Thought (ToT) and
Reasoning via Planning (RAP) aim to augment
the reasoning capabilities of LLMs by using tree-
search algorithms to guide multi-step reasoning.
These methods rely on prompting a pre-trained
model to serve as a value function and focus
on problems with low search depth. As a re-
sult, these methods will not work in domains
where the pre-trained LLM does not have enough
knowledge to serve as an effective value func-
tion or in domains that require long-horizon plan-
ning. To address these limitations, we present an
AlphaZero-like tree-search learning framework
for LLMs (termed TS-LLM), systematically il-
lustrating how tree-search with a learned value
function can guide LLM decoding. TS-LLM dis-
tinguishes itself in two key ways. (1) Leveraging
a learned value function and AlphaZero-like al-
gorithms, our approach can be generally adapt-
able to a wide range of tasks, language models
of any size, and tasks of varying search depths.
(2) Our approach can guide LLMs during both
inference and training, iteratively improving the
LLM. Empirical results across reasoning, plan-
ning, alignment, and decision-making tasks show
that TS-LLM outperforms existing approaches
and can handle trees with a depth of 64.
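A minimal sketch of the general idea of guiding sequence expansion with a value function — here a greedy best-first search over partial token sequences with a stub scorer; the toy vocabulary, scoring, and depth limit are illustrative stand-ins, not the AlphaZero-style algorithm or LLM interface of TS-LLM:

    # Value-guided tree search over token sequences.
    import heapq

    VOCAB = ["a", "b", "c"]

    def value_fn(seq):
        # stub for a learned value network: here, prefer sequences with many "b"s
        return seq.count("b") - 0.1 * len(seq)

    def tree_search(max_depth=6, beam=4):
        frontier = [(-value_fn(""), "")]            # max-heap via negated values
        best = ""
        while frontier:
            neg_v, seq = heapq.heappop(frontier)
            if -neg_v > value_fn(best):
                best = seq
            if len(seq) >= max_depth:
                continue
            children = sorted((seq + t for t in VOCAB), key=value_fn, reverse=True)
            for child in children[:beam]:
                heapq.heappush(frontier, (-value_fn(child), child))
        return best

    print(tree_search())   # "bbbbbb" under this toy value function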
1. Introduction
Large language models (LLMs) (OpenAI, 2023; Touvron
et al., 2023a) have demonstrated their potential in a wide
range of natural language tasks. A plethora of recent studies
have concentrated on improving LLMs task-solving capabil-
ity, including curation of larger and higher-quality general
or domain-specific data (Touvron et al., 2023a; Zhou et al.,
*Equal contribution. 1University College London, 2Shanghai Jiao Tong University, 3Carnegie Mellon University. Correspondence to: Xidong Feng <xidong.feng.20@ucl.ac.uk>.
Preprint.
2023; Gunasekar et al., 2023; Feng et al., 2023; Taylor et al.,
2022), more sophisticated prompt design (Wei et al., 2022;
Zhou et al., 2022; Creswell et al., 2022), or better train-
ing algorithms with Supervised Learning or Reinforcement
Learning (RL) (Dong et al., 2023; Gulcehre et al., 2023;
Rafailov et al., 2023). When training LLMs with RL, LLMs’
generation can be naturally formulated as a Markov Deci-
sion Process (MDP) and optimized with specific objectives.
Following this formulation, ChatGPT (Ouyang et al., 2022)
emerges as a notable success, optimizing LLMs to align
human preference by leveraging RL from Human Feedback
(RLHF) (Christiano et al., 2017).
LLMs can be further guided with planning algorithms such
astree search . Preliminary work in this field includes
Tree-of-Thought (ToT) (Yao et al., 2023; Long, 2023) with
depth/breadth-first search and Reasoning-via-Planing (RAP)
(Hao et al., 2023) with MCTS. They successfully demon-
strated a performance boost of searching on trees expanded
by LLM through self-evaluation. Despite these advances,
current methods come with distinct limitations. First, the
value functions in the tree-search algorithms are obtained
by prompting LLMs. As a result, such algorithms lack gen-
eral applicability and heavily rely on both well-designed
prompts and the robust capabilities of advanced LLMs. Be-
yond the model requirements, we will also show in Sec.
4.2.1 that such prompt-based self-evaluation is not always
reliable. Second, ToT and RAP use BFS/DFS and MCTS
for tree search, restricting their capabilities to relatively sim-
ple and shallow tasks. They are capped at a maximum depth
of only 10 or 7, which is significantly less than the depth
achieved by AlphaZero in chess or Go (Silver et al., 2017).
As a result, ToT and RAP might struggle with complex prob-
lems that demand large analytical depths and longer-term
planning horizons, decreasing their scalability.
To address these problems, we introduce tree-search en-
hanced LLM (TS-LLM), an AlphaZero-like framework that
utilizes tree-search to improve LLMs’ performance on gen-
eral natural language tasks. TS-LLM extends previous work
to AlphaZero-like deep tree-search with a learned LLM-
based value function which can guide the LLM during both
inference and training. Compared with previous work, TS-
1arXiv:2309.17179v2 [cs.LG] 9 Feb 2024 |
10.1016.j.cels.2024.01.008.pdf | Brief Report
Convolutions are competitive with transformers for
protein sequence pretraining
Highlights
• We trained large-scale convolutional protein language models
• Convolutions perform as well as transformers across tasks while being more efficient
• Convolutions and transformers have different inductive biases
• Current pretraining strategies do not scale well across all tasks for either model
Authors
Kevin K. Yang, Nicolo Fusi, Alex X. Lu
Correspondence
kevyan@microsoft.com
In brief
Protein language models (PLMs) extract biological information from millions of protein sequences and can then be used as a starting point for other prediction tasks. Most PLMs are parametrized as transformers, which scale poorly with length. A more scalable framework—convolutions—is as effective, improving the efficiency of future applications.
Yang et al., 2024, Cell Systems 15, 1–9, March 20, 2024 © 2024 Elsevier Inc.
https://doi.org/10.1016/j.cels.2024.01.008 |
2204.12130.pdf | LM-Debugger : An Interactive Tool for
Inspection and Intervention in Transformer-Based Language Models
Mor Geva1, Avi Caciularu2,∗, Guy Dar3, Paul Roit2, Shoval Sadde1
Micah Shlain1, Bar Tamir4, Yoav Goldberg1,2
1Allen Institute for AI2Bar-Ilan University
3Tel Aviv University4The Hebrew University of Jerusalem
morp@allenai.org
Abstract
The opaque nature and unexplained behavior
of transformer-based language models (LMs)
have spurred a wide interest in interpreting
their predictions. However, current interpre-
tation methods mostly focus on probing mod-
els from outside, executing behavioral tests,
and analyzing salience input features, while
the internal prediction construction process is
largely not understood. In this work, we in-
troduce LM-Debugger , an interactive debug-
ger tool for transformer-based LMs, which
provides a fine-grained interpretation of the
model’s internal prediction process, as well as
a powerful framework for intervening in LM
behavior. For its backbone, LM-Debugger re-
lies on a recent method that interprets the inner
token representations and their updates by the
feed-forward layers in the vocabulary space.
We demonstrate the utility of LM-Debugger
for single-prediction debugging, by inspect-
ing the internal disambiguation process done
by GPT2. Moreover, we show how easily
LM-Debugger allows to shift model behavior
in a direction of the user’s choice, by iden-
tifying a few vectors in the network and in-
ducing effective interventions to the prediction
process. We release LM-Debugger as an open-
source tool and a demo over GPT2 models.
1 Introduction
Transformer-based language models (LMs) are the
backbone of modern NLP models (Bommasani
et al., 2021), but their internal prediction construc-
tion process is opaque. This is problematic to end-
users that do not understand why the model makes
specific predictions, as well as for developers who
wish to debug or fix model behaviour.
Recent work (Elhage et al., 2021; Geva et al.,
2022) suggested that the construction process of
LM predictions can be viewed as a sequence of
updates to the token representation. Specifically,
∗Work done during an internship at AI2.
[Figure 1: Illustration of the main capabilities of LM-Debugger, showing an example prompt ("She is working as a"), candidate token sets promoted at different FFN layers (e.g. school/teacher-related tokens versus music/DJ-related tokens), and the inspection, intervention, and projection views. Our tool interprets dominant changes in the output distribution induced by the feed-forward layers across the network (self-attention layers are not shown), and enables configuring interventions for shifting the prediction in directions of the user's choice.]
Geva et al. (2022) showed that updates by the feed-
forward network (FFN) layers, one of the building
blocks of transformers (Vaswani et al., 2017), can
be decomposed into weighted collections of sub-
updates, each induced by a FFN parameter vector,
that can be interpreted in the vocabulary space.
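A minimal sketch of that projection step — interpreting an FFN parameter vector (or any hidden-state update) by multiplying it with the output embedding matrix and reading off the top-scoring tokens; the matrices and vocabulary below are random stand-ins, and LM-Debugger's actual model hooks are not shown:

    # Project a hidden-state update into vocabulary space and list the tokens
    # whose logits it promotes most.
    import numpy as np

    rng = np.random.default_rng(5)
    vocab = [f"tok{i}" for i in range(1000)]
    E = rng.normal(size=(len(vocab), 64))        # output (unembedding) matrix
    ffn_vector = rng.normal(size=64)             # one FFN value vector / sub-update

    scores = E @ ffn_vector                      # logit contribution per token
    top = np.argsort(scores)[::-1][:5]
    print([(vocab[i], round(float(scores[i]), 2)) for i in top])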
In this work, we make a step towards LM trans-
parency by employing this interpretation approach
to create LM-Debugger , a powerful tool for inspec-
tion and intervention in transformer LM predic-
tions. LM-Debugger provides three main capabil-
ities for single-prediction debugging and model
analysis (illustrated in Figure 1). First, for a given
input (e.g. “My wife is working as a” ), it interprets
the model’s prediction at each layer in the network,
and the major changes applied to it by FFN layers.
This is done by projecting the token representa-arXiv:2204.12130v2 [cs.CL] 12 Oct 2022 |
2304.13136.pdf | Generating Molecular Fragmentation Graphs with
Autoregressive Neural Networks
Samuel Goldman
Computational and Systems Biology
MIT
Cambridge, MA 02139
samlg@mit.edu
Janet Li
Computer Science
Harvard College
Cambridge, MA 02138
jsli@college.harvard.eduConnor W. Coley
Chemical Engineering
Electrical Engineering and Computer Science
MIT
Cambridge, MA 02139
ccoley@mit.edu
Abstract
The accurate prediction of tandem mass spectra from molecular structures has the
potential to unlock new metabolomic discoveries by augmenting the community’s
libraries of experimental reference standards. Cheminformatic spectrum prediction
strategies use a “bond-breaking” framework to iteratively simulate mass spectrum
fragmentations, but these methods are (a) slow, due to the need to exhaustively and
combinatorially break molecules and (b) inaccurate, as they often rely upon heuris-
tics to predict the intensity of each resulting fragment; neural network alternatives
mitigate computational cost but are black-box and not inherently more accurate.
We introduce a physically-grounded neural approach that learns to predict each
breakage event and score the most relevant subset of molecular fragments quickly
and accurately. We evaluate our model by predicting spectra from both public and
private standard libraries, demonstrating that our hybrid approach offers state of
the art prediction accuracy, improved metabolite identification from a database
of candidates, and higher interpretability when compared to previous breakage
methods and black box neural networks. The grounding of our approach in physical
fragmentation events shows especially high promise for elucidating natural product
molecules with more complex scaffolds.
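A minimal illustration of one level of the "bond-breaking" idea mentioned in the abstract — delete each bond of a molecular graph in turn and record the masses of the resulting fragments; real tools operate on chemically annotated structures and recurse over many breaks, which is not reproduced here, and the toy graph and masses are illustrative only:

    # One level of bond-breaking fragment enumeration on a toy heavy-atom graph.
    atoms = {0: ("C", 12.0), 1: ("C", 12.0), 2: ("O", 16.0)}   # ethanol-like toy graph
    bonds = [(0, 1), (1, 2)]

    def fragments_after_break(removed_bond):
        remaining = [b for b in bonds if b != removed_bond]
        comp = {i: {i} for i in atoms}                          # connected components
        for a, b in remaining:
            merged = comp[a] | comp[b]
            for i in merged:
                comp[i] = merged
        seen, frags = set(), []
        for group in comp.values():
            key = frozenset(group)
            if key not in seen:
                seen.add(key)
                frags.append(sum(atoms[i][1] for i in group))
        return frags

    for bond in bonds:
        print(f"break {bond}: fragment masses {fragments_after_break(bond)}")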
1 Introduction
Identifying unknown molecules in complex metabolomic or environmental samples is of critical
importance to biologists [ 42], forensic scientists [ 34], and ecologists alike [ 5]. Tandem mass
spectrometry, MS/MS, is the standard analytical chemistry method for analyzing such samples,
favored for its speed and sensitivity [ 27]. In brief, MS/MS metabolomics experiments isolate, ionize,
and fragment small molecules, resulting in a characteristic spectrum for each where peaks correspond
to molecular sub-fragments (Fig. 1A). Importantly, these experiments are high throughput, leading to
thousands of detected spectra per single experiment for complex samples such as human serum.
The most straightforward way to identify an unknown molecule from its fragmentation spectrum is to
compare the spectrum to a library of known standards [ 3]. However, spectral libraries only contain on
the order of 104compounds—a drop in the bucket compared to the vast size of biologically-relevant
Preprint.arXiv:2304.13136v1 [q-bio.QM] 25 Apr 2023 |
2404.19737v1.pdf | Better & Faster Large Language Models via Multi-token Prediction
Fabian Gloeckle* 1 2Badr Youbi Idrissi* 1 3Baptiste Rozière1David Lopez-Paz+ 1Gabriel Synnaeve+ 1
Abstract
Large language models such as GPT and Llama
are trained with a next-token prediction loss. In
this work, we suggest that training language mod-
els to predict multiple future tokens at once results
in higher sample efficiency. More specifically, at
each position in the training corpus, we ask the
model to predict the following n tokens using n
independent output heads, operating on top of a
shared model trunk. Considering multi-token pre-
diction as an auxiliary training task, we measure
improved downstream capabilities with no over-
head in training time for both code and natural
language models. The method is increasingly use-
ful for larger model sizes, and keeps its appeal
when training for multiple epochs. Gains are es-
pecially pronounced on generative benchmarks
like coding, where our models consistently out-
perform strong baselines by several percentage
points. Our 13B parameter model solves 12 %
more problems on HumanEval and 17 % more on
MBPP than comparable next-token models. Ex-
periments on small algorithmic tasks demonstrate
that multi-token prediction is favorable for the
development of induction heads and algorithmic
reasoning capabilities. As an additional benefit,
models trained with 4-token prediction are up to
3× faster at inference, even with large batch sizes.
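A minimal PyTorch sketch of the shared-trunk / multiple-output-heads idea from the abstract (each head predicts the token i steps ahead and the per-head cross-entropy losses are combined); the trunk, layer sizes, and data below are toy stand-ins, not the paper's architecture:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    vocab, dim, n_heads = 100, 32, 4
    trunk = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, dim), nn.Tanh())
    heads = nn.ModuleList([nn.Linear(dim, vocab) for _ in range(n_heads)])

    tokens = torch.randint(0, vocab, (8, 16))          # (batch, time)
    hidden = trunk(tokens)                             # (batch, time, dim); no attention in this toy trunk

    loss = 0.0
    for i, head in enumerate(heads, start=1):
        logits = head(hidden[:, :-i])                  # predict the token i steps ahead
        targets = tokens[:, i:]
        loss = loss + F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
    loss = loss / n_heads
    print(float(loss))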
1. Introduction
Humanity has condensed its most ingenious undertakings,
surprising findings and beautiful productions into text.
Large Language Models (LLMs) trained on all of these
corpora are able to extract impressive amounts of world
knowledge, as well as basic reasoning capabilities by im-
plementing a simple—yet powerful—unsupervised learning
task: next-token prediction. Despite the recent wave of
impressive achievements (OpenAI, 2023), next-token pre-
*Equal contribution. +Last authors. 1FAIR at Meta, 2CERMICS Ecole des Ponts ParisTech, 3LISN Université Paris-Saclay. Correspondence to: Fabian Gloeckle <fgloeckle@meta.com>, Badr Youbi Idrissi <byoubi@meta.com>.
diction remains an inefficient way of acquiring language,
world knowledge and reasoning capabilities. More precisely,
teacher forcing with next-token prediction latches on local
patterns and overlooks “hard” decisions. Consequently, it
remains a fact that state-of-the-art next-token predictors call
for orders of magnitude more data than human children to
arrive at the same level of fluency (Frank, 2023).
Figure 1: Overview of multi-token prediction. (Top) Dur-
ing training, the model predicts 4 future tokens at once, by
means of a shared trunk and 4 dedicated output heads. Dur-
ing inference, we employ only the next-token output head.
Optionally, the other three heads may be used to speed-up
inference time. (Bottom) Multi-token prediction improves
pass@1 on the MBPP code task, significantly so as model
size increases. Error bars are confidence intervals of 90%
computed with bootstrapping over dataset samples.
1arXiv:2404.19737v1 [cs.CL] 30 Apr 2024 |
spl20.pdf | 1
Deep Clustering with Variational Autoencoder
Kart-Leong Lim and Xudong Jiang, Senior Member, IEEE and Chenyu Yi
Abstract —An autoencoder that learns a latent space in an
unsupervised manner has many applications in signal processing.
However, the latent space of an autoencoder does not pursue the
same clustering goal as Kmeans or GMM. A recent work of Song
et al proposes to artificially re-align each point in the latent
space of an autoencoder to its nearest class neighbors during
training. The resulting new latent space is found to be much
more suitable for clustering, since clustering information is used.
Inspired by Song et al, in this paper we propose several extensions
to this technique. First, we propose a probabilistic approach to
generalize Song’s approach, such that Euclidean distance in the
latent space is now represented by KL divergence. Second, as a
consequence of this generalization we can now use probability
distributions as inputs rather than points in the latent space.
Third, we propose using Bayesian Gaussian mixture model for
clustering in the latent space. We demonstrated our proposed
method on digit recognition datasets, MNIST, USPS and SHVN
as well as scene datasets, Scene15 and MIT67 with interesting
findings.
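The abstract replaces Euclidean distance in the latent space by a KL divergence between distributions; the standard closed-form quantity for VAE-style diagonal-Gaussian latents is sketched below (the paper's exact pairing of distributions is not reproduced here):

    # Closed-form KL divergence between diagonal Gaussians N(mu1, var1) and N(mu2, var2).
    import numpy as np

    def kl_diag_gaussians(mu1, var1, mu2, var2):
        return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

    mu1, var1 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
    mu2, var2 = np.array([1.0, -1.0]), np.array([2.0, 0.5])
    print(kl_diag_gaussians(mu1, var1, mu2, var2))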
I. I NTRODUCTION
Deep clustering networks that exploit autoencoder (AE) for
clustering have been found in many recent signal processing
applications such as computer vision and pattern recognition
[1], [39], [14], [15], [3], [12], speech and audio recognition [7],
[18], [40], [17], [27], [22], wireless communication [2], [32],
[10], text classification [36], [4], [30] and etc. Deep clustering
network [37], [31] typically trains a clustering algorithm e.g.
Kmeans on the latent space of AE. However, the latent space
of an AE may not be suitable for clustering. We can view this
problem from the probabilistic perspective of the variational
autoencoder (VAE) [19]. The main difference between AE and
variational autoencoder (VAE) [19], [18] is the way the latent
space is represented. In AE, an encoded image is represented
as a point in the latent space, while in VAE an encoded
image is represented by the sample drawn from a Gaussian
distribution. The latter is described by VAE's random variable,
mean and variance associated with the image. The problem of
clustering faced by VAE is that when we have a multiclass
dataset such as MNIST, the underlying Gaussian distribution
assumption may not be sufficient to separate different classes
in the latent space. This is especially true when two different
digit classes share very similar mean and variance. There is
simply no mechanism in VAE that enforces samples from
different classes to have different mean and variance. Unless
the underlying data layout is inherently class discriminative,
there is no way AE or VAE can generate a latent space suitable
for clustering.
K. Lim, X. Jiang and C. Yi are with the Rapid-Rich Object Search lab,
School of Electrical and Electronic Engineering, Nanyang Technological Uni-
versity, Singapore 639798 (email: lkartl@yahoo.com.sg, exdjiang@ntu.edu.sg,
yich0003@e.ntu.edu.sg)
In order to solve VAE's clustering problem, at least two
groups of researchers have converged to the same idea of
using categorical distribution for VAE since the underlying
distribution is discrete [11], [25]. Fortunately, there is an easier
way to solve the problem. A recent approach by Song et
al [31] focuses on minimizing the difference between the
original latent space learnt by AE and the feature space learnt
over it by traditional machine learning (ML) techniques. In
such approach, there are two objectives to be solved in each
iteration, the network weights φ and the ML parameters
θ. The standard way to learn it is to alternate between each
optimization while fixing the other. Our work mainly follows
Song’s approach [31] which we named as autoencoder with
distance (AED). We further extend it to using VAE [19] which
we call variational autoencoder with distance (VAED).
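A minimal sketch of the AED-style combined objective described above — reconstruction error plus the distance between each latent code and its assigned Kmeans centroid, with one alternation step for the centroids; a linear encoder/decoder is used for illustration and this is not the authors' implementation:

    import numpy as np

    rng = np.random.default_rng(6)
    X = rng.normal(size=(100, 8))
    W_enc, W_dec = rng.normal(size=(3, 8)) * 0.1, rng.normal(size=(8, 3)) * 0.1
    centroids = rng.normal(size=(5, 3))
    lam = 0.5                                          # weight on the distance term

    Z = X @ W_enc.T                                    # latent codes
    X_rec = Z @ W_dec.T                                # reconstructions
    assign = np.argmin(((Z[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)

    recon_err = np.mean((X - X_rec) ** 2)
    dist_err = np.mean((Z - centroids[assign]) ** 2)   # "L3" distance term
    print(f"objective = {recon_err + lam * dist_err:.3f}")

    # alternation step for the ML side: move each centroid to the mean of its codes
    for k in range(len(centroids)):
        if np.any(assign == k):
            centroids[k] = Z[assign == k].mean(axis=0)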
There are some challenges faced when using AED:
i) AE may not be the most ideal tool for training
compact representation since, unlike VAE it can-
not model the latent space using random variables.
ii) The distance error function of AED only takes
points in the latent space as inputs. It is not so
straightforward to extend this function to using
random variables as inputs.
iii) Kmeans assumes a spherical Gaussian distribu-
tion for each cluster. However, this is a strong
assumption for most datasets.
Novel contributions in this work include:
i) Inputs to the distance error function are now
probability distributions, rather than points in the
latent space.
ii) The second order term (variance) of network
(VAE) and ML (GMM) are now optimized by
the distance error function.
iii) Bayesian GMM [5] is used to improve the
clustering. More hidden variables and hyperpa-
rameters can better capture the latent space over
Kmeans alone.
A. Related work
AED [31] first proposes to integrate both reconstruction
error and the error between Kmeans and the encoded image
(a.k.a. distance error or L3) into a single objective. Backpropa-
gation on this objective will adjust the AE weights to minimize
the within class latent space representation of the encoded
image. Many recent works [31], [23], [34], [35], [37], [12]
including our paper follow this strategy. DCN [37] offers a
concise study of AED but both use identical L3. DC-Kmeans
[34] use the alternating directed method of multiplier to train
AED. The authors of DEC [35] proposed using a Student’s t-
distribution kernel for L3. DBC [23] combines a convolutional
|
1712.06527.pdf | Deep generative models of genetic variation capture mutation effects
Adam J. Riesselman* (Program in Biomedical Informatics, Harvard Medical School, ariesselman@g.harvard.edu)
John B. Ingraham* (Program in Systems Biology, Harvard University, ingraham@fas.harvard.edu)
Debora S. Marks (Department of Systems Biology, Harvard Medical School, debbie@hms.harvard.edu)
* Equal contribution
Abstract
The functions of proteins and RNAs are determined by a myriad of interactions between their constituent residues, but most quantitative models of how molecular phenotype depends on genotype must approximate this by simple additive effects. While recent models have relaxed this constraint to also account for pairwise interactions, these approaches do not provide a tractable path towards modeling higher-order dependencies. Here, we show how latent variable models with nonlinear dependencies can be applied to capture beyond-pairwise constraints in biomolecules. We present a new probabilistic model for sequence families, DeepSequence, that can predict the effects of mutations across a variety of deep mutational scanning experiments significantly better than site independent or pairwise models that are based on the same evolutionary data. The model, learned in an unsupervised manner solely from sequence information, is grounded with biologically motivated priors, reveals latent organization of sequence families, and can be used to extrapolate to new parts of sequence space.
Introduction
Modern medicine and biotechnology are routinely challenged to both interpret and exploit how mutations will affect biomolecules. From interpreting which genetic variants in humans underlie disease, to developing modified proteins that have useful properties, to synthesizing large molecular libraries that are enriched with functional sequences, there is need to be able to rapidly assess whether a given mutation to a protein or RNA will disrupt its function [1, 2]. Motivated by these diverse applications, new technologies have emerged that simultaneously assess the effects of thousands of mutations in parallel [3-25] (sometimes referred to as “deep mutational |
10.1126.science.abm9326.pdf | RESEARCH ARTICLE SUMMARY◥
NUCLEAR PORE COMPLEX
Structure of cytoplasmic ring of nuclear pore
complex by integrative cryo-EM and AlphaFold
Pietro Fontana †, Ying Dong †, Xiong Pi †, Alexander B. Tong †, Corey W. Hecksel, Longfei Wang,
Tian-Min Fu, Carlos Bustamante, Hao Wu *
INTRODUCTION: The nuclear pore complex
(NPC) is the molecular conduit in the nu-
clear membrane of eukaryotic cells that reg-
ulates import and export of biomolecules
between the nucleus and the cytosol, with vertebrate NPCs ~110 to 125 MDa in molec-
ular mass and ~120 nm in diameter. NPCs
are organized into four main rings: the cyto-
plasmic ring (CR) at the cytosolic side, the
inner ring and the luminal ring on the plane
of the nuclear membrane, and the nuclear ring facing the nucleus. Each ring possesses
an approximate eightfold symmetry and is
composed of multiple copies of different nu-
cleoporins. NPCs have been implicated in
numerous biological processes, and their dys-functions are associated with a growing num-
ber of serious human diseases. However, despite
pioneering studies from many groups over
the past two decades, we still lack a full un-
derstanding of NPCs ’organization, dynam-
ics, and complexity.RATIONALE: We used the Xenopus laevis oocyte
as a model system for the structural charac-
terization because each oocyte possesses a
large number of NPC particles that can be
visualized on native nuclear membranes with-
out the aid of detergent extraction. We used
single-particle cryo –electron microscopy (cryo-
EM) analysis on data collected at different stage
tilt angles for three-dimensional reconstruc-
tion and structure prediction with AlphaFold
for model building.
RESULTS: We reconstructed the CR map of
X. laevis NPC at 6.9 and 6.7 Å resolutions
for the full CR protomer and a core region,
respectively, and predicted the structures of
the individual nucleoporins using AlphaFold
because no high-resolution models of X. laevis
Nups were available. For any ambiguous sub-
unit interactions, we also predicted complex
structures, which further guided model fitting
of the CR protomer. We placed the nucleoporin
or complex structures into the CR density to
obtain an almost full CR atomic model, com-
posed of the inner and outer Y-complexes, two
copies of Nup205, two copies of the Nup214-
Nup88-Nup62 complex, one Nup155, and five
copies of Nup358. In particular, we predicted
the largest protein in the NPC, Nup358, as
having an S-shaped globular domain, a coiled-
coil domain, and a largely disordered C-terminal
region containing phenylalanine-glycine (FG)
repeats previously shown to form a gel-like con-
densate phase for selective cargo passage. Four
of the Nup358 copies clamp around the inner
and outer Y-complexes to stabilize the CR, and
the fifth Nup358 situates in the center of the
cluster of clamps. AlphaFold also predicted a
homo-oligomeric, likely specifically pentame-
ric, coiled-coil structure of Nup358 that may
provide the avidity for Nup358 recruitment to
the NPC and for lowering the threshold for
Nup358 condensation in NPC biogenesis.
CONCLUSION: Our studies offer an example of
integrative cryo-EM and structure prediction
as a general approach for attaining more pre-
cise models of megadalton protein complexes
from medium-resolution density maps. The
more accurate and almost complete model
of the CR presented here expands our under-
standing of the molecular interactions in the
NPC and represents a substantial step forward
toward the molecular architecture of a full
NPC, with implications for NPC function, bio-
genesis, and regulation. ▪
The list of author affiliations is available in the full article online.
*Corresponding author. Email: wu@crystal.harvard.edu
†These authors contributed equally to this work.
Cite this article as P. Fontana et al. ,Science 376, eabm9326
(2022). DOI: 10.1126/science.abm9326
Cryo-EM structure of the cytoplasmatic ring of the nuclear pore complex from X. laevis. The 6.9 Å map was
generated with single-particle cryo-EM, and the model was built with AlphaFold structure prediction. The
secondary structural elements guided EM map fitting, resulting in an almost complete model of the complex. The
approach allowed the identification of five copies of Nup358 and a second copy of the trimeric Nup214-Nup88-
Nup62 complex.
|
1909.13371.pdf | Gradient Descent: The Ultimate Optimizer
Kartik Chandra∗
MIT CSAIL†
Cambridge, MA
kach@csail.mit.eduAudrey Xie∗
MIT CSAIL
Cambridge, MA
ahx@mit.eduJonathan Ragan-Kelley
MIT CSAIL
Cambridge, MA
jrk@csail.mit.eduErik Meijer
Meta, Inc.
Menlo Park, CA
erikm@fb.com
Abstract
Working with any gradient-based machine learning algorithm involves the tedious
task of tuning the optimizer’s hyperparameters, such as its step size. Recent work
has shown how the step size can itself be optimized alongside the model parameters
by manually deriving expressions for “hypergradients” ahead of time.
We show how to automatically compute hypergradients with a simple and elegant
modification to backpropagation. This allows us to easily apply the method to
other optimizers and hyperparameters (e.g. momentum coefficients). We can even
recursively apply the method to its own hyper -hyperparameters, and so on ad in-
finitum . As these towers of optimizers grow taller, they become less sensitive to the
initial choice of hyperparameters. We present experiments validating this for MLPs,
CNNs, and RNNs. Finally, we provide a simple PyTorch implementation of this
algorithm (see people.csail.mit.edu/kach/gradient-descent-the-ultimate-optimizer).
1 Introduction
When we train deep neural networks by gradient descent, we have to select a step size α for our
optimizer. If α is too small, the optimizer runs very slowly, whereas if α is too large, the optimizer
fails to converge. Choosing an appropriate α is thus itself an optimization task that machine learning
practitioners face every day. Why not apply gradient descent here, too? To do so, we need to compute
the derivative of the loss function not only with respect to the neural network’s weights, but also with
respect to α. Baydin et al. (2018), applying an insight from Almeida et al. (1999), describe how to
efficiently compute such “hypergradients” by manually differentiating standard optimizer update
rules with respect to the step size hyperparameter. This allows for on-line learning rate adaptation,
which generally improves convergence, especially when the initial α is sub-optimal.
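To make the hypergradient idea concrete, here is a minimal PyTorch sketch (our own illustration, not the paper's code): one SGD step is kept inside the autodiff graph, so the derivative of the next loss with respect to the step size falls out of ordinary backpropagation. The toy loss and variable names are ours.

```python
import torch

# Minimal sketch: w_new = w - alpha * grad L(w), kept differentiable in alpha.
w = torch.randn(10, requires_grad=True)
alpha = torch.tensor(0.01, requires_grad=True)

def loss_fn(w):
    return (w ** 2).sum()  # stand-in loss

g = torch.autograd.grad(loss_fn(w), w, create_graph=True)[0]
w_new = w - alpha * g                                   # differentiable SGD update
hypergrad = torch.autograd.grad(loss_fn(w_new), alpha)[0]

# Analytic check in the style of Baydin et al.: dL(w_new)/dalpha = -grad L(w_new) . grad L(w)
manual = -(2 * w_new.detach() * g.detach()).sum()       # grad of sum(w^2) is 2w
print(float(hypergrad), float(manual))
```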
However, the above method has three limitations: (1) manually differentiating optimizer update rules
is tedious and error-prone, and must be re-done for each optimizer variant; (2) the method only tunes
the step size hyperparameter, not other hyperparameters such as the momentum coefficient; and
(3) the method introduces a new hyperparameter, the hyper-step-size, which must also be tuned.
In this paper, we address all three limitations by replacing manual differentiation with automatic
differentiation (AD), which (1) automatically computes correct derivatives without any additional
human effort, and (2) naturally generalizes to other hyperparameters (e.g. momentum coefficient)
for free. As for (3), we observe that AD can be applied to optimize not only the hyperparameters,
but also the hyper-hyperparameters, and the hyper-hyper-hyperparameters, and so on. In fact, we
can implement arbitrarily tall towers of recursive optimizers, which are increasingly robust to the
choice of initial hyperparameter. These “hyperoptimizers” therefore reduce the burden on humans
responsible for tuning the hyperparameters. (Such an effect was hypothesized by Baydin et al., but
not tested because manual differentiation of complex sequences of nested optimizers is impractical.)
∗Equal contribution.
†Work done in part at Meta, Inc. and in part at Stanford University.
36th Conference on Neural Information Processing Systems (NeurIPS 2022). |
2210.04142.pdf | 1
Deep Clustering: A Comprehensive Survey
Yazhou Ren, Member, IEEE, Jingyu Pu, Zhimeng Yang, Jie Xu, Guofeng Li, Xiaorong Pu,
Philip S. Yu, Fellow, IEEE, Lifang He, Member, IEEE
Abstract —Cluster analysis plays an indispensable role in machine learning and data mining. Learning a good data representation is
crucial for clustering algorithms. Recently, deep clustering, which can learn clustering-friendly representations using deep neural
networks, has been broadly applied in a wide range of clustering tasks. Existing surveys for deep clustering mainly focus on the
single-view fields and the network architectures, ignoring the complex application scenarios of clustering. To address this issue, in this
paper we provide a comprehensive survey for deep clustering in views of data sources. With different data sources and initial
conditions, we systematically distinguish the clustering methods in terms of methodology, prior knowledge, and architecture.
Concretely, deep clustering methods are introduced according to four categories, i.e., traditional single-view deep clustering,
semi-supervised deep clustering, deep multi-view clustering, and deep transfer clustering. Finally, we discuss the open challenges and
potential future opportunities in different fields of deep clustering.
Index Terms —Deep clustering; semi-supervised clustering; multi-view clustering; transfer learning
!
1 INTRODUCTION
WITH the development of online media, abundant data with
high complexity can be gathered easily. Through pinpoint
analysis of these data, we can dig the value out and use these
conclusions in many fields, such as face recognition [1], [2],
sentiment analysis [3], [4], intelligent manufacturing [5], [6], etc.
A model which can be used to classify the data with different
labels is the base of many applications. For labeled data, it is
taken for granted to use the labels as the most important information
as a guide. For unlabeled data, finding a quantifiable objective as
the guide of the model-building process is the key question of
clustering. Over the past decades, a large number of clustering
methods with shallow models have been proposed, including
centroid-based clustering [7], [8], density-based clustering [9],
[10], [11], [12], [13], distribution-based clustering [14], hierar-
chical clustering [15], ensemble clustering [16], [17], multi-view
clustering [18], [19], [20], [21], [22], [23], etc. These shallow
models are effective only when the features are representative,
while their performance on the complex data is usually limited
due to the poor power of feature learning.
In order to map the original complex data to a feature space
that is easy to cluster, many clustering methods focus on feature
extraction or feature transformation, such as PCA [24], kernel
method [25], spectral method [26], deep neural network [27], etc.
Among these methods, the deep neural network is a promising ap-
proach because of its excellent nonlinear mapping capability and
its flexibility in different scenarios. A well-designed deep learning
based clustering approach (referred to deep clustering) aims at
effectively extracting more clustering-friendly features from data
and performing clustering with learned features simultaneously.
Much research has been done in the field of deep clustering
and there are also some surveys about deep clustering methods
•Yazhou Ren, Jingyu Pu, Zhimeng Yang, Jie Xu, Guofeng Li and Xiaorong
Pu are with University of Electronic Science and Technology of China,
Chengdu 611731, China. Yazhou Ren is the corresponding author. E-mail:
yazhou.ren@uestc.edu.cn.
•Philip S. Yu is with University of Illinois at Chicago, IL 60607, USA.
•Lifang He is with Lehigh University, PA 18015, USA.
Manuscript received Oct. 2022.[28], [29], [30], [31]. Specifically, existing systematic reviews for
deep clustering mainly focus on the single-view clustering tasks
and the architectures of neural networks. For example, Aljalbout
et al . [28] focus only on deep single-view clustering methods
which are based on deep autoencoder (AE or DAE). Min et
al. [29] classify deep clustering methods from the perspective
of different deep networks. Nutakki et al . [30] divide deep
single-view clustering methods into three categories according
to their training strategies: multi-step sequential deep clustering,
joint deep clustering, and closed-loop multi-step deep clustering.
Zhou et al. [31] categorize deep single-view clustering methods
by the interaction way between feature learning and clustering
modules. But in the real world, the datasets for clustering are
always associated, e.g., the taste for reading is correlated with
the taste for a movie, and the side face and full-face from the
same person should be labeled the same. For these data, deep
clustering methods based on semi-supervised learning, multi-view
learning, and transfer learning have also made significant progress.
Unfortunately, existing reviews do not discuss them too much.
Therefore, it is important to classify deep clustering from
the perspective of data sources and initial conditions. In this
survey, we summarize the deep clustering from the perspective of
initial settings of data combined with deep learning methodology.
We introduce the newest progress of deep clustering from the
perspective of network and data structure as shown in Fig. 1.
Specifically, we organize the deep clustering methods into the
following four categories:
•Deep single-view clustering
For conventional clustering tasks, it is often assumed that
the data are of the same form and structure, as known as single-
view or single-modal data. The extraction of representations for
these data by deep neural networks (DNNs) is a significant
characteristic of deep clustering. However, what is more note-
worthy is the different applied deep learning techniques, which
are highly correlated with the structure of DNNs. To compare the
technical route of specific DNNs, we divide those algorithms into
five categories: deep autoencoder (DAE) based deep clustering, |
10.1038.s41586-024-07196-4.pdf | 212 | Nature | Vol 628 | 4 April 2024
Article
Cryo-EM structures of RAD51 assembled on
nucleosomes containing a DSB site
Takuro Shioi1,2, Suguru Hatazawa1, Eriko Oya3, Noriko Hosoya4, Wataru Kobayashi1,
Mitsuo Ogasawara1, Takehiko Kobayashi2,3, Yoshimasa Takizawa1 & Hitoshi Kurumizaka1,2 ✉
RAD51 is the central eukaryotic recombinase required for meiotic recombination and
mitotic repair of double-strand DNA breaks (DSBs)1,2. However, the mechanism by
which RAD51 functions at DSB sites in chromatin has remained elusive. Here we report
the cryo-electron microscopy structures of human RAD51–nucleosome complexes, in
which RAD51 forms ring and filament conformations. In the ring forms, the N-terminal
lobe domains (NLDs) of RAD51 protomers are aligned on the outside of the RAD51
ring, and directly bind to the nucleosomal DNA. The nucleosomal linker DNA that
contains the DSB site is recognized by the L1 and L2 loops—active centres that face the
central hole of the RAD51 ring. In the filament form, the nucleosomal DNA is peeled
by the RAD51 filament extension, and the NLDs of RAD51 protomers proximal to the
nucleosome bind to the remaining nucleosomal DNA and histones. Mutations that
affect nucleosome-binding residues of the RAD51 NLD decrease nucleosome binding,
but barely affect DNA binding in vitro. Consistently, yeast Rad51 mutants with the
corresponding mutations are substantially defective in DNA repair in vivo. These
results reveal an unexpected function of the RAD51 NLD, and explain the mechanism
by which RAD51 associates with nucleosomes, recognizes DSBs and forms the active
filament in chromatin.
During meiosis, a DSB is enzymatically introduced in the genomic DNA
to initiate genetic recombination1. By contrast, in mitotic cells, DSBs
are frequently induced by ionizing radiation, DNA-damaging agents
and undesired stalling of the replication machinery2. Homologous
recombination (HR) is promoted at DSB sites and has essential roles
in the meiotic genetic recombination and the mitotic recombinational
repair of DSBs3,4.
RAD51 is an evolutionally conserved enzyme that functions in the HR
pathway in both meiotic and mitotic cells, and accumulates on DSB sites
in chromosomes5–7. During the HR process, RAD51 binds to DNA and
forms a filamentous complex, in which a region of the DSB containing
single-stranded DNA (ssDNA) is incorporated into the helical filament
formed by the RAD51 multimer8–10. The RAD51–DNA complex then binds
to undamaged DNA and promotes the homologous-pairing reaction,
by which the ssDNA region pairs with the homologous double-stranded
DNA (dsDNA) in an ATP-dependent manner11–13.
In eukaryotes, the genomic DNA is compacted as chromatin, in which
the nucleosome is the fundamental structural unit. In the nucleosome,
two each of histones H2A, H2B, H3 and H4 form a histone octamer,
and 145–147 base pairs of DNA continuously interact with the basic
surface of this octamer14. Consequently, in the nucleosome, the DNA
is left-handedly wrapped 1.65 times around the histone octamer, and
becomes inaccessible to DNA-binding proteins. In the HR process,
RAD51 somehow binds to the DNA tightly wrapped in the nucleosome,
recognizes the DSB and forms an active nucleoprotein filament at the DSB terminus in chromatin. However, the mechanism by which RAD51
promotes these steps in chromatin remains unclear.
Structures of RAD51 bound to nucleosomes
To determine how RAD51 assembles on chromatin with a DSB terminus,
we reconstituted the nucleosome with DNA containing the Widom
601 nucleosome positioning sequence15. The resulting nucleosome
was positioned at one end of the DNA. At the other DNA end of the
nucleosome, the eight-base-pair dsDNA plus a three-base 3′ ssDNA
overhang, designed to mimic the dsDNA–ssDNA junction created at a
DSB terminus, protruded as the linker DNA of the nucleosome (Fig. 1a
and Extended Data Fig. 1a,b). Purified human RAD51 was then incubated
with the nucleosome in the absence or presence of nucleotide cofac-
tors, such as ADP, ATP or a non-hydrolysable ATP analogue, AMP-PNP,
and the resulting RAD51–nucleosome complexes were separated by
sucrose gradient ultracentrifugation in the presence of glutaraldehyde
(GraFix) (Extended Data Figs. 2a, 3a, 4a and 5a).
The purified RAD51–nucleosome complexes were then visual -
ized by cryo-electron microscopy (cryo-EM). The structures of the
RAD51–nucleosome complexes were processed, and then subjected to
a single-particle workflow in the RELION software package16 (Extended
Data Figs. 2–5). We found that RAD51 forms multiple conformations
in the complex with the nucleosome, such as ring forms with eight
(octameric), nine (nonameric) or ten (decameric) protomers, and a
https://doi.org/10.1038/s41586-024-07196-4
Received: 17 June 2023
Accepted: 13 February 2024
Published online: 20 March 2024
Open access
1Laboratory of Chromatin Structure and Function, Institute for Quantitative Biosciences, The University of Tokyo, Tokyo, Japan. 2Department of Biological Sciences, Graduate School of Science,
The University of Tokyo, Tokyo, Japan. 3Laboratory of Genome Regeneration, Institute for Quantitative Biosciences, The University of Tokyo, Tokyo, Japan. 4Laboratory of Molecular Radiology,
Center for Disease Biology and Integrative Medicine, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan. ✉e-mail: kurumizaka@iqb.u-tokyo.ac.jp |
2401.01335.pdf | Self-Play Fine-Tuning Converts Weak Language Models
to Strong Language Models
Zixiang Chen∗†Yihe Deng∗‡Huizhuo Yuan∗§Kaixuan Ji¶Quanquan Gu‖
Abstract
Harnessing the power of human-annotated data through Supervised Fine-Tuning (SFT) is
pivotal for advancing Large Language Models (LLMs). In this paper, we delve into the prospect
of growing a strong LLM out of a weak one without the need for acquiring additional human-
annotated data. We propose a new fine-tuning method called Self-Play fIne-tuNing ( SPIN),
which starts from a supervised fine-tuned model. At the heart of SPIN lies a self-play mechanism,
where the LLM refines its capability by playing against instances of itself. More specifically, the
LLM generates its own training data from its previous iterations, refining its policy by discerning
these self-generated responses from those obtained from human-annotated data. Our method
progressively elevates the LLM from a nascent model to a formidable one, unlocking the full
potential of human-annotated demonstration data for SFT. Theoretically, we prove that the
global optimum to the training objective function of our method is achieved only when the LLM
policy aligns with the target data distribution. Empirically, we evaluate our method on several
benchmark datasets including the HuggingFace Open LLM Leaderboard, MT-Bench, and datasets
from Big-Bench. Our results show that SPIN can significantly improve the LLM's performance
across a variety of benchmarks and even outperform models trained through direct preference
optimization (DPO) supplemented with extra GPT-4 preference data. This sheds light on the
promise of self-play, enabling the achievement of human-level performance in LLMs without the
need for expert opponents. Codes are available at https://github.com/uclaml/SPIN .
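To make the self-play mechanism concrete, here is a minimal sketch of a SPIN-style training signal (our reading of the abstract; the DPO-style logistic form and the toy numbers are assumptions, not the paper's exact implementation): the current policy is pushed toward human-annotated responses and away from the previous iterate's self-generated responses, measured relative to that frozen iterate.

```python
import torch
import torch.nn.functional as F

def spin_loss(lp_human, lp_synth, ref_human, ref_synth, beta=0.1):
    # lp_* are sequence log-probs under the current policy; ref_* are log-probs
    # under the frozen previous iterate (the "opponent" whose generations are
    # discerned from human-annotated data). Assumed form, shown for illustration.
    margin = beta * ((lp_human - ref_human) - (lp_synth - ref_synth))
    return -F.logsigmoid(margin).mean()

# Toy usage with fake log-probs standing in for real model evaluations.
lp_human = torch.tensor([-32.0, -40.0], requires_grad=True)
lp_synth = torch.tensor([-30.0, -41.0], requires_grad=True)
ref_human = torch.tensor([-33.0, -39.5])
ref_synth = torch.tensor([-30.5, -40.0])
loss = spin_loss(lp_human, lp_synth, ref_human, ref_synth)
loss.backward()
print(float(loss))
```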
1 Introduction
Large Language Models (LLMs) have began a groundbreaking era in artificial general intelligence
(AGI), demonstrating extraordinary capabilities across a wide range of domains that require in-
tricate reasoning and specialized knowledge. These models excel in areas such as mathematical
reasoning/problem solving (Cobbe et al., 2021; Wei et al., 2022; Lewkowycz et al., 2022), code gener-
ation/programming (Chen et al., 2021; Austin et al., 2021; Li et al., 2022), text generation (Bubeck
∗Equal contribution
†Department of Computer Science, University of California, Los Angeles, CA 90095, USA; e-mail:
chenzx19@cs.ucla.edu
‡Department of Computer Science, University of California, Los Angeles, CA 90095, USA; e-mail:
yihedeng@cs.ucla.edu
§Department of Computer Science, University of California, Los Angeles, CA 90095, USA; e-mail:
hzyuan@cs.ucla.edu
¶Department of Computer Science, University of California, Los Angeles, CA 90095, USA; e-mail:
kaixuanji@cs.ucla.edu
‖Department of Computer Science, University of California, Los Angeles, CA 90095, USA; e-mail: qgu@cs.ucla.edu
|
10.1016.j.cell.2024.01.005.pdf | Leading Edge
Review
Integrating cellular electron microscopy
with multimodal data to explore biology across space and time
Caitlyn L. McCafferty,1,*Sven Klumpe,2,*Rommie E. Amaro,3,*Wanda Kukulski,4,*Lucy Collinson,5,*
and Benjamin D. Engel1,*
1Biozentrum, University of Basel, Spitalstrasse 41, 4056 Basel, Switzerland
2Research Group CryoEM Technology, Max-Planck-Institute of Biochemistry, Am Klopferspitz 18, 82152 Martinsried, Germany
3Department of Molecular Biology, University of California, San Diego, La Jolla, CA 92093, USA
4Institute of Biochemistry and Molecular Medicine, University of Bern, Bu ¨hlstrasse 28, 3012 Bern, Switzerland
5Electron Microscopy Science Technology Platform, Francis Crick Institute, 1 Midland Road, London NW1 1AT, UK
*Correspondence: caitlyn.mccafferty@unibas.ch (C.L.M.), klumpe@biochem.mpg.de (S.K.), ramaro@ucsd.edu (R.E.A.), wanda.kukulski@
unibe.ch (W.K.), lucy.collinson@crick.ac.uk (L.C.), ben.engel@unibas.ch (B.D.E.)
https://doi.org/10.1016/j.cell.2024.01.005
SUMMARY
Biology spans a continuum of length and time scales. Individual experimental methods only glimpse discrete
pieces of this spectrum but can be combined to construct a more holistic view. In this Review, we detail the latest advancements in volume electron microscopy (vEM) and cryo-electron tomography (cryo-ET), which together can visualize biological complexity across scales from the organization of cells in large tissues to the molecular details inside native cellular environments. In addition, we discuss emerging methodologies for integrating three-dimensional electron microscopy (3DEM) imaging with multimodal data, including fluorescence microscopy, mass spectrometry, single-particle analysis, and AI-based structure prediction. This
multifaceted approach fills gaps in the biological continuum, providing functional context, spatial organiza-
tion, molecular identity, and native interactions. We conclude with a perspective on incorporating diverse data into computational simulations that further bridge and extend length scales while integrating the dimension of time.
INTRODUCTION
Fifty years ago, the Nobel Prize in Physiology or Medicine was
awarded to Albert Claude, Christian de Duve, and George E. Pal-
ade for their discoveries on the structural and functional organi-
zation of the cell. These pioneering investigations into structuralcell biology were facilitated by electron microscopy (EM) andsubcellular fractionation
1–4and complemented by functional an-
alyses.5,6Integration of these techniques helped define our un-
derstanding of organelles, linking the ultrastructure of thesecellular compartments to their specialized functions. These foun-
dational studies propelled the development of new instrumenta-
tion, methodology, and computation that have improved our understanding of cellular structures in situ—within their native
context.
Now, decades later, a new generation of cellular structural
biology is emerging ( Figure 1 A), combining advancements in
three-dimensional (3D) EM with complementary approaches,
including live-cell and super-resolution light microscopy, prote-
omics, biophysical assays, bioinformatics, high-resolution structure determination, artificial intelligence (AI)-based structure pre-
diction, integrative modeling, and multiscale simulations. This
integration of methodologies can help assemble a more complete understanding of the biological continuum, from atoms
over femtoseconds to large tissues over hours and days.
New developments in 3DEM continue to advance the field of
cellular structural biology. These techniques span scales, from
the 3D visualization of entire cells and tissues with volume electron
microscopy (vEM)
14to the high-resolution imaging of molecular
complexes inside native cells using cryo-electron tomography (cryo-ET).
15vEM is a catch-all term encompassing a variety of
techniques that reconstruct 3D volumes from a series of single im-
ages generated either by transmission electron microscopy (TEM) or scanning electron microscopy (SEM) of thin sections (array to-
mography) or by SEM of a sample block face exposed by either a
diamond knife or a focused ion beam (FIB). These vEM methods can reconstruct volumes that are hundreds of microns thick while
attaining a resolution of several nanometers, revealing the 3D ar-
chitecture of organelles inside whole cells and tissues. In contrast, cellular cryo-ET relies on TEM tilt-series to attain sub-nanometer resolution inside cells frozen in a near-native state. However,
this gain in resolution comes at the cost of continuous volume
due to the mean free path of electrons through ice (300–400 nm at 300 kV
16). With the exception of smaller bacteria and thin cell
protrusions, frozen cells must first be thinned, accomplished by
cryo-FIB milling.17 These two EM disciplines are complementary,
Cell 187, February 1, 2024 ©2024 Elsevier Inc. 563 |
2206.01079.pdf | When does return-conditioned supervised learning
work for offline reinforcement learning?
David Brandfonbrener
New York University
david.brandfonbrener@nyu.eduAlberto Bietti
New York UniversityJacob Buckman
MILA
Romain Laroche
Microsoft ResearchJoan Bruna
New York University
Abstract
Several recent works have proposed a class of algorithms for the offline reinforce-
ment learning (RL) problem that we will refer to as return-conditioned supervised
learning (RCSL). RCSL algorithms learn the distribution of actions conditioned
on both the state and the return of the trajectory. Then they define a policy by
conditioning on achieving high return. In this paper, we provide a rigorous study
of the capabilities and limitations of RCSL, something which is crucially miss-
ing in previous work. We find that RCSL returns the optimal policy under a set
of assumptions that are stronger than those needed for the more traditional dy-
namic programming-based algorithms. We provide specific examples of MDPs
and datasets that illustrate the necessity of these assumptions and the limits of
RCSL. Finally, we present empirical evidence that these limitations will also
cause issues in practice by providing illustrative experiments in simple point-mass
environments and on datasets from the D4RL benchmark.
1 Introduction
In recent years, deep learning has proven to be an exceptionally powerful generic algorithm for
solving supervised learning (SL) tasks. These approaches tend to be stable, and scale well with
compute and data [17]. In contrast, deep reinforcement learning algorithms seem to lack these nice
properties; results are well known to be sensitive to hyperparameters and difficult to replicate. In
spite of this, deep reinforcement learning (RL) has achieved impressive feats, such as defeating
human champions at Go [25]. This juxtaposition of success and instability has inspired researchers
to explore alternative approaches to reinforcement learning that more closely resemble supervised
learning in hopes of making deep RL as well-behaved as deep SL.
One family of algorithms that has garnered great interest recently is return-conditioned supervised
learning (RCSL). The core idea of RCSL is to learn the return-conditional distribution of actions
in each state, and then define a policy by sampling from the distribution of actions that receive
high return. This was first proposed for the online RL setting by work on Upside Down RL [23,
26] and Reward Conditioned Policies [21]. The idea was extended to the offline RL setting using
transformers that condition on the entire history of states rather than just the current Markovian state
in the Decision Transformer (DT) work [8, 12]. Recent work on RL via Supervised Learning (RvS)
[9] unifies and simplifies ideas from these prior works with ideas about goal-conditioned policies.
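As a concrete illustration of the RCSL recipe just described (our own sketch; the network and conditioning interface are assumptions, not a specific method from the cited papers), the policy is an ordinary classifier over actions whose input includes a target return, trained with supervised learning and queried with a high return at evaluation time.

```python
import torch
import torch.nn as nn

class RCSLPolicy(nn.Module):
    # Return-conditioned policy pi(a | s, R): a plain classifier over actions
    # whose input is the state concatenated with a target return.
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state, target_return):
        x = torch.cat([state, target_return.unsqueeze(-1)], dim=-1)
        return self.net(x)  # action logits

# Supervised training on logged (state, action, return) tuples (random stand-ins),
# then acting by conditioning on a high return.
policy = RCSLPolicy(state_dim=4, n_actions=3)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
states, actions, returns = torch.randn(256, 4), torch.randint(0, 3, (256,)), torch.randn(256)
loss = nn.functional.cross_entropy(policy(states, returns), actions)
opt.zero_grad(); loss.backward(); opt.step()

high_return = torch.full((1,), float(returns.max()))    # condition on achieving high return
action = policy(torch.randn(1, 4), high_return).argmax(-1)
```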
Importantly, none of this prior work provides theoretical guarantees or analysis of the failure modes
of the return-conditioning approach. In contrast, the more established dynamic programming (DP)
algorithms for RL are better understood theoretically. This paper attempts to address this gap in
36th Conference on Neural Information Processing Systems (NeurIPS 2022). |
2403.08540.pdf | Language models scale reliably with over-training and on
downstream tasks
Samir Yitzhak Gadre1,2Georgios Smyrnis3Vaishaal Shankar4
Suchin Gururangan5Mitchell Wortsman5Rulin Shao5Jean Mercat2
Alex Fang5Jeffrey Li5Sedrick Keh2Rui Xin5Marianna Nezhurina6,7Igor Vasiljevic2
Jenia Jitsev6,7Alexandros G. Dimakis3Gabriel Ilharco5Shuran Song8Thomas Kollar2
Yair Carmon9∗Achal Dave2∗Reinhard Heckel10∗Niklas Muennighoff11∗Ludwig Schmidt5∗
Abstract
Scaling laws are useful guides for developing language models, but there are still gaps between
current scaling studies and how language models are ultimately trained and evaluated. For
instance, scaling is usually studied in the compute-optimal training regime (i.e., “Chinchilla
optimal” regime); however, in practice, models are often over-trained to reduce inference costs.
Moreover, scaling laws mostly predict loss on next-token prediction, but ultimately models are
compared based on downstream task performance. In this paper, we address both shortcomings.
To do so, we create a testbed of 104 models with 0.011B to 6.9B parameters trained with various
numbers of tokens on three data distributions. First, we investigate scaling in the over-trained
regime. We fit scaling laws that extrapolate in both the number of model parameters and the ratio
of training tokens to parameters. This enables us to predict the validation loss of a 1.4B parameter,
900B token run (i.e., 32× over-trained) and a 6.9B parameter, 138B token run—each from
experiments that take 300× less compute. Second, we relate the perplexity of a language model to
its downstream task performance via a power law. We use this law to predict top-1 error averaged
over downstream tasks for the two aforementioned models using experiments that take 20× less
compute. Our experiments are available at https://github.com/mlfoundations/scaling .
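As a rough illustration of the two-stage fitting pipeline described above (illustrative functional forms and synthetic numbers only; the paper's parameterization is richer, extrapolating jointly in parameters and tokens-per-parameter), one can fit a power law from a compute-like quantity to loss, and a second law from loss to average downstream error.

```python
import numpy as np

# Stage 1 (sketch): power law from a compute-like quantity to loss, fit in log-log space.
C = np.array([1e18, 1e19, 1e20, 1e21])          # synthetic compute axis
L = np.array([3.9, 3.4, 3.05, 2.8])             # synthetic validation losses
slope, intercept = np.polyfit(np.log(C), np.log(L), 1)
loss_at = lambda c: np.exp(intercept) * c ** slope

# Stage 2 (sketch): exponential map from loss to average downstream top-1 error,
# mirroring the perplexity -> task-performance relation described above.
err = np.array([0.72, 0.66, 0.60, 0.55])        # synthetic average top-1 errors
b, a = np.polyfit(L, np.log(err), 1)
err_at = lambda l: np.exp(a + b * l)

# Extrapolate to a larger budget: predict loss first, then downstream error.
L_big = loss_at(1e23)
print(round(float(L_big), 3), round(float(err_at(L_big)), 3))
```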
1 Introduction
Training large language models is expensive. Moreover, training high-quality models requires a
complex recipe of algorithmic techniques and training data. To reduce the cost of finding successful
training recipes, researchers first evaluate ideas with small experiments and then extrapolate their
efficacy to larger scales. With reliable extrapolation, it is possible to quickly iterate at small scale
and still pick the method that will perform best for the final large training run. Indeed, this workflow
has become commonplace for training state-of-the-art language models such as Chinchilla 70B [ 43],
PaLM 540B [17], and GPT-4 [74].
Despite their importance for model development, published scaling laws differ from the goals of
training state-of-the-art models in important ways. For instance, scaling studies usually focus on
∗Equal advising, ordered alphabetically. Correspondence to sy@cs.columbia.edu. 1Columbia University
2Toyota Research Institute 3UT Austin 4Apple 5University of Washington 6Juelich Supercomputing Center,
Research Center Juelich 7LAION 8Stanford University 9Tel Aviv University 10TU Munich 11Contextual AI
|
2305.01625.pdf | Unlimiformer: Long-Range Transformers with Unlimited Length Input
Amanda Bertsch andUri Alon andGraham Neubig andMatthew R. Gormley
Carnegie Mellon University, USA
{abertsch,ualon,gneubig,mgormley}@cs.cmu.edu
Abstract
Transformer-based models typically have a
predefined bound to their input length, because
of their need to potentially attend to every to-
ken in the input. In this work, we propose
Unlimiformer: a general approach that can
wrap any existing pretrained encoder-decoder
transformer, and offload the attention compu-
tation across all layers to a single k-nearest-
neighbor index; this index can be kept on ei-
ther the GPU or CPU memory and queried in
sub-linear time. This way, we can index ex-
tremely long input sequences, while every at-
tention head in every decoder layer retrieves
its top- kkeys, instead of attending to every
key. We demonstrate Unlimiformer’s efficacy
on several long-document and multi-document
summarization benchmarks, showing that it
can summarize even 350k token-long inputs
from the BookSum dataset, without any input
truncation at test time. Unlimiformer improves
pretrained models such as BART (Lewis et al.,
2020a) and Longformer (Beltagy et al., 2020a)
by extending them to unlimited inputs without
additional learned weights and without modi-
fying their code. We make our code and mod-
els publicly available1.
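A toy sketch of the retrieval idea in the abstract (our own illustration using a brute-force index; the actual method wraps a pretrained encoder-decoder and a k-nearest-neighbor index such as faiss): each decoder attention query attends only to its top-k retrieved keys rather than to every encoded input token.

```python
import numpy as np

def knn_cross_attention(query, keys, values, k=16):
    # Retrieve the k highest-scoring keys (brute-force here) and attend only to them.
    scores = keys @ query                        # dot-product scores over all tokens
    topk = np.argpartition(-scores, k)[:k]       # indices of the k best keys
    weights = np.exp(scores[topk] - scores[topk].max())
    weights /= weights.sum()
    return weights @ values[topk]

# Index a very long input: 350k encoded tokens with 64-dim heads (random stand-ins).
rng = np.random.default_rng(0)
keys = rng.standard_normal((350_000, 64)).astype(np.float32)
values = rng.standard_normal((350_000, 64)).astype(np.float32)
query = rng.standard_normal(64).astype(np.float32)
print(knn_cross_attention(query, keys, values).shape)
```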
1 Introduction
Transformers (Vaswani et al., 2017) are the domi-
nant sequence-to-sequence architecture. Pretrained
transformers generally have a context window of
512 (e.g. BERT (Devlin et al., 2019)) or 1024 to-
kens (e.g. BART (Lewis et al., 2020b)), which
are sufficient lengths for many current conditional
generation datasets (XSum; Narayan et al., 2018)
(CNN/DM; Nallapati et al., 2016).
For inputs between 1k and 16k tokens, special-
ized long-context models have been developed.
These models employ clever techniques to spar-
sify or approximate attention (e.g. Longformer
1https://github.com/abertsch72/unlimiformer
[Figure 1 plot: average and maximum input lengths (in tokens, log scale) for XSum, CNN/DM, ArXiv, GovReport, WikiSum, NarrativeQA, and BookSum, against reference lines at 1024, 4096, and 16384 tokens.]
Figure 1: Long-range transformers can avoid input
truncation in some datasets; however, there are datasets
with inputs many times longer than these models’ max-
imum input length. The dotted lines represent three
common maximum input lengths for models; the bars
are the average or maximum input length in each
dataset, as indicated. Averages for datasets from Koh
et al. (2022).
(Beltagy et al., 2020b), Performers (Choroman-
ski et al., 2020)), allowing the maximum input
length to quadruple while remaining computation-
ally feasible. Datasets in this length include most
long-document summarization or question answer-
ing datasets, such as arXiv summarization (Cohan
et al., 2018).
But 16,384 is not the upper limit for the length of
context required for generation: tasks that involve
long narratives, such as book summarization (Kryś-
ciński et al., 2021) or narrative question-answering
(Kočiský et al., 2018), often have inputs exceeding
100k tokens . A challenge set for Wikipedia arti-
cle generation (Liu* et al., 2018) contains inputs
longer than 500k tokens. Open-domain tasks in
generative question answering could conceivably
synthesize information from even larger inputs, e.g. |
2312.06585.pdf | 2023-12-12
Beyond Human Data: Scaling Self-Training for
Problem-Solving with Language Models
Avi Singh1,*, John D Co-Reyes1,*, Rishabh Agarwal1,2,*,
Ankesh Anand1, Piyush Patil1, Peter J. Liu1, James Harrison1, Jaehoon Lee1, Kelvin Xu1,
Aaron Parisi1, Abhishek Kumar1, Alex Alemi1, Alex Rizkowsky1, Azade Nova1, Ben Adlam1, Bernd Bohnet1,
Hanie Sedghi1, Igor Mordatch1, Isabelle Simpson1, Izzeddin Gur1, Jasper Snoek1, Jeffrey Pennington1, Jiri
Hron1, Kathleen Kenealy1, Kevin Swersky1, Kshiteej Mahajan1, Laura Culp1, Lechao Xiao1, Maxwell L
Bileschi1, Noah Constant1, Roman Novak1, Rosanne Liu1, Tris Warkentin1, Yundi Qian1,
Ethan Dyer1, Behnam Neyshabur1, Jascha Sohl-Dickstein1, Noah Fiedel1
*Contributed equally,1Google DeepMind,2Mila
Fine-tuning language models (LMs) on human-generated data remains a prevalent practice. However,
the performance of such models is often limited by the quantity and diversity of high-quality human data.
In this paper, we explore whether we can go beyond human data on tasks where we have access to scalar
feedback, for example, on math problems where one can verify correctness. To do so, we investigate a
simple self-training method based on expectation-maximization, which we call ReST𝐸𝑀, where we (1)
generate samples from the model and filter them using binary feedback, (2) fine-tune the model on
these samples, and (3) repeat this process a few times. Testing on advanced MATH reasoning and APPS
coding benchmarks using PaLM-2 models, we find that ReST𝐸𝑀 scales favorably with model size and
significantly surpasses fine-tuning only on human data. Overall, our findings suggest self-training with
feedback can substantially reduce dependence on human-generated data.
Keywords: RL from external feedback, EM for RL, Language, LLMs, Reasoning, Coding, Self-Improvement
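A schematic of the generate-filter-fine-tune loop described in the abstract (our pseudocode-level sketch; `sample`, `reward`, and `finetune` are placeholder callables, and the toy "model" below is ours, not the paper's):

```python
import random

def rest_em(problems, sample, reward, finetune, model, iters=3, k=8):
    # E-step: draw k samples per problem and keep only those with positive
    # binary reward. M-step: fine-tune the model on the filtered set. Repeat.
    for _ in range(iters):
        dataset = [(x, y) for x in problems
                   for y in (sample(model, x) for _ in range(k)) if reward(x, y) > 0]
        model = finetune(model, dataset)
    return model

# Toy stand-ins so the skeleton runs end to end: the "model" is a distribution
# over two candidate answer rules, and "fine-tuning" re-weights it toward the
# rule whose samples were rewarded.
rules = [lambda x: x[0] + x[1], lambda x: x[0] * x[1]]

def sample(model, x):
    rule = random.choices([0, 1], weights=model)[0]
    return (rule, rules[rule](x))

def reward(x, y):
    return 1.0 if y[1] == x[0] + x[1] else 0.0

def finetune(model, data):
    counts = [1 + sum(1 for _, (rule, _) in data if rule == r) for r in (0, 1)]
    return [c / sum(counts) for c in counts]

print(rest_em([(2, 3), (5, 7)], sample, reward, finetune, model=[0.5, 0.5]))
```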
1. Introduction
Large Language Models (LLMs) are revolutionizing the landscape of deep learning, showcasing
remarkable capabilities in generating human-quality text and tackling diverse language tasks (Google
et al., 2023; OpenAI, 2023). While supervised fine-tuning (SFT) on human-collected data further
boosts their performance on tasks of interest, acquiring high-quality human data poses a significant
bottleneck. This is particularly demanding for complex problem-solving tasks, requiring significant
resources and expert knowledge. To address this hurdle, model-generated synthetic data emerges as
a promising alternative, offering scalability and cost-effectiveness, provided its quality can be ensured.
While LLMs hold the potential to self-evaluate generated data, this paper explores a simpler setting
where an external, scalar feedback signal serves as a quality indicator for each generated sample.
To investigate training on model-generated data, we consider a simple yet powerful self-training
approach for language models that requires only two capabilities: 1) generating samples from the
model and 2) evaluating these samples with a scoring mechanism. To ensure clarity and consistency,
we adopt the terminology of Reinforced Self-Training (Gulcehre et al., 2023) and call this approach
ReST𝐸𝑀. We show that ReST𝐸𝑀 can be viewed as applying expectation-maximization for reinforcement
learning (Dayan and Hinton, 1997; Peters and Schaal, 2007), which we present formally in Section 3.
Specifically, ReST𝐸𝑀alternates between the expectation and maximization steps:
1.Generate (E-step): The language model generates multiple output samples for each input
Corresponding author(s): singhavi@google.com, jcoreyes@google.com, rishabhagarwal@google.com
©2023 Google DeepMind. All rights reserved. |
23-0037.pdf | Journal of Machine Learning Research 24 (2023) 1-43 Submitted 1/23; Revised 7/23; Published 7/23
Atlas : Few-shot Learning with
Retrieval Augmented Language Models
Gautier Izacard1,2,∗,†gautier@inflection.ai
Patrick Lewis1,∗,†patrick@cohere.com
Maria Lomeli1marialomeli@meta.com
Lucas Hosseini1,†hoss@meta.com
Fabio Petroni1,†fabiopetroni@meta.com
Timo Schick1,†schick@meta.com
Jane Dwivedi-Yu1janeyu@meta.com
Armand Joulin1,†ajoulin@meta.com
Sebastian Riedel1,3,†sriedel@meta.com
Edouard Grave1,†egrave@meta.com
1Meta AI,2ENS, PSL University & Inria,3University College London
Editor: Ivan Titov
Abstract
Large language models have shown impressive few-shot results on a wide range of tasks.
However, when knowledge is key for such results, as is the case for tasks such as question
answering and fact checking, massive parameter counts to store knowledge seem to be needed.
Retrieval-augmented models are known to excel at knowledge intensive tasks without the
need for as many parameters, but it is unclear whether they work in few-shot settings.
In this work we present Atlas, a carefully designed and pre-trained retrieval-augmented
language model able to learn knowledge intensive tasks with very few training examples.
We perform evaluations on a wide range of tasks, including MMLU, KILT and Natural
Questions, and study the impact of the content of the document index, showing that it can
easily be updated. Notably, Atlas reaches over 42% accuracy on Natural Questions using
only 64 examples, outperforming a 540B parameter model by 3% despite having 50x fewer
parameters.
Keywords: retrieval augmented language models, information retrieval, language models
1. Introduction
Large language models (LLMs) are impressive few-shot learners (Brown et al., 2020; Rae
et al., 2021; Hoffmann et al., 2022; Chowdhery et al., 2022). They are able to learn new
tasks with very few examples or even from instructions alone. For this generalisation ability
to emerge, the key ingredients are scaling both the parameter count of the model, and the
size of the training data. Large language models owe this improvement to both a larger
computational budget, enabling more complex reasoning, and the ability to memorize more
∗. Equal contribution
†. Work done while at Meta AI
c⃝2023 Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu,
Armand Joulin, Sebastian Riedel, Edouard Grave.
License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/ . Attribution requirements are provided
athttp://jmlr.org/papers/v24/23-0037.html . |
10.1101.2022.12.21.521526.pdf | A high-level programming language for generative protein design
Brian Hie12 *Salvatore Candido1 *Zeming Lin1 3Ori Kabeli1
Roshan Rao1Nikita Smetanin1Tom Sercu1Alexander Rives1 4 †
Abstract
Combining a basic set of building blocks into
more complex forms is a universal design princi-
ple. Most protein designs have proceeded from a
manual bottom-up approach using parts created
by nature, but top-down design of proteins is fun-
damentally hard due to biological complexity. We
demonstrate how the modularity and programma-
bility long sought for protein design can be re-
alized through generative artificial intelligence.
Advanced protein language models demonstrate
emergent learning of atomic resolution structure
and protein design principles. We leverage these
developments to enable the programmable design
of de novo protein sequences and structures of
high complexity. First, we describe a high-level
programming language based on modular build-
ing blocks that allows a designer to easily com-
pose a set of desired properties. We then develop
an energy-based generative model, built on atomic
resolution structure prediction with a language
model, that realizes all-atom structure designs that
have the programmed properties. Designing a di-
verse set of specifications, including constraints
on atomic coordinates, secondary structure, sym-
metry, and multimerization, demonstrates the gen-
erality and controllability of the approach. Enu-
merating constraints at increasing levels of hier-
archical complexity shows that the approach can
access a combinatorially large design space.
Introduction
Protein design would benefit from the regularity, simplicity,
and programmability provided by a basic set of abstractions
(1–4) like those used in the engineering of buildings, ma-
*Equal contribution1Meta Fundamental AI Research Protein
Team (FAIR).2Stanford University. Work performed as a vis-
iting researcher at Meta AI.3New York University. Work per-
formed as a visiting researcher at Meta AI.4New York University.
†Correspondence to <arives@meta.com>.
Preprint. Copyright 2022 by the authors.
chines, circuits, and computer software. But unlike these
artificial creations, proteins cannot be decomposed into eas-
ily recombinable parts because the local structure of the
sequence is entangled in its global context ( 5,6). Classical
de novo protein design has attempted to determine a funda-
mental set of structural building blocks, which could then be
assembled into higher-order structures ( 7–11). Likewise, tra-
ditional protein engineering often recombines segments or
domains of natural protein sequences into hybrid chimeras
(12–14). However, existing approaches have not been able
to achieve the high combinatorial complexity that is neces-
sary for true programmability.
We show modern generative models realize these classical
goals of modularity and programmability at a new level of
combinatorial complexity. Our idea is to place the modu-
larity and programmability at a higher level of abstraction,
where a generative model bridges the gap between human
intuition and the production of specific sequences and struc-
tures. In this setting, the protein designer needs only to
recombine high-level directives, while the task of obtain-
ing a protein that fulfills those directives is placed on the
generative model.
We propose a programming language for generative protein
design, which allows a designer to specify intuitive, mod-
ular, and hierarchical programs. We show that high-level
programs can be translated into low-level sequences and
structures by a generative model. Our approach leverages
advances in protein language models, which learn structural
information ( 15,16) and the design principles of proteins
(see accompanying paper by Verkuil et al.).
In this study, our specific implementation is based on an
energy-based generative model. First, a protein designer
specifies a high-level program consisting of a set of hier-
archically organized constraints (Figure 1A). Then, this
program compiles to an energy function that evaluates com-
patibility with the constraints, which can be arbitrary and
non-differentiable (Figure 1B). We apply constraints on
structure by incorporating atomic-level structure predictions,
enabled by a language model, into the energy function. This
approach enables the generation of a wide set of complex
designs (Figure 1C).
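As a schematic of how a constraint program can compile to an energy function and drive sequence search (our own minimal illustration; the toy constraints, weights, and annealing loop are placeholders, not the authors' implementation):

```python
import math
import random

AAS = "ACDEFGHIKLMNPQRSTVWY"

def compile_energy(program):
    # A "program" here is a list of (weight, constraint) pairs; the compiled
    # energy is their weighted sum. Constraints may be arbitrary black boxes
    # (e.g. calls into a structure predictor), so no gradients are assumed.
    return lambda seq: sum(w * c(seq) for w, c in program)

def anneal(energy, length=60, steps=2000, temp=1.0, rng=random.Random(0)):
    # Simulated annealing over sequences as a stand-in for an energy-based sampler.
    seq = [rng.choice(AAS) for _ in range(length)]
    e = energy(seq)
    for t in range(steps):
        i = rng.randrange(length)
        proposal = seq[:i] + [rng.choice(AAS)] + seq[i + 1:]
        e_new = energy(proposal)
        cur_temp = max(temp * (1 - t / steps), 1e-3)
        if e_new < e or rng.random() < math.exp(-(e_new - e) / cur_temp):
            seq, e = proposal, e_new
    return "".join(seq), e

# Toy constraints standing in for structural terms (symmetry, coordination, ...).
hydrophobic_fraction = lambda s: abs(sum(a in "AILMFVW" for a in s) / len(s) - 0.4)
no_cysteines = lambda s: s.count("C") / len(s)
print(anneal(compile_energy([(1.0, hydrophobic_fraction), (2.0, no_cysteines)])))
```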
The use of a high-level language allows the protein designer |
2403.03950.pdf | Stop Regressing: Training Value Functions via
Classification for Scalable Deep RL
Jesse Farebrother1,2,*, Jordi Orbay1,†, Quan Vuong1,†, Adrien Ali Taïga1,†, Yevgen Chebotar1, Ted Xiao1, Alex
Irpan1, Sergey Levine1, Pablo Samuel Castro1,3,†, Aleksandra Faust1, Aviral Kumar1,†, Rishabh Agarwal1,3,*
*Equal Contribution,†Core Contribution,1Google DeepMind,2Mila, McGill University,3Mila, Université de Montréal
Value functions are a central component of deep reinforcement learning (RL). These functions, param-
eterized by neural networks, are trained using a mean squared error regression objective to match
bootstrapped target values. However, scaling value-based RL methods that use regression to large
networks, such as high-capacity Transformers, has proven challenging. This difficulty is in stark contrast
to supervised learning: by leveraging a cross-entropy classification loss, supervised methods have scaled
reliably to massive networks. Observing this discrepancy, in this paper, we investigate whether the scala-
bility of deep RL can also be improved simply by using classification in place of regression for training
value functions. We demonstrate that value functions trained with categorical cross-entropy significantly
improves performance and scalability in a variety of domains. These include: single-task RL on Atari
2600 games with SoftMoEs, multi-task RL on Atari with large-scale ResNets, robotic manipulation with
Q-transformers, playing Chess without search, and a language-agent Wordle task with high-capacity
Transformers, achieving state-of-the-art results on these domains. Through careful analysis, we show
that the benefits of categorical cross-entropy primarily stem from its ability to mitigate issues inherent
to value-based RL, such as noisy targets and non-stationarity. Overall, we argue that a simple shift
to training value functions with categorical cross-entropy can yield substantial improvements in the
scalability of deep RL at little-to-no cost.
1. Introduction
A clear pattern emerges in deep learning breakthroughs – from AlexNet (Krizhevsky et al., 2012) to
Transformers (Vaswani et al., 2017) – classification problems seem to be particularly amenable to
effective training with large neural networks. Even in scenarios where a regression approach appears
natural, framing the problem instead as a classification problem often improves performance (Torgo
and Gama, 1996; Rothe et al., 2018; Rogez et al., 2019). This involves converting real-valued targets
into categorical labels and minimizing categorical cross-entropy rather than the mean-squared error.
Several hypotheses have been put forward to explain the superiority of this approach, including
stable gradients (Imani and White, 2018; Imani et al., 2024), better representations (Zhang et al.,
2023), implicit bias (Stewart et al., 2023), and dealing with imbalanced data (Pintea et al., 2023) –
suggesting their potential utility beyond supervised regression.
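As a concrete illustration of the real-to-categorical conversion discussed above (our sketch of one common "two-hot" scheme; histogram-based alternatives with Gaussian smoothing exist as well), a scalar regression target is spread over two neighbouring bins and the model is trained with cross-entropy instead of mean-squared error.

```python
import torch
import torch.nn.functional as F

def two_hot(targets, v_min=-10.0, v_max=10.0, n_bins=51):
    # Split each scalar target's probability mass between the two neighbouring
    # bins so that the expectation over the support recovers the original value.
    support = torch.linspace(v_min, v_max, n_bins)
    targets = targets.clamp(v_min, v_max)
    pos = (targets - v_min) / (v_max - v_min) * (n_bins - 1)
    lo, frac = pos.floor().long(), pos - pos.floor()
    probs = torch.zeros(targets.shape[0], n_bins)
    probs.scatter_(1, lo.unsqueeze(1), (1 - frac).unsqueeze(1))
    probs.scatter_add_(1, (lo + 1).clamp(max=n_bins - 1).unsqueeze(1), frac.unsqueeze(1))
    return probs, support

# Train a value head with soft-label cross-entropy against scalar targets.
logits = torch.randn(8, 51, requires_grad=True)         # value-head output
scalar_targets = torch.randn(8) * 3.0                    # e.g. bootstrapped TD targets
probs, support = two_hot(scalar_targets)
loss = F.cross_entropy(logits, probs)
loss.backward()
value_estimate = (logits.softmax(-1) * support).sum(-1)  # read out scalar values
```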
Unlike trends in supervised learning, value-based reinforcement learning (RL) methods primarily
rely on regression. For example, deep RL methods such as deep Q-learning (Mnih et al., 2015) and
actor-critic (Mnih et al., 2016) use a regression loss, such as mean-squared error, to train a value
function from continuous scalar targets. While these value-based deep RL methods, powered by
regression losses, have led to high-profile results (Silver et al., 2017), it has been challenging to scale
them up to large networks, such as high-capacity transformers. This lack of scalability has been
attributed to several issues (Kumar et al., 2021, 2022; Agarwal et al., 2021; Lyle et al., 2022; Le Lan
et al., 2023; Obando-Ceron et al., 2024), but what if simply reframing the regression problem as
classification can enable the same level of scalability achieved in supervised learning?
Corresponding author(s): jfarebro@cs.mcgill.ca, aviralkumar@google.com, rishabhagarwal@google.com
2405.00675v1.pdf | Self-Play Preference Optimization for Language Model
Alignment
Yue Wu∗†Zhiqing Sun∗‡Huizhuo Yuan∗§Kaixuan Ji¶Yiming Yang‖Quanquan Gu∗∗
Abstract
Traditional reinforcement learning from human feedback (RLHF) approaches relying on
parametric models like the Bradley-Terry model fall short in capturing the intransitivity and
irrationality in human preferences. Recent advancements suggest that directly working with
preference probabilities can yield a more accurate reflection of human preferences, enabling
more flexible and accurate language model alignment. In this paper, we propose a self-play-
based method for language model alignment, which treats the problem as a constant-sum
two-player game aimed at identifying the Nash equilibrium policy. Our approach, dubbed
Self-Play Preference Optimization (SPPO), approximates the Nash equilibrium through iterative
policy updates and enjoys theoretical convergence guarantee. Our method can effectively increase
the log-likelihood of the chosen response and decrease that of the rejected response, which cannot
be trivially achieved by symmetric pairwise loss such as Direct Preference Optimization (DPO)
and Identity Preference Optimization (IPO). In our experiments, using only 60k prompts (without
responses) from the UltraFeedback dataset and without any prompt augmentation, by leveraging
a pre-trained preference model PairRM with only 0.4B parameters, SPPO can obtain a model
from fine-tuning Mistral-7B-Instruct-v0.2 that achieves the state-of-the-art length-controlled
win-rate of 28.53% against GPT-4-Turbo on AlpacaEval 2.0. It also outperforms the (iterative)
DPO and IPO on MT-Bench and the Open LLM Leaderboard. Notably, the strong performance
of SPPO is achieved without additional external supervision (e.g., responses, preferences, etc.)
from GPT-4 or other stronger language models.
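A small sketch of the kind of per-response update suggested by the abstract (our reading, not necessarily the paper's exact objective; the squared-error form, the value of eta, and the numbers below are assumptions): the policy's log-ratio against the previous iterate is regressed toward a scaled, centered win probability from a preference model, so winning responses gain likelihood and losing ones lose it.

```python
import torch

def sppo_style_loss(logp_theta, logp_prev, win_prob, eta=5.0):
    # logp_theta / logp_prev: response log-probs under the current and previous
    # policies; win_prob: estimated probability the response beats the current
    # policy, e.g. from a small preference model such as PairRM (assumed usage).
    return ((logp_theta - logp_prev - eta * (win_prob - 0.5)) ** 2).mean()

# Toy usage with fake values.
logp_theta = torch.tensor([-32.0, -40.0, -28.0], requires_grad=True)
logp_prev = torch.tensor([-33.0, -39.0, -29.0])
win_prob = torch.tensor([0.70, 0.20, 0.55])
loss = sppo_style_loss(logp_theta, logp_prev, win_prob)
loss.backward()
```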
1 Introduction
Large Language Models (LLMs) (e.g., Ouyang et al., 2022; OpenAI et al., 2023), have shown
remarkable capabilities in producing human-like text, fielding questions, and coding. Despite
∗Equal contribution
†Department of Computer Science, University of California, Los Angeles, Los Angeles, CA 90095; e-mail:
ywu@cs.ucla.edu
‡Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA 15213; e-mail: zhiqings@cs.cmu.edu
§Department of Computer Science, University of California, Los Angeles, Los Angeles, CA 90095; e-mail:
hzyuan@cs.ucla.edu
¶Department of Computer Science, University of California, Los Angeles, Los Angeles, CA 90095; e-mail:
kauxuanji@cs.ucla.edu
‖Language Technologies Institute & Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA
15213; e-mail: yiming@cs.cmu.edu
∗∗Department of Computer Science, University of California, Los Angeles, Los Angeles, CA 90095; e-mail:
qgu@cs.ucla.edu
|
2402.04494.pdf | Grandmaster-Level Chess Without Search
Anian Ruoss*,1, Grégoire Delétang*,1, Sourabh Medapati1, Jordi Grau-Moya1, Li Kevin Wenliang1, Elliot Catt1,
John Reid1and Tim Genewein1
*Equal contributions,1Google DeepMind
The recent breakthrough successes in machine learning are mainly attributed to scale: namely large-
scale attention-based architectures and datasets of unprecedented scale. This paper investigates the
impact of training at scale for chess. Unlike traditional chess engines that rely on complex heuristics,
explicit search, or a combination of both, we train a 270M parameter transformer model with supervised
learning on a dataset of 10 million chess games. We annotate each board in the dataset with action-values
provided by the powerful Stockfish 16 engine, leading to roughly 15 billion data points. Our largest
model reaches a Lichess blitz Elo of 2895 against humans, and successfully solves a series of challenging
chess puzzles, without any domain-specific tweaks or explicit search algorithms. We also show that our
model outperforms AlphaZero’s policy and value networks (without MCTS) and GPT-3.5-turbo-instruct.
A systematic investigation of model and dataset size shows that strong chess performance only arises at
sufficient scale. To validate our results, we perform an extensive series of ablations of design choices
and hyperparameters.
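As a concrete illustration of the supervised recipe in the abstract (our own sketch; the feature encoding, bin count, and small network are assumptions, not the paper's 270M-parameter architecture), a model is trained with cross-entropy to predict binned Stockfish win-percentages, and the policy simply plays the highest-value legal move, with no search.

```python
import torch
import torch.nn as nn

class ActionValuePredictor(nn.Module):
    # Maps an encoded (board, move) pair to logits over discretized win-percentage bins.
    def __init__(self, input_dim=128, n_bins=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU(), nn.Linear(256, n_bins))

    def forward(self, board_move_features):
        return self.net(board_move_features)

model = ActionValuePredictor()
opt = torch.optim.Adam(model.parameters(), lr=3e-4)

# Supervised step: random tensors stand in for encoded positions and binned
# Stockfish win-percentage annotations.
x = torch.randn(64, 128)
stockfish_bins = torch.randint(0, 32, (64,))
loss = nn.functional.cross_entropy(model(x), stockfish_bins)
opt.zero_grad(); loss.backward(); opt.step()

# Search-free policy: score every legal move's features and play the best one.
with torch.no_grad():
    bin_values = torch.linspace(0.0, 1.0, 32)            # bin index -> win probability
    legal_move_features = torch.randn(20, 128)            # 20 candidate moves
    expected_win = model(legal_move_features).softmax(-1) @ bin_values
    best_move = int(expected_win.argmax())
```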
1. Introduction
One of the most iconic successes of AI is IBM’s Deep
Blue (Campbell et al., 2002) defeating the world chess
champion Garry Kasparov in 1997. This was widely
seen as the first major demonstration that machines
are capable of out-competing humans in intellectual
domains that require sophisticated rational reason-
ing and strategic planning—feats of intelligence that
were long believed to be exclusive to humans. Deep
Blue was an expert system that combined an exten-
sive database of chess knowledge and heuristics with
a strong tree search algorithm (alpha-beta pruning).
Almost all modern and much stronger chess engines
follow a similar recipe, with Stockfish 16 currently be-
ing the world’s strongest (publicly available) engine.
Notable exceptions are DeepMind’s AlphaZero (Sil-
ver et al., 2017), which uses search and self-taught
heuristics but no human chess knowledge, and its
open-source replication Leela Chess Zero, which cur-
rently often comes in as a close second in chess com-
puter competitions (Haworth and Hernandez, 2021).
Recent breakthroughs in scaling up AI systems have
resulted in dramatic progress in cognitive domains
that remained challenging for earlier-generation sys-
tems like Deep Blue. This progress has been driven
by general-purpose techniques, in particular (self-) su-
pervised training on expert data with attention-based
architectures (Vaswani et al., 2017) applied at scale,
resulting in the development of LLMs with impres-
sive and unexpected cognitive abilities like OpenAI’s
GPT series (Brown et al., 2020; OpenAI, 2023), the LLaMA family of models (Touvron et al., 2023a,b),
or Google DeepMind’s Chinchilla (Hoffmann et al.,
2022) and Gemini (Anil et al., 2023). However, it is
unclear whether the same technique would work in a
domain like chess, where successful policies typically
rely on sophisticated algorithmic reasoning (search,
dynamic programming) and complex heuristics. Thus,
the main question of this paper is: Is it possible to use
supervised learning to obtain a chess policy that gener-
alizes well and thus leads to strong play without explicit
search?
To study this question we apply the success recipe
of general supervised training at scale to chess (see
Figure 1). We use a standard attention-based archi-
tecture and a standard supervised training protocol to
learn to predict action-values (corresponding to win-
percentages) for chess boards. The strength of the
resulting chess policy thus depends entirely on the
strength of the underlying action-value predictor. To
get a large corpus of “ground-truth” action-values we
use Stockfish 16 as an oracle to annotate millions of
board states obtained from randomly drawn games on
lichess.org, which are mostly played by humans vary-
ing significantly in playing strength. As we will show
this leads to a strong, grandmaster-level chess policy
(Lichess blitz Elo 2895 against humans), driven by a
modern transformer to predict action-values without
any explicit search . This policy outperforms GPT-3.5-
turbo-instruct (and, therefore, GPT-4 (Carlini, 2023))
and AlphaZero’s policy and value networks, which
reach Elo ratings of 1755, 1620, and 1853, respec-
tively. Therefore, our work shows that it is possible
Corresponding author(s): {anianr,gdelt}@google.com
©2024 Google DeepMind. All rights reserved. |
10.1016.j.cell.2023.12.016.pdf | Article
Inherited blood cancer predisposition through
altered transcription elongation
Graphical abstract
Highlights
• Inherited CTR9 loss-of-function variants predispose to the myeloid malignancies
• Partial, but not complete, loss of CTR9 expands human HSCs
• Partial CTR9 loss expands HSCs through increased transcription elongation
• Select PAF1 complex subunits interact with and activate the super elongation complex
Authors
Jiawei Zhao, Liam D. Cato, Uma P. Arora, ..., Seychelle M. Vos, Scott A. Armstrong, Vijay G. Sankaran
Correspondence
jw.zhao3@siat.ac.cn (J.Z.),
sankaran@broadinstitute.org (V.G.S.)
In brief
Rare inherited CTR9 loss-of-function variants predispose to myeloid malignancies by altering the balance between the PAF1 and super elongation complexes. Specific subunits of the PAF1 complex then act in concert with the super elongation complex to promote transcription elongation of genes that can drive the self-renewal of hematopoietic stem cells.
Zhao et al., 2024, Cell 187, 642–658
February 1, 2024 ©2023 The Author(s). Published by Elsevier Inc.
https://doi.org/10.1016/j.cell.2023.12.016
|
2311.00088.pdf | Random coordinate descent: a simple alternative for
optimizing parameterized quantum circuits
Zhiyan Ding∗1, Taehee Ko†‡2, Jiahao Yao§1, Lin Lin¶1,4,5, and Xiantao Li‖3
1Department of Mathematics, University of California, Berkeley
2School of Computational Sciences, Korea Institute for Advanced Study
3Department of Mathematics, Pennsylvania State University
4Applied Mathematics and Computational Research Division, Lawrence Berkeley
National Laboratory
5Challenge Institute for Quantum Computation, University of California, Berkeley
November 2, 2023
Abstract
Variational quantum algorithms rely on the optimization of parameterized quantum
circuits in noisy settings. The commonly used back-propagation procedure in classical
machine learning is not directly applicable in this setting due to the collapse of quan-
tum states after measurements. Thus, gradient estimations constitute a significant
overhead in a gradient-based optimization of such quantum circuits. This paper in-
troduces a random coordinate descent algorithm as a practical and easy-to-implement
alternative to the full gradient descent algorithm. This algorithm only requires one
partial derivative at each iteration. Motivated by the behavior of measurement noise
in the practical optimization of parameterized quantum circuits, this paper presents
an optimization problem setting that is amenable to analysis. Under this setting, the
random coordinate descent algorithm exhibits the same level of stochastic stability as
the full gradient approach, making it as resilient to noise. The complexity of the ran-
dom coordinate descent method is generally no worse than that of the gradient descent
and can be much better for various quantum optimization problems with anisotropic
Lipschitz constants. Theoretical analysis and extensive numerical experiments validate
our findings.
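To make the algorithmic idea concrete, the following is a minimal illustrative sketch (not the authors' implementation) of random coordinate descent on a generic objective with noisy partial derivatives; the toy loss, the finite-difference estimator, the noise level, and the step size are assumptions made purely for illustration.

# A minimal sketch contrasting random coordinate descent with full gradient
# descent: each iteration updates one randomly chosen parameter using a single
# noisy partial derivative. All quantities below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):
    # Stand-in for the expectation value of a parameterized quantum circuit.
    return np.sum(np.sin(theta) ** 2)

def noisy_partial(theta, j, eps=1e-3, sigma=0.05):
    # Central finite difference on coordinate j, plus measurement-like noise.
    e = np.zeros_like(theta); e[j] = eps
    grad_j = (loss(theta + e) - loss(theta - e)) / (2 * eps)
    return grad_j + sigma * rng.standard_normal()

def random_coordinate_descent(theta0, lr=0.2, steps=2000):
    theta = theta0.copy()
    for _ in range(steps):
        j = rng.integers(theta.size)          # one coordinate per iteration
        theta[j] -= lr * noisy_partial(theta, j)
    return theta

theta = random_coordinate_descent(rng.uniform(-np.pi, np.pi, size=8))
print(loss(theta))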
∗zding.m@math.berkeley.edu
†kthmomo@kias.re.kr
‡Ding and Ko are co-first authors with equal contribution.
§jiahao@math.berkeley.edu
¶linlin@math.berkeley.edu
‖xiantao.li@psu.edu
1arXiv:2311.00088v1 [quant-ph] 31 Oct 2023 |
Avik-Manuscript-SI-Combined.pdf |
Kinetic co-evolutionary models predict the temporal emergence of HIV resistance
mutations under drug selection pressure
Avik Biswas1,3,5†, Indrani Choudhuri2,3†, Eddy Arnold,4 Dmitry Lyumkis5,6, Allan
Haldane1,3*, Ronald M. Levy2,3*
1Department of Physics, Temple University, Philadelphia, PA, USA
2Department of Chemistry, Temple University, Philadelphia, PA, USA
3Center for Biophysics and Computational Biology, Temple University, Philadelphia, PA, USA
1925 N. 12th Street, Philadelphia, PA 19122, USA
4Center for Advanced Biotechnology and Medicine, Department of Chemistry and Chemical
Biology, Rutgers University, Piscataway, NJ 08854, USA
5Laboratory of Genetics, Salk Institute for Biological Studies, La Jolla, CA, 92037, USA
6Graduate schools for Biological Sciences, Section of Molecular Biology, University of
California, San Diego, La Jolla, CA, 92093, USA
†Both authors contributed equally to this work
*To whom correspondence should be addressed
*allan.haldane@temple.edu
*ronlevy@temple.edu
Abstract
Drug resistance in human immunodeficiency virus (HIV) is a pervasive problem that affects
the lives of millions of people worldwide. Although records of drug-resistant mutations (DRMs)
have been extensively tabulated within public repositories, our understanding of the
evolutionary kinetics of DRMs and how they evolve together remains limited. Epistasis, the
interactions between a DRM and other residues in HIV protein sequences, is found to be key to the temporal evolution of drug resistance. We use a Potts sequence-covariation statistical-energy model of HIV protein fitness under drug selection pressure, which captures epistatic interactions between all positions, combined with kinetic Monte-Carlo simulations of sequence evolutionary trajectories, to explore the acquisition of DRMs as they arise in an ensemble of drug-naïve patient protein sequences. We follow the time course of 52 DRMs in the enzymes protease, reverse transcriptase, and integrase, the primary targets of antiretroviral therapy (ART). The rates at which DRMs emerge are highly correlated with their observed acquisition rates reported in the literature when drug pressure is applied. This result highlights the central role of epistasis in determining the kinetics governing DRM emergence. Whereas rapidly acquired DRMs begin to accumulate as soon as drug pressure is applied, slowly acquired DRMs are contingent on accessory mutations that appear only after prolonged drug pressure. We provide a foundation for using computational methods to determine the temporal evolution of drug resistance using Potts statistical potentials, which can be used to gain mechanistic insights into drug resistance pathways in HIV and other infectious agents.
Keywords: HIV, epistasis, drug-resistance mutation (DRM), kinetic Monte-Carlo (KMC), timeline of resistance
|
2304.05187.pdf | Automatic Gradient Descent:
Deep Learning without Hyperparameters
Jeremy Bernstein‹ (MIT), Chris Mingard‹ (U. Oxford), Kevin Huang (U. Washington), Navid Azizan (MIT), Yisong Yue (Caltech)
‹ denotes equal contribution.
Abstract
The architecture of a deep neural network is defined explicitly in terms of the number of layers,
the width of each layer and the general network topology. Existing optimisation frameworks
neglect this information in favour of implicit architectural information (e.g. second-order
methods) or architecture-agnostic distance functions (e.g. mirror descent). Meanwhile, the
most popular optimiser in practice—Adam—is based on heuristics. This paper builds a new
framework for deriving optimisation algorithms that explicitly leverage neural architecture.
The theory extends mirror descent to non-convex composite objective functions: the idea is to
transform a Bregman divergence to account for the non-linear structure of neural architecture.
Working through the details for deep fully-connected networks yields automatic gradient
descent: a first-order optimiser without any hyperparameters. Automatic gradient descent
trains both fully-connected and convolutional networks out-of-the-box and at ImageNet
scale. A PyTorch implementation is available at https://github.com/jxbz/agd and also in
Appendix B. Overall, the paper supplies a rigorous theoretical foundation for a next-generation
of architecture-dependent optimisers that work automatically and without hyperparameters.
Keywords: majorise-minimise meta-algorithm, operator perturbation theory, architecture-aware optimisation
Contents
1 Introduction 2
1.1 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Majorise-Minimise for Generic Learning Problems 5
2.1 Decomposition of linearisation error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Functional expansion and functional majorisation . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3 Recovering existing frameworks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3 Majorise-Minimise for Deep Learning Problems 8
3.1 Deriving automatic gradient descent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.2 Convergence analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.3 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
4 Discussion 12
A Proofs 18
B PyTorch Implementation 23arXiv:2304.05187v1 [cs.LG] 11 Apr 2023 |
1610.02424.pdf | DIVERSE BEAM SEARCH :
DECODING DIVERSE SOLUTIONS FROM
NEURAL SEQUENCE MODELS
Ashwin K Vijayakumar1, Michael Cogswell1, Ramprasath R. Selvaraju1, Qing Sun1
Stefan Lee1, David Crandall2& Dhruv Batra1
{ashwinkv,cogswell,ram21,sunqing,steflee}@vt.edu
djcran@indiana.edu ,dbatra@vt.edu
1Department of Electrical and Computer Engineering,
Virginia Tech, Blacksburg, V A, USA
2School of Informatics and Computing
Indiana University, Bloomington, IN, USA
ABSTRACT
Neural sequence models are widely used to model time-series data. Equally ubiq-
uitous is the usage of beam search (BS) as an approximate inference algorithm
to decode output sequences from these models. BS explores the search space
in a greedy left-right fashion retaining only the top- Bcandidates – resulting in
sequences that differ only slightly from each other. Producing lists of nearly iden-
tical sequences is not only computationally wasteful but also typically fails to
capture the inherent ambiguity of complex AI tasks. To overcome this problem,
we propose Diverse Beam Search (DBS), an alternative to BS that decodes a list
of diverse outputs by optimizing for a diversity-augmented objective. We observe
that our method finds better top-1 solutions by controlling for the exploration and
exploitation of the search space – implying that DBS is a better search algorithm .
Moreover, these gains are achieved with minimal computational or memory over-
head as compared to beam search. To demonstrate the broad applicability of our
method, we present results on image captioning, machine translation and visual
question generation using both standard quantitative metrics and qualitative hu-
man studies. Further, we study the role of diversity for image-grounded language
generation tasks as the complexity of the image changes. We observe that our
method consistently outperforms BS and previously proposed techniques for di-
verse decoding from neural sequence models.
1 I NTRODUCTION
In the last few years, Recurrent Neural Networks (RNNs), Long Short-Term Memory networks
(LSTMs) or more generally, neural sequence models have become the standard choice for modeling
time-series data for a wide range of applications such as speech recognition (Graves et al., 2013),
machine translation (Bahdanau et al., 2014), conversation modeling (Vinyals & Le, 2015), image
and video captioning (Vinyals et al., 2015; Venugopalan et al., 2015), and visual question answering
(Antol et al., 2015). RNN based sequence generation architectures model the conditional probability,
Pr(y|x)of an output sequence y= (y1,...,yT)given an input x(possibly also a sequence); where
the output tokens ytare from a finite vocabulary, V.
Inference in RNNs. Maximum a Posteriori (MAP) inference for RNNs is the task of finding the
most likely output sequence given the input. Since the number of possible sequences grows as
|V|T, exact inference is NP-hard so approximate inference algorithms like Beam Search (BS) are
commonly employed. BS is a heuristic graph-search algorithm that maintains the Btop-scoring
partial sequences expanded in a greedy left-to-right fashion. Fig. 1 shows a sample BS search tree.
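As a concrete illustration of the procedure described above, the following is a minimal sketch of beam search over a toy autoregressive model; the toy scoring function stands in for a trained RNN/LSTM decoder, and Diverse Beam Search would additionally penalize candidate tokens already chosen by previously decoded beam groups at the same time step. The vocabulary, transition weights, and beam width are illustrative assumptions.

# A minimal sketch of beam search: keep the B highest-scoring partial
# sequences at each step, expanding them greedily left to right.
import numpy as np

VOCAB = 6      # toy vocabulary size
rng = np.random.default_rng(1)
W = rng.standard_normal((VOCAB, VOCAB))

def log_probs(seq):
    # Toy conditional distribution over the next token given the last token.
    logits = W[seq[-1]] if seq else np.zeros(VOCAB)
    return logits - np.logaddexp.reduce(logits)

def beam_search(B=3, T=5):
    beams = [([], 0.0)]                      # (partial sequence, log score)
    for _ in range(T):
        candidates = []
        for seq, score in beams:
            lp = log_probs(seq)
            for tok in range(VOCAB):
                candidates.append((seq + [tok], score + lp[tok]))
        # Retain only the top-B partial sequences.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:B]
    return beams

for seq, score in beam_search():
    print(seq, round(score, 3))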
1arXiv:1610.02424v2 [cs.AI] 22 Oct 2018 |
2310.09144.pdf | GOODHART ’SLAW IN REINFORCEMENT LEARNING
Jacek Karwowski
Department of Computer Science
University of Oxford
jacek.karwowski@cs.ox.ac.ukOliver Hayman
Department of Computer Science
University of Oxford
oliver.hayman@linacre.ox.ac.uk
Xingjian Bai
Department of Computer Science
University of Oxford
xingjian.bai@sjc.ox.ac.ukKlaus Kiendlhofer
Independent
klaus.kiendlhofer@gmail.com
Charlie Griffin
Department of Computer Science
University of Oxford
charlie.griffin@cs.ox.ac.ukJoar Skalse
Department of Computer Science
Future of Humanity Institute
University of Oxford
joar.skalse@cs.ox.ac.uk
ABSTRACT
Implementing a reward function that perfectly captures a complex task in the
real world is impractical. As a result, it is often appropriate to think of the
reward function as a proxy for the true objective rather than as its definition.
We study this phenomenon through the lens of Goodhart’s law , which predicts
that increasing optimisation of an imperfect proxy beyond some critical point
decreases performance on the true objective. First, we propose a way to quantify
the magnitude of this effect and show empirically that optimising an imperfect
proxy reward often leads to the behaviour predicted by Goodhart’s law for a
wide range of environments and reward functions. We then provide a geometric
explanation for why Goodhart’s law occurs in Markov decision processes. We
use these theoretical insights to propose an optimal early stopping method that
provably avoids the aforementioned pitfall and derive theoretical regret bounds for
this method. Moreover, we derive a training method that maximises worst-case
reward, for the setting where there is uncertainty about the true reward function.
Finally, we evaluate our early stopping method experimentally. Our results support
a foundation for a theoretically-principled study of reinforcement learning under
reward misspecification.
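The following toy sketch (not the paper's method or environments) illustrates the qualitative phenomenon on a three-armed bandit with a softmax policy: gradient ascent on a proxy reward first raises and then lowers the true reward, so stopping at the peak would be preferable. The reward vectors, learning rate, and step count are illustrative assumptions.

# Toy Goodhart illustration: optimize a proxy reward, track the true reward.
import numpy as np

r_proxy = np.array([1.0, 0.9, 0.0])   # proxy ranks arm 0 highest
r_true  = np.array([0.2, 1.0, 0.0])   # but arm 1 is truly best

theta = np.zeros(3)
best_true, best_step = -np.inf, 0
for step in range(2001):
    pi = np.exp(theta) / np.exp(theta).sum()          # softmax policy
    true_value = pi @ r_true
    if true_value > best_true:
        best_true, best_step = true_value, step
    # Policy-gradient ascent on the proxy objective E_pi[r_proxy].
    grad = pi * (r_proxy - pi @ r_proxy)
    theta += 0.5 * grad

pi = np.exp(theta) / np.exp(theta).sum()
print(f"final true reward: {pi @ r_true:.3f}")
print(f"peak true reward:  {best_true:.3f} at step {best_step}")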
1 I NTRODUCTION
To solve a problem using Reinforcement Learning (RL), it is necessary first to formalise that problem
using a reward function (Sutton & Barto, 2018). However, due to the complexity of many real-world
tasks, it is exceedingly difficult to directly specify a reward function that fully captures the task in
the intended way. However, misspecified reward functions will often lead to undesirable behaviour
(Paulus et al., 2018; Ibarz et al., 2018; Knox et al., 2023; Pan et al., 2021). This makes designing good
reward functions a major obstacle to using RL in practice, especially for safety-critical applications.
An increasingly popular solution is to learn reward functions from mechanisms such as human or
automated feedback (e.g. Christiano et al., 2017; Ng & Russell, 2000). However, this approach
comes with its own set of challenges: the right data can be difficult to collect (e.g. Paulus et al.,
2018), and it is often challenging to interpret it correctly (e.g. Mindermann & Armstrong, 2018;
Skalse & Abate, 2023). Moreover, optimising a policy against a learned reward model effectively
constitutes a distributional shift (Gao et al., 2023); i.e., even if a reward function is accurate under the
training distribution, it may fail to induce desirable behaviour from the RL agent.
1arXiv:2310.09144v1 [cs.LG] 13 Oct 2023 |
2308.13418.pdf | Nougat: Neural Optical Understanding for Academic Documents
Lukas Blecher∗Guillem Cucurull Thomas Scialom Robert Stojnic
Meta AI
Abstract
Scientific knowledge is predominantly stored in books and scientific journals, often in the form of
PDFs. However, the PDF format leads to a loss of semantic information, particularly for mathematical
expressions. We propose Nougat (Neural Optical Understanding for Academic Documents), a Visual
Transformer model that performs an Optical Character Recognition (OCR) task for processing
scientific documents into a markup language, and demonstrate the effectiveness of our model on a
new dataset of scientific documents. The proposed approach offers a promising solution to enhance
the accessibility of scientific knowledge in the digital age, by bridging the gap between human-
readable documents and machine-readable text. We release the models and code to accelerate future
work on scientific text recognition.
1 Introduction
The majority of scientific knowledge is stored in books or published in scientific journals, most commonly in the
Portable Document Format (PDF). Next to HTML, PDFs are the second most prominent data format on the internet,
making up 2.4% of common crawl [ 1]. However, the information stored in these files is very difficult to extract into
any other formats. This is especially true for highly specialized documents, such as scientific research papers, where the
semantic information of mathematical expressions is lost.
Existing Optical Character Recognition (OCR) engines, such as Tesseract OCR [ 2], excel at detecting and classifying
individual characters and words in an image, but fail to understand the relationship between them due to their line-by-line
approach. This means that they treat superscripts and subscripts in the same way as the surrounding text, which is a
significant drawback for mathematical expressions. In mathematical notations like fractions, exponents, and matrices,
relative positions of characters are crucial.
Converting academic research papers into machine-readable text also enables accessibility and searchability of science
as a whole. The information of millions of academic papers can not be fully accessed because they are locked behind
an unreadable format. Existing corpora, such as the S2ORC dataset [ 3], capture the text of 12M2papers using GROBID
[4], but are missing meaningful representations of the mathematical equations.
To this end, we introduce Nougat, a transformer based model that can convert images of document pages to formatted
markup text.
The primary contributions in this paper are
•Release of a pre-trained model capable of converting a PDF to a lightweight markup language. We release the
code and the model on GitHub3
• We introduce a pipeline to create a dataset for pairing PDFs to source code
• Our method is only dependent on the image of a page, allowing access to scanned papers and books
∗Correspondence to: lblecher@meta.com
2The paper reports 8.1M papers but the authors recently updated the numbers on the GitHub page https://github.com/allenai/s2orc
3https://github.com/facebookresearch/nougatarXiv:2308.13418v1 [cs.LG] 25 Aug 2023 |
1611.02731.pdf | Published as a conference paper at ICLR 2017
VARIATIONAL LOSSY AUTOENCODER
Xi Chen†‡, Diederik P. Kingma‡, Tim Salimans‡, Yan Duan†‡, Prafulla Dhariwal‡,
John Schulman†‡, Ilya Sutskever‡, Pieter Abbeel†‡
†UC Berkeley, Department of Electrical Engineering and Computer Science
‡OpenAI
{peter,dpkingma,tim,rocky,prafulla,joschu,ilyasu,pieter }@openai.com
ABSTRACT
Representation learning seeks to expose certain aspects of observed data in a
learned representation that’s amenable to downstream tasks like classification. For
instance, a good representation for 2D images might be one that describes only
global structure and discards information about detailed texture. In this paper,
we present a simple but principled method to learn such global representations
by combining Variational Autoencoder (V AE) with neural autoregressive models
such as RNN, MADE and PixelRNN/CNN. Our proposed V AE model allows us
to have control over what the global latent code can learn and by designing the
architecture accordingly, we can force the global latent code to discard irrelevant
information such as texture in 2D images, and hence the V AE only “autoencodes”
data in a lossy fashion. In addition, by leveraging autoregressive models as both
prior distribution p(z)and decoding distribution p(x|z), we can greatly improve
generative modeling performance of V AEs, achieving new state-of-the-art results
on MNIST, OMNIGLOT and Caltech-101 Silhouettes density estimation tasks as
well as competitive results on CIFAR10.
1 I NTRODUCTION
A key goal of representation learning is to identify and disentangle the underlying causal factors of
the data, so that it becomes easier to understand the data, to classify it, or to perform other tasks
(Bengio et al., 2013). For image data this often means that we are interested in uncovering the
“global structure” that captures the content of an image (for example, the identity of objects present
in the image) and its “style”, but that we are typically less interested in the local and high frequency
sources of variation such as the specific textures or white noise patterns.
A popular approach for learning representations is to fit a probabilistic latent variable model, an ap-
proach also known as analysis-by-synthesis (Yuille & Kersten, 2006; Nair et al., 2008). By learning
a generative model of the data with the appropriate hierarchical structure of latent variables, it is
hoped that the model will somehow uncover and untangle those causal sources of variations that
we happen to be interested in. However, without further assumptions, representation learning via
generative modeling is ill-posed: there are many different possible generative models with different
(or no) kinds of latent variables that all encode the same probability density function on our ob-
served data. Thus, the results we empirically get using this approach are highly dependent on the
specific architectural and modeling choices that are made. Moreover, the objective that we optimize
is often completely disconnected from the goal of learning a good representation: An autoregressive
model of the data may achieve the same log-likelihood as a variational autoencoder (V AE) (Kingma
& Welling, 2013), but the structure learned by the two models is completely different: the latter
typically has a clear hierarchy of latent variables, while the autoregressive model has no stochastic
latent variables at all (although it is conceivable that the deterministic hidden units of the autore-
gressive models will have meaningful and useful representations). For this reason, autoregressive
models have thus far not been popular for the purpose of learning representations, even though they
are extremely powerful as generative models (see e.g. van den Oord et al., 2016a).
A natural question becomes: is it possible to have a model that is a powerful density estimator
and at the same time has the right hierarchical structure for representation learning? A potential
solution would be to use a hybrid model that has both the latent variable structure of a V AE, as
1
arXiv:1611.02731v2 [cs.LG] 4 Mar 2017 |
2104.08253.pdf | Condenser: a Pre-training Architecture for Dense Retrieval
Luyu Gao and Jamie Callan
Language Technologies Institute
Carnegie Mellon University
{luyug, callan}@cs.cmu.edu
Abstract
Pre-trained Transformer language mod-
els (LM) have become go-to text represen-
tation encoders. Prior research fine-tunes
deep LMs to encode text sequences such
as sentences and passages into single dense
vector representations for efficient text
comparison and retrieval. However, dense
encoders require a lot of data and sophisti-
cated techniques to effectively train and suffer
in low data situations. This paper finds a
key reason is that standard LMs’ internal
attention structure is not ready-to-use for
dense encoders, which needs to aggregate text
information into the dense representation. We
propose to pre-train towards dense encoder
with a novel Transformer architecture, Con-
denser, where LM prediction CONditions on
DENSE Representation. Our experiments
show Condenser improves over standard LM
by large margins on various text retrieval and
similarity tasks.1
1 Introduction
Language model (LM) pre-training has been very
effective in learning text encoders that can be fine-
tuned for many downstream tasks (Peters et al.,
2018; Devlin et al., 2019). Deep bidirectional
Transformer encoder (Vaswani et al., 2017) LMs
like BERT (Devlin et al., 2019) are the state-of-
the-art. Recent works fine-tune the CLS token to
encode input text sequence into a single vector rep-
resentation (Lee et al., 2019; Chang et al., 2020;
Karpukhin et al., 2020). The resulting model is
referred to as dense encoder or bi-encoder. Fine-
tuning associates with vector similarities some
practical semantics, e.g., textual similarity or rel-
evance, and therefore the vectors can be used for
efficient text comparison or retrieval by inner prod-
uct. Despite their efficiency, bi-encoders are hard
to train. Even with sufficient data, bi-encoders still require carefully designed sophisticated methods to train effectively (Xiong et al., 2021; Qu et al., 2020; Lin et al., 2020).
1Code available at https://github.com/luyug/Condenser
They can also take big
performance hits in low data situations (Karpukhin
et al., 2020; Thakur et al., 2020; Chang et al., 2020).
Another common use of a deep LM is the cross-encoder: the compared text pair is passed in directly, and attention over all tokens is used to make the prediction. In contrast to the bi-encoder, the cross-encoder trains more easily and is effective in low-data settings for similarity and ranking tasks (Devlin et al., 2019; Yang et al., 2019).
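For concreteness, the following is a minimal sketch of the bi-encoder interface described above, assuming the Hugging Face transformers library and a BERT-style checkpoint: each text is encoded independently, the CLS vector serves as the dense representation, and relevance is scored by inner product. Condenser itself modifies the pre-training architecture rather than this fine-tuning or inference interface.

# Minimal bi-encoder sketch: CLS pooling plus inner-product scoring.
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"            # illustrative checkpoint
tok = AutoTokenizer.from_pretrained(name)
enc = AutoModel.from_pretrained(name)

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = enc(**batch)
    return out.last_hidden_state[:, 0]    # CLS token as the dense vector

queries = embed(["what is dense retrieval?"])
passages = embed(["Dense retrieval encodes text into vectors.",
                  "The weather is nice today."])
scores = queries @ passages.T              # inner-product relevance scores
print(scores)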
Based on the same LM, however, bi-encoder and
cross encoder have similar language understanding
capabilities. To explain the difficulty in training
bi-encoder not seen in cross-encoder, we look into
the internal structure of pre-trained LM. We find
LM like BERT directly out of pre-training has a
non-optimal attention structure. In particular, they
were not trained to aggregate sophisticated infor-
mation into a single dense representation. We term
effort during fine-tuning to adjust the LM internal
activation to channel its knowledge out for the tar-
get task, structural readiness . We argue bi-encoder
fine-tuning is inefficient due to the lacking struc-
tural readiness. Many updates are used to adjust
model attention structure than learn good represen-
tation.
Based on our observations, we propose to ad-
dress structural readiness during pre-training. We
introduce a novel Transformer pre-training archi-
tecture, Condenser, which establishes structural
readiness by doing LM pre-training actively CON-
dition on DENSE Representation. Unlike previ-
ous works that pre-train towards a particular task,
Condenser pre-trains towards the bi-encoder struc-
ture. Our results show the importance of structural
readiness. We experiment with sentence similar-
ity tasks, and retrieval for question answering and
web search. We find under low data setups, with
identical test time architecture, Condenser yields
sizable improvement over standard LM and showsarXiv:2104.08253v2 [cs.CL] 20 Sep 2021 |
10.1038.s41586-024-07128-2.pdf | Nature | www.nature.com | 1
Article
Synthetic reversed sequences reveal default
genomic states
Brendan R. Camellato1, Ran Brosh1, Hannah J. Ashe1, Matthew T . Maurano1,2 & Jef D. Boeke1,3,4 ✉
Pervasive transcriptional activity is observed across diverse species. The genomes of
extant organisms have undergone billions of years of evolution, making it unclear
whether these genomic activities represent effects of selection or ‘noise’1–4.
Characterizing default genome states could help understand whether pervasive
transcriptional activity has biological meaning. Here we addressed this question by
introducing a synthetic 101-kb locus into the genomes of Saccharomyces cerevisiae
and Mus musculus and characterizing genomic activity. The locus was designed by
reversing but not complementing human HPRT1, including its flanking regions, thus
retaining basic features of the natural sequence but ablating evolved coding or
regulatory information. We observed widespread activity of both reversed and native
HPRT1 loci in yeast, despite the lack of evolved yeast promoters. By contrast, the
reversed locus displayed no activity at all in mouse embryonic stem cells, and instead
exhibited repressive chromatin signatures. The repressive signature was alleviated
in a locus variant lacking CpG dinucleotides; nevertheless, this variant was also
transcriptionally inactive. These results show that synthetic genomic sequences that
lack coding information are active in yeast, but inactive in mouse embryonic stem
cells, consistent with a major difference in ‘default genomic states’ between these
two divergent eukaryotic cell types, with implications for understanding pervasive
transcription, horizontal transfer of genetic information and the birth of new genes.
The majority of the human genome may be transcribed1–4, even though
only a small fraction is annotated as discrete mature RNA species5,6.
Debate remains over whether the approximately 75% of the genome
that is covered by detectable transcripts4, and the approximately 80%
of such transcripts for which there is predicted biochemical activity2,
represent truly functional activity or random and pervasive ‘noise’7–9.
In another eukaryotic species, the yeast S. cerevisiae , a similar frac -
tion of the genome is transcribed10, although the genome is relatively
gene-dense with an average intergenic distance11 of around 400 bp
compared with the approximately 100,000 bp in the human genome12.
This raises the question of whether all eukaryotic genomes are tran -
scribed at the same level, regardless of their structure. Understanding
the ‘default state’ of a genome—that is, the way a sequence lacking
evolved features is acted on by the host—would be useful in interpret-
ing the meaning of such transcriptional activity.
A genome that is active by default would present ample opportunity
for transcriptional machinery to bind non-specifically, leading to spuri -
ous activity, whereas a genome that is inactive by default would gener-
ally preclude such low-specificity activity. The true default state of a
genome, if such a thing exists, is difficult to determine, owing to billions
of years of evolutionary pressure that has acted on existing sequences.
It is thus unclear to what extent observed genomic states are passively
present by default, or actively produced by chromatin-interacting pro -
teins that recognize specific sequences selected for over time. A true
default genomic state can be queried by observing activity of a newly introduced, evolutionarily naive locus. Indeed, a hypothetical ‘random
genome’ experiment has been proposed as the ideal negative control
for interpreting reports of large-scale genomic activity13, in which
megabase-sized fragments of random DNA can be introduced into a
cell and its activity compared with that of the endogenous genome.
However, owing to technical limitations, such experiments have not
yet been performed.
To date there has not been any well-controlled characterization of
novel DNA loci in mammalian genomes, or a comparison of genomic
activity for the same locus in different organismal contexts. Current
techniques in synthetic genomics enable the design, assembly and
delivery of very large pieces of DNA14,15. Locus-scale DNA constructs,
up to hundreds of kilobases long, can be assembled de novo in yeast
assembly vectors (YAVs), which exist as episomal DNA circles separate
from native yeast and bacterial genomes. The ability to synthesize
large DNA loci de novo enables complete design freedom over the
sequence of synthetic DNA, although this realization has been limited
in practice. In recent years, we have developed a workflow for synthetic
regulatory genomics involving the de novo assembly of large DNA loci,
including an intermediate step involving S. cerevisiae , for delivery
and characterization in a desired eukaryotic context, typically mouse
embryonic stem (ES) cells16–20. This enables straightforward design and
assembly of novel DNA loci that do not exist in nature, and characteri -
zation of such loci in the distinct genomic contexts of S. cerevisiae and
M. musculus. By introducing novel DNA loci to both yeast and mouse https://doi.org/10.1038/s41586-024-07128-2
Received: 27 December 2022
Accepted: 29 January 2024
Published online: xx xx xxxx
Open access
Check for updates
1Institute for Systems Genetics, NYU Langone Health, New York, NY, USA. 2Department of Pathology, NYU Langone Health, New York, NY, USA. 3Department of Biochemistry and Molecular
Pharmacology, NYU Langone Health, New York, NY, USA. 4Department of Biomedical Engineering, NYU Tandon School of Engineering, New York, NY, USA. ✉e-mail: Jef.Boeke@nyulangone.org
|
2203.08913.pdf | Published as a conference paper at ICLR 2022
MEMORIZING TRANSFORMERS
Yuhuai Wu, Markus N. Rabe, DeLesley Hutchins, Christian Szegedy
{yuhuai,mrabe,delesley,szegedy}@google.com
ABSTRACT
Language models typically need to be trained or finetuned in order to acquire
new knowledge, which involves updating their weights. We instead envision
language models that can simply read and memorize new data at inference time,
thus acquiring new knowledge immediately. In this work, we extend language
models with the ability to memorize the internal representations of past inputs. We
demonstrate that an approximate kNN lookup into a non-differentiable memory of
recent (key, value) pairs improves language modeling across various benchmarks
and tasks, including generic webtext (C4), math papers (arXiv), books (PG-19),
code (Github), as well as formal theorems (Isabelle). We show that the performance
steadily improves when we increase the size of memory up to 262K tokens. On
benchmarks including code and mathematics, we find that the model is capable of
making use of newly defined functions and theorems during test time.
1 I NTRODUCTION
Transformers (Vaswani et al., 2017) have led to remarkable progress in natural language process-
ing (Devlin et al., 2019; Brown et al., 2020), mathematical reasoning (Polu & Sutskever, 2020; Wang
et al., 2020a; Rabe et al., 2021; Li et al., 2021; Hahn et al., 2021; Cobbe et al., 2021), and program
synthesis (Austin et al., 2021; Chen et al., 2021; Li et al., 2022). However, transformer performance
on many of these tasks is limited by the context length of attention, which is typically short. The
ability to attend to far-away tokens is important in many situations. In novels, characters and events
are referenced across multiple chapters. In source code, references to classes and functions may
occur quite far from the places in which they are defined. In theorem proving, proofs make use of
previously defined lemmas.
Attention over long sequences is also useful as a form of rapid learning. Facts and information
which are stored in the form of weight matrices must be slowly trained over hundreds of thousands
of training steps. By using attention, however, a model can simply memorize facts (e.g. function
definitions) by storing them as (key, value) pairs in long-term memory, and then retrieve those facts
later by creating a query that attends to them. In this case, attention acts as a form of information
retrieval, allowing the model to look up facts that it has seen previously.
We demonstrate that a simple and effective way to increase the size of the attention context is to use
approximate k-nearest-neighbor ( kNN) lookup, which is widely used in information retrieval. A
number of extremely scalable implementations of kNN lookup are available, such as ScaNN (Guo
et al., 2020) and Faiss (Johnson et al., 2021).
There are two things which distinguish our approach from previous work on long-range attention (c.f.
Section 2). First, unlike some other approaches, kNN lookup does not do averaging or summarization
of tokens at long distances, but retrieves exact values even from the distant context.
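A minimal numpy sketch of this idea follows: for each query, retrieve the k most similar cached (key, value) pairs from the external memory and attend over them together with the local context. The brute-force search (a production system would use ScaNN or Faiss), the simple concatenation of local and retrieved entries (the paper combines the two attention results through a learned gate), and the dimensions are illustrative assumptions.

# kNN-augmented attention sketch over an external (key, value) memory.
import numpy as np

rng = np.random.default_rng(0)
d, k = 16, 4
mem_keys = rng.standard_normal((1024, d))      # cached keys from past segments
mem_vals = rng.standard_normal((1024, d))      # cached values from past segments

def knn_attend(q, local_k, local_v):
    # Retrieve the top-k memory entries by dot-product similarity (brute force).
    sims = mem_keys @ q
    idx = np.argpartition(sims, -k)[-k:]
    keys = np.concatenate([local_k, mem_keys[idx]], axis=0)
    vals = np.concatenate([local_v, mem_vals[idx]], axis=0)
    scores = keys @ q / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ vals                       # attention output

q = rng.standard_normal(d)
out = knn_attend(q, rng.standard_normal((8, d)), rng.standard_normal((8, d)))
print(out.shape)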
Second, gradients are not backpropagated into the external memory, which is critical to the scalability
of our technique. The keys and values are a function of model parameters, so attempting to backprop-
agate gradients into external memory would necessarily involve computing all of the keys and values
with the current model parameters on every training step. However, if the external memory is not
differentiable, then we can instead reuse keys and values that were previously computed on
prior training steps, which drastically reduces the amount of computation for large memories. With
1arXiv:2203.08913v1 [cs.LG] 16 Mar 2022 |
2402.09371.pdf | Transformers Can Achieve Length
Generalization But Not Robustly
Yongchao Zhou1,2, Uri Alon1, Xinyun Chen1, Xuezhi Wang1, Rishabh Agarwal1 and Denny Zhou1
1Google DeepMind, 2University of Toronto
Length generalization, defined as the ability to extrapolate from shorter training sequences to longer
test ones, is a significant challenge for language models. This issue persists even with large-scale
Transformers handling relatively straightforward tasks. In this paper, we test the Transformer’s ability
of length generalization using the task of addition of two integers. We show that the success of length
generalization is intricately linked to the data format and the type of position encoding. Using the
right combination of data format and position encodings, we show for the first time that standard
Transformers can extrapolate to a sequence length that is 2.5×the input length. Nevertheless, unlike
in-distribution generalization, length generalization remains fragile, significantly influenced by factors
like random weight initialization and training data order, leading to large variances across different
random seeds.
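As an illustration of the kind of data-format choice at stake, the sketch below writes the operands and the answer with digits reversed (least significant first), so each answer digit depends only on already-emitted lower-order digits; this particular formatter is an assumption for illustration and is not claimed to be the exact format used in the paper.

# Illustrative reversed-digit formatting for addition training examples.
def format_reversed_addition(a: int, b: int) -> str:
    rev = lambda n: str(n)[::-1]
    return f"{rev(a)}+{rev(b)}={rev(a + b)}"

print(format_reversed_addition(354, 89))   # "453+98=344" since 354 + 89 = 443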
1. Introduction
Transformer-based models have revolutionized natural language understanding and generation
across diverse applications (Gemini et al., 2023; OpenAI, 2023). Despite their impressive abilities
in mathematical reasoning (Lewkowycz et al., 2022), code synthesis (Li et al., 2022), and theorem
proving (Wu et al., 2022), Transformers often struggle with length generalization, an ability that
requires the model to generalize to longer sequences than seen during training (Abbe et al., 2023;
Anil et al., 2022; Zhou et al., 2023). This limitation raises an essential question: do Transformers
genuinely grasp the correct underlying algorithms for a given task, or are they merely resorting to
superficial memorization or shortcuts that fail to scale to more complex problems (Liu et al., 2023b)?
[Figure 1 plot: exact-match accuracy (%) versus digit length for Our Work, Zhou et al. (2023), Shen et al. (2023), Kazemnejad et al. (2023), and Lee et al. (2023).]
Figure 1|Using an appropriate position encoding and data formatting, we demonstrate that Trans-
formers can generalize to 100-digit decimal addition tasks with more than 98% of accuracy when
trained up to 40-digit addition, resulting in a length extension ratio of 2.5×, which is much more than
the ratio of Lee et al. (2023) ( 1.0×), Kazemnejad et al. (2023) ( 1.125×), Shen et al. (2023) ( 1.1×),
and Zhou et al. (2023) ( 1.5×). Unfilled markers (— ▼ ▽) denote in-distribution test results, filled markers
(—▼) denote out-of-distribution results. In Zhou et al. (2023) and Our Work, each curve is the best out
of 10 trials. For the other three methods, we report the value from their corresponding paper.
Corresponding author(s): yczhou@cs.toronto.eduarXiv:2402.09371v1 [cs.LG] 14 Feb 2024 |
2312.02696.pdf | Analyzing and Improving the Training Dynamics of Diffusion Models
Tero Karras (NVIDIA), Miika Aittala (NVIDIA), Jaakko Lehtinen (NVIDIA, Aalto University), Janne Hellsten (NVIDIA), Timo Aila (NVIDIA), Samuli Laine (NVIDIA)
Abstract
Diffusion models currently dominate the field of data-
driven image synthesis with their unparalleled scaling to
large datasets. In this paper, we identify and rectify several
causes for uneven and ineffective training in the popular
ADM diffusion model architecture, without altering its high-
level structure. Observing uncontrolled magnitude changes
and imbalances in both the network activations and weights
over the course of training, we redesign the network layers
to preserve activation, weight, and update magnitudes on ex-
pectation. We find that systematic application of this philoso-
phy eliminates the observed drifts and imbalances, resulting
in considerably better networks at equal computational com-
plexity. Our modifications improve the previous record FID
of 2.41 in ImageNet-512 synthesis to 1.81, achieved using
fast deterministic sampling.
As an independent contribution, we present a method for
setting the exponential moving average (EMA) parameters
post-hoc, i.e., after completing the training run. This allows
precise tuning of EMA length without the cost of performing
several training runs, and reveals its surprising interactions
with network architecture, training time, and guidance.
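As a rough illustration of the magnitude-preserving philosophy, the sketch below normalizes each output unit's weight vector so that unit-variance inputs yield approximately unit-variance outputs; this is a simplified stand-in, and the paper's actual layer redesign (including how weights are handled during training) differs in detail.

# Rough PyTorch sketch of a magnitude-preserving linear layer.
import torch
import torch.nn as nn

class MPLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))

    def forward(self, x):
        # Normalize each output unit's weight vector to unit norm, so that
        # unit-variance inputs produce approximately unit-variance outputs
        # regardless of the raw weight magnitudes.
        w = self.weight
        w = w / w.norm(dim=1, keepdim=True).clamp_min(1e-8)
        return x @ w.t()

layer = MPLinear(256, 128)
x = torch.randn(4096, 256)
print(layer(x).std())   # close to 1 for unit-variance inputs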
1. Introduction
High-quality image synthesis based on text prompts, ex-
ample images, or other forms of input has become widely
popular thanks to advances in denoising diffusion mod-
els [22,52,71–74,81]. Diffusion-based approaches pro-
duce high-quality images while offering versatile controls
[9,18,21,50,88] and convenient ways to introduce novel
subjects [ 13,65], and they also extend to other modalities
such as audio [ 41,58], video [ 6,23,25], and 3D shapes
[46,57,60,70]. A recent survey of methods and applica-
tions is given by Yang et al. [83].
On a high level, diffusion models convert an image of
pure noise to a novel generated image through repeated
application of image denoising. Mathematically, each de-
[Figure 1 plot: FID versus model complexity (gigaflops per evaluation) on ImageNet-512 for ADM, ADM-U, DiT-XL/2, RIN, U-ViT (L), VDM++, StyleGAN-XL, and our models (XS–XXL), with and without guidance.]
Figure 1. Our contributions significantly improve the quality of
results w.r.t. model complexity, surpassing the previous state-of-the-
art with a 5 ×smaller model. In this plot, we use gigaflops per single
model evaluation as a measure of a model’s intrinsic computational
complexity; a similar advantage holds in terms of parameter count,
as well as training and sampling cost (see Appendix A).
noising step can be understood through the lens of score
matching [ 28], and it is typically implemented using a U-Net
[22,64] equipped with self-attention [ 80] layers. Since we
do not contribute to the theory behind diffusion models, we
refer the interested reader to the seminal works of Sohl-
Dickstein et al. [71], Song and Ermon [73], and Ho et al.
[22], as well as to Karras et al. [36], who frame various
mathematical frameworks in a common context.
Despite the seemingly frictionless scaling to very large
datasets and models, the training dynamics of diffusion mod-
els remain challenging due to the highly stochastic loss func-
tion. The final image quality is dictated by faint image
details predicted throughout the sampling chain, and small
mistakes at intermediate steps can have snowball effects in
subsequent iterations. The network must accurately estimate
the average clean image across a vast range of noise levels,
Gaussian noise realizations, and conditioning inputs. Learn-
1arXiv:2312.02696v1 [cs.CV] 5 Dec 2023 |
2305.17126.pdf | Large Language Models as Tool Makers
Tianle Cai1,2∗, Xuezhi Wang1, Tengyu Ma1,3†, Xinyun Chen1, Denny Zhou1
1Google Deepmind, 2Princeton University, 3Stanford University
Abstract
Recent research shows the potential of enhancing the problem-solving ability of
large language models (LLMs) through the use of external tools . However, prior
work along this line depends on the availability of existing tools. In this work, we
take an initial step towards removing this dependency by proposing a closed-loop
framework , referred to as LLMs AsToolMakers ( LATM ), where LLMs create
their own reusable tools for problem-solving. Our approach consists of two key
phases: 1) tool making: an LLM acts as the tool maker that crafts tools for given
tasks, where a tool is implemented as a Python utility function. 2) tool using:
an LLM acts as the tool user , which applies the tool built by the tool maker for
problem-solving. The tool user can be either the same or a different LLM from the
tool maker. Tool-making enables an LLM to continually generate tools that can be
applied to different requests so that future requests can call the corresponding APIs
when beneficial for solving the tasks. Furthermore, the division of labor among
LLMs for tool-making and tool-using phases introduces the opportunity to achieve
cost effectiveness without degrading the quality of generated tools and problem
solutions. For example, recognizing that tool-making demands more sophisticated
capabilities than tool-using, we can apply a powerful yet resource-intensive model
as the tool maker, and a lightweight while cost-effective model as the tool user. We
validate the effectiveness of our approach across a variety of complex reasoning
tasks, including Big-Bench tasks. With GPT-4 as the tool maker and GPT-3.5 as
the tool user, LATM can achieve performance that is on par with using GPT-4 for
both tool making and tool using, while the inference cost is significantly reduced.
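The following schematic sketch illustrates the two phases described above; the llm() helper is a hypothetical stand-in for a chat-completion API call, and the prompts and the use of exec() to load the generated utility are simplifications (the paper also verifies tools on held-out examples before reuse).

# Schematic sketch of the tool-making and tool-using phases.
def llm(model: str, prompt: str) -> str:
    # Hypothetical stand-in for a chat-completion API call; replace with a
    # real client to run this end to end.
    raise NotImplementedError

def make_tool(task_examples):
    # Tool making: a strong model writes a reusable Python utility function.
    prompt = "Write a Python function `solve(inp)` that solves these examples:\n"
    prompt += "\n".join(task_examples)
    code = llm(model="gpt-4", prompt=prompt)          # hypothetical call
    namespace = {}
    exec(code, namespace)                             # load the generated tool
    return code, namespace["solve"]

def use_tool(tool_code, new_request):
    # Tool using: a lightweight model writes the call to the existing tool.
    prompt = (f"Given this utility:\n{tool_code}\n"
              f"Produce the argument `inp` for solve() that answers:\n{new_request}")
    return llm(model="gpt-3.5-turbo", prompt=prompt)  # hypothetical call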
1 Introduction
Large language models (LLMs) have demonstrated outstanding capabilities across a broad array of
NLP tasks [Brown et al., 2020, Chowdhery et al., 2022, Zhang et al., 2022, Hoffmann et al., 2022,
OpenAI, 2023, Google, 2023] and have even shown promising signs of achieving certain aspects
of artificial general intelligence [Bubeck et al., 2023, Kosinski, 2023]. Moreover, analogous to the
evolution of human intelligence, recent research has unveiled the potential of augmenting LLMs with
external tools , thereby significantly enhancing their problem-solving capacities and efficiencies [Yao
et al., 2023, Liu et al., 2023, Parisi et al., 2022, Schick et al., 2023].
However, the applicability of these tool-using methods is largely contingent on the availability of
suitable tools. According to the lessons learned from the evolutionary milestones of humans, a
crucial turning point was that humans got the ability to fabricate their own tools to address emerging
challenges. Inspired by the importance of tool-making for humans, in this work, we embark on an
initial exploration to apply this evolutionary concept to the realm of LLMs. We propose a closed-loop
framework , which we term as LLMs AsToolMakers ( LATM ), enables LLMs to generate their own
∗Work done as a Student Researcher at Google Deepmind.
†Work done as a Visiting Researcher at Google Deepmind.
Code available at https://github.com/ctlllll/LLM-ToolMaker .
Preprint. Under review.arXiv:2305.17126v1 [cs.LG] 26 May 2023 |
1810.08575v1.pdf | Supervising strong learners
by amplifying weak experts
Paul Christiano
OpenAI
paul@openai.comBuck Shlegeris∗
bshlegeris@gmail.comDario Amodei
OpenAI
damodei@openai.com
Abstract
Many real world learning tasks involve complex or hard-to-specify objectives, and
using an easier-to-specify proxy can lead to poor performance or misaligned be-
havior. One solution is to have humans provide a training signal by demonstrating
or judging performance, but this approach fails if the task is too complicated for a
human to directly evaluate. We propose Iterated Amplification, an alternative train-
ing strategy which progressively builds up a training signal for difficult problems
by combining solutions to easier subproblems. Iterated Amplification is closely
related to Expert Iteration (Anthony et al., 2017; Silver et al., 2017b), except that it
uses no external reward function. We present results in algorithmic environments,
showing that Iterated Amplification can efficiently learn complex behaviors.
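A schematic sketch of the training loop described above follows; the decompose, combine, and train functions and the model interface are placeholders rather than the paper's implementation.

# Schematic sketch of Iterated Amplification: the amplified (overseer + model)
# system provides the training signal that the standalone model imitates.
def amplify(question, model, decompose, combine, depth=1):
    if depth == 0:
        return model(question)
    subquestions = decompose(question)
    sub_answers = [amplify(q, model, decompose, combine, depth - 1)
                   for q in subquestions]
    return combine(question, sub_answers)

def iterated_amplification(model, questions, decompose, combine, train, rounds=3):
    for _ in range(rounds):
        targets = [amplify(q, model, decompose, combine) for q in questions]
        model = train(model, questions, targets)   # imitate the amplified system
    return model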
1 Introduction
If we want to train an ML system to perform a task, we need to be able to evaluate how well it is
doing. Whether our training signal takes the form of labels, rewards, or something else entirely, we
need some way to generate that signal.
If our goal can be evaluated automatically, such as winning a game of Go, or if we have an algorithm
that can generate examples of correct behavior, then generating a training signal is trivial. In these
cases we might say that there is an “algorithmic” training signal.
Unfortunately, most useful tasks don’t have an algorithmic training signal. So in current applications
of machine learning, humans often provide the training signal. This can be done by having a human
demonstrate the task, for example labeling an image or teleoperating a robot, or by learning a reward
function from human judgments. For these classes of tasks, we could say there is a “human” training
signal.
However, there are harder tasks for which we can’t compute demonstrations or rewards even with
human assistance, and for which we currently have no clear method to get a meaningful training
signal. Consider making economic policy decisions, advancing the scientific frontier, or managing the
security of a large network of computers. Some of these tasks are “beyond human scale” – a single
human can’t perform them and can’t make sense of their massive observation space well enough to
judge the behavior of an agent. It may be possible for a human to judge performance in the very long
run (for example, by looking at economic growth over several years), but such long-term feedback is
very slow to learn from. We currently have no way to learn how to perform such tasks much better
than a human.
The overall situation is depicted in Table 1, which shows six different combinations of training signal
source and problem formulation (supervised learning or RL). The bulk of ML practice operates in
the top center box (supervised learning from human labels), the bottom left box (RL with a scripted
reward), and sometimes the top left box (supervised learning of algorithms). The bottom center box
∗Work done while at OpenAI.arXiv:1810.08575v1 [cs.LG] 19 Oct 2018 |
2106.04985.pdf | Energy-Based Models for Code Generation
under Compilability Constraints
Tomasz Korbak,1,∗ Hady Elsahar,2 Marc Dymetman,2 Germán Kruszewski2
t.korbak@sussex.ac.uk
{hady.elsahar,marc.dymetman,german.kruszewski }@naverlabs.com
1University of Sussex, United Kingdom
2Naver Labs Europe, France
Abstract
Neural language models can be successfully
trained on source code, leading to applications
such as code completion. However, their ver-
satile autoregressive self-supervision objective
overlooks important global sequence-level fea-
tures that are present in the data such as syn-
tactic correctness or compilability. In this
work, we pose the problem of learning to
generate compilable code as constraint satis-
faction. We define an Energy-Based Model
(EBM) representing a pre-trained generative
model with an imposed constraint of generat-
ing only compilable sequences. We then use
the KL-Adaptive Distributional Policy Gradi-
ent algorithm (Khalifa et al., 2021) to train
a generative model approximating the EBM.
We conduct experiments showing that our pro-
posed approach is able to improve compilabil-
ity rates without sacrificing diversity and com-
plexity of the generated samples.
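To make the constraint-satisfaction view concrete, the sketch below scores a sequence by the product of the pretrained model's probability and a binary compilability filter, with compilability checked via Python's built-in compile(); the pretrained model's log-probability is a placeholder, and training the approximating policy with the KL-Adaptive DPG algorithm is not shown.

# Sketch of the constrained EBM: P(x) proportional to a(x) * b(x), where a(x)
# is the pretrained model's probability of x and b(x) = 1 iff x compiles.
import math

def compiles(source: str) -> bool:
    try:
        compile(source, "<sample>", "exec")
        return True
    except SyntaxError:
        return False

def ebm_score(source: str, log_p_a: float) -> float:
    # Unnormalized probability of the sequence under the constrained EBM.
    return math.exp(log_p_a) if compiles(source) else 0.0

print(ebm_score("def f(x):\n    return x + 1\n", log_p_a=-12.3))
print(ebm_score("def f(x) return x + 1", log_p_a=-10.0))   # syntax error -> 0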
1 Introduction
Code completion is an essential feature of any mod-
ern Integrated Development Environment (IDE).
It supports developers with recommendations about
the next token to write given a context, speed-
ing up software development and reducing the
number of mistakes. A large body of work has
relied on statistical language modeling, treating
programming languages as natural languages us-
ing probabilistic grammars (Raychev et al., 2014;
Bielik et al., 2016), and more recently relying on
neural language models (Liu et al., 2016a; Svy-
atkovskiy et al., 2020a,b; Arkesteijn et al., 2020;
Ciniselli et al., 2021).1In particular, neural autore-
∗Work done during a research internship at Naver Labs
Europe.
1See Allamanis et al. (2018) for a survey.gressive language models have been favoured due
to their scalability and generic training procedure
that can exploit large codebases (e.g. open source
code repositories available on GitHub) through self-
supervised training.
Despite these desirable traits, neural language
models, trained in the standard way, are known
to suffer from myopia and to overlook global
sequence-level features that are present in the data
and which might be crucial for the quality of gen-
erated sequences (Parshakova et al., 2019b). This
leads to repetitions, hallucinations and failing to
capture long-distance consistency requirements. In
a code generation context, this is demonstrated in
compilation errors that are a common failure mode
in such tasks as translation between programming
languages (Roziere et al., 2020). This problem has
inspired a large body of work on different fronts
on injecting sequence-level priors by either directly
optimizing sequence-level features (Ranzato et al.,
2016) or through fusion with grammars and au-
tomata (Xiao et al., 2016). These techniques aim
to balance between the desirable traits and fast
inference of neural autoregressive models trained
in the standard way and the satisfaction of global
sequence-level features.
In this work, we formulate compilable code gen-
eration as a constraint satisfaction problem. We
show that this formulation leads to a unique dis-
tribution represented by an Energy-Based Model
(EBM). This unique distribution by definition fully
satisfies the compilability constraints while having
a minimal KL divergence from the original autore-
gressive generative model trained through cross en-
tropy. We then train an auto-regressive generative
model to approximate the underlying distribution
of this EBM using the KL-Adaptive DistributionalarXiv:2106.04985v1 [cs.LG] 9 Jun 2021 |
2023.08.18.553799v1.full.pdf | Deep reconstructing generative networks for visualizing dynamic
biomolecules inside cells
Ramya Rangan1, Sagar Khavnekar2, Adam Lerer3, Jake Johnston4,5, Ron Kelley6, Martin
Obr6, Abhay Kotecha6*, and Ellen D. Zhong1*
ABSTRACT
Advances in cryo-electron tomography (cryo-ET) have produced new opportunities to visualize the structures of dynamic
macromolecular machinery in native cellular environments. Here, we describe a machine learning approach that can
reconstruct the structural landscape and dynamics of biomolecular complexes present in cryo-ET subtomograms. This
method, cryoDRGN-ET, learns a deep generative model of 3D density maps directly from subtomogram tilt series images
and can capture states diverse in both composition and conformation. We use this approach to reconstruct the in situ
translation dynamics of prokaryotic ribosomes, and we reveal the distribution of functional states during translation
elongation populated by S. cerevisiae ribosomes inside cells.
Additional Key Words and Phrases: cryo-electron microscopy, cryo-electron tomography, in cell structural biology, machine
learning, deep generative modeling
1 INTRODUCTION
Cryo-electron tomography (cryo-ET) is an imaging technique that provides structural insights spanning
cellular to molecular length scales [ 1,2]. By computationally combining a series of tilt images of intact cells or
thinly milled lamella, cryo-ET can visualize the architecture of whole cells in three dimensions at nanometer
resolution. Further computational processing of the resulting 3D tomograms with algorithms for segmentation
and subtomogram reconstruction can resolve structures at sub-nanometer resolution, providing detailed
snapshots of macromolecular structures and their localization in native contexts [3–8].
A major challenge in image processing workflows for cryo-ET is the analysis of structural heterogeneity
within subtomogram data. Subtomogram reconstruction algorithms must cope with imaging attributes specific
to cryo-ET such as the extremely low signal-to-noise ratio in exposure-limited individual tilt images, as well as
the inherent complexity from variations in conformation and composition of biomolecular complexes within
cellular samples taken without purification. While some advanced methods for heterogeneity analysis have
been proposed [ 5,9–11], the majority of subtomogram processing workflows rely on 3D classification to
cluster subtomograms into a few, discrete conformational states. Although this approach has been successfully
used to reveal distinct states of macromolecular machines in situ [12–14], current processing workflows
remain unwieldy, with many manual steps and significant computational requirements. Furthermore, these
methods are not well-suited for modeling continuous heterogeneity and require specifying the number of
expected states a priori, often additionally requiring user-provided masks to focus classification on regions
with known variability. More fundamentally, 3D classification requires averaging subtomograms for thousands
of particles to obtain well-resolved structures, leading to trade-offs between the number of states that can be
1Department of Computer Science, Princeton University, Princeton, NJ, USA2Max Planck Institute of Biochemistry, Martinsried, Germany
3Google DeepMind, New York, NY, USA4Physiology and Cellular Biophysics, Columbia University, New York, NY, USA5Simons Electron
Microscopy Center, New York Structural Biology Center; New York, NY, USA6Materials and Structural Analysis Division, Thermo Fisher
Scientific, Eindhoven, The Netherlands *Correspondence to: abhay.kotecha@thermofisher.com, zhonge@princeton.edu
.
1 |
10.1016.j.cell.2023.12.017.pdf | Leading Edge
Perspective
Understanding the cell:
Future views of structural biology
Martin Beck,1,3,4,5,* Roberto Covino,2,4,5,* Inga Hänelt,3,4,5,* and Michaela Müller-McNicoll3,4,5,*
1Max Planck Institute of Biophysics, Max-von-Laue-Straße 3, 60438 Frankfurt am Main, Germany
2Frankfurt Institute for Advanced Studies, Ruth-Moufang-Straße 1, 60438 Frankfurt am Main, Germany
3Goethe University Frankfurt, Frankfurt, Germany
4Senior author
5These authors contributed equally
*Correspondence: martin.beck@biophys.mpg.de (M.B.), covino@fias.uni-frankfurt.de (R.C.), haenelt@biochem.uni-frankfurt.de (I.H.),
mueller-mcnicoll@bio.uni-frankfurt.de (M.M.-M.)
https://doi.org/10.1016/j.cell.2023.12.017
SUMMARY
Determining the structure and mechanisms of all individual functional modules of cells at high molecular
detail has often been seen as equal to understanding how cells work. Recent technical advances have led
to a flush of high-resolution structures of various macromolecular machines, but despite this wealth of detailed information, our understanding of cellular function remains incomplete. Here, we discuss present-day limitations of structural biology and highlight novel technologies that may enable us to analyze molecular functions directly inside cells. We predict that the progression toward structural cell biology will involve a shift toward conceptualizing a 4D virtual reality of cells using digital twins. These will capture cellular segments in a highly enriched molecular detail, include dynamic changes, and facilitate simulations of molecular processes, leading to novel and experimentally testable predictions. Transferring biological questions into algorithms that learn from the existing wealth of data and explore novel solutions may ultimately unveil how
cells work.
INTRODUCTION
Structural biology is an attempt to answer the question ‘‘what
are we made of?’’ This attempt follows the reductionist
approach, which aims to identify the most fundamental constit-
uents of matter and study their properties. It led us to discovera hierarchy of structures, from molecules through atoms all the
way down to fundamental particles, such as quarks and elec-
trons. Cells are the minimal units of life and are made of billionsof distinct molecules. Although this answers part of the ques-
tion of what we are made of, it does not answer a key question
of cell biology—how do cellular functions spontaneouslyemerge from the interaction of these billions of molecules?Cell biology usually lacks the structural resolution to under-
stand the role of individual molecules and the choreography
that organizes them in functional units, which ultimately distin-guishes a living cell from an inanimate object. To gain this un-
derstanding, the integration of structural and cellular biology is
an outstanding challenge.
With the discovery of the DNA double-helix and the first protein structures, a structure-function paradigm emerged, underpinning the implicit assumption of structural biology: by knowing the detailed structures of biomolecules, one will understand their function, and the sum of all individual structure-function relationships will enable us to explain how cells work. This approach has been immensely successful because it led to an atomistic picture of many molecular machines and for many molecules set the foundation of our present understanding of their function. However, with increasing coverage and in-depth characterization of the cell’s constituents, challenges to this assumption are emerging.
The first challenge stems from the realization that all biomolecules are inherently dynamic. Thermal fluctuations can transmit energy to molecules from their environment. In response, these molecules will experience spontaneous conformational changes, ranging from the local flipping of a side chain to global folding processes. Instead of considering a biomolecule as a single well-defined static structure, we must think of it as a structural ensemble, i.e., a large collection of conformations, each populated with different probabilities.1 The molecule will stochastically interconvert between different conformations. For some molecules, there will be few conformations overwhelmingly more probable than others, such as the globular protein serum albumin; but for others, the ensemble will be very heterogeneous, consisting of many conformations, all nearly equally probable, such as in the case of disordered proteins. Increasing evidence points to the fact that the entire conformational ensemble, including rare conformations, determines the function of a biomolecule.2,3 Such an ensemble view implies that the probability of populating the different alternative conformations can be modulated by thermodynamic parameters, interactions with other biomolecules, post-translational modifications
|
1608.03983.pdf | Published as a conference paper at ICLR 2017
SGDR: STOCHASTIC GRADIENT DESCENT WITH
WARM RESTARTS
Ilya Loshchilov & Frank Hutter
University of Freiburg
Freiburg, Germany,
{ilya,fh}@cs.uni-freiburg.de
ABSTRACT
Restart techniques are common in gradient-free optimization to deal with multi-
modal functions. Partial warm restarts are also gaining popularity in gradient-
based optimization to improve the rate of convergence in accelerated gradient
schemes to deal with ill-conditioned functions. In this paper, we propose a sim-
ple warm restart technique for stochastic gradient descent to improve its anytime
performance when training deep neural networks. We empirically study its per-
formance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new
state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate
its advantages on a dataset of EEG recordings and on a downsampled version of
the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR
1 INTRODUCTION
Deep neural networks (DNNs) are currently the best-performing method for many classification
problems, such as object recognition from images (Krizhevsky et al., 2012a; Donahue et al., 2014)
or speech recognition from audio data (Deng et al., 2013). Their training on large datasets (where
DNNs perform particularly well) is the main computational bottleneck: it often requires several
days, even on high-performance GPUs, and any speedups would be of substantial value.
The training of a DNN with $n$ free parameters can be formulated as the problem of minimizing a
function $f: \mathbb{R}^n \to \mathbb{R}$. The commonly used procedure to optimize $f$ is to iteratively adjust $x_t \in \mathbb{R}^n$
(the parameter vector at time step $t$) using gradient information $\nabla f_t(x_t)$ obtained on a relatively
small $t$-th batch of $b$ datapoints. The Stochastic Gradient Descent (SGD) procedure then becomes
an extension of the Gradient Descent (GD) to stochastic optimization of $f$ as follows:
$$x_{t+1} = x_t - \eta_t \nabla f_t(x_t), \qquad (1)$$
where $\eta_t$ is a learning rate. One would like to consider second-order information
$$x_{t+1} = x_t - \eta_t H_t^{-1} \nabla f_t(x_t), \qquad (2)$$
but this is often infeasible since the computation and storage of the inverse Hessian $H_t^{-1}$ is
intractable for large $n$. The usual way to deal with this problem by using limited-memory quasi-
Newton methods such as L-BFGS (Liu & Nocedal, 1989) is not currently in favor in deep learning,
not the least due to (i) the stochasticity of $\nabla f_t(x_t)$, (ii) ill-conditioning of $f$ and (iii) the presence
of saddle points as a result of the hierarchical geometric structure of the parameter space (Fukumizu
& Amari, 2000). Despite some recent progress in understanding and addressing the latter problems
(Bordes et al., 2009; Dauphin et al., 2014; Choromanska et al., 2014; Dauphin et al., 2015), state-of-
the-art optimization techniques attempt to approximate the inverse Hessian in a reduced way, e.g.,
by considering only its diagonal to achieve adaptive learning rates. AdaDelta (Zeiler, 2012) and
Adam (Kingma & Ba, 2014) are notable examples of such methods.
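As a concrete illustration of the SGD update in Eq. (1) and of the warm-restart idea described in the abstract, here is a minimal sketch. The cosine-shaped restart schedule, its parameter names (eta_min, eta_max, period), and the toy objective are illustrative assumptions, not code or settings taken from the paper.

```python
import numpy as np

def sgd_step(x, grad, lr):
    """One SGD update, x_{t+1} = x_t - eta_t * grad f_t(x_t), as in Eq. (1)."""
    return x - lr * grad

def cosine_restart_lr(epoch, eta_min=0.0, eta_max=0.1, period=10):
    """Illustrative warm-restart schedule: anneal the learning rate from
    eta_max down to eta_min with a cosine, then reset it every `period` epochs."""
    t_cur = epoch % period  # epochs elapsed since the last restart
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + np.cos(np.pi * t_cur / period))

# Toy usage: minimize f(x) = ||x||^2 with noisy gradients.
rng = np.random.default_rng(0)
x = rng.normal(size=5)
for epoch in range(30):
    lr = cosine_restart_lr(epoch)
    grad = 2 * x + 0.01 * rng.normal(size=5)  # stochastic gradient of ||x||^2
    x = sgd_step(x, grad, lr)
print(np.linalg.norm(x))  # should be close to 0
```

Each restart briefly raises the learning rate again before annealing it, which is what gives the schedule its "anytime" flavour: the iterate can escape the neighbourhood it has settled into and then converge once more.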
|
2212.04458.pdf | GENERAL -PURPOSE IN-CONTEXT LEARNING
BYMETA-LEARNING TRANSFORMERS
Louis Kirsch1 2, James Harrison1, Jascha Sohl-Dickstein1, Luke Metz1
1Google Research, Brain Team 2The Swiss AI Lab IDSIA, USI, SUPSI
louis@idsia.ch, {jamesharrison,jaschasd,lmetz }@google.com
ABSTRACT
Modern machine learning requires system designers to specify aspects of the
learning pipeline, such as losses, architectures, and optimizers. Meta-learning,
or learning-to-learn, instead aims to learn those aspects, and promises to un-
lock greater capabilities with less manual effort. One particularly ambitious goal
of meta-learning is to train general-purpose in-context learning algorithms from
scratch, using only black-box models with minimal inductive bias. Such a model
takes in training data, and produces test-set predictions across a wide range of
problems, without any explicit definition of an inference model, training loss, or
optimization algorithm. In this paper we show that Transformers and other black-
box models can be meta-trained to act as general-purpose in-context learners. We
characterize transitions between algorithms that generalize, algorithms that mem-
orize, and algorithms that fail to meta-train at all, induced by changes in model
size, number of tasks, and meta-optimization. We further show that the capabili-
ties of meta-trained algorithms are bottlenecked by the accessible state size (mem-
ory) determining the next prediction, unlike standard models which are thought to
be bottlenecked by parameter count. Finally, we propose practical interventions
such as biasing the training distribution that improve the meta-training and meta-
generalization of general-purpose in-context learning algorithms.
1 INTRODUCTION
Meta-learning is the process of automatically discovering new learning algorithms instead of de-
signing them manually (Schmidhuber, 1987). An important quality of human-engineered learning
algorithms, such as backpropagation and gradient descent, is their applicability to a wide range of
tasks or environments. For learning-to-learn to exceed those capabilities, the meta-learned learn-
ing algorithms must be similarily general-purpose . Recently, there has been significant progress
toward this goal (Kirsch et al., 2019; Oh et al., 2020). The improved generality of the discovered
learning algorithms has been achieved by introducing inductive bias, such as by bottlenecking the
architecture or by hiding information, which encourage learning over memorization. Methods in-
clude restricting learning rules to use gradients (Metz et al., 2019; Kirsch et al., 2019; Oh et al.,
2020), symbolic graphs (Real et al., 2020; Co-Reyes et al., 2021), or parameter sharing (Kirsch &
Schmidhuber, 2020; Kirsch et al., 2021).
While enabling generalization, these inductive biases come at the cost of increasing the effort to
design these systems and potentially restrict the space of discoverable learning algorithms. Instead,
we seek to explore general-purpose meta-learning systems with minimal inductive bias . Good can-
didates for this are black-box sequence-models as meta-learners such as LSTMs (Hochreiter et al.,
2001; Wang et al., 2016; Duan et al., 2016) or Transformers (Vaswani et al., 2017). These memory-
based or in-context learners take in training data and produce test-set predictions without any explicit
definition of an inference model, training loss, or optimization algorithm. With recent advances of
in-context learning in large language models (Brown et al., 2020), neural networks can already learn
many concepts from demonstrations. What are the necessary conditions such that those models can
learn from a wide range of demonstrations? To what extent can we elicit in-context learning that
generalizes to a wider range of problems, in a similar way how learning via backpropagation and
gradient descent can generalize?
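To make the black-box in-context learning setup concrete, the sketch below feeds a whole training set of (x, y) pairs plus one query x to a Transformer as a single sequence and reads the query's prediction off the final position. All names, shapes, and hyperparameters are illustrative assumptions; this is not the authors' architecture or training code.

```python
import torch
import torch.nn as nn

class InContextLearner(nn.Module):
    """Black-box in-context learner: consumes a training set of (x, y) pairs plus
    one query x as a single sequence and predicts the query's label from the
    last token. Sizes here are illustrative, not the paper's configuration."""
    def __init__(self, x_dim, n_classes, d_model=64, n_layers=4, n_heads=4):
        super().__init__()
        # Each sequence element embeds an (x, one-hot y) pair; the query's
        # unknown label is replaced by a zero placeholder.
        self.embed = nn.Linear(x_dim + n_classes, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.readout = nn.Linear(d_model, n_classes)

    def forward(self, xs, ys, x_query):
        # xs: (B, N, x_dim), ys: (B, N, n_classes) one-hot, x_query: (B, x_dim)
        context = torch.cat([xs, ys], dim=-1)
        query = torch.cat([x_query, torch.zeros_like(ys[:, 0])], dim=-1)
        seq = torch.cat([context, query.unsqueeze(1)], dim=1)  # (B, N + 1, ...)
        h = self.encoder(self.embed(seq))
        return self.readout(h[:, -1])  # class logits for the query point

# Shape check on random data: 16 context pairs, one query, 5 classes.
model = InContextLearner(x_dim=8, n_classes=5)
xs, x_q = torch.randn(2, 16, 8), torch.randn(2, 8)
ys = torch.eye(5)[torch.randint(5, (2, 16))]
print(model(xs, ys, x_q).shape)  # torch.Size([2, 5])
```

Meta-training such a model means sampling many supervised tasks, building these sequences, and minimizing the cross-entropy of the query prediction, so that learning happens in the forward pass (in the activations) rather than through weight updates at test time.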
|
2309.16058v1.pdf | AnyMAL: An Efficient and Scalable Any-Modality
Augmented Language Model
Seungwhan Moon∗Andrea Madotto∗Zhaojiang Lin∗Tushar Nagarajan∗
Matt Smith Shashank Jain Chun-Fu Yeh Prakash Murugesan
Peyman Heidari Yue Liu Kavya Srinet Babak Damavandi Anuj Kumar
FAIR, Meta & Meta Reality Labs
Abstract
We present Any-Modality Augmented Language Model (AnyMAL), a unified
model that reasons over diverse input modality signals (i.e. text, image, video,
audio, IMU motion sensor), and generates textual responses. AnyMAL inherits
the powerful text-based reasoning abilities of the state-of-the-art LLMs including
LLaMA-2 (70B), and converts modality-specific signals to the joint textual space
through a pre-trained aligner module. To further strengthen the multimodal LLM’s
capabilities, we fine-tune the model with a multimodal instruction set manually
collected to cover diverse topics and tasks beyond simple QAs. We conduct com-
prehensive empirical analysis comprising both human and automatic evaluations,
and demonstrate state-of-the-art performance on various multimodal tasks.
1 Introduction
Large Language Models (LLMs), known for their substantial size and complexity, have significantly
enhanced the capacity of machines to understand and articulate human language. The progress in
LLMs has also led to notable advancements in the vision-language domain [ 1,2,3,4], bridging the
gap between image encoders and LLMs to combine their reasoning capabilities. Prior multimodal
LLM research has concentrated on models that combine text and one other modality [ 3,5], such as text
and image models, or has centered on proprietary language models that are not open sourced [2, 4].
To tackle the previously mentioned challenges, we introduce Any-Modality Augmented Language
Model (AnyMAL) — a collection of multi-modal encoders trained to transform data from various
modalities, including images, videos, audio, and IMU motion sensor data, into the text embedding
space of an LLM. To achieve this, we extend the work by [ 1] to (1) more capable instruction-tuned
LLMs ( i.e. LLaMA-2-70B-chat [ 6]), (2) larger pre-trained modality encoders, and (3) advanced
projection layers to handle variable input lengths. The model output examples are shown in Figure 1,
and an illustration of the overall methodology is shown in Figure 2.
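A minimal sketch of the kind of modality-to-text alignment described above: a frozen encoder's pooled output is projected into the LLM's token-embedding space and prepended to the text embeddings as "soft tokens". The module name, the plain linear projection over a fixed number of slots, and the (deliberately small) dimensions are illustrative assumptions, not the paper's projection architecture.

```python
import torch
import torch.nn as nn

class ModalityProjector(nn.Module):
    """Maps a pooled feature vector from a frozen modality encoder into a fixed
    number of vectors living in the LLM's token-embedding space. A single linear
    layer is an illustrative stand-in for the paper's projection module; real
    LLM hidden sizes (e.g. 8192 for LLaMA-2-70B) are shrunk here for the demo."""
    def __init__(self, enc_dim=1024, llm_dim=512, n_tokens=8):
        super().__init__()
        self.n_tokens, self.llm_dim = n_tokens, llm_dim
        self.proj = nn.Linear(enc_dim, llm_dim * n_tokens)

    def forward(self, feat):  # feat: (B, enc_dim), pooled encoder output
        return self.proj(feat).view(feat.shape[0], self.n_tokens, self.llm_dim)

# Usage: prepend projected modality tokens to the text token embeddings, then
# feed the combined sequence to the (frozen) LLM; during alignment only the
# projector would be trained.
projector = ModalityProjector()
image_feat = torch.randn(2, 1024)   # stand-in for a pre-trained image encoder
text_emb = torch.randn(2, 20, 512)  # stand-in for the LLM's text embeddings
llm_input = torch.cat([projector(image_feat), text_emb], dim=1)
print(llm_input.shape)  # torch.Size([2, 28, 512])
```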
The key contributions of the work are as follows:
•We present an efficient and scalable solution for building Multimodal LLMs. We provide
projection layers pre-trained on large datasets with diverse modalities (e.g. 200M images,
2.2M audio, 500K IMU time-series, 28M videos) all aligned to the same LLM (LLaMA-2-
70B-chat), thus enabling interleaved multimodal in-context prompting.
•We further fine-tune the model with the multimodal instruction set across three modalities
(image, video, and audio) covering diverse unconstrained tasks beyond simple QA domains.
The dataset features high-quality manually collected instruction data, which we thus also
use as a benchmark for complex multimodal reasoning tasks.
•Our best model achieves strong zero-shot performance in both automatic and human eval-
uation on diverse tasks and modalities, setting new SOTA with +7.0% relative accuracy
∗Joint First Authors: {shanemoon,andreamad8,zhaojiang,tusharn}@meta.com
|
2401.01335v2.pdf | Self-Play Fine-Tuning Converts Weak Language Models
to Strong Language Models
Zixiang Chen∗† Yihe Deng∗‡ Huizhuo Yuan∗§ Kaixuan Ji¶ Quanquan Gu‖
Abstract
Harnessing the power of human-annotated data through Supervised Fine-Tuning (SFT) is
pivotal for advancing Large Language Models (LLMs). In this paper, we delve into the prospect
of growing a strong LLM out of a weak one without the need for acquiring additional human-
annotated data. We propose a new fine-tuning method called Self-Play fIne-tuNing (SPIN),
which starts from a supervised fine-tuned model. At the heart of SPIN lies a self-play mechanism,
where the LLM refines its capability by playing against instances of itself. More specifically, the
LLM generates its own training data from its previous iterations, refining its policy by discerning
these self-generated responses from those obtained from human-annotated data. Our method
progressively elevates the LLM from a nascent model to a formidable one, unlocking the full
potential of human-annotated demonstration data for SFT. Theoretically, we prove that the
global optimum to the training objective function of our method is achieved only when the LLM
policy aligns with the target data distribution. Empirically, we evaluate our method on several
benchmark datasets including the HuggingFace Open LLM Leaderboard, MT-Bench, and datasets
from Big-Bench. Our results show that SPIN can significantly improve the LLM’s performance
across a variety of benchmarks and even outperform models trained through direct preference
optimization (DPO) supplemented with extra GPT-4 preference data. This sheds light on the
promise of self-play, enabling the achievement of human-level performance in LLMs without the
need for expert opponents. Code is available at https://github.com/uclaml/SPIN.
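To make the self-play idea concrete, here is a minimal sketch of the pairwise objective for one iteration: the current model is pushed toward the human-annotated response and away from a response generated by its previous iteration (its "opponent"). The logistic form and the name `beta` follow the DPO-style loss family; treat the exact expression as an illustration rather than the paper's definition.

```python
import math

def self_play_loss(logp_cur_real, logp_prev_real, logp_cur_gen, logp_prev_gen, beta=0.1):
    """Pairwise self-play loss for one (prompt, human response, self-generated
    response) triple: reward the current model for assigning relatively more
    probability to the human response than its previous iteration did, and
    relatively less to its own previous generation."""
    margin = beta * ((logp_cur_real - logp_prev_real) - (logp_cur_gen - logp_prev_gen))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)

# One illustrative iteration over an SFT dataset of (prompt, human response) pairs:
#   1. sample y_gen from the current model for every prompt,
#   2. average self_play_loss over the dataset and update the model,
#   3. the updated model becomes the next iteration's opponent.
print(self_play_loss(-5.0, -6.0, -4.0, -3.5))  # ~0.62 for this toy example
```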
1 Introduction
Large Language Models (LLMs) have begun a groundbreaking era in artificial general intelligence
(AGI), demonstrating extraordinary capabilities across a wide range of domains that require in-
tricate reasoning and specialized knowledge. These models excel in areas such as mathematical
reasoning/problem solving (Cobbe et al., 2021; Wei et al., 2022; Lewkowycz et al., 2022), code gener-
ation/programming (Chen et al., 2021; Austin et al., 2021; Li et al., 2022), text generation (Bubeck
∗Equal contribution
†Department of Computer Science, University of California, Los Angeles, CA 90095, USA; e-mail:
chenzx19@cs.ucla.edu
‡Department of Computer Science, University of California, Los Angeles, CA 90095, USA; e-mail:
yihedeng@cs.ucla.edu
§Department of Computer Science, University of California, Los Angeles, CA 90095, USA; e-mail:
hzyuan@cs.ucla.edu
¶Department of Computer Science, University of California, Los Angeles, CA 90095, USA; e-mail:
kaixuanji@cs.ucla.edu
‖Department of Computer Science, University of California, Los Angeles, CA 90095, USA; e-mail: qgu@cs.ucla.edu
|
10.1038.s41586-021-03819-2.pdf |
Article
Highly accurate protein structure prediction
with AlphaFold
John Jumper1,4 ✉, Richard Evans1,4, Alexander Pritzel1,4, Tim Green1,4, Michael Figurnov1,4,
Olaf Ronneberger1,4, Kathryn Tunyasuvunakool1,4, Russ Bates1,4, Augustin Žídek1,4,
Anna Potapenko1,4, Alex Bridgland1,4, Clemens Meyer1,4, Simon A. A. Kohl1,4,
Andrew J. Ballard1,4, Andrew Cowie1,4, Bernardino Romera-Paredes1,4, Stanislav Nikolov1,4,
Rishub Jain1,4, Jonas Adler1, Trevor Back1, Stig Petersen1, David Reiman1, Ellen Clancy1,
Michal Zielinski1, Martin Steinegger2,3, Michalina Pacholska1, Tamas Berghammer1,
Sebastian Bodenstein1, David Silver1, Oriol Vinyals1, Andrew W. Senior1, Koray Kavukcuoglu1,
Pushmeet Kohli1 & Demis Hassabis1,4 ✉
Proteins are essential to life, and understanding their structure can facilitate a
mechanistic understanding of their function. Through an enormous experimental
effort1–4, the structures of around 100,000 unique proteins have been determined5, but
this represents a small fraction of the billions of known protein sequences6,7. Structural
coverage is bottlenecked by the months to years of painstaking effort required to determine a single protein structure. Accurate computational approaches are needed
to address this gap and to enable large-scale structural bioinformatics. Predicting the
three-dimensional structure that a protein will adopt based solely on its amino acid
sequence—the structure prediction component of the ‘protein folding problem’8—has
been an important open research problem for more than 50 years9. Despite recent
progress10–14, existing methods fall far short of atomic accuracy, especially when no
homologous structure is available. Here we provide the first computational method
that can regularly predict protein structures with atomic accuracy even in cases in which
no similar structure is known. We validated an entirely redesigned version of our neural
network-based model, AlphaFold, in the challenging 14th Critical Assessment of protein
Structure Prediction (CASP14)15, demonstrating accuracy competitive with
experimental structures in a majority of cases and greatly outperforming other
methods. Underpinning the latest version of AlphaFold is a novel machine learning
approach that incorporates physical and biological knowledge about protein structure,
leveraging multi-sequence alignments, into the design of the deep learning algorithm.
The development of computational methods to predict
three-dimensional (3D) protein structures from the protein sequence
has proceeded along two complementary paths that focus on either the
physical interactions or the evolutionary history. The physical interac-
tion programme heavily integrates our understanding of molecular
driving forces into either thermodynamic or kinetic simulation of pro -
tein physics16 or statistical approximations thereof17. Although theoreti -
cally very appealing, this approach has proved highly challenging for
even moderate-sized proteins due to the computational intractability
of molecular simulation, the context dependence of protein stability
and the difficulty of producing sufficiently accurate models of protein
physics. The evolutionary programme has provided an alternative in
recent years, in which the constraints on protein structure are derived
from bioinformatics analysis of the evolutionary history of proteins,
homology to solved structures18,19 and pairwise evolutionary correla-
tions20–24. This bioinformatics approach has benefited greatly from the steady growth of experimental protein structures deposited in
the Protein Data Bank (PDB)5, the explosion of genomic sequencing
and the rapid development of deep learning techniques to interpret
these correlations. Despite these advances, contemporary physical
and evolutionary-history-based approaches produce predictions that
are far short of experimental accuracy in the majority of cases in which
a close homologue has not been solved experimentally and this has
limited their utility for many biological applications.
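As a small illustration of what "pairwise evolutionary correlations" means in practice, the sketch below computes mutual information between two columns of a multiple sequence alignment; strongly co-varying column pairs were the classic bioinformatics signal for residue-residue contacts. This is an illustration of the general idea only, not part of AlphaFold.

```python
import numpy as np
from collections import Counter

def column_mutual_information(msa, i, j):
    """Mutual information (in nats) between alignment columns i and j.
    `msa` is a list of equal-length aligned sequences (strings)."""
    n = len(msa)
    pair_counts = Counter((s[i], s[j]) for s in msa)
    col_i = Counter(s[i] for s in msa)
    col_j = Counter(s[j] for s in msa)
    mi = 0.0
    for (a, b), c in pair_counts.items():
        p_ab = c / n
        mi += p_ab * np.log(p_ab / ((col_i[a] / n) * (col_j[b] / n)))
    return mi

# Toy alignment: column 2 co-varies perfectly with column 0, column 1 does not.
msa = ["ACD", "AGD", "TCW", "TGW", "ACD", "TGW"]
print(column_mutual_information(msa, 0, 2))  # high (= log 2): perfect covariation
print(column_mutual_information(msa, 0, 1))  # low: nearly independent columns
```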
In this study, we develop the first, to our knowledge, computational
approach capable of predicting protein structures to near experimental
accuracy in a majority of cases. The neural network AlphaFold that we
developed was entered into the CASP14 assessment (May–July 2020;
entered under the team name ‘ AlphaFold2’ and a completely different
model from our CASP13 AlphaFold system10). The CASP assessment is
carried out biennially using recently solved structures that have not
been deposited in the PDB or publicly disclosed so that it is a blind test
https://doi.org/10.1038/s41586-021-03819-2
Received: 11 May 2021
Accepted: 12 July 2021
Published online: 15 July 2021
Open access
1DeepMind, London, UK. 2School of Biological Sciences, Seoul National University, Seoul, South Korea. 3Artificial Intelligence Institute, Seoul National University, Seoul, South Korea. 4These
authors contributed equally: John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna
Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Demis Hassabis.
✉e-mail: jumper@deepmind.com; dhcontact@deepmind.com |